Human chess players are still incredibly valuable because we want to see what humans are capable of, for the same reason athletes are valuable even though a car can outrun them.
With mathematicians, and others working in intelligence-intensive tasks (most of us here probably), I’m not sure what the value would be post-AGI.
The point is that even with mathematics and programming, there is an underlying community aspect that cannot be ignored, but is hidden under layers of utility. For example, even in programming, people getting together to code, collaborating, and sharing their projects is a small but significant way that people build community.
With mathematics, the sharing of ideas and slaving over the proof of a theorem brings meaning to lives by forging friendships. Same with any intellectual discipline: before generative AI, all the art around us came primarily from human minds and was an echo of other people through society.
Post-AGI, we abandon that sense of community in exchange for pure utility, a sort of final stage of human mechanization that rejects the very idea of community.
If a change in other people's ability to do mathematics affects your level of enjoyment in doing mathematics, you don't really enjoy mathematics. You enjoy feeling smarter than other people, belonging to an exclusive club.
Preserving people's access to this kind of enjoyment is not something that should carry any weight in my opinion.
Oh come on, that's ridiculous. I wasn't referring to a change in ability, but a change in culture. The modern culture of mathematics is getting worse in my opinion, and many feel the same. Besides, I don't even practice math any more...
One of Bill Thurston's answers on MathOverflow should be required reading on this and a lot of related topics. When basically asked "How do I cope with the fact that I'm no Gauss or Euler?" he replied:
> The product of mathematics is clarity and understanding. Not theorems, by themselves... mathematics only exists in a living community of mathematicians that spreads understanding and breathes life into ideas both old and new. The real satisfaction from mathematics is in learning from others and sharing with others. All of us have clear understanding of a few things and murky concepts of many more. There is no way to run out of ideas in need of clarification. The question of who is the first person to ever set foot on some square meter of land is really secondary. Revolutionary change does matter, but revolutions are few, and they are not self-sustaining --- they depend very heavily on the community of mathematicians.
Ongoing relationships and cooperation are how humanity does its peak stuff and reaches peak understanding (and how humans usually find the most personal satisfaction).
LLMs are powerful precisely because they're a technology for concentrating and amplifying some aspects of cooperative information sharing. But we also sometimes let our tools isolate us.
Something as simple as a map of a library is an interesting case: it is a form of toolified cooperation. You can use it to orient yourself and discover library materials without having to talk to a librarian, which saves time and attention... and also reduces the social surface area for incidental connection and cooperation.
That's a mild example with very mild consequences, and it only needs mild individual or cultural tools to address the tradeoffs. We might also consider markets and the social technology of business, which have resulted in a kind of target-maximizing AGI. The effects here are also mixed, certainly in terms of connection/isolation, and potentially in terms of environmental impact. A paperclip maximizer has nothing on an AGI/business that benefits from mass deforestation, and we created that kind of thing hundreds of years ago.
The question is whether we're going to maintain the kind of social/cultural infrastructure that could help us stay aware of, and continue to invest in, the value of that social/cultural infrastructure.
Or, put more simply, whether we're going to build a future for people.
Again, I am not arguing against ALL computer use of chess. Just the chess engine/AI itself. Why do you insist on taking all of technology as an indivisible unit in your argument?
They’re the same technology: you don’t get to select only the applications that appeal to your personal aesthetics.
We arrived at engines before online chess, and the two have grown up together, both enabled by the spread of computers. You can choose not to use an engine, but it will exist either way, because others will choose to use it when it’s enabled by those same things.
To get rid of the engine, you have to get rid of computers, or, in the case of Freestyle/Chess960, create so many openings that a human can’t memorize them all and so only has a short time to prepare.
You are right in some sense. Of course, my objective is a long shot: to encourage people to eschew many advanced technologies and return to a simpler way of life. Some will listen, others won't. But I do think there is a future where technology is more restricted, along the lines of the Amish way. A long shot, as I said, but one I intend to promote regardless.
And a suspicion and dislike of advanced technology IS growing among people outside the technophile sphere.
I think there will always be a demand for human knowledge workers. They might not push their respective fields forward in the same capacity as AI will be able to, but there will be a niche market for products and ideas authored entirely by humans. Programmers and mathematicians will actually be craftspeople, and communities will continue to exist around this. These will probably not be highly paid positions as they are today, and their products likely won't power mission-critical infrastructure. Some might pursue it simply as a hobby and for the mental exercise.
It wouldn't be much different from small artisan shops we have today in other industries. Mass production will always be more profitable, but there's a market for products built on smaller scales with care and quality in mind. Large companies that leverage AI black boxes won't have that attention to detail.
The problem with this is that most people will sense a reduced importance for themselves. Most people seem to think that with AI doing everything, we can just relax and do our hobbies. But that's just wishful thinking based on a culture of overworking: we overwork so we dream of a utopia where we don't work. But the opposite of overworking is a sense of complete irrelevance, which will in some sense be more problematic than everyone working too much.
Yes, a few people might find some meaning in a life where they are not that important, but most people need to feel important to others, and AI takes that away.
That's true. It's a problem that isn't discussed nearly enough.
This is partly why I think that the pace of AI development needs to slow down. We've had disruptive technologies in the past, and society eventually adapted when new jobs were created, but none of them had the potential to completely replace humans in most industries. None of them raised existential questions about our humanity, the value of human labor, our place in society, and the core pillars of economy, education, etc. And, crucially, none of them were developed in just a few years.
We need time to discuss these topics and prepare for the shift. But, of course, any mention of slowing down is met with criticism of regulations stifling innovation and profits, concern about losing a technological advantage over political opponents, etc., so this is unlikely to happen.
This century certainly won't be boring, so let's enjoy the ride, and hope that no major conflict pops off. Though with the way things are going, my hope is waning.
Well, I certainly agree with slowing things down. Time to discuss would be much better than nothing. As I tell many, I am glad I was born when I was, and not now. I cherish the time I had before anyone knew of the internet. Even though I am using it now, mostly to spread my ideas, I would gladly trade it for a world where it didn't exist.
Bluntly speaking, I think it is going to be the journey that matters; to work with mathematics is to work on yourself and to explore your creativity.
Right, I enjoy programming for the same reasons. But will I be able to make a living from it, 20 years from now? Probably not.
To be clear, I don’t think AI is bad, and even if it is, I’m pretty damn sure it’s not avoidable. But we’re in for drastic changes and we should start getting used to them.
I think it's a sad thing that people may not be able to make a living from what they love. A lot of people suggest hobbies, but I think it's nice to contribute to society with the skills of our minds.
I think we have to do both: get used to it, but fight it at the same time in case we can get rid of it.