IDK, my impression of the self-driving car discussion 5 years ago was more akin to: "let's start designing AI-only roads, get ready for no human drivers - they won't exist in 5 years! AI-only cars will be so great, they'll solve traffic congestion, pollution, noise, and traffic deaths - and think of all the free time you'll have lounging around on your commute!" It seemed like a conversation dominated by people gearing up for that VC money. Meanwhile, actual solutions for any of those problems seem to be languishing. My perspective: it was a lot of distraction away from real solutions, led by a tech-maximalist group that had a LOT to gain from the hype.
> I feel (hah!) that we've reached a point where instead of focusing on objective things that the LLM-based systems can do, we are wasting energy and "ink" on how they make us feel.
> best to focus on objective things that the systems can do today, with maybe a look forward so we can prepare for things they'll be able to do "tomorrow".
These two together remind me of a sentiment that seems somewhat common here: 'I feel that AI is growing exponentially, therefore we should stop learning to code - because AI will start writing all code soon!'
I think this points to where a lot of the skepticism comes from. From that perspective: the AI does barely a fraction of what is claimed, and has grown by even less than claimed, yet these 'feelings' that AI will change everything are driving tons of false predictions. IMO, those feelings are steering VC money toward MBAs who plaster AI on everything because they're chasing that money.
There is an irony here, though: skepticism is simply withholding belief in the absence of evidence. Belief without evidence is irrational. The skeptics are the ones asking for evidence, not feelings.