
I feel (hah!) that we've reached a point where instead of focusing on objective things that the LLM-based systems can do, we are wasting energy and "ink" on how they make us feel. And how others using them make us feel. And how Hollywood-style stories about "AI" make us feel. And how people commenting on these things make us feel. And so on.

IMO it's best to focus on objective things that the systems can do today, with maybe a look forward so we can prepare for things they'll be able to do "tomorrow". The rest is too noisy for me. I'm OK with some skepticism, but not outright denial. You can't take an unbiased look at what these things can do today and then say "well, yes, but can they do x, y, z?" That's literally moving the goalposts, and I find it extremely counterproductive.

In a way I see a parallel to the self-driving car discussions of 5 years ago. Lots of very smart people were focusing on silly things like "they have to solve the trolley problem in 0.0001 ms before we can allow them on our roads", instead of "let's see if they can drive from point A to point B first". We are now at the point where they can do that, somewhat reliably, with some degree of variability between solutions (Waymo, Tesla, Mercedes, etc.). All that talk 5 years ago was useless, IMO.



> instead of "let's see if they can drive from point A to point B first". We are now at the point where they can do that, somewhat reliably,

No, we really aren’t. Let me know when any of those systems can get me from Sioux Falls, South Dakota to Thunder Bay, Ontario without multiple disengagements, and then we can talk.

Based on what I’ve seen, we’re still about 10 years away in the best case, and more likely 20+, assuming society doesn’t collapse first.

I think people in the Bay Area commenting on how well self-driving works need to visit middle America and find out just how bad and dangerous it still is…


When you put it like that, it makes me wonder if we can just stick to using the self-driving cars in the Bay Area and not go to these bad and dangerous places.


anywhere outside the bay area is "bad and dangerous"??


I agree that a lot of the noise at the moment is an emotional reaction to LLMs, rather than a dispassionate assessment of how useful they are. It's understandable - they are changing the way we work, and for lots of us (software developers), the reason we chose this career is that we _enjoy_ writing code and solving problems.

As with a lot of issues in today's world, each side is talking past the other. It can simultaneously be true that LLMs make writing code less enjoyable / gratifying, and that LLMs can speed up our work.


IDK, my impression of the self-driving car discussion 5 years ago was more akin to: "let's start designing AI-only roads, get ready for no human drivers - they won't exist in 5 years! AI-only cars will be so great, they'll solve traffic congestion, pollution, noise, and traffic deaths, and think of all the free time while you're lounging around on your commute!" It seemed like a conversation dominated by people gearing up for that VC money. Meanwhile, actual solutions for any of those problems seem to be languishing. My perspective: it was a lot of distraction away from real solutions, led by a tech-maximalist group that had a LOT to gain from the hype.

> I feel (hah!) that we've reached a point where instead of focusing on objective things that the LLM-based systems can do, we are wasting energy and "ink" on how they make us feel.

> best to focus on objective things that the systems can do today, with maybe a look forward so we can prepare for things they'll be able to do "tomorrow".

Taken together, these remind me of a type of sentiment that seems somewhat common here: 'I feel that AI is growing exponentially, therefore we should stop learning to code - because AI will start writing all code soon!'

I think this points to where a lot of skepticism comes from. From the skeptic's perspective: the AI barely does a fraction of what is claimed, and has improved by even less than claimed, yet these 'feelings' that AI will change everything are driving tons of false predictions. IMO, those feelings are driving VC money to damn MBAs who are plastering AI on everything because they are chasing money.

There is an irony here too, though: skepticism is simply withholding belief in the absence of evidence. Belief without evidence is irrational. The skeptics are the ones simply asking for evidence, not feelings.



