
Reviewing these conversations is like listening to horse and buggy manufacturers pooh-poohing automobiles:

1. They will scare the horses. A funky 'automobile' is no match for a good team of horses.

2. How will they deal with our muddy, messy roads?

3. Their engines are unreliable and prone to breaking down, stranding you in the middle of nowhere to fix things yourself.

4. Their drivers can't handle the speed; too many miles driven means unsafe driving. We should stick to horses; they are manageable.

Meanwhile I'm watching a community of mostly young people building and using tools like Copilot, Cursor, Replit, Jacob, etc., and wiring up LLMs into increasingly complex workflows.

This is a snapshot of the current state, not a reflection of the future. Give it 10 years.



Reading this comment is like listening to Tesla in 2014 telling me how their cars would be driving themselves. Give it 10 years.


Tesla might not have managed to do it, but Waymo had over 7.1 million miles driven by their driverless cars by 2023: https://www.theverge.com/2023/12/20/24006712/waymo-driverles...

I'm still waiting for the future where a robot maid does my dishes, hangs my clothes, tidies and vacuums my apartment while I'm working on my piano skills. But at least I now have a robot vacuum with a camera that avoids whatever toys I forgot to pick up.


Humans greatly overestimate trends in the short term, and greatly underestimate them in the long term.


Tesla gave out free trials of their self-driving upgrade this month. I didn't buy it initially because of the grandiose claims that never seemed to materialize, but I've actually been pretty impressed with the trial. I still don't think it's worth it, but my Model Y has driven itself (under my close supervision, of course) to a number of destinations I've keyed into the navigation, without me having to intervene.

There are also lots of times I have had to intervene, but at this point I think we're closer than we are far.

So I think your take here is a bit outdated. It was good a couple years ago, though.


Let me guess, you're in the US along one of the hardcoded routes?


I'm in the US, yes. But I feel like my small Indiana town is always behind the times, so I'd be surprised if anything about it was hardcoded for me.


It might not be perfect yet, but my humble Model 3 drove me to a doctor's appointment all by itself from my driveway last week. I would say the kind of progress they've been able to make is pretty damn impressive.


I don't see a young/old divide when it comes to AI. Although there is a young/old divide in familial responsibilities and willingness to be a chip on the VC's roulette table.


Absolutely true. The oldest devs I work with are some of the most enthusiastic about using LLM chat to develop. The younger devs all seem to use it, but few of them can actually produce working code with it.

Now I get a lot of calls from the team asking for help fixing some code they got from an AI. Overall it is improving the group's code quality; I no longer have to instruct people on the basics of setting up their approach/solution. I will admit there is a little difficulty dealing with pushback on my guidance, e.g. "well, ChatGPT said I should use this library" when the core SDK already supports something more recent than the AI was trained on.


There is a young/old divide.

There was a similar divide in the 2000s when Google Search became ubiquitous and writing code got easier than ever. I know a lot of people who quit to become 'managers' because they didn't want to fix code which, most of the time, was being copied from the internet and pasted. Similar arguments about correctness, verbosity, and even future maintainability were made. Everybody knows how that went.

Millennials are just gradually turning into boomers as they enter their 40s.


I must be the exception! A big survey on this would be interesting to go from anecdotes to data.


I'm not sure why people can't be humble enough to accept that we don't really know what the future will hold. Just because people have underestimated some new technology in the past doesn't mean that will continue to be true for all new technologies.

The fact that LLMs currently do not really understand the answers they're giving you is a pretty significant limitation they have. It doesn't make them useless, but it means they're not as useful at a lot of tasks that people think they can handle. And that limitation is also fundamental to how LLMs work. Can that be overcome? Maybe. There's certainly a ton of money behind it and a lot of smart people are working on it. But is it guaranteed?

Perhaps I'm wrong and we already know that it's simply a matter of time. I'd love to read a technical explanation of why that is, but I mostly see people rolling their eyes at us mere mortals who don't see how this will obviously change everything, as if we're too small-minded to understand what's going on.

To be extra clear, I'm not saying LLMs won't be a technological innovation as seismic as the invention of the car. My confusion is why for some there doesn't seem to be room for doubt.


In their current state they are already plenty useful. I don't think it's worth proving mathematically that something works 100% of the time when 80% is good enough.


The funny thing about this comment is that an increasing number of people are beginning to think automobiles were a mistake. They pollute, they're unhealthy and dangerous, and they cause congestion, but we've built our lives around them and are basically addicted to them.

LLMs piece together language based on other language they've seen. That's not intelligence; it's just a language tool. Currently we have no idea what will happen once there are no more human inputs to train LLMs on. We might end up wishing we hadn't built our whole lives around LLMs.


Exactly. Cars are horrible, they made everything worse for everyone except the few people with money to buy a car.

Cars produce toxic fumes, air pollution, noise pollution with their engine noises and horns, and light pollution with their headlights pointed directly into my fucking eye. They consume incredible amounts of resources to function and a fuckton of resources for road maintenance, and they waste millions of man-hours in soul-crushing traffic jams, all that to be slower than me on my fucking bike inside the city.

Yeah the horse and buggy manufacturers were right, cars were a mistake. We just doubled down on that mistake.


But horses were still worse. They shat everywhere.


Yeah, but it's a silly comparison anyway. Horses weren't used for personal transport around town. They were used to pull heavy loads or to cross great distances.

When it comes to personal transport the current best invention is the safety bicycle. It's truly a marvel and can never be celebrated enough. A tubular frame, ball bearings, cable actuated brakes and gears, spoke tensioned wheels and pneumatic tyres provide a stiff yet lightweight machine that needs very little maintenance and no more energy than walking.

But unfortunately the car used all of that technology in a hilariously inefficient way, and unbridled use of fossil fuels meant it was attractive to use a vehicle that needs a million joules just to make it move without anything in it. If we weren't so greedy but instead considered each gain carefully we might have never ended up with cars.

But, alas, we're no better than a dog who got access to the food cupboard and made itself sick.


Except the automobile in this case only reaches the destination correctly sometimes. They are less likely to reach the destination as the path becomes longer or more complex.


Oh boy, you should read about the 1919 Motor Transport convoy: the military sent a team across the United States by truck and car. It took them almost two months, and over 10% of the vehicles and many of the men didn't complete the journey.

https://en.wikipedia.org/wiki/1919_Motor_Transport_Corps_con...


It is hard to understand a phenomenon if it stands to reduce your income. Even if LLMs don't improve one bit from here and the current state is frozen, they are still too good, and they will be everywhere before we can finish talking about horses and automobiles.


LLMs make my job as a software engineer even more secure. Most of what I do is social and/or understanding what is going on. LLMs are a tool to reduce mental load on some tasks when I'm in VSCode. They are like a pilot's autopilot.

If an LLM takes my job, then we have reached the singularity. Jobs won't matter anymore at that point.


Prospective and retrospective analysis are fundamentally different. It’s easy to point to successes and failures of the past, but that’s not how we predict the concrete future potential of one specific thing.


A reminder that we basically built cities around cars, because they still need fuel, break down, and get stuck in the mud.

What is your similar plan for LLMs?

Analogies always end somewhere, I’m just curious where yours does.


I guess we build a world where being catastrophically and confidently wrong about many things is completely normalized.

Don’t we already see a shift in this direction?

C suite is being sold the story that this new tech will let them fire x% of their workforce. The workforce says the tech is not capable of replacing people.

C suite doesn't have the expertise to understand why and how exactly the tech is not ready, but it does understand people and suspects that the workforce's warnings are just a self-preservation impulse.

C suite also gets huge bonuses if they reduce cost.

So they are very strongly encouraged to believe the story, and the ones actually doing the work and knowing the difference are left to watch the company's products get destroyed.


Well, we'll build all sorts of APIs for LLMs to plug into.


I think a lot of the criticism is constructive. Many of the limitations won’t just magically go away - we’ll have to build tooling and processes and adjust our way of thinking to get there. Most devs will jump across to anything useful the second it’s ready, I would think


I do see a lot of constructive criticism from people who actually use these tools regularly, but there is also a heap of uninformed complaints from the luddites among us.

It's true that the limitations won't magically go away. Some may _never_ go away. I have a suspicion that neuroticism and hallucination are intrinsic qualities of intelligence, artificial or otherwise.

Many of the criticisms leveled could readily be applied to a fellow human. It seems what the naysayers really don't like are _other people_, especially imperfect ones.


> Meanwhile I'm watching a community of mostly young people building and using tools like Copilot, Cursor, Replit, Jacob, etc., and wiring up LLMs into increasingly complex workflows.

And yet, I don't see much evidence that software quality is improving, if anything it seems in rapid decline.


> I don't see much evidence that software quality is improving

Does it matter? Ever since FORTRAN and COBOL made programming easier for the unwashed masses, people have argued that all the 'noobs' entering the field are dragging software quality down. I'm seeing novice developers in all kinds of fields happily solving complex real-world problems and making themselves more productive using these tools. They're solving problems that only a few years ago would have required an expensive team of developers and ML experts to pull off. Is the software a great feat of high-quality software engineering? Of course not. But it exists and it works. The alternative to them kludging something together with LLMs and OpenAI API calls isn't high-quality software written by a team of experienced software engineers; it's not having the software at all.
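To be concrete, the kind of kludge I mean is often just a single API call. Here's a minimal sketch, assuming the official openai Python SDK (v1+); the model choice, the prompt, and the categorize helper are all made up for illustration:

    from openai import OpenAI  # assumes the official openai Python SDK, v1+

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def categorize(ticket_text: str) -> str:
        # The entire "ML system" is one chat-completion call.
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # hypothetical choice; any chat model works
            messages=[
                {"role": "system",
                 "content": "Classify the support ticket as billing, bug, or "
                            "other. Reply with a single word."},
                {"role": "user", "content": ticket_text},
            ],
        )
        return response.choices[0].message.content.strip()

    print(categorize("I was charged twice this month"))  # e.g. "billing"

A few years ago that would have meant collecting training data, picking a classifier, and hiring someone who knew how to do both. Now it's twenty lines a novice can write in an afternoon.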


Even if that were true (and I'd challenge that assumption[0]), there's no dichotomy here.

Software quality, for the most part, is a cost center, and as such will always be kept at the minimum bearable level.

As the civil engineering saying goes, any fool can make a bridge that stands, it takes an engineer to build a bridge that barely stands.

And anyway, all of those concerns are orthogonal to the tooling used, in this case LLMs.

[0] Things we now take for granted, such as automated testing, safer languages, CI/CD, etc., make for far better software than when we used to roll our own crypto in C.


The ageism in this comment just serves to further undermine the unsubstantiated claims being made, a common trait of the crypto-bro-migrated-to-AI-bro movement.


This is completely different: with an automobile (back then) you still needed a driver.

This replaces the most human occupation of all: thinking. So young people go ahead and steal the whole open source corpus that they did not write. And are smug about it.

If your projections of progress are true, at least 90% of the people here who praise the code laundering machines will be made redundant.


No, it is similar. I think the analogy is that you are the horse and AI is the car. You are to upcoming AI what a horse was to cars.


It's not that different from a manager who can't code much hiring some programmers.



