
AI is a very strange thing: two seemingly smart coders use it, and one comes out thinking it's obviously revolutionary while the other thinks it's a waste of time. Two seemingly smart journalists use it, and one thinks AGI and the end of the world are nigh while the other thinks the market will crash when the hype dies down.

I think part of it is due to the politics- and internet-induced death of nuance. But part of it I can't fully understand.

Personally I find it rather useful. I don't consider myself a heavy user, but I still use it almost every day to help me code, and I ask it a lot of questions about specific and general topics. For me it has partially or totally replaced Stack Overflow, Google Search, Google Translate, and most tech references. In the office I see people using it all the time; there's almost always a ChatGPT window open on one of the displays.

I think it's very difficult to argue this is 100% hype and/or a "phase". It's all but proven that it's useful and that people will want it in their lives, even if it never improves again. It's a new tool in the toolbox, and there will be businesses providing it as a service, or perhaps we'll get general availability through open source.

On the other extreme, all the AI doomerism and AGI talk seems to me almost as unfounded as it was before generative AI. Sure, it's likely we'll get to AGI one day. But if you thought we were 100 years away, I don't think ChatGPT put us any closer, and I just don't get the people who now say 5. I'd rather they worried about the impact of image-generation AI on deepfakes and misinformation. That's _already_ happening.



> AI is a very strange thing: two seemingly smart coders use it, and one comes out thinking it's obviously revolutionary while the other thinks it's a waste of time

My take on this is that those two developers are often working on very different tasks.

If you're a very smart coder working in a large codebase with tons of domain knowledge, you'll find it's useless.

If you're a very smart coder working in a consultancy and your end result looks like a few thousand lines of glue code, then you're probably going to get a lot out of LLMs.

It's a bit like "software engineering" vs "coding". Current iterations of LLMs are good at "coding" but crap at "software engineering".


There's probably truth to that, but I find it useful at a more micro level. I don't ask the LLM to write an architecture or a big piece of a system. It's more like: I have data in this shape, I want a function that gives me data out in that shape, and it will spit out something pretty good and idiomatic. I read it, understand it, and integrate it. You need to be careful with blind copy-paste, though; there are sometimes subtle bugs in the code.
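
For example (a hypothetical sketch in Python; the record shape and field names are made up for illustration, not from any real prompt), the kind of request I mean is "I have a list of order dicts, give me totals per customer", and the model will typically produce something small and self-contained like:

    from collections import defaultdict

    # Hypothetical reshaping helper of the kind an LLM handles well:
    # a list of order records in -> a dict of per-customer totals out.
    def totals_by_customer(orders):
        totals = defaultdict(float)
        for order in orders:
            totals[order["customer_id"]] += order["amount"]
        return dict(totals)

    orders = [
        {"customer_id": "a", "amount": 10.0},
        {"customer_id": "b", "amount": 5.0},
        {"customer_id": "a", "amount": 2.5},
    ]
    print(totals_by_customer(orders))  # {'a': 12.5, 'b': 5.0}

Small, well-trodden transformations like this are exactly where it shines, and exactly where a subtle bug is easy to spot on review.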

It's especially useful when learning new frameworks, languages, etc. To me this is applicable regardless of domain, as the micro-level patterns tend to be variations of things the model has seen before. I suspect that if you load it up with a lot of very specific high-level domain logic, there's a better chance of taking the LLM out of its comfort zone.


Yep, that's exactly what I mean by "coding" as opposed to "engineering".


Totally agree with that. It's an aid for laying bricks; it doesn't do your job for you.



