> AI is a very strange thing where 2 seemingly smart coders use it and one comes out thinking it's obviously revolutionary and the other one thinking it's a waste of time
My take on this is that those 2 developers are often working on very different tasks.
If you're a very smart coder working in a large codebase with tons of domain knowledge, you'll find it's useless.
If you're a very smart coder working in a consultancy, and your end result looks like a few thousand lines of glue code, then you're probably going to get a lot out of LLMs.
It's a bit like "software engineering" vs "coding". Current iterations of LLMs are good at "coding" but crap at "software engineering".
There's probably truth to that, but I find it's useful at a more micro level. I don't tell the LLM to write an architecture or a big piece. It's more like: I have data in this shape, I want a function that gives data out in that shape, and it will spit out something pretty good and idiomatic. I read it, understand it, and implement it. You need to be careful with blind copy-paste, though; there are sometimes subtle bugs in the code.
It's especially useful when learning new frameworks, languages, etc. To me this is all applicable regardless of domain, as the micro-level patterns tend to be variations of things that have been seen before. I suspect if you try to load it with a lot of very specific high-level domain logic, there's a bigger chance of taking the LLM out of its comfort zone.
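To make the "data in this shape, function out" idea concrete, here's a sketch of the kind of micro-level request I mean (all names and the data shape are hypothetical, just for illustration):

```python
# Hypothetical "micro-level" ask: reshape a flat list of records into a
# dict grouping records by one of their fields -- the sort of small, shape-
# to-shape glue function an LLM tends to produce well.
from collections import defaultdict

def group_by_field(records, field):
    """Turn [{'id': 1, 'tag': 'a'}, ...] into {'a': [record, ...], ...}."""
    grouped = defaultdict(list)
    for record in records:
        grouped[record[field]].append(record)
    return dict(grouped)

records = [
    {"id": 1, "tag": "a"},
    {"id": 2, "tag": "b"},
    {"id": 3, "tag": "a"},
]
print(group_by_field(records, "tag"))
# {'a': [{'id': 1, 'tag': 'a'}, {'id': 3, 'tag': 'a'}], 'b': [{'id': 2, 'tag': 'b'}]}
```

The point isn't the function itself, it's the scope: small, well-specified input and output shapes, no surrounding architecture, so it's easy to read, verify, and catch any subtle bug before pasting it in.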