Exactly this. I've been scratching my head over the belittlement, in tech circles, of the phenomenal breakthrough that ChatGPT and other LLMs represent. I think it boils down to a combination of a few different factors:
1. Our jobs are obviously replaceable (soon), or at least about to be massively impacted - just like all the jobs we ourselves have replaced previously. We don't like that and throw stones at it to make it go away.
2. We used to be the go-to source for all things technical; now (or soon) people can just ask ChatGPT and get a better answer. We don't like that and throw stones at it to make it go away.
3. We are (most of us) schooled to think that a script or program (made by us) produces a predictable result: every letter has to be carefully placed or the whole thing comes crashing down. This AI thing doesn't work like that at all, which we don't like, and so on.
4. The answers the chatbots give us are sometimes just "hallucinations", because the tech isn't fully mature yet. We don't like that.
I agree with all of these; they're not necessarily in order of importance, but they are all factors.
I am at once astonished by how useful an LLM can be at its best, and by how horrifyingly dangerous it can be if/when overused by those who don't understand its limitations.
You are very balanced :-). Of course, there are valid criticisms of the hype and of the way LLMs are, and will be, used, but my annoyance at what I perceive as ignorant, self-interested and progress-hostile attitudes from my own industry sometimes gets the better of me.
When the dust settles and the consensus concludes the obvious, namely that "yes, this thing actually does possess intelligence and knowledge on an unprecedented level, with vastly greater future potential", then maybe we can concentrate on steering education and the professions in directions that make the tech controllable and even more useful, and on avoiding the real pitfalls and dangers that lie ahead.
As long as we're stuck on the understanding that this is "just a hallucinating, stochastic parrot" and everything else is pure hype, then we're not getting the right things done, I'm afraid.