> Remember the "leaks" from Google about an engineer trying to get the word out that they had created a sentient intelligence?
That's not what happened. Google stomped hard on Lemoine, saying clearly that he was wrong about LaMDA being sentient ... and then they fired him for leaking the transcripts.
Your whole argument here is based on false information and faulty logic.
Were you perchance noting that, according to some people, «LLMs ... can hallucinate and create illogical outputs» (you also specified «useless», but that must be a further subset and would hardly create a «litter[ing]» here), while also noting that some people use «false information and faulty logic»?
Noting that people are imperfect is no justification for the weaknesses of LLMs. Since around late 2022, some people have been saying LLMs are "smart like their cousin", to which the answer remains "we hope your cousin is employed accordingly".
If you built a crane that can only lift 15 kg, it is no justification that "many people can lift 10". The purpose of a crane is to lift as much as needed, with a margin for safety.
If we build cranes, it is because people are not sufficient: the relative weakness of people, far from being a consolation for weak cranes, is the very reason we want strong cranes. The same goes for intelligence and other qualities.
People are known to use «false information and faulty logic», but they are not being called "adequate" for it.
> angry at
There's a subculture around here that considers it normal to downvote without any rebuttal - the equivalent of "sneering and leaving" (quite impolite) - and almost every time it leaves us without a clue about what the point of disapproval might be.
I think you're missing the point. He's pointing out what the atmosphere was/is around LLMs in these discussions, and how that impacts stories like Lemoine's.
I mean, you're right that he's silly and Google didn't want to be part of it, but it was (and is?) taken seriously that LLMs are nascent AGI, that companies are pouring money into getting there first, and that we might be a year or two away. If you take those as true, it's at least possible that Google might have something chained up in their basement.
In retrospect, Google dismissed him because he was acting in a strange and destructive way. At the time, it could be spun as just further evidence: they're silencing him because he's right. Could it have created such hysteria and silliness if the environment hadn't been so poisoned by the talk of imminent AGI/sentience?
Which comment claimed that LLMs were marketed as super-intelligence? I'm looking up the chain and I can't see it.
I don't think they were, but I think it's pretty clear they were marketed as the imminent path to super-intelligence, or something like it. OpenAI were saying GPT-(n-1) is as intelligent as a high school student, GPT-(n) is a university student, GPT-(n+1) will be... something.
That's the whole discussion here: "It's mostly because of how they were initially marketed. In an effort to drive hype 'we' were promised the world. Remember the 'leaks' from Google about an engineer trying to get the word out that they had created a sentient intelligence?"
I did not miss any point, and that's an ad hominem charge. He misrepresented the facts and based an argument on that misrepresentation, and I pointed that out.
"In retrospect, Google dismissed him because he was acting in a strange and destructive way."
No, they dismissed him because he had released Google-internal product information, "in retrospect" or otherwise.