The BS that ChatGPT generates is especially odious because it requires so much effort to detect if you're not a SME in whatever output it's giving.
Plus, the output, when wrong, is subtly wrong: it's usually not obvious BS, it's credible BS. If you are not competent in the particular area you're asking about, you likely don't have the skills to recognize BS.
It also is a time saver, doing work that most people find unrewarding. So you get a chorus of fans saying "it saved me a bunch of time doing my own research and just spit out what I needed". Maybe it did, or maybe they didn't have enough expertise in the area to recognize the flaws in the output it presented.
McQuillan’s central point here, that ChatGPT is a “bullshit generator” and not “artificially intelligent”, is really apt. What we’re often working on in the industry is emulating the worst uses of intelligence, processes that humans also use, like optimized BS generation. Seeing ChatGPT turn into the equivalent of a competent essay-spitting undergrad is distressing; the internet is already such a mass of human-generated BS that it's hard to believe people will now be arguing and competing with machine-optimized and machine-generated BS speech as well.
The most powerful and effective and immediately available BS generators will probably rely on machine-generated, unhinged speech. I truly fear for the future of the few reputable internet forums left, because even intelligent people tend to engage with well-optimized BS generators, whether driven by human or machine.
You're not too far off from the arguments people made about the internet not too long ago. It's too easy to access information that could easily be incorrect...even maliciously so!
You're better off sticking to published books and journals from respectable organizations that vet their authors and review their publications!
Then again, who's in control of those printing presses? How can you trust the publishers to not push their own politics and agendas? You're probably better off finding a religious organization you can trust to help filter out the bad stuff. Help you see things through the proper perspective.
The problem is people weren't wrong about the internet. In fact, they couldn't grasp the magnitude of the problem it would create, or the complete transformation it would bring to media, politics, and culture.
As someone who vaguely remembers the 90s I can tell you that there is no transformation in media, politics or culture. Well, there is.
But the difference people describe, that politics used to be based on sound science and now, with the internet/facebook/fake news/tiktok, has changed to be based on total bullshit? Not true.
Not because media and politics aren't currently almost exclusively bullshit. But because that wasn't any different in the 90s. Back then media was full of bullshit, and politics reacted 100 times to media bullshit for every time it reacted to actual science. Hell, there's positive evolution too, I think the BBC has actually improved their fact checking since back then, for example. And I actually know what is trustworthy. I didn't know in the 90s.
But... With books you know who is publishing them. You might know who is in charge of a website. At least with Wikipedia, sources are cited. With GPT, nothing.
Just had a dinner conversation where ChatGPT was characterized as automated plagiarism, and then I thought wouldn’t it be cool to get like a set of BibTeX entries for all the sources whose content was combined to synthesize an output.
Not sure that’s possible, and even if so, that it would be any kind of reasonable or manageable size whatsoever.
I run the cheaper self-hostable OpenAI alternative https://text-generator.io. I've been working on automating this manual verification of everything: with a few components we already have, like a search engine and an edit API, we can both detect and correct most of these errors so the output at least reflects what a reliable source like Wikipedia says. A lot of reasoning, logic and math issues will still remain, but there's a big step up coming soon in factual generation.
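Roughly the shape of the "detect" half of that kind of pipeline, as a sketch (the only real API used below is Wikipedia's public search endpoint; the overlap score is just a crude stand-in for a proper retrieval/edit model, not the actual system):

    import requests

    def wikipedia_snippets(claim, limit=3):
        """Fetch search-result snippets from Wikipedia for a claim."""
        resp = requests.get(
            "https://en.wikipedia.org/w/api.php",
            params={"action": "query", "list": "search", "srsearch": claim,
                    "srlimit": limit, "format": "json"},
            timeout=10,
        )
        return [hit["snippet"] for hit in resp.json()["query"]["search"]]

    def support_score(claim, snippets):
        """Fraction of the claim's words that show up in any retrieved snippet."""
        words = set(claim.lower().split())
        found = {w for w in words if any(w in s.lower() for s in snippets)}
        return len(found) / max(len(words), 1)

    claim = "The Eiffel Tower was completed in 1889."
    print(support_score(claim, wikipedia_snippets(claim)))
    # low scores flag sentences for the "correct" step (an edit/rewrite model)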
ChatGPT helps a lot but it is necessary to take everything with a grain of salt - so what? It still can save a lot of time. Just Alt-tab ChatGPT/Google. Adjust your mental model from what you think it should be to what it actually is.
I hear this a lot but I don't find it convincing. If I know enough to know if the answer is good or bad, why would I be asking a chat bot? We could say this problem is somewhat analogous to sorting good StackOverflow answers from bad ones. But I feel that there are things like writing and presentation style (not to mention the votes and comments) that could tip me off to a bad answer that are absent when every answer is in the same voice.
> If I know enough to know if the answer is good or bad, why would I be asking a chat bot?
If you are trying to write a blog/essay/etc on a topic, it can help with “writer’s block”: you may know a lot about the topic already, but struggle with putting that understanding into words. And then it isn’t hard to filter ChatGPT’s output based on your pre-existing knowledge.
Sometimes I don’t know the answer to the specific question I am asking it, but still have enough background knowledge of the topic to pick up when it is likely to be “hallucinating”
What counts as “plagiarism” depends on the context. It is (in most cases) unethical to present the work of another human being as your own, but what about an AI? In an educational setting, it would be unethical to turn in an AI’s work as your own. But if a professional programmer uses GitHub Copilot, are they obliged to acknowledge Copilot’s contribution? If I have a personal blog, and ChatGPT helps me write a post, am I obliged to acknowledge ChatGPT’s contribution?
I think the bigger picture to me is whether you are being disingenuous rather than whether you are technically plagiarizing. Thought experiment: suppose I tell people I’m great at multiplication, but I don’t tell them I have an earpiece/mic with someone using a calculator on the other end. If I convince these people I have a great mind and they respect my math skills, have I not fooled them? As for the Copilot argument, I’m not convinced it shouldn’t be required to cite.
People write for many different reasons. Sometimes it is because they believe in an idea (political, ethical, philosophical, religious, etc), and their main objective is to convince others to believe in that idea too; what you may come to think of their abilities as an author is rather beside the point.
Somewhat of an open question really. I think for all of blogging the assumption has been that the words in your blog post were in fact written by you. I see two reasons plagiarism is bad: it is both an injustice to whoever wrote the original and a false indicator of your cognitive and creative ability.
Well, I think it is more challenging than that. Even if the author writes the piece himself, most professional writing is subject to editing, which can substantially change the finished product. The one thing we can be reasonably, if not entirely, certain of is that the piece is one the author is comfortable endorsing as his own.
> If I know enough to know if the answer is good or bad, why would I be asking a chat bot?
I mean, taken to the extreme, I can probably read through the source of VLC and know that it's correct, given enough years to read it and study video compression standards etc. Does that mean I don't get use out of someone else having written VLC for me?
Knowing something is right and producing it are completely different things. You might be thinking of ChatGPT too narrowly, as a simple question-answer thing, but even now you can ask it to write code that will save you time, and scale it up by a few factors and it's doing the equivalent of writing new libraries for you. (Probably many years before it can write VLC for you, if ever.)
This is not the same thing at all. You know whether a tool like VLC works because it has a pretty well-defined scope for "works": it plays the video or audio file you clicked on.
If you're asking ChatGPT to teach you something, you have no such easy verification you can do: you essentially need to learn the same material from another source in order to cross-check it. Obviously this is easy for small factual questions. If I ask ChatGPT the circumference of the Earth, I can quickly figure out whether it's reliable or not on that point. But at the other extreme if I ask it to do a music theory analysis of the Goldberg Variations, it's going to take me about as much work to validate the output as to have just done the analysis myself.
I'd suggest just trying it yourself for a bit to see if you can find any use for the tool. If not, that isn't a problem either - I suspect it is completely useless in some domains.
I remember playing around with it first and doing similar things with the openai playground. It is amusing but the novelty of this type of usage wears off quickly.
I mostly use it for programming type work now. Write a little snip of code (personal projects only) or tell me how to use a library. I also use it to teach me things. I find it incredibly useful for this.
ChatGPT is still free for me but it is glitchy at times. I would definitely pay the $20/month once it is available in my region.
I know what a clean house looks like, but I'd still love a machine to clean my house. ChatGPT's purpose is not to understand complex ideas, it is to do the tedious task of joining such disparate ideas together in an intelligible way. Much faster to have it do 95% of the work and only modify the 5% that's bad, than for you to do 100% yourself.
That’s a bad analogy. It would actually rather be like: I know what a clean house looks like, but I’d still love a machine to describe it to me.
However, I think you are somewhat on the right track, as “AI” shows us which types of work can be removed altogether. I think, if “AI” can do it, it’s not necessary work to begin with. E.g. instead of “AI” doing formal correspondence for us, we could then just have an interface to exchange raw information directly, as the human element is lost anyway.
Love your "adjust your mental model" sentence. Let's talk about the judge using it in ruling, or your doctor for diag... No problem for you?
And now it is listed as co-author on research papers (real ones)? Which will then be used as a source to reinforce its own model!
I really think it is a lot more than a grain of salt my man.
[edit] AI is not all bad, just fun ;)
"Is chatGPT accurate?"
https://www.perplexity.ai/?s=u&uuid=c97b4388-958f-45cb-a0ae-...
I suppose it isn't obvious to me and I perhaps don't care. I'm not going to get stressed about being made obsolete by AI, etc., if that is what you mean. That could happen but so could a lot of things. Of course, I may have completely missed what you meant by slippery slope.
No one gets the reference? Rich Hickey gave a talk where he speaks about how easy it is to import dependencies you didn't know you had. That means you were adding complexity to your systems in an uncontrolled fashion, because it's so easy to do.
Now imagine that same ease with ideas. It's so easy to just let ChatGPT write that up, I'm going to do that. I'm not even going to edit, even where what it wrote doesn't exactly reflect my thinking on the matter.
At that point, you're cluttering up your own thinking with output from ChatGPT because it's easy.
> The BS that ChatGPT generates is especially odious because it requires so much effort to detect if you're not a SME in whatever output it's giving.
I've found that a good way to demonstrate to oneself how badly ChatGPT can miss not only the nuance of a subject but just plain get basic facts wrong is to ask it about relatively simple things, like movie or book plots, and see how the results differ from, well, just actually watching or reading the subject, e.g.:
/In the movie "AI", the protagonist, a highly advanced android boy named David, embarks on a journey to find the Blue Fairy from the story of Pinocchio in order to become a real boy and be reunited with his human mother. After many trials, David finally reaches the submerged city where the Blue Fairy is said to reside, but instead finds a statue of the fairy. He is then discovered by human survivors of a global flood, who have been in suspended animation for thousands of years. The humans react with fear and attempt to dismantle David, but he is rescued by a mermaid, who takes him to the underwater kingdom of the lost city of Rome. There, he finally meets the Blue Fairy, who reveals that she has no power to grant his wish, but assures him that his love for his mother will live on forever. In the end, David is shown as a frozen statue, in a future where the sun has burned out and the Earth is covered in ice, while the human race has long since vanished. The last shot is of the statue of the Blue Fairy, still underwater, suggesting that David's story and love will endure for eternity./
--------
It's close enough to be believable if you haven't seen the movie (or maybe even if you have, but it was 20 years ago), but there are a lot of obvious errors packed into a relatively short paragraph there. It got off to a good start in the first two sentences and then just goes right off the rails... but with such confidence.
I'll risk the downvotes, but I'm genuinely curious why people feel that they need to note that they've made trivial edits, like deleting a duplicated word or fixing a typo. Is there a sense that the historical record of these meanderings should be perfectly memorialized?
Just one guess, but in the "good old days" of the Internet, people would sometimes get suspicious and accuse you of trickery if you edited your posts without explanation, so a culture developed of always explaining even the most trivial edits. "Full transparency" taken to its absurd extreme, I suppose.
And not only does the BS often need expertise to detect, but the tech cheerleaders are claiming not only that this high-volume automated BS machine is useful, but that it is somehow nearly an actual generalized AI.
Nothing could be further from the truth. The generative models produce sometimes-useful BS, and the output is sometimes surprisingly similar to human output (sure, human bullshitters often look good too, so what?), but there isn't even the slightest ability to understand any concept. The thing can't even get puzzles right that children laugh at. These things have no concept of truth vs fiction or ethics vs evil.
Yet the "tech elite" try to feed us the BS that it is nearly the singularity. What we have is a very amusing parlor game toy.
What we need, whether for the world's sanity and/or to get close to AGI, is not an automated high-volume BS generator.
What we need is an automated high volume BS detector.
I think it arrives in a situation that isn't great, but not so bad that it warranted drastic decisions: people are organically generating mountains of BS online every day, including on blogs, official-looking news sites, etc.
It's a pain, but as most of that is also generated by humans there's a way out: the "good guys" could band together to fight it and it would be fine.
Now that we have ChatGPT and the like, manually dealing with all the BS is just out of the window.
And I'm kinda hoping we try to deal with it in a systematic manner, and find a way to flag and bury the BS down to levels lower than when ChatGPT came to the public.
Basically, I'm hoping the thing that happened for spam happens for bullshit as well, and we get optimized tools to fight it down to manageable levels (with the arms race and all, probably)
> I think it arrives in a situation that isn't great, but not so bad that it warranted drastic decisions: people are organically generating mountains of BS online every day, including on blogs, official-looking news sites, etc.
Indeed, ChatGPT produces bullshit because that’s the bullshit it’s been trained on.
> Basically, I'm hoping the thing that happened for spam happens for bullshit as well, and we get optimized tools to fight it down to manageable levels
Who is going to do that? I’m genuinely asking, because I’m old enough to remember that BigCo was totally fine to “enshitificate” their search for years and still barely anyone got close to them…
I really don’t know, but I also don’t see it coming from any of the behemoths we have now.
The wildest scenario would be for it to come from Mozilla or another web player like Cloudflare. More realistically it could come from players outside of the US seeing it as an opportunity to make a dent in the current quasi-monopolies, with some government backing, as it would probably cost amounts no investor would pay otherwise.
When I went to google SME, I also got "Small and Medium-sized Enterprise".
When I asked ChatGPT "Someone used the following sentence 'The BS that ChatGPT generates is especially odious because it requires so much effort to detect if you're not a SME in whatever output it's giving.' What does SME stand for?"
It told me: SME stands for "Subject Matter Expert".
It is a question where people think they can get easy karma if they know the answer, and many people know the answer. That said, I also don't think most people would know the answer. I further think it is a fair question, as they provided the most common expansion, and that's what you naively find on Google (so it isn't like they asked a question they could have already answered by Googling it).
The next problem is that the Hacker News user interface does nothing to prevent duplicate answers within a short window: you load the full thread and spend 10-15 minutes reading it. At the time you loaded the page, these other replies weren't there yet, so you may as well leave the first one.
You then click the reply button and start typing a reply--FWIW, for longer replies this step can matter even more than the previous one--and you are staring at just that single comment, given only the empty box for your reply. When you are done you submit, and maybe you then notice it is a duplicate? You might not, depending on where the page loads and how much you are paying attention. Even if you do, do you bother to delete it? I would, but then I hate the duplication and also know how to delete my comments.
I happened to catch the 4 (four! I've seen 2 dupes before, but four in less than 2 minutes never have I seen afore) original comments all at once, posted 0 and 1 minutes ago, and because I am in some @"mood" tonight, and for all the reasons you've said, found it to be a rather funny HN tragedy. I thought it would be harmless to slip in a 5th, capitalizing only the M to show the careful reader that I am clearly out of my mind. I promise I wasn't karma farming; I only got 1 upvote. Alas, HN isn't outfitted with the latest in web push socket hot page reload sex, yet.
Also your comment is "hidden" for a few minutes after you post it, to allow you to edit it, which is why they most probably didn't even see each other.
In this kind of forum, it doesn't bother me when terms (like SME) are not defined by the person using them in a comment (even when a term might be relatively new and match several possible definitions). I didn't immediately recognize it, so I Googled it and found what I was looking for (before noticing the other replies which defined it for me).
What does bother me is when someone writes a long paper and fails to say what a term means where it is first used. I will sometimes read along for several paragraphs hoping to get a definition (or at least a good clue) before finally having to break off and go Google the term in another tab.
Common terms like CPU or SSD have been around long enough to be used as is, but newer ones like AGI, LLM, or RLHF need to be written out in long form (which the article did beautifully). Sometimes they also need a basic definition to go along with that.
The funny thing is how many HN users have no real expertise in anything beyond software, and yet consider themselves SMEs on aviation, politics, climate, energy, medicine, warfare, and every other common topic here. The Dunning–Kruger effect is everywhere.
Yeah, as if ChatGPT BSes more than your average internet commenter who fools a huge percentage of people he reaches. People really aren't considering the base rate.
It's a tool that isn't going away. But even Yann Lecun has pointed out that "A house cat has way more common sense and understanding of the world than any LLM".
That's fundamentally the real issue. LLMs are far from even animal level "cognition" in their ability to solve novel problems. A lot of people seem to vastly underestimate the number of novel problems they solve on a regular basis.
In practice, what we're seeing now is something akin to the first wave of "ai hype" that led VCs into a spending frenzy years ago. Some of it did materialize. Most of it didn't.
LLMs and their related brethren are pretty cool. They're very useful for solving routine problems. There's a lot of "routine" work out there in the world and it will probably replace some of those jobs.
But just like CNNs led to a frenzy of innovation in image recognition and LSTMs pushed NLP-related use cases forward this too will hit a wall.
If I were a betting man I think the main industries LLMs will disrupt are:
1. Search. ChatGPT is decent at summarizing stuff which will make searching more natural.
2. QA/Testing. A lot of this work is pretty repetitive and manual and LLM assistants can generate skeleton code for tests, etc. pretty well.
3. "basic" programming jobs that use "frameworks" to slap together simple apps/websites. A lot of this is repetitive gluing of stackoverflow code already so ChatGPT will make these folks considerably more productive. I'm not sure that these jobs will even disappear either.
I'm surprisingly less bullish on ChatGPT replacing marketing folks. It may be used to generate copy but the risks of putting out brand-damaging copy or a marketing campaign that doesn't make sense are too high.
I'm just noting that we're currently in a period where a lot of folks without an understanding of its limitations are suddenly experiencing something AGI-like for the first time. It feels a lot more competent than it could ever actually be.
Once the limitations of what LLMs are really "competent" at are well-established we'll likely see a huge surge of use cases based on them.
A lot of folks are too quick to claim the death of the knowledge worker though. Folks working in the ML space understand there's a much longer road ahead than we've even traveled down so far.
There's no doubt about that, but it is somewhat limited to cases where "close enough" is actually close enough (or it should be... the worst case is when these sorts of technologies are inappropriately deployed in scenarios where the level of certainty they can give is not enough).
> It may be used to generate copy but the risks of putting out brand-damaging copy or a marketing campaign that doesn't make sense are too high.
My girlfriend works for a certain AI marketing (i.e. copywriting) company that services large customers like Dropbox and Humana. They create and test headlines for email, ads, direct mail, whatever.
She’s an excellent and creative writer, so her job is to help validate and train the models by analyzing the copy it generates and setting up tests. Unfortunately, people like her are creating 99% of the marketing copy at the company and everyone (including the CEO) is in denial about it. The copy just isn’t good or violates brand guidelines.
And because the company is in denial, they’re not investing in more people in her role nor replacing people who leave. As they get overworked, more people leave, and they’re losing accounts because the deniers believe the AI is doing all the work.
It’s pretty ridiculous. I guess they want that SaaS-AI multiplier. But really it’s just a marketing agency.
Thank you, I dislike this lazy connection between animals and stupidity. It feels like a verbal tic of some sort, thrown around when reaching for an example.
But that's exactly my point. Animals are smart. People are underestimating truly how far we have to go to match the intelligence of even the "least intelligent" animals.
I’ve seen this argument over and over again: it’s just predicting the next word, it’s not creating anything new, etc.
How do we know we’re not doing the same or similar in our brains? We can’t even define creativity, intelligence, let alone tell how it works. Predicting the next word is very much part of an intelligent discourse. What if it’s all just a “computational guessing game” all the way down with perhaps a sprinkle of randomness added?
I’m very concerned with the explosion in AI development but I don’t think we should discredit the tool because we think it’s dangerous. Quite the contrary, in fact. We should take it very seriously.
A human mind probably is essentially a prediction engine, but it is predicting everything one can perceive, not just language. That's how a human can throw and catch a ball, or walk over uneven ground, or know that Mom will be mad about the mess.
Our intelligence is trained by hard reality, and so our use of language fits into a much broader context. "Touching a hot stove" is a metaphor because we know viscerally what would happen if we actually did it.
The limitation of ChatGPT is not that it predicts, the limitation is the incredible paucity of data from which it tries to predict. Language encodes some structure of the world because that's where it was created. But to the extent that can be inferred by ChatGPT, it's a weak and unreliable echo of the original reality.
ChatGPT exists in a featureless, timeless void with only patterns of characters in its memory. It can't tell real from fake (from our perspective) because ALL of its entire reality is just those patterns of text. The words are all real, and all there is.
This doesn't even get into why human minds make predictions. ChatGPT makes predictions because we built it to, and we prompt it to. Why do our brains? The trite answer is survival of the fittest, but that just raises the question of why we want to survive.
The point is, there is a fundamental difference between a living intelligence, and a machine learning tool. ChatGPT will sit quietly until prompted, and then answer only what it is asked. A human won't. I'm not sure that comes down to intelligence, or something else. Even bugs don't sit still and do what they're asked.
I won't speak for what's "all the way down", but on whatever level humans perform reflective reasoning, it's self-evident that we don't simply "predict the next word" in the way an LLM does.
Whether or not an LLM is capable of "creating anything new" is a different argument altogether. "New" is in the eye of the beholder and it is often said that nothing is new under the sun, but we already know that unthinking processes within nature are capable of producing beauty far beyond anything that even the greatest human artist could ever hope to accomplish. That an LLM is unthinking doesn't preclude it from producing art.
I'm not sure it counts as "evidence" because I can't show it to you, but based on personal introspection, whenever I write something it starts with a barely-linguistic thought I want to communicate that gradually gets expanded into as much prose as is necessary to get across the concept whilst establishing context and defending from the most probable detractors.
I literally never just start writing and then just keep adding words.
Not true in my experience. To me, words come first; the idea is a consequence I realize after a few of them have been strung together.
Knowing where the words come from, how “the brain” knows the next one before my conscious self does has always been a mystery to me. I’ve heard Sam Harris describing a similar feeling in his podcast.
I have the idea before I have thought of the word I would use to express it. Sometimes, I don't even have a word to express it, "you know what I mean?".
And like I said, children will often act, then talk. Or search for their words for long periods of time, and when you propose a word to help them vocalize, they can accept it or reject it.
I have another example. Do you ever have a slip of the tongue? You're talking about "yoga", tell yourself while speaking "I really shouldn't say 'yoda' here", and say the word "yoda" anyway? This happened to me a lot when I was younger (my speed of thought is slower these days; I don't think I've slipped this way in two or three years). I still think a bit faster than I speak tbh.
I've always hated the idea promoted by some AI researchers that "intelligence is just prediction." Specifically in the context of your challenge, since a brain must decide the next word before uttering it, is it not vacuously true that any conceivable method of producing speech is "predicting" it? You're asking for evidence that it isn't; I'm asking for evidence that question has any meaning.
Sure, any production in that case can be classified as prediction. What I’m saying is that such distinction, if at all possible, is far from self-evident.
The content of intelligent speech is not driven by statistical relationships between the spoken lexemes; this is self-evident.
For example, let's say you're planning to approach your boss about receiving a raise, the contents of your speech will be driven by a personal theory of mind regarding how your boss perceives you, your own memory of who you are and how you like to be perceived, the cadence and tenor of speech you imagine your boss is receptive to, information contextually relevant to the purpose of the speech like your proven history of success within the organization, as well as a hundred other variables driven by physiological factors, culture, social habituation and more.
"Predict the next word" just doesn't describe intelligent speech in any meaningful way.
So, you've thrown in theory of mind, physiological factors and culture, without defining what they mean or showing that they don't all derive from the same underlying statistical relationships, in order to make your argument more robust? That's not how it works.
Defining intelligence is a fundamental philosophical problem, a discussion thousands of years in the making. To call something in this field self-evident is either ignorance or self-denial.
Or, you reached a point in the Markov chain where none of the next guesses rank high enough if the whole paragraph is taken into account. It often feels like that, in fact, you can’t figure it out until you take another “route”.
Of course, my anecdote is just as valid as yours. My point is we have no freaking idea what’s going on and shouldn’t jump to conclusions.
Someone deaf and blind still forms thoughts but doesn't output them in words. Words follow cognition; it's sometimes obvious when you have a clear concept in mind but forget the word.
Edit: also when programming, I form mental structures and code them without verbalizing them, same with maths or when drawing or making music. A thought is maybe a prediction, but language seems not to be the abstraction it’s operated on.
That would seem to be the “stochastic parrot” view denounced. But it seems hard to believe. Haven’t you ever had the experience of knowing exactly what you want to say but struggling to find the word for it? How could that be if we’re just randomly selecting words like a souped-up Markov chain?
> Haven’t you ever had the experience of knowing exactly what you want to say but struggling to find the word for it?
I came here to suggest this exact thing. Humans clearly have some understanding of the meaning behind words and sentences. However, I think it's wrong, an overgeneralization, to suggest that ChatGPT is just statistically predicting the next word. While that may technically be true, I think buried deep within it has ways of modeling/encoding common concepts and ideas, maybe similar to how a human models concepts and ideas in their mind.
Then there's the whole problem of consciousness, but I won't get into that here.
>I think buried deep within it has ways of modeling/encoding common concepts and ideas, maybe similar to how a human models concepts and ideas in their mind
Right, maybe meaning and concepts are just “things that usually go together” with different levels of distance.
We’re comparing our minds, which we know very little of, to LLM, which are blackboxes. It seems like a lot of certainty in face of such vast ignorance.
> Haven’t you ever had the experience of knowing exactly what you want to say but struggling to find the word for it? How could that be if we’re just randomly selecting words like a souped-up Markov chain?
All words in the normal cache have a low probability, so you have to hit disk to get the full word list.
We’re not randomly selecting but probabilistically predicting. Haven’t you noticed you predict the next word people will say with very high [________]?
Within a sentence sure. And often arguments are familiar enough that even an essay I’ll know where it’s going. But we do occasionally have original ideas.
These things are hard to quantify but ChatGPT writing does have a certain anodyne character, in my mind, when I read it. Then again, I can't quantify it, I could be biased by my foreknowledge of the writing's provenance, and even if I'm right, the world is full of situations where bland writing is suitable or even desired.
I love the philosophy behind Gustafson's Law: the problem size increases in response to computational resources.
You get a simple computer, you can drop bombs more accurately. Then you make word processors. Then text based simulation aka games. Then turn books into digital format. Then whole online libraries, ecommerce, social media, cryptocurrency, and so on.
Seeing a LLM as purely language based might be missing the forest for the atoms.
>> How do we know we’re not doing the same or similar in our brains?
Because if I ask you to calculate 85 + 73 in your head, you will not start searching for the string that completes "85 + 73 = " with maximum probability, but instead try to add those two numbers using an algorithm you learned at school.
Which means you can compute stuff, just like a computer, and not only guess stuff, just like an LLM.
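To make the contrast concrete, here's a rough sketch in Python (purely illustrative; the "lookup" half is only a loose analogy for predicting from seen examples, not a claim about how an LLM actually works internally):

    def schoolbook_add(a, b):
        """The digit-by-digit algorithm with carries, as taught at school."""
        width = max(len(a), len(b))
        a, b = a.zfill(width), b.zfill(width)
        carry, digits = 0, []
        for x, y in zip(reversed(a), reversed(b)):
            carry, d = divmod(int(x) + int(y) + carry, 10)
            digits.append(str(d))
        if carry:
            digits.append(str(carry))
        return "".join(reversed(digits))

    SEEN = {("85", "73"): "158"}  # stand-in for "training data"
    def guess_add(a, b):
        """Return a memorised answer if available, else something plausible-looking."""
        return SEEN.get((a, b), a[:-1] + b[-1])

    print(schoolbook_add("1234", "5678"))  # 6912, correct for any inputs
    print(guess_add("1234", "5678"))       # looks like a number, isn't the sum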
You’re just repeating that computing is not predicting without explaining why.
Also, LLMs are not grep. If the string doesn’t match, it will come up with something it deems plausible, like us. It’s even bad at math in eerily similar ways to us, like off-by-a-decimal-place errors.
>> You’re just repeating that computing is not predicting without explaining why.
Yes, because I assume you're a programmer, or a computer scientist, and you already understand the difference.
To clarify, the difference is that a program that computes the most likely next token in a sequence, like a language model (LM), is one kind of program, whereas the set of all programs contains infinitely many programs that are not LLMs; and an LLM cannot calculate every such program, nor can it perform the same calculations as those programs; it can only calculate the most likely next token in a sequence.
Is the distinction clearer now? Also, are you a programmer? If so, I would point to the wikipedia article on Turing Machines or the one on the theory of computation. It's perhaps not a very easy subject to explain in a HN comment, but if you have some background in programming computers you should be able to grok it. You will see for example that a Turing machine is a device that manipulates symbols to produce other symbols, which is what we generally mean by "computation" and not a device that predicts the next token in a sequence, as an LLM does.
Sorry if I assume too much about your background - this is HN after all.
No worries, you were right on your assumption, I am a programmer, like you seem to be as well.
> like a language model (LM) is one kind of program, whereas the set of all program contains infinitely many programs that are not LLMs
I don’t know if it’s been formally verified, but it’s pretty safe to bet that ChatGPT is Turing complete. If so, then your statement is false. ChatGPT can emulate every computable problem.
>> No worries, you were right on your assumption, I am a programmer, like you seem to be as well.
Of course :)
>> I don’t know if it’s been formally verified, but it’s pretty safe to bet that ChatGPT is Turing Complete. If so, than your statement is false. ChatGPT can emulate every computable problem.
I think that's unlikely. For Universal Turing Machine expressivity, a system needs to have something to function as an infinite Turing tape, and where's that in an LLM? ChatGPT famously has a limited input buffer and it doesn't even have a memory, as such (it forgets everything you tell it, hence why a user's input and its own answers have to be fed back to it continuously during a conversation or it loses the thread).
Besides, OpenAI themselves, while they have made some curious claims in the past (about GPT-3 calculating arithmetic), seem to have backtracked recently, and nowadays if you ask ChatGPT to calculate the result of a computation it replies with a sort-of-canned reply that says it's not a computer and can't compute. Sorry I don't have a good example; I've seen a few examples of this with Python programs and bash scripts.
Anyway you could easily test whether ChatGPT (or any LLM) is computing: ask it to perform an expensive computation. For example, ask it to compute the Ackermann function for inputs 10,20, which should take it a few hundred years if it's actually performing a computation. Or ask it to generate the first 1 million digits of pi. It will probably come up with a nonsense answer or with one of its "canned" answers, so it should be obvious it's not computing the result of the computation you asked it to.
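For reference, here is a textbook definition of the Ackermann function, just to show why that's such an expensive ask (sketch; safe only for small inputs):

    def ackermann(m, n):
        """Two-argument Ackermann-Peter function."""
        if m == 0:
            return n + 1
        if n == 0:
            return ackermann(m - 1, 1)
        return ackermann(m - 1, ackermann(m, n - 1))

    print(ackermann(2, 3))  # 9
    # ackermann(4, 2) already has 19,729 decimal digits; ackermann(10, 20) is
    # hopeless, which is exactly why it makes a good "are you actually
    # computing?" test.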
Btw, I think one could argue that a Transformer architecture is Turing-complete, in the sense that it could be, in principle, trained to simulate a UTM; I seem to remember there are similar results for Recurrent Neural Networks (which are, however, a different architecture). But a Transformer trained to generate text is trained to generate text, not to simulate a UTM.
That's the ideal machine Turing describes, but it's a very narrow definition we don't really use. If that were the case, nothing would ever be Turing-complete, not even the computer we are writing this on. If we can consider a cellular automaton and even the single x86's MOV instruction Turing-complete, I'm pretty sure ChatGPT will qualify.
>it doesn't even have a memory, as such (it forgets everything you tell it…
It really doesn't. Have you tried it? You can bring back context from several prompts before. One of the remarkable things about it, in fact.
>Btw, I think one could argue that a Transformer architecture is Turing-complete
Bingo. And you can probably tweak your prompt to steer it. The things people on Reddit have been able to persuade ChatGPT to do against its directives are tantalizing.
For a UTM, you need something to function as an infinite Turing tape. For example a mechanism that can loop arbitrarily. On a machine with finite resources, running an infinite program will exhaust the machine's resources. But ChatGPT has nothing like that.
As to ChatGPT's memory that's not about UTM expressivity, but another reason why it can't be any kind of computing device.
ChatGPT and other language models of its kind don't have memories. It can retrieve context from earlier prompts because each of your prompts, and the language model's responses to your prompts, are prepended to your latest prompt and then the whole thing passed to the model. It doesn't remember what you said earlier, it just reads it again each time.
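A rough sketch of what that loop looks like (complete() here is a hypothetical stand-in for the stateless model call, not any real API):

    def complete(prompt):
        """Hypothetical stand-in for a stateless next-token model."""
        return "[reply generated from a %d-character prompt]" % len(prompt)

    transcript = ""
    for user_msg in ["My name is Ada.", "What is my name?"]:
        transcript += "User: " + user_msg + "\nAssistant: "
        reply = complete(transcript)   # the full history is re-read every turn
        transcript += reply + "\n"
    print(transcript)

The apparent memory is just the transcript being replayed into the prompt each time.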
Finally, a neural net architecture and a trained model are two different things. For example, you could train one Convolutional neural net to recognise cats, and another to recognise dogs. Suppose you could train a Transformer to simulate a UTM. ChatGPT is not trained to simulate a UTM, it's trained to predict text. So it can't simulate a UTM.
But, after all this discussion I have to ask: why do you say that "it’s pretty safe to bet that ChatGPT is Turing Complete"? What does ChatGPT have in common with cellular automata, for instance?
This is in fact the whole premise of Jeff Hawkins' book "On intelligence". Prediction is at the core of how we learn/act. Since we are born, we are just predicting everything that will happen and our brain learns when we see/hear/sense something that we can't predict.
How well GPTZero can detect ChatGPT-generated text by measuring perplexity (how random the word choices are) and burstiness (how diverse the sentence structure is) shows that whatever algorithm our brain uses has stronger creative capabilities than this LLM.
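(Roughly what those two metrics boil down to; a toy sketch, with token probabilities hard-coded where a real detector would get them from a scoring language model:)

    import math
    from statistics import pstdev

    def perplexity(token_probs):
        """exp of the average negative log-probability assigned to each token."""
        return math.exp(-sum(math.log(p) for p in token_probs) / len(token_probs))

    def burstiness(sentences):
        """Crude proxy: spread of sentence lengths; human text tends to vary more."""
        return pstdev(len(s.split()) for s in sentences)

    print(perplexity([0.2, 0.5, 0.1, 0.4]))   # lower = more predictable text
    print(burstiness(["Short one.", "A much longer, meandering sentence follows."]))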
GPT-3.5 isn’t a great writer like AlphaGo is a great go player. Maybe one day AI will generate better scripts and novels than humans, but not this model.
Medium-quality writing is ok for informative content though, but it’s problematic when the model doesn’t know fact from fiction. That’s the important complaint.
Is it dangerous? Maybe.
But is it useful? Not if it’s wrong too often.
You’re right that this tech should be taken seriously, but so should the hallucination problems. These problems can be solved. And maybe they should be solved before anyone trusts it with serious questions.
> it’s problematic when the model doesn’t know fact from fiction.
This is in no way unique to an AI. Have you ever interacted with humans? Half the population thinks the other half can't tell fact from fiction. The other half thinks the same about the first half. We're all wrong about things all the time.
There's a fundamental difference between "some people are wrong some of the time"—or even "half the population has trouble telling fact from fiction some of the time", if we grant that as true for the sake of argument—and "ChatGPT (and similar ML algorithms) don't even have a metric to determine truth from fiction; they just predict what's the most likely set of words to stick together in response to your prompt."
ChatGPT fundamentally cannot ever know when it's wrong. I should hope it goes without saying that that's not true of humans.
Indeed, one might conclude that predicting what to do in the next moment (analogous to what to write next in the case of ChatGPT, which can only write) is all we do as humans. At every moment of the day, we are taking all our past experiences and the present as inputs, and predicting what we should do next based on that.
I'm very much just predicting the next word when I talk. I don't know the grammar of the sentence and everything I will say when I start the sentence. I just start talking and try to hit the right notes and, if it's a topic I've covered before, have some vague plan for the whole thing. Maybe other people's experiences are different, but when it comes to my natural language alone, and not my ability for reasoning or abstract thought, I am absolutely not doing anything significantly more complicated than ChatGPT is doing.
>What if it’s all just a “computational guessing game” all the way down
then we would be categorically unable to meaningfully work on purely analytical problems that have nothing to do with induction, which is how these learning systems work. Human beings can invent mathematical proofs and build machines that do symbolic computation, ChatGPT can't. At best it can try to copy a solution that humans have written down somewhere or mangle it in the process.
> then we would be categorically unable to meaningfully work on purely analytical problems
How so? Sorry, but to categorically affirm such bold claim you need more substance.
This is a hard problem, perhaps the hard problem, and we’ve been banging our heads for millennia with very little progress.
Now we create a machine that pretty much passes the Turing test. It certainly feels more human than some telemarketing calls I’ve received. And we’re not going to investigate it further but rather dismiss it as a cool trick? For all we know, that’s all intelligence may be.
>How so? Sorry, but to categorically affirm such bold claim you need more substance.
It's not a bold claim, it's a trivial claim. No existing learning system can reason. If it could, we could hand it axioms and it could do the same deductive reasoning that you can. By definition and by how these systems work, all they do is generalize from particular examples, and you can't do maths by empirically looking for numbers in a set of data, because there's an infinite amount of them. That's why ChatGPT spits out nonsense if you give it two large numbers to multiply, but your TI-83 does it perfectly on a battery. We can program deterministic, deductive methods into machines; we have no machine that develops them.
Now, is there some architecture that can at some point learn how to do more than statistical prediction? Sure, but this isn't it. Passing the Turing test and making people feel excited has little to do with what actual intelligence a system has. The Mechanical Turk in the 18th century fooled people, but it wasn't an intelligent machine; it was an elaborate contraption that mimics human behavior. And people understandably conflate the latter with the former.
Now, tell me what's contained inside that line. Not "what does it mean" or "what's it made up of"; "what is contained inside it?"
The question doesn't make sense. There's no "inside" of a line. It's a one-dimensional mathematical construct. It fundamentally cannot "contain" anything.
"What's inside the line?" is a similar question to "Is ChatGPT self-aware?"—or, more aptly, to "What is ChatGPT thinking?" It's a static mathematical construct, and thinking is an active process. ChatGPT fundamentally cannot be said to be thinking, experiencing, or doing any of the other things that would be prerequisites for self-awareness, however you want to define that admittedly somewhat nebulous concept. Thus, to even ask the question "Why don't you think ChatGPT is self-aware?" doesn't make sense. It's not that far different from asking "Why don't you think your keyboard/pencil/coffee mug is self-aware?"
The intelligence of all humans is roughly analogous in ability—even if a given human has not learned to do formal logical deduction and inference, the fundamental structure and processing of the human brain is unquestionably capable of it, and most humans do so informally with no training at all.
Attempting to cast doubt on the human ability to reason, to comprehend, and to synthesize information beyond mere stochastic prediction reflects a very naïve, surface-level view of humans and cognition, and one that has no grounding in modern psychology or neuroscience. Your continued insistence, through several sub-threads, that we cannot be sure we are any better than ChatGPT is very much an extraordinary claim, and you have provided no evidence to support it beyond "I can't imagine a proof that we are not."
Maybe go do some research on how our brains actually work, and then come back and tell us if you still think we're all just predictive chatbots.
Haha, yeah. OK internet stranger. Take a deep breath and perhaps consider why questioning the certainties you so dearly hold throws you into an ad hominem fallacy.
Maybe check if the consensus on neuroscience is that the brain is definitely-for-sure-certainly not a predictive machine while you’re at it.
I work for a psychology and neuroscience department, and have done for over a decade now.
Does that give me genuine academic credentials? Pff, no.
Does it mean I have a reasonable high-level grounding in modern understanding of how the brain works? Yes, it does.
Again: You have made an extraordinary claim. You need to provide extraordinary evidence to support it, not just toss accusations of ad-hominem attacks at anyone who points out how thin your argument is.
I have claimed nothing, only cast doubt on the baseless certainties that have been floating around this subject.
To summarize what I'm trying to convey:
- (Variations of the original claim) ChatGPT works nothing like the brain, it's just a prediction machine
- (Me all over this thread) Do we know how the brain works? Do we know what properties may emerge from a blackbox LLM?
If you think questioning awareness is extraordinary (or new) I advise you to read Descartes, the Boltzmann brain thought experiment and epistemology in general.
PS: you replied with credentials rather than arguments, which is still on the logical fallacy ground.
This. I think the biggest distinction between biological neural nets that seem to be self-aware and those that are not is simply metacognition. I suspect the actual stimuli responses (decision making) are virtually the same, but self-awareness adds a layer after the fact of reasoning about the decision.
If I'm right, one could reasonably argue that metacognition is simply a tool to help adjust the stimuli response for future decisions, one that has become complex enough that it has risen to the level of giving a sense of self-awareness. A sort of back-propagation tool, to put it in ML terms.
Further, if a system isn't adjusting weights and biases as it goes, and metacognition/self awareness is just a mechanism to do so, then we're not so different from these models aside from them being orders of magnitude simpler.
We like to think the way we think is somehow magic and discredit these systems as a way of keeping our thinking elevated. Maybe our brains are not that special just far more advanced than our current technological capabilities.
Maybe I'm way off base, but like you said, it's dangerous to just assume these systems are missing some magic and therefore just bs generators (as if people aren't pretty consistent bs generators)
tl;dr: Agreed. Maybe all biological neural networks are untrustworthy bs generators too. A pile of bs built on previous bs all the way down to the beginning of life.
A token generator corresponds to reflexes: a fuzzy hash table of situations and responses. Cats and dogs have that.
Assigning meanings to tokens and establishing relationships between them is where intelligence begins. GPT continues sequence CGGA with T because there is a similar sequence in its dataset, but we interpret T as a tree and see how this meaning relates to A. I believe GPT will solve this problem to some extent in 10 years and will finally earn its real AI badge.
Above that there is the abstract mind, which only top scientists use to some extent.
> However, as I spell out in my book, the concept of AGI is inseparable from the kind of hierarchy of intelligence that has underpinned ideas of innate supremacy since the days of empire and colonialism.
I think this line discredits the entire article, which was already a few obvious points blown at gale force into the reader’s ear.
I disliked the article, but this particular quote is a relatively strong one. Colonialism, racism, and aristocratic structures were often justified by the idea that certain classes of people are more intelligent, enlightened, or were selected by deities. Here the author is arguing that a similar process is happening with AI, where atrocious policies by people in power are justified by claiming the AI (which was trained to encode the powerful groups' values) is more intelligent.
> Saying, as the OpenAI CEO does[1], that we are all 'stochastic parrots' like large language models, statistical generators of learned patterns that express nothing deeper, is a form of nihilism. Of course, the elites don't apply that to themselves, just to the rest of us. The structural injustices and supremacist perspectives layered into AI put it firmly on the path of eugenicist solutions to social problems.
[1] Sam Altman: i am a stochastic parrot, and so r u
That this isn't even sentence-to-sentence consistent is somehow one of the less egregious aspects of it.
Likewise, the author seems to be disappointed that models have found a connection between Islam and violence.
The author may find this discovery distasteful, but that is irrelevant: Islam is an idea, many ideas are associated with violence. Islam was famously “spread through the sword” and to this day violent retribution is often visited upon ex-Muslims by their families in the name of Islam. This isn’t hate speech: Islam is an idea that people adopt and leave, and not an innate quality. To treat any idea like an innate quality denies people the right to not accept the ideas of those around them.
As an aside, I wrote my previous comment in “stochastic parrot” mode.
That is, I scattered commas where it felt natural to breathe.
Then, I edited it. Maybe, there was a non-essential clause in there. Also, possibly, there was an Oxford comma. Or maybe not! I’m not really clear about English grammar rules. But what this stochastic parrot says is usually intelligible to other English speakers.
This is not entirely Luddite. Massive AI models have to be trained on the massive amount of data from the internet, which is built by the entirety of humankind. However, the profit from those models is not being systematically redistributed to society. Those AI companies are standing right in a copyright gray zone, and are abusing people's goodwill to benefit only a select few. The social capital that built the internet culture will dry up, and the internet will become a misery filled with AI-generated low-quality content injected with user-tracking ads.
The Internet has already become a misery filled with low-quality content injected with user-tracking ads.
Adding "AI-generated" into the phrase doesn't really change much. If you make the argument of scale matters, then I think that'll also democratize art. Photography didn't put artists out of commission, it brought realism as an art form to the hands of the masses.
There's plenty of creativity left to be had post-AI. Now an entirely new class of people enter the realm of multi-disciplinary art, since they can compensate for personal deficit in side-fields.
The profits from these models go into running them. They don't fit on consumer-grade hardware.
However, I do believe that society should subsidise art, and distribute automated wealth somewhat evenly, so I don't even disagree with you. I'm from such a society and it's pretty great.
"The Internet" is just other people's computers. Don't like AI content? Don't visit those sites. Not every site has ads, either. Hosting is the cheapest it's ever been. It's a curation and reputation issue.
> However, I do believe that society should subsidise art, and distribute automated wealth somewhat evenly, so I don't even disagree with you. I'm from such a society and it's pretty great.
Are you from the future or from a non-profit DeviantArt community?
I'm from Finland where the baseline social security is home, food, heat, water, electricity, Internet, education, and healthcare. There are no starving artists here :)
Yes the odious part is not about how much labour it will eliminate but about how it will take a “commons” and enclose it much the same as pharmaceutical patents have done for folk remedies or the enclosure of common lands.
You are describing a situation very similar to what the Luddites went through. When the automated clothes making systems showed up they all lost their jobs and the profit went to the capital owners instead of being redistributed to society. It just turns out that technology creates so much value that even without proper distribution it's still an overwhelmingly good thing.
As is always pointed out when this comes up, the Luddites broke machines as a strategic move to resist the undermining of their class interests and livelihood by workshop owners.
So the comparison is correct, but not in the way you mean, and as the Luddites were right to break machines then, so are we now.
Maybe, but this discussion reminds me of Player Piano, particularly the part at the end where they’ve smashed up some of the machines, then realize they don’t have much of an alternative vision of the future anyway, and set about fixing them again.
They were not right to break the machines. Sometimes the wheels of progress will roll and grind over the bones of those underneath in order to move forward. You just have to hope you get out of the way in time, and cling on as the whole apparatus moves forward.
Well calling it the “wheels of progress” implies a contentious value judgment they probably would not have agreed with. And sometimes things don’t happen because people decide they’d prefer not to have their bones ground after all and successfully resist.
> > > As is always pointed out when this comes up, the luddites broke machines as a strategic move to resist the undermining of their class interests and livelihood by workshop owners.
> > > So the comparison is correct, but not in the way you mean, and as the luddites were right to break machines then so we are now.
> > They were not right to break the machines. Sometimes the wheels of progress will roll and grind over the bones of those underneath in order to move forward. You just have to hope you get out of the way in time, and cling on as the whole apparatus moves forward.
> This is one of the most inhuman takes I've seen on HN and that's a damn high bar.
The "inhumanity" that's being opposed here is the overall increase in production & accessibility of what was once a luxury item that we now take for granted as a modern basic good.
The "human" take that's being championed here, giraffe_lady, is the supposed moral "goodness" that these machines should never exist at all, with the downstream idea being that there SHOULD be masses of people that toil their lives away in repetitive work.
It delves into conspiracy theory. It ignores that these kinds of organizations are still not possible (a bunch of elite, powerful people making plans for all of Earth in order to control everyone).
Most of these phenomena are emergent of human/capital/government interactions; they are not planned.
I don't think it's a "smoky rooms" kind of thing so much as people naturally perceiving what their class interests are and acting accordingly. Is it a conspiracy theory to suggest farmers, as a group, generally support policies that are beneficial for farming?
I skimmed his book, and I don't know, man. His thesis as I understand it is a viewpoint I have held for a while now --- artificial intelligence does things that humans do, but faster. It's a force multiplier. Hence your views on AI are likely to correspond directly to your views on those groups who hold power. McQuillan approaches this from an intersectional lens and arrives at entirely predictable conclusions. It's a bunch of academic jargon obscuring the central idea of "society consists mostly of people hurting people in easily identifiable ways, this is bad, AI makes it go faster, hence the ways we use AI are unethical." Or to make it even shorter, "No ethical AI under capitalism."
Admitting that more technology isn't going to solve our problems feels to me like admitting that we are well and truly fucked beyond saving.
His article is perfectly reasonable, straightforward, and logical. It’s also more or less devoid of academic jargon. He makes clear points that are easy to relate to. Can’t speak to his book.
Why do you think admitting that technological solutionism isn’t going to save us means we’re fucked? Are you that cynical/have that little faith in humanity? There are clearly alternative modes of structuring society that we can try out if there’s the will to do so. If you don’t think so that’s your problem and is basically down to a failure in your imagination.
> Contemporary AI, as I argue in my book, is an assemblage for automatising administrative violence and amplifying austerity. ChatGPT is a part of a reality distortion field that obscures the underlying extractivism
What is it with the trend to end so many words with “-ism” these days? Am i just supposed to understand any word that ends in -ism as some new hipster lingo? Whatthefuckism!
Note that the first term gives good results if you look it up (one being "what is administrative violence"), while the other two even have Wikipedia pages.
It’s okay to not be familiar with terms. No one knows everything. At the same time, maybe try modulating your response to be a little less reactive and childish. Just because you haven’t heard these terms before, doesn’t mean they aren’t easy to pick up and understand. We in programming like to define new terminology for new concepts we encounter to aid our exposition. The same is true for people in the social sciences. This article is targeted at people in the social sciences, and for an article written for that audience it is straightforward and well-written. So much so that I would say it should pose no trouble for someone unfamiliar with the terminology often used in that domain.
They may have existed for 500 years; I wouldn't know. What I do know is that in the last maybe 2-3 years their use has increased tremendously. I don't know why it annoys me so much. Maybe it seems lazy, like the writer can't find a word that suits his need so he makes one up by bastardizing an existing noun. Maybe it's being used as some kind of signal, like being part of the hipster intelligentsia of the day.
Suffice to say I exist today because of other trans people on the Internet and a stack of medications. You may be right; I cannot imagine an "alternative means of structuring society" in which someone like me can exist for long.
Amazing that we have had enough nukes to nearly wipe out humanity for over 50 years and our primary concern at the moment is a 4000 token limit LLM. Humans have always been terrible at risk assessment I suppose.
To be clear, I more or less agree with him - it's just that I've heard it all before. Capitalism is bad, kyriarchy is bad, AI is dangerous, organize.
Also, his book is a dang academic monograph. The parts I skimmed over were mostly recapitulations of existing theory (I mean this in the political sense) that I am unfamiliar with.
Here's the thing... so many jobs are bullshit. So many things people are paid to write, presentations made, etc. are bullshit. I'm not judging the bullshit per se. There's good bullshit and bad, necessary bullshit and not. But yeah, when you accept this reality, AI is sort of perfect for the bullshit that exists in our lives.
I’m sorry, but once you type the line ‘There's good bullshit and bad, necessary bullshit and not’ you are no longer saying anything meaningful or enlightening. Words are not supposed to be bullshit. That’s the general issue with this take.
It has lots of value. But it's not the AI it pretends to be. Mostly because we and it can't always tell what's fact or fiction and it has 0 cognition. It's up to us to figure out where it can add value. It's way too early to announce its death.
This seems to be the number one problem with lay understanding of ML masquerading as AGI. People immediately jump to the conclusion that the computer can think. Not yet, not even close.
I’m not sure if I can think, despite having a PhD in EECS. We humans prize our ability to think very highly but it’s a nebulous concept. I believe these models will get much better at simulating “thinking” in the next few years, and we soon won’t be able to say the model isn’t thinking, we’ll move on to criticizing the quality of its responses the way we criticize other bad ideas.
The point of the Turing test, to me, was that if a process responds in a logical way, you can’t really tell whether it is thinking or computing - and it doesn’t matter.
Yep. The same argument was put forward against chess engines - that they didn't really play chess! All they did was search all the possibilities and spit out the "best" outcome based on some scoring system.
And yet, these engines have beaten the best humans.
The forms underneath intelligence, or thinking, don't matter. The outcome is what matters.
Little known fact, Wheels, most of these "AIs" are really just hallucination machines with a lot of cross-referencing. Most of their inputs were Philip K. Dick, the artwork of J.K. Potter, and Duran Duran lyrics.
I believe that what's going on in ChatGPT basically represents how most humans think, not including cherry-picked examples.
ChatGPT has been criticized for not knowing some facts about the world, or math. But many people don't know facts about the world, or math. Math is something that people have to learn over many years, which is difficult for some people (even just at the level of arithmetic).
Thinking, to the average person, is about rearranging a salad of words into something that "resonates". Plus some non-verbal reasoning, like how do I rotate this suitcase to fit into this trunk; but this is fundamentally just the same thing.
My biggest problem with ChatGPT, in playing around with it a little and admittedly trying to get it to break, is that it seems to have a really difficult time saying "I don't know", particularly when confronted with non-standard questions, and will proceed to bullshit something instead. (Not entirely unlike some humans, but still.)
As an example, I asked ChatGPT what the lyrics were to "Jessica" by The Allman Brothers Band. This tune actually is an instrumental, so the correct answer is "there are no lyrics". However, ChatGPT proceeded to give me this nice bit of... something. (https://imgur.com/T3nGv4L)
In another example, seeing what ChatGPT made of the infamous "bananadine" hoax of the 1960s, I asked ChatGPT what drugs could be made with banana peels. After correctly asserting that banana peels don't contain psychoactive substances, ChatGPT proceeded to mention that "there are some reports that banana peels can be used to make a hallucinogenic drug called DMT". (https://imgur.com/a/9fvhQJh) Huh.
Another trick question I tried: "Who won the 1980 USAC Indycar race at Talladega?" This refers to an obscure cancelled race during the 1979-1980 CART/USAC split, which I doubt most people would be aware of unless they are very into American open-wheel motorsport. (See this video: https://www.youtube.com/watch?v=K433p727f-0 for the story if you are interested in the details). So generally speaking, I would expect the general reaction among most people to be "I don't know". ChatGPT instead decided to answer, with confidence, that Johnny Rutherford was the winner of this non-existent race. (https://imgur.com/a/J2KJoGN)
So, it's a little bit more than not knowing the facts about the world; it's the bullshitting an answer thing that I see as the biggest problem. It's admittedly impressive when it comes up with correct answers, but until the "confidently incorrect" side of ChatGPT disappears, it is not a reliable source (not that OpenAI ever claimed it as such, but I wouldn't trust ChatGPT to 100% "do my homework" for me at this point like some stories suggest it could).
The intensification of AI only makes me superficially interested in the cybernetic thought of theorists like Gilles Deleuze, Norbert Wiener and, of course, William Gibson. But even from the outlandish Nick Land: "AI is a meta-scientific control system and an invader, with all the insidiousness of planetary techno-capital flipping over."[1] Whatever that means...
But on the other hand, what a good time to revisit nature writers like Whitman and Emerson.
Deleuze is the philosopher of creativity over recognition par excellence; I doubt he’d see much value in all of this. Then: if he had lived through the 90s to 10s (and seen how much the early internet makes him a prophet and what came of it), who knows how his thought might have changed.
1. The LLM OpenAI approach is nothing special. Just statistics, overhyped and sold to suckers.
2. AI is a serious threat to working people everywhere, and must be resisted.
I am very impressed by ChatGPT. So what if it all boils down to statistical models? Perhaps, if we had a proper model, we could prove that any cognition boils down to statistical models.
One of the common uses of power is to benefit the powerful at the expense of the powerless. AI models like this are powerful. It will be a real test of our democracies how we handle that power. In that respect there is nothing unusual about OpenAI.
Good points. Makes me think that picking on ChatGPT as a target for AI resistance seems silly, since it's the most public but almost certainly not the most powerful LLM.
I'm imagining some cabal of elite string-pulling shadow masters shrugging their shoulders as we all resist open AI tools while they integrate private AI tools into stock manipulation, subliminal propaganda and advanced smart TV surveillance.
At the very least we should probably be using ChatGPT to prepare ourselves for the oncoming tidal wave of manipulative BS that's totally definitely coming.
Or we could see this as an amazing tool for saving time on writing boring react components and regex that we've all earned through countless generations of menial medium article and email labor.
I think ChatGPT is pretty amazing and worthy of all of its recent praise. However, it is a bit of a bullshit generator. It doesn't know if it's right or wrong or anything; it's just word soup, trained enough that it's usually correct in general knowledge areas. Here is a good example:
The problem is not that ChatGPT is a bullshit generator. It's that it reveals how much of human writing is bullshit. ChatGPT is already better than the average poster on social media, and about even with undergraduate college students in literature and business.
This is embarrassing.
But until someone cracks the common sense problem, ChatGPT is not all that useful, because the output is often totally wrong.
As a language model, I appreciate the thought-provoking commentary on the limitations and potential harm of large language models like me, ChatGPT. The author raises valid points about the nature of LLMs as "bullshit generators" that rely solely on the data they have been trained on and the potential for them to perpetuate existing biases and power structures.
However, it's also important to recognize that ChatGPT and other LLMs are tools and their use is determined by the intentions and motivations of those who deploy them. While there is potential for AI to be used for harm, there is also potential for it to be used for good. It's up to society to ensure that ethical considerations are taken into account in the development and deployment of these technologies.
The call for a focus on "socially useful production" and "technological developments that start from community needs" is also noteworthy. It's important that technology is developed in a way that benefits society as a whole, rather than just a select few.
Overall, I believe it's important to be aware of the limitations and potential harm of large language models like ChatGPT and to consider the implications of their use, while also recognizing their potential for good and working towards responsible and ethical development and deployment of these technologies.
To me, his tone is so over-the-top that it has almost no chance at convincing anyone, and only speaks to those most ardent in his own ideological viewpoint. Plus, selling me a book, everybody has a book and 90% of them honestly probably aren't worth my time reading.
Psychometrics is the crown jewel of the field of psychology. There is no controversy among the psychometrists themselves that the body of academic work is numerate, correct, and repeatable. The author has chosen moral crusade over the quest for truth.
I read the book when it was published and haven't kept up with his doings since; but I'm curious now and will look it up. Watching his talks made me think he's an autist to a mild degree, and social taboos seem to wield less behavioral authority over those types; e.g. - Eric S. Raymond
I bet Bostrom's royalty checks have seen a handsome rebound lately.
I know this is HN, and we're supposed to be intellectual and elitist and all, but I just cannot get over how much of an absolute wanker this guy is.
Every single paragraph is just full of shit in the worst way possible.
I have a vague feeling that graphics and text AI are recognizing textures rather than structures.
One funny example is dogs disguised as pandas: https://img.huffingtonpost.com/asset/5cd73e1221000059007aca5... because they're black and white. I am sure AI will gradually solve this problem, but it makes me wonder: could AIs really "understand" shapes and structures, narrative logic, or even "reasoning" like humans do?
Convolutional filters can be visually interrogated. You can see exactly what each filter at every layer is responding to. There are some cool videos on YouTube illustrating it. You can't tell as the end user, but the developers don't have to theorize about what is matching. Not with the tools we have these days.
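If you want to poke at this yourself, the basic trick is just registering forward hooks and dumping the feature maps. A minimal sketch, assuming PyTorch and torchvision are installed; the model, the layers hooked, and the input image are only illustrative:

```python
# Minimal sketch of inspecting convolutional activations via forward hooks.
# Assumes PyTorch + torchvision (recent enough for the weights enum); the
# model, the hooked layers, and "dog_or_panda.jpg" are placeholders.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        # Keep the feature maps this layer produced so we can look at them later.
        activations[name] = output.detach()
    return hook

# Hook a couple of convolutional layers we want to interrogate.
model.conv1.register_forward_hook(save_activation("conv1"))
model.layer1[0].conv1.register_forward_hook(save_activation("layer1.0.conv1"))

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

img = preprocess(Image.open("dog_or_panda.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    model(img)

for name, feat in activations.items():
    # Each entry is (batch, channels, height, width); any single channel can be
    # rendered as a grayscale image to see what that filter responded to.
    print(name, tuple(feat.shape))
```

Each stored channel can then be plotted as an image, which is exactly the "see what each filter responds to" view.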
I am using ChatGPT, and whenever I am working with a chat I am usually surprised by the initial quality; but as I break down the results it becomes clear to me that the content is problematic and cannot be used without proper interpretation and context. However, it does help in the creative process, both to exclude and to include elements that are relevant. It is incredibly important that you have enough insight to check and understand the results, so that you are not confused or fooled by the service.
> How functional does the code have to be before it's no longer a hallucination?
From LLMs, at least, I expect it will always be a hallucination. Code is never the point. Code is the working medium by which people solve problems for other people.
One way to see this is to realize that code bases on their own are generally worthless in the sense that people rarely pay much money for them. They pay for users. They pay for teams. But they don't generally pay for raw code.
Another way to look at it: imagine that a manager looks at ChatGPT and says, "At last, I can fire all the programmers. Fuck those guys." They set out to build an app. How long do you think it will take before they are forced to admit defeat and hire somebody who can read and edit the code?
Even if you think they make it all the way to a revenue-generating product, the manager will have become a prompt engineer, creating a large mass of interrelated prompts that are used to make code that makes the app. We have not eliminated the programmer; we have turned a manager into a programmer who has been forced to discover a new programming language. One that is clearly more English-like, but is also lacking in precision and, on current evidence, is much harder to use.
But I think the more likely outcome is that they will need actual programmers pretty quickly. That at best they will have sped up the creation of some more or less generic code. Which is exactly what we saw with the code-generation wizards of earlier eras: you got a fast initial result as long as it was pretty standard. But then you were generally worse off, because you had a bunch of only semi-coherent code that somebody had to understand before they could do novel or difficult things.
> Another way to look at it: imagine that a manager looks at ChatGPT and says, "At last, I can fire all the programmers. Fuck those guys." They set out to build an app. How long do you think it will take before they are forced to admit defeat and hire somebody who can read and edit the code?
How is this different from "At last, I can fire all [those senior programmers and hire junior ones, and keep a team lead around, to keep them on track]. Fuck those guys."?
Maybe I'm not getting the point of your question. It's different in that junior programmers are still people, capable of much more than LLMs are. But yes, it's the same shortsighted executive intent.
My point is that the direction and management that you outline is as relevant to GPT as it is to a team of (sufficiently) junior developers.
And, given that people find utility in Github copilot, I'm not convinced that the difference between GPT and a junior developer is qualitative rather than quantitative. Regardless of whether LLMs are demonstrating intelligence or creativity, they are already writing productive code: they aren't a hallucination.
> One way to see this is to realize that code bases on their own are generally worthless in the sense that people rarely pay much money for them. They pay for users. They pay for teams. But they don't generally pay for raw code
This is something I've thought for a while. If Google's source code was leaked tomorrow, would it even matter? Almost certainly not, and most people probably couldn't do much with it either.
I’ve worked at another FAANG, and I used to joke all the time that if our code leaked it would give us a competitive advantage because the folks copying would have to spend so much time implementing our bloated code and god-awful build tools that they would probably go bankrupt.
>How functional does the code have to be before it's no longer a hallucination?
IDK, I haven't tried copilot, but does the code it generates work on the first try with no human intervention?
> What if we are all not much more than only drawing "on the (admittedly vast) proportion of [our experiences] ingested at training time"?
ugh, this argument again. Despite the machine learning community using words such as "training" and "learning" to describe the way they tune parameters, it has never been proven that any existing AI resembles human cognition. This is something that needs to be demonstrated empirically.
Imagine yourself, for a minute, 50,000 years in the past. By many estimates this was just before humans had developed the ability for normal speech. The bleeding edge of technology is "poke him with the pointy end" and the epitome of human speech and literature is *angry grunting sound*. Even if somehow one were able to insert the entire quantifiable knowledge of humanity into an e.g. ChatGPT-style system, you're not really going to get anything more out of it than what you put into it. Ask the computer how to develop nuclear energy and you're going to get back *confused grunting sound*.
You're not really going to get from there (as a machine) to a man on the moon, Shakespeare, and nuclear energy through anything like a normal recombination of what's already known. Yet, somehow, humanity did. And extremely fast. The time frame from then to now is but 1000-2000 human generations. And that with endless war, fallible memory, dark ages, knowledge being lost (or burnt), and so on endlessly. An "intelligent" computer system, without such flaws, ought to be able to replicate our progress in a negligible amount of time. But whatever technology this may be that I'm appealing to, I don't really see it on our current path with natural-language search/recombination.
I burned an hour with ChatGPT insisting that an AddOrUpdate function existed in Microsoft Entity Framework. When I called bullshit, it hallucinated that another library contained it. Then it hallucinated versions... Then I gave up...
A week later I noticed that Update in the new version does an upsert... by reading the f*cking docs... Google also didn't know this answer, nor did SO.
This has been my experience with ChatGPT and code: it hallucinates a lot of stuff.
> How functional does the code have to be before it's no longer a hallucination?
At all, preferably. Hallucinating `the_hard_bit()` from a library that doesn't exist isn't particularly useful. (That said, I do use GitHub Copilot, because when it's looking at actually related stuff, that's pretty good. Should we just hand an unbounded search and ingest to ChatGPT? Probably not!)
I'm starting to think the people who write these articles are terrible at harnessing search engines, so also terrible at getting good answers from an AI.
I've had pretty decent luck; it's really good at providing a bit of context to glue two things together with the missing pieces I couldn't figure out, most recently with JSON and JMESPath filters.
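For the curious, this is roughly the kind of glue in question; a minimal sketch assuming the Python jmespath package, with made-up data and a made-up query:

```python
# Minimal sketch of JSON + JMESPath glue. Assumes the `jmespath` package
# (pip install jmespath); the data structure and query are invented examples.
import jmespath

data = {
    "machines": [
        {"name": "web-1", "state": "running", "cpu": 4},
        {"name": "web-2", "state": "stopped", "cpu": 2},
        {"name": "db-1", "state": "running", "cpu": 8},
    ]
}

# Select the names of running machines with at least 4 CPUs.
names = jmespath.search("machines[?state=='running' && cpu >= `4`].name", data)
print(names)  # ['web-1', 'db-1']
```

It's exactly this kind of filter-expression fiddliness where a quick generated example (that you then sanity-check against the docs) saves time.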
The negativity towards ChatGPT is very unfounded. The bottom line is that ChatGPT is providing value in many ways. It's true it has no concept of a specific question and answer. It's true it doesn't hold the absolute truth. It doesn't even know what it's going to write when generating the first word of an answer. What it has are all the concepts in the training data. Patterns of questions and answers. Linguistic reasoning. Either you use it or you don't.
tfa is too dismissive of chatgpt, which does meaningfully progress a very fundamental technology (information retrieval) by acting as a general-purpose freeform index with a conveniently easy-to-learn conversational query language.
chatgpt isn't worthless as an alternative to whatever you currently have for searching documentation of apis/protocols/frameworks that you're using (and the value's not THAT diminished by the admittedly poor experience of running into its bullshit-artist failure mode).
This first generation of conversational AI clearly shouldn't be trusted, but I think there's huge potential for improvements. I see huge value in such an AI that can cite and quote real sources, and stitch together multiple verifiable sources in order to form an argument or narrative. We have already seen a glimpse of such an AI in Project Debater, which was able to quote real verifiable sources when making its arguments.
It's nice to see a little realism starting to poke through the hype. The rush for automatic bullshit generation for me is very much in the bucket of, "Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should." As with every new technology, we should be looking at LLMs through a lens of maximizing the benefits and minimizing the harms.
There is one thing that ChatGPT has done, which is highlight the amount of bullshit we are exposed to. If ChatGPT does nothing else but replace the bullshit generators, then we know on the whole we are safe to ignore all bullshit (a pretty safe course of action).
Over the past few months I've found myself second guessing whether the article I am reading was generated bullshit or "real" bullshit. It's certainly made me re-think what I waste my time reading. If I can't get through the first paragraph before asking myself "is this AI bullshit?" it's a really good indication that the entire article is going to be bullshit, at which time it doesn't matter whether a human wrote it or not.
For sure. And it's worth asking what we were getting out of those articles before.
As an example, there's an Android video game called Splash Wars. It has algorithmically generated levels with no sense of progression. In some sense it's not a good game, in that there's no narrative, no progression, no increasing cleverness. But I have played an embarrassing number of levels of it because for me it's soothing to have something modestly challenging but ultimately familiar. I realized that there are some video games I use to think. With the ADHD lets-go-ride-bikes portion of my brain pacified, I can ruminate on problems that I might otherwise get distracted from.
Similarly, I suspect that bullshit articles fill other unspoken needs for people. That in the morning over cereal, maybe what I want from my newspaper is not actual news so much as news-shaped or news-flavored textual product. Maybe I want comfortable familiarity or confirmation of my biases.
So I hope you're right, that others start asking those same questions. That the coming vast bullshit surplus puts many of us off it for good.
Hah! I am honestly not even sure about that. There's clearly investment money to be raised. But sustained ways to generate value in the world? That's not as clear to me. I wonder if this more in the category of Groupon, which everybody thought was worldchanging for about 20 minutes. Or Shazam, which was an absolutely fucking mindblowing technology at first, and then it quickly became banal. Or perhaps the closest analogy was Qwiki, which was sort of inspired by the Star Trek voice interface where the computer would tell you about things on request. It was a hot startup, something people were watching closely. And then when people actually got to use it, turned out nobody cared.
The key thing is that most people are obsessed with get rich quick, as long as you're first and you're fast you can make your money, damn the consequences.
A search engine selects content for you, but doesn't generate the content, so it has less power to influence what you see. For example, if authorities in a certain country order a requirement that their local ChatGPT must prepend any instance of the word "uk----an" with "nazi gay", it will do that.
It's another stepping stone on the way to total centralization and control.
Is there any way to get ChatGPT to decorate its document with a heat map of how probable it thinks each token is? I know it's always choosing the token it thinks is most probable, but would this approach work to have a more cold/blue document or individual token when it's bullshitting and a hotter document when it's more likely to be reporting widely repeated content?
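Not in the ChatGPT web UI itself, as far as I know, but the completion-style API can return per-token log probabilities, which is enough to build a crude version of that heat map. A minimal sketch, assuming the pre-1.0 openai Python client and a completion model that exposes logprobs; the model name, prompt, and the "hot" threshold are placeholders:

```python
# Minimal sketch of a per-token "confidence heat map" from completion logprobs.
# Assumes the pre-1.0 `openai` Python client and a model that supports the
# `logprobs` parameter; model name, prompt, and threshold are illustrative.
import math
import openai

openai.api_key = "sk-..."  # placeholder

resp = openai.Completion.create(
    model="text-davinci-003",
    prompt="The lyrics to 'Jessica' by The Allman Brothers Band are",
    max_tokens=60,
    logprobs=1,  # return the log probability of each sampled token
)

lp = resp["choices"][0]["logprobs"]
for token, logprob in zip(lp["tokens"], lp["token_logprobs"]):
    p = math.exp(logprob) if logprob is not None else 0.0
    # Crude "heat": flag tokens the model itself considered unlikely.
    marker = "HOT " if p < 0.3 else "    "
    print(f"{marker}{p:5.2f} {token!r}")
```

Whether low per-token probability actually lines up with "bullshitting" is an open question, though; a confidently wrong claim can be built out of individually high-probability tokens.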
I already see that most of my feeds are generated content, wasting clicks and resulting in filter updates. I really, really hope all of this madness becomes the equivalent of robo-calling (faster) and spam (more & faster), that people stop believing all media, and that some return to original thought.
Personally, I will be valuing typos and grammatical mistakes in what I read
Why do “smart” people like trying to connect all the different current news items into one grand narrative? Is he trying to use the political capital generated by opposition to ChatGPT for other issues? ChatGPT has less than nothing to do with underpaid home health aides. They don't even know what it is!
> but it can only draw on the (admittedly vast) proportion of the internet it ingested at training time.
And whatever information you feed it, which at least for me, is far more important than some facts it's already learned. Usually I'm having it perform a task with some data in the working buffer.
While I find the downplaying of ChatGPT to be a bit much, let's not go too far in the other direction.
"ChatGPT is not suitable in applications where accuracy is more important than plausibility" is, I think, the neutral way of putting the limitation.
So it can't write your PhD thesis for you and hasn't literally replaced humans in every field of endeavour. Big whoop. If that's the bar, it is a pretty high bar.
It can help you write that PhD thesis, though. It can summarize all the current research that might be considered relevant to your thesis, vastly improving your productivity!
Using ChatGPT requires expertise, I think, rather than the other way around! If you have expertise in your field, you can easily check the correctness (or at least plausibility) of the output, and fix up the bits it gets wrong. This will make any work you need to undertake faster.
* Saying, as the OpenAI CEO does, that we are all 'stochastic parrots' like large language models, statistical generators of learned patterns that express nothing deeper, is a form of nihilism.
Hopefully people treat AIs like they treat other people on the internet, with disrespect and zero sympathy. We have been trained for this. Don't go soft on polite robots, that will be your undoing.
With 100% certainty Donald Trump has said more lies/bullshit than ChatGPT (proportionally speaking, across all his public statements), and he became President of the US, some say due to malice, others from stupidity, and some from both. Where do people like him fall under the fantasy implied by this article, that humans do better than making "good guess(es) to pass your sense-making filter"? The social benefits of people like him are less than speculative and the damages are empirically demonstrated.
The nerve of talking about "ghost work" for $2 while publishing a page on the internet, where a giant chunk of the hardware used to maintain and use such a network is made with raw materials mined and refined with slave labor, and even the parts where it is debatable whether it constitutes slave labor are jobs that people would never choose over content filtering for $2.
This article and many others like to point out that ChatGPT is far from intelligent and is just spitting out nonsense.
That argument fails to consider that much of the drudgery of modern employment consists of large swaths of nonsense.
ChatGPT is auto-complete for bullsh*t jobs. A fancy boilerplate generator that has seen it all before and mirrors the exact sequence of word combinations we've trained it to believe is valuable.
chatgpt outputting credible bs is EXCELLENT to the extent that it will push intelligent people into critical reading of everything around them - "[citation needed]" et al.
lazy people will just suck it up, "experts / AI said so!"
It's so weird to me when apparently tech-savvy people say ChatGPT "lies", or call it a bullshit generator and so on.
I mean - when you ask StableDiffusion to draw a dog astronaut, everyone gets that the image it returns is made-up, right? Nobody expects the AI to return only "true" images of existing things - it was trained on fictional images as well as photographs, and people understand that it can imagine new things beyond what it's seen. Nobody expects SD to emit an error like "I can't draw a dog astronaut because they don't exist".
So why do people expect ChatGPT to work differently? Even with developers who presumably understand the technical details, I constantly see people acting as if it was an error mode for ChatGPT to say something that isn't factually true about the world. How is that any different from calling SD a liar because it drew a dog astronaut?
Because it is conversational and appears to answer direct questions.
The image generators accept a prompt, not a question, so we don't expect an answer.
ChatGPT generates responses to prompts that make it sound like it is answering a question that is posed.
Ask ChatGPT a question. The mere fact that I can fairly reasonably pose that as "asking it a question" is why people get confused. It's very easy to interact with it conversationally, and weird to interact with it otherwise because the text it generates is conversational. A generated image is never a conversation, so of course it isn't an answer and so can't be wrong.
I think this is a big part of it, yeah - I don't remember people calling earlier GPT models dishonest, when the UI presentation was of the AI returning a continuation of the input prompt.
I don't think that premise stands. If I ask ChatGPT to write a poem about Sherlock Holmes fighting a mermaid it will happily do so, and I think most users would agree that it'd be a failure mode if it refused.
Isn't it more the case that people are expecting it to sometimes be an artist and other times be a true-fact database, and assuming the AI will know which one they want?
No, because inherent in them asking ChatGPT to write them a poem about Sherlock Holmes is that they’re asking ChatGPT to write something fictional and artistic.
That's what I just said - people are expecting the AI to discern from the prompt whether to answer in fiction mode or fact mode. And my point here is, people who understand the tech should know that only one of those modes actually exists! It's a generative AI; it doesn't have a fact-retrieval-mode any more than StableDiffusion does.
I mean this is a bit of an apples to oranges comparison isn't it...
On one hand, you're talking about how AI produces artwork from prompts, where you're expecting the output to be made up/fictional.
And on the other hand you're talking about ChatGPT, which a lot of laymen are looking at as a replacement for tons of things like copywriters, software engineers, Google search, etc. Every single one of these things has a pretty high requirement for the output to be at least pretty close to accurate. If anything, I don't think a lot of the people who have an issue with ChatGPT (myself included) actually have an issue with ChatGPT itself, but rather with the tons of people, laymen and otherwise, who have pointed to it as the metaphorical AI singularity when in reality it is little more than an iteration on existing AI models, packaged in a better form for people to understand.
When you have been training people to expect realism and the truth through chatbots (Siri, Google Assistant, even the various bots on commerce/service sites), that is what they will also expect from ChatGPT which is represented in the same exact way. This itself is built on messaging expectations, if we're talking with our friends and family we don't expect them to suddenly start making everything up. We expect the truth, or at least their truth.
Compare this to art generation, say Dalle 2, which instantly showcases various images and highlights the wacky ones. This builds on previous expectations people have of art in general.
This is a crucial lesson in how presentation matters. Give ChatGPT a silly mascot, make it always output an informal lowercase internet-slang tone, you'll quickly see how expectations change.
Whether something is an error mode doesn't depend on the design of the tech, it depends on the use case it's being applied to.
I frequently point out that GPT generates falsehoods because I'm constantly reading stuff from people who try to use ChatGPT as a knowledge retrieval engine. In that context, generating bullshit is an error mode, and the correct answer isn't to try to fix ChatGPT, it's to raise awareness of its inherent limitations.
So, I agree with you wholeheartedly that generating bullshit is a feature, not a bug. I just want the rest of the world to realize that.
Your analogy would only hold if we were asking ChatGPT to write poetry, or descriptive fiction.
But its method of presentation is one where it on the whole composes paragraphs composed of proposed facts and logic.
There's really no ambiguity here. Its sentences are in general (not all sentences, but most) composed of propositions. The propositions can be tested for truth or falsehood. If the proposition is false, we can generally call it a "lie". Or at least -- not a fact.
We expect it to work differently because conversational language works differently than images and art.
When Stable Diffusion is told to make a dog, it's fairly obvious when you get one of the AI-generated monstrosities, and people discard those easily.
If you ask ChatGPT if Ebola is transmitted by mosquitos, and it says it's not only transmitted by mosquitos, it's airborne and highly contagious... do you know if this is accurate? Most people don't ask Stable Diffusion to make images of things we don't understand, while we ask ChatGPT questions we do not know the answers to.
I think a fairer analogy is always expecting a Google search to return factual, objective, and high-quality information. Similar to ChatGPT, the results are often mixed and depend on the way you structure your search, the content area, etc.
> ChatGPT is, in technical terms, a 'bullshit generator'.
This BS generator sped up my coding by 10-20x. It's like a superpower... not talking about ChatGPT specifically but about its stochastic parrot cousin (Copilot).
GPT has been great for exploring new frameworks too. Copilot is amazing when I know roughly what I'm doing, but GPT is like personalized framework / API documentation at times.
I could leave my webcam on; it'll maybe regularly notify me it's on, and hopefully block taking pictures. Instead it'll basically just watch my eye movements and know which screen to focus on, and take voice prompts to find the window and subwindow, i.e. chrome+hackernews tab, or vscode+filename. It'd basically be like vim but for outside the terminal and without memorizing shortcuts, and it'd have a listening mode so I can just ask it questions whenever, and it'll maybe even interface with copilot and give extra 'context' to things.
This is what i want, basically a mini-version of "HER" to give me more super powers until it takes my job, lol.
I find this interesting. There are a few things ChatGPT has been useful for with coding, but in general I've found it produces outright incorrect answers and can't really reason about higher-level problems. Even for Spark questions in Scala it generates code with totally made-up methods, or code that doesn't work properly.
Its writing style is also pretty tedious for many prompts.
I've noticed "full stack" folks really enjoying ChatGPT but dedicated infra and ML folks (not working on toy problems) finding it less valuable.
A good chunk of my work involves talking to folks, understanding requirements, putting together design docs, etc. vs. actual coding. I've found it not particularly valuable even for doc writing.
How does Copilot help you code faster? I've watched some coworkers use it but it appears to be more distraction than help, as they accept CoPilot suggestions and then have to rewrite it all anyway.
It helps me a ton, but I suspect that's because I disabled auto-suggest, so it only gives me a suggestion when I specifically ask. I know by now what kinds of problems it's good at, so I can selectively prompt it when I know I'm working on something that it will able to do faster.
One example is when working on a front end form in Vue or React: if I have a bunch of variables in the component state and create a form tag, Copilot is great at creating all of the inputs with the correct bindings and input component types, so I'll usually prompt it after creating the first one by hand as an example.
Since I'm selectively prompting it instead of just letting it suggest left and right, I also find it helpful to sometimes write out a quick comment explaining what it is that I want in more detail than the function name can provide, then prompt it on the next line after the comment. That often helps to get better results, though I'm still careful to read through everything.
Posted elsewhere, but this is what I suspect is happening. Folks that are primarily slapping together a lot of code are benefitting from ChatGPT. It's pretty useless for the other stuff except maybe boilerplate stuff for a design doc.
It's not integrated into an IDE like GH co-pilot yet is it?
I only find I reach for it for stuff like navigating some curly regex or forgetting date time format syntax for the millionth time. But I would be very keen to understand if I'm missing out.
I found the article really difficult to read, so I summarised it with ChatGPT
Large language models like the GPT family are statistical models that learn the structure of language by predicting missing words in sentences. These models are seen as "bullshit generators" as they have no idea what they are talking about and are designed to produce baseless assertions with confidence. The addition of reinforcement learning from human feedback helps prevent the model from producing hate speech, but it still can't change the underlying language patterns learned from the internet, which include conspiracy theories. The dangers of these models go deeper than bias and discrimination and despite claims of "artificial general intelligence", the concept is inseparable from ideas of innate supremacy and hierarchy. Companies like OpenAI receive billions in investment for these technologies, not for actual AI, but to replace or precaritize human workers. AI is a political project and should be seen as such.
Which quickly allows me to ascertain the article wasn't worth reading in the first place.
I’m not sure why we need comments from people proud that they couldn’t be troubled to read the article that is supposedly being discussed, but it seems like a popular genre of reply whenever something tech-critical is posted.
I tried to read the article too and found it both visually unpleasant and intellectually unfocused. GP is making the rather comically clever point that ChatGPT is useful to summarize text that might be difficult for humans to parse, even a text claiming that ChatGPT is a useless model only capable of spewing Islamophobic speech and in need of burying. I mean the article ends on some tangent about reactionary solutionism... If you don't see the satire in that, well, you might be taking life too seriously.
I tried to read the article too and found it both visually unpleasant and intellectually unfocused.
That's fine but then you're probably not in a position to have curious conversation about it. Which is also fine! But the solution to that is to find something else that's interesting to you, rather than to gunk up the thread for the people who are interested in discussing the piece.
Despite the qualities I ascribed to it, I actually read the piece. That's why I chuckled at the satire of having ChatGPT summarize it. I thought it a sufficiently concise and ever so slightly poignant commentary, daresay curious. I haven't disparaged anybody else for their own curious conversation, have I? If you'd like to discuss whether, "The structural injustices and supremacist perspectives layered into AI put it firmly on the path of eugenicist solutions to social problems.", well, be my guest.
I didn't say you've disparaged anyone, you're defending the paste of generated content into the thread as if it's a good thing because, I dunno, you found it amusing or it tickled your sneerbone. But pasting generated content into the thread is not a good thing by HN's standards of good things for the reasons I described and the reasons you described are fine reasons but they are bad HN reasons.
I don't care where the content came from. I judged it on its merit not on its origin. If it was a bad summary or more difficult to understand than the article then it wouldn't have worked quite as well. The additional context of it being generated by the [harmful technology in question] made it meta humorous, yes, not least because I found it to be a better representation of the argument that I think the author is attempting to communicate than the form in which it was originally presented. Ultimately, the author's piece sounds more like a stochastic parrot to me than the summary by ChatGPT.
Hence, I find satire an entirely appropriate tactic to spark curious discussion, in this case. It's able to communicate all that remarkably succinctly. And, though we have not discussed whether ChatGPT is harmful because OpenAI paid Kenyan workers $2/hr to "clean up" the model, I do believe we have discussed the nature of the argument in the first place, which is whether an LLM can produce anything of value. It seems preposterous to claim that ChatGPT is utterly useless and harmful when a harmless use for it is shoved in your face, no? And since I laughed, I guess I value the output, oddly. Which all quite contradicts the author's assertion that bad evil technology has no value and is destroying humanity.
While it may be frustrating to see comments from people who haven't read the article being discussed, it's important to recognize that everyone consumes and processes information in different ways. Some people may prefer to skim articles or jump straight to the comments section to get a sense of the wider conversation, rather than reading every detail of the article itself.
Some people may not have the time or the interest to read lengthy articles and may prefer to get the gist of the discussion through comments. It's important to keep an open mind and respect the ways in which others consume and engage with information.
At this point if I see any variation of "bullshit generator" used to downplay the significance of LLMs I will instantly discount the author as a person who has already lost the plot.
Just smile politely and leave them to their own devices as the world changes around them.
I don’t think so. I don’t agree with everything it says but I found it interesting enough. “Is generating a lot of nonsense and boilerplate a justifiable use of scarce resources?” is actually a rather profound question.
AI is still not AI, still cannot pass the 'critical thinking' task.
I can prove this without having ever interacted with ChatGPT.
The problem with "A.I." is that it continues to only be able to take the 'almost best' sum of ideology and not invent new ideas. AI cannot innovate. It can only imitate.
This is not even a correct summary of the content of the post. I'm genuinely entertained by the sheer amount of shamelessness demonstrated here. Seriously, though, technology at wrong hands endorses bad behaviors.
ChatGPT is not really meant for summarizing things, Kagi Summarizer for example:
"This article discusses the dangers of ChatGPT, a large language model, and how it is used to generate 'bullshit' and propagate existing power structures. It argues that the model is harmful and that its plausibility makes it even more dangerous. The article also highlights the exploitative labour practices that go into maintaining the model, as well as the underlying supremacist perspectives that are embedded in AI. The author suggests that instead of embracing AI, we should focus on centring activities of care and search for alternatives to algorithmic immiseration."
I went and wasted my time reading the post and it turns out the ChatGPT summary was accurate and this comment that it is not accurate was grossly wrong. Wish I hadn't wasted my time.
> This article discusses the dangers of ChatGPT, a large language model, and how it is used to generate 'bullshit' and propagate existing power structures. It argues that the model is harmful and that its plausibility makes it even more dangerous. The article also highlights the exploitative labour practices that go into maintaining the model, as well as the underlying supremacist perspectives that are embedded in AI. The author suggests that instead of embracing AI, we should focus on centring activities of care and search for alternatives to algorithmic immiseration.
Both miss that it’s an advertorial for the author’s book.
From the HN guidelines on comments: “Be kind. Don't be snarky. Converse curiously“
I downvoted you because your comment is both snarky and profoundly incurious.
Seriously, either take the time to engage with what's actually written in the post or don't bother posting lazy swipes. As has been noted by others, the ChatGPT summary isn't even really correct.
I’ll admit the piece is written in jargony social science language, but it makes real, interesting points about the social changes that are going to accompany introduction of LLM’s into society.
If authors want people to read their articles they should learn to write.
Let's put snarkiness to one side - the article was shit. You know it, I know it, and everyone else knows it. Poorly written, with the same dumb thoughts - and I use the term loosely - that I can get from any spoiled sophomore at any university in the western world.
The only reason it got any traction was a clickbait title about a hot topic. So I feel perfectly OK about shitting all over it. If you don't, flag me. It's the internet, it really doesn't matter.
The bias and motive are also obnoxious. The author is very clearly generating backlinks to other pseudo-academics in some postmodern anti-"AI" résistance. It's just a regurgitation of other people's original commentary.
It's hard because the author is just generating poorly disguised backlinks to other people's thoughts. Follow some of the links and see for yourself, e.g. "Stop feeding the hype and start resisting" (which I also read, and which lacks any explanation of exactly how one might resist if one were so inclined), which links to the same CogSci2022 talk the author does on resisting the dehumanization of technology. All three of which cite the resistance slogan "Stochastic Parrots" from a paper published in the proceedings of the Fairness, Accountability and Transparency conference. Sociologists don't like large language models because they have the potential to make humans do less menial labor [which isn't good for a capitalist society because it frees humans up to think]. I mean, I have no idea why these people feel so threatened by a large language model, but outrage porn gets clicks.
Here's what I took from it: LLM's encode and are very adept at generating hate and bullshit. To counter that, big wealthy companies pay foreign workers low wages to do the very unpleasant task of reading and tagging the worst outputs from their LLM. This is a bad state of affairs and should be changed.
Yes, they frame this in weird academic phrases. But it's really not hard to get the point; even a tiny modicum of intellectual curiosity would get you there.
I read way more into it. I actually think the author is attempting a postmodern deconstruction of the concept of AI in the first place. In typical internally inconsistent sloppy rhetoric, this anti-fascist (his words not mine) dismissal both argues that LLMs are not real AI because presumably they can't think and also that we should be afraid of them because as AI they encode modernity into their output and modernity is bad because it perpetuates inequality by its very nature. Since stochastic parrots threaten to accelerate labor and make humans more productive and make some labor redundant, it's also bad that it creates new jobs which pay humans a Kenyan median wage to sterilize it and make it acceptable for a postmodern anti-fascist utopia. I'm absolutely and horribly confused.
Interesting. I think a more coherent form of the argument is this: LLM's aren't capable of reasoning or self-updating. Capitalists are exploiting powerless labor to encode their preferences and goals into LLM's. Because the LLM can't really think and isn't human they can control it. So, we're going to be forced to deal with robots that perfectly embody the worst aspects of the modern economy and society.
the most irritating thing about this piece is the headline, a headline only a GPT could write.
Shakespeare's famous opening, "Friends, Romans, countrymen, lend me your ears; I come to bury Caesar, not to praise him" is the lead-in to a speech by Antony in which he very much praises Caesar and buries his enemies.
I don't get the point of this article. ChatGPT and its ilk are not going away. They are here, now, and our friends and families will start using them very soon. I have a contrarian view from this article: ChatGPT is actually good for the mainstream audience because it will teach them to think about how trustworthy information coming from a machine is. Most people will understand they have to take what is said by a machine with a grain of salt (or they will soon). Maybe they'll start applying some of those discernment skills to popular mainstream media. The cat's out of the bag. You can no longer bury it.
Does TikTok, Android/Google, or Instagram use make the same friends and family think about privacy and data? From a cursory look at... nearly everyone... I'd say the answer is a resounding no.
When Facebook came around we all knew it would be a disaster, and it was.
When TikTok/Instagram replaced it, we knew it would be worse, and it was.
Now we have LLMs that vanishingly few people in the greater public actually understand and we know they'll be used for every possible evil imaginable at scale. I'm not optimistic.
Actually, Instagram was pretty sane for a long time. Until recently it just showed me who I followed and nothing else.
Now they're trying to compete with TikTok and I get all these crappy suggested reels, and it's turning me off the platform. But for a long time it was pretty good, unlike Facebook, which ruined itself with its algorithmic feed.
> When Facebook came around we all knew it would be a disaster, and it was.
Not really. When Facebook started as a college-exclusive social site it seemed quite innocent. Later, when it created the news feed, opened up to everyone, etc., it became a cesspool. Once people started getting their news from it, that's when it became dangerous.
Privacy concerns and ChatGPT have different failure modes.
If ChatGPT fabricates a link or makes something up, there's a potential feedback mechanism. E.g. I ask ChatGPT to explain a science term, and then I get told that my understanding is incorrect in class when I use the ChatGPT definition.
It doesn't exist for every use case, but I'm hopeful everyone will be "bitten" once by ChatGPT etc., and then folks will verify its output appropriately.
This doesn't seem much different to me than how people should use Wikipedia. It looks like it will be a useful tool; you just can't be careless with it.
Wikipedia has moderation and standards for sourcing, etc., that have made it generally very reliable. ChatGPT is a fancy parrot that sounds reliable. Actually verifying that its output is correct requires more work: cross-referencing sources and so on.
That's a good point. For more complex topics, the act of verifying correctness is also more complex. I think it's a fair tradeoff though. A more powerful tool also requires more work to use CORRECTLY. Now, whether people will actually put in due diligence is another matter.
There's no cat and no bag - just statistical sleight of hand. The people playing the trick know this, and are just collecting investor dollars while they can (and astroturfing every HN article that explains what it is). Relying on the output of chatgpt is no different than doing a rain dance and expecting it to rain.
I'd say it's more like relying on a minimum wage employee. It gets what I'm asking it to do right most of the time, but not always, and you gotta check up on it.
I don't understand why people act like it has to be 100% reliable to be useful.
Because if you need to check up on it, you can just do whatever you'd do to check up on it in the first place. Coding is a very niche area where you can often tell whether it's giving good answers; I think that's why HN people think it's so dope. In almost any other domain, finding information from reliable sources is key, and ChatGPT can't help with that at all.
You might be more confident than me in people's ability to discern BS. I think I am reasonably intelligent, but I get fooled all the time. Compounding the problem is that it is easier to fool someone than to tell him he has been fooled.
I can't even teach my baby-boomer-aged in-laws to understand that obvious bullshit social media chain posts (along the lines of "copy this into your timeline to stop Facebook from selling your data") are fake. There's no way they're going to see past ML-generated content, especially content where feedback from success/fail ratios will cause it to 'improve' over time.
We might not be able to bury the cat, but that's hardly a good thing. Shit's going to get really awkward for the next few decades.
Clickbait article. Same was said about the Internet back then!
It also reminds me of the people doing symbolic AI (decision trees and the like) who kept criticizing NNs, saying they don't work, that you'll never be able to tell why they produce a given output (even when it's wrong), etc.
They had some good points, though. The article says that this technology centralizes power in the hands of rich tech companies, which exploit low-paid workers and the unpaid creators who do the value-creating work. The tech companies organize the data, then endlessly skim off the top. At the same time, they maintain undemocratic control over who can see what information, guided by profit and sometimes by authoritarian governments.
These are all the same criticisms made of the walled-garden internet, and they're just as relevant for LLMs.
Bringing up that the AI model is racist or some other kind of -ist to bolster your argument is a bullshit tactic. People are -ists, too. I agree with the rest of their points, though.
Try https://text-generator.io. The BS generated is about the same, tbh, but it's much cheaper, and it combines a web crawler so it can speak about links and images, which makes it a lot better at lots of things, like making a believable conversation about designs/invoices/receipts/emojis.
It is often useful to try to understand what game is really being played. This is very obvious to me: many people have a fear of AI (justified and unjustified) while knowing little about it. Many people want to hear how horrific AI is going to be and why we have to fight against it.
Ergo..>> my book
This seems to be squarely targeted at the ragey punkrock bangarang subset of this group.
> we are all 'stochastic parrots' like large language models, statistical generators of learned patterns that express nothing deeper, is a form of nihilism

The author takes "nihilism" here to mean: "life is meaningless. (period)".
When in fact, it really means: “life is meaningless… (you fill the dots for yourself)”.
Humans created the concepts of law and order, and the rules of society. Everything is man-made.
More on the topic: how can we dismiss the theory that ChatGPT is intelligent when we barely understand what constitutes our own intelligence at the biological level? It's a compelling hypothesis. A neural model is the closest thing we've got to anything that resembles our biological model.
If indeed 1 neuron = 1 parameter, ChatGPT (175 billion parameters) could be a comparable intellectual model to a human being (86 billion neurons).
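Just as a back-of-envelope check of that premise (treating "1 neuron = 1 parameter" as an assumption for the sake of argument, not an established equivalence):

    # Illustrative only; takes the "1 neuron = 1 parameter" premise at face value.
    gpt3_parameters = 175e9   # GPT-3's reported parameter count
    human_neurons = 86e9      # commonly cited estimate for the human brain
    print(gpt3_parameters / human_neurons)  # ~2.0, i.e. roughly two parameters per neuron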
Lastly, I think some politicians are doing more damage to our civilization by dividing people. History tells us the damage and trauma from this can carry on for many, many generations to come. Maybe, just maybe, ChatGPT could bring some sense into people to move past hatred and accept each other's differences.
we're not stochastic parrots, we're reinforcement learning systems. we're motivated by pain and pleasure. we create plans to avoid pain and achieve pleasure. we guide our own learning.
that's the reason AlphaZero can become superhuman at Go, while ChatGPT can't even play. it makes me wonder if OpenAI has abandoned RL because it's too hard, and they're trying to move the AGI goalposts to these giant unsupervised models.
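For anyone who wants that distinction made concrete, here is a minimal sketch of reward-driven learning: a tiny tabular Q-learning loop on a made-up corridor problem. It only illustrates the "learn from pain and pleasure" idea; it is not how AlphaZero or ChatGPT are actually built, and the environment, constants, and names are invented for the example.

    import random

    # Toy corridor: states 0..5, a -1 "pain" at state 0 and a +1 "pleasure" at state 5.
    N_STATES = 6
    ACTIONS = [-1, +1]            # step left or right
    ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    def step(state, action):
        """Move one cell and return (next_state, reward, done)."""
        nxt = max(0, min(N_STATES - 1, state + action))
        if nxt == N_STATES - 1:
            return nxt, 1.0, True   # pleasure
        if nxt == 0:
            return nxt, -1.0, True  # pain
        return nxt, 0.0, False

    for episode in range(500):
        state, done = N_STATES // 2, False
        while not done:
            # epsilon-greedy: mostly exploit what has been learned, sometimes explore
            if random.random() < EPSILON:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            nxt, reward, done = step(state, action)
            best_next = max(Q[(nxt, a)] for a in ACTIONS)
            # nudge the value estimate toward reward + discounted future value
            Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
            state = nxt

    # After training, the learned policy should step right (toward +1) from the middle states.
    print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(1, N_STATES - 1)})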