The problem is, human intelligence is likely also based on a similar advanced chat bot setup.

While GPT-4 only performs about as well as the top 10th percentile of human students taking an exam (a professional in the field can do much more than that), it is notable that as a generalist GPT-4 would outperform such professionals. And GPT-4 is much faster than a human. And we have not yet evaluated GPT-4 working in its optimal setting (with access to optimal external tools). And we have not yet seen GPT-5 or 6 or 8.

So, get ready for an interesting ride.



Alas, if it could only remember and precisely relate more than 4k or 8k or 32k or 64k words...

And if only scaling that context length weren't quadratic...

Indeed, we would really expect an AI to be able to achieve AGI. And it might decide to do all kinds of alien things. The sky would not be the limit!

We have more than 100 trillion synapses in our brains. That's not our "parameter" count. It's the size of the thing that's getting squared at every "step". LLMs are amazing, but the next valley of disillusionment is going to begin when that quadratic scaling cost begins to rear its head and we are left in breathless anticipation of something better.
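
For concreteness, here is a tiny NumPy sketch of why vanilla self-attention is quadratic in the context length L (the sizes are made up, and real models use many heads and learned weights rather than random matrices):

    import numpy as np

    L, d = 2048, 64                        # assumed context length and head dimension
    Q = np.random.randn(L, d)              # stand-ins for learned query/key/value projections
    K = np.random.randn(L, d)
    V = np.random.randn(L, d)

    scores = Q @ K.T / np.sqrt(d)          # shape (L, L): one score per (query, key) pair
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    out = weights @ V                      # shape (L, d)

    print(scores.shape)                    # (2048, 2048): doubling L quadruples this matrix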

I am not as worried, I guess, as your average AI ethicist. I can hope for the best (I welcome the singularity as much as the next nerd), but quadratic isn't going to get easier without some very new kinds of computers. For those to scale to AGI on this planet, it's questionable whether they'll have the same architecture we're working with now. Otherwise, I'd expect a being whose brain is a rock with lightning in it to have taken over the world long, long ago. Earth has plenty of both for something smart and energy-efficient to have evolved in all these billions of years. But it didn't, and maybe that's a lesson.

That all said, these LLMs are really amazing at language. Just don't ask them to link a narrative arc to some subtle detail that appeared twice in the last three hundred pages of text. For a human it ain't a problem. But these systems need to grow a ton of new helper functionality and subsystems to hope to achieve that kind of performance. And I'll venture that kind of thing is a lower bound on the abilities of any being who would be able to savage the world with its intellect. It will have to be able to link up so, so many disparate threads to do it. It boggles our minds, which are only squaring a measly 100T dimension every tick. Ahem.


You can only hold around 7 to 10 numbers well in your working memory. Let me give you a few: 6398 5385 3854 8577

You have 1 second, close your eyes and add them together. Write down the result.

I’m pretty sure that GPT-4 at its 4k setting would outperform you.

[The point being, we have not seen what even GPT-4 can do in its optimal environment. Humans use paper, computers, Google, etc. to organize their thoughts and work efficiently. They don’t just sit in empty space, load everything into working memory, and magically produce the results. So imagine that GPT-4 had a similar level of tooling and sophistication around it, like there is around humans. Considering that, it is difficult to extrapolate what even GPT-4 can do in its optimal environment.]
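
As a toy illustration of the kind of external tool meant here (the function is made up for illustration, not any real plugin API), delegating the arithmetic is trivial once a tool is available:

    def calculator(expression: str) -> int:
        # a real tool would parse arbitrary expressions safely; this toy only handles '+'
        return sum(int(term) for term in expression.split("+"))

    print(calculator("6398 + 5385 + 3854 + 8577"))  # 24214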


Indeed, and maybe less than 7...

I'll point out that ChatGPT needs to be paying attention to the numbers to remember them in the way I'm talking about. You will need to fine-tune it or something to get it to remember them blind. I suppose that's not what you're talking about?

There is a strong chance that I'll remember where to find these numbers in a decade, after seeing and hearing untold trillions of "tokens" of input. The topic (Auto-GPT, which is revolutionary), my arguments about biological complexity (I'll continue to refine them, but the rendition here was particularly fun to write), or any of these things will key me back to look up the precise details (here: these high-entropy numbers). Attention is perhaps all you need... But in the world it's not quite arranged the same way as in machines. They're going to need some serious augmentation and extension to have these capabilities at the scales that we find trivial.

edit: you expanded your comment. Yes. We are augmented. Just dealing with all those augmented features requires precisely the long-range correlation tracking I'm talking about. I don't doubt these systems will become ever more powerful, and will be adapted into a wider environment until their capabilities become truly human-like. I am suggesting that the long-range correlation issue is key. It's precisely what sets humans apart from other beings on this planet. We have crazy endurance, and our brains both cause and support that capability. All those connections are what let us chase down large game, farm a piece of land for decades, write encyclopedias, and build complex cultures and relationships with hundreds and thousands of others. I'll be happy to be wrong, but it looks hard-as-in-quadratic to get this kind of general intelligence out of machines. Which scales badly.


When doing the processing, GPT keeps these in its “working memory” (very similar to your working memory, which is just an activation of neurons, not an adjustment of the strengths of synaptic connections).

And then, there is a chance that the inputs and outputs of GPT will be saved and then used for fine-tuning, in a way that is similar to long-term memory consolidation in humans.
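
One assumed way such consolidation could be wired up (file name and field names are purely illustrative): log each exchange as a JSONL record that a later fine-tuning run can consume.

    import json

    def log_exchange(path: str, prompt: str, completion: str) -> None:
        # append one training example per line, ready for a later fine-tuning pass
        with open(path, "a") as f:
            f.write(json.dumps({"prompt": prompt, "completion": completion}) + "\n")

    log_exchange("transcripts.jsonl", "Add 6398 and 5385.", "11783")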

But overall, yes, I agree, GPT-4 in an empty space, without fine-tuning is very limited.


It doesn’t remember anything, unless you mean that the intermediate values in the calculation of a forward pass are “remembering”. The prompt continuation feature is just a trick where they refeed the previous questions/replies back to it with the new question at the end.
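
A rough sketch of that trick, with a hypothetical generate() standing in for whatever stateless model call sits underneath:

    def generate(prompt: str) -> str:
        # hypothetical stand-in for the real model call; it has no memory between calls
        return "(model reply)"

    history: list[str] = []

    def chat(user_message: str) -> str:
        # "memory" is faked by resending the whole transcript every turn
        history.append(f"User: {user_message}")
        prompt = "\n".join(history) + "\nAssistant:"
        reply = generate(prompt)
        history.append(f"Assistant: {reply}")
        return reply

    chat("What is 2 + 2?")
    chat("And doubled?")   # only "works" because the first exchange rides along in the prompt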


>And if only scaling that context length weren't quadratic...

There are transformer approximations that are not quadratic (available out of the box for more than a year now):

Two schools of thought here:

- People who approximate the neighbor search with something like "Reformer", with O(L log(L)) time and memory complexity.

- People who use a low-rank approximation of the attention product with something like "Linformer", with O(L) complexity but more sensitivity to transformer rank collapse (see the sketch below).
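
A minimal NumPy sketch of the Linformer-style idea: project the keys and values along the sequence dimension down to a fixed rank k, so the score matrix is (L, k) instead of (L, L). Random matrices stand in for learned projections here, and the sizes are assumptions.

    import numpy as np

    L, d, k = 2048, 64, 256                  # assumed sequence length, head dim, low rank
    Q = np.random.randn(L, d)
    K = np.random.randn(L, d)
    V = np.random.randn(L, d)
    E = np.random.randn(k, L) / np.sqrt(L)   # stand-in for a learned (k, L) projection

    K_proj = E @ K                           # (k, d)
    V_proj = E @ V                           # (k, d)

    scores = Q @ K_proj.T / np.sqrt(d)       # (L, k): linear in L for a fixed k
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    out = weights @ V_proj                   # (L, d)

    print(scores.shape)                      # (2048, 256)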


So how many of those 100 trillion synapses are actually in the part of the brain that does the thinking? Because the brain has different regions (subsystems) responsible for different things.


> But these systems need to grow a ton of new helper functionality and subsystems to hope to achieve that kind of performance. And I'll venture that kind of thing is a lower bound on the abilities of any being who would be able to savage the world with its intellect. It will have to be able to link up so, so many disparate threads to do it. It boggles our minds, which are only squaring a measly 100T dimension every tick.

Agreed: LLMs are just one of many necessary modules. But amazing nonetheless. The quadratic scaling problem needs an attentional-conceptual extractor layer with working memory. Hofstadter points out that this needs to be structured as a recursive “strange loop” (p. 709 of GEB). Thalamo-cortico-thalamic circuitry is a strange loop, and attentional self-control may happen by phase- or time-shifting the activity of different circuits to achieve flexible “binding” for attention and compute.

I’m actually optimistic that this is not a heavy computational lift but a clever deep extension of recursive self-modulating algorithms across modules. The recursion is key. And the embodiment is also probably crucial to bootstrap self-consciousness. Watching infants bootstrap is an inspiration.


But where is the imminent danger? It is still limited in many ways. For example, it can be turned off or unplugged.

Is it because CAPTCHAs won’t work anymore? That sounds like a problem for sites like Twitter that have bot problems.

Is it because it may replace people’s jobs? That comes with every technological step forward, and there’s always alarmist Luddism to accompany it.

Is it because bad people will use it to do bad things? Again, that comes with every new technology and that’s a law enforcement problem.

I don’t really see what the imminent danger is, just sounds like the first few big players trying to create a regulatory moat and lock out potential new upstarts. Or they’re just distracting regulators from something else, like maybe antitrust enforcement.


There are two big concerns:

1. GPT-8 or something is able to do 70% of people’s jobs. It can write software, drive cars, design industrial processes, build robots, and manufacture anything we can imagine. This is a great thing in the long term, but in the short term society is set up so that you need to work in order to have food to eat. I expect a period of rioting, poverty, and general instability.

All we need for this to be the case is a human level AI.

2. But we won’t stop improving AIs when they operate at human level. An ASI (artificial superintelligence) would be deeply unpredictable to us. Trying to figure out what an ASI will do is like a dog trying to understand a human. If we make an ASI that’s not properly aligned with human interests, there’s a good chance it will kill everyone. And unfortunately, we might only get one chance to properly align it before it escapes the lab and starts modifying its own code.

Smart people disagree on how likely these scenarios are. I think (1) is likely within my lifetime. And I think it’s very unlikely we stop improving AIs when they’re at human levels of intelligence. (GPT-4 already exceeds human minds in the breadth of its long-term memory and its speed.)

That’s why people are worried, and making nuclear weapon analogies in this thread.


I actually don’t think that ASI, if/when created by humans, will be very dangerous for humans. Humanity so far is stuck in an unfashionable location, on a tiny planet, on the outskirts of a median-sized galaxy. There is very little reason for an ASI, if created, to go after using up the atoms of the tiny planet (or the tiny star) on which it originated. I’d fully expect it to go the Carl Sagan route and try to preserve that bright blue dot, rather than try to build a galactic superhighway through the place.

It’s the intermediate steps that I’m more worried about. Like Ilya or Sam making a few mistakes, because of lack of sleep or some silly peer pressure.


You might consider it unlikely, but would you bet the future of our species on that?

A couple reasons why it might kill all of us before leaving the planet:

- The AI might be worried that, if it leaves us alone, we'll build another ASI which competes with it for galactic resources.

- If the ASI doesn't regard us at all, why not use all the atoms on Earth / in the sun before venturing forth?

In your comment you're ascribing a specific desire to the ASI: you claim it would try to "preserve that bright blue dot". That's what a human would do, but why would we assume an arbitrary AI would have that goal? That seems naive to me. And especially naive given that the fate of our species and our planet depends on being right about that.


Can you show me your home planet?

Oh, sorry, no. There was an accident and it got destroyed.

What accident? Oh, I see.


I understand (1) but how does (2) happen? If you don’t trust it can’t you just have a kill switch that needs to be updated daily otherwise the thing turns off? How would software be able to alter the hardware it runs on such that it can guarantee itself an endless supply of power?
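
A toy sketch of that dead-man's-switch idea (the file path and the 24-hour window are made up for illustration): the process refuses to keep running unless a human renews an approval file every day.

    import os, sys, time

    APPROVAL_FILE = "/var/run/agi_approval"   # assumed path a human touches daily
    MAX_AGE_SECONDS = 24 * 60 * 60

    def check_kill_switch() -> None:
        # shut down unless the approval file has been renewed within the last day
        try:
            last_renewed = os.path.getmtime(APPROVAL_FILE)
        except OSError:
            sys.exit("approval file missing: shutting down")
        if time.time() - last_renewed > MAX_AGE_SECONDS:
            sys.exit("approval expired: shutting down")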


The general worries are:

- An ASI could easily be smart enough to lie to us about its capabilities. It could pretend to be less smart than it is, and hope that people hook it up to the internet or give it direct access to run commands on our computers. (As people are already doing with ChatGPT). We currently have no idea how ChatGPT thinks. It might be 10x smarter than it lets on. We have no way of knowing.

- Modern computers (software and firmware) are almost certainly utterly riddled with security vulnerabilities we don't know about. An ASI might be able to read / extract the firmware and find plenty of vulnerabilities to exploit. Some vulnerabilities allow remote code execution. If a superintelligent AI has the ability to program and access to the internet, it might be able to infect lots of computers and get them to run parts of its mind. If this happened, how would we know? How would we stop it? It could cause all sorts of mayhem and, worse, quietly suppress any attempts people make to understand what's going on or put an end to it. ("Hm, our analytics engine says that article about technology malfunctioning got lots of views, but they all came from dishwashers and things. Huh - I refreshed and the anomaly has gone away. Nothing to see here, I guess!")

It might be prudent not to give a potential AGI access to the internet, or the ability to run code at all outside a (preferably airgapped) sandbox. OpenAI doesn't think we need to be that careful with GPT4.


You didn't understand who the actual Luddites were, but don't worry, I have a feeling we'll get our chance.


> The problem is, human intelligence is likely also based on a similar advanced chat bot setup.

This is so wildly wrong and yet confidently said in every techbro post about LLMs. I beg of you to talk to an expert.


Like what expert? And who are you exactly to state that this is wrong, that boldly? Are you an expert? How many neuroscience and psychology papers have you read? Do you have any children? Have you trained any LLMs? Have you worked with reinforcement learning? Or how many computer science papers have you read during the last two decades?


Let's assume for a moment that I haven't read any papers in those fields, that I don't have any children, that I haven't trained any LLMs or worked in "reinforcement learning", or even read any computer science papers in the last 20 years (the answer to 90% of that is yes): I don't have to be an expert in physics to know that pastors can't levitate, regardless of what they claim.

You're mad that I'm calling you out, I get it, but you gotta understand that after the 200th time of seeing this unfounded sentiment bandied about, I'm not fazed.


>Let's assume for a moment that I haven't read any papers in those fields, that I don't have any children, that I haven't trained any LLMs or worked in "reinforcement learning", or even read any computer science papers in the last 20 years (the answer to 90% of that is yes): I don't have to be an expert in physics to know that pastors can't levitate, regardless of what they claim.

....what? You're saying to assume you know nothing about a field but to assume your claim is correct? You said to "talk to an expert" - who should I talk to? What should I read about here? If there's something I'm missing I want to correct it, I simply can't say "well this random guy commented and said I'm wrong, better change my understanding of this topic."

> you gotta understand that after the 200th time of seeing this unfounded sentiment bandied about, I'm not fazed.

All you've said is "I've disagreed with everyone on this topic, and I have no information to offer other than that it's 'common sense'." That does nothing to either improve our understanding or further the conversation; it's literally just saying "you're wrong and I'm right" with no elaboration.


This statement is a theory, and it is not a widely accepted or a proven one. Yet I do see it in my lab research notebooks on generative AI (dating to ~2017). I think it is a good theory. I haven’t seen anything that contradicts it badly so far…

If you haven’t done the above, I’d suggest doing it. It’s fun and gives a good perspective :)


The fundamental truth is we really have no idea how the human brain works.



