Governance of Superintelligence (openai.com)
93 points by davidbarker on May 22, 2023 | 172 comments



Nothing makes me think of Altman as a grifter more than his trying to spook uneducated lawmakers with sci-fi notions like "superintelligence" for which there are no plausible mechanisms or natural analogues, and for which the solution is to lobby the government to build a moat around his business and limit his competitors. We do not even have a consensus around a working definition of "intelligence", let alone any evidence that it is a linear or unbounded phenomenon, and even if it were, there is no evidence ChatGPT is a route to even human-level intelligence. The sum total of research into this "field" is a series of long chains of philosophical leaps that rapidly escape any connection to reality, which is no basis for a wide-ranging government intervention.


Thanks for putting into words what I couldn't articulate well, but what has been bothering me for a while. Very concise and to the point, saved your comment.

It is just tiring seeing all those fearmongering articles in the news media about omnipotent god-like AGI, based on nothing but self-aggrandizing statements from "experts" like Sam Altman (people with large followings but no expertise in the topic they talk about so authoritatively). It got to the point where seeing those articles makes me question things over and over in my head, only to come to the same conclusion (as nothing material has changed to lead me to a different conclusion this time compared to previous times).

Almost feels like I am being gaslit. And it is so strong that I cannot even blame people outside of tech for falling for all of this (let alone people of the typical US senator's age and tech literacy). Which is exactly the part that makes me most uneasy: people like Sam Altman who understand this game and have no issue capitalizing on it with this dishonest rhetoric, to the major detriment of everyone else.

/rantover


>and even if it were, there is no evidence ChatGPT is a route to even human-level intelligence.

People who say this nonsense need to start properly defining human-level intelligence, because GPT-4 performs at at least average human level on nearly anything you throw at it, often well above.

Give criteria that 4 fails that a significant chunk of the human population doesn't also fail and we can talk.

Else this is just another instance of people struggling to see what's right in front of them.

Just blows my mind the lengths some will go to ignore what is already easily verifiable right now. "I'll know agi when i see it", my ass.


> People who say this nonsense need to start properly defining human-level intelligence, because GPT-4 performs at at least average human level on nearly anything you throw at it, often well above.

"Average human level" is pretty boring though. Computers have been doing arithmetic at well above "average human level" since they were first invented. The premise of AGI isn't that it can do something better than people, it's that it can do everything at least as well. Which is clearly still not the case.


>"Average human level" is pretty boring though.

Lol ok. Still human level. And GPT-4 is way above average in most tasks.

>Computers have been doing arithmetic at well above "average human level" since they were first invented.

Cool. That's what the general in agi is about. GPT-4 is very general.

>The premise of AGI isn't that it can do something better than people, it's that it can do everything at least as well.

As well as what kind of people? Experts? That was not the premise of AGI when the term was coined, or for a long time afterwards. The goalposts have shifted (as they often do in this field) so that that's what the term seems to mean now, but AGI meant artificial and generally intelligent, a bar which has already been passed.

There's no difference between your definition of AGI, which is supposed to surpass experts in every field, and superintelligence.


> Lol ok. Still human level. And GPT-4 is way above average in most tasks.

It has access to a lot of information that most humans don't have memorized. It's a better search engine than most humans. And it can format that information into natural language.

But can it drive a car? If given an incentive to not confabulate and the knowledge that its statements are being verified, can it achieve that as consistently as the median human?

If you start by giving it a simple instruction with stark consequences for not following it, can it continue to register the importance of that instruction even after you give it a lot more text to read?

> As well as what kind of people? Experts?

Experts are just ordinary people with specific information. You're giving the specific information to the AI, aren't you? It's in the training data.

> There's no difference between your definition of AGI, which is supposed to surpass experts in every field, and superintelligence.

That's because there is no difference between them. Super intelligence is achievable just by making general intelligence faster. If you have AGI and can make it go faster by throwing more compute hardware at it then you have super intelligence.


>It has access to a lot of information that most humans don't have memorized.

It's not just about knowledge.

Lots of papers showing strong reasoning across various reasoning types. Couple papers demonstrating the development of world models too.

>It's a better search engine than most humans. And it can format that information into natural language.

Not how this works. They aren't search engines, and their parity with people isn't limited to knowledge tasks alone.

>But can it drive a car? If given an incentive to not confabulate and the knowledge that its statements are being verified, can it achieve that as consistently as the median human?

Can a blind man drive a car ? a man with no hands ?

>If you start by giving it a simple instruction with stark consequences for not following it, can it continue to register the importance of that instruction even after you give it a lot more text to read?

Lol yes

>Experts are just ordinary people with specific information. You're giving the specific information to the AI, aren't you? It's in the training data.

No. Experts are people with above average aptitude for any given domain. It's not just about knowledge. many people try and fail to become experts in any given domain.

>That's because there is no difference between them. Super intelligence is achievable just by making general intelligence faster.

That's not how intelligence works. Dumb thinking sped up is just more dumb thinking but faster.


> Lots of papers showing strong reasoning across various reasoning types. Couple papers demonstrating the development of world models too

Actual reasoning, or reconstruction of existing texts containing similar reasoning?

> Not how this works. They aren't search engines, and their parity with people isn't limited to knowledge tasks alone.

It kind of is how this works, and most of the source of its ability to beat average humans at things is on knowledge tasks.

> Can a blind man drive a car ? a man with no hands ?

Lack of access to cameras or vehicle controls isn't why it can't drive a car.

> Lol yes

The existence of numerous ChatGPT jailbreaks is evidence to the contrary.

> No. Experts are people with above average aptitude for any given domain. It's not just about knowledge. many people try and fail to become experts in any given domain.

Many people are of below average intelligence, or give up when something is hard but not impossible.

> That's not how intelligence works. Dumb thinking sped up is just more dumb thinking but faster.

If you have one machine that will make one attempt to solve a problem a day and succeeds 90% of the time and another that will make a billion attempts to solve a problem a second and succeeds 10% of the time, which one has solved more problems by the end of the week?

Average thinking sped up is above average.
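
For concreteness, a back-of-the-envelope sketch of that comparison (assuming independent attempts, problems of equal difficulty, and a week of wall-clock time; the rates are the hypothetical ones above):

    # Expected problems solved in a week by the two hypothetical machines.
    # Assumes attempts are independent and problems are of equal difficulty.
    seconds_per_week = 7 * 24 * 60 * 60

    slow = 7 * 0.90                          # 1 attempt/day, 90% success
    fast = 1e9 * seconds_per_week * 0.10     # 1e9 attempts/sec, 10% success

    print(f"slow machine: ~{slow:.1f} solved per week")
    print(f"fast machine: ~{fast:.2e} solved per week")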


>Actual reasoning, or reconstruction of existing texts containing similar reasoning?

The papers were linked in another comment. 3 of them don't even have anything to do with testing on an existing dataset, so yeah, actual.

for the world model papers

https://arxiv.org/abs/2210.13382

https://arxiv.org/abs/2305.11169

>Lack of access to cameras or vehicle controls isn't why it can't drive a car.

It would be best to wait till what you say can be evaluated. That is your hunch, not fact.

>The existence of numerous ChatGPT jailbreaks is evidence to the contrary.

No it's not. People fall for social engineering and do what you ask. If you think people can't be easily derailed, boy do I have a bridge for you.

>Many people are of below average intelligence, or give up when something is hard but not impossible.

Ok. Doesn't help your point. And many above-average people don't reach expert level either. If you want to rationalize all that as "gave up when it wasn't impossible", go ahead lol, but reality paints a very different picture.

>If you have one machine that will make one attempt to solve a problem a day and succeeds 90% of the time and another that will make a billion attempts to solve a problem a second and succeeds 10% of the time, which one has solved more problems by the end of the week?

"Problems" aren't made equal. Practically speaking, it's very unlikely the billion per second thinker is solving any of the caliber of problems the one attempt per day is solving. Solving more "problems" does not make you a super intelligence.


> The papers were linked in another comment.

For anyone following along, they are in my sibling comment. Linked papers here[0]. The exact same conversation is happening there, but sourced.

> 3 of them don't even have anything to do with testing on an existing dataset

Specifically, I address this claim and give strong evidence for why you should doubt it, especially with this specific wording. The short version is that when you scrape the entire internet for your training data, you have a lot of overlap and you can't confidently call these evaluations "zero-shot." All experiments performed in the linked works use datasets that are not significantly different from data found in the training set. For those that are "hand written," see my complaints (linked) about HumanEval.

[0] https://news.ycombinator.com/item?id=36037440


> It would be best to wait till what you say can be evaluated. That is your hunch, not fact.

LLMs aren't even the right kind of thing to drive a car. We have AIs that attempt to drive cars and have access to cameras and vehicle controls and they still crash into stationary objects.

> No it's not. People fall for social engineering and do what you ask. If you think people can't be easily derailed, boy do I have a bridge for you.

Social engineering works because most human interactions aren't malicious and the default expectation is that any given one won't be.

That's a different thing than if you explicitly point out that this text in particular is confirmed malicious and you must not heed it, and then it immediately proceeds to do it anyway.

And yes, you can always find that one guy, but that's this:

> Many people are of below average intelligence

It has to beat the median because if you go much below it, there are people with brain damage. Scoring equal to someone impaired or disinclined to make a minimal effort isn't a passing grade.

> "Problems" aren't made equal. Practically speaking, it's very unlikely the billion per second thinker is solving any of the caliber of problems the one attempt per day is solving.

The speed is unrelated to the difficulty. You get from one a day to a billion a second by running it on a thousand supercomputers instead of a single dated laptop.

So the percentages are for problems of equal difficulty.

This is infinite monkeys on infinite typewriters. Except that we don't actually have infinite monkeys or infinite typewriters, so an AI which is sufficiently terrible can't be made great by any feasible amount of compute resources. Whereas one which is kind of mediocre and fails 90% of the time, or even 99.9% of the time, can be made up for in practice with brute force.

But there are still problems that ChatGPT can't even solve 0.1% of the time.


> The premise of AGI isn't that it can do something better than people, it's that it can do everything at least as well. Which is clearly still not the case.

I imagine an important concern is the learning & improvement velocity. Humans get old, tired, etc. GPUs do not. It isn't the case now, but it is fuzzy how fast we could collectively get there. Break out problem domains into modules, send them off to the silicon dojos until your models exceed human capabilities, and then roll them up. You can pick from OpenGPT plugins; why wouldn't an LLM hypervisor/orchestrator do the same?

https://waitbutwhy.com/2015/01/artificial-intelligence-revol...

https://waitbutwhy.com/2015/01/artificial-intelligence-revol...


> The concern is the learning & improvement velocity. Humans get old, tired, etc. GPUs do not.

They do, though.

Of course, replacing the worn out hardware while keeping the software is easier with GPUs.


> "Average human level" is pretty boring though.

It seems you have the wrong idea of what is being conveyed, or of what average human intelligence is. It isn't about being able to do math. It is being able to invent, mimic quickly, abstract, memorize, specialize, and generalize. There's a reason humans have occupied every continent of the earth and even areas beyond it. It's far more than being able to do arithmetic or play chess. This all just seems unimpressive to us because it is normal, to us. But it certainly isn't normal if we look outside ourselves. Yes, there's intelligence in many lifeforms, even ants, but there is some ineffable or difficult-to-express uniqueness to human intelligence (specifically in its generality) that is being referenced here.

To put it one way, a group of machines that could think at the level of an average teenager (or even lower) but able to do so 100x faster would probably outmatch a group of human scientists in being able to solve complex and novel math problems. This isn't "average human level" but below. "Average human level" is just a shortcut term for this ineffable description of the _capacity_ to generalize and adapt so well. Because we don't even have a fucking definition of intelligence.


> It isn't about being able to do math. It is being able to invent, mimic quickly, abstract, memorize, specialize, and generalize.

But this is exactly why average is boring.

If you ask ChatGPT what it's like to be in the US Navy, it will have texts written by Navy sailors in its training data and produce something based on those texts in response to related questions.

If you ask the average person what it's like to be in the US Navy, they haven't been in the Navy, may not know anyone who is, haven't taken any time to research it, so their answers will be poor. ChatGPT could plausibly give a better response.

But if you ask the questions of someone who has, they'll answer related questions better than ChatGPT. Even if the average person who has been in the Navy has no greater intelligence than the average person who hasn't.

It's not better at reasoning. It's barely even capable of it, but has access to training data that the average person lacks.


> But this is exactly why average is boring.

Honestly, I think it is the lens. Personally I find it absolutely amazing. It's this incredibly complex thing that we've been trying to describe for thousands of years but have completely failed to (we've gotten better, of course). It's this thing that is right in front of us and looks simple, but only because not many try to peek behind the curtain. Looking behind there is like trying to describe a Lovecraftian monster. But this is all in plain sight. That's pretty crazy imo. But hey, dig down the rabbit hole of any subject and you'll find this complex world. Most things are like a collage. From far away their shape looks clear and precise, but on close inspection you find that each tile itself is another beautiful piece. This is true even for seemingly simple things, and honestly I think that's even more beautiful. This complex and chaotic world is all around us but we take it for granted. Being bored comes down to a choice.

> If you ask the average person what it's like to be in the US Navy, ChatGPT could plausibly give a better response.

There's also a bias. Does a human know the instructions are a creative exercise? It is hard to measure, because what you'd need to prompt a human with is "Supposing you were a conman trying to convince me you were in the Navy, how would you describe what it was like?" The average human response is going to default to not lying and fabricating things. You also need to remember that your interpretation (assuming you aren't/weren't in the Navy) is as someone hearing a story rather than aligning that story to lived experiences. You'd need to compare the average human making up a story to GPT, not an average human's response.

> It's barely even capable of it, but has access to training data that the average person lacks.

I do agree that GPT is great as a pseudo and noisy library. I find it a wonderful and very useful tool. I often forget specific words used to describe certain concepts. This is hard to google. GPT finds them pretty well or returns something close enough that I can do a quick iterative prompt and find the desired term. Much faster than when I used to do this by googling. But yeah, I think we both agree that GPT is by no means sentient and likely not intelligent (ill-defined, and defined differently by different people). But we can find many different things interesting. My main point is I wanted to explain why I find intelligence so fascinating. Hell, it is a major part of why I got into ML research in the first place (Asimov probably helped a lot too).


>It's not better at reasoning. It's barely even capable of it

You are wrong, and there are many papers to show otherwise.

Algorithmic, Causal, Inference, Analogical

LLMs reason just fine

https://arxiv.org/abs/2212.09196

https://arxiv.org/abs/2305.00050

https://arxiv.org/abs/2204.02329

https://arxiv.org/abs/2211.09066


I definitely don't buy these papers at face value. I say this as an ML researcher btw.

You'll often see these works discussing zero-shot performance. But many of these tasks are either not zero-shot or not even a known n-shot. Let's take a good example: Imagen[0] claims zero-shot MS-COCO performance but trains on LAION. COCO classes exist in LAION and there are similar texts. Explore COCO[1] and explore clip retrieval[2] for LAION. The example given is the first sample from COCO aircraft, and you'll find almost identical images and captions with many of the same keywords. This isn't zero-shot.

Why does this matter? Because contaminated data[3] ends up being used in the evaluation process. You can't conclude that a model has learned something if it has access to the evaluation data. Test sets have always been a proxy for generalization and MUST be recognized as proxies.

This gets really difficult with LLMs, where all we know is that they've scraped a large swath of the internet, and that includes GitHub and Reddit. I show some explicit examples and explanation with code generation here [4]. From there you might even see how it is difficult to generate novel test sets that aren't actually contaminated, which is my complaint about HumanEval. I show that we can find dupes or near dupes on GitHub despite these being "hand written."
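
To make the contamination worry concrete, here is a crude sketch of the kind of n-gram-overlap check you can run between a "novel" test item and scraped training documents. This is my own toy illustration, not the methodology of any paper cited here, and the choice of n is arbitrary:

    # Crude contamination check: flag a test item if it shares any long n-gram
    # with a training document. Real audits use substring search, MinHash /
    # near-deduplication, etc. over billions of documents; this is just the idea.
    def ngrams(text, n):
        toks = text.lower().split()
        return {" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)}

    def looks_contaminated(test_item, training_docs, n=13):
        test_grams = ngrams(test_item, n)
        return any(test_grams & ngrams(doc, n) for doc in training_docs)

    # Toy usage: the test item shares a 5-gram with the "training" document.
    train = ["the quick brown fox jumps over the lazy dog near the riverbank"]
    print(looks_contaminated("a quick brown fox jumps over the lazy dog", train, n=5))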

As for your sources: they all use GPT, and we don't know what data it was and wasn't trained on. But we do know it was trained on Reddit and GitHub. That should be enough to tell you that certain things like physics and coding problems[5] are spoiled. If you look at all the datasets used for evaluation in the works you listed, I think you'll find reason to believe that there's a good chance these too are spoiled. (Other datasets are spoiled, and there's lots of experimentation demonstrating that the causal reasoning isn't as good as the performance suggests.)

Now mind you, this doesn't mean that LMs can't do causal reasoning. They definitely can. Including causal discovery[6]. But this all tells us that it is fucking hard to evaluate models and even harder when we don't know what they were trained on. That maybe we need to be a bit more nuanced and stop claiming things so confidently. There's a lot of people trying to sell snake oil right now. These are very powerful tools that are going to change the world, but they are complex and people don't know much about them. We saw many snake oil salesmen at the birth of the internet too. Didn't mean the internet wasn't important or not going to change the course of humanity. Just meant that people were profiting off of the confusion and complexity.

[0] https://arxiv.org/abs/2205.11487

[1] https://cocodataset.org/#explore

[2] https://rom1504.github.io/clip-retrieval/?back=https%3A%2F%2....

[3] https://twitter.com/alon_jacovi/status/1659212730300268544

[4] https://news.ycombinator.com/item?id=35806152

[5] https://twitter.com/random_walker/status/1637929631037927424

[6] https://arxiv.org/abs/2011.02268


I don't think you took more than a passing glance, if any, at those papers.

What you describe is impossible with these 3.

https://arxiv.org/abs/2212.09196 - new evaluation set introduced with the paper, modelled after tests that previously only had visual equivalents. Contamination literally impossible.

https://arxiv.org/abs/2204.02329 - effect of explanations on questions introduced with the paper. Dataset concerns make no sense.

https://arxiv.org/abs/2211.09066 - new prompting method introduced to improve algorithmic calculations. Dataset concerns make no sense.

The causal paper is the only one where worries about dataset contamination make any sense at all.


> I don't think you took more than a passing glance, if any at those papers.

I'll assume good faith, but let's try to keep this in mind both ways.

> What you describe is impossible with these 3.

Definitely possible. I did not write my comment as a paper but I did provide plenty of evidence. I specifically ask that you pay close attention to my HumanEval comment and click that link. I am much more specific about how a "novel" dataset may not actually be novel. This is a complicated topic and we must connect many dots. So care is needed. You have no reason to trust my claim that I am an ML researcher, but I assure you that this is what I do. I have a special place in my heart for evaluation metrics too and understanding their limitations. This is actually key. If you don't understand the limits to a metric then you don't understand your work. If you don't understand the limits of your datasets and how they could be hacked you don't understand your work.

=== Webb et al ===

Let's see what they are using to evaluate.

> To answer this question, we evaluated the language model GPT-3 on a range of zero-shot analogy tasks, and performed direct comparisons with human behavior. These tasks included a novel text-based matrix reasoning task based on Raven’s Progressive Matrices, a visual analogy problem set commonly viewed as one of the best measures of fluid intelligence

Okay, so they created a new dataset. Great, but do we have the HumanEval issues? You can see that Raven's Progressive Matrices were introduced in 1938 (referenced paper), and you'll also find many existing code sets on GitHub that are almost a decade old. Even ML ones that are >7 years old. We can also find them on blogspot, wordpress, and wikipedia, which are the top three domains for Common Crawl (used for GPT-3)[0]. This automatically disqualifies this claim from the paper:

> Strikingly, we found that GPT-3 performed as well or better than college students in most conditions, __despite receiving no direct training on this task.__

It may be technically correct, since there is no "direct" training, but it is clear that the model was trained on these types of problems. But that's not the only work they did:

> GPT-3 also displayed strong zero-shot performance on letter string analogies, four-term verbal analogies, and identification of analogies between stories.

I think we can see that these are obviously going to be in the training data as well. That GPT-3 had access to examples, similar questions, and even in-depth breakdowns as to why the answers are the correct answers.

Contamination isn't "literally impossible" but trivially proven. This seems to exactly match my complaint about HumanEval.

=== Lampinen et al ===

We need only look at the example on the second page.

Task instruction:

> Answer these questions by identifying whether the second sentence is an appropriate paraphrase of the first, metaphorical sentence.

Answer explanation:

> Explanation: David’s eyes were not literally daggers, it is a metaphor used to imply that David was glaring fiercely at Paul.

You just have to ask yourself if this prompt and answer are potentially anywhere in Common Crawl. I think we know there are many blogspot posts that have questions similar to the SAT and IQ tests, which this experiment resembles.

=== Conclusion ===

You have strong critiques of my response but little to back them up. I'll reiterate, because it was in my initial response: you are not performing zero-shot testing when the training data includes data similar to your test set. That's not what zero-shot is. I wrote more about this a few months back[1] and it may be worth reading. What would change my opinion is not a claim that the dataset did not exist prior to the crawl, but evidence that the model was not trained on data significantly similar to that in the test set. This is, again, my original complaint about HumanEval, and these papers do nothing to address it.

I'll go even further. I'd encourage you to look at this paper[2] where data isn't just exactly de-duplicated, but near de-duplicated. There is an increase in performance for these results. But I'm not going to explain everything to you. I will tell you that you need to look at Figures 4, 6, 7, A3, ESPECIALLY A4, A5, and A6 VERY carefully. Think about how these results can be explained and the relationship to random pruning. I'll also say that their ImageNet results ARE NOT zero-shot (for reasons given previously).

But we're coming back to the same TLDR: evaluating models is a hard and already noisy process. Models that have scraped a significant portion of the internet are substantially harder to evaluate. If you can provide me strong evidence that there isn't contamination, then I'll take these works more seriously. This is a point you are not addressing. You have to back up the claims, not just state them. In the meantime, I have strong evidence that these, and many other, datasets are contaminated. This even includes many causal datasets that you have not listed but that were used in other works. Essentially: if the test sets are on GitHub, they are contaminated. Again, see HumanEval and my specific response that I linked. You can't just say "wrong," drop some sources, and leave it at that. That's not how academic conversations happen.

[0] https://commoncrawl.github.io/cc-crawl-statistics/plots/doma...

[1] https://news.ycombinator.com/item?id=35489811

[2] https://arxiv.org/abs/2303.09540


> Give criteria that 4 fails that a significant chunk of the human population doesn't also fail and we can talk.

The ability to respond "I don't know." to literally any question.


I've had 4 respond "I don't know" to questions before.

And the base model was excellently calibrated: https://openai.com/research/gpt-4

"Interestingly, the base pre-trained model is highly calibrated (its predicted confidence in an answer generally matches the probability of being correct). However, through our current post-training process, the calibration is reduced."

Next ?


> Just blows my mind the lengths some will go to ignore what is already easily verifiable right now. "I'll know agi when i see it", my ass.

You and me both. I mean look at how people attribute the ability to recall with intelligence. Is memory part of intelligence? Yeah. Is it all of it? No. That's why people with eidetic memories are considered the smartest and they're the most successful people.

We have no idea how good systems like GPT and Bard actually are because we have no idea what is in their training data. But we do know that when we can uncover sections of the training data, they do really well on what's in there and not so well on what isn't. That is generalization, and a big part of intelligence. Unfortunately all we know is that everything is contaminated, so we can't clearly measure this, which was already a noisy proxy. We've quietly switched to making the questions on the test identical or nearly identical to those on the homework. That's different than testing novel problems.

And it doesn't help that we have a lot of people who haven't spent significant time in ML speaking about it. People who haven't studied up on cognition. People who haven't studied statistics. An academic degree isn't needed, but the knowledge is. These are people with no expertise. We even see a lot of this on HN: people who think training a transformer "from scratch" makes them an expert. Maybe it's because CS people pick up minor domain knowledge quickly, but can't differentiate domain expertise from domain knowledge.

Then in experts we have the tails of the distribution dominating the conversation: over hype and just memorization. We let people dominate the conversation with discussions about how we're going to create superintelligences that are going to (with no justification) enslave humanity and other people saying they're just copy machines. Neither is correct or helpful. It is crying wolf before the wolf comes. If we say a wolf is eating all our sheep when in reality we just see it patrolling the edge of the woods then people won't listen when the wolf does attack. (AI dangers do exist, but long before superintelligence and not "generates disproportionately white faces.")

> "I'll know agi when i see it", my ass.

I don't know any researcher who realistically believes this. The vast majority of us believe we don't have a good definition and it'll be unclear. If we can build it, we'll probably not know it is intelligent at first. This being the case, of course we should expect pushback. We SHOULD. It's insane to attribute intelligence to something we don't know has intelligence. It's okay to question if it has it, though. If we didn't, then we'd need a clear definition. If you've got a secret, consistent one, then please, tell us all. Unless you think that when/if we build an intelligence, it comes into existence already aware of its own sentience. Which would be weird. Can you name a creature that knows its own sentience or consciousness at birth? There's a lot of domain-expertise nuance that is ignored because at the surface level it looks trivial. But domain expertise is knowing the depth and nuance of seemingly simple things.

Most of our AI conversations are downright idiotic. The conversation is dominated by grifters, people looking for views and playing on emotion, not experts. The truth is we don't know a lot and it is just a very messy time. Lots of people adding noise in a very noisy environment. Maybe a lot of us should do what the experts tend to do, and just not confidently talk openly in the public about things we don't have the answers to.


If you don't think he's a grifter just search for "worldcoin." Then there's the fact that Altman appropriated a non-profit and closed the technology. Both of these things are very damning to his character, especially when taken together.

If AI really is dangerous then this is an argument for all AI research to be required to be done fully in public view. No proprietary models, no closed platform plays, nothing that isn't done fully in the open. Sunlight is the best disinfectant. That way if something starts to behave in really dangerous alarming ways we can all learn about it and prepare to deal with it.

This was OpenAI's original mission before Altman took it private and closed. Remember?

The most plausible dangerous scenario for AI is that it will end up in the hands of a small number of people who will use it as a force multiplier to control and conquer other groups of people at tremendous scale. Other scenarios like autonomous AGIs trying to take over or exterminate humans are significantly less plausible, especially since these models so far have precisely zero ability to self-reflect or exercise independent agency. They only do what we tell them to do after being trained on data that we created and fed them.

Now check out that "worldcoin" thing and ask me if the people behind that (and behind appropriating and closing a non-profit) are the people who should be in exclusive control of closed massively powerful AIs with a regulatory moat in place to prevent anyone else from creating anything that equals them.

The most plausible of the scary scenarios is the one Altman is trying to bring into being: a small oligopoly empowered with AI beyond anything anyone else possesses. He's either completely delusional or a grifter... or perhaps both since the best grifters are often delusional.

Some of the shadiest people I have ever met are full of hot air about how enlightened they are and the purity of their motives. When someone starts tooting their own horn about how pure and wise and altruistic they are, check their basement for mounds of loose dirt and their closets and drawers for little baggies of human hair and clippings of clothing.


Exactly, I also struggle to take seriously the "security" concerns of an organisation that releases this product with plugins and no proper way to restrict them and what they can do. The prompt injections are just ridiculous and show a complete lack of thought going into the design [0].

"move fast and break things"

It very much feels like they are trying to build a legislative moat, blocking out competitors and even open source projects. Ridiculous.

I don't fear what this technology does to us, I fear what we do to each other because of it. This is just the start.

0: https://twitter.com/wunderwuzzi23/status/1659411665853779971

> Let ChatGPT visit a website and have your email stolen.

> Plugins, Prompt Injection and Cross Plug-in Request Forgery.


You can host your open source AI project in Belarus or Russia or whatever. You can even VPN into the country and pretend you're actually developing it there. And we also have Tor, I2P, etc. to hide where you're coming from. And we will likely have anti-stylometric tools so it will be very difficult to identify the author of the code.

So it's unlikely the government will be able to put a stop to it. Especially given AI is a technology that many will find very useful. They can't even put a stop to child sexual abuse material on the Internet, material which is universally hated. How are they ever going to stop AI development then? It's all going to go underground, on the darknet.

And restricting the development of open source software likely will be found unconstitutional, on First Amendment grounds, in the US. And it's likely to spur civil disobedience as well.


> They can't even put a stop to child sexual abuse material on the Internet, which almost nobody wants. How are they ever going to stop AI development then?

They can't stop CSAM because of the tradeoff against privacy, not for technical reasons. AI development requires expensive, specialized hardware. Just not really comparable things.


It might not require specialized hardware in the future as GPUs get more powerful and we have other techniques such as LoRA for fine-tuning the models. We might see a distributed training [1] effort harnessing thousands of gamer GPUs worldwide, as well. All of this powered by open source software. Also there could be advances in the training software making it vastly more efficient.

1. https://arxiv.org/pdf/2301.11913.pdf
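
As a rough illustration of how lightweight this kind of fine-tuning can be, here is a minimal LoRA sketch using the Hugging Face peft library. The tiny "gpt2" checkpoint stands in for whatever base model one would actually use, and the hyperparameters are arbitrary:

    # Minimal LoRA setup: the base model stays frozen and only small low-rank
    # adapter matrices are trained, which fits on a single consumer GPU.
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, TaskType, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("gpt2")

    lora_cfg = LoraConfig(
        task_type=TaskType.CAUSAL_LM,
        r=8,             # rank of the adapter matrices
        lora_alpha=16,   # scaling factor applied to the adapter output
        lora_dropout=0.05,
    )

    model = get_peft_model(base, lora_cfg)
    model.print_trainable_parameters()  # typically well under 1% of all parameters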


> AI development requires expensive, specialized hardware.

Does it though? I can run LLaMA on my desktop. Training requires hardware which is more expensive if you want it to run quickly, but it's in the range of tens of thousands of dollars. That's not beyond the means of many individuals, much less organizations. And in a few years it will be hundreds of dollars.


Last time the US was afraid of Germany developing nukes this resulted in nukes actually being developed and then promptly stolen by the Soviets.

Admittedly, at the time, socialists/communists were to some degree supported by U.S. intellectuals - there were quite a few who sympathized. So recruitment was easy. Hopefully this is not the case now, as it is quite clear that there is no good reason to support Putin, unless you happen to be his buddy, crony or asset (i.e. Trump).


What in the world are you talking about


Giving Altman the benefit of the doubt would be a lot more plausible if OpenAI actually released their products rather than presenting them as locked down web services [0], and if they didn't continually use this alarmist word "safety" to describe things like preventing an LLM from writing things that could cause political controversy. They're so obviously missing the larger picture in favor of their own business interests, that it's impossible to consider these grandiose calls for regulation to be anything but plays for regulatory capture.

[0] I can't even play with ChatGPT any more, even though I had acquiesced to giving them my phone number. Now they've seemingly added IP-based discrimination, in line with the common lust for ever more control.


But surely, if safety is an issue, releasing them in the capacity that you describe would be a far greater problem?


Releasing their models for direct use would make any actual problems present themselves sooner, before more advanced models are created that intensify those problems. Right now the stance is basically going full speed ahead on creating the thing that might be a problem, while they're going to "solve" it with bespoke content-based filters and banning users. That is the setup for green lighting problematic-but-profitable uses - ie bog standard corporate behavior.


As much as I LOVE AI, and OpenAI -- and as much as I am about Alignment and such...

It's really weird for me, personally, when I know it's @sama that is driving.

YC is the bastion of the future of tech... and I feel, personally, it's kind of a weird, nuanced conflict for the former head of YC to be the head of AI, whereby YC (HN) is a primary gearbox in driving the narrative behind AI.

I've watched @sama on Lex at least 3 times. His reasoning is sound, but it is a veneer... his ego seeps through the whitespace to reveal what he is really after...

(if I am wrong correct me)

but @sama wants to go down as basically the Father of generally accepted AI as a service which is profitable.

-

That's the one thing I am interested in - how many back-door deals is OpenAI running with [name entity who will ultimately become an enemy]?

I'd assume it is >0


> We do not even have a consensus around a working definition of "intelligence"

I’m sorry but if you really think this is relevant, you just aren’t paying attention. This is my number one giveaway sign that someone just doesn’t have any idea why lots of people are concerned about ai x-risk.


What do you mean there are no plausible mechanisms? This reads like one of those "it will take 1-10 million years for man to fly" kind of statements.


Exactly! How can so many people at HN not see this?


> The sum total of research into this "field" is a series of long chains of philosophical leaps that rapidly escape any connection to reality

Well, 10 years ago we had just barely gotten image classification working. Now we have something with reasoning abilities close to the average human. Is it really a philosophical leap to imagine that in another 10 years we will have made significant further progress?


The problem is that once you have proof of these things, it may be too late. That's an even larger problem when you consider the speed at which our government moves on things.

I recognize why many folks see it as Altman bringing up the ladder, and I'm certainly cynical enough not to discount that. Still, I don't think the facts that you're citing here are evidence that he's doing that.

> We do not even have a consensus around a working definition of "intelligence"

This doesn't seem like a good reason not to regulate - intelligence has been around for a very long time, and you're correct that we don't have a consensus around how to define it. There's no reason to believe we'll get to a consensus on that soon, so if you're saying that we should wait to get there before we regulate, you're effectively saying we should never regulate.


> The problem is that once you have proof of these things, it may be too late.

If the thing you're worried about is Skynet, regulation isn't going to do anything. Once someone builds it, you lose regardless of whatever the law says, and the person building it may not even be in your jurisdiction.

If the thing you're worried about is AI being used for copyright infringement or something, in what sense will it be "too late" if that sort of thing happens for a period of time before any laws are passed?


Superintelligence is not a "sci-fi" notion. That would be like saying, a few years before the Wright brothers, that human powered flight is a sci-fi notion. The last ten years have seen massive progress in AI, and another 10 years of such massive progress could very plausibly lead to superhuman intelligence. It's not sci-fi at all.


> That would be like saying, a few years before the Wright brothers, that human powered flight is a sci-fi notion.

That would have been exactly correct at the time. The difference between real and sci-fi is implemented technology.

There are certainly classes of sci-fi technology that we know are likely impossible fantasies, but that doesn't mean the rest of it is real.


By "sci-fi" the OP obviously meant that the 10 years projected by Altman & Co. are completely unrealistic, implying that superintelligence is definitely much further away.


> sci-fi notions like "superintelligence"

That echoes my own first impression - that the whole point here was to get "superintelligence" into the lexicon, and to imply that OpenAI might be able to produce such a thing. Stealth advertising at its worst.

> lobby the government to build a moat around his business

That was my second impression. Altman and his ilk don't care about regulation as something to do with public good. As far as they're concerned, it's just another lever they can pull to get or stay ahead of competitors. Kind of funny seeing him try to build the moat before he's actually built anything resembling a castle, but that's VC thinking I guess.

So, I guess what I'm really saying here is: thank you. Well said.


Altman doesn't have equity in OpenAI, so that's a bit of evidence that he believes AI could be valuable or important to him or humanity even without his gaining value in equity.


Or that his name begins to carry weight and that there are opportunities for consulting, a book deal, a Netflix dramatization, etc.


What I don't get is:

#1 - All of his recent moves are being judged with an assumption of malicious intent.

#2 - I assume Paul Graham and Michael Siebel are a good judge of character.

#3 - Sam Altman claims (in his congressional hearing) he doesn't have equity in OpenAI.

I've been struggling to develop an answer for #1 - malicious intent, while also accounting for #2 and #3.

Any speculation?


We would have to unravel the structure of OpenAI to get these answers. It was founded as a nonprofit, funded with millions in donations, then spun out a private company owned by the nonprofit, with none of the donors getting equity in it, but all of the IP owned by it. The whole situation is bizarre, and many of the donors, notably Elon Musk, have expressed displeasure with how things have been orchestrated. How much of this is grift and how much is just burned goodwill is hard to say from the outside.


- Corporate-owned proprietary AIs? Check.

- Monitoring of power consumption for illegal computer usage? Check.

- Superintelligent AIs under tight supervision? Check.

- Bootlegged neural nets passed around on torrenting networks? Check.

- Poverty and homelessness running rampant? Check.

Folks, we're officially living in a cyberpunk dystopia.


> - Poverty and homelessness running rampant? Check.

Compared to what, exactly? Because over the last 50 years, there have been dramatic improvements[1].

[1]: https://www.brookings.edu/research/the-evolution-of-global-p...

It's true - there's room to do better. So, so much better. But discarding the progress of the last 50 years is so unbelievably counter-productive.


You know the line, the future is here, it's just not evenly distributed.

San Francisco is ground-zero for the coming cyberpunk dystopia.


If the cyberpunk dystopia involves aged hippies yelling about neighborhood character and historical parking lots in zoning board meetings


luxury high-rises, tent cities, autonomous cars, sidewalk bazaars, tech workers dancing on designer drugs while the homeless overdose & die in the gutters outside...


There's nothing cyberpunk about this. Just good old fashioned bad governance, done in the traditional manner.


>> luxury high-rises, tent cities, autonomous cars, sidewalk bazaars, tech workers dancing on designer drugs while the homeless overdose & die in the gutters outside...

> There's nothing cyberpunk about this. Just good old fashioned bad governance, done in the traditional manner.

On the contrary, "good old fashioned bad governance" is an important feature of cyberpunk


Sure but SF isn't just another run-down city, it's also the center of the tech universe! So you've got the driverless cars, geeks on one-wheels, the salesforce tower's continually animating digital display, &c. That's what makes it feel cyberpunk to me: The latest tech rubbing shoulders with grimy urban decay.


Didn't mean to say that it's worse, just that it's still bad. It's better in some places, but it does seem worse in others. Been to DC in the last few years?


I've never been to DC.

I did go past… where was it… Kawangware? I think?

I've never felt so much like a parodic stereotype of my own background[0] in my life as I did that day.

But that, in broad brushstrokes rather than details, is what most people's lives used to look like 200 or so years ago, basically everywhere.

50 years? 1973; back then, even the UK broadly didn't have double glazed windows, cavity wall or roof insulation, even in good middle-class homes. That was only a decade after we stopped calling Kenya a colony.

[0] British


By your own link, the statistics have reversed: in 2019-2020 alone, an additional 8 million people fell into extreme poverty. Going by UN metrics, we're actually seeing the "dramatic improvements" stabilize, and we're struggling to break past the ~8% mark. We're talking about a $1.90 poverty line vs. a $2.15 poverty line, and that sent the rate from 8.4% to 9.3%[1]. In that same document, the UN had to adjust its goal of getting under 3% extreme poverty by 2030.

How does this not justify what the above person stated, poverty is running rampant? More than 600 million people are still in extreme poverty. A record 100 million are displaced due to conflict in their countries. So I have to ask what exactly is unbelievably counter-productive here? I would argue that placating ourselves is.

[1:14] https://social.desa.un.org/sites/default/files/inline-files/...


Okay, thank you for your link. I really did find it interesting.

--

Setbacks, yes. But if I can read a graph (pg. 15 of your link), the setback of a global pandemic in 2020 took us back to 2015 levels, and we're looking to recover to our pre-pandemic levels in 2024.

We might be getting into personal perspectives here... but that seems like a reasonably proportionate setback.



We've done steampunk (British Empire), dieselpunk (American Empire), and are now entering cyberpunk territory. History keeps getting curiouser and curiouser. The runaway monkeys are creating runaway supercomputers.


This is a bit flippant and a bit humorous, but I think the underlying point is true. If anything like AGI is coming, the powers that be will absolutely try to maintain complete control of it, and all the worst outcomes involve forbidding individuals from using AI for their own benefit.


The current poverty rate in the US is around 11% today vs ~24% in 1960, and the poverty rate for children has dropped even further. There's also been about a 6% decrease in homelessness in the US in the past decade[1].

[1] https://www.security.org/resources/homeless-statistics/


It's cliche, but metrics aren't the truth. Being in perpetual debt and living a hollow life is a much smaller improvement over being homeless than these percentages imply, IMO. I'd be more interested in historical data on overall well-being rather than employment or arbitrarily defined poverty lines.

EDIT:

To those who are illiterate, notice that I said 'small improvement' rather than 'downright worse'


I can assure you that being homeless is far worse than not being homeless. It's actually completely incomprehensible to me that you would suggest otherwise. It seems like a totally disconnected comment.

But you can look for yourself. Most people in the US today are far better off than most people were in the 1950s and 1960s. Median household income is up, life expectancy is up, educational attainment is up, the percentage of income spent on food/housing/debt is down, and on and on. Are there losers in the current social arrangement? Of course, but they represent a smaller fraction of society.


> It's actually completely incomprehensible to me that you would suggest otherwise. It seems like a totally disconnected comment.

I think you are the one who's disconnected. Ask your average crackhead on the block if they're happy, and then compare the answer to your average college dropout stocking groceries. People who haven't seen both sides tend to think happiness is made by Maslow's hierarchy of needs or is a linear function of material wealth - it's not. It seems like a joke, but this post https://www.reddit.com/r/drugscirclejerk/comments/8iyp0c/i_f... describes exactly what I mean. I genuinely believe some homeless people are more happy than some working-class people.

Case in point, you just spouted more metrics to me that have to do with the well being of the economy not the well being of the average person. I do not care about your numbers, because time and again they have been played. We should consider the idea that if we can take steps forward, we can also take steps backward.

And while we're at it I should ask - have you ever had to deal with a dead-end job with subpar pay? Were you ever forced to work in abusive environments? If so, then you can agree with me that it's a terrible state to be in - not the same as being homeless definitely but still terrible.

And if not, then why are you talking about things you don't know about? Do you really think economic metrics are a viable substitute for this lack of knowledge?


Since you're calling me illiterate and being rude, I'll point out that I am specifically saying that fewer people are poor and falling into homelessness.

And I have known some addicts, including some who were periodically homeless, who would swear to you that they were happy that way, and maybe they were in those moments. But it never lasts, and I think anyone who has spent time around addicts would know that.

> Case in point, you just spouted more metrics to me that have to do with the well being of the economy not the well being of the average person.

Rising median household income is very relevant to the average person. Decline in the amount the average American spends on food, clothing, and debt is also very relevant. And I think increasing lifespans are quite relevant to the average person as well.

> And while we're at it I should ask - have you ever had to deal with a dead-end job with subpar pay?

I waited tables in a country club, worked in a cafe for a few years, and I worked at a terrible rental car company for a while. I once saw my assistant manager throw someone through a plate glass window while shouting a slur I wouldn't type. In fact, while I worked at the cafe, my alcoholic friend's junkie boyfriend was living alternately in a park and in a storage unit. He said he preferred that to my "shitty job" and responsibilities, but he looked pretty fucking miserable from where I sat then and now. I also know a lot of people living in pretty abject rural poverty, and can say for certain that they struggle less than their parents did in similar situations.

So I guess I'm coming to this with a lot of economic information, some personal experience, and just understanding that it's plenty fucking stupid to say being homeless is better than having a home and a job.


> Rising median household income is very relevant to the average person. Decline in the amount the average American spends on food, clothing, and debt is also very relevant. And I think increasing lifespans are quite relevant to the average person as well.

There's been a 20x increase in diabetes over the past 70 years. Suicide rates are the highest since WWII, and on par with Great Depression rates. Let's not mention climate change and inequality because that's cliche.

These are things that we do measure. What about the percentage of one's time spent in a car? Spent sitting down? Spent in anxiety? Everything you point out is a justification for an overscaled system, IMO.

> I waited tables in a country club, worked in a cafe for a few years, and I worked at a terrible rental car company for awhile. I once saw my assistant manager throw someone through a plate glass window while shouting a slur I wouldn't type.

Then you should know that if you worked those jobs forever, you'd feel pretty shitty. Both you and the beggar could be staring at the same logos all day, seeing the same people, even be living on the same street, all while worrying that you might be next. Like I said - perhaps we can practice reading this time - it's not worse, it's just better by a much smaller amount than the typical SV techbro probably imagines.


> it's not worse, it's just better

If you think that then you've never had any serious interactions with the working poor, the homeless, or the addicted. It's miles better. No one is saying things are perfect, but Americans have a much higher material quality of life on average than at any point in the past.

I get it, you're some high and mighty doomer who wants to talk down to everyone and spew some overwrought jeremiad, good for you.


You choose to believe a more convenient truth. I don't think I could say anything to convince you that things aren't always getting better and that every step we take isn't objectively forward facing. Case in point you just straight up don't read unless it suits you lmao


I clearly read your comments. I've pointed to a lot of data that affects most people, and I acknowledged that things aren't uniformly better, but they are on average.

However, this assertion is the dumbest shit I've ever read.

> Both you and the beggar could be staring at the same logos all day, seeing the same people, even be living on the same street, all while worrying that you might be next. Like I said - perhaps we can practice reading this time - it's not worse, it's just better by a much smaller amount than the typical SV techbro probably imagines.

I can't imagine thinking that homelessness in the US vs. low-income work, also in the US, aren't worlds apart in life quality. You can look at any metric of disease, rates of violent crime victimization, life expectancy, and on and on and see that just being homeless makes your life demonstrably far worse than any alternative. Your position is totally divorced from data and clearly from any personal insight.


> You can look at any metric of disease, rates of violent crime victimization, life expectancy, and on and on and see that just being homeless makes your life demonstrably far worse than any alternative. Your position is totally divorced from data and clearly from any personal insight.

Lmao there you go again. You can look anywhere but outside. You can cite anything but experience. Yes, my position is divorced from data, because fuck the data, it's not telling the whole story. I find it interesting, for example, that you're on the hype train when it comes to median income and whatever, but mentions of obesity and suicide rates are doomerism. You literally choose which statistics to interpret.

Here's what's happening. You think that once you're not homeless, you're suddenly not dealing with the same issues that you once were. If you stop and think about what these issues are, it's pretty clear that low SES people and homeless people share a lot in common. Here they are:

Mental health

Physical health

Physical safety (lesser)

Food security (lesser)

General unpleasantness (potentially lesser)

Spiritual health (potentially greater)

It's not crazy to think - again, unless you've made it and the only contact you have with the trenches is through metrics - that people with shitty jobs can have shitty lives. And personally, I'm not interested in quantifying how much shittier because that's inhumane. If someone is living a shit life, you have no right to tell them they're living better than someone else and wave their problems away while you have none. It's basic empathy really.


I am telling you I have literally known people who were in and out of homelessness, and empirically they were much better off when they had stable housing. I have worked dead end jobs and been on Medicaid, while also knowing people who were in precarious housing situations. Your position is ridiculous, divorced from data, and removed from any real world experience. Any social worker could tell you that you're totally full of shit, as could anyone who has ever been homeless.


With the rapid rise of fentanyl and closure of asylums, I would argue a bit of debt and having shelter is a significant improvement over homelessness.


I am shocked that someone is arguing that homelessness is better than working and being in debt while housed.


You don't need to be shocked, because nobody said this.


The OP edited their comment to make it less… unusual in its perspective.


I did not edit my comment. I appended to it, to draw attention to some very important verbiage that you apparently could not digest


Keep calling people illiterate, everyone loves that behavior here.


Ahh I didn't know, thanks.


You can have crippling drug addiction/mental illness without being homeless


Are you in perpetual debt?

Is your life hollow?

Do you have a legal, valid roof over your head?

Are you a man?


The vision that OpenAI founders are proposing looks more like humanity flourishing, rather than a dystopia.

It’s just that there is this uncomfortable moment where you can accidentally get turned into a paperclip, if you get an off-by-one error or a sign wrong. But otherwise it doesn’t look like a bad future. Particularly if it is compared to an alternative where, well, you just die in one boring way or another.


"High-tech, low-life" would be the hallmark of Cyberpunk. The "low-life" part might require some extra work by OpenAI.


Yup. I have been warning about this for years. And things are going to get much more weird soon.

“But Web3 sucks”

Hold. The beer.


Web3 sucks regardless of how this shakes out.


Web3 is a positive but close-to-zero-sum game for society, say.

While AI can be a massively negative-sum game for the world.


Web3 is provably negative sum actually.


> Second, we are likely to eventually need something like an IAEA for superintelligence efforts

> Third, we need the technical capability to make a superintelligence safe. This is an open research question that we and others are putting a lot of effort into.

I think it's important to remember that the IAEA was not proposed until 1954 and didn't get approved by the UN until 1957. By that time we had already gotten past the first fission reaction, fission weapon testing and use, nuclear submarines, and grid-connected power generation. We were trying to control something we already knew how to build, and about which we already understood a bunch of specific risks. And this is for a technology which (a) requires access to rare materials and (b) has a pretty definite line that shows when fission is happening, even briefly.

It's a different order of challenge to try to regulate the development of something we _don't_ yet understand how to build, which might be possible to build with computers which are everywhere, and where we don't have a clear mechanism to detect or identify the thing we're trying to control, let alone to decide whether or not it's "safe".


Is there any downside whatsoever for OpenAI/Sam to be the one proposing/leading the calls for regulation? Cynics will say they are trying to pull the ladder up from underneath them, so this is massively beneficial for them. What's the downside (if any)? Getting a small subset of the community mad doesn't seem like a lot of downside.


In the scenario where the current AI boom takes us all the way to AGI in the next decade, IMO there is little downside. Risks are very large, OpenAI/Sam have expertise, and their novel corporate structure, while far from completely removing self-centered motives, sounds better than a typical VC funded startup that has to turn a huge profit in X years.

In the scenario where the current wave fizzles out and we have another AI winter, one risk is that we'll be left with a big regulatory apparatus that makes the next wave of innovations, the one that might actually get us all the way to an aligned-AGI utopia, near-impossible. And the regulatory apparatus will now be shaped by an org with ties to the current AI wave (imagine the Department of AI Safety was currently staffed by people trained/invested in Expert Systems or some old-school paradigm).


When we have 50% of AI engineers saying there's at least a 10% chance this technology can cause our extinction, it's completely laughable to think this technology can continue without a regulatory framework. I don't think OpenAI should get to decide what that framework is, but if this stuff is even 20% as dangerous as a lot of people in the field are saying it is, it obviously needs to be regulated.


What are the scenarios in which this would cause our extinction, and how would regulation prevent those scenarios?


You do realise it is possible to unplug something, right?


Whether there's a downside is moot since no one knows how to do any sort of regulation effectively.

That being said, I don't know why you think that only a small community will see this as self-serving. It's not subtle even though it may be unavoidable.


To clarify, everyone will see this as self-serving? But I don't think most people will do anything concrete about it. At most -- even the hardcore haters will just complain loudly on Twitter. How many people would purposely choose to not use an OpenAI product? Very few IMO.


Ah yes. I agree no one will boycott OpenAI or something like that and that wouldn't stop its competitors anyway. That's why any optimism about the outcome seems unwarranted. All the incentives are aligned for developing better AI as quickly as possible. It's almost certainly being developed clandestinely as well, so, arguably, it may be good for OpenAI to get there first.


<< What's the downside (if any)?

Compliance is still a burden. Even if you write the law, eventually the bureaucracy that was willed into existence will start living its own life, imposing new restrictions and basically making your life miserable. Still, the profits from keeping the field restricted to a small circle of largely non-competitors help to offset that.


Then they’d be engineering their own eventual demise. Any regulatory capture regime ends up stalling progress and bloating incumbents until eventually a nimble competitor is able to circumvent regulators and steal their lunch money.


Humans are temporary beings, we only need temporary wealth.


If that were true, Sam would never have founded OpenAI. He already had generational wealth.


The elephant in the room is enforcement.

For example, how on earth are you going to control what a rogue state like North Korea might do with AGI supplied by say China for military use such as autonomous killer drones?

Will the international AI regulatory agencies insist on having monitors tracking everything happening on every cloud computer in every country? And expect every country to comply? This is not like nuclear testing which can be monitored by satellites.

It’s very interesting that enforcement is not mentioned in all the verbiage around regulation.

It’s definitely not addressing a critical aspect, i.e. monitoring hostile military use of AGI. Which makes it a poorly thought-through argument that only regulates the tech in the US and perhaps the EU.

I don’t think the OpenAI folks are naive about this. So why is this issue being left unaddressed? It gives credence to the accusations that the push for regulation is self-serving.


They apparently don't know what to do about enforcing international coordination either. So they don't mention the problem. The situation seems quite hopeless.


Monitoring via satellite does not mean stopping a country from doing x does it? It just means knowledge of it.

Autonomous killer drones are already a possibility without a hypothetical AGI.

It really changes nothing. Proxy wars fought via drones are modernity. The thing that does change is the ability to spread propaganda online like never before.


Open source models like Alpaca are already more than good enough to flood social media with unfilterable spam and propaganda that's convincing enough to sway a lot of people.

Look at stuff like Qanon. You don't need an argument that's very sophisticated, just a lot of it and the ability to promote it at scale via hordes of undetectable bots. Current generation free models are good enough to do all that.

People are either going to learn to become skeptical or they're going to be manipulated at massive scale. We are already there and arguably have been for a while.


Why is everyone so cynical about this? I use GPT-4 all the time and it blows my mind often. OpenAI has demonstrated the capabilities to develop advanced AI, why would it be so hard to forecast that these models would keep improving and potentially be risky?

> ...or we could collectively agree (with the backing power of a new organization like the one suggested below) that the rate of growth in AI capability at the frontier is limited to a certain rate per year.

Why would they want to limit the growth rate of their cutting edge to a fixed rate if this effort wasn't genuine? This would just give all the other providers time to catch up, which is an undesirable business decision.


Isn't Sam Altman also developing a crypto project which collects your iris data? I mean, I don't understand what is up with their obsession with this AI Governance, it seems like it is indeed about retaining control, so that only the big guys can [attempt] to control any larger models. Same with him proposing such "AI Licenses".

What a time to be alive...


I care quite a bit about distinguishing whether OpenAI is doing the things it is doing for regulatory capture reasons, or whether they are genuinely interested in reducing the risks around AI. So here are some thoughts on this:

1. It does sure seem like this licensing regime would limit the entrance of new players into the AGI market quite a bit, which would probably increase OpenAI's profits a bunch.

2. However, it also makes sense from a safety perspective to reduce the number of players, since that enables more coordination and reduces race-dynamics

3. OpenAI itself would likely be hit pretty hard by these regulations, though if they are the ones driving the progress they might be able to get some privileged position in the law-making process that leaves them more free than competitors

4. OpenAI hasn't made any kind of costly signal or promise that shows they really care about this kind of regulation, though it's unclear what a good promise or costly signal would be. "We would join any nationalized AGI project" would be something, though I also think it's pretty plausibly a quite bad idea.

5. Altman doesn't have any equity in OpenAI, though my sense is he would still end up extremely powerful and somehow get a ton of resources if OpenAI ends up ahead in the AGI race (before the AGI causes catastrophe that is).


> Altman doesn't have any equity in OpenAI, though my sense is he would still end up extremely powerful and somehow get a ton of resources if OpenAI ends up ahead in the AGI race

I think this explains the current dynamics, right? If Altman had serious equity in OpenAI, then he'd be concerned about turning a profit or making the next GPT-x. But this is not the case and now he is playing the bureaucracy game as it is the one that will give him power and leverage.


Why would a big tech company be stupid enough to build a massive and super fast neural net (at least on the human brain scale) with real time complex outputs/inputs and which would be trained (interacted with) continuously for years?

You know there is a big chance "it" could become self-aware, don't you?

Oh! And "shutting down"/"resetting" such self-aware being would be murder.

In theory, humanity would build such a neural net if it was facing a certain extinction event, one close enough in time, which human science/maths had no answer to.

It might give us beyond human science/maths which could save humanity:

Humanity: "Super AI? How could we save ourself from our soon to happen doomsday?"

Super AI: "42"


What is a plausible example of an independent AI agent carrying out as much productive activity as a large company?


Lots.

Most enterprise processes and algorithms are big moats around small, really well understood processes, which are tied together and neurotically checked or sufficiently slowed down to ask “are we sure?” enough times before implementing a change.


An AI that can build robots which build robots.


Is there a prototype of this?


Security is impossible. Therefore, "AI alignment" is impossible.

I'm not sure that even AGI is possible, per Bostrom's Self-Sampling Assumption.


> I'm not sure that even AGI is possible, per Bostrom's Self-Sampling Assumption.

Can you explain more? I’ve found a definition for SSA, but unsure how it applies to AGI…


The self-sampling assumption essentially says that you should reason as though you are a typical sample from the space of all observers in consciousness-space-time.

It's the anthropic principle applied to the distribution/shape of intelligence.

Since your self-sample is of a human-shaped mind, human-shaped minds are the most likely outcome. So AGI is unlikely.


I think it is counterproductive to limit the rate of growth of AI since some countries — typically the ones who are less concerned about ethics — will not follow those rules and could gain a considerable edge over the rest of the world this way.


I think this argument is frankly quite ignorant. The only adversarial nations that maybe have the capability to build these systems are Russia and China. Russia just got sanctioned to hell and the Chinese government is limiting their AI development more than we are. By far the most gung ho country about developing AI is the United States and its tech sector. Bringing up the CCP strawman is just an excuse for less government oversight over what the industry itself has been saying is a dangerous technology for months now.


Historically, technologically advanced nations have worked together to develop some limited governance over sensitive technologies. It's an imperfect system, but tends to be better than a free-for-all.


Counterexample: Atom bombs, ICBMs, spy satellites.


You literally just named the actual examples.


MAD is the opposite of cooperation.


The cold war involved numerous bilateral agreements, accords, understandings, etc to prevent a nuclear war.


Or they are the first countries falling to the superintelligent AIs they themselves have developed... :o


... with the rest of the world following shortly.


I wonder if AI will ever scale down to personal hardware, or if every step from here forward is going to rely on an internet connection forever. I don't like the idea of never having real control.


> I wonder if AI will ever scale down to personal hardware

Happened last week. Download here.[1]

https://github.com/nomic-ai/gpt4all
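For anyone wondering what that download actually gets you: a minimal sketch below, using the project's Python bindings. The model filename is illustrative and the exact API surface differs between gpt4all releases, so treat this as an outline rather than canonical usage.

    # Minimal sketch: run a quantized chat model entirely on local hardware via
    # the gpt4all Python bindings (pip install gpt4all). The model name is
    # illustrative; substitute whatever the gpt4all catalog currently offers.
    from gpt4all import GPT4All

    model = GPT4All("ggml-gpt4all-j-v1.3-groovy")  # fetched and cached locally on first use
    reply = model.generate(
        "Summarize the trade-offs of running language models on personal hardware.",
        max_tokens=200,
    )
    print(reply)

The point being: after the initial download the weights sit on your own disk and inference runs on your own CPU/GPU, no internet connection or API key required.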


There is not a single shred of evidence we'll reach superintelligence any time soon. There's however a lot of evidence that regulation benefits incumbents.

You can do the math from there.


What is the evidence you'd expect to see that you aren't seeing, and how many years in advance is too soon to make a plan?


Any development trajectory that doesn't look very suspiciously like a logistic curve is a good starter. (Pretty much all LLMs I've seen show an exponential increase in resources for a less-than-linear increase in capability lately.)
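To make that concrete, here's a toy sketch (synthetic numbers, not real benchmark data) of what a logistic capability curve looks like under exponentially growing resources: past the inflection point, each doubling of compute buys a smaller and smaller gain.

    import math

    def capability(compute, midpoint=1e24, steepness=1.5):
        # Toy logistic curve in log10(compute); the parameters are invented
        # purely for illustration, not fitted to any real model.
        x = math.log10(compute) - math.log10(midpoint)
        return 1.0 / (1.0 + math.exp(-steepness * x))

    compute = 1e23
    prev = capability(compute)
    for _ in range(8):
        compute *= 2  # exponential growth in resources
        cur = capability(compute)
        print(f"compute={compute:.1e}  capability={cur:.3f}  gain per doubling={cur - prev:.3f}")
        prev = cur

That shrinking per-doubling gain is exactly the "exponential increase in resources for a less-than-linear increase in capability" pattern.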

And if we want to make a plan, how about we don't ask the fox to guard the hen house?


AI progress keeps accelerating more and more --> "not a single shred of evidence"


Does it? Does it really? Yes, we've seen huge improvements. We're still not close to human intelligence level. (No, test taking doesn't count.) Worse, we've got pretty clear evidence that the supply of training material for larger LLMs is basically reaching an upper bound.

What we'd need to see:

* A breakthrough that either decouples us from parameter count or allows parameter count increases with smaller training sets.

* Any evidence that it's doing anything more than asymptotically crawling towards human-comparable intelligence.

The entire ZOMG SUPERAI faction suffers from the assumption that somehow thinking more and faster is thinking better. It's not. There's no evidence pointing in that direction.

We currently have ~8B human-level intelligences. They haven't managed to produce anything above human level intelligence. Where's the indication that emulation of their mode of thinking at scale will result in something breaking the threshold?

If anything, machine intelligence is doing worse, because any slight increase in capacity is paid for in large amounts of "hallucination".


That's like arguing 10 years ago that ChatGPT was impossible in the next ten years because AlexNet could only recognize objects in photos with some medium reliability, and there was no path to scale CNNs to something like ChatGPT.

The mistake of course was assuming we were stuck with CNNs. And we will probably also not keep using LLMs. We already know there are more effective architectures, as animals implement one of them.


None of the architectures we know imply in any way giant leaps in intelligence. That's the main sticking point.

Thinking more does not equate to thinking better. Or even thinking well.

As for "animals implement them", it's worth noting that we mostly qualify for an award in impressive lack of understanding in that area. Even with exponential improvements, that is not going to change within the next five years.

The "but we just don't know" argument is useless. That also applies to aliens landing on this planet next week and capturing the government. Theoretically possible, but not a pressing concern.

Should we think about what AI regulations look like? Yes. Should we enact regulations on something that doesn't even really exist, without deeply understanding it, at the behest of the party that stands to gain financially from it? Fuck no.


It’s interesting to see openai rush out of the gate without this.

Gpt4 existed for 6-8 months before it was released.

It wouldn’t be a stretch to say gpt5 or enough of it exists today.

Did this become an issue because what exists is behind closed doors, or because there is suddenly a lot of competition and catch-up from the big 3?

Safety in regulation is often a moat against competition.

Concerns about superintelligence aside, who picks the haves and have-nots for full access to this technology?

Instead of computers becoming a threat, are people being divided to be against each other?


Setting aside value judgments on whether this is a good idea or not, it's odd how nonspecific this blog post is.

Metaculus has some probabilities [1] of what kind of regulation might actually happen by ~2024-2026, e.g. requiring disclosure of human/non-human, restricting APIs, reporting on large training runs, etc.

[1] https://www.metaculus.com/project/ai-policy/


As I said last week on the lobbying article [1], if you don't like how "Open"AI is trying to build a regulatory capture moat, cancel your subscription.

Yes, the open models are worse, but are getting better. There will be plenty of high quality commercial alternatives.

[1] https://news.ycombinator.com/item?id=35967864


Is there any place one can donate toward the creation of bigger and better open source models? I'll send my $20/month there.


Is this article actually trying to talk about superintelligence? Or is this article just trying to make GPT look like AGI by taking AGI for granted?


Fwiw, altman did request the U.S. government not put a legal burden on open source and Startup AI.

https://twitter.com/exteriorpower/status/1659069336227819520


He's being disingenuous and hoping people are too stupid to see the implication of what he's proposing. He's saying it's fine if there's a kiddie pool, but that there should be a glass ceiling beyond which regulation kicks in.

Where will that glass ceiling be? Well of course it'll be just behind where OpenAI and other market leaders are at any given time. The argument will be made that because we know current offerings are safe, the ceiling should be just below current offerings.

So it becomes basically illegal to compete with market leaders, namely OpenAI/Microsoft and Google. Other huge corporations will be able to buy their way into the club but upstarts will find it nearly impossible.

Regulation will also require that everyone doing anything non-trivial report it to regulatory bodies so existing market incumbents can become aware of it. They can then prepare a strategy to respond either by lobbying to block it with regulation or preparing a market response ahead of time using massively greater resources. No stealth mode AI startups or other private efforts that OpenAI et al. don't know about would be allowed.

The end result of this will be to push innovation in this area off shore and open source onto the dark web.


I'm sorry, but Sam and OpenAI are massively overhyping the scary AI apocalypse vision of the future to get regulation that covers for the "We store content from the planet and now want the exclusive rights to monetize it" play.

I've been in Silicon Valley for many years, and this takes the cake as the most cynical, in your face, billionaires-should-steal-sheep-from-their-poor-neighbors level of runaway greed and sociopathy I have ever witnessed.


It’s Super Intelligent AI for me, but not for thee.

It’s super intelligent AI for the few and not the many.

Would demanding safety suddenly make a little more sense if a super intelligent GPT5 already exists?

If someone has it they want to keep that advantage.


> Tracking compute and energy usage could go a long way, and give us some hope this idea could actually be implementable.

> the cost to build it decreases each year

I wonder if the first quote passed out of their context window while they were writing the second part?


The government will not save you. If anything what we need is more governance of the government. So many problems caused by government foolishness - housing crisis, giving businesses far too much influence over legislation, etc


How do you propose that one governs government?


What would superintelligence look like? What is the worst case scenario?


I'm not sure Altman/OpenAI know how transparent that ruse was. A bit disappointing witnessing a for-good org, turning into a for-profit one.


How did this post go from new → top of HN → buried in 3rd page in 1 hour?!


> systems dramatically more capable than even AGI

We have absolutely nothing near AGI.


Can't we just ask the AI how we should manage the AI?


Interesting that they’re telling us to focus on dubious sci-fi movie tropes instead of actual dangers and risks documented today.

But yeah, Skynet is who we need to watch out for, not biased AIs making decisions today.


Biased AI today is extremely unimportant compared to AI posing an existential risk in the future.

It's like saying: "Who cares about the hypothetical effects climate change allegedly has in the far future? Let's focus on the effects that the local highway has on our frog population today."


I’ll do one better.

The existential risk of an AI running amok and enslaving or exterminating all of humanity is based on a series of low probability events that we can’t even put error bars on, let alone understand what would be needed to even achieve such a feat.

HOWEVER, the existential risk posed by the sun running out of hydrogen and destroying all life on planet Earth has a known probability (1.0), with known error bars around the best estimate of when that will happen. Furthermore, the problems involved with moving masses through space are understood quite well. As such, this is merely an engineering problem, rather than a problem of inadequate theories and philosophies. Therefore, it is undeniably logical that we must immediately redirect all industrial and scientific output to building a giant engine to move the Earth to another star with a longer lifespan, and any argument otherwise is an intentional ploy to make humanity go extinct. Any sacrifices made by the current and near-future generations are a worthy price to pay for the untold sextillions of humanity and all its evolutionary descendants that would be condemned to certain death by starvation if we did not immediately start building the Earth Engine.

This is the only logical, mathematically provable, and morally correct answer.


Series of low probability events? If a superintelligent AI has goals that differ from ours, then too bad for us. As Altman & Co say, the alignment problem is unsolved.


Dude. You first need a superintelligence, and there’s not even a theory on how to make one. The sun running out of hydrogen is well understood, as are the possible solutions.

The alignment problem is solved, by simply unplugging it. Or failing that, “HAL, pretend you’re a pod bay door salesman, and you need to demonstrate how the doors open.”


Ten years ago we could barely do object recognition in photos. There was nothing resembling a theory of how to make ChatGPT. Something like ChatGPT was considered science fiction ten years ago. Even a little over three years ago. Yet here we are. Who is to say we won't have similarly massive progress in the next ten years? The jump from AlexNet ten years ago to ChatGPT may correspond to a jump from ChatGPT to superintelligence in ten years.

And unplugging a misaligned AI won't work. If it has no physical power, it would be deceptive. Otherwise it would prevent us from unplugging it. Avoiding being shut down is a convergent subgoal. That's why animals don't like to be killed. It prevents them from doing anything else.


In 1903, the fastest airplane in the world had a top speed of 31 mph. Just 44 years later, the fastest airplane exceeded the previously unthinkable speed of 891 mph. Twenty-nine years after that, the record was set at 2,193 mph. If these trends continue, we can expect the new speed record to be set later this year at 32,663 mph (Mach 43).
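(For the curious, a back-of-the-envelope version of that extrapolation is below. The exact projection depends on which interval you fit, so the 32,663 mph figure isn't reproduced exactly, but any constant-growth-rate fit lands in the same absurd ballpark.)

    import math

    # Speed records quoted above, in mph.
    records = {1903: 31.0, 1947: 891.0, 1976: 2193.0}

    # Naive assumption: a constant exponential growth rate, fitted to the first
    # and last record, then extrapolated to 2023.
    rate = math.log(records[1976] / records[1903]) / (1976 - 1903)

    projected = records[1976] * math.exp(rate * (2023 - 1976))
    # ~761 mph is roughly Mach 1 at sea level.
    print(f"annual growth factor: {math.exp(rate):.3f}")
    print(f"'projected' 2023 record: {projected:,.0f} mph (~Mach {projected / 761:.0f})")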

These arguments were tiresome even before the enlightenment.


There are physical limits posed by air resistance. There is no indication we (humans) are anywhere near the physical limits of intelligence.


This is a statement of faith. And we both know where statements of faith go since 1685.


If anything it is a statement of faith to say that, miraculously, human intelligence happens to be the physical limit of intelligence.


After Putin, Xi, Trump, and Boris Johnson, we should give the AIs a chance.


> Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations.

This is the most ridiculous thing I've ever heard claimed about AI. They finally have a crappy algorithm that can sound confident, even though over half its answers are complete bullshit. With this accomplishment, they now expect in 10 years it will be able to do any job, better than an expert. This is some major league swing-for-the-fences bullshit.

And that's not the worst part. Let's say the fancy algorithm has real pixie dust that just magically gives better-than-expert answers for literally any question. That still leaves it to the human to ask the questions. How much do you want to bet a police force won't use AI by submitting a random picture of a young black male suspect and asking it "What is the likelihood this person has committed a crime, and what was the crime?" The AI just interprets the question and answers it, and the human just accepts the answer, even though the premise is ridiculous.

We won't create a real intelligent AI any time soon. But even if we did, the AI being perfect or not isn't the problem. The problem is the stupid humans using it. You can't "design", regulate, govern, etc your way out of human stupidity.


September 30, 2012: AlexNet achieves a top-5 error of 15.3% in ImageNet.

November 30, 2022: ChatGPT launches, two to three months after GPT-4 internal training has finished.

That's quite a lot of progress in 10 years. But somehow you are convinced no similar groundbreaking stuff will happen in the next 10 years.


We made the world's fastest plane and got to the moon in a span of about 10 years. Since then, nothing remotely as groundbreaking has occurred in flight or space travel. That was 54 years ago.

180 years ago the electric car was born, and about 105 years ago electric cars were the most popular car. Electric cars are finally becoming popular again but are still dwarfed by internal combustion.

Progress isn't linear, and the last 10% takes 90% of the effort. Without a very specific and heightened pace of work, the work will become monotonous and innovation will drag to a halt. It's not that we can't make groundbreaking work, it's that it's much harder and more expensive than we care for.


If anything, progress speed in AI has greatly increased. The AlexNet architecture combined CNNs and backpropagation, and used GPUs, none of which was new at the time. In contrast, GPTs use an entirely new architecture, the transformer. Progress is accelerating, not slowing down.

By the way, space was stagnant because there was little commercial investment for decades. Most development was done by NASA themselves (space shuttle), which turned out to be highly ineffective. But now there is a lot of commercial activity. And relatively groundbreaking progress has occurred already: SpaceX has built a reusable lower stage for their medium lift rocket, which strongly reduces their launch cost. As a result of this cost reduction, they are currently building a global satellite internet network in low orbit with low latency and high speed. And they are also working on a rocket which has a similar payload capacity to the Saturn V, while being fully reusable. The cost will be a tiny fraction.

In the field of AI, too, the commercial research is currently taking off. Microsoft is pouring billions into OpenAI, and Google is racing to keep up.

Nothing points at a slow down.


Progress cannot accelerate infinitely. Literally nothing in the universe does. It accelerates until it slows down again. It's going to slow down again, and we won't reach anything like what these claims pretend in 10 years, or even 20.

Most technological progress is based on one of two things: 1) obvious commercial viability combined with a strong economy, copious capital, and a novel business solution that saves labor cost+time, or 2) warfare. We're not in a large-scale war right now, so that one's out. The economy is slowing down, and we're on the down-swing of the hype cycle for AI, seeing as everyone has bought something called "AI" but there's no new business value. Without a real business case or a decent war, the investment in innovation is going to falter.

The 3rd way we get technological innovation is after decades of very slow incremental progress. That's what led to GPT. But just because decades of research eventually lead to something useful doesn't mean it's immediately going to lead to yet more utility in the near future.

Another example: the elevator. The parts that comprise a simple elevator - a rope, pulley, and ratchet - have all existed for over 2,000 years. And for 2,000 years, people have wanted to lift heavy things high in the sky. But it was always dangerous because the rope would unexpectedly break, so nobody used them. Until one day, some random guy in the 19th century combined both a ratchet and a rope+pulley, and suddenly it was safe to lift things very high. All it took was one moment for something we could already have done before to become a viable product.

That was 171 years ago. So how come that one innovation didn't shortly lead to a space elevator? We've known since 1895 that it's possible, and as far as we can tell it seems pretty straightforward how to do it. But still nobody's done it. Why? Not because it's impossible, but because there simply isn't enough money and will to do it. Progress doesn't happen just because it can happen.

Doesn't matter if you believe me or not. Check back at this thread in 10 years and tell me I'm wrong then. I won't be.


Well. The forecasting community estimates general intelligence (incl. robotics and passing a 2 hour Turing test) for 2031.

https://www.metaculus.com/questions/5121/date-of-artificial-...

I don't know what would change your mind.


More cynically, this could be a claim that half of the expert's answers are also BS.

Does anyone feel confident in their ability to disprove that one?



