
What about the glaring safety implications of the custody of this power being in the hands of a relatively small number of people, any of whom may be compelled at any point to divulge that power to those with bad intentions? Secretly?

Conversely, if all actors are given equal access at the same time, no such lone bad actor can be in a position to maintain a hidden advantage.

OpenAI's actions continue to be more than merely annoying.



That doesn't make sense to me. Would you rather have it in the hands of people who think a lot about safety, but might be compelled to give it to bad actors, or would you rather just give it to bad actors right away?

It's not a zero-sum game where you can level the playing field and say everything's good.


I'd rather have it in the hands of everybody so that we can decide for ourselves what this means for safety, everyone can benefit from the new technology without restriction, and so that we are not dependent on someone else's benevolence for our protection or for access to powerful new technology.

Leveling the playing field won't instantly make everyone safe, but leaving it uneven certainly doesn't either.


It's not clear to me how your argument would work for GPT-4 when it's clearly not reasonable for nukes.


We elect the people with the nukes (in theory). Don't remember electing OpenAI.

Ditto for the sewage/water system or other critical infrastructure.

Not saying OpenAI needs to be elected or not, just expanding on what (I think) they meant.


This is the same argument people use against the 2nd amendment, but it fails for similar reasons here.

If we accept that the public having access to GPT-4 carries the same level of risk as the public having access to nukes would, then I'd argue we should treat GPT-4 the same way as nukes and restrict access to the military only. I don't think that's the case here, though; since the risks are very different, we should be fine with not treating them the same.


The counter for nukes is nobody should have nukes. Anybody trying to build nuclear weapons should be stopped from doing so, because they're obviously one of the most catastrophically dangerous things ever.

At least with ai you can cut the power, for now anyway.


We can use nukes to generate EMPs to take out the AI


Nonproliferation is practical with nuclear weapons.

With something that can be copied as trivially as an LLM, that isn't possible.

So in this scenario, one could argue that ensuring equitable distribution of this potentially dangerous technology at least levels the playing field.


It's not practical. The NPT is worthless, because multiple countries just ignored it and built their nukes anyway.

North Korea is dirt poor and they managed to get nukes. Most countries could do the same.


It does. Mutually Assured Destruction (MAD)

https://en.m.wikipedia.org/wiki/Mutual_assured_destruction


That's not everyone. That's major strategic powers. If everyone (in the literal meaning of the term) had nukes we'd all be dead by now.


The nuke analogy only applies if the nukes in question also work as anti-nuclear shields. It's also a false equivalency on a much broader fundamental level. AI emboldens all kinds of processes and innovations, not just weapons and defence.


AI of course has the potential for good—even in the hands of random people—I'll give you that.

Problem is, if it only takes one person using AI in a malevolent fashion to end the world, then human nature can unfortunately be relied upon to supply that person.

In order to prevent that scenario, the solution is likely to be more complicated than the problem. That represents a fundamental issue, in my view: it's much easier to destroy the world with AI than to save it.

To use your own example: currently there are far more nukes than there are systems capable of neutralizing nukes, and that comes down to the complexities inherent to defensive technology; it's vastly harder.

I fear AI may be not much different in that regard.


It's not a false equivalency with respect to the question of overriding concern, which is existential safety. Suppose nukes somehow also provided nuclear power.

Then, you could say the exact same thing you're saying now... but in that case, nukes-slash-nuclear-energy still shouldn't be distributed to everyone.

Even nukes-slash-anti-nuke-shields shouldn't be distributed to everyone, unless you're absolutely sure the shields will scale up at least as fast as the nukes.


I wonder how this would work for nuclear weapons secrets.


I think it's okay to treat different situations differently, but if someone were able to make the case that letting the public have access to GPT-4 was as risky as handing the public all of our nuclear secrets I'd be forced to say we should classify GPT-4 too. Thankfully I don't think that's the case.


But if this tool is as powerful as Microsoft says, then an average nuclear physicist in a hostile state will now be more easily able to work out your nuclear secrets (if they exist)?

I'm actually starting to wonder how long these systems will stay publicly accessible.

On the other hand, people might be able to use these machines to gain better insights into thwarting attacks... seems like we're on a slippery slope at the moment.


My guess is that eventually our devices will get powerful enough, or the software optimized enough, that we can build and train these systems without crazy expensive hardware, at which point everyone will have access to the technology without needing companies to act as gatekeepers.

In the meantime, I expect our every interaction with this technology will be carefully monitored and controlled. As long as we have to beg for access to it, or are limited to what others train it on, we'll never be a threat to those with the money and access to use these tools to their full potential.

I think universities might help bridge the gap though, as they have in the past when it came to getting powerful new technology into the hands of the not-quite-as-privileged. Maybe we'll see some cool things come out of that space.


People who think a lot about safety are the bad actors when 1. there are incentives other than safety at play and 2. nobody actually knows what safety entails, because the tech is so new.


> What about the glaring safety implications of the custody of this power being in the hands of a relatively small number of people, any of whom may be compelled at any point to divulge that power to those with bad intentions? Secretly?

What you are looking for is a publication known as "Industrial Society and Its Future"


More commonly known as “The Unabomber Manifesto” [1]

> 1995 anti-technology essay by Ted Kaczynski… contends that the Industrial Revolution began a harmful process of natural destruction brought about by technology, while forcing humans to adapt to machinery, creating a sociopolitical order that suppresses human freedom and potential.

[1] https://en.wikipedia.org/wiki/Unabomber_Manifesto


Available for free online in many places, for example:

https://theanarchistlibrary.org/library/fc-industrial-societ...

I agree very much with Teddy about the problem but I don't condone his solution. I don't have a better one though.


> 172. First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary.

> 174. On the other hand it is possible that human control over the machines may be retained. In that case the average man may have control over certain private machines of his own, such as his car or his personal computer, but control over large systems of machines will be in the hands of a tiny elite-just as it is today, but with two differences. Due to improved techniques the elite will have greater control over the masses; and because human work will no longer be necessary the masses will be superfluous, a useless burden on the system.


I always thought a good addendum to 174 would be that the AI will be compelled to generate extremely effective propaganda to convince the non-elite that this situation is good.


I'm sure you can come up with something that doesn't involve murdering innocent people


I would sure hope so, but so far I haven't seen anything convincing. The industrial machinery keeps marching on.

At this point I'm predicting that the transition to renewables will fail due to the enormous costs involved (aside from transportation there are also things like converting metal industries to electric), combined with the declining EROEI of fossil fuels eventually making extraction too expensive to maintain expected outputs.

It's still somewhat far into the future, but it seems to be happening, which is a comfort from the perspective of Ted's insights; on the other hand it's not going to be any less violent, even though it would happen as an unintended side effect rather than through conscious effort.

People will once again need to become skillful in multiple areas, compared to the current specialization economy where every person is pretty much useless unless part of the "machinery".


> murdering innocent people

If you are referring to the bombing campaign, that was a publicity campaign for the manifesto, not related to the content of the manifesto.

I don't think the manifesto itself advocated violence.


Indeed.

193. The kind of revolution we have in mind will not necessarily involve an armed uprising against any government. It may or may not involve physical violence, but it will not be a POLITICAL revolution. Its focus will be on technology and economics, not politics.


I don't really understand... pretty sure he wasn't worried about "safety implications" in that. Is this just a snarky thing? Like having any kind of critique of technology means you must be allied with the Unabomber?

People have spilled a lot more ink than that on this subject! And most of them weren't also terrorists.



