
The people who want to wax philosophical about AI generally have no idea how it works or what it's capable of. People working in the area do know (ok Mr pedant, the weights themselves are a black box, but what is being modeled isn't) and aren't concerned. You can't really have productive conversations between the two groups because the first group has too much to learn. The internet as a concept is comparatively simpler, and we all know how clumsy governments are with it.

What people should certainly think about is how AI will impact the world and what safeguards we need. Right now it looks like automation is coming for some more jobs, and we might get an AI output spam problem requiring us to be even more careful and skeptical on the internet. People scared of changes they don’t personally understand aren’t going to ever be able to suggest meaningful policies other than banning things.



It is literally not true that no one who works on this stuff is worried about it.

https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/#...

> The median respondent believes the probability that the long-run effect of advanced AI on humanity will be “extremely bad (e.g., human extinction)” is 5%. This is the same as it was in 2016 (though Zhang et al 2022 found 2% in a similar but non-identical question). Many respondents were substantially more concerned: 48% of respondents gave at least 10% chance of an extremely bad outcome. But some much less concerned: 25% put it at 0%.


Dang, you changed your comment between when I started my reply and when I sent it. For context, it was originally asking whether I thought the current path and model of AI development had a small chance of causing a catastrophe down the line, or something like that.

I don’t know how to answer that question because I only care what AI development looks like now and what’s possible in the practically foreseeable future, which I don’t think will cause a large catastrophe at all.

I don’t think deep learning, transformer models, GANs, gradient-boosted decision trees, or minimax with alpha-beta pruning will cause catastrophes. I don’t wring my hands about a completely uninvented and hypothetical future development until it’s no longer hypothetical, by which I don’t mean once it’s already causing problems, but once it’s actually something people are working on and trying to do. Since nothing even resembles that now, it wouldn’t be productive to worry about, because there’s no way of knowing what the threat model is or how to address it: it’s reasonable to consider Ebola becoming as transmissible as the common cold, but it’s unproductive to worry about silicon-based aliens invading Earth and forcing us to become their pets.

I think the issue is that people assume AI researchers and engineers are sitting in dark labs not talking to each other, when there’s actually a lot of communication and development you can follow. It’s not people coming out of nowhere with radically different approaches and shipping them by themselves; it’s highly iterative and collaborative. Even if that did happen, which it never does, there’d be no way to stop that individual without creating a dystopian panopticon, since it’s basically terrorism. You can be sure that if the people actually working on AI get worried about something they’ll get the word out, because they do think about potential nefarious applications - it happened years back with deepfakes, for example.


Some people working on AI have been raising the alarm.


Ok, you’ve completely changed your comment several times now and I’m not going to keep updating mine in response. I’m currently responding to some survey of NeurIPS participants regarding the long-run (negative) effects of advanced AI on humanity.

A median estimate of a 5% chance of something really bad in the long run doesn’t worry me personally, and it’s a hypothetical concern that isn’t actionable. I’ll be concerned when there exists a well-defined issue to address with concrete actions. I’m already concerned that the development of AI will likely result in everything on the internet needing to be tied to a personal identity to be distinguishable from spam, but I’m also confident we’ll find a good solution to that problem.


Right, so you just come to a different conclusion on the risk-acceptance level.

You don't believe there's no risk, nor do you actually believe that people working close to AI believe there's no risk. You just choose to accept the risk.

Obviously that's your prerogative, but it should be clear why it's wildly dishonest to portray anyone who's concerned and arrives at a different risk-acceptance level as ignorant.

Also, "we don't know what to do about the risk" != "only ignorant people think there's a risk."


> People scared of changes they don’t personally understand aren’t going to ever be able to suggest meaningful policies other than banning things.

True, but those same people will also have a huge effect on how these things are developed and implemented.

One thing I'm finding remarkable is how dismissive AI evangelists are of these people. That's a serious mistake. If their fears are based on ignorance, then it's very important that those fears are addressed by educating them.

AI evangelists are not doing enough actual evangelism in this sense. Instead of addressing fearful people rationally with explanations and clarifications, they are simply dismissing these people's fears out of hand.



