
I get that AI is basically a problem-solving machine that might eventually adapt to solve generic problems and thus reach the ability to break out of its box. But so what? Even if it manages all that, that doesn't make it sentient, and it doesn't make it a threat to mankind. It only becomes scary and dangerous when it is sufficiently motivated, or, in actuality, when we project our humanity onto it.

If we ever seriously try to create an artificial consciousness, it might need to be embodied, because we are embodied and seem to have developed consciousness through evolution, which is a thoroughly physical process. Looking at it from this perspective, one might say that if we keep the AI in its box, it will never have a need for consciousness and therefore will never gain it.


This reply puzzles me somewhat. The first half doesn't seem to relate to the post it's replying to.

How aware are you of the main arguments around AI X-risk, like orthogonality? Or of how an optimising process that makes efficient use of information does not (in theory) need "consciousness" or "sentience" to be lethal?

And on a separate tangent, are you aware that people are already building primitive agents by connecting LLMs (given an initial prompt) in a loop with feedback from the results of their actions?
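For anyone unfamiliar, the pattern is roughly the loop below (a minimal sketch in Python; call_llm and run_action are hypothetical stand-ins for a real model API and a real tool/environment, not any particular library):

    # Minimal agent loop: the model proposes an action, we execute it,
    # and the observed result is fed back into the prompt for the next turn.

    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for a call to a language-model API."""
        return "SEARCH: recent AI safety papers"  # canned reply for illustration

    def run_action(action: str) -> str:
        """Hypothetical stand-in for executing the chosen action (search, code, ...)."""
        return f"(observation for: {action})"

    def agent_loop(goal: str, max_steps: int = 5) -> str:
        history = f"Goal: {goal}\n"
        for _ in range(max_steps):
            action = call_llm(history)        # the LLM decides what to do next
            observation = run_action(action)  # act, observe the result
            history += f"Action: {action}\nObservation: {observation}\n"
        return history

    print(agent_loop("summarise the main AI X-risk arguments"))

Note that nothing in this loop requires consciousness; it only requires that the model's outputs, fed back through the world, keep pushing toward the goal.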


Hmm, I see someone has not played enough Universal Paperclips.


I understand the sentiment but I disagree. It's just another disruptive piece of technology that people will have to adjust to. If no one can trust a digital face, then using this tech for scamming purposes will simply fizzle out after an initial "adjustment period". I know that sounds rough, but society might react differently than you think, i.e.: with digital faces being useless for identifying people, and maybe even becoming creepy or unsettling because of the implied fakeness, meeting people in real life (opening bank accounts, transactions in general, anything where trust is valuable) will gain value. Don't fight progress in the hopes you can one day become complacent.


It will fizzle out right after fraud involving the telephone or email dies out.


I don't see a difference between this and any other technology. So if we can frown upon a nation for developing nukes, we can frown upon a group of people for developing this.

On the other hand, somebody can say "if we don't, then others will". So regulate and ban the technology, then.


"If we don't, then others will" is a bankrupt argument put forth by people who are trying to justify doing things they know are harmful. It would be more legitimate if it were "we won't, but others will".


This is what I was trying to point to, and why I added the "ban it, then" argument. There's a considerable number of people who operate with the "if it's not illegal, then I can do this" mindset, and there's a big intersection between these two mindsets.

At least putting a ban on it will make people think, I hope.


Agreed.

I'm not sure how I feel about "ban it" -- I can argue that either way -- but I do think that arguments that banning something is pointless because people will do it anyway are misguided.

Regulations don't eliminate the effects of bad actors, but they do reduce the number and severity of them.


> So regulate and ban the technology, then.

When has that ever worked? What banned technology is conceived of but not developed because some government entity said not to?

I think the "if we don't, then others will" is part of the natural progression of technology. Whatever the next logical step of development is, that's where development efforts will flow. Some might not want to go there, but some will. Someone banning it will likely only fan the flames and drive more interest into the space - ala the "Streisand Effect".

When the US government (et al.) labeled certain numbers "illegal" because they could be used to break DRM or certain encryption schemes, academia and hackers alike openly mocked the notion. T-shirts, stickers, and websites sprang up, further spreading this "illegal" knowledge. People who had no idea how a number could be so "dangerous" suddenly wanted to know. Telling people they can't know or do something will absolutely drive them toward that knowledge.

The hacker mentality often answers the question "why?" with "because I can". Saying "you cannot" only encourages more people to jump in.


I agree that banning things is useless in the long run, but at least some people will think about why it might be banned.

I was thinking of throwing a wrench into the machine by saying "ban it", to make it stutter, like breaking the chain of obedience in the Milgram experiment, or like the woman who walks up to Zimbardo and stops the Stanford Prison Experiment (see his TED talk).

Because, as a hacker and programmer, I believe that we have ethical obligations, and the "we're doing something amazing, we need no permission" stance in these communities genuinely worries me.

Technology is not only technology; it affects people's lives. Anything which can damage lives beyond a certain point by exploiting human nature is in the same category for me.


Trying to protect low security by censorship is a repetition of history: https://www.zdnet.com/article/chilling-effect-lawsuits-threa...

It's like banning the Iliad because it describes the Trojan Horse.


I think you're missing the point here, because I'm not saying anything along those lines.

What I'm saying is that this is a dual-use technology, and dangerous beyond a certain point, so it might need to be regulated or banned at the end of the day.

In an ideal world we wouldn't need this, but we don't live in that ideal world.

For example, I can't independently develop an amateur rocket that can land in an area of my choosing by actively steering itself, beyond a certain accuracy and precision, because that would be a homing missile. In the same vein, I can say that this technology can be used to harm other people.

Or, I can't get some enriched uranium to build myself a small, teapot-sized reactor to power my house during power outages.

Can we say that we're censoring research in these areas too, because they're low security things?

The same goes for the latest A.I. developments. However, I'm a bit too busy to open those cans today.


Nuclear technology is low security indeed, but it's a technical problem, and uranium isn't exactly an abundant element. Untrusted data is a problem of stupidity in comparison. But stupidity causes problems with any technology. It's a "spoons can harm people" tier problem.


When a technology isn't regulated or banned under dual-use restrictions, it's possible to buy anything given the right price. I'm sure that, while expensive, I would be able to get the required equipment for the right money from the usual suspects (i.e. I'm sure there would be microcontroller boards for controlling reactors up to 200 kW, or servo kits for 12-fuel-rod, 12-control-rod configurations, from Adafruit for example).

> Untrusted data is a problem of stupidity in comparison.

In the past, wrong data showed itself through a lack of coherence. With advanced misinformation operations, it has almost become an alternate reality game. A.I. today allows us to generate convincing lies at the push of a button. I can only imagine what kind of misinformation bubbles can be built with technology like that.

These technologies attack the lowest-level instincts of humans, ones we have deemed utterly reliable for thousands of years. In my mind they are on the same level as the manipulative algorithms. I put them into the dangerous and harmful category.

This is not a case of stupidity. This is plain old manipulation, of a very dangerous kind.

Downplaying this is not wise.


That a law is broken by people does not mean we don't need it.


I say frown away! We all know what this is used for, and it's tasteless at best. But I like to think in solutions, so instead of trying to arms-race this tech into submission with ever-cleverer deepfake detectors (and thus be able to regulate it), why not accept its existence and change ourselves? Not least of all because any digital tech is like a permanent mutation to the digital DNA of our global society.


> why not accept its existence and change ourselves?

That works too, like how we accept the algorithms which tie people to screens and are used to spread misinformation and manipulate the masses.

> We all know what this is used for, and it's tasteless at best.

Yeah, like interview fraud, misinformation campaigns and what not. These are tasteless, but not harmless.


> So regulate and ban the technology, then.

Except for criminals and the CIA.


> Don't fight progress in the hopes you can one day become complacent.

I'm not convinced this represents "progress". That aside, the goal of resisting it isn't to allow you to become complacent some day. The goal is to avoid being harmed.


Disruptive technology disrupts. Society and culture as well as markets.

How many generations did it take for society to adapt to the industrial revolution? What were the spasms which occurred during that period?

As for deepfakes, we know before we even begin that they break (utterly moot) social norms around identity, trust, and reputation.

How many people are going to die while societies adapt?

Is that price acceptable for the benefit of more amusing viral videos on TikTok?


I love this. I dabble a bit in electronics as well as tabletop wargames and have always dreamed of having an interactive table with "moving" terrain, doors that open/close, LEDs indicating the status of an objective, rotating walkways, that sort of thing. I think if your plynth interface can be open-sourced, or even licensed, you could do more than just play card games. How about tracking your games that support the interface? Wins, losses, draws, rewards. I'm not sure what the storage capacity of the cards is, but I see possibilities more in the form of combining Amiibos with Steam's social features like achievements and the workshop.


I have no experience with Epicor but I similarly work in IT support and regularly get scheduled to "oversee" updates like this as well. Don't panic; get in contact with a couple of chill guys or gals who use Epicor and can help you test the app, so you can at least confirm whether the update was successful or not. Anything else really isn't your problem.


Thanks for sharing. "Don't panic" is the theme of this thread, and just hearing that helps. I have to remember that my position in this whole scenario is to just be available and take problems as they come.


"We're not deplatforming you, but basically, you can't use this platform, and if you were, please get off the platform"

This is truly the darkest timeline. What happened to "information wants to be free"? You're just participating in making yet another gulag archipelago.


It looks like we're getting closer and closer to making this a reality. Recently (this year) some other people published a sort of reworking of the Alcubierre drive [1]; I wonder how much better (if at all) this paper will fit with that. For those who'd like a more accessible overview of what it's about, see [2].

[1] https://arxiv.org/pdf/2102.06824.pdf [2] https://www.youtube.com/watch?v=8VWLjhJBCp0


I would fix it by removing it. I haven't once seen copyright used to protect "the little man" (not saying it never happens, but I'm a bit cynical when it comes to big corps). Why not abolish the idea of copyright and let stories, characters, etc. take their course? At first this will probably cause chaos as people scramble to churn out a metric ton of material that was once locked under copyright, but hopefully in the long run it will lead to an increase in the overall quality of material as people become more critical of the things they consume.



