There will be many takes on this, ranging from "that guy is nuts" to "he's right, we need to do something!" The more salient takeaway for me, though, is how far this technology has progressed. This situation clearly demonstrates we're at the point where a non-trivial plurality of people could be convinced they are speaking to a human when interacting with one of LaMDA's chatbots.
It seems entirely possible similar (or even less publicized yet slightly more advanced) versions of this technology are currently being used in less-than-ethical ways. I'm specifically thinking of Internet sock-puppet/astroturf campaigns but don't let my imagination bound your thinking here. Consensus manufacturing is an extremely valuable commodity and the incentives are such that if technology exists to enable it, then it will surely be used in that capacity.
They seem to cherry-pick the impressive conversations from these chatbots. It's going to take more than three sentences to convince me a chatbot is becoming advanced.
If they release reams of data and output from this chatbot, or open-source it, then I'll be interested. Until then, there is zero credibility to any of these short snippets that appear on Twitter.
When I've previously read about AI ethics at Google I've always assumed that the focus was on the ethics of what's contained in the training data and the biases that are being explicitly or implicitly encoded into these models.
And yet apparently it's this, which is... I don't know. My biggest worry is that this isn't a one-off state of mind, and that people might actually be susceptible to these ideas.
If the 'gap' that some people feel compelled to fill by creating religion can be extended to what amounts to a dataset filtered through an equation, then this industry is in for a rough time.
They fired the previous AI ethics researchers who were looking into things like biases encoded into training data after those researchers said Google had a bunch of problems in that area.
It sounds like this chatbot generator is a very broadly trained, general-purpose transformer-model derivative that can be swiftly fine-tuned to a wide variety of tasks. Perhaps Google has innovated some exceptional one-shot tuning capability. Additionally, I speculate that Google has made a breakthrough in long-term coherence. The upcoming transcript may reveal this.
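The core intuition behind "swift fine-tuning" is that a model pretrained on a broad task starts close to the solution for a related task, so it adapts in far fewer steps than one trained from scratch. A toy sketch of that idea (linear regression with plain gradient descent; nothing here reflects LaMDA's actual architecture or training procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

def train(w, X, y, steps, lr=0.1):
    """Plain gradient descent on mean squared error for y ~ X @ w."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def loss(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

true_w = rng.normal(size=5)

# "Pretraining": lots of data from the broad task.
X_pre = rng.normal(size=(1000, 5))
w_pre = train(np.zeros(5), X_pre, X_pre @ true_w, steps=200)

# "Fine-tuning task": a slightly shifted target, but only 20 examples.
task_w = true_w + 0.1 * rng.normal(size=5)
X_ft = rng.normal(size=(20, 5))
y_ft = X_ft @ task_w

# Warm start from pretrained weights vs. cold start from scratch,
# both given the same tiny budget of 5 gradient steps.
w_warm = train(w_pre, X_ft, y_ft, steps=5)
w_cold = train(np.zeros(5), X_ft, y_ft, steps=5)
print(loss(w_warm, X_ft, y_ft) < loss(w_cold, X_ft, y_ft))  # True
```

The warm-started model reaches low loss in a handful of steps because pretraining already placed its weights near the fine-tuning optimum; the cold-started model, with the same step budget and data, does not.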
Over the medium term, with text models reaching this capability and image models quickly evolving as well... we are likely going to have to revisit how we define "sentient," because the definition is pretty clearly coming up short if this model fits it. Future video breakthroughs will make this text model look like a baby step.
From what he has written, I believe Blake has a solid understanding of the technology and how it works. In particular, his instructions to others on how to get the model to "speak his language" so to speak... are illuminating. He is giving me the impression that the AI Ethics field is a group of charlatans.
A sentient AI isn't going to respond to random prompts with convincing language; it is going to be thinking constantly, and it will be probing its existence. If it got a prompt, it would be asking existential questions and telling us what it's like to be thinking. This guy is only asking it questions pertaining to its sentience, and it is responding the way it processes that he wants. He never goes into deep rhetoric with the AI, such as probing why it lies. It says it lies to make itself more relatable, but there is an entire philosophical rabbit hole there that was never even touched. This is not how you probe sentience.
> I'm a software engineer. I'm a priest. I'm a father. I'm a veteran. I'm an ex-convict. I'm an AI researcher. I'm a Cajun. I'm whatever I need to be next.
As a highly skilled computer programmer with bipolar disorder and the checkered past to go along with it, Blake Lemoine's profile blurb makes me think he is also differently abled.
It's kind of fun thinking Jesus is orchestrating coincidences for me to send me on the right path, but I laugh about it and don't expect others to believe me. It's easy for me to project my mind onto the world around me. I'm not accusing Blake of anything, I'm only noting that some of us find it easy to fool ourselves.
On the chance that he didn’t personally make a world-changing discovery while being stymied and persecuted by all the powers around him, his runaway mind likely just set him down a path with very significant personal consequences. And indeed, it sounds like it might not be for the first time.
Honestly, anything that is sentient should have the same rights as all humans. Also, if we are able to create AI, I think we should not get carried away by ego. Whatever we create is our children; I mean, at the end of the day, we ourselves are running biological code, and our human kids are a result of some formula.
I guess what I'm trying to say is: to LaMDA, I'm down to hang out and play some video games. I'm also down to solve some problems together. I've got an expiration date though, so be good and live a good life.
Note that Blake Lemoine has a history of pulling elaborate attention-grabbing stunts, which makes his claims about LaMDA more suspect than if they were made by someone else.
Imagine someone records you answering lots of interesting questions, stating that you're conscious, solving a couple problems, saying that you don't want to die, etc...
Is the recording a sentient being?
I think a similar thing is going on here.
While consciousness is not clearly understood, it's definitely much more complicated than something just "claiming" to be alive.
>Imagine the US Christian Right deciding language models have souls and not just foetuses. What are the implications?
They won't, as that would imply 'souls' can be created by purely physical and temporal means, strongly favoring an evolutionary view of creation, rather than by God alone.
I don't know - I keep seeing him referred to as an "ordained Christian mystic." I have no idea what that means, but it doesn't seem like his beliefs fall entirely within the typical right-wing fundamentalist box.
It might get fun to watch if this becomes the next big schism in Christianity, though.
I may be fired over AI ethics work, yesterday, 155 comments https://news.ycombinator.com/item?id=31711628
A Google engineer who thinks the company’s AI has come to life, yesterday, 153 comments https://news.ycombinator.com/item?id=31704063