That excerpt reads to me like writing, not conversation. Someone spent some time polishing it. I know people who can talk like that extemporaneously, but I'd wager 99% of native English speakers wouldn't pass if that's the bar.
True. What the example highlights is that the Turing Test is not about 'simulating any old conversation' but is specifically about holding a convincing conversation with a human 'judge' who is likely to take the conversation in a complicated direction if they are taking their role seriously.
Remember that the imitation game that forms the foundation for the Turing test pits a man against a woman, with the man trying to pass as the woman. Allowing speech would normally give the man away, since the voices differ - it was therefore suggested that the test be performed in writing.
Well obviously humans are more slovenly than that. Though talking was never a requirement, and indeed the Turing test could be run through a succession of emails. Or perhaps a forum like HN. So it's not unreasonable.
Though you're right, and if a computer were to try to imitate a human, a better strategy would be to be about as slovenly as my post is.
I thought that was an obvious example of a computer system - it was a labored and overly detailed description. I would have immediately flagged it as a machine, not a human.
Well OK, but if you had a conversation like that with a bot, would you be prepared to consider the bot as being conscious? That's the deeper question the Turing Test is really about, rather than human/not human.
You know - Marc Andreessen was tweeting about this today - and he held the same view as you. But every book I've read on Turing, and every article I've read on the Turing test, suggests that the entire idea behind the Turing test was not to get caught up in concepts such as "thinking" or "intelligence", but simply to posit a test of whether a machine could imitate thinking behavior. This, then, provides a nice unambiguous target for research and development, without getting bogged down in the semantics of the conversation.
Perhaps the lesson is that no matter what your starting intention, the rules of a competitive game are going to be optimized for. You might start out with the goal of creating a general test of physical prowess. A 50m sprint, weight lifting contest or even a wrestling match is too specific so you invent a general game where strength, speed, endurance, etc. matter and you call it rugby.
If no one had heard of the rules in advance, rugby would be a pretty decent test of general physical prowess. Maybe not 100% perfect, but out of a population of 1000, the 50 best rugby players would probably match most people's top 50 list of physical specimens well enough.
But once you have people training and optimizing for it, you find that (a) training for rugby specifically matters, and (b) rugby optimizes for a particular set of physical characteristics.
Chat bots designed to win the game are basically designed to fool people into thinking that they're human because that's the game. It isn't really a good proxy for consciousness.
But if I was talking to a bot and it was able to hold a conversation as complex as the one above, for a long time and without glitches etc, I'd be prepared to consider it 'conscious'. You have to consider how difficult it would be to pass a Turing Test _reliably_ with a decent judge who took the conversation in interesting directions.
re: Chat bots designed to win games: Some say that's exactly what we are! - The Social Brain Hypothesis of the evolution of human intelligence suggests that the reason our brains grew so big was that intelligence (via ability to deal with social groups) became a large factor in reproductive success.
I guess what I am saying is that I think the Turing test was a way to demonstrate an idea without being able to define it specifically.
I think the focus on Turing tests is interesting and has definitely expanded knowledge in this area. But it is now an area within the search for artificial consciousness. It no longer works as a test for consciousness the way it would if a computer just happened to stumble onto the test and pass it.
That said, I do think that where we are vis-à-vis the Test is a cool benchmark. I would be over the moon if one of the Turing bots got to the point where it could do a job, like being a customer support bot.
Hopefully someplace slightly north of the Turing test goal post there will be commercial goal posts to encourage development, perhaps a conversational user interface. A convincing chatbot as a user interface would present a lot of very interesting challenges.
Well yes, there are kinda two different views of the Turing Test:
1) Consciousness is really hard to define so the Turing Test is a handy workable yardstick that AI can use as a milestone until we get a proper working definition of consciousness
2) (Hard-AI, behaviourist position) Appearing to be conscious and being conscious are the same thing. Hence the Turing Test is about as good a definition of consciousness as we are ever likely to get. Perhaps it could be tightened up a little - insisting on really long conversations with lots of complexity etc. But a good judge running a test over a longish time period would see to that.