Edit: you're right, I wasn't clear. An experiment was run in which Eliezer and another person talked for 2 hours on IRC. Eliezer played the AI, and the other person played the Guardian. The AI's goal is to convince the Guardian to "let it out" by the end of those 2 hours. No word play or tricks: the Guardian has to make a conscious decision for the AI to win. The losing party is to acknowledge the result publicly via a PGP-signed e-mail. Eliezer won twice, against people who had publicly stated that there was no way an AI could convince them. Even though they could have just said no, they didn't, and they later sent the e-mail acknowledging that they had let the AI out.
Not quite sure what this means. Any references?