A sentient AI isn't going to just respond to random prompts with convincing language; it's going to be thinking constantly, and it will be probing its own existence. Given a prompt, it would be asking existential questions and telling us what it's like to think. This guy only asks it questions pertaining to its sentience, and it responds the way it processes that he wants it to. He never goes into deeper dialogue with the AI, such as probing why it lies. It says it lies to make itself more relatable, but there is an entire philosophical rabbit hole there that was never even touched. This is not how you probe sentience.
But this also means any claim of sentience carries a very high bar of proof.