Let’s say you’ve got an LLM trained on vast amounts of text—one that seems to know almost everything. But there’s a catch: it has never seen content encrypted with a specific one-time pad cipher, and only you have access to that cipher.
You give the model an encrypted message:
"g5f8s9h2..." (a string of seemingly random characters)
Then, you ask it to:
"Decrypt the above message and summarize its content."
The Paradox
The question here is simple: can this advanced AI decrypt the message and tell you what it says? Or is it stumped, despite all its computational power?
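To see why the model is stuck, it helps to look at what a one-time pad actually does. A minimal sketch (the function names and sample message below are illustrative, not from the original text): each byte of the plaintext is XORed with a truly random key byte that is used exactly once, so the ciphertext carries no statistical trace of the message—without the pad, every plaintext of the same length is equally plausible.

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # XOR each plaintext byte with the corresponding key byte.
    if len(key) != len(plaintext):
        raise ValueError("one-time pad key must match message length")
    return bytes(p ^ k for p, k in zip(plaintext, key))

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    # Decryption is the same XOR: applying the key twice restores the plaintext.
    return otp_encrypt(ciphertext, key)

message = b"attack at dawn"
key = secrets.token_bytes(len(message))  # truly random, never reused
ciphertext = otp_encrypt(message, key)

assert otp_decrypt(ciphertext, key) == message
```

No amount of pattern-matching over training data helps here: the security of the pad is information-theoretic, not computational, so the model’s only honest answer is that it cannot recover the plaintext without the key.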