Okay, somebody posted a thread on Twitter explaining how this works...
The language model can generate Python scripts to solve certain text-processing tasks, then re-prompt itself by reading the script's output back into the model. Very clever!
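In rough pseudocode, the loop looks something like this (a minimal sketch of the idea only; `complete()` is a hypothetical stand-in for whatever model API they're actually calling):

```python
import subprocess
import sys

def complete(prompt: str) -> str:
    """Hypothetical placeholder for the language model API call."""
    raise NotImplementedError("wire this up to your LLM of choice")

def answer_with_python(question: str) -> str:
    # 1. Ask the model to write a Python script for the task.
    script = complete(
        "Write a Python script that prints the answer to the following "
        f"text-processing task, and nothing else:\n{question}"
    )
    # 2. Run the generated script and capture its stdout.
    #    (Running model-generated code is unsafe outside a sandbox.)
    result = subprocess.run(
        [sys.executable, "-c", script],
        capture_output=True, text=True, timeout=10,
    )
    # 3. Re-prompt the model with the script's output so it can
    #    phrase the final answer in natural language.
    return complete(
        f"Task: {question}\n"
        f"A Python script produced this output:\n{result.stdout}\n"
        "Answer the task using that output."
    )
```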
Other tricks include... prompting itself to look up Wikipedia entries, then re-prompting itself with snippets from the resulting Wikipedia page. Each user prompt is inserted into a template prompt with instructions to the model about the limits of its capabilities.
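The Wikipedia lookup plus templating might look something like this (again just a sketch: the template wording is my own guess at the mechanics, and I'm using Wikipedia's public REST summary endpoint as a stand-in for however they actually fetch snippets):

```python
import json
import urllib.parse
import urllib.request

# Hypothetical template in the spirit described: the user's prompt is
# wrapped in instructions telling the model what it can and can't do.
TEMPLATE = """You are a language model with no knowledge of events
after your training cutoff. If a question needs current facts, say so
instead of guessing.

Context from Wikipedia:
{context}

User: {user_prompt}"""

def wikipedia_snippet(topic: str) -> str:
    # Wikipedia's REST summary endpoint returns a short plain-text extract.
    url = ("https://en.wikipedia.org/api/rest_v1/page/summary/"
           + urllib.parse.quote(topic))
    with urllib.request.urlopen(url) as resp:
        return json.load(resp).get("extract", "")

def build_prompt(user_prompt: str, topic: str) -> str:
    # e.g. build_prompt("Summarize the French Revolution", "French Revolution")
    return TEMPLATE.format(
        context=wikipedia_snippet(topic),
        user_prompt=user_prompt,
    )
```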
https://twitter.com/goodside/status/1598253337400717313