Hey HN!
I recently built Prompt Reducer, an app that compresses GPT-4 prompts. The main goal is to cut the number of tokens in each prompt, and with it the cost of running GPT-4. I built it after @gfodor tweeted a prompt for having GPT-4 compress its own prompts (quoted below). It's still early and it doesn't work perfectly, but I'd love to hear any feedback or suggestions for how to make it faster or more efficient.
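To make the savings concrete, here's a minimal sketch of the cost math using OpenAI's tiktoken tokenizer. The example strings are made up, and the price is GPT-4's launch-era $0.03 per 1K prompt tokens (8K context), so treat the dollar figure as illustrative only:

    import tiktoken

    enc = tiktoken.encoding_for_model("gpt-4")

    original = ("Please write a short, friendly welcome email for new "
                "users of our product, keeping the tone casual.")
    compressed = "wrt short friendly welcome email, new users, casual tone"

    orig_tokens = len(enc.encode(original))
    comp_tokens = len(enc.encode(compressed))

    saved = orig_tokens - comp_tokens
    # GPT-4 8K context billed $0.03 per 1K prompt tokens at launch.
    print(f"{orig_tokens} -> {comp_tokens} tokens, "
          f"~${saved / 1000 * 0.03:.5f} saved per call")

The per-call savings are tiny on their own, but they compound quickly for prompts that get sent thousands of times.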
-- compress the following text in a way that fits in a tweet (ideally) and such that you (GPT-4) can reconstruct the intention of the human who wrote text as close as possible to the original intention. This is for yourself. It does not need to be human readable or understandable. Abuse of language mixing, abbreviations, symbols (unicode and emoji), or any other encodings or internal representations is all permissible, as long as it, if pasted in a new inference cycle, will yield near-identical results as the original text: --
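If you want to play with that prompt directly, here's a rough sketch of driving it through the openai Python package (v1 client). This is just an illustration of the technique, not how Prompt Reducer works internally; the function and variable names are mine:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # The full instruction is the one quoted above; truncated here.
    COMPRESS_PROMPT = ("compress the following text in a way that fits "
                       "in a tweet (ideally) ...")

    def compress(text: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user",
                       "content": f"{COMPRESS_PROMPT}\n\n{text}"}],
        )
        return resp.choices[0].message.content

Whether the compressed output actually reproduces the original behavior in a fresh context is the open question raised below, so it's worth A/B-testing both versions on your real task.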
There is no reason to think GPT-4 has any special knowledge about prompts, or about how to compress them so that it will treat the compressed version as equivalent to the original. It does an interesting job of faking it. But this is basically asking GPT-4 for a stylized version of "summarize the following:".