Instead of embedding the user prompt, I let the LLM invert it into keywords and search the embedding of that. It very much does feel like a magic bullet.
Using the LLM to mutate the user query is the way to go. A common practice, for example, is to take the chat history and rephrase a follow-up question that might not have much information density (e.g. the follow-up is "and then what?", which is useless for search, but the LLM turns it into "after a contract cancellation, what steps have to be taken afterwards?" or something similar, which gives the search a lot more meat to work with).
Using the LLM to mutate the input so it works better for search is a path that works very well (ignoring the added latency and cost).
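A minimal sketch of that rewrite step, assuming the OpenAI Python SDK (the model name and prompt wording are placeholders; any chat model works):

```python
# Sketch: condense a low-information follow-up into a standalone,
# search-friendly question using the chat history.
from openai import OpenAI

client = OpenAI()

def rewrite_followup(chat_history: str, followup: str) -> str:
    prompt = (
        "Given this conversation:\n"
        f"{chat_history}\n\n"
        f'Rewrite the follow-up question "{followup}" as a standalone, '
        "fully specified question suitable for search. "
        "Return only the rewritten question."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()

# "and then what?" -> something like "After a contract cancellation,
# what steps have to be taken afterwards?"
```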
I think OP means to filter the user input through an LLM with "convert this question into a keyword list" and then calculate the embedding of the LLM's output (instead of calculating the embedding of the user input directly). The "search the embedding" part is the normal vector DB step.
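Concretely, something like this (a sketch assuming the OpenAI Python SDK; the prompt wording and model names are placeholders):

```python
from openai import OpenAI

client = OpenAI()

def query_to_keywords(question: str) -> str:
    # Let the LLM distill the question into search keywords first.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{
            "role": "user",
            "content": f"Convert this question into a short keyword list:\n{question}",
        }],
    )
    return resp.choices[0].message.content

def embed(text: str) -> list[float]:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding

# Embed the LLM's output instead of the raw user input; the vector
# then goes into the usual vector DB nearest-neighbour search.
query_vec = embed(query_to_keywords("What happens after I cancel my contract?"))
```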
"Query expansion"[0] has been an information retrieval technique for a while, but using LLMs to help with query expansion is fairly new and promising, e.g. "Query Expansion by Prompting Large Language Models"[1], and "Query2doc: Query Expansion with Large Language Models"[2].
Ask the LLM to summarize the question, then take an embedding of that.
I think you can do the same with the data you store… summarize it to the same number of tokens, then get an embedding of that and save it alongside the original text.
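Roughly, for the document side (a sketch assuming the OpenAI Python SDK; the summary length and model names are arbitrary):

```python
from openai import OpenAI

client = OpenAI()

def summarize(text: str, max_words: int = 50) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{
            "role": "user",
            "content": f"Summarize in at most {max_words} words:\n{text}",
        }],
    )
    return resp.choices[0].message.content

def embed(text: str) -> list[float]:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding

def index_document(doc: str) -> dict:
    # Embed the summary, but keep the original text as the payload
    # so retrieval still returns the full document.
    return {"vector": embed(summarize(doc)), "text": doc}
```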
Test! Different combinations of summarizing LLM and embedding model can give different results. But once you decide, you are locked into the summarizer as much as the embedding generator.
I could not help but notice that the Contriever curve is so much higher in recall (the y-axis) than the other methods (figure 11 in https://arxiv.org/pdf/2307.03172.pdf).
My suspicion is you want some pre-logic, such as checking whether the user's question is dense enough and then using HyDE with the chat history. If anyone has more recent experience with Contriever, I would love to learn more about it!
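For reference, HyDE amounts to embedding a hypothetical answer instead of the question itself; a rough sketch with the chat history folded in (placeholder model names, and the density pre-check is left to whatever signal you trust):

```python
from openai import OpenAI

client = OpenAI()

def hyde_query_vector(question: str, chat_history: str = "") -> list[float]:
    # HyDE: have the LLM write a hypothetical answer passage, then
    # embed that passage instead of the raw question.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{
            "role": "user",
            "content": (
                f"{chat_history}\n\n"
                f"Write a short passage that plausibly answers: {question}"
            ),
        }],
    )
    hypothetical = resp.choices[0].message.content
    emb = client.embeddings.create(model="text-embedding-3-small",
                                   input=hypothetical)
    return emb.data[0].embedding
```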
BTW: I think of this like asking someone to put things into their own words, and then it’s easier for them to remember. Matching on your way of talking can be weird from the LLM’s point of view, so use their point of view!
It is two different language models. The embedding model tries to capture too many irrelevant aspects of the prompt, which ends up putting it close to seemingly random documents. Inverting the question into the LLM's blind guess and distilling it down to keywords makes the embedding very sparse and specific. A popular strategy has been to invert the documents into questions during initial embedding, but I think that is a performance hack that still suffers from sentence prompts being bad vector indexes.
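The document-side inversion mentioned there looks roughly like this: generate the questions a chunk answers, embed those, and point every question vector back at the original chunk (sketch only, placeholder names):

```python
from openai import OpenAI

client = OpenAI()

def questions_for(chunk: str, n: int = 3) -> list[str]:
    # Invert the document: ask which questions this chunk would answer.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{
            "role": "user",
            "content": f"Write {n} questions, one per line, that this text answers:\n{chunk}",
        }],
    )
    return [q.strip() for q in resp.choices[0].message.content.splitlines() if q.strip()]

def index_chunk(chunk: str) -> list[dict]:
    # Each generated question gets its own vector, all pointing back
    # at the same original chunk of text.
    entries = []
    for q in questions_for(chunk):
        emb = client.embeddings.create(model="text-embedding-3-small", input=q)
        entries.append({"vector": emb.data[0].embedding, "text": chunk})
    return entries
```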
My heuristic is how much noise there is in the closest vectors. Even if the top-k matches seem good, if the noise that follows has practically identical distance scores, it is going to fail a lot in practice. Ideally you could calculate some constant threshold so that everything closer is relevant and everything further is irrelevant.
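A crude version of that heuristic: compare the top-k scores with the scores right behind them and treat a small gap as a warning sign (the k and gap values here are made up, not tuned):

```python
import numpy as np

def retrieval_looks_reliable(scores: list[float], k: int = 5,
                             min_gap: float = 0.05) -> bool:
    """scores: cosine similarities sorted descending (higher = closer)."""
    if len(scores) < 2 * k:
        return True  # not enough neighbours to judge the tail
    top = float(np.mean(scores[:k]))
    tail = float(np.mean(scores[k:2 * k]))
    # If the "noise" right behind the top-k is practically as close,
    # the top-k matches probably aren't meaningfully better.
    return (top - tail) >= min_gap
```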