> I honestly don't see a market for "AI security".
I suspect there's a big corporate market for LLMs with very predictable behaviour: a clear line between what the model answers from its training data and what it answers from RAG or its context window.
If you're making a chatbot for Hertz Car Hire, you want it to answer based on Hertz policy documents, even if the training data contained policy documents for Avis and Enterprise and Budget and Thrifty car hire.
Avoiding incorrect answers and hallucinations (where that matters) is a type of AI safety.
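
For concreteness, here's the kind of grounding constraint I mean: a minimal sketch, assuming the OpenAI chat API and a hypothetical retrieval step whose output is passed in as `policy_chunks`. The model name, prompt wording, and helper are illustrative, not a definitive implementation.

```python
# Minimal sketch: force answers to come from retrieved documents,
# not from whatever the base model memorised during training.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You answer questions about Hertz policy using ONLY the documents "
    "provided in the context below. If the answer is not in the context, "
    "say you don't know. Never answer from general knowledge, even if "
    "you have seen similar policies from other car hire firms in training."
)

def answer_from_policy(question: str, policy_chunks: list[str]) -> str:
    """Ask the model, grounding it in retrieved policy text only."""
    context = "\n\n".join(policy_chunks)  # output of your retrieval step
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice
        temperature=0,  # predictability: same context, same answer
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```

The prompt alone doesn't guarantee the separation, which is exactly why "predictable provenance" would be worth paying for as a product feature rather than a prompt-engineering trick.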