A lot of teams can get a long way in search just by putting LLMs in the loop on the query and index sides, doing enrichment that used to be a months-long project. Even with smaller, self-hosted models and fairly naive prompts you can turn a search string into a more structured query - and cache the hell out of it. Or classify documents into a taxonomy. All backed by a boring old lexical or vector search engine. In fact I'd say if you're NOT doing this, you're making a mistake.
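Here's roughly what the query-side version looks like - a minimal sketch, assuming an OpenAI-compatible endpoint (which most self-hosted servers like vLLM, Ollama, or llama.cpp expose). The model name, prompt, and taxonomy fields are all placeholders, not anything canonical:

```python
# Sketch: LLM query enrichment in front of a plain search engine, with caching.
# Assumes a self-hosted model behind an OpenAI-compatible API; names are made up.
import json
from functools import lru_cache

from openai import OpenAI

# Point at whatever is serving your small local model.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

PROMPT = """Rewrite the search string as JSON with fields:
"keywords" (terms for lexical search),
"category" (one of: electronics, clothing, home, other),
"filters" (dict of attribute constraints, may be empty).
Return only JSON.

Search string: {query}"""


@lru_cache(maxsize=50_000)  # query strings repeat constantly; cache the hell out of it
def enrich(query: str) -> dict:
    resp = client.chat.completions.create(
        model="llama-3.1-8b-instruct",  # placeholder; any small instruct model
        messages=[{"role": "user", "content": PROMPT.format(query=query)}],
        temperature=0,  # deterministic output keeps cached results consistent
    )
    # Naive prompt, naive parse; production code would validate the JSON.
    return json.loads(resp.choices[0].message.content)


structured = enrich("cheap noise cancelling headphones under $100")
# Hypothetical output, fed to your lexical/vector engine as a filtered query:
# {"keywords": ["noise", "cancelling", "headphones"],
#  "category": "electronics", "filters": {"price_max": 100}}
```

The index-side version is the same trick flipped around: run each document through a classification prompt at ingest time and store the resulting taxonomy labels as filterable fields.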