Hacker News

A lot of teams can get far with search just by putting LLMs in the loop on the query and index side, doing enrichment that used to take months-long projects. Even with smaller, self-hosted models and fairly naive prompts you can turn a search string into a more structured query - and cache the hell out of it. Or classify documents into a taxonomy. All backed by a boring old lexical or vector search engine. In fact I’d say if you’re NOT doing this you’re making a mistake.
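A minimal sketch of the query-side idea, assuming a hypothetical `llm_structured_query` call standing in for whatever self-hosted model you run; the caching is the real point, since identical query strings should never hit the model twice:

```python
import json
from functools import lru_cache

# Hypothetical stand-in for a self-hosted LLM call. A real version would
# prompt a local model to emit structured JSON (terms, filters, entities).
# Stubbed with a trivial tokenizer so the sketch is runnable.
def llm_structured_query(text: str) -> str:
    tokens = text.lower().split()
    return json.dumps({"terms": tokens, "filters": {}})

# Cache the hell out of it: same query string, same parse, no model call.
@lru_cache(maxsize=100_000)
def parse_query(text: str) -> dict:
    return json.loads(llm_structured_query(text))

structured = parse_query("red running shoes")
# The structured form is what you hand to the lexical/vector engine.
```

The same pattern works on the index side for taxonomy classification: one hedged LLM call per document at ingest time, results stored alongside the document as plain filterable fields.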


Can you share more, or at least point me in the right direction?


One place to explore more would be Doc2Query: https://arxiv.org/abs/1904.08375.

It’s not the latest and hottest, but it’s super simple to do with LLMs these days and can improve a lexical search engine quite a lot.
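The core trick is small: before indexing, ask a model for the queries a document could answer and append them to the document text, so a plain lexical engine matches vocabulary the document itself never uses. A sketch, with `generate_queries` as a hypothetical LLM call (stubbed so it runs):

```python
# Doc2Query-style expansion: enrich each document with LLM-generated queries
# before handing it to the lexical indexer (BM25, Elasticsearch, etc.).
def generate_queries(doc: str, n: int = 3) -> list[str]:
    # A real implementation would prompt an LLM along the lines of:
    # "Write {n} search queries this document answers."
    # Stubbed with a trivial template here.
    first_word = doc.split()[0].lower()
    return [f"question about {first_word}" for _ in range(n)]

def expand_for_index(doc: str) -> str:
    # Append the generated queries to the body; the index never needs to
    # know they came from a model.
    return doc + "\n" + "\n".join(generate_queries(doc))
```

Because the expansion happens offline at index time, query latency is untouched, which is what makes it practical even with small self-hosted models.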



