
Be VERY careful using Kagi this way -- I ended up turning off Kagi's AI features after it gave me comically false information because it had misunderstood the search results it based its answer on. It was almost funny -- I looked at its citations, and when they were relevant at all, they said the opposite of what Kagi claimed.

It's a very "not ready for primetime" feature



That applies to all AI, and even human-generated content. The crucial difference is that AI-generated content is far more confident and voluminous.


I think I've only ever seen a single incorrect answer from Perplexity, and I've probably made a thousand searches so far. It's very reliable.


May I ask how you know those 999 answers were correct, and how you could be sure to catch a mistake, misinterpretation, or hallucination in any of them?


It's not only Kagi AI; Kagi Search itself has been failing me a lot lately. I don't know what they're trying to do, but the number of queries that return zero results is impressive. I've submitted many search improvement reports on their feedback site.

Usually doing `g $query` right after gives me at least some useful results (even when using double quotes, which aren't always guaranteed to work).


This is a bug that appears 'randomly'; it's being tracked here: https://kagifeedback.org/d/3387-no-search-results-found/

Happens about 200 times a day (0.04% of queries). Very painful for the user, we know. We're still trying to find the root cause (we have limited debugging capabilities since we don't store much information). It is top of mind for us.
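
(For scale, taking those numbers at face value: 200 failures at 0.04% implies roughly 200 / 0.0004 = 500,000 queries a day.)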


> we have limited debugging capabilities since we don't store much information

Maybe give users who report bugs the option to pass along more debug info, if they agree to it.
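
Something like the following rough sketch, where every name is made up since Kagi's internals aren't public: diagnostics get collected client-side but are only attached to the report when the user explicitly opts in.

    # Hypothetical sketch of a consent-gated bug report; none of these
    # field names come from Kagi -- all of them are illustrative.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class BugReport:
        query: str
        description: str
        debug: Optional[dict] = None  # populated only with consent

    def build_report(query: str, description: str,
                     consented: bool, diagnostics: dict) -> BugReport:
        report = BugReport(query=query, description=description)
        if consented:
            # Only now attach data the service would not normally store.
            report.debug = diagnostics
        return report

That way the default stays privacy-preserving, and the extra context only exists for reports where the user chose to share it.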


Fair enough. I only ask about things I can easily verify because I'm already familiar with the domain; I just find I get to the answer faster.


Yeah, that's totally fair. I just think about all the people to whom I've had to explain LLM hallucinations, and the surprise on their faces, and this feature gives me the heebie-jeebies.



