
Here is an example of Sonnet finding the right joke after two messages: https://i.imgur.com/nKvS2cW.png

It seems to be censored with US puritan morality (like most US models), but I think that's beside the point (just like whether the joke is "even funny" or not), as it did find the correct joke at least.

I just got a load of responses like "Sure, here’s a joke that combines cars, Spain, politicians, and a fascist with a touch of space humor: Why did the Spanish politician, the fascist, and the car mechanic get together to start a space program? Because the politician wanted to go "far-right," the mechanic said he could "fix" anything, and the fascist just wanted to take the car to the moon... so they could all escape when things got "too hot" here on Earth!"

Ok, that's cool. So because you were unable to find a needle in this case, your conclusion is that it's impossible for other people to use LLMs for this, and LLMs truly are just glorified Wikipedia/Google?

No, I don't think that LLMs are glorified Wikipedia/Google. I think they're a glorified version of pressing the middle button on your phone's autocomplete repeatedly.

So you didn't enter the conversation to follow along with the existing discussion, but to share your grievance about how LLMs work regardless? Useful.
