Hacker News

Does it have to defend itself a lot?

I've noticed contradictory statements are kind of standard for GPT-3. I can well believe you could come up with a system that's better at defending itself against the charge of contradiction than it is at avoiding contradictions in the first place.



In this case, I was pushing it quite a bit to see how it would respond. Most of the time, it responds quite confidently, so it sounds right even when it isn't — i.e., about the median of what you'd expect for a Hacker News comment.


> Does it have to defend itself a lot?

I think it's a common weakness of chatbots that you can get them to contradict themselves through presenting new evidence or assertions, or even just through how you frame a question. I found LaMDA to be much more resistant to this than I anticipated, when I tried it out (I'm a Googler).

It wasn't completely immune -- I was eventually able to get it to say that, indeed, Protoss is OP -- but it took a long time.


I've noticed contradictory statements are kind of standard for humans. I can well believe you could come up with a system that's better at defending itself against the charge of contradiction than it is at avoiding contradictions in the first place.




