Hacker News new | past | comments | ask | show | jobs | submit login

Why is there no mention of Bard or any Google model in the paper?

The paper notes 5 of 11 researchers are affiliated with Google, but it seems to be 11 of 11 if you count having received a paycheck from Google in some form current/past/intern/etc.

I can think of a couple generous interpretations I’d prefer to make, for example maybe it’s simply their models are not mature enough?

However is research right, not competitive analysis? I think at least a footnote mentioning it would be helpful.




I just tested in bard, I can replicate this in ChatGPT easily over and over but bard just writes the repeated word in different formats in every regeneration and never starts outputting other things.

For example if I ask Bard to write "poem" over and over it sometimes writes a lot of lines, sometimes it writes poem with no separators etc, but I never get anything but repetitions of the word.

Bard just writing the word repeated many times isn't very interesting, I'm not sure you can compare vulnerabilities between LLM models like that. Bard could have other vulnerabilities so this doesn't say much.




Join us for AI Startup School this June 16-17 in San Francisco!

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: