Hacker News

No, but we did make use of (1) a large number of input -> network -> output iterations, along with (2) precisely measured output values to decide which input to try next. It may not be so easy to experiment in the same way on natural organisms (ethically or otherwise).
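The loop described above can be as simple as greedy random search: perturb the input, query the network, and keep the perturbation only if the measured output improves. A minimal sketch, with an arbitrary fixed linear function standing in for the real network (the scoring function and all names here are invented for illustration):

```python
import random

def model_score(x):
    """Stand-in for a real network's 'panda' confidence.
    Here: an arbitrary fixed linear function of the input."""
    weights = [((i * 37) % 11) - 5 for i in range(len(x))]
    return sum(w * xi for w, xi in zip(weights, x))

def greedy_attack(x, steps=500, eps=0.1):
    """Iteratively perturb x, keeping only changes that raise the
    measured score -- the input -> network -> output loop from above."""
    x = list(x)
    best = model_score(x)
    rng = random.Random(0)
    for _ in range(steps):
        i = rng.randrange(len(x))
        delta = rng.choice([-eps, eps])
        x[i] += delta
        new = model_score(x)
        if new > best:
            best = new       # improvement: keep the perturbation
        else:
            x[i] -= delta    # no improvement: revert
    return x, best

start = [0.0] * 16
adv, score = greedy_attack(start)
```

Real attacks use smarter search (gradient estimation, evolutionary strategies), but the structure is the same: many queries, precise output measurements, next input chosen accordingly.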

Of course, if you're as clever as Tinbergen, you might be able to come up with patterns that fool organisms even without (1) or (2):

https://imgur.com/a/ibMUn




Perhaps a single experiment on millions of different people? A web experiment of some kind? "Which image looks more like a panda?" and flash two images on the screen.


That's a good idea, though note that there's a difference between asking "Which of these two images looks more like a panda?" and "Which of these two images looks more like a panda than a dog or cat?". The latter is the supervised learning setting used in the paper, and could generally lead to examples that look very different from pandas, as long as they look slightly more like pandas than dogs or cats. The former method is more like unsupervised density learning and could more plausibly produce increasingly panda-esque images over time.
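The distinction can be made concrete with a toy example (invented linear scores, not a real model): the supervised objective rewards the *margin* of "panda" over the competing classes, so an input can maximize it while being barely panda-like in absolute terms.

```python
def class_scores(x):
    """Stand-in classifier: maps a 2-vector to (panda, dog, cat) scores.
    Purely illustrative -- any real model would differ."""
    panda = x[0]
    dog = x[1]
    cat = x[1] - 0.5
    return panda, dog, cat

def margin_objective(x):
    """Supervised setting: 'more panda than dog or cat'."""
    panda, dog, cat = class_scores(x)
    return panda - max(dog, cat)

def pandaness_objective(x):
    """Density-style setting: 'how panda-like is this, full stop'."""
    return class_scores(x)[0]

barely_panda = [0.1, -10.0]  # almost no panda score, but a huge margin
very_panda = [5.0, 0.0]      # strongly panda-like, smaller margin
```

Here `margin_objective` prefers `barely_panda` (margin 10.1 vs. 5.0), while `pandaness_objective` prefers `very_panda` (5.0 vs. 0.1) -- which is why the two-image phrasing of the question matters.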

A somewhat related idea was explored on this site, where millions (ok, thousands) of users evolve shapes that look like whatever they want, but likely with a strong bias toward shapes recognizable to humans. Over time, many common motifs arise:

http://endlessforms.com/


Problem is, even if you succeed and end up with a fabricated picture that fools human neural nets into believing it's a picture of a panda, how would you tell it's not really a picture of a panda?

You'd need another classifier to tell you "nope it's actually just random noise and shapes" ... hm.


Probably not hard - just engage the higher cognitive functions. "Does it look like noise? Yeah." Or just close one eye and look again.


I think you missed my somewhat deeper philosophical point :)

Who gets to decide what is really a picture of a panda?

If we managed to craft a picture that could, with very high certainty, trick human neural nets (for the sake of argument, including those higher cognitive functions) into believing something is a picture of a panda, "except it actually really isn't", what would that even mean?

Human insists it's a picture of a panda, computer classifier maintains it's noise and shapes.

Who is right? :)


Interesting, sure. But I started out wondering if some obviously-noise picture could be found that fooled humans, at least at first glance. "Hey, a panda! Wait, what was I thinking, that's just noise!" It would be weird and cool, on the order of the dress meme, etc., but much more so.

Kind of like the memes in Snow Crash, ancient forgotten symbols that make up the kernel of human thought.


Like a picture of a blue and black dress?



