
As it happens, the representations (aka embeddings) learned by deep neural nets also organize objects geometrically in tensor or vector spaces, such that similar objects end up near each other in a high-dimensional system of coordinates. AI researchers nowadays routinely use generative models like, say, Glow[a] to identify geometric directions in the coordinates of a representation space that correspond with concepts such as "smiling vs not smiling," "male vs female," etc.

[a] https://blog.openai.com/glow/
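
For instance, here's a rough sketch of how such a direction can be found, mirroring the mean-difference approach described in the Glow post (the array names and dimensions below are illustrative, not Glow's actual API):

  import numpy as np

  # Hypothetical latents: rows are vectors a trained generative model
  # (e.g. Glow's flow) assigns to images; names and shapes are made up.
  z_smiling = np.random.randn(1000, 512)   # latents of images tagged "smiling"
  z_neutral = np.random.randn(1000, 512)   # latents of images tagged "not smiling"

  # The attribute direction is just the difference of the class means.
  direction = z_smiling.mean(axis=0) - z_neutral.mean(axis=0)
  direction /= np.linalg.norm(direction)

  # Nudging any image's latent along that direction, then decoding it,
  # should add or remove the attribute.
  z = np.random.randn(512)                 # latent of some input image
  z_more_smiling = z + 1.5 * direction
  z_less_smiling = z - 1.5 * direction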



I find myself reading the news backwards; rather than "the brain uses spatial orientation to map other things", it's more like "it's interesting how many things can be represented in a space-like manner when we might not otherwise have thought they could be".

I think the difference is that rather than "the brain is sort of performing a hack and reusing something it doesn't seem to have any driving need to re-use", the news is that many things turn out to fit into that model despite our intuition that they should have no particular spatialness to them. Metrics are more fundamental than we may have thought, and it's actually neither a surprise nor a "hack" that the brain exploits this characteristic.

An interesting question comes to mind; do some brains have more dimensionality than others? Are some people literally one-dimensional thinkers, pervasively? Could we produce a test to distinguish between 2.1- and 2.5-dimensional thinkers? Can an increase in dimensionality be trained, or a natural talent fail to flourish without proper stimulation?


> I find myself reading the news backwards; rather than "the brain uses spatial orientation to map other things", it's more like "it's interesting how many things can be represented in a space-like manner when we might not otherwise have thought they could be".

It could actually be spatially related in the brain too. Neurons are situated spatially and subject to spatial constraints: they have a limited number of connections, and the closer two neurons are physically, the faster they can signal each other. Neuronal connections also rewire based on reinforcement, which could very well consist of moving related concepts closer together in your physical brain (at least the important ones).


In addition, multiple works[0][1] have found that grid-cell-like representations arise in regularized recurrent networks that are given relative inputs (e.g. velocities) and trained to predict absolute outputs (e.g. positions); a minimal sketch of that setup follows the links.

[0] https://arxiv.org/abs/1803.07770

[1] https://deepmind.com/blog/grid-cells/
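
A minimal sketch of that path-integration setup, assuming PyTorch and made-up hyperparameters (the cited papers' actual architectures and regularizers differ in detail):

  import torch
  import torch.nn as nn

  class PathIntegrator(nn.Module):
      """RNN that gets relative inputs (2-D velocities) and is trained
      to report an absolute output (position)."""
      def __init__(self, hidden=128):
          super().__init__()
          self.rnn = nn.RNN(input_size=2, hidden_size=hidden, batch_first=True)
          self.readout = nn.Linear(hidden, 2)   # decode (x, y) position

      def forward(self, velocities):
          states, _ = self.rnn(velocities)      # (batch, time, hidden)
          return self.readout(states), states

  model = PathIntegrator()
  opt = torch.optim.Adam(model.parameters(), lr=1e-3)

  for step in range(1000):
      # Random walks: velocities are the relative input,
      # their cumulative sum is the absolute position target.
      vel = 0.1 * torch.randn(64, 100, 2)
      pos = torch.cumsum(vel, dim=1)

      pred, states = model(vel)
      loss = ((pred - pos) ** 2).mean()
      loss += 1e-4 * (states ** 2).mean()       # activity regularization

      opt.zero_grad()
      loss.backward()
      opt.step()

After training, plotting each hidden unit's activity as a function of decoded position is where the grid-like firing fields show up in the papers.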


Yes. And that's with the relatively simple deep models we have today, which have significantly less capacity than the human brain!



