
> Because it's a set of puzzles on a 2D grid. We don't live on a 2D grid so it's already on the wrong track.

I don't see what this has to do with anything. Intelligence is about learning patterns and generalizing them into algorithmic understanding, where appropriate. The number of dimensions latent in the dataset is ultimately irrelevant. Humans live in a 4D world, or 3D if the holographic principle is true, and we regularly deal with mathematics in 27 or more dimensions. LLMs build models with at least hundreds of thousands of dimensions.
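
For concreteness, here's a rough sketch (illustrative only, not any particular solver's encoding) of how an ARC-style 2D grid typically reaches an LLM anyway: it gets serialized into a flat token sequence, so the model never sees "two dimensions" natively; the grid structure is just one more pattern to recover.

    # Illustrative only: serialize a 2D ARC-style grid into the kind of
    # flat 1D token sequence an LLM actually consumes.
    def grid_to_tokens(grid):
        tokens = []
        for row in grid:
            tokens.extend(str(cell) for cell in row)
            tokens.append("<eol>")  # mark the end of each row
        return tokens

    def tokens_to_grid(tokens):
        # Invert the encoding: the 2D structure is fully recoverable from 1D.
        grid, row = [], []
        for t in tokens:
            if t == "<eol>":
                grid.append(row)
                row = []
            else:
                row.append(int(t))
        return grid

    example = [[0, 1, 0],
               [1, 1, 1]]
    assert tokens_to_grid(grid_to_tokens(example)) == example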



Show me an LLM that is doing any of the things you mentioned. Furthermore, I'm willing to bet none of that will be possible after ARC is solved either. How much money would you be willing to bet?


Not sure what's so controversial; it's well known that LLMs can trivially be viewed as operating in a higher-dimensional space:

https://gcptips.medium.com/a-geometric-perspective-on-large-...
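
Roughly what that means, sketched with a toy embedding layer (the sizes below are made up; real models use hidden dimensions in the thousands): every token becomes a point in a d_model-dimensional vector space, and the model's computation happens in that space, not in the 2D/3D/4D space of the underlying data.

    # Toy sketch (not any specific model): each token id maps to a point in a
    # high-dimensional vector space, which is where the model actually operates.
    import torch
    import torch.nn as nn

    vocab_size, d_model = 50_000, 4096   # illustrative sizes only
    embed = nn.Embedding(vocab_size, d_model)

    token_ids = torch.tensor([12, 345, 6789])   # a short "sentence"
    vectors = embed(token_ids)                  # shape: (3, 4096)
    print(vectors.shape)                        # each token lives in R^4096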

As for generalizing to algorithms, LLMs don't yet do this as well as humans, but they do do it:

https://arxiv.org/abs/2309.02390
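
Independent of what that paper does, here is one quick way to probe algorithmic generalization yourself: show the model a simple rule (say, list reversal) on short inputs, then test it on longer, held-out inputs. ask_model() below is a hypothetical stand-in for whatever LLM you happen to query.

    # Minimal length-generalization probe. ask_model(prompt) -> str is a
    # hypothetical stand-in for any LLM call; everything else is plain Python.
    import random

    def make_prompt(train_inputs, query):
        lines = [f"Input: {x} -> Output: {list(reversed(x))}" for x in train_inputs]
        lines.append(f"Input: {query} -> Output:")
        return "\n".join(lines)

    def probe(ask_model, test_length=12, trials=20):
        hits = 0
        for _ in range(trials):
            train = [[random.randint(0, 9) for _ in range(n)] for n in (3, 4, 5)]
            query = [random.randint(0, 9) for _ in range(test_length)]
            answer = ask_model(make_prompt(train, query))
            if answer.strip() == str(list(reversed(query))):
                hits += 1
        # Accuracy holding up at longer lengths suggests the rule was
        # generalized rather than memorized.
        return hits / trials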

Finally, there's no intrinsic reason why an AI that can reliably solve deductive problems like ARC would be limited to two dimensions.


Then you have no reason to argue with me.


The only position I took issue with, and still do, is the one in the closing paragraph of my last post. Your argument for why ARC solvers wouldn't generalize doesn't even make sense.


No point in arguing. If you think it will generalize, then there is no reason to convince random people on the internet that an ARC-AGI solver will get you closer to AGI.



