
You can evaluate at lower depth/time.

But even that isn't a good proxy.

Humans cannot out-FLOP a computer, so they have to rely on patterns (much like an LLM). To get the human perspective, the engine would need to do something similar.
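
Capping depth/time is at least easy to script. A minimal sketch using python-chess, assuming a Stockfish binary is on your PATH (the limits are just placeholders):

    import chess
    import chess.engine

    # Assumes a Stockfish binary on PATH; adjust the name/path as needed.
    engine = chess.engine.SimpleEngine.popen_uci("stockfish")

    board = chess.Board()  # starting position

    # Cap the search: either a fixed depth...
    shallow = engine.analyse(board, chess.engine.Limit(depth=6))
    # ...or a time budget per position.
    quick = engine.analyse(board, chess.engine.Limit(time=0.05))

    print("depth-6 eval:", shallow["score"])
    print("50 ms eval:", quick["score"])

    engine.quit()

Even throttled like this, it still searches very differently from how a human prunes, which is the point above.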




There are several neural-network-based engines these days, including one that does exclusively what you describe (i.e. "patterns only", no calculation at all) [1], and one that's trained on human games [2].

Even Stockfish uses a neural network by default these days for its positional evaluation, but it's relatively simple/lightweight in comparison to these: it gains its strength from a deep search guided by a light network, rather than from a shallow tree search guided by a powerful/heavy one.

[1] https://arxiv.org/html/2402.04494v1

[2] https://www.maiachess.com/


Definitely. And Google's AlphaZero did it years ago.

I don't think the patterns are very human, but they are very cool.


Have you tried Maia? I haven't myself (there isn't a model at my rating level yet), but supposedly it plays in a more human way because it's trained mostly on human games rather than on engine evaluations or self-play.
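
If you want to try it locally: the Maia models are published as Leela (lc0) network files and are meant to be run with search effectively disabled (a single node). A rough sketch with python-chess, assuming lc0 is installed and you've downloaded a weights file (the filename below is a placeholder for whichever rating level you grab):

    import chess
    import chess.engine

    # Assumes lc0 is on PATH and a Maia network file has been downloaded.
    engine = chess.engine.SimpleEngine.popen_uci("lc0")
    engine.configure({"WeightsFile": "maia-1500.pb.gz"})

    board = chess.Board()

    # Maia is intended to be used without search, so cap it at one node.
    for _ in range(10):  # let it play both sides for a few moves as a demo
        if board.is_game_over():
            break
        result = engine.play(board, chess.engine.Limit(nodes=1))
        print(board.san(result.move))
        board.push(result.move)

    engine.quit()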


I have not.

Thank you.



