It definitely isn't superhuman by any means, but I was deeply impressed when I learned about Microchess, an early computer chess program that managed to run in 1 KB of RAM on the KIM-1 microcomputer. It's a pretty easy program to beat if you play chess regularly, but if you're the kind of person who only plays chess sporadically and non-seriously, it can give you a run for your money.
With advancements in saved/pretrained models I think the answer is, or soon will be, a Raspberry Pi. I think a more interesting question is “what is the smallest computer that can learn chess?”
It would be interesting to try to do it with the minimum possible power consumption. Would a small power-optimized ARM microcontroller such as an STM32L4 with some bulk storage be sufficient, for example? Could a device pulling < 100 mW beat a human grandmaster? 10 mW?
The one that beat Kasparov in 1997 (Deep Blue) could evaluate 200 million positions per second, which is quite a lot. On the other hand, beating me could probably be done with a modified pocket calculator.
There has been a lot of progress in computer chess software since 1997; the current version of Stockfish is very probably superhuman even running on a low-end smartphone.
Also: what is the fewest number of bits required? With modern data-based techniques, it's tempting to add a large data file that must be loaded alongside sunfish. However, that kinda feels like cheating.
What's the deal with naming chess engines after fish? I tried to look up someplace to buy stockfish the other day, and instead of finding information about dried fish, I found a bunch of information about the chess engine.
Stockfish was named that way because the two main authors are from Norway and north-eastern Italy. Oddly enough, in both places stockfish is a typical dish.
You might have had more luck searching in an incognito tab. If Google thinks you are a programmer, it may be more likely to display results related to the chess engine than to the actual food. (Assuming you are using Google and have done many programming-related queries.)
This engine appears to have neither an interesting evaluation of positions, nor any of the performance tweaks that make traditional engines powerful. You're likely to learn more in an afternoon on https://www.chessprogramming.org, but all chess engines are beautiful in their own way, and I have certainly created nothing better.
It's not meant to be a competitive engine; after all, it's written in 111 lines of Python.
I think it could be very useful for didactic purposes, or for getting started. Not having all the extra complexity needed for efficient board representation, position evaluation, move generation, etc. makes the big picture much clearer, imvho.
Yeah, I totally agree that eval + search is the first step to learn, in terms of overall structure. And this includes lots of important concepts like quiescence search etc. I just think it's worth being honest: you either need really good eval or really fast search just to give _yourself_ a good game.
It uses a variant of minimax game-tree search, which is a pretty classical approach to game AI. The particular one it uses is MTD-bi, which is in the MTD family of search algorithms, related to: https://en.wikipedia.org/wiki/MTD-f
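For the curious, the core of the MTD idea fits in a few lines. Here's a rough, self-contained sketch of the bisection driver (my own illustration, not sunfish's actual code); `bound` is a stand-in for a real fail-soft null-window alpha-beta search, replaced here by an exact oracle so the snippet runs on its own:

    # Toy sketch of the MTD-bi bisection driver. In a real engine,
    # bound(gamma) runs alpha-beta with the zero-width window
    # (gamma - 1, gamma), and its fail-soft return value tells us
    # whether the true minimax score is >= gamma or < gamma.
    TRUE_SCORE = 37  # made-up stand-in for the position's real minimax score

    def bound(gamma):
        return TRUE_SCORE  # exact oracle; real probes are cheaper but vaguer

    def mtd_bi(lower=-1000, upper=1000):
        while lower < upper:
            gamma = (lower + upper + 1) // 2
            score = bound(gamma)
            if score >= gamma:
                lower = score  # score proven to be at least gamma
            else:
                upper = score  # score proven to be below gamma
        return lower

    print(mtd_bi())  # converges to 37

Each probe only answers a yes/no question about gamma, so the zero-width searches are cheap, and the [lower, upper] interval shrinks until it pins down the exact score.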
It's debatable if this is AI, but basically it looks at all the moves you can make, then all the responses your opponent can make, then what you can do back, etc., and attaches scores based on pieces taken and the positions reached. Then it goes with the best score.
Conventional chess programs use a similar algorithm; ones like AlphaZero are similar except they use machine learning and neural nets to judge how good positions are, rather than a simple point-score system.
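To make the tree-search idea above concrete, here's a toy minimax on a hand-built game tree (the tree and leaf scores are invented for illustration; a real engine generates moves and evaluates positions instead):

    # Minimax on a tiny hand-made tree: pick the move that maximizes
    # our score, assuming the opponent always minimizes it.
    TREE = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
    SCORES = {"a1": 3, "a2": -2, "b1": 1, "b2": 5}  # leaf evals, our point of view

    def minimax(node, our_turn=True):
        if node in SCORES:                  # leaf: static evaluation
            return SCORES[node]
        vals = [minimax(c, not our_turn) for c in TREE[node]]
        return max(vals) if our_turn else min(vals)

    print(minimax("root"))  # 1: "b" is best, since "a" lets the opponent reach -2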
I remember trying to write a similar one after seeing the algorithm explained on the TV show Tomorrow's World, around 1980. (Here they are explaining the cutting edge of mobile phones of the day: https://www.youtube.com/watch?v=vix6TMnj9vY&feature=youtu.be...)
Notably, Stanislaw Lem mentioned eye-tracking studies by Tikhomirov which suggest that human players also do some sort of tree-walking (or whatever the proper term is), but are unable to describe it consciously.
Just to add to your reply: alpha0 and Leela actually learn to play by playing against themselves, which is much more than just having a score for a given position. Also, in the chess community, Stockfish is not seen as AI, but Leela and A0 are.
> Also, in the chess community, Stockfish is not seen as AI, but Leela and A0 are.
That's a strange statement. For one, because chess players are hardly judges of what is and what isn't AI.
Among chess programmers there may be such an opinion here and there, but originally chess was a classical AI topic and alpha/beta search is a classical AI algorithm, as are neural networks and Monte Carlo tree search. So it's quite a strange opinion, IMO.
The classic definition of AI is a machine doing something that seems like it should require human intelligence. Of course once it becomes commonplace for a computer to do something, it no longer seems like it should require human intelligence, so an equivalent definition is that what works consistently is just programming and what seems like it should work but doesn't yet is AI. Successful AI exists in the lag between getting something to work for the first time and it being accepted as something that computers routinely do.
I think we need to be ready to accept that some things we may never completely understand because they're too complex. The human brain may be one of those things, and AI may be another.
> For one, because chess players are hardly judges of what is and what isn't AI.
On the other hand, they happen to be great judges of human vs. engine styles of play. If you ask any of the top players who have spent time reviewing games by chess engines, I think you'll find a consensus that Alpha Zero and LCZero play far more human-like moves than engines like Stockfish do.
The traditional engine tends to be extremely conservative and materialistic, only playing a sacrifice when it has calculated a line which recovers the material with interest (or forces checkmate). The so-called AIs don't do this. You're far more likely to see them sacrifice material for a long-term positional advantage, like a great human player would.
From my experience looking at Alpha Zero and LCZero wins against Stockfish, one of the more common patterns I see is a sacrifice by the AI that yields such a dominant position that one or more of Stockfish's pieces become uselessly trapped behind their own pawns. It's this sort of position that seems perfectly tailored to exploit Stockfish's materialistic nature.
AlphaZero uses a neural network to represent the probability of winning from a given state, along with a probability distribution over next moves, but it still “just” uses Monte-Carlo Tree Search (MCTS) to look for the strongest move to play based on the estimated score of each possible state. In that way it is identical to earlier agents, whether they used minimax, negamax/principal variation search, or MCTS. The primary improvements of AlphaZero are learning without bootstrapping from human play, and using the neural network's probability distribution over the available moves in each state to guide the search rather than a static heuristic (like killer moves or UCB).
The original AlphaGo paper even mentioned that they tested the bare neural network predictions against the version with MCTS guided by the network and found that the MCTS version won 100% of the time, which strongly suggests that search is an indispensable part of strong AI performance in games.
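For anyone curious what "using the network to guide the search" looks like mechanically, here's a rough sketch of the PUCT-style selection rule AlphaZero uses inside MCTS (my own simplification with made-up numbers, not DeepMind's code):

    import math

    # Per-move stats MCTS tracks: N = visit count, W = total value from
    # simulations, P = the network's prior probability for the move.
    def puct_select(children, c_puct=1.5):
        total_n = sum(ch["N"] for ch in children)
        def score(ch):
            q = ch["W"] / ch["N"] if ch["N"] else 0.0                  # exploitation
            u = c_puct * ch["P"] * math.sqrt(total_n) / (1 + ch["N"])  # exploration
            return q + u
        return max(children, key=score)

    # A high-prior, unvisited move can outrank a well-explored one;
    # this is how the network's priors steer the tree search.
    moves = [{"N": 10, "W": 6.0, "P": 0.2}, {"N": 0, "W": 0.0, "P": 0.7}]
    print(puct_select(moves))  # picks the second (high-prior) move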
AI definitely doesn't imply Artificial General Intelligence, but I'm pretty sure no-one's saying that. The common confusion is between AI and sexy Machine Learning.
What is the smallest computer with superhuman capabilities?