The Game Of Go: A Programmer's Perspective (needforair.com)
146 points by louischatriot on April 18, 2012 | 69 comments



This article also illustrates how important it is as a researcher to publicize yourself well. Mogo really only had a very small part in the breakthroughs that were made. Yet it is heralded in every such article as the bot that made it all happen.

For those who are curious about what happened, here are some seminal references:

Monte Carlo Go. Bernd Brügmann - This introduced the basic idea.

Monte-Carlo Go Developments. Bruno Bouzy - Bouzy is the guy that kept hammering at the original Brügmann idea while everyone else thought it was a dead end.

Efficient Selectivity and Backup Operators in Monte-Carlo Tree Search. Remi Coulom - This was really the breakthrough paper, showing how to combine Monte Carlo simulations with a tree search. His bot was also the first MC program to win the gold medals.

Mogo replaced the ad hoc operators in the last paper with an algorithm with more mathematical grounding (UCT). Even though the name is now very well known, most top Go programs, including the last research published about Mogo, have actually already dropped the idea and gone back to more ad hoc formulas.
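For the curious, that "more mathematical grounding" is essentially the UCB1 rule from the multi-armed bandit literature: pick the child whose observed win rate plus an exploration bonus is highest, where the bonus shrinks the more that child is visited. A minimal sketch (the names and the constant c are mine, purely for illustration, not Mogo's actual tuning):

    import math

    def ucb1(wins, visits, parent_visits, c=1.4):
        # Exploitation: the child's observed win rate so far.
        # Exploration: large when the child has been tried little relative
        # to its parent; the constant c trades the two terms off.
        if visits == 0:
            return float('inf')   # always try an untried child first
        return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)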

The article also makes the common mistake of saying that the position is evaluated using random games. Calling them random is quite misleading. It seems to be necessary to include some knowledge in them, but too much knowledge hurts the strength again - regardless of any speed impact. It's an open problem to fundamentally understand exactly what kind of knowledge helps the program to actually play stronger.

What happened in Go is also a nice illustration that if you want to make a computer good at an AI task, the key is finding an algorithm that approximately-linearly improves with the computing power you throw at it. If you do this, you crack the wall. Algorithmic and hardware improvements will do the rest in the years to come.


Just for fun, I wrapped those three papers and a Gelly UCT paper up in a zip file, in case anybody wants to read them: http://dl.dropbox.com/u/25636/go_papers.zip


I explained the key ideas behind Mogo and not another program because it is the one I worked on and know well. It is true though that I could also have mentioned Coulom's Crazy Stone.

About your second point - there is indeed some expert knowledge during the pseudo-random games (cf the "use of a little expert knowledge" bullet point), but I wanted to insist on the idea of using almost-random games to evaluate a position, because I found it so unexpected at the time!


I worked on UCT-based (one-armed bandit algorithm) go players for 3 semesters. It was truly a great, if often frustrating, experience.

It's a tough problem to get your head around. Sure, the premise makes sense -- that the Monte Carlo heuristic plus the UCT tree-exploration algorithm balances promising nodes with uncertain ones to intelligently expand the tree. But tweaking that algorithm usually hurt performance more than it helped, and the tweaks that did improve gameplay were non-intuitive.

One semester our goal was to improve the random-playout generator to play slightly more human-like games. It turns out that almost any play enhancement made in the Monte Carlo simulator crippled performance (for instance: instead of playing randomly, search the board for an opportunity to capture the opponent). The best result we had was achieved by applying the capture-if-possible heuristic on only two positions per random move (e.g. at the beginning of the MC-simulator's move, pick two positions randomly -- if moving in either one captures the opponent, do it). Applying that heuristic on any more than 2 positions per move made the random games play out more slowly than it was worth. Needless to say, a lot of time was spent testing our tweaks, which was itself quite time-consuming.
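As a rough illustration (a Python sketch, not our actual code; the board methods empty_points, captures, and legal_moves are made-up placeholders), the playout policy amounted to something like:

    import random

    def playout_move(board, color, rng=random):
        # Look at just two randomly chosen empty points; if playing either
        # one would capture opponent stones, play it there.
        for point in rng.sample(board.empty_points(), 2):
            if board.captures(point, color):
                return point
        # Otherwise fall back to a uniformly random legal move.
        return rng.choice(board.legal_moves(color))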

It was frustrating to see our initial seemingly sensible tweaks cripple performance, and it definitely took some time to calibrate ourselves to the problem at hand. But I probably learned more about programming while working on Go than in any other 3-semester span.

Peter Drake's Orego player is a great starting point for exploring the UCT algorithm (Peter Drake wrote a popular Java textbook you may have used in undergrad). I highly encourage people interested in AI to check it out:

https://sites.google.com/site/drpeterdrake/research/orego


I'm confused as to why you would want to capture. My understanding is that pro players try to avoid having to actually capture unless necessary. (I.e. if they are winning a capturing race then the right move is always to tenuki unless the other player does something that demands a response.)


Real-life tactics unfortunately don't make for good heuristics in the Monte Carlo simulator.

Our goal was to improve overall gameplay by making the Monte Carlo simulator play slightly more realistically than random. Keep in mind that when the Monte Carlo simulator plays completely random games, it still does pretty well. I believe it's on par with GnuGo, a good pre-UCT go player, on a 9x9 board on a modern quad-core.

Whether a heuristic applied within the MC-simulator generates sample games that better assess the strength of a move is a tricky question to ponder. But it seems reasonable that most non-random tactics, however simple, would accomplish that (such as proximity- or capture-based tactics).

The reason that good real-life tactics do not correlate with improved gameplay when implemented in MC-simulators is that the logic required for even simple tactics can reduce the number of semi-random games the computer can play by several orders of magnitude. Picking a random move is extremely quick (choosing a number with a Mersenne Twister is faster than adding two numbers -- compare that to the operations required to determine how many pieces surround a given position). That's partly why very simple heuristics like capture-if-possible are only effective if they're applied on a few (2, in our case) open board positions per move in an MC-simulated game. So it's just slightly non-random.

Anyway I hope that made sense. It's a weird problem to think about at first, but it's actually really fun to play around with. I recommend checking out Drake's implementation.


Choosing a number with a MT is not faster than adding two numbers (which can typically be done in a single cycle).


Yeah that might have been an exaggeration. But as I recall, once you have the MT seeded and fired up, getting a new number requires little more than a bit shift. Anyway it was substantially faster than simple alternatives I tried (including other PRNGs). One early theory I had was that using a faster PRNG than MT could lead to a better result even if it wasn't as evenly distributed. I think MT proved to be the best solution though.


They were probably generating tons of them at once so the only operation was grabbing a number out of memory. But you're right that it takes a whole lot of operations to generate the numbers in the first place.
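That batching idea is easy to sketch (hypothetical Python, just to show the shape of it, not what their code actually did):

    import random

    class RandomBuffer:
        def __init__(self, size=1 << 20, seed=12345):
            rng = random.Random(seed)   # CPython's Random is a Mersenne Twister
            self.buf = [rng.random() for _ in range(size)]
            self.i = 0

        def next(self):
            # Consuming a number is just an index bump and an array read.
            x = self.buf[self.i]
            self.i = (self.i + 1) % len(self.buf)
            return x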


To further illustrate the point, I think I may have actually tried a no-capture heuristic at one point. I wouldn't be too surprised if it also worked better than random.


Correction: "most non-random tactics" wouldn't make it better, as someone else pointed out. Finding ones that do is a tough question, beyond just runtime; keep in mind that the MC player uses that strategy against itself over entire random games, which can lead to really un-humanlike games. Completely random games at least ensure that you're hitting all possibilities (and when that good move is found, that path is explored more).


Can I ask you a question? Is the random move approach simply brute force; or does it turn out that, weirdly, moves which have good outcomes from the approach also turn out to have good outcomes in general, even though the actual sequences of moves weren't really considered?

i.e. I guess I'm just asking if it really is a heuristic.


The Monte Carlo simulator is a very simple and weak heuristic by itself. In UCT go players it's inseparable from the tree-traversal/expansion algorithm, which uses data propagated up the tree from the MC-tested leaf nodes to decide which move sequence to explore next.

When it's expanding the search tree, every node expansion is arrived at "intelligently". It starts at the root node, and at each branching asks itself: which following move is the best mixture of good (based on MC games I've played on the leaf nodes of its children) and unknown (i.e. I haven't explored that branch very much yet)? It does that at every node until the leaf, where it expands a node with the MC-simulator (and then propagates the result up the tree). So unlike some other tree-search algorithms, it isn't hobbled by having to expand all paths to a certain depth before exploring a particular sequence deeper. The result literally looks like a tree: some branches are really tall, maybe 18 moves deep, while others are really short. Even though the "heuristic" is MC, I don't like calling it brute force, because it uses the results of those MC games so well.
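Schematically, one iteration of that loop looks something like this (a bare-bones sketch; the board interface and the win-perspective bookkeeping are simplified placeholders, not Orego's or Mogo's actual API):

    import math

    class Node:
        def __init__(self, move=None, parent=None):
            self.move, self.parent = move, parent
            self.children, self.wins, self.visits = [], 0, 0

    def uct(node, c=1.4):
        if node.visits == 0:
            return float('inf')
        return (node.wins / node.visits
                + c * math.sqrt(math.log(node.parent.visits) / node.visits))

    def mcts_iteration(root, board):
        path, node = [root], root
        # 1. Selection: walk down, at each branching taking the child that
        #    best mixes "good so far" with "not explored much yet".
        while node.children:
            node = max(node.children, key=uct)
            board.play(node.move)
            path.append(node)
        # 2. Expansion: add children for the untried moves at this leaf.
        node.children = [Node(m, parent=node) for m in board.legal_moves()]
        # 3. Simulation: finish the game with (nearly) random moves.
        won = board.random_playout()   # 1 if the root player won, else 0
        # 4. Backpropagation: push the result back up the visited path.
        for n in path:
            n.visits += 1
            n.wins += won              # real code flips perspective per level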


Thanks, I think I see the answer now. Let me summarize to check: There's a conceptual tree of all possible games, with a move at each node. You run some random games to completion, giving sequences of moves from the root (starting position) to a leaf (game over). Then, from the current move, you explore the best of the "next" moves more thoroughly - by doing the same thing again, starting from each of those "next" moves. The benefit of running random games to completion is it gives (some) long-range vision.

I wondered whether those (relatively) sparse "tracer bullets" actually find whole "good" sections of the tree - or whether it's just the specific leaf nodes it found that were good. The answer is that, since it's being used to guide exploration, it must be good - there'd be no point using it as a heuristic otherwise. So, the existence of a win in a region of the tree is a partial predictor of other wins in that region.

I read up on the rules of Go after asking, and it makes sense: once you own some territory, you'll tend to keep it - so that if a sequence of moves leads to a win, all the prefixes of those moves will also be strong positions along the way.

So it's not brute force, but a predictive heuristic. Still kinda surprising that it works well. :-)


Completely agree about the frustration you can have working on this algorithm. I can't count the number of seemingly very good ideas we had that just hurt its performance.

The worst were ideas that made it better against its former self but worse against human players (it happened!)


That's funny, I had that experience too. At first I tested my implementations against the plain UCT-player, and was puzzled why my "improvements" didn't do as well against GnuGo.


You remind me of one of the early Google engineers who complained that they'd put together some really clever and insightful algorithms, but that brute force with masses of data soundly beat them. And that being surprised can be the beginning of discovery.


The same thing applies in fuzzing (throwing bad data at applications in search of vulnerabilities). The dumbest fuzzers -- consisting of random bit flips, duplicating blocks of data, etc -- will often find several orders of magnitude more bugs than complicated, purpose-built fuzzers. Embracing stupidity is one of the hardest things about becoming a good fuzzer developer, oddly enough.


GP here. Whoa, just had this insight: maybe part of the maturation of a profession is developing practices that work well despite these counterintuitive effects (even if they aren't recognized/formalized as such - see bevan's follow-up too http://news.ycombinator.com/item?id=3860950). These are really hard to derive/infer because they depend on unknown probabilities (because you have no analytical solution to the "game"), so you just do a lot of experiments and develop a rule of thumb. They are informal heuristics. I saw similar approaches in machine learning in grad school.

Change makes this harder in programming: not only do you have to keep relearning, but new technologies change the tradeoffs (e.g. plan vs. hack; performance vs. dev time). The rule of thumb shifts over time. Perhaps we'll develop a meta-rule of thumb...

Go is a vivid metaphor. Other disciplines have traditions/conventions/practices that are taken on faith, and aren't even noticed, let alone questioned or "proven" - because they can't be, without a sufficiently complete formal model of their world. Personally, I've found agile-like pragmatic conventions work well: focus on specific concrete things that are clearly needed (YAGNI); start before you are ready, it will become clear as you go along.


Just out of curiosity, which school did you go to that provides a 3-semester long AI course? Seems like you'd be able to go into quite a bit of depth there.


I went to Middlebury. There was an AI class, but the 3 semesters were research with my late professor Tim Huang.

The department at Middlebury was small, but I doubt that I could have received a better CS education elsewhere. The opportunity to do one-on-one interesting research so early in my college career was invaluable. All three semesters were Go-related, but I didn't focus on the same thing during each.

The first summer was spent scoping out the lay of the land: reading the papers, implementing naive "improvements", and a lot of testing. Our best result was achieved by implementing someone else's idea that we'd read in a paper. It was a humbling experience.

The second semester started with finding ways to visualize the algorithm in action. Since the improvements were so nonintuitive I felt I had to find a better way to understand how it worked and what my changes were doing. Easier said than done.

The third semester was more varied. I did more testing and playing with parameters. I also ported a version to the iPhone (we wanted to be the first and best on there - this was before iOS 2 was released). Then my professor and I started a company that was initially related to Go and iPhones but later pivoted substantially several times.

I'm very thankful for the opportunities I had at Middlebury and the chance to work with Tim Huang.


Sounds like a great education. I've been meaning to explore the research opportunities at my school; hopefully they will work out as well as yours did. Thanks for sharing, and sorry to hear about your loss.


For what it's worth, I wrote a similar post recently that some might find interesting:

http://blog.printf.net/articles/2012/02/23/computers-are-ver...


Good post, and I agree: it is a little sad to see the game cracked in a way that sheds so little light on human cognition.


MoGo beats 5d KGS now? Kind of depressing if true, this day felt much longer off half a decade ago... But I won't believe it until I see a few of these games as they are played myself :)


MoGo is not 5d KGS -- there are some bots that are 4d/5d KGS, using the approach given in the article. The two best (that I am aware of) are Zen19 -- closed source -- and pachi -- available here: http://pachi.or.cz/

The author of Pachi has a paper detailing the major algorithm advancement beyond Monte-Carlo game simulations, which is sharing information across simulations in a way that is totally nonintuitive to me.

For me, a 3d/4d KGS go player, this is all very interesting stuff!


Zen beat Takemiya Masaki a few weeks ago in a non-blitz two-game match at five and four stones. It didn't even use an excessive amount of hardware, just a couple of boxes:

http://gogameguru.com/zen-computer-go-program-beats-takemiya...


Actually, the author considers them to be blitz games: http://gogameguru.com/zen-computer-go-program-beats-takemiya...


Fair enough. I'd call it a "quick" game, analogous to quick games in chess. It was 25 minutes of main time and 5x30 sec byo-yomi periods, IIRC.

Not the kind of blitz people play on KGS, at least.


Go often takes a lot more moves (~300+) to finish than chess. 25min per player / 150 moves = 10 seconds a move per player.

Compared to ~40 moves in a chess game, the same pace would be roughly 3.5 min per side / 20 moves ≈ 10 seconds a move. (http://www.chessgames.com/chessstats.html)


A game with 5x30 sec byo-yomi could take quite a bit longer than the 25 minutes free time, so that's not a good comparison.

https://en.wikipedia.org/wiki/Byoyomi

After the 25 minutes are up, each player has 30 seconds per move; the 5 means they can use up to five of those 30-second periods (one is consumed each time a move takes longer than 30 seconds).


I love the game of "Go" -- it's so strategic and it's particularly difficult when you're playing against a human. The mobile app I have for it is okay, but not great, probably due to the issues regarding algorithmic complexity. In fact, I like it a bit more than chess.

I urge anyone to try it.


One of my best friends is extremely well ranked at chess (his highest ELO was ~2200, he was in France's top 20 of his age group when we were in high school), and has a marked disdain for go.

Any players well versed in the two games care to expound on any key differences in terms of strategy/balance between the two?


I'm of moderate strength at both, being a chess expert and a 1d-1k KGS. Some brief comments:

- Go makes me confront my fear of heuristics. My unconscious ability to pattern match the right moves is always ahead of my ability to understand why they are right, although I try to catch it up by thinking really hard. It's a unique experience.

- Both the rules and strategy of Go feel more elegant in the mathematical sense of being a composition of simple ideas, which I like. Chess feels more like a set of arbitrary pieces of knowledge.

- The handicap system in Go is an objectively awesome way to have players of different strengths play competitively. In chess, you can almost never play with someone 400 rating points your inferior and have it be a satisfyingly competitive game -- giving piece or pawn odds changes the game completely. In Go, if you give someone four stones, it feels like you're still playing Go.

- When watching strong players, I like the fact that there aren't draws in Go. It makes the game dramatic until the end.

- I like chess problems better than Go problems, and I personally find a level of beauty and variety in amazing chess brilliancies which surpasses what I perceive in great Go moves. I don't know of a Go equivalent to http://timkr.home.xs4all.nl/chess/chess.html.

- At least at an amateur level, it's harder to make a critical, near-irrecoverable mistake in Go -- there's not as much of a snowball effect making an advantage into a bigger advantage. That makes it feel less stressful for me when playing long games.


Go problems feel super compelling to me. Typical problems can be boring "counting" exercises, but I find it hard to beat something like the ear reddening move: http://senseis.xmp.net/?EarReddeningMove

There is a great series on youtube on fantastic moves from professional games with commentary in English: http://www.youtube.com/watch?v=CJ9Oexs59CE&feature=relmf...

They are really beautiful in part because they become tesuji through a confluence of factors that reverberate across the entire board. It can be hard to see for amateurs (including me) but once you give them the right context and insight, they become startlingly brilliant.

Also, it's interesting you find it hard to make a critical mistake in Go - this feels very common to me. For example, a decision like deciding to defend a group instead of sacrificing it (which comes up all the time) often snowballs really quickly.


> Also, it's interesting you find it hard to make a critical mistake in Go - this feels very common to me. For example, a decision like deciding to defend a group instead of sacrificing it (which comes up all the time) often snowballs really quickly.

I do that too, but you have a lot of room to back out before your one mistake becomes a losing mistake. If I neglect a group inappropriately, and then make it heavy, and then try desperately to defend it, and eventually fail to survive with no compensation, that's a lot of mistakes; and I usually could have chosen to stop and cut my losses for quite a while before it became fatal.

In chess, you can have long sequences in the midgame and endgame when the game is on a fine tactical balance, and one not-obviously-awful, ill-considered move can either put you in an immediately resignable state or make you spend the rest of the game fighting on the brink of defeat, trying to draw. And that's not really a style of play you can opt out of if your opponent chooses it.


>At least at an amateur level, it's harder to make a critical, near-irrecoverable mistake in Go -- there's not as much of a snowball effect making an advantage into a bigger advantage. That makes it feel less stressful for me when playing long games.

I think you mean, at an amateur level, it's hard to see the critical, irrecoverable mistakes that you and your opponent make ;)


"Both the rules and strategy of Go feel more elegant in the mathematical sense of being a composition of simple ideas, which I like. Chess feels more like a set of arbitrary pieces of knowledge"

That is exactly the reason that I think go (1) may be solved before chess is solved.

For go, there are some results that give hope that an all encompassing theory exists. For chess, the best we have are results of exhaustive searches of relatively simple situations and a bunch of heuristics. It is true that, together, those have led to spectacular results, but I do not think they will lead to a proof about who wins chess.

(1) To be exact: mathematical go, as defined in http://www.amazon.com/Mathematical-Go-Chilling-Gets-Point/dp.... Ko rules can have variations, and there are variations in how to count points at the end of a game. Both may affect only a tiny fraction of games, but an alpha-beta search may need only one path that is a win under ruleset A and a loss under ruleset B to change the outcome of a game.


"it's harder to make a critical, near-irrecoverable mistake in Go"

You must not have seen me play :-p


I especially agree with your first point. In Go, I often play moves just because I find them beautiful. Even in life-or-death situations, I begin by finding the "most beautiful" moves :)


I'm a fairly strong chess player (FIDE ELO 2325) and a much weaker go player (best rank around 5 kyu on internet sites).

Chess is a very tactical game. There are many long term strategic concepts, but victory frequently goes to the alert tactical opportunist (and chess engines are the ultimate alert opportunists).

Also, to win a chess game, you ultimately have to attack and destroy some part of your opponent's position. In go you just need one more point of space than your opponent -- there's no requirement to resort to violence at all.

One thing I definitely prefer about go is the openings. In chess they've been so heavily analyzed that games between professionals frequently go 20+ moves before an original move is played. In go there seems to be an almost limitless number of reasonable approaches to the opening. I guess there just aren't as many good moves available in the chess openings.

I'm not sure why your friend would have disdain for go. I think they're both fascinating games of strategy, certainly more engrossing than any RTS I've ever played.


My main analogy when talking to chess players: chess is a battle and go is a war. I played chess, now only go. My main problem was the arbitrary set of rules in chess: go is more axiomatic. Given two players, black and white pieces, and a board, it's almost "natural" to develop it.


I feel in chess it's about concentrating your power with the goal of total elimination of the other side in one battle. With go, there are multiple battles where you make trade-offs - short term vs. long term, territory vs. influence. Focusing solely on one part of the equation is almost never optimal, yet you can still develop your unique style of play. You would often feel it's not a zero-sum game where both sides are taking what they want (though in the end only one can win). Also, with the huge branching factor, go feels more dynamic and fluid.


I am 2300 Elo in chess and about 4-5k in go (maybe a bit weaker now). My impression is that chess is an almost purely tactical, close-fighting game while go is much more of a strategic, feel-based game. Of course that could be because I am much stronger in chess and go overwhelms me a bit. I like go more in general; it has an aesthetic appeal which I like. The only drawback is that even blitz games are quite long, while in chess you can fire up a few 1 0 games (1 minute per game, no increment) on the internet in no time :)


The key difference is the philosophy behind both games. In Chess, the goal is to capture the king, destroying his army in the process.

In Go, the goal is to build territory; destroying is not necessary (many games finish with very few captures).

I also like Chess, but I find this "building" philosophy fascinating.


There have been analyses of western and eastern warfare strategies that hypothesize westerners use chess-like strategies (frontal assault), while easterners typically use go-like strategies (surrounding the enemy).


As a first impression, that seems pretty specious to me. I wouldn't at all describe frontal assault as being a normal chess strategy -- the most common would probably be the idea of a multi-pronged attack where your opponent cannot simultaneously defend everything, which is an equally fundamental strategy in Go, at a deeper level than "surrounding."


It's less a difference of Western and Eastern and more of a difference between Clausewitz and Sun Tzu. Clausewitz was all about maneuvering the opposing army into a decisive battle (capture the king), and although you can use clever tactics, it is ultimately a war of attrition favoring the strong (such as industrialized nation-states). Sun Tzu was about using ambiguity so that the weaker opponent has a chance of prevailing over a stronger opponent (asymmetric warfare, or "cheating"). There have been Western military commanders that have used a distinctively Sun Tzu flavor.

Even if you played chess with a multi-pronged attack, you're still ultimately gunning for the king. Your objective is absolute, and each side knows this. Your objectives over the course of a Go game are more fluid and ambiguous.


There is fluidity to objectives during a game of chess between strong players. For example:

inflicting/preventing weaknesses, seeking/avoiding favorable exchanges, occupying/closing important lines.

Then all such long-term considerations suddenly become irrelevant when the game descends into a tactical melee.

At any rate, I don't see how the objective of chess (checkmating the opposing king) is any more absolute or less ambiguous than the objective of go (surrounding more space than your opponent).


Depends on how you want to play. Some people try to win by a certain amount of points. Some people try to lose by a certain amount of points. Some teachers move in a way to call attention to certain shapes or tactical sequences. Sometimes you just want to stick to beautiful moves. When you throw in handicap stones, komi, and reverse komi, how much you win or lose by is arbitrary. You can win by 0.5 points, but that's largely dependent on the komi you agreed to. The losing player still has half of the board.

All fluid objectives during a game of chess between strong players are still subordinate to the ultimate objective: capture the king. There's only one king on each side, so there is not much give. There are 361 empty spaces to choose from in Go.


I don't understand what you're trying to say.

There's only one king, but both sides have 15 additional pieces. One extra pawn is often enough to win, and in the majority of games between competent players one side resigns long before an actual checkmate is on the horizon.

If you want to exchange off the pieces and make a draw, you can. If you're only interested in playing surprising/paradoxical/beautiful moves, so be it.

In go, regardless of the handicap, or whether you want to win by more or lose by less, the objective is still to surround more space than your opponent.


Like asynchrony, I don't understand this argument at all. You can play however you want in chess. It doesn't change the actual victory condition, nor does it in Go. The sub-objectives during a game of chess appear to me to be equally deep and important as the sub-objectives during a game of Go; namely, they are the whole point. Like in Go, you win by conceding some sub-objectives and pursuing others.


Strategies have evolved massively over time and with technology.


I'm curious, what is it that your friend doesn't like?


I learnt Go by watching an anime on Netflix called 'Hikaru no Go'. It's pretty kiddish, but fun, and it really inspires you to learn the game!


That's a really fascinating peek at the methods used! I love that they use multi-armed bandit models and that they built a heuristic that wins partly because it's highly parallelizable. This is how you beat a human mind.


This seems like a really interesting area of research. As far as I can tell from my (limited) background on chess/go/etc algorithms, they seem to include lots of hard-coded scenarios (like a bunch of chess openings) and brute force searching.

Are there any teams working on more, I don't know, human-seeming algorithms? Everyone seems to agree that humans are good at these games from pattern recognition on a large corpus of game experiences.

I'm out of my element here and realize I'm approaching "Even though I don't understand what you do, I'm going to assume it's easy for you to implement my suggestion" territory, but what if you encoded positions as general shapes rather than stone-by-stone? What if you played through lots and lots of games and built up Bayesian probability models of how likely certain combinations of shapes lead to victories? I mean, when a human go player looks at a board position and sizes it up what are they doing?

I guess if we aren't particularly good at computer vision and understanding how the human mind works yet, then there are hard problems here, but it seems like a neat sort of interdisciplinary approach.

edit: It looks like NeuroGo[1] linked to from an article in a different comment is more akin to what I'm thinking. It's just not very good.... ha.

[1] http://webdocs.cs.ualberta.ca/~emarkus/neurogo/neurogo1996.h...


> As far as I can tell from my (limited) background on chess/go/etc algorithms, they seem to include lots of hard-coded scenarios (like a bunch of chess openings) and brute force searching.

When playing chess at an advanced level, you need to know a fair number of different openings (that even have names). Chess AI programs have a huge library of openings, endings and other game situations that are evaluated.

In contrast, Go does not have a "library" of openings the same way chess does. There are simply too many ways a game can progress. However, in Go, there are many "micro patterns" that repeat over and over again, and you have to learn some of these even at a beginner level. There are numerous recurring formations, like the "Ladder", where one player inevitably loses and it should be recognized after 3-5 stones are in a specific form.

So Go has these patterns that recur, but they aren't as clear and simple as in chess. While a ladder can be recognized after just a few stones, the outcome may depend on a stone on the other side of the board that may have been there for 100 turns. This is what makes pattern recognition tricky for Go AI.


I lost a large bet in 2010 on being able to create a computer program capable of beating my roommate at Go. UCT was my go-to solution to this problem but my opponent was able to beat my creation on a 13x13 board since he spent every free waking moment playing Go against a commercial app on the iPhone. My UCT-based creation ended up beating me but losing against someone who trained intensively for six weeks.


Go is a fantastic game. It's also a great analogy of the eastern philosophy with its emphasis on balance and long-term strategic vision.


That's what I really like about Go. You always need to balance short-term greed and captures against vision.

There is also the balance between keeping your groups strong and claiming a lot of territory.


Exactly. I generally find that when I am overzealous while playing, I tend to lose.

I find it similar to what I learned from practicing softer martial arts: the person that becomes tense/rigid loses.


Yeah. Cheng Man-ch'ing. "Invest in loss."


The fact that computers have such a hard time is a reason it is worth playing and makes Go attractive. The fact that chess can be reduced just makes chess unappealing.


This isn't the first time I've seen this sentiment about Go from (I presume) a programmer, and I have always found it puzzling. It seems like the opposite of the Seneca quote, "Men love their country, not because it is great, but because it is their own." Loving Go because it is hard for a computer seems like a scornful reason to love it compared to loving it because it is Go. Like loving your wife because she is beautiful, it is intrinsically temporary. There are better places to search for mystical appreciation for human cognition, specifically places that lack definitions like starting, ending and success.

What I find attractive about Go is that it emphasizes subtle influences over outright attacks. Everything is vague, slowly coming into focus. You feel more like you're trying to secretly drive a Ouija board than trying to eliminate the opponent. I seldom feel like, this is the move that ruined my game, whereas with chess I can almost always immediately identify the move that costs me the game.

Then again, I am terrible at Go and I hardly ever play, but I have a sense, possibly illegitimate, that getting better at Go would make my whole life better, whereas getting better at chess would simply make me a better conniver, and I'm already pretty good.


It is not a scornful sentiment, it is the same appreciation that you are referring to. Just like computing 2+2 is not the same as theoretical mathematics and creativity in mathematics, finding a game that is not computer reducible is an aspect of beauty in the game that is not present in chess because it is reducible. edit: seems that go programs are quite advanced and may overtake human players as well, though the methods are a bit different from chess


I find this adversarial personification of computers curious. Does the fact that computers can drive cars make driving unappealing? At any rate, the computer never "has a hard time" -- It's us poor humans that have been having a hard time writing programs that play go.


Am I the only one who thought the title was a pun on Game of Thrones and Go as in Go (the programming language)?


I didn't make that connection. But even if that was the intention, who cares?



