AlphaGo documentary (2020) [video] (youtube.com)
249 points by rdli on Sept 11, 2021 | 67 comments



Michael Redmond's Go TV (he was the U.S. commentator for the official AlphaGo games): https://www.youtube.com/channel/UCRJyagla1B5cxIfR4i2LdgA/vid...

He has some nice playlists:

- how to play go (for beginners) https://www.youtube.com/watch?v=KTWujSwL2bQ&list=PLW5_cMTm0w...

- basic openings (joseki) https://www.youtube.com/playlist?list=PLW5_cMTm0wvZOTchMWZag...

- joseki made popular by AlphaGo/AIs https://www.youtube.com/playlist?list=PLW5_cMTm0wvZU5pQhmQFw...

Also of interest: Go Pro Yeonwoo (Korean Go professional) https://www.youtube.com/user/goingceo/videos


I'm so pleased to see this upvoted. Michael Redmond is unspeakably wonderful. He's a top professional player, is fantastic at explaining everything to a kyu player, and prepares his videos really diligently.


Isn't he also the only American to reach the professional grade of 9-Dan?


Ahh I wish I'd watched the first few in that beginners playlist before watching the movie! Thanks for that.


I don't play Go, but the AlphaZero chess games are quite beautiful. Agadmator has described the weird, inhuman machine continuations: "disgusting engine lines". They're correct but incomputable by a human and just look weird. AlphaZero also had some beautiful lines that looked human but slightly counterintuitive. They were the sort of moves that humans could learn from.

Are the AlphaGo games similar?


I can try to explain, for people who don't play Go, the most important thing that changed:

The AI popularized a move that has been known for ages but wasn't used much by strong humans because it was considered a mistake (the early 3-3 invasion).

The thing with this move is: it gives you a small advantage very early in the game, but gives your opponent a superior fighting position in exchange. This fighting position was considered better; not early on, but in the later stages of the game it outweighs the small advantage taken early.

Then came AI, and it found that, given enough fighting power, it can mostly cancel out that later-game advantage.

So the new move is actually a good one, but only if you have very strong fighting skills. AI has this; some very strong players do as well.

But for the rest of us, the pre-AI strategy is often better because we can't handle the complexity.

It is common to see casual players copying the AI style and digging their own grave while doing so.


I'd like to add something: it is not necessarily an error for humans to play this new AI opening strategy.

If a human player plays the new AI move, an attack to cancel out the better fighting position is unavoidable. We've learned from AI how to attack and defend in this situation, so even human players are prepared and know what to do. So it is a fair game between the two human brains again.

The player who comes out of this fight with the worse position is at a rather large disadvantage for the rest of the game.

So you can think of this new strategy as doubling the stakes of the game very early on. For players who are really good at fighting and can handle the complexity that will arise from this fight, this strategy is a good choice.


> But for the rest of us, the pre-AI strategy is often better because we can't handle the complexity.

> It is common to see casual players copying the AI style and digging their own grave while doing so.

I think this point is greatly overstated. Amateur go style has always been modelled on professional play even though amateurs have no idea of the subtleties of it, and this was rarely bemoaned much in the past. It's easy to find examples of amateur players getting it wrong and turning a slightly-good position into a bad one, but this is true of just about anything and I think there's a lot of confirmation bias in associating it with AI moves in particular - it doesn't in itself mean they've disadvantaged themselves by trying.

It also greatly overstates how bad it is to play the 3-3 early and not fight aggressively. Even if you don't worry about the fighting variations we're talking about a small number of points, much less than amateur games tend to swing via mistakes every few moves. This is also true about those professional-like lines that have always been mimicked but played imperfectly, and is probably a big reason the lack of amateur understanding of them never mattered too much. Plus let's not forget that both amateur players in the game are relatively bad at fighting, it's just about as hard for the defensive player to really get it right as for the offensive player to utilise their theoretical advantage.

Overall I think the modern focus on AI-like play is much more of a fashion in line with other general shifts following what's professionally popular than a mistake. Players do pay attention to how well they do and associate it with specific moves and lines they recognise they don't understand well, but AI-inspired lines don't seem to have attracted any widespread cautiousness.

> Then came AI and it found out, given enough fighting power, it can mostly cancel that later game advantage.

Also worth noting that the AIs play the 3-3 invasion in a specific different way to what was the accepted human pattern, and this alone does make a big but understandable difference to how the shape commonly continues. It isn't that they only turned around the evaluation of the standard position through superhuman reading, they revealed a line that humans had undervalued but could then learn from.


I'm not a Go player but watched some of the famous Lee Sedol match on youtube while it was happening. One channel had a Korean professional 9-dan (= super grandmaster) commentating. At one point in game 1, AlphaGo made a move that shocked the professional. He spent a while analyzing it, asking whether it could work, etc. The next few moves were routine, so he commented on them briefly and then went back to talking about the surprising move. More moves came and he commented on those. Then perhaps 20 minutes later, during the wait for another move, he went back to the surprising move and said "if that move works, AlphaGo is going to win all 5 games". It was so far-reaching and subtle that if it was not a mistake, it could only be made by a player of superhuman strength.

As we know, Lee Sedol managed to win game 4 by a lucky break, but lost the rest, so the GM's comments were incredibly perceptive. I took a few notes while watching so I may be able to dig them up and check the details, but the surprising play became quite well-known and involved something called a "dog's head" formation, which might be enough for Go aficionados to identify it.


> Lee Sedol managed to win game 4 by a lucky break

Move 78 was not a lucky break. At least in my mind.

There are not many movies I've cried at; in this one I cried twice: once when Lee Sedol was apologizing for the loss, and once when he made move 78 and AlphaGo's estimated probability of winning dropped by 10%.

Now, for me, move 78 means hope.


I don't mean to burst your bubble, but since then we have had significantly better algorithms than AlphaGo, and much more compute.

Unless humanity moves to a point where the average skill is that of Lee and the average intelligence is akin to JvN's, I don't see hope for humanity.


Computers are still bad at something as basic for humans as driving, so I'd say there's still hope.

Although I always wonder: if the software were designed to allow the computer to make mistakes and take risks to a reasonable degree, like humans do, would it still be a crappy driver?


This movie got me very emotional as well, which is not common for me, particularly with docs.


I'd love to know which professional made that comment, so if you don't mind, can you please look it up?

The move in question should be move 102, an invasion that a human would not try.


I'm having trouble finding my notes on the match, but after looking again on youtube, I'm pretty sure the professional was Myungwan Kim.


Yes, very much so. For example, it used to be almost axiomatic (taught in your second or third Go lesson) that placing a white stone at the 3-3 point when black had played at 4-4 was very bad, but this video shows how AlphaGo has radically changed that view:

https://www.youtube.com/watch?v=2khNnE5Q3GM


Go is a bit different from chess: there are so many options that you can't really have a "disgusting engine line", because either the sequences are forced, and humans can eventually find them too, or the number of lines the move affects is so vast that we would never realise how clever it is.

Computer Go is obviously making calculations that a human couldn't, but to an observer it just looks like not making any mistakes. There is the occasional spectacular attack or defence, but humans do that too from time to time; human players often see or try moves that others wouldn't. In complicated situations computers will do a better job of assessing what the impacts are.

In Go, to win, a player needs to consistently make moves that are on average 0.0025 points better than their opponent's for 300 moves (0.0025 × 300 = 0.75 points: enough for a half-point win). It is hard to detect the genius when a neural net decides it is going to win the game that way.

The games are interesting to study; they are all masterworks. But the strength of a Go player is in the consistency rather than in individual flashes of insight.


I disagree with this comment quite strongly. Go has a lot of high level theory and pattern recognition that is very different from just reading ahead, and AlphaGo has opened up a lot of new theory that we didn't have before, not unlike what was done by Go Seigen and others in the Shin Fuseki era.

I feel like the emphasis of your comment misses how the game has changed for humans since AlphaGo.


> not unlike what was done by Go Seigen and others in the Shin Fuseki era

That is the crux of it though; we're seeing the same effect as every time a new strong player arises. That doesn't scream "wow, this is like when chess engines discovered impossible new lines!".

It was probably a matter of time until some human pro started winning with the 3-3 invasion for example. It isn't like 3-3 invasions are a radical new thing, professional play just underweighted them. I've played amateurs who loved 3-3 invasions.

There is no question that the neural nets are substantially stronger than humans and that they're causing a monumental rethink of all aspects of the game. They're a big deal. But the difference is that they are bringing a very large number of innovations all at once, combined with far more accurate reading, rather than that any individual innovation is special.

It is really difficult for one sequence in Go to be special, because it is so common for a special sequence to happen.


I remember reading that humans play the corners and edges of the board, where the game is somewhat understandable given a combination of keen perception, pattern recognition, and concrete calculation. It is, on the other hand, inhumanly hard to understand play deeper in the board's interior (other than by reaching it incrementally from the edges), maybe excepting a few legendary players like Go Seigen. But the AIs have been going there and doing stuff, letting humans learn things that were previously unknown.


That doesn't sound right. Humans and AIs play in the corners and on the sides first because that is where it is easier to make points. One of the traits of how neural nets learn Go is they start by learning to fight and then quickly realise that the corners are more important than the centre. AIs largely confirmed the usual human pattern.

AIs have changed all the details about what is considered an acceptable result in the corners.

AIs do brutally outplay humans in the centre when the game gets there, no question. Normally after outplaying them everywhere else first.


> That is the crux of it though; we're seeing the same effect as every time a new strong player arises.

Ok, I agree with you there. It's like AlphaGo is the next in the line of Dosaku -> Shusaku -> Go Seigen. These types of players that have revolutionized the game have been very rare, and I see AlphaGo as fitting into that mold.

> That doesn't scream "wow, this is like when chess engines discovered impossible new lines!".

I admittedly don't know enough about chess to understand this aspect of the discussion.


I'm just a middling Go player, but I'd say yes, the two are quite comparable, rooted at least partially in a similar trait: AlphaGo doesn't care about the margin of victory, only the probability of victory. This leads it to play a much more influence/position-oriented (versus territory-oriented) style at times in Go, occasionally shockingly so.


Agreed. I think ChessNetwork analyzed several of the published games as well, and some of the Leela games; they're so much more interesting than most of what I see out of Stockfish...


Lee Sedol was rated #4 at the start of 2016, the year the match was played. Three years later, aged 36,

> Lee announced his retirement from professional play, stating that he could never be the top overall player of Go due to the increasing dominance of AI. Lee referred to them as being "an entity that cannot be defeated"

https://en.wikipedia.org/wiki/Lee_Sedol

His rating seems to have crashed and burned from soon after the match in 2016 until his retirement in 2019.

https://www.goratings.org/en/history/

The #1 rated player is now Shin Jin-seo.

> In January 2019, Shin was defeated by South Korean Go program HanDol. The program defeated the top five South Korean go players. HanDol has been compared to AlphaGo, but is considered to be weaker.

https://en.wikipedia.org/wiki/Shin_Jin-seo


"7% Documentary: Behind the scenes of Fine Art AI - 纪录片《7%》:揭秘人工智能“绝艺”夺冠幕后 腾讯网 - English subtitles" (https://v.qq.com/x/page/r0025m06t5o.html) is a neat 2018 documentary about Fine Art, a world-class Go AI created by Chinese developers at Tencent. The documentary features extensive commentary by the main programmers, Ma Bo and Tang Shanmin, as well as the project lead Liu Yongsheng.

This bit from 4:02 stuck out to me:

INTERVIEWER: What level do you think Fine Art has reached?

MA BO: About the same level AlphaGo had when playing against Li Shiqi [Lee Sedol] last year.

INTERVIEWER: Then wouldn't you regard what you do as redundant work?

MA BO: How should I say this? It's like when China made the atom bomb after America. Was that redundant work?


The Chinese have a level of nationalism that many nations lack.


Well, there are many "nationalists" everywhere, but instead of going, for example, "hey, China built the most extensive high-speed railway network in the world in the last 15 years, and the fastest operating trainsets; we can and should do better", they do something else entirely.


That is a good thing.


I feel it just artificially and pointlessly divides us and creates needless "us vs them" situations.

Nationalism has interesting "properties".

It carries with it the implicit assumption that people miles away in the same nation, whom you have never met, share some of the same beliefs and values you do.

It somehow justifies taking credit for the accomplishments of others in the same nation even though you haven't done anything to contribute to those accomplishments.


> I feel it just artificially and pointlessly divides us and creates needless "us vs them" situations.

You may be objectively correct, but if it weren't for nationalism, my culture wouldn't exist today and I would have been a German, as would have been my ancestors for more than a century. Just because something is "artificial and pointless" doesn't mean that it's bad.


> but if it weren't for nationalism, my culture wouldn't exist today

I wouldn't be so sure about that. Frequently the conqueror gets culturally influenced by the people they conquer, especially 100+ years ago, when totalitarian levels of control were much harder to implement.

Also, when your people repelled the Germans, it might have had more to do with cultural differences than nationalism. Empires that succeed in holding on to captured territory are often those that absorb and incorporate the culture of the people they conquered.


Nationalism itself has a lot to do with cultural differences. Especially Czech nationalism.

> empires that succeed in holding on to captured territory are often those that absorb and incorporate the culture of the people they conquered

Well I didn't notice a lot of Germans and Austrians speaking Czech, to be honest.


No, it's not:

"identification with one's own nation and support for its interests, especially to the exclusion or detriment of the interests of other nations."


Why did the mods change the title and put an incorrect year in it? The movie is from 2017.


I originally watched this on Amazon Prime but didn't realize it was now free on YouTube, which is why I submitted it. I suppose I could have titled it "AlphaGo Documentary on YouTube" or some such ...

(original poster)


Your original submission and title were fine. It's the addition of (2020) I dislike, which I presume was added by the mods.

https://hackernewstitles.netlify.app/


I could've sworn I saw it for free on YouTube in 2018 or so. Am I just remembering wrong?


I also saw it for free back in the day.


Thanks for that. I have seen it, in that case. I saw 2020 and thought, wow, there's a new documentary, I'll watch it when I get a chance. Mods, can you fix this?


The upload date is 2020.


I'm curious what's next for this series of AI at DeepMind. Are there other, more challenging problems they are tackling at the moment? I read they have even already mastered StarCraft 2.

Is it the case that they have stopped this series of AI and gone all in on protein folding for now?


AlphaStar is a really amazing achievement, but it is not like AlphaZero for Go, which completely made people rethink how Go should be played. AlphaStar didn't "solve" SC2.

It can still fall to very simple early strategies, and in a few cases it doesn't seem to understand some basic abilities despite having played millions of games. Even I could beat the advanced version of AlphaStar at BlizzCon, where DeepMind had a dedicated playtest area.

After all, RTS games like StarCraft II, which have imperfect information and asymmetrical units, and which require real-time reactions alongside long-term strategic planning, are very different from Go...

(and protein folding is just too impactful)


This blog post mentions testing on Atari and applications to chemistry and quantum physics:

https://deepmind.com/blog/article/muzero-mastering-go-chess-...


AlphaFold 2 solved the CASP protein-folding problem that, AFAIU, e.g. Folding@home et al. have been churning at for a while. From November 2020: https://deepmind.com/blog/article/alphafold-a-solution-to-a-...

https://en.wikipedia.org/wiki/AlphaFold#SARS-CoV-2 :

> AlphaFold has been used to predict structures of proteins of SARS-CoV-2, the causative agent of COVID-19 [...] The team acknowledged that though these protein structures might not be the subject of ongoing therapeutical research efforts, they will add to the community's understanding of the SARS-CoV-2 virus.[74] Specifically, AlphaFold 2's prediction of the structure of the ORF3a protein was very similar to the structure determined by researchers at University of California, Berkeley using cryo-electron microscopy. This specific protein is believed to assist the virus in breaking out of the host cell once it replicates. This protein is also believed to play a role in triggering the inflammatory response to the infection (... Berkeley ALS and SLAC beamlines ... S309 & Sotrovimab: https://scitechdaily.com/inescapable-covid-19-antibody-disco... )

Is there yet an open implementation of AlphaFold 2? edit: https://github.com/search?q=alphafold ... https://github.com/deepmind/alphafold

How do I reframe this problem in terms of fundamental algorithmic complexity classes (and thus find the Quantum Algorithm Zoo entry that might optimize the computationally hard part of the hot loop that is the cost driver in this implementation)?

To cite in full from the MuZero blog post from December 2020: https://deepmind.com/blog/article/muzero-mastering-go-chess-... :

> Researchers have tried to tackle this major challenge in AI by using two main approaches: lookahead search or model-based planning.

> Systems that use lookahead search, such as AlphaZero, have achieved remarkable success in classic games such as checkers, chess and poker, but rely on being given knowledge of their environment’s dynamics, such as the rules of the game or an accurate simulator. This makes it difficult to apply them to messy real world problems, which are typically complex and hard to distill into simple rules.

> Model-based systems aim to address this issue by learning an accurate model of an environment’s dynamics, and then using it to plan. However, the complexity of modelling every aspect of an environment has meant these algorithms are unable to compete in visually rich domains, such as Atari. Until now, the best results on Atari are from model-free systems, such as DQN, R2D2 and Agent57. As the name suggests, model-free algorithms do not use a learned model and instead estimate what is the best action to take next.

> MuZero uses a different approach to overcome the limitations of previous approaches. Instead of trying to model the entire environment, MuZero just models aspects that are important to the agent’s decision-making process. After all, knowing an umbrella will keep you dry is more useful to know than modelling the pattern of raindrops in the air.

> Specifically, MuZero models three elements of the environment that are critical to planning:

> * The value: how good is the current position?

> * The policy: which action is the best to take?

> * The reward: how good was the last action?

> These are all learned using a deep neural network and are all that is needed for MuZero to understand what happens when it takes a certain action and to plan accordingly.

> Illustration of how Monte Carlo Tree Search can be used to plan with the MuZero neural networks. Starting at the current position in the game (schematic Go board at the top of the animation), MuZero uses the representation function (h) to map from the observation to an embedding used by the neural network (s0). Using the dynamics function (g) and the prediction function (f), MuZero can then consider possible future sequences of actions (a), and choose the best action.

> MuZero uses the experience it collects when interacting with the environment to train its neural network. This experience includes both observations and rewards from the environment, as well as the results of searches performed when deciding on the best action.

> During training, the model is unrolled alongside the collected experience, at each step predicting the previously saved information: the value function v predicts the sum of observed rewards (u), the policy estimate (p) predicts the previous search outcome (π), the reward estimate r predicts the last observed reward (u). This approach comes with another major benefit: MuZero can repeatedly use its learned model to improve its planning, rather than collecting new data from the environment. For example, in tests on the Atari suite, this variant - known as MuZero Reanalyze - used the learned model 90% of the time to re-plan what should have been done in past episodes.
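
To make the quoted description concrete, here is a minimal Python sketch of MuZero's three learned functions and a planning loop over them, with random toy stand-ins for the trained networks. Everything here (the names, the greedy lookahead in place of full Monte Carlo Tree Search) is an illustrative assumption, not DeepMind's code:

    import random

    def representation(observation):
        # h: map a raw observation to an initial hidden state s0
        return tuple(observation)

    def dynamics(state, action):
        # g: map (hidden state, action) to (next hidden state, reward estimate)
        next_state = state + (action,)
        reward = random.random()  # toy stand-in for the learned reward head
        return next_state, reward

    def prediction(state):
        # f: map a hidden state to (policy over actions, value estimate)
        policy = {a: 1.0 / 3 for a in range(3)}  # 3 toy actions
        value = random.random()  # toy stand-in for the learned value head
        return policy, value

    def plan(observation, depth=2):
        # Greedy lookahead using only the learned model; MuZero proper
        # runs Monte Carlo Tree Search over these same three functions.
        s0 = representation(observation)
        root_policy, _ = prediction(s0)
        best_action, best_return = None, float("-inf")
        for first_action in root_policy:
            state, action, total, value = s0, first_action, 0.0, 0.0
            for _ in range(depth):
                state, reward = dynamics(state, action)
                policy, value = prediction(state)
                total += reward
                action = max(policy, key=policy.get)  # follow the policy head
            total += value  # bootstrap with the value estimate at the leaf
            if total > best_return:
                best_action, best_return = first_action, total
        return best_action

    print(plan([0] * 9))  # e.g. a flattened empty 3x3 toy board

Note that plan() never consults the real game rules: the environment only enters through the training targets for h, g, and f, which is the point the quoted post is making.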

FWIU, from what's going on over there:

AlphaGo => AlphaGo {Fan, Lee, Master} => AlphaGo Zero => AlphaZero => MuZero

AlphaGo Zero: https://en.wikipedia.org/wiki/AlphaGo_Zero

AlphaZero: https://en.wikipedia.org/wiki/AlphaZero

MuZero: https://en.wikipedia.org/wiki/MuZero

AlphaFold {1,2}: https://en.wikipedia.org/wiki/AlphaFold

IIRC, there is not yet an official implementation of e.g. AlphaZero or MuZero for e.g. openai/gym (and openai/retro) for comparing reinforcement learning algorithms? https://github.com/openai/gym

What are the benchmarks for Applied RL?

From https://news.ycombinator.com/item?id=28499001 :

> AFAIU, while there are DLTs that cost CPU, RAM, and Data storage between points in spacetime, none yet incentivize energy efficiency by varying costs depending upon whether the instructions execute on a FPGA, ASIC, CPU, GPU, TPU, or QPU? [...]

> To be 200% green - to put a 200% green footer with search-discoverable RDFa on your site - I think you need PPAs and all directly sourced clean energy.

> (Energy efficiency is very relevant to ML/AI/AGI, because while it may be the case that the dumb universal function approximator will eventually find a better solution, "just leave it on all night/month/K12+postdoc" in parallel is a very expensive proposition with no apparent oracle; and then to ethically filter solutions still costs at least one human)


I'm seeing a lot of cool stuff that people are starting to build on AlphaFold lately:

- ChimeraX: https://www.youtube.com/watch?v=le7NatFo8vI

- Foldit: https://www.youtube.com/watch?v=nA5GzQDTF20


Libraries.io indexes software dependencies; but no Dependent packages or Dependent repositories are yet listed for the pypi:alphafold package: https://libraries.io/pypi/alphafold

The GitHub network/dependents view currently lists one repo that depends upon deepmind/alphafold: https://github.com/deepmind/alphafold/network/dependents

(Linked citations for science: How to cite a schema:SoftwareApplication in a schema:ScholarlyArticle , How to cite a software dependency in a dependency specification parsed by e.g. Libraries.io and/or GitHub. e.g. FigShare and Zenodo offer DOIs for tags of git repos, that work with BinderHub and repo2docker and hopefully someday repo2jupyterlite. https://westurner.github.io/hnlog/#comment-24513808 )

/?gscholar alphafold: https://scholar.google.com/scholar?q=alphafold

On a Google Scholar search result page, you can click "Cited by [ ]" to check which documents contain textual and/or URL citations gscholar has parsed and identified as indicating a relation to a given ScholarlyArticle.

/?sscholar alphafold: https://www.semanticscholar.org/search?q=alphafold

On a Semantic Scholar search result page, you can click the "“" to check which documents contain textual and/or URL citations Semantic Scholar has parsed and identified as indicating a relation to a given ScholarlyArticle.

/?smeta alphafold: https://www.meta.org/search?q=t---alphafold

On a Meta.org search result page, you can click the article title and scroll down to "Citations" to check which documents contain textual and/or URL citations Meta has parsed and identified as indicating a relation to a given ScholarlyArticle.

Do any of these use structured data like https://schema.org/ScholarlyArticle ? (... https://westurner.github.io/hnlog/#comment-28495597 )


I wish they had tried to tackle StarCraft: Brood War AI, where the game is (arguably) even harder than SC2. Besides, there's a healthy Brood War AI community with some very strong AIs out there.


I’d like to see AI take on a really difficult game like Root.


This is a great watch. Since then there has been AlphaGo Zero, which surpassed the AI you see playing Lee after just 3 days of training.

https://deepmind.com/blog/article/alphago-zero-starting-scra...


There has also been AlphaZero [1], a generalized version of AlphaGo that has also been trained to play chess and shogi (learning from only the rules), and MuZero [2], a further generalization that can also play Atari games and does not even use the rules of the game when doing tree search; it has to learn a model of the rules instead.

[1] https://deepmind.com/blog/article/alphazero-shedding-new-lig...

[2] https://deepmind.com/blog/article/muzero-mastering-go-chess-...


Highly recommend the Deep Blue documentary too. https://www.youtube.com/watch?v=HwF229U2ba8&ab_channel=Fredr...

It's easy to forget that none of this was guaranteed to work. Nowadays it feels inevitable that Chess and Go would fall to computers, but in the moment it was quite a different experience.


It really is a brilliant documentary, even if you have no interest in artificial intelligence or Go.


This documentary really helped me understand and appreciate Go way more. It started my newfound love of the game. Wish more people in the US played!


There are quite a lot of us actually. See you on online-go.com.


Thanks for this: an enjoyable documentary I remember seeing on Netflix a few years ago but never getting around to watching.

I understand it was made with a broader audience than HN in mind, but I wish they had extended the runtime to cover the technical aspects in greater depth.

As it is, it’s a quality sports movie, although I do find it amusing that in a match between a self-made young man and a cutting-edge piece of technology developed by one of world’s most powerful companies, the latter was initially presented as the underdog.


More recent but not full length: AlphaFold https://www.youtube.com/watch?v=gg7WjuFs8F4&t


Feel free to add/play me: https://online-go.com/player/1021081/ (~8k)


The best way to learn the rules of the game, as well as basic skills, is an app called badukpop. https://badukpop.com/

Once you've learned enough you can try playing via OGS, http://online-go.com

Don't expect to win right away.


One of the best films I've ever seen. Brilliant storytelling about man vs machine at the last frontier of our minds.


Narration, BGM: all good. Even someone with absolutely no interest in AI will get goosebumps.

(Man) vs (Man with Machine)


This film in large part inspired me to start playing Go


It's a great film and a great game.

I learned how to play about forty years ago, when I was studying math in graduate school in the United States. I liked go partly because I felt refreshed after playing it, whether I won or lost, while chess had given me headaches.

I moved to Japan a few years later, and for a while I played fairly regularly at go clubs in Kabukicho and Takadanobaba in Tokyo. The shot early in the film of some old guys playing go reminded me of those places. They were seedy and smoky, but you could get a game any time of the day or night. Many of the older players seemed practically to live there. Once I got matched against someone who I found out later was a highly ranked professional. At first he seemed interested in playing against a foreigner, but he started looking bored after about my fifth move.

I stopped playing after my kids were born, and I haven't sat at a board—or played a game against a human online—in thirty years. But now that I’m approaching retirement myself, and I have a grandchild who will soon be old enough to learn, I’m thinking that maybe I should take it up again.


It's Star Wars for AI starters.


(2020)


The movie is from 2017.


Collective > individual. Do I need to say more?



