Again, I didn't write this, but in general, to take a chess engine and apply it to another game, the main things you'd have to change are the board representation, and you'd have to retrain the neural net (likely redesign it as well). The tree search should work assuming the game you're moving to is also a perfect-information, minimax game, though it could also work for other games. There's a good chance there's prior work on applying bitboards (a board representation) to whichever game that is. The Chess Programming Wiki is an invaluable resource for information about how engines like this work. Godspeed.
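To make the bitboard idea concrete, here's a toy sketch in Python (function names are mine, not from any particular engine): each piece type gets one 64-bit integer, with one bit per square, so occupancy tests and piece counts become single bitwise operations.

```python
# Sketch of a bitboard: one 64-bit integer, one bit per square
# (a1 = bit 0 ... h8 = bit 63). Names are illustrative only.

def square_index(file: str, rank: int) -> int:
    """Map a square like ('e', 2) to a bit index 0..63."""
    return (rank - 1) * 8 + (ord(file) - ord('a'))

def set_square(board: int, idx: int) -> int:
    """Return the board with the given square's bit set."""
    return board | (1 << idx)

def occupied(board: int, idx: int) -> bool:
    """Test whether a square's bit is set."""
    return bool(board & (1 << idx))

# Two white pawns, on d2 and e2:
pawns = 0
pawns = set_square(pawns, square_index('d', 2))
pawns = set_square(pawns, square_index('e', 2))

print(occupied(pawns, square_index('e', 2)))  # True
print(occupied(pawns, square_index('e', 4)))  # False
print(bin(pawns).count('1'))                  # 2 pawns on the board
```

The payoff in a real engine is that move generation and attack detection become a handful of shifts, ANDs, and ORs over these integers, often backed by precomputed lookup tables.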
The first non-trivial chess programs were 'playing' in the late 40s (with pen-and-paper CPUs). Some of these included features you'll still see today.
Claude Shannon (https://www.chessprogramming.org/Claude_Shannon) proposed two types of chess programs: brutes and selectives. Alpha-beta is an optimization for brutes, but many early chess programs were selective, with heavyweight eval or with delayed eval.
Champernowne (Turing's partner) said this about Turochamp: "We were particularly keen on the idea that whereas certain moves would be scorned as pointless and pursued no further others would be followed quite a long way down certain paths."
Not the author, but probably very poorly. This seems more like a proof of concept: it's written in Python and has a very basic tree search that is light on heuristics. The NN is likely undertrained too, but I can't tell from the repo. In comparison, Stockfish is absurdly optimised in every aspect, from its data structures to its algorithms. Considering how long it took the LeelaZero team to get their implementation competitive with the latest Stockfish, I'd be shocked if this thing stood a chance.
Of course, beating Stockfish is almost certainly not the goal of this project; it looks more like a project to get familiar with MLX.
Please tell me this is sarcasm. I mean, I know people love to extrapolate current LLM capabilities into arbitrary future capabilities via magical thinking, but "infinite context" really takes the cake.
IANAB, but here's what I do understand. It depends on what you mean by different genes. Information-wise, DNA is a string of base-4 digits (nucleotides) read in groups of 3; these groups are called codons. Each codon corresponds to a specific amino acid*. A protein is made up of a bunch of different amino acids chained together. The gene determines which amino acids are chained together and in what order. This long chain of amino acids tends to fold up into a complex three-dimensional structure, and that structure determines the protein's function.
Now, there are a couple of ways a gene could be different without altering the protein's function. It turns out multiple codons can code for the same amino acid, so if you switch out one codon for another that codes for the same amino acid, you get a chemically identical sequence and therefore the exact same protein. The other way is to switch an amino acid, but in a way that doesn't meaningfully affect the folded 3D structure of the finished protein, or at least not in a way that alters its function. Both these types of mutation are quite common; because they don't affect function, they're not "weeded out" by evolution and tend to accumulate over evolutionary time.
* Except for a few known as start and stop codons, which delineate the start and end of a gene.
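The codon → amino acid mapping and the "synonymous mutation" point above can be made concrete with a toy sketch. This uses only a handful of entries from the standard genetic code (the real table has 64 codons), just enough to show two different DNA sequences producing the same protein:

```python
# Toy codon -> amino acid translation. Only a few entries from the
# standard genetic code are included; a real table covers all 64 codons.

CODON_TABLE = {
    'ATG': 'Met',   # methionine; also the usual start codon
    'GGT': 'Gly', 'GGC': 'Gly', 'GGA': 'Gly', 'GGG': 'Gly',  # 4 codons, 1 amino acid
    'AAA': 'Lys', 'AAG': 'Lys',
    'TAA': 'STOP', 'TAG': 'STOP', 'TGA': 'STOP',  # stop codons
}

def translate(dna: str) -> list:
    """Read the sequence 3 bases at a time until a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE[dna[i:i + 3]]
        if aa == 'STOP':
            break
        protein.append(aa)
    return protein

# A synonymous ("silent") mutation: GGT -> GGC changes the DNA,
# but the translated protein is identical.
print(translate('ATGGGTAAATAA'))  # ['Met', 'Gly', 'Lys']
print(translate('ATGGGCAAATAA'))  # ['Met', 'Gly', 'Lys']
```

Glycine here is the clearest case: any codon starting with GG codes for it, so a mutation in the third position of that codon can never change the protein.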
I think this is the comment character in some assembly formats too? I remember seeing it once and wondering if maybe that's where the idea of ending lines in C with semicolons came from, since in the examples I saw in school, a large number of lines had trailing comments describing what the operation was doing.
IDA uses ; for comments in its disassembler view, but it looks like C-style // single-line comments and /* comment blocks */ are also accepted by certain tools: https://en.wikibooks.org/wiki/X86_Assembly/Comments
Based on what, your personal experience talking to stoners in college? Plenty of people smoke socially, and among people who smoke cannabis, only a small fraction "lock themselves into apartments and smoke weed all day". In fact, that fraction is much smaller than the fraction of drinkers who lock themselves in their apartments drinking all day. And I don't know if you've spent any time with those people, but they all have cognitive issues too. As do people who do speed all day. As do people who do heroin all day.
Cannabis can be a wonderful social drug, as can amphetamine or cocaine, and I'm sure even opioids, though I don't touch those because I have significant genetic risk for opioid addiction.
I don't think judging a drug by its most addicted users is fair, especially since you're doing that for weed and then comparing it to the most reasonable alcohol users.
All that being said, I absolutely agree that cannabis is not good for you in large quantities. It's absolutely bad for cognitive ability, working memory, and long-term memory when used chronically. But the only people I've met who would disagree with that are cannabis addicts, and usually young ones at that (the older ones tend to figure it out, unless they stay stoned their entire lives, which does happen). Most people I talk to are well aware of these things. And at least in my experience, these effects generally pass once you cut down.
> Based on what, your personal experience talking to stoners in college? Plenty of people smoke socially
Yep. And plenty smoke privately, too. People see the public spectacles and incorrectly conclude that they're wholly representative.
Many professionals use cannabis but choose not to advertise it, since misinformation is still so rampant, dating back to the "Reefer Madness" propaganda days. The tradeoff of potential career damage just isn't worth it.
The problem with that logic is that humans don't just evolve genetically; we evolve culturally, and that cultural evolution ends up affecting our biology as well. So it doesn't really matter how slow genetic evolution is: cultural evolution is what defines the human species. It's much more rapid because it includes planning and foresight, unlike the blind watchmaker of biological evolution. It is also Lamarckian, in that it incorporates the experiences of the previous generation into the cultural phenotype of the next one.
That's precisely how we've changed so drastically in an evolutionary blink of an eye.
And now, our cultural evolution has reached the point where we're even able to change our own genetics with planning and foresight in a single generation. So it seems to me that the blind watchmaker is essentially irrelevant now.
How so? If great filters exist at all, which is not a given, there could be multiple of them, first of all. They could lie somewhere between our level of biological complexity and the kind hypothesised to be responsible for this signal. Endosymbiosis is a very plausible such filter. The evolution of language and the bootstrapping of cultural evolution is another. Both are n=1 on our planet. There are probably others I can't think of right now.