Impressive, as is the recent smaller program from a French programmer.
Note, however, that both have sacrificed some rules. I believe they have both left out en passant, which is rare enough that it won't be missed most of the time; indeed, it's rare enough that if you use it against a casual player, there is a good chance they will accuse you of making it up.
Castling, however, which is also omitted, is much more serious. Getting castled is one of the top goals of the opening. It happens in nearly every game. Taking that out is huge.
Pawn promotion is missing from the older program but included in the smaller French program (making the French achievement even more impressive). I'd say that is not as big an omission as castling, because many games end before it gets to that point. Still, the threat of promotion has a major influence on endgame play.
So what we actually have are very impressive implementations of chess-like games. That raises the question of what is the smallest implementation of full chess?
I suspect that both these programs could be expanded to the full rules without making them a lot bigger, by taking the short cut of allowing the human to make those moves, but not the computer (especially if the computer trusts the human and does not check for legality of the human's move). I think you could defend a claim that you then have full chess, and it is just a limitation of the AI that it does not consider those moves.
There's a rather ahistoric perspective at work here. I wonder if there's a generational thing.
ZX81 chess was so impressive because it was impossible (at least, I'd have bet real money on it). It was the thing programmers namechecked as the kind of thing you couldn't do on these newfangled cheap home computers. You need a 'real' computer to do it. There's just no way. I was writing programs for the ZX81, and the kind of simplicity they had to have is difficult to communicate to someone who grew up with 8086 PCs on their desks. Get an emulator and have a go.
So, you're right, it wasn't 100% total, complete, full chess. But, good god, it was chess, recognizably chess, playably chess! And the guy did it on a freakin' ZX81. It is difficult to explain how astounding that was.
Pointing out its lack of castling and en passant (and stalemate, repetition, etc.) seems a bit churlish, then.
That's why the challenge for the smallest chess program uses the ZX81 rules, not the full rules. It is why smaller programs make it to HN, whereas you don't know about the smallest program to play FIDE chess. It is a historical honoring of that great achievement.
I strongly recommend playing with this sort of machine. It's an utter education. It will, literally, change the way you think about programming. Also, the ZX81 had the best cover art for any programming manual EVER:
However, you will be frustrated with the ZX81; the built-in BASIC was utterly appalling and also didn't have a tokeniser --- instead each key (plus modifiers) produced a different keyword. And it was stateful, so the keys did different things depending on what you were typing. This made typing on it horribly frustrating even when you had the keywords written on the keyboard, which you won't have with an emulator.
It may be of interest to check out the Jupiter Ace --- it's basically the same machine, made by some ex-Sinclair staff, except running Forth instead of Basic; you can at least type on that, and using Forth means you're much closer to the machine. (Plus, Sinclair's Basic was seriously horrifying. Any alternative is better.) There's a bunch of emulators. But it's a much more esoteric machine, so there'll be very limited resources around.
There are Javascript emulators for all the above, but they're pretty naff.
Agree with that. The syntax check while you typed programs would break your concentration. Still have my first machine in its foam box, a ZX80 with the ROM upgrade.
They didn't work like modern interpreters, which basically compile the program into bytecode and then interpret the bytecode. They would do very simple parsing of the program, mostly just tokenising it, and then more or less execute each Basic statement directly. (Sinclair Basic's wacky keyword-at-a-time entry system meant they didn't need to ship a tokeniser.)
So:
10 LET I = 10
20 LET I = I + 1
30 GOTO 10
The program counter would point at the start of line 10; the interpreter would see LET (mandatory in Sinclair Basic) and know a variable name was next; it'd read that; it'd read the equals sign; then it would read the expression and evaluate it on the fly. Then the program counter would be pointing at the end of line 10, so it just advances over the line number, and it's ready for line 20. etc, etc.
Line 30 would read the GOTO, then read the line number, then reset the program counter to the beginning of the program and scan through looking at line number headers until it found the right one. Yes, O(n). Yes, you put most used subroutines at the beginning of the program for speed.
So the entire run-time state is a program counter, a couple of fixed-size stacks for FOR...NEXT and GOSUB...RETURN, and your variables. I've no idea how Sinclair Basic stored them, but BBC Basic used a hash table that was immediately after the end of your program. Strings were a problem because they could change size; Commodore Basic had the world's slowest garbage collector. BBC Basic just fragmented and ran out of space. I don't know how Sinclair Basic handled it.
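To make that style concrete, here is a toy sketch in Python (nothing like the actual Z80 ROM code, and the line-table format is invented for illustration): statements are executed directly as they are read, and GOTO rescans the stored line numbers from the top of the program, hence O(n) per jump.

program = {
    10: ("LET", "I", 10),
    20: ("LET", "I", ("ADD", "I", 1)),
    30: ("GOTO", 10),
}

def run(program, max_steps=50):
    lines = sorted(program)              # statements kept in line-number order
    variables = {}

    def value(expr):
        if isinstance(expr, tuple):      # ("ADD", a, b) is evaluated on the fly
            return value(expr[1]) + value(expr[2])
        return variables[expr] if isinstance(expr, str) else expr

    pc = 0                               # index of the current line
    for _ in range(max_steps):           # capped, since this toy program loops forever
        if pc >= len(lines):
            break
        stmt = program[lines[pc]]
        if stmt[0] == "LET":
            variables[stmt[1]] = value(stmt[2])
            pc += 1
        elif stmt[0] == "GOTO":
            # scan from the start of the program until the target line is found
            pc = next(i for i, n in enumerate(lines) if n == stmt[1])
    return variables

print(run(program))                      # the variables left over when the step cap hits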
"I've no idea how Sinclair Basic stored them, but BBC Basic used a hash table that was immediately after the end of your program."
I expected that Sinclair Basic would have a rule that variable names have one significant character, and reserved space for 26 integers A% through Z%, 26 strings A$ through Z$, and forget about floating point, but reading http://www.worldofspectrum.org/ZX81BasicProgramming/chap06.h..., that isn't true; this language even allowed spaces in identifiers.
At the same time, we have (same link):
"You will get a report 2/0, & looking up 2 in appendix B, you will see that it means 'variable not found',
So. This enormous 8kB ROM wasn't large enough to hold error messages.
And, reading http://www.cs.columbia.edu/~%20sedwards/classes/2006/w4115-f..., it seems that loop and string variables had to be single-letter (yes, your program had at most 26 strings; not that much of a limit if you have a few hundred bytes of RAM for your program and your data)
That PDF also gives some more limitations of the language, claiming that it wouldn't do complex expressions such as
...wow. The framebuffer is between the program and the variables? And as the program changes size, and the screen changes size, it relocates everything above there? No wonder editing lines in a big program is so slow...
Also, that mention of the calculator stack? That's used by an embedded Forth-like language used to do floating point operations; which is why Sinclair Basic's trig functions are so appalling.
They actually make for quite a decent IDE --- it's a really cheap and easy way to allow arbitrary insertion at any point without needing the complexity of a full text editor. You really need a RENUMBER command, though (which BBC Basic had), or else you sometimes find yourself unable to subdivide line numbers.
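For the curious, what a RENUMBER command has to do is simple enough to sketch (Python, with a made-up line-table representation): re-space the line numbers in steps of 10 and patch every GOTO target to match, so there is always room to insert a line between any two existing ones.

def renumber(program, start=10, step=10):
    old_lines = sorted(program)
    mapping = {old: start + i * step for i, old in enumerate(old_lines)}
    renumbered = {}
    for old, stmt in program.items():
        if stmt[0] == "GOTO":                     # patch jump targets too
            stmt = ("GOTO", mapping[stmt[1]])
        renumbered[mapping[old]] = stmt
    return renumbered

print(renumber({10: ("LET", "I", 10), 11: ("GOTO", 10)}))
# {10: ('LET', 'I', 10), 20: ('GOTO', 10)}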
> It was the thing programmers namechecked as the kind of thing you couldn't do on these newfangled cheap home computers. You need a 'real' computer to do it.
When I first read the Jargon File in '96 or so, I remember being uncharacteristically annoyed by its terse, dismissive attitude towards affordable home computers ("bitty boxes"). That particular sort of elitism seemed very much at odds with the challenge-loving, egalitarian hacker spirit that the rest of the document extols. One could argue that ESR was just faithfully documenting the usage at the time, but he frequently editorializes in other entries; I think one can assume that he didn't disagree, which was a bit disappointing.
"That raises the question of what is the smallest implementation of full chess?"
Technically, that is a program that prints "I concede". If you also want it to play black (IIRC, this 1k chess doesn't) and validate moves, it would have to know the 20 opening moves for white and concede on black's first move.
Of course, you could claim that isn't playing chess. But it is hard to define what, then, is chess. Picking a random move? Moving only one of one's knights, and conceding when it is taken? Does a program need an ELO rating of at least 1500 to be considered an implementation of 'full chess'?
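For the record, the "20 opening moves" figure is easy to check: each of the eight pawns has a one- or two-square push, and each knight has two destination squares. A quick illustration in Python:

pawn_moves   = [file + rank for file in "abcdefgh" for rank in ("3", "4")]   # e.g. "e4"
knight_moves = ["Na3", "Nc3", "Nf3", "Nh3"]
print(len(pawn_moves) + len(knight_moves))   # 16 + 4 = 20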
The idea is that you have to actually implement all of the rules, even if your AI never invokes them. The game in this article, for example, was originally designed as a two-player game. If you can swap out the AI for a second human player and still play a correct game of chess, then you have 'full chess' no matter how bad the AI is.
I know, but would a program that implements all the rules, but plays lousy, qualify? If so, it seems more reasonable to ask people to write a language checker (does this series of moves constitute a valid chess game?) than a chess playing program, as, for example, it seems one should prefer a program that plays a lousy game of chess but never castles itself over one that plays an incredibly lousy game but actively uses all the rules.
And, by the way, "all the rules" can get quite complex if you consider the 50 move rule and draws by repeated positions, even more so if you pick a rule set from a time when the 50 move rule allowed for exceptions (http://en.m.wikipedia.org/wiki/Fifty-move_rule#History)
I think it's helpful to consider that this program is 672 bytes of machine code rather than source code or assembler. That helps the comparisons to English text seem a little less jarring: if my words were expressed as 8-bit bytes rather than a series of byte-sized chars, you'd see a lot of compression in my communication.
He also 'cheats' a bit by keeping part of the program on tape and never loading it into memory so it doesn't count toward the 672 byte limit. That's a good trick that I wouldn't have thought to do with display code.
Latin text carries about 4.7 bits per character, which means the comment would have 853 bytes of uncompressed information.
Estimates of information density for English are as low as 1.5 bits per character, giving 272 bytes in that comment.
The information density in machine code isn't 1 bit per bit, though, because of the distribution of which opcodes are used, so the 672 would also drop.
So I think the analogy is a pretty good one; whether exact or not, some of the comments on this page contain more information than ZX81 chess.
I'm not an information scientist, so this may be a lay understanding, but as I understand it there is a limit to how much you could theoretically compress any piece of content without losing information. So take that 4.7 bits per character: well, not every letter is equally likely, so we could construct an encoding which takes frequency into account, but then not every word is equally likely, so we could use that information to make a tighter encoding, and so on. At some point we would reach a theoretical minimum.
At that limit, the size of the compressed data has one bit per bit of information.
So, scaling back up, the estimate is (and it has to be an estimate, I guess, since we can't construct the actual theoretically smallest encoding) 1.5 bits of information per character.
I searched for this value, and didn't spend long figuring out if it was well calculated, so take it with a big googly pinch of salt.
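If you want a rough feel for where such numbers come from, one crude approach is to compress a large chunk of English and count the surviving bits per character. zlib is nowhere near an optimal model of English, so this typically lands somewhere between the 4.7 and 1.5 figures, but it shows the idea; a sketch (pass it any plain-text file):

import sys
import zlib

# Crude entropy estimate: bits per character left after general-purpose compression.
with open(sys.argv[1], "rb") as f:
    data = f.read()

compressed = zlib.compress(data, 9)
print("%.2f bits per character" % (8.0 * len(compressed) / len(data)))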
> He also 'cheats' a bit by keeping part of the program on tape and never loading it into memory so it doesn't count toward the 672 byte limit.
I'm missing something here, but if something is never loaded into memory, why should it be counted? Why do you consider that a part of the program at all?
Can you please also provide more technical details, since I don't understand what you mean?
In a way you're right, the executable is definitely within bounds and if that is your definition of the program, it's small enough. If you count resources that are necessary to make the program run, however, you'd use a different metric.
What do you mean? The program really plays chess against the user by running in the 1K RAM of the computer that is designed to spend 768 bytes of the said 1K just for the screen content.
24 lines of 32 characters. 768 bytes usually cleared and scrolled away as you type. The user just has to load the cassette and play. Even if the program was developed on the Cray (it was not) everything needed for the program to run is less than 1K. Please explain your sentence "If you count resources that are necessary to make the program run, however, you'd use a different metric."
"The tape loads every time, without problems, in about 30 seconds." The cassette speed of ZX 81 was 250 bits per second, 30 seconds is less than 1 KB, and the sync header (the bits that don't carry the program) were certainly also counted in the given 30 seconds by the reviewer.
"the computer that is designed to spend 768 bytes of the said 1K just for the screen content."
No, it was designed to use up to 793 bytes (those 768 plus one byte for every line plus, it seems, one extra 'end of screen' byte) for the screen buffer. If you didn't write stuff, the size of the screen buffer was way smaller, down to 25 bytes for an empty screen (http://problemkaputt.de/zxdocs.htm#zx80zx81videomodetextandb...)
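If I'm reading that description right, the collapsed display file is easy to model: one leading HALT byte, then for each of the 24 lines the characters actually on it, followed by a HALT as the end-of-line marker, so a blank line costs a single byte. A little Python sketch of that reading (treat the exact byte layout with appropriate caution):

def display_file_size(lines):
    # 'lines' is up to 24 strings of up to 32 characters each
    assert len(lines) <= 24 and all(len(l) <= 32 for l in lines)
    padded = lines + [""] * (24 - len(lines))
    return 1 + sum(len(line) + 1 for line in padded)   # leading HALT + per-line HALTs

print(display_file_size([]))                # 25 bytes for an empty screen
print(display_file_size(["X" * 32] * 24))   # 793 bytes for a full screen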
Also note that the main CPU drew the video from an interrupt handler. That's why one had fast and slow mode; fast mode disabled that interrupt, speeding up your program, but losing the screen display. I think it also means that one should interpret the expression 'halt instruction' as used in the description of the contents of video memory literally.
I also found this nugget in a disassembly listing:
"and although each interrupt interrupts the previous one, there are no stack problems as the 'return address' is discarded each time."
Thanks for explaining the details of the ZX81's screen handling. Still, the author of Chess1K managed to use even the bytes of the "used" display lines for his calculations, and it's still undisputed that the 600-something bytes are really the whole thing the user needed to run the program.
> I suspect that both these programs could be expanded to the full rules without making them a lot bigger, by taking the short cut of allowing the human to make those moves, but not the computer
Ah, but what if, for instance, the only remaining move for the AI was to use the en passant rule? By not invoking the rule, the AI would not be playing chess by the book.
The smallest implementation of full chess is an irrelevant question. It all depends on the underlying machine and how the instructions are designed. At one extreme, a closed circuit with no programming interface but with the ability to play chess is a chess implementation with code size 0.
A legitimate and interesting question is what is the smallest possible chess implementation in a particular instruction set architecture such as x86.
A comparable example is writing quines (a program that outputs its own source code without reading itself from a file). It is a badge of honour to write the shortest quine in a particular programming language, but comparing the length of a Python quine implementation to a Java implementation is pretty meaningless.
Oh, with my latest chess programming language, the source code for chess is 0 bytes. Feed an empty file to the chess compiler and out comes a working binary which plays chess.
There was a great interview on The Web Ahead recently with Jason Scott about Archive.org and the challenge of preserving these amazing bits of history: http://5by5.tv/webahead/97
Just for the record, the Atari 2600 also had a chess game with AI. It had only 1024 bits (128 bytes) of RAM, but the program had access to all of it, because there was no OS or underlying hardware support eating into it, unlike on the Speccy. So while this is definitely amazing, the 2600 version is very close in the "insanely well coded" category, especially because the 2600 had some SERIOUSLY quirky hardware to deal with. For example, on the 2600, you could only draw half the screen's background: the other side had to be a mirror image. Additionally, the columns of pixels that you drew on the screen were enumerated, up to the halfway point of the screen, as follows:
12345678,87654321,1234
So you were drawing forward, backward, then forward again, and that only got you half the screen!
Not exactly; this overstates the complexity of the chessboard display.
The order isn't exactly what you wrote; it's bits 4567 / 76543210 / 01234567 for the three playfield registers for the left half of the screen. This can be either repeated or mirrored for the right half. A common trick, which 2600 Video Chess used, was to simply not use the leftmost (mirrored to rightmost) register, which still gets you 16x2 bits of mirrored playfield in the middle, trading computational cost for a slightly smaller play area.
Now an 8x8 checkered board is not symmetrical for mirroring. 2600 Video Chess actually draws only 7 of the files with the mirrored playfield registers, with the center one of the 7 (the 'e' file) straddling the mirror line. The additional (leftmost) file is actually a sprite object (the "ball"). Pretty clever.
The real trick in 2600 Video Chess is drawing 8 sprites per row on a system whose hardware only has 6. It does this with the "venetian blinds" trick, where each sprite is striped onto alternating scanlines. That way there's only ever 4 actual sprites at a time, which are moved back and forth by a square's-width on alternating lines.
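To make that bit ordering concrete, here is a small sketch (Python; the register names are the standard TIA ones, but this is an illustration rather than a cycle-accurate model) of how PF0/PF1/PF2 expand into one 40-bit playfield row, with the right half either repeated or mirrored:

def playfield_row(pf0, pf1, pf2, mirror):
    left = (
        [(pf0 >> b) & 1 for b in range(4, 8)] +       # PF0: bits 4,5,6,7
        [(pf1 >> b) & 1 for b in range(7, -1, -1)] +  # PF1: bits 7 down to 0
        [(pf2 >> b) & 1 for b in range(0, 8)]         # PF2: bits 0 up to 7
    )
    right = list(reversed(left)) if mirror else left  # CTRLPF bit 0 selects mirroring
    return left + right

row = playfield_row(0xF0, 0xAA, 0x55, mirror=True)
print("".join("#" if bit else "." for bit in row))    # one 40-bit scanline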
I read that chess was originally considered impossible to implement on the 2600, but there were promotional materials showing a guy playing chess on it and a customer complained that it wasn't available, so Atari went ahead and wrote it.
The graphics were horrible but the AI was actually pretty good, so we just used to set up a real board and mirror the moves on that.
It may only have had 128 bytes of RAM, but it used 4K ROM cartridges, even more than that if the cartridge used bank switching. So Atari 2600 programmes had quite a bit more room to fit inside.
IIRC, early cartridges only had 1 or 2k - the 4k was a limit for a while, but I don't think they all shipped with 4k in the beginning (cost savings). IIRC there was a trick later to allow for 8k cartridges (Pitfall II was 8k, I think, but memory may be failing me).
The 2600 hardware is certainly quite... unique, but that didn't stop the demoscene (I'm surprised at no mention of the demoscene in the comments until this post) from doing some rather impressive things with it; here is a memorable recent example of that:
It didn't have to be a mirror image; it could also be an unmirrored copy. See, for example, the first screen of the second row of the map to Adventure:
Why does the author say "memory" when he means "disk space"? Or does he actually mean "memory", as in "RAM"? In that case, why would the size of the executable matter?
As for why the size of the executable matters, you needed to have the entire executable in RAM before you could actually execute it. That's not so true these days, but it was back in the good ol' days* when you hooked tape cassettes up as your external storage.
There was no "disk" on a ZX81. Storage was audio cassette tape.
The size of the executable mattered because it all had to fit in 1024 bytes RAM (no "K", no "M", certainly no "G") with all video & operating system memory.
"Operating System" is a stretch. BIOS would even be an overelaborate term for what the ZX81 had, given that IO was basically handled through fixed memory-mapping. What it had was a few inbuilt ROM routines and a main run loop that cobbled together a text-mode display, a line editor and a basic interpreter.
I didn't downvote you, but you asked a question the answer to which is easily available with the most trivial of web searches. You assumed he meant "disk space", assuming the author made a mistake when you could easily have checked and not asked the question here.
In some sense, it's not a legitimate question for this context.
Thing is, this context is way out of the norm for a great many HN readers. The notion of a general-purpose home computer without rapid-access storage, and hence without virtual memory, is so foreign to the younger crowd that the question is fair and the answer is almost unbelievable in their experience. There's nothing to prompt a web search, as what's being searched for is at odds with axiomatic presumptions.
The question asked is exactly the sort of thing that "the most trivial of web searches" wouldn't have answered as well as either of the replies already up.
One reply puts it into historical context, and educates the reader about the memory/persistence model of the ZX81.
The other reply points out that the lack of virtual memory and whatnot means that, in previous times, you needed to have the entire thing loaded into memory to execute something.
Without some digging and a knowledge of what to look for, those are non-obvious but useful points.
The first thing that appears is a Wikipedia article which explains how the memory and off-line storage worked.
So I don't think finding this out would have been rocket science.
Having said that, and having written my first code on a ZX81 (if you don't count an even older TI programmable calculator with even less RAM) I'm old enough to be amused by the question. :)
However you slice it, chess in 1K was awesome coding.
It's a 100% legitimate question. It was asked in good faith because the person asking it didn't know and wanted to know. It's not a trivial web search either, because even knowing what to Google for requires contextual knowledge that the OP didn't have.
Punishing a person for being ignorant while they are trying to learn and become less ignorant is terrible behavior.
Well, this is the site where, in an article about "the 36 questions it takes to fall in love" someone suggested building a bot so you could go through the questions on your own.
I'm not sure why the size of a program has any serious bearing on its greatness. Sure, it's an incredible feat of engineering, but shouldn't the "greatest program ever written" be judged primarily by what it allows us to do (or no longer need to do), what it lets us build, or what insights it provides?
For example, certainly the first widely used assembler was a "greater" program than any chess game, not only because it stopped us from having to track memory locations by hand, but because it allowed us to dedicate our brain cycles to actual problem solving and writing more useful programs.
The 4 minute mile was amazing because of the four minutes, not because someone managed to run a mile. Amazing feats in the face of incredible constraints are inherently worthy, I'd say.
Why? Seems like a much poorer analogy to me, since it is trivial to make a movie for much less than that. It falls into the same trap as the commenter above: the 'greatness' is due to the quality of the program, rather than just making it exist at all.
Except that my original analogy has a lot of upvotes. But it is common enough to assume that the rest of the world agrees with you, when you think someone is wrong.
If, instead of trying the pissing contest, you could actually try to figure out why your "do you see why you're wrong", didn't, in fact, make me think I was wrong, we might get constructive. I think it didn't because it begged the question. Otherwise, piss away, it is funny to see how facile the 'yeah, no I'm right, because, I think so!' gets.
> But it is common enough to assume that the rest of the world agrees with you, when you think someone is wrong.
Please, tell me what that's like.
You got so hung up on and defensive about my question that you were blind to my reasoning. I wasn't trying to hurt your feelings — I was offering a genuine explanation. There is no chance of a "constructive" discussion with people like you once they've made themselves a victim.
You haven't attempted to explain why my explanation was poor — which is less than what I gave you — but you're too invested in the lie that you don't understand my post to turn back now! And it's weird as hell to witness. You're a real big man.
I think your post begged the question. Do you know what I mean by that? I think it assumed the things you wanted it to show. Namely:
> nor is the greatness of a program measured by its number of lines or memory footprint
That is the point you're trying to argue, but you don't make any attempt to show why that is more like the case of the movie budget, than the running race. You just declare it to be. Now, you made that argument in response to me asking why. But you didn't say why, you just gave the two analogies again, and mixed in the original question with the second.
So we have
"Its like X"
"No, its like Y"
"Why?"
"Because it is like Y"
See why I didn't find your response persuasive?
But by all means, ignore what I'm saying and tell me more about me. That's funny.
Which is exactly why it isn't a good analogy. The amount wasn't arbitrary, it was the memory of the computer. There wasn't more to it being the 'greatest program' than the fact that it was created. There's no aesthetic judgement here. It is chess, with AI, in < 700 bytes.
He meant the "movie budget" could be arbitrarily chosen to satisfy the "feat under huge constraints".
Right, which is exactly why that isn't a good analogy to something where the budget was fixed, and the 'greatness' isn't an aesthetic judgement, but whether it can be done.
The greatness of an achievement may be in and of itself.
The Mona Lisa is great purely based on its appearance, not that anything useful comes of it. The "7 Wonders of the World" are noted for their scale vs the ability of then-modern construction, not their ability to move water or house dead bodies. Great exploration often leads to laudable yet largely unproductive dead ends (Everest, Antarctica). In this case, cramming a viable AI playing the quintessential expression of human intelligence (playing chess) into less than 700 bytes is astounding, and at least a candidate for greatness.
The first widely used assembler may vie for "greatest tool", but it did little of interest in and of itself.
And yet, when you ask what is the greatest thing or idea ever created by humans, the reply is not usually any of the Wonders of the World, or the Mona Lisa, or a chess program. It is usually language or writing or something which was similarly a sea change in the development of humanity as we know it today.
My argument is that these are the better criteria for "greatness" than the relative ease of performing a specific task, though that task may implicitly represent the great thing. E.g., one may call the printing press the greatest invention ever, not because it was great but because it ushered in a new age.
And this ushered in the age of chess on your home computer :) So it was practical while simultaneously being an engineering challenge, which is what (I guess?) most of us find so appealing.
I agree. I couldn't care less what he had to do in order to fit his not-really-usable (from a chess enthusiast's perspective) chess implementation in N kb. Apparently N kb is not enough for a good chess program.
However, 1024 bytes was, at the time, all the RAM available, in a computer that was the only one many people could afford. For some, like me, a computer was in fact a very good way to find a chess opponent.
Therefore, for a chess enthusiast of 1981-1982, this program was groundbreaking. You could actually get a computer to play against you.
(Personally, I did not have ZX81 but I got ZX Spectrum. I had to mow a lot of lawn and dig some graves at the local cemetery to get the money for a 16K Speccie.)
Yes, there were some feats done already then. I guess people can't imagine what strange tweaks were used in the machines.
Take for instance the 48K ZX Spectrum: it had 16K of ROM from 0000H to 3FFFH, then 16K of RAM (some of which was the display buffer) from 4000H to 7FFFH, made of eight 16 kbit chips. The remaining 32K of RAM, from 8000H to FFFFH, was actually made of eight faulty 64 kbit chips which Sinclair could buy cheaply. Chips that had failed manufacturing testing were simply tested again to determine whether the faulty part was in the upper or lower half of the 64 kbit address range, and the top address bit of each chip was then wired permanently high or low so that only the working half was ever addressed, thus enabling the extremely cheap broken 64 kbit chips to be used as valid 32 kbit chips.
(All eight chips of course had to be broken in the same upper/lower configuration.)
The Psion chess program for Spectrum needed the 48K RAM model, although I think someone also wrote a full chess program for the 16K model (including castling, en passant and all).
Psion, by the way, was the publisher of much of the early software for the ZX Spectrum, including chess, spreadsheet and 3D design programs. Later it was known for handheld devices, as well as for the EPOC system, which went on to become Symbian, used in Nokia smartphones.
I was going to bring this up. My father keyed that into his KIM-1 by hand and played it. It definitely predates the one here by several years and also fit into 1K.
On a side note, I wrote one that ran on a PIC16 and fit into 4K of flash, while the part had only 176 bytes of RAM. It played over the serial port in a terminal. The 4K is not impressive, but I wrote it in C with a compiler that didn't support recursion because the stack is for return addresses only and is only 8 levels deep. IIRC I could do a 5 level minimax search on it.
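Out of curiosity, here is a sketch of how that kind of search looks without recursion (in Python, not the original C, and with toy moves()/evaluate() functions standing in for the real chess logic): you simulate the call stack yourself with a list of frames, so the hardware return-address stack never grows.

def moves(state):
    # hypothetical move generator: three candidate "moves" from any state
    return [state + d for d in (1, 2, 3)]

def evaluate(state):
    # hypothetical leaf evaluation
    return state

def minimax(root, depth):
    # Each frame: state, remaining depth, maximizing?, iterator over children,
    # and the best value found so far at this node.
    stack = [[root, depth, True, iter(moves(root)), None]]
    returned = None                     # value handed up by the frame just popped

    while stack:
        frame = stack[-1]
        state, d, maximizing, children, best = frame

        if d == 0:                      # leaf: evaluate and hand the value up
            returned = evaluate(state)
            stack.pop()
            continue

        if returned is not None:        # a child just finished: fold its value in
            if best is None:
                frame[4] = returned
            else:
                frame[4] = max(best, returned) if maximizing else min(best, returned)
            returned = None

        child = next(children, None)
        if child is None:               # all children done: hand our best value up
            returned = frame[4]
            stack.pop()
        else:                           # descend into the next child
            stack.append([child, d - 1, not maximizing, iter(moves(child)), None])

    return returned

print(minimax(0, 5))                    # a 5-ply search, like the one on the PIC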
I put a bit of effort into that, basically making C macros to emulate 6502 assembly and getting the program to run on modern machines that way. Peter gives me credit on his site. I often think it would be a good idea to load up his machine code on a 6502 emulator and try to round-trip confirm the accuracy of my translation, but time, as always, is the enemy.
Impressive. The game Elite also comes to mind. For those who don't remember, it was a really immersive game with a huge universe that fit on a floppy disk.
I remember when my friend proudly showed me a CD he had just purchased with it, not believing the game was small enough to not need one. His face was precious when he opened the CD and saw how big the game really was.
Whoah, I'd almost completely forgotten Kuro5hin, and a lot of other people obviously have, too, as the site is a ghost town. It was a really neat pre-reddit user-generated social media site; kind of a more democratic Slashdot. I knew the founder in some capacity (I don't even remember how/why, now, or their name...maybe because I worked on Squid or because I worked in Perl), and used to advertise my old company on Kuro5hin to be able to send some money their way.
I quite liked it, and there was a brief time when I divided my time between Slashdot and Kuro5hin, as I currently divide my time between reddit and HN.
Having disk available is cheating --- you get to load chunks of code off disk while the game runs!
Cassette Elite had to load the entire game into RAM. It had some omissions --- fewer ships, no (or fewer?) missions, no docking computer, etc. But it was completely self contained.
A close candidate for "greatest" indeed. Which is greater: the long-recognized summary of human intelligence in 672 bytes, or a universe in 77x the space?
Wow, that's amazing. I love programs written in less than a kilobyte. Personally, I developed a CSS framework that is only 995 bytes (not to compare it to this program) - http://mincss.com/.
In the same vein as this program, there's JS1k. Some of the stuff on there is also really amazing.
> Patients that switched from Bootstrap to Min reported up to a ninefold decrease in markup
hehe.
Actually though, as someone who learned CSS with the rule "Use Divs! No Tables!": what does a page with fewer divs even look like? What are the workhorses for page layout?
That rule is good. Using divs normally is fine. However, it is possible to overuse divs. Sometimes you'll have a div nested inside a div (and so on) six levels deep. That's considered bad.
For example,
<div id="div-holder">
<div id="inner-padding-div">
<div id="right-align-div">
<div id="centering-div">
<div id="actual-content">
is bad - you could reduce that to one or two divs at most.
In homage to things like this, there are a number of 512-byte, 1K, etc. game development competitions. You can find them all over, for a variety of machines. If you're interested in these tiny masterpieces, you should enter a competition, or at least check them out.
I made that page before I ever saw the article (and much later webpage) describing this 1k chess program. The article should enlighten anyone who isn't sure just how small 1k of code actually is: http://users.ox.ac.uk/~uzdm0006/scans/1kchess/
Amazing! It is hard to imagine even scratching the surface with constraints like that. The Sinclair ZX81 does seem to have an emulator[1]; would it be possible to see it run? It seems the emulator would need an assembled binary, something I am not sure would be possible without a whole lot of grunt work.
Holy smokes... blown away. 1K of RAM? Less than most emails we write... Nowadays the rockstars and ninjas are the glorified Ruby guys who "seemingly" do Ruby/Django magic... This is what counts... This is the real deal...
Exactly. For many, many projects nowadays, the project does not require a 1024-byte executable because RAM is cheap. Making code that's really easy to work with is just as impressive as this.
No, that's absolutely true. The first computers were for calculating logarithm tables, which were incredibly tedious, slow and error prone for humans to do.
If you're ever in Bletchley Park in the United Kingdom, go visit the National Museum of Computing. They'll show you (and you'll get to play with) the Harwell Witch, a decimal computer designed to compete with humans who are using mechanical hand-cranked calculators:
A well-trained human can actually outperform the Witch; for twenty minutes. (They tried it.) But the Witch doesn't make mistakes and it doesn't get tired.
It was only much later that they realised that computers could do things which humans weren't _able_ to do.
I wasn't trying to be. I wasn't saying "yeah, but 'entire point' strictly means... fnar fnar".
I was responding to a point above that seemed to imply that code like this can't be considered great (or at least, no greater than a Rails website), because it's not what computers are for. They are for calculating things humans can't. I disagree with that. And not on a pedantic, language level. It is a genuine confusion, I think, between why something was made and what it can do.
Though on re-reading, I may have misinterpreted the OP.
Sorry, I was really just calling out the "real programmers fart assembly in their sleep" sort of stuff. The chess game is certainly an accomplishment, but there are other ways to be accomplished at programming.
Of course, the Atari (unlike the Sinclair in the OP) executed code directly from the cartridge --- you didn't have to fit your whole program into that 128 bytes of RAM. Atari cartridges could be up to 4kB, or 32kB with bank switching, which is downright roomy for the era.
This is an interesting historical artifact demonstrating the skill of one individual. It's not "what counts"; it was a low importance side project in its time, and is interesting today only as a piece of history, not for any lasting technology impact it had.
Why you're using this as an excuse to take cheap shots at web developers is beyond me.
Not true. A lot of people were so amazed by the potential of ZX81 game programming they became programmers themselves - sometimes games programmers, but just as often general developers.
If the ZX81 hadn't done anything of interest it wouldn't have persuaded those people that awesome programming was possible.
Dismissing this kind of coding as a side show is arrogant and ill-informed.
The ZX81 itself was a similar project that squeezed maximum usefulness and affordability out of very limited resources.
IMO this illustrates the difference between engineers and hacks. Hacks are pleased when code sort-of works. Engineers are only satisfied with designs that are minimal, elegant, and use resources as efficiently as possible.
This actually matters. Efficient coding influences everything from battery life to mobile data costs to cloud fees to data centre profitability.
Sure, it's easy to ignore. It's just not very professional.
Please don't put words into my mouth or deliberately misinterpret me for your own purposes. This kind of rhetorical dishonesty really reflects badly on you.
I didn't dismiss it as a sideshow. My exact words were "low importance side project". It was a side project, as in, not the code he was working on full time for his business. It was low importance, because it was just a chess game, not a piece of system software or a business application or anything else that mattered materially.
It's possible to admire the skill and craft of a low importance side project and its author, and to be inspired by that skill and craft, and yet still acknowledge that it was a low importance side project.
When you slander me by accusing me of being arrogant and ill-informed, you reveal your true character. You don't know me at all. You're eager to falsely attribute beliefs to me that I don't have, and you're eager to criticize me for things I have not even said. You should probably take some time to consider why you feel the need to do this. It's pretty obscene.
And here are the rules that formed the core of the AI (in case you missed them):
Score provides a move score based on the
following: first, the "To" position results in
taking of a piece. Second, the "From" position
is attacked. Third, the "To" position is
attacked. Fourth, "To" enables the computer
to obtain a check and finally the "From"
position is defended.
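Translated into something runnable (Python, with invented power-of-two weights so that earlier criteria dominate later ones; the actual weighting inside the 1K program isn't spelled out in that quote), the scheme looks roughly like this:

def score_move(captures, from_attacked, to_attacked, gives_check, from_defended):
    # The five criteria from the quoted description, in the same order.
    # The weights are a guess: powers of two so "first" outranks "second", etc.
    return (16 * captures
            + 8 * from_attacked
            + 4 * to_attacked
            + 2 * gives_check
            + 1 * from_defended)

# e.g. a capturing move from an attacked but defended square, no check:
print(score_move(True, True, False, False, True))   # 16 + 8 + 1 = 25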
The other day, my son brought a discarded 3.5" floppy from PrintShop home from school. It got me thinking about the fact that 1440kb or even 720kb is more information than a person can wrap their head around unless it's dumbed down to a simply interpreted bitmap...i.e. we can interpret it with our visual cortex.
1024 bytes has 2^8192 permutations. And 3-SAT is NP-Complete.
Nice! The ZX81 was my first computer. I didn't even have the tape storage. I just typed some program in from a magazine and left it on for a couple of days. I even got the Lambda 16K memory expansion, which was a big external module mounted on the back :)
I have that program! The cassette it came on is still in my basement (along with a couple of ZX-81s and TS-2000s). While it might be rudimentary AI, I also didn't beat it every time (maybe not even half the time when I started playing against it).
I believed I had it too, but I checked and I've got the Chess program from Psion instead, which is for the 16K ZX81. I remember that I lost every single game. I keep it on the bookshelf with the old CDs, together with a cassette of Defender. They're maybe the last two tapes left from those years. The ZX81 is somewhere at my parents' house. I also kept the BASIC Programming Manual. I develop with Ruby and JS now, probably using no less than 16 MB for any program I run :-)
A great program scales with Moore's Law: when you get 2x memory, the program gets better, ideally more than 2x better.
Human limitations remain constant through the ages, that's why climbing Mt. Everest or the 4 minute mile are so compelling. But the most basic fact about computing capacity is its (exponential) upward trend.
I wonder if some student somewhere isn't taking up the challenge of adding castling to the code, using an emulator. I like to think that's something I would have tried as a student, because I'm pretty sure there's a tremendous amount of skill to be gained by doing it.