This and more. The Sail version of TeX is mentioned, but everything else had a usable prototype, too: An early version of Metafont in Sail, an early version of Web (called Doc, though I don’t recall what the names of the Tangle and Weave equivalents were), and an early version of Computer Modern fonts in the early Metafont language.
Though fully documented and available via anonymous FTP on the Arpanet, all of these prototypes were experimental proofs-of-concept and were completely discarded, along with the languages they implemented, with the “real” versions rewritten from scratch by Knuth (each first by hand on legal pads, then typed in and debugged as a whole).
And you missed one more obscure but instructive example: To get the camera-ready copy for ACP Vol 2, Knuth purchased an Alphatype CRS phototypesetter. And, unhappy with the manufacturer’s firmware, he rewrote the 8080 code that drives the thing. Eight simultaneous levels of interrupts coming from various subsystems: the horizontal and vertical step-motors that you had to accelerate just right, while keeping synchronized with four S100 boards that generated slices of each character in real time from a proprietary outline format, to be flashed into the lens just as it was passing the right spot on the photo paper. (Four identical boards, since you had to fill the second while the first one did its processing; times two for two characters that might overlap due to kerning). Oh, and you had to handle memory management of the font character data since there wasn’t enough RAM for even one job’s worth (see our joint paper).
Fun times. I did the driver on the mainframe side, so got to be there for the intense debugging sessions that used only the CRS’s 4x4 hex keypad and 12-character dot-matrix display.
Thank you for creating it! Is there some article or something which talks about the original project or motivation for the DVI format? If DEK implemented it as part of TeX in 1982, what was DVI being used for before that?
Sure. Knuth's original Sail version of TeX just directly wrote output in XGP format, since the only type of usable device anybody had was the experimental XGP printer (which Xerox had bestowed on a few CS departments, years before electrostatic printers were commercially available, never mind laser printers). When Knuth ordered the Alphatype typesetter, he asked me to write a new output module for the required format, and we'd be able to statically link it instead of the XGP module, so there'd be two TeX executables to choose from, depending where your output was going.
Well, imagine you're an unaccomplished grad student, and you're going to tell Prof. Donald Knuth that he's wrong-headed, that his approach wouldn't scale, and that the right thing to do instead is to have a simple, intermediate format, so that nobody would have to muck around in TeX's internals to get a new output device going. Quite the rush when he gave the go-ahead.
So, the first DVI format was purpose-built for proto-TeX (and underwent a revamp for the final cut, just like everything else). It's really just a slightly compressed form of display-list (move to x,y; put down character c in font f; move; put down a rule; done with page) and nothing to write home about. While it was important in the early days to help us and others to get various devices going, and allow for Tom Rokicki's fabulous(ly important) DVI-to-PostScript program, given the ubiquity of PDF today, it's appropriate that we've come full circle, and PDF output is built into most TeXs that people now use, leaving DVI mostly a historical footnote.
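To make the "display list" idea concrete, here's a toy sketch (Python, purely illustrative: the class names, coordinates, and values are made up, and the real DVI file is a compact binary stream with its own opcodes, font definitions, and pre/postamble):

    # Purely illustrative model of the kind of display list a DVI file
    # encodes; the real format is a compact binary stream with its own
    # opcodes, font definitions, and a pre/postamble.
    from dataclasses import dataclass

    @dataclass
    class SetFont:
        name: str              # e.g. "cmr10"

    @dataclass
    class MoveTo:
        x: int                 # horizontal position on the page
        y: int                 # vertical position on the page

    @dataclass
    class PutChar:
        c: str                 # put down one character at the current spot

    @dataclass
    class PutRule:
        width: int             # a solid rectangle, e.g. a fraction bar
        height: int

    @dataclass
    class EndOfPage:
        pass

    # One page is just a sequence of such commands; a device driver walks
    # the list and puts ink (or pixels) in the right places.
    page = [
        SetFont("cmr10"),
        MoveTo(x=100, y=700), PutChar("K"), PutChar("n"),
        MoveTo(x=100, y=660), PutRule(width=300, height=2),
        EndOfPage(),
    ]

    for command in page:
        print(command)

That simplicity was the whole point: a driver only has to walk such a list and mark the page, which is far easier than digging into TeX's internals for every new device.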
Those were good times. I got a copy of the SAIL Metafont and managed to port it to Tandem's TAL language (rather like BLISS, but stack oriented on a 16/32-bit machine). There was some serious bit-twiddling in the code using every bit of every 36-bit word!
Working with TeX (thankfully the Pascal version), we spent lots of time doing DVI implementations for various devices. Are virtual fonts still a thing?
Very interesting. Thank you for filling this gap in my knowledge of historical events. I'm always keen to understand the context in which things like this came about. I take your point about PDF being the dominant format leaving little use for DVI, but intermediate representations are a powerful lever, so I hope DVI remains for a good while yet for people to experiment with.
I believe we used an XGP printer at MIT to print out solicitation letters for a Chorallaries[1] tour in the early '80s. All I remember was that it was a very large device, and was reputed to blow toner into the cable runway under the raised floor.
Not only is this staggering to think about in terms of the Herculean effort required, but what's even more amazing is the quality of the finished output of the system, unmatched to this day.
At least systems were smaller then and much better documented, so the effort was well supported. Perhaps the quality is correlated with that: since you weren't able to squeeze quantity into your work anyway, your efforts had to go into quality.
> And, unhappy with the manufacturer’s firmware, he rewrote the 8080 code that drives the thing.
Presumably without the source code, so he RE'd it too? In that case, I'd say he captured the spirit of right-to-repair/read/modify even better than Stallman (who, if I remember correctly, founded his GNU empire entirely on complaining about not getting the source code for, ironically enough, also a printer).
Alphatype shared their proprietary firmware and hardware specs with Knuth, and I believe he even discussed a few questions about it with some of their engineers. It was all under a non-disclosure agreement, carefully adhered to; I never saw the documents, even in his office.
The key issue was that their firmware insisted on loading proprietary font outline info from the included 5.25" floppy drives, while we needed to be able to have individual characters downloaded dynamically for caching in the machine's 64K (or less?) of RAM, intermixed with the per-line display list info, such that once you started the stepper motor that controlled the horizontal worm-screw that moved the lens over the paper, everything would be ready as it moved along; having to stop and restart a line would inevitably lead to jitter among the characters, as mechanical systems are subject to hysteresis -- you can never get back to exactly where you (thought you) were. I vaguely recall Knuth grousing that the standard firmware couldn't even be proved to be able to handle an arbitrarily complex line of text, and thus his full rewrite.
Ultimately, the output was not completely satisfactory. Tall parentheses and integral signs are made up of multiple pieces (as that's how TeX and Metafont handle them), and because of the alignment issues mentioned above, plus the fact that the individual pieces were "flashed" onto photographic paper separately, there was a bit of multi-exposure where the pieces met, which led to some "spread" of the blackness. You can see this if you look closely at integrals in ACP Vol 2, 2nd Ed.; they're a little chubby right along the centerline.
I mention all this as it's yet another case of throwing away a prototype, this time software plus hardware. The Alphatype was replaced by an Autologic APS Micro-5, which didn't use a moving lens, and was generally more digital, and thus had no problem with characters made up from pieces. And no firmware needed replacing, though we did have to run it in a completely unintended mode where each character was sent in run-length format each time it was typeset (meant for occasional one-off logos); this was wildly inefficient and slow, but as the APS used long rolls of photographic paper rather than single large sheets like the Alphatype, it could run unattended for hours to produce many dozen pages at a go. At least Knuth didn't have to rewrite any firmware for it (nor software; I handled it, and avoided telling him how grossly inefficient it was, as I thought it would bother his soul).
Hey, on the Sail machine, user names were 3 letters, and you only got one level of subdirectory, also 3 letters long. So, in the syntax of the time, “[ACP,DEK]” was it!
(Six bits for each upper-case alphanumeric character means a 6-character directory name neatly fits into a 36-bit word on the DEC PDP-10. And file names were limited to 6 characters for the same reason; extensions were 3 characters, with the remaining 18 bits for file meta-data, I think.)
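For concreteness, a little sketch of that packing (Python, just for illustration; the helper name is mine, but the SIXBIT rule of mapping ASCII 32..95 to codes 0..63 by subtracting 32 is the real one):

    # Illustrative sketch (not DEC's actual code): pack a name of up to six
    # characters into one 36-bit PDP-10 word.  SIXBIT maps ASCII 32..95
    # (space through underscore, upper case only) to codes 0..63 by
    # subtracting 32, so 6 characters x 6 bits = 36 bits exactly.
    def sixbit_pack(name: str) -> int:
        assert len(name) <= 6
        word = 0
        for ch in name.upper().ljust(6):     # pad short names with spaces
            code = ord(ch) - 32              # the 6-bit SIXBIT code
            assert 0 <= code < 64, f"not representable in SIXBIT: {ch!r}"
            word = (word << 6) | code
        return word

    print(oct(sixbit_pack("ACPDEK")))        # one 36-bit word, no bits to spare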
I think, though, that there was nothing preventing him from developing TeX in Fortran or something, and leaving it at that. He is just somewhat easily distracted. He nerd-snipes himself possibly.
Yak Shaving is the frustrating work we do because of impedance mismatches in the system. Libtool is a huge yak shave, for example. But so is writing a custom CSV parser because an input file doesn't quite follow the standard, and you could just use Python to make the translation, but your environment won't let you use Python in that spot, etc. Days later nothing is really accomplished, you hate programming, and you can finally start work on the real problem after losing the motivation you had when you started.
But I like the idea of a patron saint for this, and Knuth is as good as any. Perhaps Dijkstra could be the patron saint of Unused Better Solutions. I'm sure there's a litany of things that could use saints in this field.
Actually, Knuth first wrote TeX in SAIL (1977–1978), then discovered everyone was interested and rewriting it for their own systems in divergent ways, so he rewrote a "portable" version in WEB (Pascal) (1980–1982), and he did indeed leave it at that, a few years later (https://tug.org/TUGboat/tb11-4/tb30knut.pdf). It's not that he's easily distracted; it's just that he really cares about quality, and underestimated how long this detour would take. :-) (He told his publishers in March 1977 that he'd have proofs for them in July; he was expecting it to be a month or two of work; not 10+ years.)
Hm, what unused better software did Dijkstra write?
Maybe Knuth could have used Fortran and not Pascal, but that doesn’t solve the literate programming problem ... he would still want something like WEB.
Seems like Fortran is more maintained today than Pascal, but Knuth couldn’t have known that. The community yak shave to translate the code arose out of other forces in the industry.
Pascal was a huge force in the computing world for quite a while - Turbo Pascal on the IBM PC was far and away one of the most performant programming environments available.
Right, I'm saying he didn't really need to solve the literate programming problem if he was really focusing on writing a book on algorithms that was to precede his book on compilers. That's alright, the journey is often more important than the destination. He wanted to tell the story of writing TeX in a book in which TeX is written, and it made sense to intercalate the text and the program. There just wasn't a tool that could do anything with it yet.
I follow the folks over at 100 rabbits (https://100r.co) and I think they’ve sort of developed yak shaving into a lifestyle.
I don’t want to go too far into it because the website is a treasure trove and exciting to explore on your own, but if you follow some of their latest stuff, they’ve built a stack-based virtual machine and a whole set of software around it, and it truly feels like they are just exploring every avenue that interests them without rushing themselves. Truly the embodiment of my grow-a-beard-and-learn-Haskell dreams.
The Art of Computer Programming itself is a yak shave. Knuth was originally going to write a book on compilers, but he realized that he needed to cover some prerequisites first, and that became TAOCP.
9. Lexical scanning (includes also string search and data compression)
10. Parsing techniques
...
And after Volumes 1--5 are done, God willing, I plan to publish Volume 6 (the theory of context-free languages) and Volume 7 (Compiler techniques), but only if the things I want to say about those topics are still relevant and still haven't been said."
Does that mean TeX and ACP delayed the revelation of his thoughts about compiler techniques by 40 years, and that he thinks the original thoughts that started the whole thing are still relevant and at least partially unpublished/undiscovered?
Given what he did for computer algorithms (and document typesetting), this makes me really curious about his views on compilers.
He's been publishing in the space all the while [1]; I'm guessing a lot of what he still has unpublished is about how to present and connect the ever-growing literature.
Granted, there is a substantial degree to which font design has a strong subjective element (even though there are some non-subjective metrics that must be met too). However, the font that emerged in "Yak shave #4", Computer Modern, manages (for me) to pull off a somewhat exceptional double feat. On the one hand, it is to my eyes one of the ugliest fonts designed during the 20th century; on the other, my experience of it as a reader (hence again, my eyes) is that it is one of the easiest to read fonts ever created. I assume that DK found CM to be both beautiful and readable, which is fine, but I think it's even more remarkable that he could have created a font that someone with a modicum of interest in font design and typesetting could find both ugly and very readable.
While I would certainly not consider Computer Modern as a beautiful typeface, it is likely that your perception of it is biased due to looking at it on computer displays.
Knuth chose Computer Modern to be a so-called "Modern" typeface, a name applied to a certain style of typefaces that became popular at the beginning of the 19th century and was frequently used for mathematics books of that time.
Of all styles of typefaces, the "Modern" typefaces are the least appropriate for computer displays, because no desktop computer display has a resolution high enough to render such typefaces.
On computer displays, Computer Modern and all typefaces of this kind (with excessive contrast between thick and thin lines) are strongly distorted to fit the display pixels, so they become much uglier than when rendered correctly on a high-resolution printer (i.e. at 1200 dpi or more).
Therefore you cannot compare Computer Modern with more recent typefaces that have been designed to look nice on computer displays or with old typefaces from other families, which were suitable for low-resolution printing.
Computer Modern was strictly intended for very high resolution printing on paper and not for viewing documents on computers.
I recall reading somewhere that it was designed for printers that “bled”, and that if you use it on modern, exact laser printers you need to take that into account and make it “thicker” - but I suppose this could be tested by comparing output to early copies of TAOCP.
> On the one hand, it is to my eyes one of the ugliest fonts designed during the 20th century; on the other, my experience of it as a reader (hence again, my eyes) is that it is one of the easiest to read fonts ever created.
For me it's the exact opposite. I find the font to be beautifully designed and perfect for mathematical typesetting (e.g., it properly distinguishes ν and v for instance). However, it's terrible for on-screen reading and printing using laser printers because of how spindly the glyphs are. Indeed, Knuth designed CMR for the printers of the 1980s [1].
The most carefully designed math fonts IMHO are the MathTime fonts [2] designed by Michael Spivak to use in conjunction with Times. It isn't a free font, however. (For a sample see this paper [3] published in the Annals of Mathematics.) MathTime is also a perfect example of yak shaving. Spivak was dissatisfied with most available fonts when he set out to update his book on calculus, so he basically learned type design from scratch [4]. I also really like the Fourier fonts (meant to be used with Adobe's Utopia) designed by Michel Bovani, and I recently came across a book on statistical mechanics [5] that uses them, which I thought was typeset beautifully.
I really don't know. I just find that the cognitive experience of reading papers or even books set in CM is always very easy for me, certainly compared to some other common fonts. However, my first glance at the page(s) always leads to "Oh, ugh, not CM again" and my eye notices the serifs, the rather blocky aspect ratio, the weirdly varying line weights ... and then I read it and my eyes/brain love it.
Fair. I suspect there may be some reminder of overzealous advocates when you first see it? Such that knowing the font is enough to remember a lot of the nonsense around it, even if it is somewhat objectively good?
That is, is this a blending of bad subjective experience with otherwise good objective experiences?
I wonder if this is something like the Ratatouille effect - you know enough to “know” what shouldn’t work, but Knuth knows enough to know what does work - and your eyes agree.
If you go through old Springer Undergraduate or Graduate Texts in Mathematics (typeset in TeX and Computer Modern) you'll notice they are really pleasant on the eyes. The amount of information density is just right. There's no noise and it's very easy to distinguish where sections start and end.
If you look into any great pre-TeX book like Rudin [1], it looks much uglier and more difficult to read. I'd actually pay a significant amount of money for new editions of Rudin or Halmos typeset in TeX and Computer Modern. They'd be much easier to go through.
I must also add that Latin Modern is an update to Computer Modern that is slightly thicker and looks much better on screens.
I was going to mention Comic Sans but decided it wasn't necessary in context.
For many things, I'm a Palatino Person, but I am aware that it is aging poorly. I also love the European public display use of Helvetica, and for novels and other continuous text flow, I'm a cheap date: anything will do.
The only thing wrong with Palatino is that it was never intended for typesetting body copy (substantial blocks of text) as opposed to display (headings, splash text on posters, etc.) and its proportions aren't quite right for that purpose.
Try the closely related Aldus, or in extremis just condense Palatino by a few percent. (The latter suggestion may get me yelled at, somewhat understandably, but I really do think it's an improvement if you're setting substantial quantities of text.)
Today, major TeX distributions have their own Pascal(WEB)-to-C converters, written specifically for the TeX (and METAFONT) program. For example, TeX Live uses web2c[5], MiKTeX uses its own “C4P”[6], and even the more obscure distributions like KerTeX[7] have their own WEB/Pascal-to-C translators. One interesting project is web2w[8,9], which translates the TeX program from WEB (the Pascal-based literate programming system) to CWEB (the C-based literate programming system).
The only exception I'm aware of (that does not translate WEB or Pascal to C) is the TeX-GPC distribution [10,11,12], which makes only the changes needed to get the TeX program running with a modern Pascal compiler (GPC, GNU Pascal).
...
I may write a blog post on this since it's relevant to how https://www.oilshell.org/ is written in a set of Python-based DSLs and translated to C++.
The instructions for compiling TeX from scratch are a little daunting, and sometimes unclear (e.g., having to define printer settings). Here are my Linux instructions for those who want to get core TeX up and running:
Although TeX has many years of development effort behind it, the core functionality of converting macros to math glyphs is reasonably straightforward and can be accomplished in a few months---especially given that high-quality math fonts are freely available. Here's a Java-based TeX implementation that provides the ability to format simple TeX equations:
It is an entertaining article, but I am not too sure I agree that Knuth was yak shaving. Whether by luck, inspiration, conviction or observation, he understood what makes a program last. You focus on one thing and you do it well. Along the way you resolve all the problems. He famously said that at first he had one user, himself, but things were different when he had 1000 users. At that stage you need to remove bugs, hence his famous checks rewarding people who reported bugs. When your user base grows, allow them to extend the program through their own code. I also think that the program adapted well throughout the four decades of its usage due to its well-thought-out level of abstraction. Nearing my three score and ten years, and having used TeX/LaTeX for all these decades, I can state with confidence that the only people yak shaving were us early adopters, learning typography rather than focusing on our studies. The same can of course be said for current users.
I understand what you mean with this in context, and it's true, but it's still somewhat funny to talk about Donald Knuth and "focusing on one thing". It's more like he focused on everything in programming, and somehow does it well.
I'm not sure that it's slow moving. It's just that it generally grows on the edges, and unless you're out near the edges, you've no idea how quickly the edges are growing.
Fair enough; my view of mathematics is very much the layperson's, seeing mainstream mathematics as mostly unchanged for the last decade or two, but I would expect someone doing mathematics professionally in a research capacity to see nothing but green fields and novel approaches. And as the field grows (assuming a two-dimensional, spherical math field here for the moment), so do the edges.
As someone only on their two score and ten (and with many bald yaks behind me), do you have advice for keeping on writing code, as opposed to the horrors of management?
Stay out of the hierarchy and start consulting, specifically troubleshooting. The rates are higher because the clients need to get stuff done and a lot of red tape will be cut for you by people in a position to make decisions. Also: your customers will be genuinely happy. Then: leave. Don't hang around because you'll become part of the problem, no matter how much they offer.
Increase your price by 25% every year or so, until you start losing customers, and ask your previous customers if you can use them as a reference.
It's a good gig because it actually requires something that is somewhat rare: lots of experience, so it's a natural match for the older hands-on tech people.
There's a certain kind of computer science paper from the 90s, very frequently encountered if you're familiar with the research. The typesetting is incredibly ugly, and they're all ugly in the same characteristic way. See for example [1] (just the first one I found, some are quite a bit worse). I want to know the story behind this.
In the case of my example, note that when you load this paper in a browser, the tab will say "main.dvi". So I'm guessing the paper was typeset in LaTeX, published as DVI, and when PDF came along they converted it, but the DVI -> PDF conversion algorithm was really bad.
These are examples of files that have gone the route DVI → PS → PDF, where the PS file contained Type 3 (bitmap) fonts without hinting instructions for on-screen viewing. If you have the original PS file you can often fix them with Heiko Oberdiek's amazing `pkfix` (and `pkfix-helper` if needed) tool [1].
(You can zoom the PDF to 500% to see the font's glyph shapes down to actual pixels; this pixelation is not inherently a problem as these PDFs look fine when printed on a typical high-resolution printer: try it!)
In more detail for your example PDF [2]: running `pdfinfo` gives:
    $ pdfinfo H-SIGMOD1999.pdf
    Title:        main.dvi
    Creator:      dvips 5.58 Copyright 1986, 1994 Radical Eye Software
    Producer:     Acrobat Distiller Command 3.0 for Solaris 2.3 and later (SPARC)
    CreationDate: Thu Jan 24 16:43:42 2002 PST
So presumably:
• `main.tex` has been typeset with TeX into `main.dvi` at some point (the paper has a "September 1998" and "SIGMOD 1999" in the title, so presumably it's from then),
• This `main.dvi` has been converted into a `.ps` file at some point, using dvips 5.58 from 1994 (dvips 5.70 was in 1997).
• This `.ps` file has been converted into `.pdf` using Acrobat Distiller on SPARC, presumably in 2002.
Now, with some searching online we can actually find the original PS file: it's at [3] (and dated "24-Sep-1998 19:15"). The dvips version 5.58 is too old for pkfix to run its magic directly, but by running pkfix-helper first (which does guessing based on font metrics, and in this case seems to have guessed mostly correctly: though the superscript font for footnotes is wrong), and then pkfix, and then converting to PDF, we get this equivalent I just made (compare with your example [2]): [4]
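For anyone who wants to repeat that repair on a similar paper, the pipeline is roughly the following (a sketch only: the tools are real, but the file names are made up and the argument order is from memory, so check the man pages first):

    # Rough sketch of the repair pipeline described above.  pkfix,
    # pkfix-helper and ps2pdf are real tools, but the argument order here
    # is from memory -- consult the man pages before relying on it.
    import subprocess

    def repair(ps_in: str) -> None:
        # The dvips that made this file is too old for pkfix alone, so let
        # pkfix-helper guess the font assignments first.
        subprocess.run(["pkfix-helper", ps_in, "helper-out.ps"], check=True)
        # pkfix then swaps the bitmap (Type 3) fonts for vector Type 1 fonts.
        subprocess.run(["pkfix", "helper-out.ps", "fixed.ps"], check=True)
        # Finally, convert the repaired PostScript back to PDF.
        subprocess.run(["ps2pdf", "fixed.ps", "fixed.pdf"], check=True)

    repair("H-SIGMOD1999.ps")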
There are also some old papers, converted to PDF by way of dvips and Ghostscript, that are completely unreadable with a screen reader unless you OCR them, because the text in the PDF is somehow not using Unicode or even ASCII. You'll see the same problem if you run pdftotext or similar on the document. My usual example is this:
DVI files do not contain fonts. What happened here is that the DVI to PDF conversion targeted a low resolution bitmap output (for screen/download, not for printing). The vector font specified by the DVI file was rasterized at that point. Later, PDF compatible (PostScript) vector versions of the Computer Modern fonts became available.
In 1968 a book like TAOCP needed to cover assembly programming. Using an existing assembly language for an existing machine would not work well unless you wanted the book to be "TAOCP for CDC 6600 Programmers" or "TAOCP for Burroughs B5500 Programmers" or "TAOCP for IBM/360 Programmers" or similar. There were a lot of significant differences between different machines, and so if your book used any particular one for its implementations it would be harder to use for readers who used a different machine.
MIX allowed Knuth to capture the important points that you needed to learn about assembly programming, without getting you tied to any particular machine.
I wrote a DVI converter for the Xerox 9700 laser printer and the Autologic APS-5 typesetter in the early 1980s. Ran on an Apollo Domain node. Fun times.
I took a 1-week summer class at Stanford with Knuth. At lunch, he told us the story of a Chinese restaurant that served food in the style of the province of Hunan. They had chopsticks printed with a typo that said "Human cuisine".
Incidentally, while this is a great post, the order of yak-shaving is actually pretty close to the opposite of the order described in the post.
He started with wanting a digital version of Monotype Modern 8A (what became Computer Modern). His first idea was to go to Xerox and use their scanning equipment to digitize the font, but they would only let him use it if they would have copyright on the resulting fonts, so he started looking deeper at the fonts himself. That gave him the idea of describing the font shapes from scratch (with equations) instead of simply scanning them, so he came up with METAFONT. To use his own digital fonts, he would need a typesetting system: TeX. He did implement TeX in SAIL, but when it became clear he would need to rewrite it in a portable way (Pascal), and Hoare suggested publishing the program as a book, he came up with literate programming and WEB.
And as mentioned in another thread (https://news.ycombinator.com/item?id=29862858), TAOCP is itself a "yak shave", Knuth's response to being asked to write a book about compilers, in his first year of grad school in 1960 (given his compiler exploits earlier, like http://ed-thelen.org/comp-hist/B5000-AlgolRWaychoff.html#7). The publishers who were hoping to sell a book to aspiring early-1960s machine-language compiler-writers never got their wish, but they got something much more.
> But “The Art of Computer Programming” is an impressive book in its own right: it is still unfinished, currently spanning 3.5 volumes (yes, the fourth is unfinished, but the first chapters are released).
I wish he would update his volume on Searching and Sorting with modern search engine techniques.
From his point of view, this certainly seems like a continuation of a life passion. Wouldn’t we all just love to follow our curiosity? Highlights a life well lived.
I’m curious whether this helped or hindered AoCP. Did it improve the work because the great theorist was also practicing on the side? Or did it slow down creation of AoCP? If the answer is “both”, which effect is stronger?
This is written with all due respect to this great polymath.
I suspect it helped, because it gave him “full stack control”: he never had to be pissed at the output and rework everything (instead he could just fix what was wrong).
Being in total control of your tooling is very nice for work that will span decades.
I wrote some macros in TeX, over 100 of them, and in them used programming language constructs including if-then-else, do-while, dynamically allocated typed variables, and file reading and writing. Sounds like a "programming language" to me.
Sorry, TeX was not written primarily for programmers or computer scientists. Instead the target audience was people who use a lot of mathematical notation. In my opinion, the main glory of TeX is how it helps position the mathematical symbols in mathematical expressions, including some really complicated ones.
I know a guy, a good mathematician, who had a really tough time understanding the purpose of TeX. He kept evaluating TeX in terms of what it did for the future of word processing taken very generally, maybe all the way to video as in some Hollywood movies. E.g., TeX is not promising for generating video of a Darth Vader light saber battle. Then he noticed that TeX is not really that future. I finally explained to him that TeX was not trying to be the future of some generalized word processing, and thus was not looking ahead, and instead was looking back at something he knew well -- the literature of advanced math as in math journals such as those published by the AMS (American Mathematical Society or some such). So, TeX was to ease the word processing needed for pages of mathematics as in the math journals and textbooks.
That friend kept asking me to write a converter that would convert a file of TeX to a file of HTML. I kept telling him that such a converter was impossible because TeX was a programming language and HTML was not. I did explain that, at least in principle, one could write a converter to convert TeX output, that is, a DVI (device independent) file, to HTML. There are converters, heavily used, to convert DVI to PDF (portable document format or some such).
I like TeX; it is one of my favorite things, and I use it for all my higher quality word processing, the core, original math for my startup, business cards, even business letters. My last published paper (in some mathematical statistics) was in TeX, and using TeX was liberating because I could just go ahead and do the math and not worry about how I was going to get the word processing done, did not have to bend the math and reduce the content to make the word processing easier.
Future of TeX? The fraction of the population that wants to typeset complicated math expressions seems to be tiny, and there are lots of alternatives for others. So, my guess is that TeX will be like, say, a violin -- won't change much in hundreds of years.
The OP mentioned LaTeX: For people new to TeX, no, you don't have to learn LaTeX. The approach of LaTeX is different. In an analogy, LaTeX wants you to state whether you are building a bicycle, motorcycle, car, truck, boat, or airplane, and then lots of lower-level details are handled for you. With TeX, you never decide what type of vehicle you are building and instead work with the parts and pieces -- yes, with a lot of help.
There is Knuth's book on TeX, The TeXbook, and also the books on LaTeX. In comparison, Knuth's book is a lot shorter than the books on LaTeX. So, I got the books on LaTeX, looked at them, and decided that it was easier just to stay with TeX and the macros I could write for TeX. So, people new to TeX don't really have to get and read the books on LaTeX.
Getting math typeset was a big problem. TeX is a good solution. Problem solved. We can move on!
The collection of TeX macros I wrote has over 100 macros.
> That friend kept asking me to write a converter that would convert a file of TeX to a file of HTML. I kept telling him that such a converter was impossible because TeX was a programming language and HTML was not.
According to Massimiliano Gubinelli (please take a look at the following comments thread https://news.ycombinator.com/item?id=27820466, in particular at https://news.ycombinator.com/item?id=27822662), the reason why a converter is impossible is that the _syntax_ of TeX is Turing complete. If one had kept syntax and programming constructs separated, then a converter would have been possible. (I understand the argument only superficially but I trust Gubinelli enough to report this here).
Yes, I considered justifying the impossibility of writing a TeX to HTML converter by saying that TeX was Turing complete and HTML was not. Yup, I considered doing that! Then I just made a simpler and less technical claim!
I regularly use Texinfo, which is primarily a single macro file [0] that starts out as TeX and has been completely changed into a new language by the end of the file (one capable of conditionally embedding TeX, HTML, etc.).
Being able to seamlessly transform the language on such a fundamental level is one of the reasons I don't think we'll ever see a next major TeX - all it would need to be is a macro atop TeX. And then everyone has their own feelings, so it'd probably stay as is.
This is great! Articles like these are inspiring; the cleverness/brilliance on display is just staggering and makes you really appreciate what is meant by the word "genius".
I happened to be reading John Wilkins' introduction to "An Essay Towards A Real Character and a Philosophical Language" [1] yesterday, he mentions that Messalla Corvinus wrote a treatise on the letter S, though it seems we only have the title, "Liber de S. Lit[t]era" [2]. I wonder if Knuth had this in mind.
Some Buddhists who are vegetarian will justify eating Yaks because one yak can provide food for many people and only requires one life to be lost, as opposed to chickens or fish. The same can be true for computer science.
I have seen some of the scrawnier ones in labs; they probably would only feed one or two people, so I am not sure the utilitarian approach is a winner. Maybe if we fed them a diet of Mountain Dew and pizza it might improve the flavours?
It's largely due to the relatively low effort required to hunt and catch one. Less energy spent means it's economical to hunt the scrawny ones.
You could spend a couple of days chasing down the larger specimens, but they tend to put up more of a struggle, and in the time spent, you could have a dozen scrawny engineers already in your larder or stew.
This is of course similar to how early human hunters operated - simply chasing down the prey till they fell exhausted. For the average post-doc CS student this will be three to four blocks, so the energy trade-off is in your favour. However, the available calories drop significantly with cryptographers, and even though you can drop one after just one block, the energy trade-off is even.
Thanks for the DVI shout-out, btw.