Just no. CM font was made for printing. It was last updated in 1992, before PDF was even a thing. CM is not made for screens and it is a horrible idea to use it on the Web.
> It's so good-looking that some scientists do research just so they can write it up in Computer Modern.
It's more like "undergrad students use it in their papers because they don't know better". The only Computer Science publications I'm aware of that still use CM are the Springer LNCS/LNAI series. Which also happen to still use a template optimized for printing the proceedings as books. And AFAIK, these templates are universally hated and considered archaic and outdated: They look bad everywhere (screen, A4 printout, letter printout) except the Springer-printed paper volume.
Worldwide, the most used screen resolution is 1920x1080, at ~9%; all the other top resolutions are somewhere around 720p [1]. Even on desktop computers the top resolution is 1920x1080 at ~21%, followed by 1366x768, also at ~21% [2]. I still only use 1080p screens due to superior performance in games, refresh rate, battery life, and not having to deal with scaling issues. Things have changed since I bought them three years ago, but I don't need to upgrade.
Computer Modern is the "plaid shorts" of TeX fonts. I profoundly admire all of Don Knuth's efforts, and I've had the pleasure of a number of conversations with him. He threw in the towel himself, recognizing that artists could design better fonts than computers.
To excel at math requires focus. When I see Computer Modern I see an author who didn't get distracted choosing a better font. Not just undergraduates.
It's good that he threw in the towel. It takes a bit of a galaxy brain perspective to conclude that by being amateurish in his approach to font implementation with TeX/Metafont, Knuth was somehow being a more professional mathematician.
We need to accept that Knuth was an amateur type designer who was genuinely preoccupied with fonts (“I can’t go to a restaurant and order food because I keep looking at the fonts on the menu.” — Donald Knuth, https://twitter.com/TeXtip/status/1389329788628783104). He could have delegated the design of the first font for his TeX system to a professional type designer. He didn't.
> He could have delegated the design of the first font for his TeX system to a professional type designer.
Knuth started work on TeX's fonts at about the same time that he began work on TeX and Metafont (1977-78). CM was largely completed by about 1984-86. He was a computer science professor. I'm not aware of any funding he procured to develop TeX. You're saying he had the financial resources to hire a professional type designer for fonts for an academic publishing system with an unknown future?
By 1980 Knuth was working with Hermann Zapf. So he was able to interest Zapf in his software essentially immediately. Knuth was a prestigious figure, so it's not a shock. Stanford students of typography and CS participated in the detailed production work on the font that was made with Zapf. It doesn't sound like funding was an issue.
Sure, bootstrapping the TeX project would have been a bit more difficult if a font designer had been required from the start; on the other hand, maybe Metafont would have been a more successful font design system if type designers had been (more?) involved from the outset.
"Zapf designed and drew the Euler alphabets in 1980–81 and provided critique and advice of digital proofs in 1983 and later. The typeface family is copyright by American Mathematical Society, 1983. Euler Metafont development was done by Stanford computer science and/or digital typography students; first Scott Kim, then Carol Twombly and Daniel Mills, and finally David Siegel, all assisted by John Hobby. Siegel finished the Metafont Euler digitization project as his M.S. thesis in 1985."
It's worth noting, more generally, that Knuth himself has described a realization that he gets better results from delegating tasks than from "going it alone":
"Some of you may recall that I wrote the entire
program for TEX78 and TEX82 all by myself, and
you may be wondering whether I’ve done the same
for iTeX. Don’t worry: This time around I’m having
the job done by people who know what they’re
doing. After many years I’ve finally come to realize
that my main strength lies in an ability to delegate
work and to lead large projects, rather than to go
it alone. Programming has never really been my
forté for example, I’ve had to remove 1289 bugs
from TEX, and 571 from METAFONT."
This is backwards: This comment (and its grandparent) makes it sound as though Knuth set out to create a typesetting system and a font creation system, and then was faced with the choice of who would design the first font for the system. In reality, his work in typesetting started with a very narrow goal: when the publishers of TAOCP were about to set the second edition of Volume 2 using phototypesetting (instead of hot-metal typesetting as in the first edition of Vols 1–3 and 2nd edition of Vol 1), he found the galley proofs awful, and at the same time he came to know of the existence of a digital typesetter, so he thought he might be able to retain the appearance of the first edition. That's all there was to it; it was supposed to be a three-month one-off job specifically for TAOCP (and later volumes/editions thereof), and he even thought his grad students could code up the whole thing over the summer while he was away.
More concretely: the design of Computer Modern is based on the typeface used for the first edition (10-point Monotype Modern 8A, etc). It's not identical, but you can compare say, the first (1973, Monotype hot metal) and second (1998, using CM/Metafont) editions of Volume 3; it's substantially the same typeface; at least most people's criticisms of Computer Modern would also apply to the font he found so beautiful and which he was trying to reproduce. (Except the "spindly" or "too light" complaint which is not a problem with "true" CM but only with the TrueType/OpenType version used today by nearly everyone except Knuth: see https://tex.stackexchange.com/a/361722/48 and the comment below it, and http://mirrors.ctan.org/fonts/mlmodern/doc/mlmodern.pdf.) In fact, he says he was excited when Addison-Wesley approached him to write a textbook (in the early 1960s) because he loved their typesetting so much.
His goal at first was simply to reproduce the same font digitally (TeX was created only because of the necessity of a system for using those new digital fonts). He approached Xerox (PARC, I think) to use their high-resolution equipment and obtain a digital scan of the Monotype font. If they had agreed unconditionally, that would have been the end of it, and METAFONT wouldn't have existed. They insisted that any font created digitally using their resources would belong to Xerox, which Knuth thought fair but unacceptable to him, so he set out to reproduce the font with the resources available to him. While staring at blown-up images of the letters projected on his wall at home via a TV camera, he had the realization that the shapes were not entirely arbitrary and therefore perhaps he ought to describe the shapes in terms of curves rather than simply capturing the shapes as numbers, and his syntax (and implementation) for describing these shapes turned into METAFONT. His shapes (glyphs) for CM did diverge from Monotype Modern a bit, with all the feedback he got from people like Hermann Zapf and Matthew Carter and Richard Southall and Chuck Bigelow & Kris Holmes and Neenie Billawala (one of the Stanford students)—these are the people credited in the preface to Volume E (Computer Modern Typefaces)—but it's not as if he sat down to design a typeface ab initio in a system that he had already built or was planning to build. METAFONT is simply the collection of tricks/code he came up with for the problem of creating a computer version of Monotype Modern (thus the name, "Computer Modern").
> Thus, I came to the conclusion that the designer of a new system must not only be the implementer and the first large-scale user; the designer should also write the first user manual. The separation of any of these four components would have hurt TeX significantly. If I had not participated fully in all these activities, literally hundreds of improvements would never have been made, because I would never have thought of them or perceived why they were important.
He's poking fun at himself with the whole "programming has never been my forté" -- the fact that he kept count of all the bugs in TEX and METAFONT is the amazing thing.
At the risk of stating the obvious, iTeX is not a real successor of TeX, it's a deliberately-vaporware joke.
CM isn't good for the web, but it is pleasing in print IMO.
I do agree though it has been abused a lot as a status symbol. Back at MIT, having a resume in CM spoke "I come from a science/engineering background" right off the bat, while having a resume in a Microsoft font spoke "I go to the business school".
If you move away from CM, great, there are lots of better web-friendly fonts, but don't go to TNR for f*ck's sake. TNR is awful. Unfortunately some publications have gone down this hole.
I like Crimson Pro of late. It looks VERY pleasing on mobile LCD/LED and e-Ink screens, has the effect of making your device feel like a quality printed book, and has a full selection of weights.
> Just no. CM font was made for printing. It was last updated in 1992, before PDF was even a thing. CM is not made for screens and it is a horrible idea to use it on the Web.
The same can be said about PDF files.
I recently found out why spaces are so often doubled or dropped when copying text from them: the format has no understanding of spaces, only of the distance between characters. It is entirely about presentation, lacking semantics of any kind.
When copying, the reader simply applies a heuristic of what distance between characters constitutes a space, and is often wrong.
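To make that concrete, here is a minimal Python sketch of the kind of gap-threshold heuristic an extractor might apply; the glyph tuples and the 0.3em threshold are illustrative assumptions, not any particular reader's actual algorithm:

    # A PDF page is just positioned glyphs, so an extractor has to guess
    # where the spaces were from the horizontal gaps between characters.
    def recover_text(glyphs, font_size):
        """glyphs: (char, x_start, x_end) tuples in page units, sorted by x."""
        space_threshold = 0.3 * font_size  # gaps wider than this become a space
        out, prev_end = [], None
        for char, x_start, x_end in glyphs:
            if prev_end is not None and x_start - prev_end > space_threshold:
                out.append(" ")
            out.append(char)
            prev_end = x_end
        return "".join(out)

    # Loose spacing gains a bogus space; a tightly set word space would lose one.
    print(recover_text([("H", 0, 7), ("i", 7.2, 9), ("!", 13, 15)], font_size=10))
    # -> "Hi !"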
The format understands spaces just fine. The problem is that programs that produce PDFs are often designed just to make the PDF look correct, without any care for semantics.
I think there's just a bunch of history here: print is the use case that you really want PDF to get right, so you take existing print pipelines, which discard semantics, and add PDF as a new backend. That's the easy/lazy way to get PDF working, and there are also tons of PDF pipelines that are just stapled onto the end of a print pipeline, as if PDF were another type of printer.
You'll see a night and day difference between one of those pipelines and something more suited to processing a document with semantics, like FrameMaker. FrameMaker seems to be a bit more niche now that print documentation is less mainstream.
I can understand why the PDF ecosystem is a bit frustrating, but I still prefer it when I'm reading academic papers explaining some algorithm or mathematical concept.
> The problem is that programs that produce PDFs are often designed just to make the PDF look correct, without any care for semantics.
When the alternative is that they don't offer a non-proprietary export format at all, I'll take it. Even a poor PDF output is better for data portability than having your data forever trapped in a proprietary platform that can disappear or get discontinued at any time without a migration path.
It is not the case that treatment of spaces is completely wild and ad hoc. PDF can, and frequently does, contain the text itself along with the display of said text. This is why applications like pdftotext can often give you correct plaintext output.
OCR applications are able to make PDFs from scanned images accessible and searchable (and with the right spaces) precisely because they embed their output into that layer of the PDF meant for the text itself, not its presentation.
Actually, what OCR programs do is a very clever hack. PDF is mostly just a way to describe how to render pages, and just like with any rendering framework, that can involve a bitmap, a set of stroke and fill commands, or some text rendering. A scanned document obviously is just a huge bitmap, but if you can identify the text, and can define a font that doesn’t draw anything, then you can add instructions to the PDF to render that text in that font, and as long as you can specify character positions everything will line up nicely.
If you are generating a PDF with text you can render a string with spaces and let the glyph layout sort it out, or render individual characters, or draw those characters by hand and use the invisible font trick (because your font is not licensed for embedding). It’s not that you’re using different conceptual layers of the PDF, you’re just choosing how to render your content.
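Here is a minimal sketch of that trick using reportlab (the file names, coordinates, and word positions are made-up placeholders); instead of an empty font, it uses text render mode 3, which tells the viewer to neither fill nor stroke the glyphs, so the text is selectable and searchable but invisible:

    from reportlab.pdfgen import canvas

    # US Letter page: draw the scanned bitmap, then overlay invisible text.
    c = canvas.Canvas("searchable_scan.pdf", pagesize=(612, 792))
    c.drawImage("page_scan.png", 0, 0, width=612, height=792)

    text = c.beginText()
    text.setTextRenderMode(3)       # 3 = invisible: no fill, no stroke
    text.setFont("Helvetica", 11)
    # One positioned run per OCR'd word so selections line up with the image.
    for word, x, y in [("Hello", 72, 700), ("world", 110, 700)]:
        text.setTextOrigin(x, y)
        text.textOut(word)
    c.drawText(text)
    c.save()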
PDF is simply the lowest common denominator of what counts as "computerized sheaf of paper, with a layout and appearance that won't shift over time, no matter which reader software you use". It has its value, especially with preservation of pre-digital documents.
Accepting a potentially inferior machine-readable plaintext when copy and pasting is the price we pay for having something like that available.
FWIW, for non-math, single-column text, LuaLaTeX + plex-otf [1] is my current favourite combo. It reads well both on screen and on paper. But if you are heavy on math, you will need to set up a second font to replace CM in the math formulas.
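For instance, here is a minimal LuaLaTeX preamble sketch using fontspec/unicode-math directly rather than the plex-otf package; it assumes IBM Plex Serif and STIX Two Math are installed, and you would swap in whatever math font you prefer:

    \documentclass{article}
    \usepackage{fontspec}
    \setmainfont{IBM Plex Serif}   % body text
    \usepackage{unicode-math}
    \setmathfont{STIX Two Math}    % replaces Computer Modern in formulas
    \begin{document}
    Body text in IBM Plex Serif, with math like $e^{i\pi} + 1 = 0$
    set in STIX Two Math rather than Computer Modern.
    \end{document}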
You can also scout the top conferences/journals in your niche (i.e. where your prof. publishes) and pick a template you like. The templates are typically linked with every CfP, and you can expect that they will have worked out the common cases for their field.
While I always found Computer Modern to be a beautiful font, every paper I read that's typeset with it seems harder to understand.
I find it hard to explain, but it somehow makes processing information and retention harder for me, and it took me a long while to be sure that it wasn't the novelty of the subject matter.
E. Allen Emerson, later Turing Award recipient, comes knocking at the door of the Unix cave[1] and demands to know why we changed all the fonts and made his papers look different. No one had changed the fonts---the department had replaced the printers. CMR is wildly different at 600dpi from 300dpi.
Personally, I never really liked Computer Modern mostly because the variation in stroke widths was too great. The thin strokes just look spindly. Personally, I much prefer Computer Concrete, which looks horrible in the samples on the web page.
When I look at the individual glyphs, or even whole passages, I don’t like it. It looks wispy and particular. But there’s something really nice about the way it comes together on a printed page in a well set academic paper. It just looks right. Which could be a kind of Stockholm Syndrome I guess. I feel similarly about Courier and screenplays.
I always wondered: Why are screenplays written in a monospaced font? To me that just looks like an odd relic from typewriter times. Is there any better explanation?
> To me that just looks like an odd relic from typewriter times.
This is the initial reason. It persists because it is part of the shorthand filmmakers use to estimate time from text. A page of screenplay translates to about a minute of screen time. That conversion is deeply ingrained in Hollywood. Everyone will look at the page count of a screenplay and make assumptions about the running time of the resulting film.
If you change the font metrics, that conversion breaks and it would confuse the hell out of everyone. And if different screenplays (or different drafts of the same one) use different metrics then you lose the ability to compare their length just using page counts.
It's important to remember that screenplays are real physical working artifacts. In the production of a film, people will be carrying around dog-eared copies of it. The director will say things like, "We're going to try to get through three pages today." Pages are a real concept, not just an arbitrary subdivision of a continuous string of text.
Also, the font is part of film culture at this point. Using a different font would convey that you are an outsider or don't care about the norms and history of cinema. It's just like how, when you see a page set in Computer Modern, you think, "Ah, this is a real CS paper." A screenplay not set in Courier would look like a fraud.
If it makes you feel better, John August has a slightly more modern font that preserves the exact metrics of 12 point Courier:
Courier Prime is a good font, but one piece of advice: if you use fonts without anti-aliasing, as I do, you need to use the original release of it. The later version on the site, which was done by someone other than the original designer (as far as I can tell), stripped out all the hinting, so it looks bad if you have anti-aliasing turned off. For this reason, I keep the original around.
That's because the usual Computer Modern font files today are too thin.
Knuth designed in a "blacken" factor and gave instructions how to tune it to your specific printer.
Because the correct amount is dependent on your printer and its printing technology. Ink tends to blur a bit, for example.
There are font files around that try to make CM a bit more correct on typical printers today, but most people don't know about it and are using some bad default CM.
This is similar to how the “pixel art” versions of old console games never existed because the TV CRTs didn’t have definite hard lines between pixels. Computer Modern was designed to be bitmapped when printed and so took into account that it would all be dots not infinitely thin lines.
Amusingly enough, this is basically "The Renaissance" done in a short time; just as Renaissance artists perceived the "Greek" style as bare marble statues and buildings, a look which never actually existed.
If the Nomad's display was anything like the SMS-based Sega Gamegear, then it may not have had CRT blur but it did put its own not-so-crisp spin on the picture:
Similar. A little less washed-out, but still not what we think of when we think of the precision of a modern LCD.
As I recall, LCD PC monitors until some time into, I dunno, maybe '05 or so, were regarded as almost always worse than CRT monitors in practically every way[0], including picture quality. Better than a typical consumer tube TV, maybe, but even some of those produced a crisper picture than a lot of LCDs, especially tiny low-power ones like that (the Sony Wega comes to mind)
[EDIT] [0] Practically every way except size & weight, I mean, obviously; despite crushing CRTs on that front, LCDs were pretty unpopular due to being so much worse otherwise.
The common look of CM which you probably have always seen is way too thin. Just look at some printed versions (as in printed on actual paper) of CM, like in Knuth’s books for instance, to see how CM is supposed to look.
No, I also don't like it. I use LaTeX for papers, and I tend to go for Palatino.
But there's also an annoyingly performative aspect to it, at least in Economics. Using a recognisable LaTeX font carries an implicit "look at me: I'm smart and technical enough to use LaTeX".
One result of this is that almost every Economics presentation you see is done with Beamer; which lends itself to dense text, bullet points and equations; which are almost never the best way to present your work.
> almost every Economics presentation you see is done with Beamer; which lends itself to dense text, bullet points and equations
I don't think that has anything to do with Beamer. That's how economists present for some horrible reason. When I was on the job market about 20 years ago, the overhead projector was the only tool universally available for presentations, and I had a couple people comment that my work would be discounted by some because it was too easy to understand. That was back when I mistakenly thought the point of the presentation was to explain my research.
In medicine, the slide projector was very common, because they tended to want to display photographs.
I think that’s what drove the defaults of PowerPoint, with its white text on a dark blue background (see https://www.edwardtufte.com/bboard/q-and-a-fetch-msg?msg_id=... for arguments as to why that’s good for slide projections). I think it also minimizes contrast differences between typical “medical conference” photo and text slides. That’s important in a darkened room, which they had to use because projectors weren’t bright enough yet.
Was on the market 4.5 years ago and it's still the same. We often have simple ideas, identification strategies, regressions, etc. and make them overtly complicated.
Making things easier on the other hand is liberating, but sadly comes at a cost (people might start thinking you only do trivial work)
>I had a couple people comment that my work would be discounted by some because it was too easy to understand
Haha, either they discount it because they understood it, or they respect it but don't understand it. In either case, it's as if they never saw the presentation.
You can adjust the parameters to make a new variant of the Computer Modern font if you think that it is too thin or whatever. (However, there are some parameters missing in my opinion (including the shape of the "R"), but it is possible to modify the code to add more if wanted.)
I think that Computer Modern is OK for print; it is not so good for screen. (However, it does have better kerning than some screen fonts.)
You aren't the only one. There's something subtly "wrong" about the typefaces. Likely because it's "scientifically designed" rather than simply designed. IMO typeface design is an art, not a science. And it requires talent, aesthetic awareness, and design sensibility to get them to look right.
Computer Modern was designed by Donald Knuth who is a computer scientist, not a typographer. He created CM as a demo of his parametric font engine Metafont.
So it’s fair to say that CM is a tech demo, not a professionally designed font.
Font designer Jonathan Hoefler has commented on Metafont: “Knuth's idea that letters start with skeletal forms is flawed.” When the fundamental design premise is wrong, you get something like Computer Modern.
Computer Modern isn't primarily designed using "skeletal forms"; that's one way of using Metafont, but not heavily used for the CM faces.
While it's true that Knuth is a computer scientist rather than a typographer, he has worked pretty closely with typographers. And in creating CM, he relied heavily on pre-existing (and professionally designed) Monotype fonts as a model. (See for example https://www.myfonts.com/fonts/mti/monotype-modern, which looks quite reminiscent of CM.)
Computer Modern is not to many people's taste today, partly because it doesn't reproduce all that well at low resolutions, but also, I think, because it is an older style that is now out of fashion and not commonly seen.
(The term "Modern" for this type of design dates from the 1800s -- see https://www.toptal.com/designers/typography/typeface-classif... -- so it's hardly surprising that "modern" faces like Bodoni or Monotype Modern or CM feel rather old-fashioned today!)
[It's also worth pointing out that Computer Modern was not created as a demo, and in fact, it's almost the other way round. He wanted better typography for The Art of Computer Programming, and created TeX and Computer Modern as means to that end (and created Metafont in order to do that).]
It’s also worth remembering that CM was designed with some very specific goals in mind in terms of clearly distinguishing a much broader character set than was the norm at the time, even at very small sizes. It is still arguably the most successful font family ever created within its intended niche. Very few font families can be used to typeset serious mathematics as legibly as CM, and even then, few authors will make the effort to set everything with the attention to detail Knuth has.
> Knuth's idea that letters start with skeletal forms is flawed.
This is a critique of Metafont, not Computer Modern. And interestingly, Knuth et al eventually reached the same conclusion; as I recall, most letters in Computer Modern are drawn as outlines and then filled in (instead of being drawn in a few strokes with a broader pen).
I think Hoefler phrased his comment well; the idea is flawed, not necessarily wrong outright. Letterforms derive from historical constructions: the uppercase roman letters from Roman square capitals, which were carved; lowercase from humanist miniscules (from carolingian miniscules), written by pen; &c. So in some sense, some letters do start with skeletal forms, but: when letters were adapted to print, the punches (the "master copies") for letters were made by engraving and by using counterpunches (reusable tools that create particular shapes of negative space in the letter). And that's where metal (and digital) type comes from; pens and styli are more distant ancestors.
[I'd highly recommend the book Counterpunch by Fred Smeijers on this topic!]
[Also, it's fun to look at some of the Arrighi italics from the early 16th century. They are astonishingly modern – compare it to, say, a heavier weight of Minion italic, one of the most popular typefaces used in books today!]
Anyway, on to Computer Modern. It's not my favorite Scotch roman, but take a look at engineering and mathematics books from the 1940s and 1950s for comparison. I have several books from the McGraw-Hill Electrical and Electronic Engineering Series, and they're really, really lovely, and the type is eminently readable on the printed page; here's a (somewhat poor) scan of one of them:
I am actually thinking of starting a math blog where I have to make extensive use of mathematical symbols. Using Computer Modern would also be nice. My idea was to keep the blog as minimalistic as possible, but I have not figured out how to do it the way I like. Terry Tao's blog [0] uses images for math symbols, and other people use MathJax [1] (but macros, which are used for convenience, take too long to load). Maybe I'll just have to keep linking to PDFs.
MathJax works well for me, but I also don't make extensive use of macros. It's also possible to demacro your tex.
I'll also note that Terry started his site a long time ago, when mathjax either didn't exist or worked poorly. He's used the same latex2html compiler for a long time (as it isn't broken), but the images are pretty silly. Now it's pretty straightforward to make a simple math site [1] (in the style of the bettermotherfucking website [2]).
KaTeX is much faster, but has more limited support for macros. It is a good choice for little or moderate math content, but for some use cases Mathjax is (almost) the only practical choice.
Math-heavy documents using MathJax seem to take longer to load than PDF versions of similar documents, and don’t look quite as good. I would say, for highly mathematical content, you might as well just use LaTeX and link to PDFs. You can use hyperlinks to go back and forth between PDF and HTML documents, so the experience can be basically seamless for the reader. It helps to format the PDFs with web reading in mind, rather than using typical journal paper styles.
This version of CM looks too thin. The version that comes with KaTeX (https://github.com/KaTeX/katex-fonts) looks great in the browser though. Would be nice if someone packaged that font in a more user-friendly way.
CMR has been one of the worst possible fonts that proliferated throughout the academic world. I don't understand it, perhaps it's just me, but there are dozens of other fonts that are more readable on paper and on screen.
There's just something very wrong about the glyphs, the relative widths of letters, and the way that some of the letters get sort of squished together to make the text nearly unreadable.
I remember way back in the 1990's when I had to get an approval from the head of our department to submit my thesis in another font because I refused to use CMR.
It is you. CM is based on the Modern fonts used in scientific typesetting for decades before Knuth made a Metafont version of it. I often had the chance to give non-TeX proofreaders and copy-editors (coming from the humanities) the choice between Times and CM. Almost all of them chose CM.
It's not you. Computer Modern is based on Monotype Modern 8A, one of the typefaces available for Monotype's ‘4-line system’ of typesetting mathematics. It's a Scotch Roman design, a 19th-century fashion that represents the nadir of typographic taste.
I'm not sure if I'd call it a low point unless you're simply describing your personal taste; I think there's far from uniform consensus that Scotch romans (and other "modern" faces, like the didones) were a mistake. I mean, Georgia and Miller (both by Matthew Carter, who also did Verdana and Tahoma) are both revivals in spirit, used for body copy, and they're less than three decades old.
But I'm really glad someone mentioned the Monotype 4-line system! Two papers come to mind that might be worth sharing, which I thought were really enjoyable and well-done:
I think it's interesting, and I appreciate, that fonts can communicate so much all by themselves, raise all kinds of opinions, a cult following or a shared 'in-culture'.
At uni, a friend and I spent ridiculous amounts of time 'perfecting' a few LaTeX documents for minor assignments, so that the senior and cool infosec crowd supervising us would give them a quick glance when handed over, followed by an “Is that LaTeX? Looks good.” Ironically detached praise and humbleness commenced: “It's alright I guess, sure”.
I don't think anyone was ever fooled into believing it was anything more than a play on a shared hacker appreciation: making the effort for no reason but it being harder than all the sane alternative ways of producing a document (I'm not a mathematician).
Computer Modern gives me the same feeling. The font not being a modern font is a useful feature if it supports what you want to say. It has enough character that the idea of people using it by accident feels unlikely. People who don't care seem to prefer sans-serifs.
I don't find it strange that CM, being so intertwined with the tools used by people who require typesetting rather than word processing, carries a level of shared convention, based on pragmatism or even aesthetics. As someone wrote in this thread,
> “When I see Computer Modern I see an author who didn't get distracted choosing a better font.”
If we're looking back, there are loads of fonts that literally couldn't have been designed with screens in mind, as screens weren't a thing. I don't see that as a reason to dismiss their continued use by whoever sees their purpose.
Disclaimer: IBM 3270 everything. Computer Modern when celebrating.
Computer Modern isn't the most readable font in the first place, but these look absolutely terrible on Windows (in Edge, probably identical to Chrome) on a ~220dpi display. They're at least readable on MacOS/Safari due to Apple's tendency to render fonts bolder.
Do you mean the bolded text? The problem there is that the @font-face family is set up with just a single weight (no bold-italic face is provided), and so the browser applies a synthetic bold effect as a fallback.
Oh, I think you're referring to the "Computer Modern Serif Slanted" sample (not the "Computer Modern Classical Serif Italic"), right?
Yes, the @font-face rules for that family have 'font-style: normal', whereas they should really have 'font-style: oblique' which would have avoided that ugliness.
My own blog, which has sometimes appeared here on HN, has paragraphs set in Computer Modern.
It's a sort of cheeky stylistic hack, in my view, as CM adds a particular feeling to reading text set in it based on the contexts in which it otherwise appears. This is why I chose it, back in 2015 when I designed the current iteration of my website.
I thought I was being super clever, and only learned in the last year or so that CM on the web is A Thing.
Some of the faces I like, but many of them are badly rasterized at some sizes, if not at all sizes.
Also, the font metrics are off: the "w" and "e" in "weird" are too close for comfort on the web. TeX has a complex algorithm to justify text for print, and maybe it looks better in that context.
The paragraph supposedly in "Computer Modern Serif Upright Italic" is broken; it just uses the browser's default font (because the actual name used in the @font-face rule doesn't include "Serif").
From reading these comments, it seems I'm in the minority but I really like CM and I consider myself to be a fan of good typography. There's no accounting for taste...
Computer Modern is one of the most butt-ugly fonts ever designed, and it would have already died a well-deserved death if it were not the default font of math and TeX. We can do much better, and we should.
21st century Word could still learn some lessons from 20th-century TeX (and 19th-century typography, in general). Any Word document I ever tried to create is plagued with bad kerning and bad geometry, and the defaults are basically guaranteed to give you bad-looking results.
Word is the wrong tool for the job unless you're looking to do something simple, like print out a shopping list. XeTeX FTW, or Pages, or pretty much anything else, even Excel, is a better choice.
Note that if you use Xe(La)TeX, for example, you get all the goodness and flexibility of LaTeX to structure and manage your document, along with the ability to seamlessly use all the same fonts you could use in Word etc.
Word is not bad at fixing my grammar, but once you start to know what decent typesetting looks like, you will see the ways Word fails.
For one thing, it still can't do hyphenation correctly, it doesn't allow you to use small caps correctly, and I don't think it has support for lower case/old style letters.
All in all, no big issues unless you are happy with documents that just look off.
Word's defaults are crappy and you need to go to the second tab on the font dialog to fix them, but it is capable of doing proper small caps and lower case/old style numerals.¹
I don't do hyphenation or full justification in Word so I can't speak to that, but despite being a long-time LaTeX user, for most of my writing, I prefer working in Word (not least of why is that it's the expected format for most non-technical writing which is the majority of my writing).
¹ I assume that you meant numerals and not letters.
Try TeXmacs (www.texmacs.org), which, despite what the name would make one think, does not descend from TeX and Emacs; it is related to TeX only in its typographical quality and to Emacs only in its extensibility.
It has the ease of use of a word processor and is controllable through its own native macro system and Scheme (a Lisp).