This is, of course, an argument against a strawman. I wish the author had not mentioned certain folks by name, because the analysis is interesting on its own and makes some good points about apparent simplicity. However, by naming a specific person and then restating that person's opinion poorly, it does the material a disservice and drags the whole piece down.
My understanding of Jon Blow's argument is not that he is against certain classes of "safe" languages, or even formal verification. It is that software, self-evidently, does not work well -- or at least not as well as it should. And a big reason for that is indeed layers of unnecessary complexity that allow people to pretend they are being thoughtful, but serve no useful purpose in the end. The meta-reason being that there is a distinct lack of care in the industry -- that the kind of meticulousness one would associate with something like formal verification (or more visibly, UI design and performance) isn't present in most software. It is, in fact, this kind of care and dedication that he is arguing for.
His language is an attempt to express that. That said, I'm not sure it will succeed. I have some reservations similar to those of the author of this piece -- but I do appreciate that it makes the attempt, and I think it succeeds in certain parts that I hope others borrow from (and I think some already have).
I endorse your post, which is much more thoughtful and well-argued than my knee-jerk response down in the gray-colored section below.
Jon isn’t right about everything: he criticizes LSP in the cited talk, and I think the jury is in that we’re living in a golden age of rich language support (largely thanks to the huge success of VSCode). He was wrong on that one.
But the guy takes his craft seriously, he demonstrably builds high-quality software that many, many people happily pay money for, and generally knows his stuff.
Even Rust gives up some developer affordances for performance, and while it’s quite fast when used properly, there are still places where you want to go to a less-safe language because you’re counting every clock. Rust strikes a good balance, and some of my favorite software is written in it, but C++ isn’t obsolete.
I think Jai is looking like kind of what golang is advertised as: a modern C benefitting from decades of both experience and a new hardware landscape. I have no idea if it’s going to work out, but it bothers me when people dismiss ambitious projects from what sounds like a fairly uninformed perspective.
HN can’t make up its mind right now: is the hero the founder of the big YC-funded company that cuts some corners? Is it the lone contrarian self-funding ambitious software long after he didn’t need to work anymore?
> But the guy takes his craft seriously, he demonstrably builds high-quality software that many, many people happily pay money for, and generally knows his stuff.
He has made a few good games, but how has he done anything that would paint him as a competent language designer? Frankly, Blow has done very little (up to and including being a non-asshole) that would make me terribly interested in what he's up to.
Paul Graham is on record that the best languages are built by people who intend to use them, not for others to use. FWIW, I agree.
The jury is out on Jai, but it’s clearly not a toy. Jon emphasizes important stuff: build times, SOA/AOS as a first-class primitive, cache-friendliness in both the L1i and L1d. And he makes pragmatic engineering trade-offs informed by modern hardware: you can afford to over-parse a bit if your grammar isn’t insane, which helps a lot in practice on multi-core machines. The list goes on.
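For anyone unfamiliar with the SOA/AOS point, here is a hand-rolled sketch in Rust of what the layout difference means. Jai exposes this as a language-level switch; in the sketch the struct-of-arrays form is written out by hand, and the `Particle` fields are invented for the example.

    // Hand-rolled illustration of array-of-structs vs struct-of-arrays.
    // (Jai makes this a language feature; the fields here are made up.)

    // AoS: each particle's fields sit together, so a pass that only reads
    // `x` still drags `y` and `lifetime` through the cache.
    #[derive(Clone, Copy)]
    struct Particle {
        x: f32,
        y: f32,
        lifetime: f32,
    }

    // SoA: each field gets its own contiguous array, so a pass over `x`
    // streams through memory with no wasted cache lines.
    struct Particles {
        x: Vec<f32>,
        y: Vec<f32>,
        lifetime: Vec<f32>,
    }

    fn advance_aos(ps: &mut [Particle], dt: f32) {
        for p in ps {
            p.x += dt; // loads the whole struct per particle
        }
    }

    fn advance_soa(ps: &mut Particles, dt: f32) {
        for x in &mut ps.x {
            *x += dt; // touches only the `x` array; trivially vectorizable
        }
    }

    fn main() {
        let mut aos = vec![Particle { x: 0.0, y: 0.0, lifetime: 1.0 }; 4];
        let mut soa = Particles { x: vec![0.0; 4], y: vec![0.0; 4], lifetime: vec![1.0; 4] };
        advance_aos(&mut aos, 0.016);
        advance_soa(&mut soa, 0.016);
        println!("{} {}", aos[0].x, soa.x[0]);
    }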
And “he made a few good games” is really dismissive. He doesn’t launch a new game every year, but his admittedly short list of projects is 100% wildly successful, both commercially and critically, on budgets that a FAANG spends changing the color of some UI elements.
And that’s kind of the point right? Doing better work takes time, and there is in fact a lucrative market for high-quality stuff.
As for him being an asshole? He’s aspy and curt and convinced he’s right, which is a bad look on the rare occasions when he’s wrong.
But Brian Armstrong is on the front page doubling down on such bad treatment of his employees and shareholders that they are in public, written revolt. This may have changed since I looked, but no one is calling him an asshole.
A world in which a passionate craftsman who misses on diplomacy while discussing serious technical subject matter is “an asshole”, but a well-connected CEO revoking employment offers after people have already quit their old jobs is “making the hard calls”, is basically the opposite of everything the word “hacker” stands for.
> And “he made a few good games” is really dismissive. He doesn’t launch a new game every year, but his admittedly short list of projects is 100% wildly successful, both commercially and critically.
How is that dismissive? He has indeed made a few good games, but making good games doesn't certify you as a language designer any more than it makes you a good plumber or equinologist. Hollow Knight is my favorite game of all time, immensely successful both critically and commercially, and yet if Team Cherry were to release a programming language I reserve the right not to be terribly excited about it.
> But Brian Armstrong is on the front page
OK, Brian Armstrong is an asshole. I can call two people assholes. I can call more people than that assholes too, if it comes to that.
> Asshole
Because I'm dismissive of Jonathan Blow? Listen, if you want to fanboy/girl your brains out over the guy, be my guest. He just doesn't impress me all that much and I don't think "aspy and curt and convinced he's right" is anything remotely approaching an excuse for poor behavior. I've been told I'm on the autism spectrum, too, yet I manage not to act like an asshole. Though clearly you disagree.
Cherry-picking quotes from the parent and refuting them is the laziest form of argument on HN.
In this instance, it allows you to blow past the few concrete examples, among many I cited, where Jai is trying new things in the language space. It’s not hard work to learn a little about Jai. Jai may be an utter failure, but it’s not a toy or hobby: it’s being co-designed with an interesting game engine that looks pretty hot. It’s at least as expressive as C99, compiles way faster on modern gear, and targets LLVM and x86. It’s at a minimum interesting.
Calling someone who does their homework a “fanboi” is A-OK, but someone else is the one looking for an excuse for poor behavior?
> He has made a few good games, but how has he done anything that would paint him as a competent language designer?
You can watch his Twitch streams and see what he does and how he uses the language.
He's developing at least two games using it (he also shows the development of one of them on stream), and so far it's proven to be a very strong contender for a C-like language suitable for game development. Just the fact that his entire 3-D game builds in under a few seconds is definitely something to aspire to.
Not necessarily. Speed is actually very easy to come by if we push quality down to the level of "wrong answers infinitely fast", which trivially lets you achieve as much computational performance as a broken clock. Likewise, if you write code solo or in a small team, you will almost always get a more consistent, higher-quality result than if it's written at corporate scale, because that eliminates the incidental communication overheads that get reflected in the software's dependencies (e.g. Windows Terminal's low performance is mostly an artifact of Microsoft's processes).
Jon and the authors of uxn commit a common fallacy in that they're chasing a brass ring of in-the-small performance metrics, getting it in the form of particular demonstrations, and then gradually accreting features to it until, most likely, they end up in a similar position to the old tech. Many software projects start off as the "light and simple alternative" and then develop into something not light and simple. This isn't necessarily an issue for any particular project, because if you know the goal of your tech, you don't need all the features and so can omit some things to claim a definite advantage for the application. But it's not in and of itself a solution to the general issue of making computing better, because it entails bespoke effort from expert practitioners, while the general trend in computing tech is the same as most industrial automation: it's quality-first. Quality comes first when you automate, because a superhuman level of quality can redefine what's possible, and it can compensate for the downsides of not being a bespoke, artisanal result.
The actual problem language authors face is difficulty in defining quality while also generalizing the problem space. New languages are mostly "old concepts, new syntax and libraries" - still giving improvements in UX and therefore quality, but with most of the features carried over from previous languages.
> Jon and the authors of uxn commit a common fallacy in that they're chasing a brass ring of in-the-small performance metrics, getting it in the form of particular demonstrations, and then gradually accreting features to it until, most likely, they end up in a similar position to the old tech.
The authors of this post note the same tendency with Gemini [1] and demonstrate that the clients are actually quite fat, much fatter than their rationale documents claim they should be.
> Jon and the authors of uxn commit a common fallacy in that they're chasing a brass ring of in-the-small performance metrics, getting it in the form of particular demonstrations, and then gradually accreting features to it until, most likely, they end up in a similar position to the old tech
If creating full games counts as an in-the-small performance metric for a particular demonstration, I can't even imagine what in-the-big would look like.
> The actual problem language authors face is difficulty in defining quality while also generalizing the problem space
As far as I can tell, jai is exactly that. It's already dropped a few features that looked fine in theory but didn't work for actual development.
> You can watch his Twitch streams and see what he does and how he uses the language.
I can watch coworkers blaze through Common Lisp in emacs, that doesn't mean it's the Next Big Thing in developer experience and performance.
The privacy of development has made Jai less than compelling, for a lot of us I think. I'd be personally more excited if I could use it and poke around at it, rather than see someone tinker on video streams. I get why he's doing things that way, but it's just hard to feel like it's going to be meaningful for anyone but him so long as it's kept hidden away.
I appreciate that the Jai closed beta is atypical for compilers these days, and plucks some strings about the bad old days of proprietary build chains.
With that said, it’s on record that an open source release is planned, and whether or not it works, it’s not insane to run something past 100 early adopters before putting it on GitHub.
With regards to language design, Blow is a guy with a series of YouTube videos.
The common thing in PL is to publish something written or code. So don’t be surprised when some people don’t feel like they have the time to go through an unconventional format.
> [...] counting every clock. Rust strikes a good balance, and some of my favorite software is written in it, but C++ isn’t obsolete.
This isn't a good argument for C++. If you can't get where you need to go in Rust because you are "counting every clock" you need to go down, which means writing assembler -- not sideways to C++. Once you're counting every clock, none of the high level languages can help you, because you're at the mercy of their compiler's internal representation of your program and their optimisers. If you care whether we use CPU instruction A or CPU instruction B, you need the assembler, which likewise cares, and not a high level language.
Both C++ and Rust provide inline assembler if that's what you resort to.
There are things to like about C++ but "counting every clock" isn't one of them.
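For anyone curious what that escape hatch looks like, here is a minimal sketch using Rust's stable `asm!` macro (x86_64 only); C++ compilers offer their own inline-assembly equivalents. All it does is let you pick the exact instruction yourself:

    // Minimal sketch of inline assembly in Rust (stable `asm!`, x86_64 only).
    use std::arch::asm;

    fn add_with_asm(a: u64, b: u64) -> u64 {
        let mut out = a;
        // SAFETY: touches only the named registers, no memory.
        unsafe {
            asm!("add {0}, {1}", inout(reg) out, in(reg) b);
        }
        out
    }

    fn main() {
        assert_eq!(add_with_asm(2, 3), 5);
        println!("ok");
    }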
> redefining “=“ to mean linear move is JavaScript level why the fuck did we do that.
IMNSHO this is another place where C++ has the defaults wrong. If you have both copy and move assignment, then move is the correct default. C++ didn't start out having move semantics at all, so this wasn't practical, but too bad.
From a pedagogic point of view the Rust choice is much easier to teach. Having taught move assignment, Copy is just an optimisation in Rust. Whereas in C++ you need to teach both separately, and it's understandable when people don't "get it".
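For readers who haven't seen it taught that way, a tiny Rust sketch of that pedagogical ordering: `=` moves by default, and Copy types are the exception where the old binding stays usable.

    // `=` is a move by default; Copy types are "just an optimisation" of that.
    fn main() {
        let s = String::from("hello");
        let t = s;            // move: `s` is no longer usable
        // println!("{s}");   // uncommenting this is a compile error (use after move)
        println!("{t}");

        let a = 5u32;         // u32 is Copy, so `=` is a bitwise copy...
        let b = a;            // ...and the original binding stays valid
        println!("{a} {b}");
    }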
Eh, fair enough. Frankly it would probably be better to do new operators anyways.
The “=“ operator meant copy for a long time, and everything from Java to Python technically kept those semantics by calling pointers “handles” or whatever.
I write a lot of C++ and type “std::move” too much, for some kinds of code it is in fact the default you want.
But there are plenty of punctuation characters in ASCII alone. Hell Pascal has been dead long enough we could bring “:=“ back.
“=“ meaning move is the worst kind of pun: it violates 30+ years of intuition, masks that there is still often some code-gen involved, and generally flexes the “Rust vibe” that whatever you knew before is irrelevant because we fixed computing.
Rust is a cool language in some ways. Rust attitude is: “we brigade HN. And you will do nothing, because you can do nothing.” I got into Rust despite the community.
I don't think it's as much of a strawman as you're making it out to be. In his talk, Blow says that higher-level abstractions haven't made programmers more productive than they used to be, and appears to use this as an argument against abstraction. He doesn't say (as far as I recall) that we shouldn't use something like formal verification, but he does put the blame for bad software at the feet of abstraction rather than "unnecessary complexity". Or at least if that were his point he wasn't particularly clear about it.
Particularly since the article spends such a big chunk of its text talking about bounds-checks and criticising Blow for being against them, when in fact in his language Jai, as far as I know, bounds-checks are enabled by default, although you can disable them selectively, e.g. in hot loops. So it's not any different from, say, Rust in this regard. It's the strawman of strawmen.
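I don't know Jai's exact syntax for disabling checks, but the Rust half of that comparison looks roughly like this: indexing is checked by default, and `get_unchecked` is the per-call-site opt-out you might reach for in a hot loop.

    // Checked by default, opt out locally: the Rust side of the comparison.
    fn sum_checked(xs: &[u32]) -> u32 {
        let mut total = 0;
        for i in 0..xs.len() {
            total += xs[i]; // bounds-checked (often elided by the optimizer here)
        }
        total
    }

    fn sum_unchecked(xs: &[u32]) -> u32 {
        let mut total = 0;
        for i in 0..xs.len() {
            // SAFETY: `i < xs.len()` is guaranteed by the loop bound.
            total += unsafe { *xs.get_unchecked(i) };
        }
        total
    }

    fn main() {
        let xs = [1, 2, 3, 4];
        assert_eq!(sum_checked(&xs), sum_unchecked(&xs));
        println!("ok");
    }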
Blow's talk does raise a few valid questions, but it is so full of factually incorrect statements, cherry-picking, and contradictions [0] that I'm surprised anyone can take it seriously.
It's also very hard, for me at least, to interpret the sum of his arguments in the talk as anything except "If you're not managing memory, you're not a real programmer."
I agree with the overall sentiment. There's a loud minority of developers out there who are nostalgic for something they never actually experienced. It's a reaction to the explosion of complexity of computers and the increasing depth of the stack of software we must depend on to get anything done. It makes us vulnerable because we must depend on leaky abstractions since there's too much software to fully understand all of it. And sometimes there are bugs and it only takes being burned once or twice before you become suspicious of any software you haven't (or couldn't have) written yourself. But starting from scratch kills your productivity, maybe for years!
I'm truly sympathetic. The simplicity of the microcomputer era has a certain romance to it, especially if you never lived through the reality of it. There's a sense of freedom to the model of computing where only your program is running and the entirety of the machine is your own little playground. So I get why people make stuff like PICO-8 and uxn.
I agree with the criticism of Jon Blow's rhetoric, even though the tone is harsher than strictly necessary. Blow's criticisms and proposed solutions seem overly opinionated to me, and he's throwing out the baby with the bath water. He describes things like compile time and runtime hygienic macros like they're a new invention that hasn't existed in Lisp since before he was born.
However, I think targeting uxn is unfair. Uxn is better viewed as an experiment in minimalism and portability. I think of it more like an art project.
It's unfair because the author is comparing mainstream languages that benefit from countless users' input and innovations over the course of 60 years to a niche system with a small handful of developers that has existed for maybe 2 years or so. That's a strawman if I ever heard one.
"There's a loud minority of developers out there who are nostalgic for something they never actually experienced."
Having actually experienced it, all the way back to writing business applications in mainframe assembler, I am not nostalgic for it. Today I write mostly in Rust.
Likewise, as a child born in the early 80s, my family's first computer was an Apple IIGS, and I routinely used Apple IIe computers in school from 1st through 7th grade. I wrote Apple II assembler during that time (albeit heavily relying on routines copied from books and magazines). And while occasionally fooling around with an Apple II emulator with an Echo speech synthesizer, or even just listening to one [1], makes me nostalgic for my childhood, I don't miss the experience of actually using or programming those computers. Things are so much better now, even (or perhaps especially) for disabled folks like me.
[1]: https://mwcampbell.us/a11y-history/echo.mp3 This file has a real Echo followed by an emulator. Unfortunately, I forgot where I got it, so I can't provide proper attribution.
I’m from the modern web dev generation so to speak, but just yesterday had an amazing conversation with my dad, who did what you describe.
He explained to me what a “patch” actually means, or meant back then. He was talking about debugging and implementing loaders for new hardware and such, then he mentioned a “patching area”. I asked wtf that means, and apparently the release cycles back then were so slow that a binary left some dedicated space for you to directly hack in a patch. He would then change the machine code directly to jump to the patching area so he could fix a bug or extend the behavior.
Contrast this to the arguably wasteful CI/CD setups we have now.
As a kind of middle-ground: CI/CD burns our abundant cycles out of band. That’s a good place to spend extravagantly.
I’m all for burning more cycles on tests and static analysis and formal verification etc. before the software goes on the user’s machine.
But we all live with “good enough” on our machines every day. I think there’s a general consensus that too much software spends my CPU cycles like a drunken sailor.
It's also the question of what the burned CPU cycles actually buy. Cycles burned on testing buy us less buggy programs. Cycles burned on your CPU that do stuff like out-of-bounds checks or garbage collection also do that.
But most of them are burned on layers upon layers of abstraction that do not actually do anything useful safety- or correctness-wise; they're there solely because we've turned code reuse into a religion. Which wouldn't be bad by itself, if that religion had a firm commandment not to reuse bad solutions - yet that is exactly what we keep doing, again and again, patching it all with the software engineering equivalents of duct tape and glue to keep it from falling apart. Why is C still at the bottom of virtually every software stack we run? Why do we keep writing desktop apps in HTML/JS? Does a simple app really need 20 dependencies of uncertain quality?
JavaScript is a good example. It's not a bad language because it's high-level - to the contrary, that's the best part of it! It's a bad language because, despite being high-level, it still gives any user ample opportunities to mess things up by accident. We need something just as (if not more) high level but better for general-purpose software development.
I like bitching about JS as much as the next guy, and as someone who has implemented ECMA-262 I guess I’m more entitled than most.
But let’s not get carried away with it. Eich had 9 days from unreasonable manager to shipping physical media. To do kind of a cool Scheme/Self thing from scratch in 9 days? I’ve been on some high-G burns, but that’s fucking nuts.
But since there’s no scientific way to quantify any of this, I’ll throw my anecdote on the scale in favor of my opinion and note that Brendan Eich was a hard-ass Irix hacker at SGI before he followed Jim Clark to Netscape.
I'm not blaming Eich. And I very much doubt that anyone originally involved with that project thought that their tech would be the foundation for large server-side and desktop apps.
But, regardless of how we got here and whose fault it is, we're here - and it's not a good place.
I’m not arguing. Just observing the difference. Different times, different needs and practices.
For example back then it was common to understand the whole machine code of a binary in total. We’re talking no abstraction, no runtimes. Portability and virtual memory were luxuries.
I definitely think CI/CD could be less wasteful, but I don’t necessarily think we should manually patch binaries in place.
There were other moments in time between mainframes and today.
For example, when I scroll a two-page document in Google Docs my CPU usage on an M1 Mac spikes to 20%. For an app with overall functionality that is probably less than that of a Word 95.
Rust can also be used to write business applications that will compile cleanly to mainframe assembler (at least if your mainframe is 64-bit and runs Linux).
That Jonathan Blow talk is awful. Repeatedly Blow stands up what is barely even a sketch of a straw man argument for a position and then just declares he's proven himself right and moves on. I can hardly see why even Jonathan himself would believe any of this, let alone how it could convince others.
And at the end it's basically nostalgia, which is a really old trap, so old that the warnings are well posted for many centuries. If you're stepping into that trap, as Blow seemingly has, you have specifically ignored warnings telling you what's about to happen, and you should - I think - be judged accordingly.
Assuming you're talking about the "collapse of civilization" talk, I think the primary appeal is that he's confirming the feeling many of us have that software, particularly user-facing software on personal computers, is going downhill. And I use that metaphor deliberately, because it reminds us that things get worse by default, unless we put in the work to counteract the relevant natural forces and make things better.
Whether he has any real solutions to the decay of modern software is, of course, another question. It makes intuitive sense that, since previous generations were able to develop efficient and pleasant-to-use software for much more constrained computers than the ones we now have, we can gain something by looking back to the past. But those previous generations of software also lacked things that we're no longer willing to give up -- security, accessibility, internationalization, etc. That doesn't mean we have to settle for the excesses of Electron, React, etc. But it does mean that, at least for software that needs qualities like the ones I listed above, we can't just go back to the ways software used to be developed for older computers. So, I think you're right about the danger of nostalgia.
> But those previous generations of software also lacked things that we're no longer willing to give up -- security, accessibility, internationalization, etc. That doesn't mean we have to settle for the excesses of Electron, React, etc. But it does mean that, at least for software that needs qualities like the ones I listed above, we can't just go back to the ways software used to be developed for older computers. So, I think you're right about the danger of nostalgia.
I think that's precisely the danger. It's the danger of using nostalgia to feed into a purity spiral. We can simultaneously acknowledge that there's problem where we create bad software that wastefully uses resources on a user's computer while understanding that part of modern development has made computing much safer and more accessible than it used to be. Instead of looking _backward_, we can look _forward_ to a future where we can continue to be safe and accessible while not being as wasteful with a user's resources.
We need to look both backward and forward, because the past still has so many useful lessons to teach (which is a far better way to learn them than making the mistake that prompted them in the first place!). The problem is blindly repeating the past, not looking into it.
> He describes things like compile time and runtime hygienic macros like they're a new invention that hasn't existed in Lisp since before he was born.
I suspect that this is where some of the inspiration comes from, because he mentioned being a fan of Scheme at a young age. At the same time he wants strong static typing, fine grained control and power (non restrictive).
> It's a reaction to the explosion of complexity of computers and the increasing depth of the stack of software we must depend on to get anything done.
People say this with a straight face, and I don't know if it's an elaborate joke of some kind or not.
We're building a tower of babel that requires supercomputers to barely run, and somehow end up defending it.
In most English translations, the Tower of Babel was struck down by God because it represented the ability of humans to challenge the power of God. If we are indeed building a tower of Babel, that's cool; it means that "nothing that they propose to do will now be impossible for them."¹
The comment you're replying to is not defending the explosion of complexity but pointing out that we resent it precisely because we find ourselves dependent on it. The article is pointing out that we tend to take bad or counterproductive paths when we try to free ourselves from that complexity though.
> There's a sense of freedom to the model of computing where only your program is running and the entirety of the machine is your own little playground.
These days, we call that little playground a 'sandbox'. But I think OP's point is that sandboxes can be a lot more efficient than what they see w/ uxn. It's not exactly a secret that the whole FORTH model does not easily scale to larger systems: that's why other languages exist!
My name is Devine, and I'm to blame for the uxn disaster... I'm getting this link thrown at me from all sides right now, so I figured I might as well chip in.
I'm a bit more on the design-y side of things, and all these fancy words to talk about computers and retro computing are a bit beyond me, and reference an era of computing which I didn't really have a chance to experience first-hand.
But let's talk about "the uxn machine has quite the opposite effect, due to inefficient implementations and a poorly designed virtual machine, which does not lend itself to writing an efficient implementation easily."
This is meaningful to me and I'd love to have your opinion on this. Before setting on the journey to build uxn, I looked around at the options that were out there, that would solve our specific problems, and I'd like to know what you would recommend.
Obviously, I take it that the author is not advocating that we simply return to Electron, and I take it they understand that re-compiling C applications after each change is not viable with our setup, that Rust is far too slow for us to make any use of it, and that, given our use of Plan 9, C is not a good candidate for writing cross-compatible (libdraw) applications anyway.
So, what should we have done differently?
I'm genuinely looking for suggestions, even if one suggestion might not be compatible with our specific position, many folks come to us asking for existing solutions in that space, and I would like to furnish our answers with things I might have not tried.
I am not very familiar with Uxn, but the one thing from the article that struck me as an actual problem was that the memory containing the program's code can be modified at runtime. The downsides of that can heavily outweigh the benefits in many cases; it turns "write a uxn -> native code compiler" from a weekend project to a nearly impossible task, for example. This is probably relatively easy to fix, assuming uxn programs do not regularly use self-modifying code. (The article proposes an elaborate scheme involving alias analysis and bounds checks, but something like "store executable code and writable data in two separate 64KB regions" would work just as well.)
The article suggests that Uxn programs can write to arbitrary locations on the filesystem. If that is the case, it seems like it would be really easy to change that in the interpreter. Then Uxn's security model would be essentially identical to how the article describes WebAssembly: the interpreter serves as a "sandbox" that prevents programs from touching anything outside their own virtual memory. This is a good security model and it likely makes a lot of sense for Uxn.
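To make that concrete, here is a rough sketch of the kind of interpreter-side check being described, in Rust. The `sandbox_open` helper is a made-up name for illustration, not code from any real Uxn emulator.

    // Sketch of an interpreter-side filesystem sandbox: resolve the path the
    // guest asked for and refuse anything that escapes the sandbox root.
    use std::fs::File;
    use std::io;
    use std::path::Path;

    fn sandbox_open(root: &Path, requested: &str) -> io::Result<File> {
        // Canonicalize to collapse "../" tricks and symlinks, then check the prefix.
        let resolved = root.join(requested).canonicalize()?;
        let root = root.canonicalize()?;
        if !resolved.starts_with(&root) {
            return Err(io::Error::new(
                io::ErrorKind::PermissionDenied,
                "path escapes the sandbox",
            ));
        }
        File::open(resolved)
    }

    fn main() {
        let root = Path::new(".");
        // A file under the root may be opened (if it exists)...
        let _maybe_ok = sandbox_open(root, "hello.txt");
        // ...but an attempt to climb out of the root is always rejected.
        assert!(sandbox_open(root, "../../etc/passwd").is_err());
    }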
Otherwise, the article seems to be more bluster than substance. Uxn is probably not going to be the fastest way to compute the Fibonacci sequence, nor the most secure way to protect a telecommunications network from foreign cyber-attacks, but it doesn't need to be either of those things to be useful and valuable as a way to write the kind of UI-focused personal computer software you want to write.
The second (or first, maybe) most popular Uxn emulator, Uxn32, has sandboxed filesystem access, and has had it since the beginning. The author of the original article doesn't know what they're talking about.
I use Uxn's self-modification powers quite a bit; I use them mostly for routines that run thousands of times per frame, so I don't have to pull literals from the stack -- I can just write myself a literal in the future and have it available. I wonder, what about this makes a native code compiler difficult: is it because most programs protect against this sort of behavior, or that programs are stored in read-only memory?
Some of the emulators sandbox the file device; it is likely the way that all the emulators will work eventually.
> I wonder, what about this makes a native code compiler difficult: is it because most programs protect against this sort of behavior, or that programs are stored in read-only memory?
Say you're trying to "translate" a Uxn program into x86 code. The easiest way to do this is by going through each of the instructions in the program one by one, then converting it to an equivalent sequence of x86 instructions. (There is a little complexity around handling jumps/branches, but it's not too bad.)
But if the Uxn program is allowed to change its own code at runtime, that kind of conversion won't work. The Uxn program can't change the x86 code because it doesn't know what x86 code looks like--it only knows about Uxn code. There are some ways around this, but either they're really slow (eg by switching back to an interpreter when the Uxn code has been modified) or much more complex (eg a JIT compiler) or don't work all the time (due to the halting problem).
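A toy sketch of that one-pass translation, in Rust, may make the problem clearer. The two opcodes and their byte values are invented for the example (real Uxn has a much larger instruction set); the point is only that an ahead-of-time pass has nothing sensible to emit once the program can rewrite its own code.

    // Toy one-pass "translator": every guest opcode maps to a fixed native
    // sequence, until we hit an instruction that rewrites program memory.
    // Opcode values here are made up; this is not the real Uxn encoding.
    const OP_INC: u8 = 0x01;       // hypothetical: increment top of stack
    const OP_POKE_CODE: u8 = 0x02; // hypothetical: write a byte into program memory

    fn translate(program: &[u8]) -> Result<Vec<String>, String> {
        let mut native = Vec::new();
        for (pc, &op) in program.iter().enumerate() {
            match op {
                OP_INC => native.push(format!("{pc:04x}: add dword [rsp], 1")),
                // Self-modification invalidates native code we already emitted,
                // so a simple ahead-of-time pass has to bail (or fall back to an
                // interpreter, or grow into a full JIT with invalidation).
                OP_POKE_CODE => {
                    return Err(format!("self-modifying store at {pc:04x}: cannot translate ahead of time"));
                }
                _ => return Err(format!("unknown opcode {op:#04x} at {pc:04x}")),
            }
        }
        Ok(native)
    }

    fn main() {
        println!("{:?}", translate(&[OP_INC, OP_INC]));       // translates fine
        println!("{:?}", translate(&[OP_INC, OP_POKE_CODE])); // has to give up
    }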
It's ultimately a hardware-dependent answer. Self-modifying code fell out of fashion in the 1990's, once pipelined and cached code execution became the norm in desktop computing. From that point forward, correct answers to how to optimally use your hardware become complex to reason about, but generally follow the paradigms of "data-driven" coding: you're designing code that the CPU understands how to pipeline(and therefore is light on branching and indirection during inner loops), and data that the CPU can cache predictably(which leads to flat, array-like structures that a loop will traverse forward through).
Therefore what compilers will actually do is reorder program behavior towards optimal pipeline usage(where doing so doesn't break the spec). This has clear downsides for any programming style that relies on knowing instruction-level behavior. And it is so hard to keep up with the exactly optimal instructions across successive generations of CPU that in the majority of cases, humans just never get around to attempting hand optimization.
The benefit of defining a VM is that you can define whatever is optimal in terms of pragmatic convenience - if you want to write programs that have a certain approach to optimization, you can make some instructions, uses of memory or styles of coding relatively faster or slower, and this leads to an interesting play space for programmers who wish to puzzle through optimization problems. But unless it also happens to represent the actual hardware, it's not going to achieve any particular goal for real performance or energy usage - at least, not immediately. Widespread adoption motivates successively more optimal implementations. But that logic makes it hard to justify any "starting over" kind of effort, because then you end up at the conclusion that the market is actually succeeding at gaining efficiency through its fast-moving generational improvements, even if it does simultaneously result in an environment of obsolescence as the ecosystem-as-a-whole moves forward and leaves some parts behind.
An alternate path forward, one which can integrate with the market, is to define a much more narrow language for each application you have in mind, and be non-particular about the implementation, thus allowing the implementation to become as optimal as possible given a general specification. This task leads, in the large scale, towards something like the VPRI STEPS project, where layers of small languages build off of each other into a graphical desktop environment.
I think a VM for a small, but highly abstract, language like Scheme might address the objections of the author(s) of this article. You might like Chibi-Scheme: https://github.com/ashinn/chibi-scheme
Having said that, IMO, if you're having fun with uxn and its retro 8-bit aesthetic, by all means keep going with that.
I did use Chibi! I've implemented a few lisps, but I always found their bytecode implementation too convoluted and slow, so I went with a stack machine for this particular project. I might at some point, implement a proper scheme in uxntal.
It's neat, but I don't remember seeing a graphical API for it, I'll have a look :)
I'm not aware of one. I was thinking that you could roll your own, just as your Varvara computer defines a graphics device on top of uxn. You'd still gain the benefits of using an existing language.
> For a time, I thought I ought to be building software for the NES to ensure their survival over the influx of disposable modern platforms — So, I did. Sadly, most of the software that I care to write and use requires slightly more than an 8-button controller.
Seems to me you could have saved a lot of effort by changing gears slightly to target other ubiquitous 6502 machines such as the Apple II or C64.
I did a few C64 test applications, both in plain 6502 and via cc65 (which works very poorly on ARM, btw), but that didn't really work out for us. I ran into issues porting VICE to Plan 9, and I had all sorts of issues with C64 sound emulation.
Oberon is great! I remember reading the book, it gave me all sorts of ideas for Uxn. I love writing Pascal, and Modula/Oberon's drawing API is excellent. It's much too massive a system for me, and I can't even begin to imagine how I'd bring that over to the GBA/Amiga/etc., but I recommend people go through the book from time to time.
Yeah, the Oberon OS might be a bit much for your applications, eh? The chip is interesting as a target for a low-tech 32-bit platform. I've heard that the folks at Noisebridge (here in San Francisco) are playing around with making their own silicon ICs.
The "Provably Correct" book presents the work of Dr. Margaret Hamilton (she of Apollo 11, who coined the term "software engineer"). It shows a simple elegant way to make easy-to-use safe programming systems.
> re-compiling C applications after each change is not viable with our setup
What setup is that? Or, where can I read more about it? I realize this is kind of irrelevant, since the original article criticizes C and similar languages. But I'm curious.
Why can't you compile C on a Raspberry Pi 3?? That's a supercomputer compared to anything that existed throughout the 80s and 90s. Especially since your programs seem to be pretty small, with kind of vintage retro graphics, I'd imagine they compile basically instantly? I never programmed a video game, but isn't the common pattern to make a game engine plus an embedded scripting language interpreter so you don't have to recompile for all the little tweaks?
I mean, don't get me wrong, there's no reason not to do what you did, but I'm not seeing the unique challenges solved -- like, in what direction are you trying to move the needle here?
It's quite common that people think they know what working on a Pi is like, because on paper it has all the specs of a very fast machine, but actually using the thing as a daily driver is another story altogether.
So, of course you can; I've written every application that now exists on Uxn in C, prior to porting them. There was a moment when I was quite convinced that C was a good candidate for what we wanted to do.
Compiling the Orca C version (SDL, on the Pi) is about 10x slower and more battery-hungry (and also, for some reason, very upsetting to ASAN) than building the C version (libdraw, on the Pi), which is itself about 50x slower and more battery-hungry than building the Uxn version.
I can compile little C CLI tools on the Pi just fine, that's not the issue; building X11 or SDL applications with debug flags is another story. I'd much rather be assembling uxn roms in this case. I wrote a few cc65-compiled applications prior to building uxn as an alternative, but that was a non-starter on ARM.
I have tried using a Pi as my only PC; I gave up after a short while. Not to go too far on a tangent, but my takeaway is that it depends on YOUR life and work whether or not any given computer will work.
If you can limit yourself to low resolution, a small color palette, and direct use of a simple graphics API -- basically target the computing experience of when computers had 50 MHz CPUs -- then it's possible to make things blazing fast even on a Pi, using ANY language? I'm sure on a "well ackshually" level there is some difference in how the CPU executes things, but practically it's like milliseconds and milliamps, not anything perceptible.
With C I'm not sure if I know what I'm talking about here (I've compiled hello world with gcc myself; otherwise I know how to use Make even though I don't understand it, but I've also never needed to understand it, and it seems like every project uses makefiles or something like it...), but isn't compiling SDL or libdraw a one-time thing the very first time you build the program, and then each time you make a change to orca.c it compiles pretty quick (until 'make clean')? You don't include the time to re-compile the uxn emulator when you say uxn is 50x faster?
What is ASAN?
One of the reason I tend to favor Plan 9 is that it's blazing fast compared to DietPi/Raspian-like distros. It does away with most of the rounded corners, alpha, soft-looking fonts - but after a little while, I barely notice they're gone.
I haven't found that the C build systems necessarily make this faster or more pleasant, they often get in my way and make it hard to replicate my environment across devices.
It's that same idea that you just said, that "any language will do", that sent me down the path toward virtual machines: if the language doesn't matter, I might as well pick something that appeals to my aesthetics and maps well to what I'm trying to do.
If Orca was written in Lua for a framework like Love2d, for example, then I wouldn't have to recompile love2d itself, it would be more akin to how uxn works. That's usually why people use scripting languages, I think?
ASAN is an address sanitizer; if you do any C development on ARM devices, you'll get pretty familiar with its countless ways of breaking in fun ways.
> In a DevGAMM presentation, Jon Blow managed to convince himself and other game developers
Setting the content aside, when an author starts off their piece with this kind of tone I'm immediately turned off.
There's no need to be so antagonistic. Jonathan Blow "said", "claimed", "discussed", or any of the infinite other neutral and non-insulting ways to refer to another person's work.
The author has clearly never seen what real safety-critical code looks like.
When safety/robustness really is critical, there are ways to achieve it with high confidence, and it can be done in any language as long as the team doing it is qualified and applying stringent rules.
Of course this is time consuming and costly but luckily we've known how to do it for decades.
The kind of rules I am talking about are more stringent than most people who have never worked in those fields realize.
In the most extreme case I've seen (aerospace code) the rules were : no dynamic memory allocation, no loop, no compiler. Yeah, pure hand-rolled assembly. Not for speed, for safety and predictability.
For example, the specification framework for the C language by Andronick et al. [8] does not support "references to local variables, goto statements, expressions with uncontrolled side-effects, switch statements using fall-through, unions, floating point arithmetic, or calls to function pointers". If someone does not want to use a language which provides for bounds checks, due to a lack of some sort of "capability", then we cannot imagine that they will want to use any subset of C that can be formally modelled!
That said, this isn't an essay about safety; it's about the emotional appeal of a sort of false simplicity that some programmers are prone to falling for, and about pointing out the inherent inability of a couple of projects (in synecdoche for a whole shit ton of other projects) to live up to the promises of that mirage.
What I am saying is that when safety is required, there are known techniques to achieve it, but this is mostly about people and their skills, much more than tools.
I am upset reading ignorant people talking about computer security without any real knowledge about it.
This guy doesn't know what he is talking about.
The fact that you can tag some part of the code as [unsafe] does not make the rest of the program better or "safe"; this is magical thinking at best.
> The fact that you can tag some part of the code as [unsafe] does not make the rest of the program better or "safe"; this is magical thinking at best.
Good thing nobody is claiming that then. Let's see you try to actually articulate the argument being put forward instead of putting up a straw man. Here's a hint for where to start: describe the difference in safety (with respect to whether undefined behavior occurs) between a language like C++ and Rust.
Interesting. It's hard to take that claim seriously, given that Rust's entire design is based around the idea that any code you write outside of 'unsafe' blocks should be free from UB. And that the only way you can introduce UB into your code is by writing an 'unsafe' block. Contrast that with C++ where UB can be introduced anywhere in your code.
The only thing I can think of is that you're using a different definition of 'safety' than everyone else. Which is fine, you can define words however you wish. But that's why I specifically included "with respect to whether undefined behavior occurs," in my comment, to avoid that kind of definitional misunderstanding. So perhaps you just didn't read my comment carefully enough.
But yeah, if you think there is literally no difference in safety between Assembly and Java, then I'd probably call that "totally and completely nutso." It's nuts enough that I don't think any kind of asynchronous discussion could fix it. Next time, lead with that statement, so that everyone else reading along can calibrate their expectations for just how seriously they should take you. :-)
The other interesting bit here is that I was effectively asking for a steel man. But instead, you just decided to give your own answer to the question. So either you misunderstood that too, or you just literally don't have any idea what other people are actually claiming.
In my experience this statement is incorrect: “Using abstractions extensively actively encourages study into how to implement them efficiently, contrary to Blow's claim that their use will lead to few understanding their implementation.”
I would agree with Jonathan Blow here that it is very common to use abstractions without any understanding of, or even an interest in discovering, the underlying implementation of that abstraction.
For example, why else would the question, “Do you use multithreading in NodeJS?” be a common interview question?
It is commonly known that NodeJS "runs code asynchronously", but how often could an engineer accurately explain how this is done?
While some may see the benefit of understanding the system they work in completely, I believe the majority is content to live with assumptions.
My personal opinion of this is that you can have high and low level code written in the same language at the expense of requiring more skill from the programmer.
You can write a pure-functional RPN calculator in a handful of lines of D, you can also write D's entire implementation in D including the "low-level" aspects.
I also think the distinction between low and high level code in languages that are not built around some theoretically abstract model of execution (e.g. the lambda calculus) to be rather pointless and mostly useful only for making poor arguments.
I'm also sceptical of simplicity as an unqualified idea. I have read "simple" code with no abstractions that is extremely hard to read, and I have read "simple" code that can be considered as such because the abstractions are actually abstractions: "Abstraction is the removal of irrelevant detail".
I think when they say "system" they mean the "computing stack" from the gate level up. You can't have a different kind of simple on top of silicon that expects another type of simple; it's too incompatible with the throughput increase that pipelining and the specific caching implementation give. Though I don't think Jonathan Blow is necessarily ignoring the underlying silicon's expectations, judging from the livestream or two I've seen.
There are best practices and optimized algorithms. Due to ignorance or preference these are sometimes ignored. This results in a lot of confusion, bloat, and failed attempts.
We all want less sloppy code and less sloppy abstractions, but it's hard to do in the real world with tens of millions of developers placed under different constraints.
"I was not given time to do this correctly, I'll just use a library that adds 200mb RES but basically works for now."
The Internet Architecture Board, W3 working groups, OWASP, Open telemetry, and hundreds of other groups are working hard to standardize things so we don't have to repeat the same mistakes in a problem area. Heck even community sites like leetcode help raise awareness about sub-optimal solutions to problem spaces.
The article is all over the place; I don't really know what it's trying to say. It starts by criticizing Jon Blow's talk about the "collapse of civilization from nobody being able to do low-level programming anymore" -- a tirade against his tirade about abstractions -- but then talks about uxn as a prime (flawed) example of this "effort to remove abstractions". I mean, uxn is clearly just a wrong abstraction for the hardware we have right now; it's actually the opposite of what most low-level programmers would do. So the uxn example is actually supporting what Jon was saying in the talk? It's just not a good example.
Also, the example about dynamic dispatch isn't really as persuasive as the author might think. Even if everyone benefits from these abstractions, what's the point when that abstraction is fundamentally slow on the hardware in the first place, no matter how much optimization you do? The Apple engineers have done everything they can to optimize objc_msgSend() down to the assembly level, but you're still discouraged from using Obj-C virtual method calls in a tight loop because of performance problems. And we know, in both principle and practice, that languages which rely heavily on dynamic polymorphism (like Smalltalk) tend to perform much worse than languages like C, (non-OOP) C++, or Rust, which usually don't.

In these languages, when performance matters, devs often use the good ol' switch statement (with enums / tagged unions / ADTs) to specify different kinds of behavior, since switches are easier for the compiler to inline and the hardware runs them faster than virtual calls. (Or, to go even further, you can put different types of objects in separate contiguous arrays if you are frequently iterating over and querying those objects.) The problem for most programmers, I think, is that they don't know they can make these design choices in the first place, since they've learned "use virtual polymorphism to model objects" in OOP classes as a dogma they must always adhere to, whereas a switch statement could have been better in most cases, both in terms of performance and code readability/maintainability. Virtual calls may be a good abstraction in some cases, but in most cases there are multiple competing abstractions that are more performant (and, arguably, actually simpler).
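For readers who haven't seen the two designs side by side, here is a small Rust sketch of the contrast: a closed set of variants dispatched with `match` versus open-ended dynamic dispatch through trait objects. The shapes and numbers are, of course, invented.

    // Closed-world dispatch (enum + match) vs open-world dynamic dispatch
    // (trait objects). The match arms are easy for the compiler to inline;
    // the trait-object version pays an indirect call through a vtable.
    trait Area {
        fn area(&self) -> f64;
    }

    struct Circle { r: f64 }
    struct Square { s: f64 }

    impl Area for Circle { fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r } }
    impl Area for Square { fn area(&self) -> f64 { self.s * self.s } }

    // Tagged-union version: the full set of variants is known up front.
    enum Shape {
        Circle { r: f64 },
        Square { s: f64 },
    }

    fn total_area_enum(shapes: &[Shape]) -> f64 {
        shapes.iter().map(|sh| match sh {
            Shape::Circle { r } => std::f64::consts::PI * r * r,
            Shape::Square { s } => s * s,
        }).sum()
    }

    fn total_area_dyn(shapes: &[Box<dyn Area>]) -> f64 {
        shapes.iter().map(|sh| sh.area()).sum()
    }

    fn main() {
        let flat = vec![Shape::Circle { r: 1.0 }, Shape::Square { s: 2.0 }];
        let boxed: Vec<Box<dyn Area>> = vec![Box::new(Circle { r: 1.0 }), Box::new(Square { s: 2.0 })];
        println!("{} {}", total_area_enum(&flat), total_area_dyn(&boxed));
    }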
The point Jon is trying to make (although maybe not clearly enough in the talk) is that we simply need better abstractions for the hardware that we have. C/C++ doesn't really cut it for him, so he's creating his own abstractions from scratch by writing a new language. He has often said that he dislikes "big ideas programming", which believes that if every programmer buys into one core "idea" of programming then everything will magically get better. He instead opts for a more pragmatic approach to writing software: writing for the hardware and the design constraints we have right now. Maybe he seems a bit grumpy from the perspective of people outside of OS/compiler/game development (he also airs some personal developer grievances in the talk), but I think his sentiment makes sense in the big picture: we have continuously churned out heaps of abstractions that have drifted too far from the actual workings of the hardware, to the point that desktop software has generally become too slow for the features it provides to users (looking at you, Electron...).
The "analysis" of uxn is unproductive to the point of being anti-productive.
In particular:
> The most significant performance issue is that all uxn implementations we have found use naïve interpretation. A typical estimation is that an interpreter is between one and two magnitudes slower than a good compiler. We instead propose the use of dynamic translation to generate native code, which should eliminate the overhead associated with interpreting instructions.
Okay, go for it. Literally nothing is stopping you from implementing an optimizing compiler for uxn bytecode.
Meanwhile, zero mention in this "analysis" of program size, or memory consumption, or the fact that uxn implementations exist even for microcontrollers. Interpreters are slow, but they're also straightforward to implement and they can be pretty dang compact, and so can the interpreted bytecode be much smaller than its translated-to-native-machine-code equivalent.
(Interpreters also don't need to be all that slow; I guess this guy's never heard of Forth? Or SQLite?)
> Writing a decent big-integer implementation ourselves requires some design which we would rather not perform, if possible. Instead, we can use a library for simulated 32-bit arithmetic. […] The resulting program takes 1.95 seconds to run in the "official" uxn implementation, or 390 times slower than native code! This amount of overhead greatly restricts what sort of computations can be done at an acceptable speed on modern computers; and using older computers would be out of the question.
Well yeah, no shit, Sherlock. Doing 32-bit arithmetic on an 8-bit machine is gonna be slow. Go try that on some Z80 or 6502 or whatever and get back to us.
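For a feel of why, here is a rough Rust sketch of what a single 32-bit add becomes when the machine only has byte-wide arithmetic: four limb additions with carry propagation, before you even count the interpreter's own dispatch overhead.

    // A 32-bit add done with 8-bit limbs (little-endian), the way a narrow
    // machine has to: one add-with-carry per byte.
    fn add_u32_via_u8(a: [u8; 4], b: [u8; 4]) -> [u8; 4] {
        let mut out = [0u8; 4];
        let mut carry = 0u16;
        for i in 0..4 {
            let sum = a[i] as u16 + b[i] as u16 + carry;
            out[i] = (sum & 0xff) as u8;
            carry = sum >> 8;
        }
        out
    }

    fn main() {
        let a = 0x01FF_FFFFu32.to_le_bytes();
        let b = 0x0000_0001u32.to_le_bytes();
        let sum = u32::from_le_bytes(add_u32_via_u8(a, b));
        assert_eq!(sum, 0x0200_0000);
        println!("{sum:#010x}");
    }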
And no, using older computers ain't "out of the question". There are literally uxn ports to DOS, to the Raspberry Pi Pico, to the goddamn Gameboy Advance, to all sorts of tiny constrained systems. The author would know this if one had done even the slightest bit of investigation before deciding to shit all over uxn and the folks making it.
> The uxn system could also be a sandbox, and prevent damage outside of its memory, but a filesystem device is specified for the platform, and is often implemented, and so a uxn program can destroy information outside its virtual machine. Any isolation would have to be performed outside of the virtual machine; thus running a random uxn program is as insecure as running a random native executable.
Absolutely nothing in the uxn spec suggests that the "filesystem" in question should be the host's filesystem; it could very well be some loopback device or a sandbox or what have you. If security/isolation is desired, then that's pretty trivial to implement in a way that a tiny bytecode stack machine would have a very hard time escaping. Either the author is incapable of actually reading the documentation of the thing being analyzed, or one is being blatantly intellectually dishonest.
----
Like, I don't know what the author's beef is - maybe Rek and Devine ran over the author's dog with their sailboat, or maybe the Bytecode Interpreter Mafia smashed the author's kneecaps with hammers - but the article reads more like bone-pickery than some objective analysis.
Uxn32, a full Uxn implementation for Win32, already implements this kind of filesystem sandbox, and has for months. I don't think the author did any research, and instead focused on research-y aesthetics.
> Interpreters also don't need to be all that slow; I guess this guy's never heard of Forth?
Well, for one thing, once you get to subroutine-threaded code (which is often the most efficient way to implement it on modern architectures), it's arguable whether it still counts as "interpreted".
But even then, FORTH is still several times slower than equivalent native code. Which is better than most naive bytecode interpreter, but it's also why industrial FORTH compilers do native code inlining and other optimizations to reach the desired degree of performance. At which point, what's the fundamental difference with C?
They do mention disregarding uxn's default choice to optimize for size, changing it instead to optimize for speed (-Os vs -O2), to give a more level comparison (for speed specifically). But it helps show that the comparison is a bit stretched.
> we'd even say that the greatest programmers are so because of how they produce redundancy.
Perhaps the greatest of all programmers produce redundancy while depending on very little of it in their own code. For example, Richard Hipp created SQLite, the ubiquitous and amazingly high-quality embedded database, in C. Thinking about that makes me feel like I ought to be using C in my own, much more modest attempt at a reusable infrastructure project [1]. Cue the cliches about younger generations (like mine) being soft compared to our predecessors.
A thorough test suite or formal verification increases the quality of the resulting code by removing bugs, while DRY violations and redundant checks often reduce it by adding overhead (which can often but not always be safely removed, and the skill of programming lies in knowing how to write minimal software and remove redundant checks in hot paths and areas without constantly changing contracts, and instead proving statically those paths are unreachable).
It might be important to consider that making a high performance, reliable database is something that is far more dependent on the author than the language they choose to use, and that it may not be a good choice to cargo cult their language choices in your own project.
My great-grandfather died of Polio in his twenties. Look at me, a thirty-something, with no Polio, not even any Covid symptoms! Curse my soft-handed generation!!
The prime example is that in the 90's, the Java camp thought they were going to make Windows irrelevant (reduce it to a pile of poorly debugged device drivers, or something like that).
But now Java is just another Windows/Unix process.
Same with all these "alternative computing stacks" -- in the end they will almost certainly be just Unix processes.
The only situation I can think of where they wouldn't be is a revolution in hardware, like the IBM PC producing MS-DOS, etc. And maybe not even then, because we already had the mobile revolution starting ~15 years ago, and both platforms are based on Unix (iOS/Android).
----
I do think people should carefully consider whether they want to create an "inner platform" or make the platforms we ACTUALLY USE better.
Evolving working systems is harder than creating clean slates. So clean slates are fun for that reason: you get to play God without much consequence.
Sometimes clean slates introduce new ideas, and that's the best case. But a common pattern is that they are better along certain dimensions (the "pet peeves" of their creator -- small size, etc.), but they are WORSE than their predecessors along all the other dimensions!
So that is the logic of Oil being compatible and painstakingly cleaning up bash -- to not be worse than the state of the art along any dimension! I've experimented with writing a clean-slate shell in the past, but there are a surprising number of tradeoffs, and a surprising number of things that Unix shell got right.
So I would like to see more systems take a DUAL view -- compatible, but also envisioning the ideal end state, rather than just piling on hacks (Linux cgroups and Docker being a prime example of this).
You will learn the wisdom of the system that way, and the work will be more impactful. For example, it forced me to put into words what Unix (and the web) got right: https://www.oilshell.org/blog/2022/03/backlog-arch.html
The basic premise of this article seems to be "writing good programs without heavy runtime checks is physically impossible".
I'd like to ask the author: how does he think his computer works? The hardware is rather complex and it has to work near perfectly all the time. That should be impossible according to his premise, but it's clearly happening.
Hardware goes through extensive formal verification as well as testing (incidentally, it tends to be heavily instrumented, cf 'runtime checks', while it is being tested), and has its design frozen months before it goes into mass production. If you developed software the same way, you might see similar results. Most people do not develop software this way.
Yeah, formal verification of software is sort of a niche topic; formal verification of hardware is one of the primary tools. Shit, even something like Built-In Self Test, which is common in hardware, I've seen only rarely in software, and never comprehensive. (I'm also not sure comprehensive BIST is the right way to think about reliability for software; it's probably only very specific things that actually want that.)
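Purely to illustrate what a software analogue of BIST might look like (hypothetical code, not a claim about how any real project does it): the component ships with a routine that exercises its own contract before anything is allowed to depend on it, the way hardware runs a self-test at power-on.

    #include <stdbool.h>
    #include <string.h>

    /* The component: a toy ring buffer (8 slots, one kept spare, so capacity 7). */
    typedef struct { unsigned char data[8]; unsigned head, tail; } ring;

    static void ring_init(ring *r) { memset(r, 0, sizeof *r); }

    static bool ring_put(ring *r, unsigned char b)
    {
        unsigned next = (r->head + 1) % sizeof r->data;
        if (next == r->tail)
            return false;                         /* full */
        r->data[r->head] = b;
        r->head = next;
        return true;
    }

    static bool ring_get(ring *r, unsigned char *b)
    {
        if (r->tail == r->head)
            return false;                         /* empty */
        *b = r->data[r->tail];
        r->tail = (r->tail + 1) % sizeof r->data;
        return true;
    }

    /* "Built-in self test": run the component against its own spec at startup. */
    bool ring_selftest(void)
    {
        ring r;
        unsigned char b;

        ring_init(&r);
        for (unsigned char i = 0; i < 7; i++)     /* fill to capacity */
            if (!ring_put(&r, i))
                return false;
        if (ring_put(&r, 99))                     /* must refuse when full */
            return false;
        for (unsigned char i = 0; i < 7; i++)     /* must read back in order */
            if (!ring_get(&r, &b) || b != i)
                return false;
        return !ring_get(&r, &b);                 /* must report empty again */
    }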
Hardware is also much simpler than software, conceptually at least.
You might think of a modern CPU as a black box that goes out and finds (the lack of) dependencies in the instruction stream to exploit, but all of this has to be condensed into a logical circuit that is bounded in memory (registers), can be pipelined, and can be verified.
And even then, with hardware you are typically verifying things like "the pipeline never locks up entirely" or "the cache never returns memory from the wrong physical address" -- basic things like that -- whereas these same kinds of invariants in software are rarely profitable to try to verify.
And despite all that effort, hardware is far from free of bugs! Lots of broken features ship and have to be disabled, or the software has to do horrible workarounds. These fixes are hidden inside CPU microcode, OS kernels, GPU drivers.
Unless you build the program around the static analysis, you're going to have a bad time.
David Malcolm has done a really good job with the analyser, but it can't catch everything, because the C type system makes it possible to write code whose behavior can't be guaranteed either by construction or across opaque boundaries.
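A rough illustration of where that boundary sits, for anyone who hasn't tried it (hypothetical code; something like "gcc -fanalyzer -c example.c" will exercise it): flows the analyser can see end to end it handles well, but once ownership disappears behind a void* and an external API, the types give it nothing to hold on to.

    #include <stdlib.h>

    /* A flow the analyser sees whole: it will typically report the
       double-free below (-Wanalyzer-double-free). */
    void visible_to_the_analyser(void)
    {
        int *p = malloc(sizeof *p);
        if (!p)
            return;
        free(p);
        free(p);                /* flagged: p already freed on this path */
    }

    /* Ownership laundered through an opaque, external registry (a made-up
       API for illustration): does it free the pointer, keep it, or both?
       Nothing in the types pins that down. */
    void register_handle(void *opaque);

    void opaque_to_the_analyser(void)
    {
        int *p = malloc(sizeof *p);
        if (!p)
            return;
        register_handle(p);     /* leak or not? depends on code outside this unit */
    }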
Jon Blow live-codes compilers and 3D rendering engines and shit from scratch on YouTube or whatever. Starting your essay by dissing him is not a great intro.
I’m saying that John Blow works on fairly difficult software projects and shows the process, mistakes and all. That takes a lot of both confidence and humility. I tend to make more mistakes the first 2 or 3 times I pair-program with someone; I can only imagine doing it with 10k people watching my every mistake.
Add to this that John Blow is an exemplar of the entrepreneurial spirit that this site holds as basically its foundational value.
The guy worked in the rat race for a while and saved a modest amount to self-fund building Braid. Braid smashed every record, both commercially and critically, for what one guy in a garage could do in games.
He took all that money, hired a few people carefully, and built The Witness, not on fucking Unity or something, but from the shaders up so that it would have a unique look and not be a clone of something else. The Witness was also a huge commercial and critical success.
His most recent project is, uh, ambitious. I don’t know if it’s going to prove feasible. But I’m sure as hell rooting for success rather than failure.
Now, mostly this is directed at the author of the post, but you’ve kind of signed up for a little of this: what the fuck is your CV?
I agree that he has an impressive resume, but I believe the parent's point is still valid: live-coding compiler work doesn't make someone magically faultless. Everyone has their blind spots.
If Jonathan Blow truly does believe that there are "perfect" programmers who never, ever need runtimes that do bounds checking (and other useful things)[0], then that is a huge blind spot. Maybe these unicorns exist, but they are just that: incredibly, vanishingly rare. And Blow certainly isn't one of those people; I've played both Braid and The Witness, and I've seen them both crash (in game code, not in a linked platform library). They're amazing, beautiful games, but that doesn't make their author above criticism.
> ... but you’ve kind of signed up for a little of this: what the fuck is your CV?
Valid criticism/skepticism need not turn into a dick-measuring contest. Trying to invalidate someone's opinion by asking for their credentials is logically fallacious. Please don't do this sort of thing here.
Also not sure where all the anger is coming from. Why do you feel such a personal stake in Jonathan Blow's reputation as some sort of infallible coding god?
[0] Not saying that I know for a fact that he does believe this; I'm just going by OP's article, which could easily be wrong or at least exaggerated.
Thank you for the rundown on Jonathan Blow. I've worked in game development for over a decade and am familiar with his games and accomplishments, but maybe it'll be useful to someone else. If you want to direct a question to the author of the post you might want to reach out to them directly as I'm not their proxy.
Yeah I want to emphasize that my reply is kind of to all the peanut gallery stuff on this thread, not trying to single you out.
If you work in games then you know that John Blow is, by many measures, the most demonstrably successful game developer without a big studio behind him; others might not.
Talking about this stuff on the Internet is a sloppy, haphazard business: deep insight rarely fits in a tweet. I don’t mind that the blog author is not only saying ridiculous things but naming-and-shaming earnest, serious pros into the bargain.
I mind that so many people on this site, which I do care about, are lining up behind that bullshit.
>many measures the most demonstrably successful game developer without a big studio
Ex-gamer here (player only, no idea about the game-making industry); never heard of him or his games. Is he really the most successful? From the outside I would have guessed the most successful indie game dev, based on how much they're talked about, would be something like: Dwarf Fortress, Minecraft, Flappy Bird, Wordle...
He's really only a recognizable figure in the puzzle game space. Both of his games are highly esteemed, with The Witness in particular praised as "the dark souls of puzzle games." All this said, puzzle games are harder to design than to develop and IMHO his past success only qualifies him as a great designer rather than the general computing guru he proclaims himself to be.
I'm not even sure what "live-coding compilers" would mean. What's "live" about it?
I've watched people live coding audio software. That's a performance, like jamming or rap battling where you make music spontaneously for an audience. It's a distinct skill from being able to polish stuff in a studio, just like playing a guitar or singing live is a distinct skill, it's even distinct from working with a loop sampler (like Marc Rebillet) although it's often related.
But for a compiler, what's "live"? Somebody writes some code and you... tokenize it in real time, transform it into some intermediate representation, optimise that and then spit out machine code? No? Then it's not "live coding", you're just talking about how he got paid to stream on Twitch or whatever. Loads of people do that. Ketroc streamed his last minute strategies for the recent SC2 AI tournament, he's not even a "professional" programmer, half his audience haven't seen Java before.
"Live coding" just means that you are writing code in front of an audience. All that means in this case is that while he was working on his compiler, he was recording and streaming what he was doing.
I think that's kind of the point. You wouldn't want to drive your car across a bridge that was live-designed on YouTube. You want a bridge that was designed in a boring way with lots of redundant safety systems.