I agree that we need more operating systems, but...
> These machines used specialized hardware and microcode to optimize for the lisp environments (Because of microcode you could run UNIX and the Lisp OS at the same time).
This is a dead end in the history of CPU design.
Processors are all vaguely similar these days. Your CPU is built around ALUs, which typically take one or two inputs and produce one output. Around that, you build some logic for shuttling those inputs and outputs to and from registers, or in some cases, to and from memory.
The ALUs at the core have gotten more and more sophisticated over the years, but the wiring around them has stayed fairly modest compared to the excesses of the old designs. I'd say the lesson here is simple: rather than adding complicated operations to the hardware to make your high-level language faster, do the complicated stuff in software... which gives you a lot more flexibility, and the combined hardware-software stack ends up being faster and cheaper anyway.
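To make the "do it in software" point concrete, here's a minimal sketch of the kind of inline fixnum tag check that modern Lisp (and JavaScript) runtimes emit instead of relying on tagged hardware. The tag layout, the names, and the stubbed slow path are all invented for illustration, and overflow handling is omitted.

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical tagged-word scheme (layout and names invented for this
   sketch): low bit 0 means fixnum stored as value << 1, low bit 1 means
   heap pointer. SBCL, V8 and friends use variations on the same idea. */
typedef intptr_t lispobj;

#define IS_FIXNUM(x)    (((x) & 1) == 0)
#define MAKE_FIXNUM(n)  ((lispobj)(n) << 1)
#define FIXNUM_VALUE(x) ((x) >> 1)

/* Slow-path stub: a real runtime would handle bignums, floats, etc. */
static lispobj slow_generic_add(lispobj a, lispobj b)
{
    (void)a; (void)b;
    assert(!"non-fixnum addition not implemented in this sketch");
    return 0;
}

/* The type dispatch a Lisp machine did in microcode, done in software:
   a couple of ordinary instructions in front of a plain ALU add
   (overflow checking omitted to keep the sketch short). */
static lispobj lisp_add(lispobj a, lispobj b)
{
    if (IS_FIXNUM(a) && IS_FIXNUM(b))
        return a + b;   /* both tags are zero, so the sum is already tagged */
    return slow_generic_add(a, b);
}

int main(void)
{
    lispobj x = MAKE_FIXNUM(20), y = MAKE_FIXNUM(22);
    printf("%ld\n", (long)FIXNUM_VALUE(lisp_add(x, y)));   /* 42 */
    return 0;
}
```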
> With lisp machines, we can cut out the complicated multi-language, multi library mess from the stack, eliminate memory leaks and questions of type safety, binary exploits, and millions of lines of sheer complexity that clog up modern computers.
I can understand where this notion is coming from... but practically speaking, switching to Lisp doesn't eliminate memory leaks or questions of type safety or binary exploits. Even with an idealized version of Lisp, I don't think these problems could possibly go away. Neither garbage collection nor systems like Rust really "solve" memory leaks; they just provide strategies that make leaks less common. Anything that stays reachable but is never used again is still a leak, and no collector can know you're done with it.
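A minimal sketch of what such a leak looks like under GC, assuming the Boehm-Demers-Weiser collector (libgc) for the allocation; the cache and its size are made up for illustration:

```c
#include <gc.h>      /* Boehm-Demers-Weiser collector, assumed installed */
#include <stdio.h>

/* A "leak" under garbage collection: everything allocated here stays
   reachable through this cache, so the collector is never allowed to
   free it, even though the program will never look at it again. */
#define CACHE_SLOTS (1 << 14)
static void *cache[CACHE_SLOTS];

int main(void)
{
    GC_INIT();
    for (int i = 0; i < CACHE_SLOTS; i++)
        cache[i] = GC_MALLOC(4096);   /* reclaimable only if unreachable */

    printf("GC heap size: %lu bytes\n", (unsigned long)GC_get_heap_size());
    return 0;   /* it's process exit, not the GC, that finally frees this */
}
```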
The same thing applies to type safety. You'd have to define "type safety" very narrowly to say that Lisp solves all type safety problems. Again, I can understand where the author is coming from--it's kind of an intuitive notion of type safety, that you don't end up operating on an incorrectly typed pointer, or something like that. But the notion of type safety is much broader than that these days.
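For concreteness, that narrow failure mode looks like this in C (a contrived example; the aliasing cast is undefined behavior, which is rather the point). A dynamically checked Lisp signals a run-time error here, and a statically typed language rejects it at compile time, but neither catches wrong units, violated invariants, or the other things people now file under type safety.

```c
#include <stdio.h>

int main(void)
{
    double d = 3.14;

    /* The "incorrectly typed pointer": reinterpreting a double's bits as a
       long. C compiles this without complaint (strictly speaking it's
       undefined behavior), and you get the raw bit pattern, not 3. */
    long *p = (long *)&d;
    printf("%ld\n", *p);
    return 0;
}
```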
And the C/Unix strategy is actually pretty good, too, when it works--contain the leaks within a process, then terminate the process and let the kernel reclaim its whole address space at once.
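A minimal sketch of that containment pattern (the worker and its deliberate leak are made up for illustration):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

/* Leaks on purpose, then exits; the kernel reclaims the child's entire
   address space regardless of what malloc's bookkeeping says. */
static void leaky_worker(void)
{
    for (int i = 0; i < 1000; i++) {
        char *buf = malloc(4096);   /* never freed */
        if (buf)
            memset(buf, 0, 4096);
    }
    _exit(0);
}

int main(void)
{
    pid_t pid = fork();
    if (pid == 0)
        leaky_worker();             /* child: leaks, then dies */
    if (pid > 0)
        waitpid(pid, NULL, 0);      /* parent: untouched by the child's leaks */
    puts("worker done; its memory is back with the OS");
    return 0;
}
```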