The School of Wirth (hansotten.com)
113 points by gjvc on March 14, 2021 | hide | past | favorite | 64 comments



One principle that Niklaus Wirth has always espoused and with which I agree: you cannot reduce the complexity of a programming language by expanding the size of the language with greater features. This sounds like a contradiction, but as Wirth says in the preface to his Oberon programming tutorial:

"The language Oberon emerged from the urge to reduce the complexity of programming languages, of Modula in particular. This effort resulted in a remarkably concise language. The extent of Oberon, the number of its features and constructs, is smaller even than that of Pascal. Yet it is considerably more powerful."

FreePascal, for example, has grown considerably over time. The same can be said of many popular languages today: PHP, Ruby, Python, C#, JavaScript, etc. Some languages start big from the beginning (e.g. Ada).

With large languages there is also the feeling you've only learned a subset of the language, blissfully unaware of many other features the language offers. At least with smaller languages you have a greater chance to master the language whole.

Of course, it doesn't automatically follow that a smaller language will be simpler or easier to grasp. (There is plenty of argument surrounding Go's supposed simplicity.)


At some point the language becomes so small that the true featureset is hidden in libraries or extensions.

Prime example: Lisp. The language has the smallest possible syntax, but its semantics are mostly in special forms. If you don't know the exact meaning of all special forms, you don't know the language.


Right. To claim that Lisp is more flexible than any other programming language is like claiming that you are much more flexible at the atomic level than at the molecular level. Flexibility has its price. It's always a trade-off.


Ada was a real bear of a language. When it was introduced as the language of instruction in some class at UIC back in the 80s, students had to work on teams as a single compilation was enough to consume the weekly CPU time allocation for a student account on the VM/CMS mainframe. They had to set up file sharing between accounts and move to a new account with each new day's login. I think there were a few cases where students ran out of CPU time even with pooled accounts before they could complete an assignment.


One of the nice things about Oberon-07 is that the compiler I’ve played with could compile itself and the standard library all in a tiny fraction of a second.


When I benchmarked oberonc [0], an Oberon-07 self-hosting compiler for the JVM, it took about 100 ms with a hot VM on an old Intel i5 @ 2.80GHz. That compiler follows the same one-pass compilation approach.

[0] https://github.com/lboasso/oberonc


I took a look at the list and found this: https://github.com/prospero78/Oberon07ru

Looks interesting. I wonder how hard or easy it would be to port this to Linux/Mac, for example.


I think Krotov’s is the one that I have tried. The package includes (or used to include) a simple IDE (a code editor) also written in Oberon, which provides another good example.

http://exaprog.com/

https://github.com/AntKrotov/oberon-07-compiler

Linux seems to be supported.


If you are looking for an Oberon compiler that is easy to port, you could consider OBNC.

http://miasap.se/obnc/

The compiler is written in C and should compile on many POSIX compatible systems. Works like a charm on the Pinebook Pro for example.


V compiles itself in 150 ms on my machine.

0.6 seconds on a free tier AWS instance:

https://fast.vlang.io


^^ We could insert that message into a dictionary as the definition of "shameless plug".


Note that before Oberon-07 happened, after Oberon came Oberon-2, Component Pascal, Active Oberon and Zonnon.

Although Wirth seems to only have been involved with Oberon-2 and Component Pascal.


I'm currently reading Hardcore Software by Steven Sinofsky as he writes it; it's very interesting.

https://hardcoresoftware.learningbyshipping.com/

One of the things he talks about is the approach the Microsoft Tools team took to implementing MFC in C++. He recounts a lesson he learned from Martin Carroll, one of the early C++ gurus: just because a feature exists, that confers no obligation to use it. “You’re writing code for your product, not a compiler test suite.” He took that and turned it into the MFC team's approach of using C++ as a better C, not OOP for the sake of OOP.

I remember those days well, even then I was mostly working on Unix systems with a bit of Windows here and there, but I was well aware of what MS was doing at the time. He gives a really engaging account of those times and it's interesting how many of the lessons he learned and talks about are still relevant today.


But the principle also applies the other way around: if the programming language is too "simple", complexity still arises, because you have to explicitly write into the code many things that a programming language could simplify. This makes the code larger than necessary and harder to understand. So it needs the right balance. From my point of view, Oberon-07 is already too simple. Personally, I find Oberon-2 the more practical programming language (e.g. it has dynamic arrays). I also find the requirement to capitalize keywords very impractical.


> you cannot reduce the complexity of a programming language by expanding the size of the language with greater features. This sounds like a contradiction

(Maybe I'm missing something, but) I'm not sure why this should sound at all like a contradiction – how could increasing a language's size/features reduce its complexity?! Maybe you meant "the complexity of programs in a language"?


It's often developers who are pushing for new features in a programming language. They think the addition of a feature will make it easier (or more convenient) to code the problem they face. Possibly reduce the lines of code.

That is the contradiction: does the addition of features make the language more complex? Or do the features help simplify the code? Different answers for different languages?

It will be interesting to see the trajectory of new languages like Nim, Julia and Rust. These are medium-sized languages and Rust in particular is already considered complex by some programmers. Only time will show how much they grow in language features and complexity.


Thanks. It still feels to me like you're talking about the complexity of programs in the language, not the complexity of the language.


If the language makes writing the language (compiler, libraries, tooling, or whatever) easier, at the expense of making it harder to write programs in the language, then you're going to have a nice, easy-to-implement language that nobody will actually use to write programs. And what's the point of that?


Herb Sutter, a C++ expert, said that it is possible (though it's not the general case) to simplify C++ by adding features. IIRC, one such feature is the "spaceship" operator <=>, which simplifies writing comparisons.

You do need to think carefully about how to add features, though.


Thanks. This does just seem to be a different way of talking about things! It seemed strange at first. What you talk about also seems to make the language more complex, and programs simpler. Seems like you and open-source-ux don't see it/talk about it that way though.

By "the language", I mean every part of it (I thought that's what everyone means!), and you two seem to mean only the parts of a language used in the programs you write.


The Pascal compiler on UNIX (v7, 4.1BSD) was idiosyncratic. Probably no more so than any other implementation, but enough to make me (a graduate of York, a Pascal-teaching university) put off by micro-differences from what I had learned.

The uplift to C was simpler in some ways. Why use a confusing port which winds up having to call C system libraries, when you can code directly in the language the system libraries are coded in?

For Fortran, oddly, this wasn't such a big deal. Maybe the bulk of necessary code in Fortran meant the barrier to entry was lower.


A must-read paper by Wirth: "A Plea for Lean Software". The more the industry "advances", the more relevant it becomes.



If you're interested in more Wirth I recommend his book, Compiler Construction. It's a great small piece on building a language from scratch.


When I read this book I had a hard time compiling and running the included source code. If you want to read the book and try out the example compiler (oberon0), you can follow the instructions here: https://github.com/lboasso/oberon0. You just need a JVM >= 8 and the oberonc compiler: https://github.com/lboasso/oberonc


Second this. Much easier to follow than the dragon book. And you’ll end up with a better implementation if you follow Wirth vs the YACC dudes.


Primarily because it omits many essential things and focuses on only one (now outdated) architecture.


There is a lot of Wirth influence in Go, like “goroutines” and Oberon-style type embedding. The Google Go team, including Rob Pike, are fans of Wirth, and Robert Griesemer actually comes from ETH.


> a lot of Wirth influence in Go, like “goroutines”

Go inherited a lot from Oberon, but definitely not goroutines; rather e.g. the separate declarations of methods from the type declaration and the type binding syntax (which was actually an idea by H. Mössenböck, implemented in Oberon-2 in 1991, see e.g. ftp://ftp.inf.ethz.ch/pub/publications/tech-reports/1xx/160.pdf).
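For readers unfamiliar with the parallel, here is a minimal Go sketch of both features the comment above describes: methods declared separately from the type (Oberon-2's type-bound procedures became Go's receiver syntax), and type embedding (Oberon's type extension). The type names are illustrative, not from any of the cited sources.

```go
package main

import "fmt"

// The method is declared separately from the type, with an
// explicit receiver — Go's analogue of Oberon-2's type-bound
// procedures.
type Animal struct{ Name string }

func (a Animal) Describe() string { return "animal " + a.Name }

// Dog embeds Animal (cf. Oberon's type extension): it gains
// Animal's fields and methods without a class hierarchy.
type Dog struct {
	Animal
	Breed string
}

func main() {
	d := Dog{Animal{"Rex"}, "collie"}
	fmt.Println(d.Describe()) // promoted from the embedded Animal
}
```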


Well, between Oberon and Go, there is Active Oberon, which has co-routines.

Although goroutines are already in Limbo.


The author of Active Oberon is Patrik Reali in the group of J. Gutknecht. Gutknecht (not Wirth) started to extend the Oberon language with direct support for concurrency and active objects (see e.g. https://link.springer.com/chapter/10.1007/3-540-62599-2_41). There were coroutine libraries for Oberon, but it was not part of Wirth's Oberon language. I don't think the concurrency support of Active Oberon has much in common with goroutines (but a lot in common e.g. with Ada protected objects). Modula-2 inherited its coroutine support from earlier languages (Mesa, Simula).


Apparently channels (incl. "mk" instead of "make"), the communication operator "<-", the "select" statement and "goroutines" (using "begin" instead of "go") were already present in http://swtch.com/~rsc/thread/newsqueak.pdf in 1994, even before Limbo.
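The lineage is easy to see side by side. A minimal Go sketch of the constructs named above, with the Newsqueak spellings noted in comments (the helper `collect` is mine, for illustration):

```go
package main

import "fmt"

// collect receives n messages from whichever of the two
// channels is ready first — Go's "select", present in
// Newsqueak under the same name.
func collect(a, b chan string, n int) []string {
	var got []string
	for i := 0; i < n; i++ {
		select {
		case m := <-a: // "<-" is Newsqueak's communication operator too
			got = append(got, m)
		case m := <-b:
			got = append(got, m)
		}
	}
	return got
}

func main() {
	a := make(chan string) // Newsqueak spelled make as "mk"
	b := make(chan string)
	go func() { a <- "from a" }() // Newsqueak spelled go as "begin"
	go func() { b <- "from b" }()
	fmt.Println(collect(a, b, 2))
}
```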


If Wirth had been a US resident, working at UC or MIT, Delphi would have taken over the world instead of C and its descendants.


I think Delphi got caught out by the coupling of the IDE and the need to support that maintenance - GNU C++ was a shot across the bows, but when Java came that business model sank.

Unfortunately. Pascal and its descendants have a lot going for them.


The pricing model was weird as well.

For commercial development it was either cheap or hugely expensive; there wasn't much in the middle. That was the norm when it was created, but they failed to react to the industry's move to a different pricing model during its heyday.


And it is not open source.

Nowadays most people refuse to pay for languages at all, when there are open-source languages.


Yet they expect to be paid.


Baloney. Borland was running Delphi. Having Wirth 3000 miles nearer would have made very little difference.


The main competitor of Delphi ultimately was Visual Studio, especially VB and later VC#.

And it was the mess that Borland (by then bought up, I think) made with Delphi 8 that was critical in the decline of Delphi.


I grew up using big languages. Hell, my first language was Perl, then I learned C++. Recent languages I liked are Haskell and Rust.

I'm a bit concerned that my technical aesthetics gravitate towards these types of languages. I really like the ideas and design of things like Scheme and C (and I use the latter when possible). Though I suspect I admire their simplicity from an ease of implementation POV more than day-to-day usage.

But I never actually sit down and study why something like Oberon is better from a design standpoint. I just nod my head at the simplicity bullet point, and go right back into worrying about insanity like "can I std::move this smart pointer in a copy constructor" with remarkably little cognitive dissonance.

I think what I'm asking is: have you had luck using a language with a lean, Wirth-style design? I tend to fixate on what those languages lack (e.g. Go's lack of generics) vs what you get in return.


Languages with all the bells and whistles make you worry about how all the things interact in the situation you're dealing with (like your std::move example). A simple language can be much easier to reason about, because there's less to reason about.

On the other hand, if the simple language doesn't have what you need, then you have to do it yourself. Worse, if the language tries to "protect you from yourself" (as Wirth languages tended to), the language may block you from doing it yourself.

To me, that was the most frustrating problem with the original (pre Turbo) Pascal. Some guy thousands of miles away, who knew zero about my circumstances, was deciding what his language would allow me to do. When you're on the wrong end of that, it's very frustrating.

So I would say that either large or small languages can work, they just have different trade-offs. But avoid languages that try to restrict what you are allowed to do, even if they do it "for your own good". Languages need escape hatches. If they're clearly marked in the code, that's even better.


> why something like Oberon is better from a design standpoint

It's not "better". Wirth simply left out everything that did not seem absolutely necessary to him at the time. His colleague and he wanted to save work by doing this, i.e. to make their Oberon project at that time feasible for two developers on a part-time basis. With Oberon-07 he went even a step further in that direction.


>"On this website you will find information on Pascal for small machines, like Wirth compilers, the UCSD Pascal system, many scanned books and other files on UCSD Pascal, Pascal on MSX and CP/M, Delphi programming on PC, Freepascal and lazarus on Windows and Raspberry Pi, Oberon systems.

Many sources of early Pascal compilers!"

[...]

"WIRTH (1)1970- Pascal compilers, the P2-P4 compilers, Pascal-S, student VU Pascal (the forerunner of the Amsterdam Compiler Kit), Andrew Tanenbaum, Professor R.P van de Riet.

1980 – UCSD P-System, Turbo Pascal, Pascal-M, 10 years VAX/VMS Pascal programmer, teacher of the Teleac support course Pascal, teacher and examinator Exin/Novi T5 Pascal

1990 – Turbo Pascal 3 on CP/M to Delphi on Windows

2010 – Freepascal + Lazarus on Windows and Linux"


Program = DataStructure + Algorithm. The book written by Wirth taught me programming.


Algorithms + Data Structures = Programs.

Classic Book.


I wonder why Modula-2 never overtook Pascal?


Having loved Pascal and struggled with Modula-2, I'll go out on a limb and say: cosmetic and typing issues.

Modula-2, at least in the implementation I had to use at my university, had case-sensitive keywords which were in UPPERCASE. Compared to Pascal's case-insensitive keywords, in the editors available at that time, Modula-2 stuck out like a sore thumb and made writing and reading code a chore.


It's just a matter of habit. For people coming from Pascal or Modula, writing braces around blocks looked just as awful (and I still think using = for assignment IS objectively awful).


I mostly agree, because in JavaScript you have to write '==' for comparison.


When I started grad school, in 1986, I'd written a lot of Pascal and used Turbo Pascal on my PC. But the school used Sun workstations with unix, so I had to learn C pretty quickly. Nobody seemed to use Pascal. There was a Modula II compiler around, but it was a resource hog and never popular. Oberon was published about the same time. I thought it was pretty cool and worked on a compiler for it, though the language evolved out from under me. My impression is that none of the Wirth languages made much of an impression in any of the US grad schools in that era. Of course, I didn't see everything that was going on, but I saw most of the compiler work. Almost 100% in C, moving eventually to C++.


Wirth languages were more of a thing in Europe.

Here Pascal was everywhere and C was just yet another language on MS-DOS.

The PC Systems Programming Bible had samples in MASM/TASM, QuickBasic, Quick/Turbo Pascal and Borland/Microsoft C and C++ compilers.


Thinking about it a bit more, I realize I've oversimplified. Things were more diverse. At the time, there were significant academic compiler projects in the US written in PL/1, SETL, and Lisp. ML (as in Standard ML of New Jersey) work was getting started. I knew older guys who were interested in Algol 68. In academia, I saw no PCs and most grad students didn't have one.


I'd say it is because Turbo Pascal and then Borland Pascal got popular enough, and had TurboVision and other stuff.

Modula-2 was a nice language (and modern Go is built basically on its ideas, plus Oberon's), but it had a hard time competing for the same niche on the PC. It worked better in embedded space, though.

On Unix, there was C, a language with many shortcomings but one big upside: it was the language the kernel and the userland were implemented in. Its compiler was readily available as part of any Unix installation. It seemed a natural choice over Pascal, unless you wanted drastically different features (then you had Fortran, or awk, or Tcl, or later Perl, etc.)


Because Turbo Pascal already had modules.


Not just that Turbo Pascal had modules, but that it had enough of the market that it would have been hard for Modula-x to get a foothold, even if Wirth had been a salesman, which he was not.


Borland had a Modula-2 compiler, created by Martin Odersky of Scala fame.

They decided to focus on Turbo Pascal instead.

Turbo Pascal units are actually based on UCSD units.

Outside MS-DOS, Modula-2 had better chances, but since it was MS-DOS that won the 16-bit wars, it became irrelevant, as the only major alternative was the UNIX clones, where C and C++ ruled.


Modula-2 was quite popular on the Commodore Amiga, where C compilers were a bit so-so (or very expensive) for quite some time, but in the end the PC won.


I agree: There were several successful Pascal systems that (a) pretty much had the commercial market for such languages saturated and (b) had pragmatic solutions for areas such as I/O.

I/O in the original Pascal definition was particularly unfortunate in that it both needed ad-hoc mechanisms and was not particularly expressive. Modula-2 improved on this by being capable of doing I/O without ad hoc mechanisms, but it was still not very expressive.

But if you bought into the more purist Wirth vision, and especially if you tried to use the language as officially defined, Modula-2 was a much nicer language than Pascal.


TP became popular before it had units, IIRC. It was small, fast and, well, available.


I remember the Modula-2 I used in 1988-1991 was a royal pain in the butt for doing any I/O. It was a pain compared to Turbo Pascal, which, as Koshkin stated, already had modules and a significant ecosystem.


Funny; TopSpeed Modula-2, available around that time, had a fine I/O library. Damn, it had coroutines! Much less ergonomic than Go's, though.


I really want to migrate a Turbo Pascal bridge. I guess I'm too old to complete that.


Why? Free Pascal should be able to compile a lot of TP almost unchanged. See https://www.freepascal.org/port.html for some suggestions on what to look out for when porting. See also https://wiki.freepascal.org/Mode_TP for a brief note about Turbo Pascal mode.


i have had a book of the same name on my amazon wish list for a while now. will definitely get to it at some point. big fan of the work of wirth.




