Hacker News

I think you should read it if you work on compilers, even now. But the reason is a bit different.

It was a survey of the state of the art at the time, but obviously it is not the state of the art now. Then why should you read it? Because it is written in two layers.

The first layer goes: we tried many optimization ideas, but only these were effective in practice: inlining, register allocation, etc. Others were not. Surprisingly, this layer is still mostly true today! This is either happy or sad depending on your view. Personally I think it testifies that compilers are a mature field, and that the field had matured by 1970. (And that Frances Allen did the lion's share of the work maturing it.)

The second layer is: so here is how you should do inlining, register allocation, etc. While this layer is also full of gems, it is necessarily badly outdated; the paper predates graph coloring register allocation, for example. On the other hand, ironically, the state of the art 1970s algorithms are often a good choice today when you want an algorithm that is fast and low memory. (Ironic, because they were slow and memory-hungry at the time!) This doesn't apply when there is an important new concern, cache locality for example, but happily that mostly doesn't affect compilers.
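To make the "predates graph coloring" point concrete, here is a minimal sketch of Chaitin-style graph-coloring register allocation, the post-1970 technique in question. The interference graph and variable names are invented for illustration; real allocators add coalescing, spill-cost estimates, and rebuilding after spills.

```python
# Sketch of graph-coloring register allocation (Chaitin style).
# Nodes are variables; an edge means two variables are live at the same
# time and so cannot share a register.

def color_registers(interference, k):
    """Assign each variable one of k registers, or None if it must spill."""
    graph = {v: set(ns) for v, ns in interference.items()}
    stack = []
    # Simplify phase: repeatedly remove a node with degree < k.
    while graph:
        candidate = next((v for v in graph if len(graph[v]) < k), None)
        if candidate is None:
            # No low-degree node left: pick a spill candidate (highest degree).
            candidate = max(graph, key=lambda v: len(graph[v]))
        stack.append((candidate, graph.pop(candidate)))
        for ns in graph.values():
            ns.discard(candidate)
    # Select phase: pop nodes back, giving each the lowest register
    # not used by an already-colored neighbor.
    coloring = {}
    for v, neighbors in reversed(stack):
        used = {coloring[n] for n in neighbors if coloring.get(n) is not None}
        free = [c for c in range(k) if c not in used]
        coloring[v] = free[0] if free else None  # None = spilled to memory
    return coloring

# "a" interferes with "b" and "c"; "b" and "c" do not interfere,
# so two registers suffice for all three variables.
print(color_registers({"a": {"b", "c"}, "b": {"a"}, "c": {"a"}}, 2))
```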

I think there should be a project to write a state-of-the-art-as-of-1970 compiler. It would be a great simple and minimal alternative to GCC and LLVM, and it would also work well as a JIT code generator. We should probably name it Fran.




> On the other hand, ironically, the state of the art 1970s algorithms are often a good choice today when you want an algorithm that is fast and low memory.

Simple things can run faster than complex things. Simplicity should be an architectural element: not an incidental quality, but a feature.

Your idea for the Fran compiler is excellent. Throw in META II from Dewey Val Schorre and it can be language-agnostic.


Maybe I should call the backend for Objective-S Fran :-)

> graph coloring register allocation

I knew some people working on linear optimization, at the time one of the most performance-intensive applications around (with real money/competitiveness in many industries riding on it). The compiler that produced the best code, by quite a bit, was the IBM FORTRAN compiler (the 3090 was the preferred target at the time), which also didn't do graph coloring. It just allocated registers loop by loop, from innermost to outermost.
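The innermost-first strategy described above can be sketched as a simple priority scheme: variables used in the most deeply nested loops get registers first. Everything here (the variable names, the depth/use-count inputs) is invented for illustration; the actual IBM allocator worked on its own IR, not on a table like this.

```python
# Sketch of register allocation by loop nesting depth: deepest loop wins.

def allocate_by_loop_depth(uses, num_regs):
    """uses: {var: (max_loop_depth, use_count)}. Returns {var: reg or None}."""
    # Innermost (deepest) loops first; break ties by use count.
    order = sorted(uses, key=lambda v: uses[v], reverse=True)
    assignment = {}
    for i, var in enumerate(order):
        assignment[var] = i if i < num_regs else None  # None = kept in memory
    return assignment

# "i" lives in a depth-2 loop, "sum" in a depth-1 loop, "n" outside all
# loops. With only 2 registers, the outermost variable stays in memory,
# which is also where a spill hurts least.
print(allocate_by_loop_depth({"i": (2, 10), "sum": (1, 4), "n": (0, 1)}, 2))
```

The appeal is that loop depth is a cheap static proxy for execution frequency, which is why this beat fancier allocators on loop-heavy numeric code.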

¯\_(ツ)_/¯

For a simple compiler, we should also look at Wirth's works, and Wirth's rule: an optimization has to pay for itself, that is, the compiler compiling itself with the optimization both in the executable and in the source code has to be faster than without it in both places.


> For a simple compiler, we should also look at Wirth's works, and Wirth's rule: an optimization has to pay for itself, that is, the compiler compiling itself with the optimization both in the executable and in the source code has to be faster than without it in both places.

That will result in a compiler that is best at optimizing programs that behave like compilers. That may be fine for simple compilers, but it's worth keeping in mind that it will not be a good compiler for domains that also want high performance but whose programs don't behave like compilers (such as scientific computing, or even ML these days).


So that is where this Golang rule comes from.


I agree with most of your comment, but I don't agree that compilers were a mature field by 1970. That was before SSA, graph coloring register allocation, and even linear scan, all three of which were pretty revolutionary.
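Of the three, linear scan is the one small enough to sketch in a few lines, and it fits the earlier point about wanting fast, low-memory algorithms (it was designed for JITs). The live intervals below are invented, and the spill decision is simplified: a real allocator compares interval end points and may evict an active interval instead.

```python
# Sketch of linear-scan register allocation (Poletto/Sarkar style):
# walk live intervals in order of start point, expiring intervals
# that have already ended to free their registers.

def linear_scan(intervals, k):
    """intervals: list of (name, start, end). Returns {name: reg or None}."""
    free = list(range(k))
    active = []            # (end, name, reg), kept sorted by end point
    result = {}
    for name, start, end in sorted(intervals, key=lambda iv: iv[1]):
        # Expire intervals that ended before this one starts.
        while active and active[0][0] <= start:
            _, _, reg = active.pop(0)
            free.append(reg)
        if free:
            reg = free.pop(0)
            result[name] = reg
            active.append((end, name, reg))
            active.sort()
        else:
            # No free register: spill (real linear scan compares end
            # points and may spill an active interval instead; simplified).
            result[name] = None
    return result

# "a" and "b" overlap, so with one register "b" spills;
# "c" starts after "a" ends and reuses its register.
print(linear_scan([("a", 0, 4), ("b", 1, 3), ("c", 5, 7)], 1))
```

One pass, no interference graph: that is the whole trick, and why it trades a little code quality for a lot of compile-time speed.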

I saw her 2007 Turing Award speech. I have tremendous respect for Fran and like your idea to name a codegen after her :-)



