
No, language design decisions absolutely have a massive impact on the performance envelope of compilers. Think about things like tokenization rules (Zig is designed such that every line can be tokenized independently, for example), ambiguous grammars (the most vexing parse, the lexer hack, etc.), symbol resolution (e.g. explicit imports as in Python, Java or Rust versus "just dump eeet" imports as in C#, and also things like whether symbols can be defined after being referenced), and that's before we get to the really big one: type solving.
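For instance, Rust is in the "symbols can be defined after being referenced" camp, so name resolution needs a view of the whole module before type checking can start. A minimal illustration (the `helper` name is invented for the sketch):

    fn main() {
        // `helper` is referenced here but defined further down;
        // rustc resolves this without any forward declaration.
        println!("{}", helper());
    }

    fn helper() -> u32 {
        42
    }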


This kind of comment is funny because it reveals how uninformed people can be while having a strong opinion on a topic.

Yes, grammar can impact how fast a compiler can theoretically be, and yes, the type system adds more or less work depending on how it's designed, but none of these are what makes the Rust compiler slow. Parsing and lexing are a negligible fraction of compile time, and type checking isn't particularly heavy in most cases (with the exception of niche crates that abuse the Turing completeness of the trait system). You're not going to make big gains by changing these.
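To make the trait-system point concrete, here's a minimal sketch (type names invented) of type-level computation that the trait solver has to evaluate at compile time; stack enough layers of this kind of encoding and solving time grows with the depth:

    use std::marker::PhantomData;

    // Peano numbers encoded as types.
    struct Zero;
    struct Succ<N>(PhantomData<N>);

    trait ToUsize {
        const VALUE: usize;
    }

    impl ToUsize for Zero {
        const VALUE: usize = 0;
    }

    impl<N: ToUsize> ToUsize for Succ<N> {
        const VALUE: usize = N::VALUE + 1;
    }

    fn main() {
        // The solver unwinds each `Succ` layer to prove the impl applies.
        type Three = Succ<Succ<Succ<Zero>>>;
        assert_eq!(<Three as ToUsize>::VALUE, 3);
    }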

The massive gains are to be made later in the pipeline (or earlier, by having a way to avoid re-compiling proc macros and their dependencies before the actual compilation can even start).


Hard agree. Practically all the bottlenecks we run into with Rust compilation have to do with the LLVM passes. The frontend doesn't even come close. (e.g. https://www.feldera.com/blog/cutting-down-rust-compile-times...)


The point was that language design influences compiler performance. Rust is heavily designed around "zero-cost abstraction", i.e. generate tons of IR and let the backend sort it out. Spending all the time in LLVM passes is exactly what you would expect from this.
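As a hedged sketch of what "tons of IR" means in practice (function names invented): every distinct type a generic function is instantiated with stamps out a fresh copy of its body, iterator adapters and all, and the backend then has to optimize each copy separately:

    // Each distinct iterator type produces its own monomorphized copy
    // of this function (and its closure chain) in LLVM IR.
    fn sum_doubled<I: Iterator<Item = u64>>(iter: I) -> u64 {
        iter.map(|x| x * 2).filter(|x| x % 3 != 0).sum()
    }

    fn main() {
        // Two iterator types -> two instantiations for LLVM to chew on.
        let a = sum_doubled(0..10u64);
        let b = sum_doubled(vec![1u64, 2, 3].into_iter());
        println!("{a} {b}");
    }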


Had you read the linked blog post, you'd have seen that this isn't so much an issue with LLVM having too much work, but with rustc being currently unable to split the work into parallelizable chunks before sending it to LLVM. It takes a very long time not because LLVM has too many things to do, but because it does them in a single-threaded fashion, leaving tons of performance on the table.

> Rust is heavily designed around "zero-cost abstraction", ie. generate tons of IR and let the backend sort it out.

Those two aren't equivalent: Rust is indeed designed around zero-cost abstraction, and it currently generates tons of IR for the backend to optimize, but it doesn't have to: it could run some optimizations in the front-end so that it generates less IR. In fact, there has been ongoing work to do exactly this to improve compiler performance. But this required rearchitecting the compiler in depth (IIRC Rust's MIR was introduced for that very reason).


While LLVM is known to be slow, not all LLVM-based languages are equally slow.


This isn't an issue with LLVM being slow, but with rustc not calling LLVM efficiently; read the linked blog post!


I guess this was my point.


The lexer hack is a C thing, and I've rarely heard anyone complain about C compiler performance. (In C, `(A) * B;` is either a multiplication or a cast depending on whether `A` names a type, which is why the lexer needs feedback from the symbol table.) That seems more like an argument that the grammar doesn't have as much of an impact on compiler performance as other things do.


Yeah. It's exactly backwards, because good language design doesn't make anything except parsing faster. The problem is that some languages have hideously awful grammars that make things slower than they ought to be.

The preprocessor approach also generates a lot of source code that then needs to be parsed over and over again. The solution to that isn't language redesign; it's to stop using preprocessors.


The preprocessor does not necessarily create a lot of source code in C. It can, if you expand arguments multiple times and there is an exponential explosion for nested macros, but this is also easy to avoid.
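That blow-up is easy to sketch; shown here with Rust's declarative macros since that's the thread's main language, but the C preprocessor behaves analogously whenever a macro pastes its parameter more than once:

    // Each layer pastes its argument twice, so n nested calls
    // expand to 2^n copies of the innermost expression.
    macro_rules! twice {
        ($e:expr) => { ($e) + ($e) };
    }

    fn main() {
        // Four layers -> 2^4 = 16 copies of `1` after full expansion.
        let n = twice!(twice!(twice!(twice!(1))));
        assert_eq!(n, 16);
    }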



