
Doesn’t most performance-critical code involve some sort of similar “domain”? The examples people have been talking about in this thread, like media codecs, cryptography, matrix solvers, databases, etc. all seem to.



Yes, but these domains are all very different, and would require different languages, which would defeat the point.


Why? Why, if I care about performance, should I write my database, my kernel scheduler, my TCP stack, my JPEG codec, my cryptographic hash function, my physics engine, and my 3d-rendering pipeline in the same language?

If we assume (obviously this isn’t quite true today) that domain-specific languages existed which would let me write more readable and modifiable source, in a highly portable way, with explicit control over optimization strategies so that I could get orders-of-magnitude better performance than the naive C++ version, why wouldn’t I use them whenever possible?
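
For concreteness, here is a rough sketch of the kind of separation I mean (plain Python with entirely made-up names, not any existing DSL; it's roughly the algorithm/schedule split that image-processing DSLs like Halide popularized): the algorithm is written once for readability, and the optimization strategy is stated separately, so the compiler rather than the programmer rewrites the loops.

    # Hypothetical sketch only: none of these names come from a real DSL.
    # The function says *what* to compute; the "schedule" comment shows how
    # an optimization strategy could be stated separately from it.

    def blur_3tap(img, w, h):
        """Reference algorithm: 3-tap horizontal box blur, written for clarity."""
        out = [[0.0] * w for _ in range(h)]
        for y in range(h):
            for x in range(1, w - 1):
                out[y][x] = (img[y][x - 1] + img[y][x] + img[y][x + 1]) / 3.0
        return out

    # In the imagined DSL the tuning would live apart from the algorithm,
    # e.g. (made-up syntax):
    #   schedule(blur_3tap).tile(x=64, y=64).vectorize(x, 8).parallel(y)
    # and retargeting a new machine would mean changing only the schedule.

    if __name__ == "__main__":
        img = [[float((x * y) % 7) for x in range(8)] for y in range(8)]
        print(blur_3tap(img, 8, 8)[3][:4])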

Then all the cold “glue” code that isn’t any kind of performance bottleneck can be written in whatever language, and doesn’t need to be optimized at all.


That shifts a lot of cost to defining and implementing many DSLs. (For an extreme case, consider the VPRI STEPS project where it looked like some of the DSLs were only used for one program.)


That extreme case may still be worth it, if the result (program + DSL implementation) is easier to maintain than the mainstream alternative (program in a general purpose language).

Besides, it wouldn't be that many DSLs. There are relatively few domains where performance is really critical. Cryptography, rendering, encoding/decoding… that's a few dozen at most, with global impact for each. If those DSLs have any positive impact at all, they will be worth the cost a million times over.


So instead of writing a database, a kernel scheduler, a TCP stack, a JPEG codec, a cryptographic hash function, a physics engine, and a 3D rendering pipeline, we have to write a DSL for most/all of these programs and then the program itself. And this is less work than just writing them in hand-optimized C in the first place?


In many (but not all) cases, yes - because you don't just write programs. You have to read them and maintain them and update them too - sometimes 'forever'. A DSL is designed with the whole lifecycle in mind. Also, these tools can come with things like specialized analyzers, debuggers, verifiers etc for those specific domains, which can help increase assurances and improve your ROI a lot.

Cryptol is a decent example: a domain-specific cryptographic toolkit which can verify C code against specifications. It can't do much besides writing crypto, but it's pretty good at that.
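
To illustrate the spec-vs-implementation idea (this is plain Python property testing with invented names, not Cryptol; real tools check equivalence with solvers rather than by random sampling):

    # Illustration only: a readable reference specification checked against a
    # "hand-optimized" version. Real verifiers prove equivalence; this samples.
    import random

    def rotl32_spec(x, n):
        """Specification: 32-bit left rotate, written for clarity."""
        n %= 32
        return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

    def rotl32_fast(x, n):
        """Pretend optimized implementation we want to trust."""
        n &= 31
        return ((x << n) & 0xFFFFFFFF) | (x >> ((32 - n) & 31))

    if __name__ == "__main__":
        for _ in range(100000):
            x = random.getrandbits(32)
            n = random.randrange(64)
            assert rotl32_spec(x, n) == rotl32_fast(x, n), (x, n)
        print("fast version matches the spec on sampled inputs")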

In other words, the idea is that the cost of the toolset implementation is amortized over the whole lifespan of the project, and multiplied many times over for every use - it's not just the initial implementation. Naturally, this approach doesn't always work, for a million reasons that might be specific to your use case.


So then it stands to reason that we need a DSL for writing DSLs.



Less code / easier to read / fewer bugs / easier to maintain / easier to formally verify? Alan Kay seems to be a fan of this principle, and from what I have seen it makes sense. He doesn't do it for optimisation per se, but the same idea applies.


Because these are all artificial boundaries you have created. Many of the operations within the domains you mention here are going to be the same. It all comes down to effects, data structures, and algorithms.



