
I spend a lot of time debating program speed (mostly C vs MATLAB), but the problem is that programming and compile time usually make more of a difference than people consider.

If my C is 1000x faster and saves me 60 seconds every time I run the program, but takes an extra 2 days to write initially, and the program sees lots of edits, meaning that on average I wait 2 minutes for it to compile, then I am MUCH better off with the slower MATLAB until I am running the same thing a few thousand times.
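To make the break-even concrete, here is a minimal sketch (in Haskell, purely for runnability; the 2-day and 60-second figures are just the illustrative ones above):

    -- How many runs before the faster C version pays for itself?
    extraDevTime, savedPerRun :: Double
    extraDevTime = 2 * 24 * 60 * 60  -- 2 extra days of programming, in seconds
    savedPerRun  = 60                -- 60 seconds saved per run

    main :: IO ()
    main = print (extraDevTime / savedPerRun)  -- 2880.0 runs

And that ignores the 2-minute compile after every edit, which only pushes the break-even further out.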

Plus there is the fact that I can look at HN while a slightly slower program is running, so I win both ways.




I think a lot of that delta will prove to have been an accident of history, though. In the past 10-15 years, we've had a lot of "dynamic" languages, which have hit a major speed limit (see another comment I made in this discussion about how languages really do seem to have implementation speed limits). Using a "dynamic" language from the 1990s has been easier than using gussied-up static 1970s tech for a quick prototype, but what if the real difference is that the 1990s tech simply has more design experience behind it, rather than an inherent ease-of-use advantage?

It's not hard to imagine a world where you instead use Haskell, prototyping your code in GHCi or even just writing it in Haskell directly, pay a minimal development speed penalty since you're not being forced to use a klunky type system, and get compiled speeds or even GPGPU execution straight out of the box. (And before anyone freaks out about Haskell, using it for numeric computations requires pretty much zero knowledge of anything exotic... it's pretty straightforward.) It's not out of the question that prototyping this way would be even faster than in a dynamic language, because a type error at compile time, rather than at runtime (or, worse, a nonsense computation you only discover afterwards was nonsense), can save a lot of time.
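To give a flavor of that claim, here is a minimal sketch of the kind of numeric code I mean (the function names are my own; note there's nothing exotic in it):

    -- Ordinary numeric code; no exotic typing needed.
    mean :: [Double] -> Double
    mean xs = sum xs / fromIntegral (length xs)

    variance :: [Double] -> Double
    variance xs = mean [ (x - m) ^ 2 | x <- xs ]
      where m = mean xs

    main :: IO ()
    main = print (variance [1, 2, 3, 4])
    -- Passing a [String] here by mistake is a compile-time type error,
    -- not a nonsense answer you discover after the run finishes.

Load that into GHCi and you can poke at mean and variance interactively, much as you would at a MATLAB or Python prompt.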

I don't think there has to be an inherent penalty to develop with native-speed tech... I think it's just how history went.


> In the past 10-15 years, we've had a lot of "dynamic" languages, which have hit a major speed limit

Exactly. I think that correlates very well with single-core CPU speedups.

Remember, when Python was rising fastest, single-core CPU speed was also pretty much doubling every year. SMP machines were exotic beasts for most developers back then.

So just by waiting 2-3 years you got a very nice speedup, and Python ran correspondingly faster (and fast enough!).

Then we started to see multiple cores, hyperthreads, and so on. That is when talk about the GIL started. Before that, nobody cared much about the GIL. But at some point, it was all GIL, GIL, GIL.

> It's not hard to imagine a world where you instead use Haskell

Hmm, interesting. I wonder if that approach is ever taken in a curriculum: teach kids to start with Haskell.


I share that theory. Part of what led me down this road was when I metaphorically looked around about two years ago and realized my code wasn't speeding up anymore. Prior to that I'd never thought deeply about the "language implementation speed is not language speed" dogma; I'd just accepted the sophomoric party line.

"Hmm interesting. I wonder if that approach is ever taken in a curriculum. Teach kids to start with Haskell. It would be interesting."

To be clear, I was explicitly discussing the "heavy-duty numerical computation" case, where typing as strong as Haskell's isn't even that hard. Learn some magic incantations for loading and saving data, and it would be easy to concentrate on just the manipulations.
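For the record, the incantations really are about this small (a sketch; the file names and the whitespace-separated number format are my assumptions):

    -- Load whitespace-separated numbers, manipulate them, save the result.
    main :: IO ()
    main = do
      xs <- map read . words <$> readFile "input.dat" :: IO [Double]
      let ys = map (* 2) xs  -- the real work is ordinary pure code here
      writeFile "output.dat" (unlines (map show ys))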

But yes, people have done this and anecdotally report significant success. The Google search "Haskell children" (no quotes in the real search) turns up what I know about, so I'll include that in this post by reference. It supports the theory that Haskell is not intrinsically that hard; it's just foreign to what people already know. If you don't start out knowing anything, it's not that weird.


Makes sense if you are the only person running your programs (and you are allowed to ignore things like hardware and power costs).

Also, 2 minutes per change to compile the affected object files and link the executable seems a bit excessive, considering the entire Linux kernel can generally be built from scratch in less time than that (assuming a modern system).




