
    > Syntax is not really an issue for any programmer with a reasonable amount of experience.

    > They're a perfect use case for functional programming, there's not a lot of complex state ideally,
    > and yet even simple simulations are notoriously fraught with unexpected behavior. See: thousands
    > of YouTube videos of physics glitches in games.
I'm not sure I'm following. Are you arguing that, because physics engines are a perfect use case for functional programming, they are implemented as pure functions? I don't know of any physics engines that are implemented this way; hence the bugginess.

If an implementation of 3D scene rasterization contains glitches, the language used is most definitely the issue, because producing glitch-free 3D animations is a solved problem. No programmer writes a physics engine to produce glitches, so if glitches magically appear, even though they aren't visible in the spec, the spec language is faulty.

I guess I'm arguing that the problem is that most popular languages allow you to write programs/specs that are invalid (that will crash when executed). So we're writing specifications in languages where we can't even say whether a specification is valid (actually implementable) or not. If humans produce buggy programs, I'm arguing, that is proof that we're using the wrong tool, because no one intends to produce bugs.

Haskell is a bit scary, at least it was for me in the beginning, because it can seem so hard just to get your program to compile. But that's because the compiler actually verifies that your program is valid - that no unhandled exception will occur. If people are having trouble writing valid, purely functional programs, perhaps this is, in part, because their idea is unimplementable, and Haskell tells them this by refusing to compile their spec, while other languages inform them of it via an unhandled exception that crops up a year later, after the code is in production.
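To make that concrete, here's a minimal sketch (safeDiv is a made-up example, not from any library): the Maybe type forces every caller to deal with the failure case up front, instead of discovering a division-by-zero crash in production.

    -- Hypothetical example: the failure case is visible in the type,
    -- and a missing Nothing branch is flagged by GHC's
    -- incomplete-pattern check (enabled via -Wall).
    safeDiv :: Int -> Int -> Maybe Int
    safeDiv _ 0 = Nothing
    safeDiv x y = Just (x `div` y)

    main :: IO ()
    main = case safeDiv 10 0 of
      Nothing -> putStrLn "division by zero - handled"
      Just r  -> print r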




> No programmer writes a physics engine to produce glitches, so if glitches magically appear, even though they aren't visible in the spec, the spec language is faulty.

I'm saying that popular physics engines generally have bulletproof, tried-and-tested code, and are even implemented in a very functional way on a conceptual level. Despite this, they exhibit glitches. The glitches don't appear magically out of the spec language, they are an inevitable result of the discrete nature of realtime physics simulations.
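Here's a minimal sketch in Haskell (names are mine, not from any real engine) of the classic "tunneling" glitch: the step function is perfectly pure and does exactly what it says, yet with a coarse timestep a fast object crosses a thin wall without any sampled position ever lying inside it.

    -- Pure, deterministic integration step: position advances by v*dt.
    step :: Double -> (Double, Double) -> (Double, Double)
    step dt (x, v) = (x + v * dt, v)

    main :: IO ()
    main = do
      -- a thin wall sits at x = 1.0; the object starts at 0 moving at 15
      let positions = map fst (iterate (step 0.1) (0.0, 15.0))
      -- prints [0.0,1.5,3.0]: no sample ever lands inside the wall,
      -- so a naive collision test never fires and the object tunnels through
      print (take 3 positions)

No amount of purity or type checking fixes this; the discretization itself is the source of the glitch.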

This is meant to be a counter-example to your assertion that bugs come from miscommunication between computers and humans. My argument is that more often than not, we communicate our ideas perfectly, but our ideas are flawed. In the case of physics engines, they are flawed by design in order to compromise accuracy for performance.


    > In the case of physics engines, they are flawed by
    > design in order to compromise accuracy for performance.
This is an odd definition of "flaw" to me. I would call it a design choice, trading accuracy for speed, which we're always forced to do, since we don't have infinite compute power. Would you call H.264 video and MP3 audio flawed by design because they prioritize bandwidth reduction over lossless reproduction?

    > The glitches don't appear magically out of the spec language,
    > they are an inevitable result of the discrete nature of
    > realtime physics simulations.
You seem to be arguing both that glitches in physics engines are inevitable, and that they're a design choice (sacrificing precision for speed).


> the compiler actually verifies that your program is valid - that no unhandled exception will occur

That's not actually correct. The compiler verifies (modulo your transitive use of unsafe primitives) that certain things won't occur. Unhandled exceptions are not one of those things.
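For example (firstOf is a made-up name; no unsafe primitives involved), this compiles without complaint under default GHC settings and aborts at runtime:

    -- Non-exhaustive pattern: fine by the type checker, crashes at runtime.
    firstOf :: [a] -> a
    firstOf (x:_) = x   -- no case for []

    main :: IO ()
    main = print (firstOf ([] :: [Int]))
    -- *** Exception: Non-exhaustive patterns in function firstOf

Partial functions are type-correct as far as GHC is concerned; totality is not something the compiler checks by default.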


That depends on how you define an unhandled exception :)

You're right of course: no compiler can guarantee the absence of exceptions when it comes to IO, but I would argue that GHC does verify that - unless a function explicitly throws an exception - no exception will occur. This is very different from C, where a crash can be the result of just about any operation (if you do something wrong), rather than only an effect that appears when you use "throwIO" (or similar). It's the difference between exceptions being a tool used deliberately by the programmer, and a mechanism for telling the user, after the program is compiled and running, that the machine has no idea what to do now and will abort. In Haskell I write the "error"/"throwIO" statement that causes the abort; in a C program I might have intended something completely different, and only find out at runtime.
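A minimal sketch of what I mean (Timeout is a made-up exception type): in Haskell both the throw site and the handler are spelled out in the program text.

    import Control.Exception (Exception, throwIO, catch)

    data Timeout = Timeout deriving Show
    instance Exception Timeout

    main :: IO ()
    main = throwIO Timeout `catch` \Timeout -> putStrLn "handled deliberately"

The exception exists because I wrote throwIO; it's a value I chose to create, not something the runtime conjured up behind my back.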

We mustn't mix up IO with everything else, because reality is inherently unreliable - it's just the nature of things that a certain file won't necessarily be readable tomorrow; we can never know that. But we can know whether a pure function, compiler bugs aside, will execute without throwing an exception. And I would argue that adding "error" or "fail" does not constitute an unhandled exception, since that statement was put there intentionally by the author of the program (perhaps to test something out).



