
The code was tested by the compiler, reducing the failure space to whether or not his declared types constitute an adequate proof.

It's an amusing anecdote. Even when using Haskell, there are some obvious constraints to relying on the compiler as the sole validation of correctness.



That's not good enough; you're still testing a proxy behavior, not the behavior in question. The hypothesis is, "Haskell's type system is robust enough that one can code an application through type checking alone." Doing the hypothesis is not testing the hypothesis; assuming the hypothesis is correct in order to demonstrate that the hypothesis is correct is begging the question.

In particular, the kinds of errors that may still exist are ones where the implementer has a fundamental misunderstanding of what needs to happen. The code may correctly implement what he wanted it to do, but what he wanted it to do is wrong.
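To make that concrete (a toy illustration of my own, not from the article): GHC happily accepts a definition whose type is right and whose intent is wrong.

    -- Hypothetical example: typechecks and is total, but is wrong
    -- if what the author *intended* was the maximum.
    myMax :: Ord a => a -> a -> a
    myMax x y = if x < y then x else y  -- oops: returns the minimum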

If you still object, consider: I'm describing the scientific method.


Consider that his Haskell program (ostensibly) relies on mathematical proofs, a chain of them. The compiler consumes and validates this chain of proofs and checks whether or not the program is "true." It's hard to imagine why you'd use the scientific method to verify that 1 + 1 + 1 == 3; basic math and logic (something computers are good at) are sufficient.
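That particular arithmetic can even be handed straight to the type checker. A minimal sketch using GHC's type-level naturals (GHC.TypeLits; the name `three` is mine):

    {-# LANGUAGE DataKinds, TypeOperators #-}

    import GHC.TypeLits
    import Data.Proxy (Proxy (..))

    -- If 1 + 1 + 1 did not reduce to 3, this definition would not compile.
    three :: Proxy (1 + 1 + 1)
    three = Proxy :: Proxy 3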

As he and others have pointed out, the author was pretty up-front about the scale (relatively small; looks like a few hundred LOC) and nature (implementing an extant, well-defined API) of the problem. It's not an application as you suggest, and the author doesn't suggest it's a good idea for applications, either.

The conclusion isn't "use Haskell & never test your code!". After all, he explicitly offers no conclusion. :) It's just an example of how, sometimes, you can be reasonably certain that your Haskell program is correct when it compiles.

Myself, I'd still run it, or at least QuickCheck it. I don't trust myself not to write logically sound, completely incorrect code.
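(For anyone unfamiliar, QuickCheck generates random inputs and checks properties against the running code. The property below is a stock example of mine, not anything from the article:)

    import Test.QuickCheck

    -- Stock property: reversing a list twice gives back the original.
    prop_reverseTwice :: [Int] -> Bool
    prop_reverseTwice xs = reverse (reverse xs) == xs

    main :: IO ()
    main = quickCheck prop_reverseTwice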


As an exercise, I think it's great. But my point is that he has not completed the exercise.


Once you've figured out on paper that 1 + 1 = 2, you don't necessarily have to put one apple next to one apple and count two apples to “complete the exercise.”


No, you don't. But software is not that simple.


I object because you're attempting to extrapolate an axiom from an anecdote merely intended to demonstrate the value and power of the type system.


My point is that we have not yet verified whether the anecdote's assumed conclusion is correct.


I actually encounter this a lot with Haskell. Once you taste the power, it isn't good enough. You want more: dependent types, and so on. But the truth is, it's still a large step up from where we were before, in my opinion.
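(A taste of what "more" looks like, approximated in GHC with a length-indexed vector; this GADT sketch is illustrative, not something from the thread:)

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    data Nat = Z | S Nat

    -- A list whose length is tracked in its type.
    data Vec (n :: Nat) a where
      Nil  :: Vec 'Z a
      Cons :: a -> Vec n a -> Vec ('S n) a

    -- The type forbids calling this on an empty vector, so no runtime
    -- check (and no test) for the empty case is needed.
    safeHead :: Vec ('S n) a -> a
    safeHead (Cons x _) = x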


The compiler can only test what he writes, not what he intends to accomplish.



