Why type systems are interesting (2004) (lambda-the-ultimate.org)
55 points by gnosis on Jan 19, 2013 | 19 comments


Sure, type systems are very practical. I wouldn't want to program without a good one any more.

But that's not really why I find them interesting. Mostly, I just find most type theory simple and beautiful. It's fun and gives me a new perspective on how to think about computation (as transformations between types).

Also, while a type system restricts the set of possible programs, it can actually make a language more expressive. Haskell's typeclasses make a whole bunch of things much nicer: custom numeric types, overloaded string literals, monads, QuickCheck... All this is far uglier in a language without typeclasses.
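
To make one of those concrete: a minimal property-based test, assuming the QuickCheck package, looks roughly like this. The generator for random [Int] values is found purely through the typeclass machinery; nothing in the file says how to build random lists.

    import Test.QuickCheck (quickCheck)

    -- The Arbitrary instance for [Int] is selected by the type alone.
    prop_reverseTwice :: [Int] -> Bool
    prop_reverseTwice xs = reverse (reverse xs) == xs

    main :: IO ()
    main = quickCheck prop_reverseTwice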

Coincidentally, I'm very excited that Rust is promising to bring much of the typeclassy goodness to low-level programming. It really does make life much nicer, and promises to bring some progress to the otherwise seemingly stagnant field of systems languages. I think that's the sort of innovation we really need rather than Go, but I'm obviously very biased :).


Don't Common Lisp multimethods do kind of the same thing as typeclasses, but in a dynamic language?


They're missing one very important part: dispatching on the return type. All my examples require that. That is, in Haskell, numeric literals are polymorphic in the type they are supposed to be: the type system can infer whether you need an Int, an Integer, a Double, a Word or even a Word18--which my last project did--just from using a literal number. In every other language I know, you would have to specify which one you wanted somehow, either by using a particular syntax (like 0L or 0.1) or by calling a function on the number. In Haskell, the type system can figure it out for you.
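
A small sketch of what that looks like in practice (the types here are just the standard ones):

    import Data.Word (Word8)

    -- The same literal takes on whatever type the context demands;
    -- no suffix or conversion call is needed.
    count :: Int
    count = 255

    ratio :: Double
    ratio = 255

    octet :: Word8
    octet = 255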


The way this happens underneath is that every type that has a Num instance has a fromInteger function. If a literal like 3435 is encountered, the fromInteger function gets called on it to convert it to the appropriate type.
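
For example, a hypothetical newtype (the name is mine) whose Num instance defines fromInteger, so plain integer literals can stand in for it directly:

    newtype Cents = Cents Integer deriving Show

    instance Num Cents where
      Cents a + Cents b = Cents (a + b)
      Cents a * Cents b = Cents (a * b)
      negate (Cents a)  = Cents (negate a)
      abs    (Cents a)  = Cents (abs a)
      signum (Cents a)  = Cents (signum a)
      fromInteger       = Cents        -- this is what a literal desugars to

    price :: Cents
    price = 3435   -- elaborated by the compiler as fromInteger (3435 :: Integer)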


I'm a big fan of Haskell and I have a desire to move my career in that direction, but nevertheless I find Frank Atanassow's belittling comments that "dynamic languages have no advantages" tiresome. Mathematically his argument is no doubt true in the sense that if you can only accept a perfectly correct program, something like Haskell will always be better than something like Perl.

However, he is willfully ignoring the fact that perfect correctness is not a requirement for most programs. He might be unable to sleep at night without knowing that every case has been accounted for, but in the real world you can often create tremendous value by hacking together a vague idea of what you need. Piping some text files through an ad-hoc perl script can be incredibly useful even if the script has horrible, horrible bugs due to unconsidered cases.

Atanassow seems to be arguing that the fact that the perl script has bugs that would have been easily discovered in a statically typed program nullifies the benefit of rushing a script out the door that produces the correct output for the input at hand, even if the incorrect cases would never be run in practice. This mentality is a luxury afforded to PL researchers and other people who are paid purely to think; however, it does more harm than good in bringing the powerful ideas in modern functional programming to the average programmer who, in my opinion, is in desperate need of them.


Frank Atanassow's contention is that statically-typed languages are more expressive, and therefore superior because they can perform both static and dynamic type checking (he actually only uses the word "type" for something that can be statically checked; he uses "tag" for something carrying dynamic type information). Dynamic languages cannot perform static validation, and so are less expressive, and therefore strictly inferior.

His reasoning is sound, but I have an intuition as to why his conclusion is a bit off: dynamic languages may be less expressive, but they can make it much simpler to write, and especially to modify, code without having to worry about two different sets of type semantics. Take the situation where you have a variable whose type was originally static, but now, because of a new requirement, needs to potentially take on values of different types determined at runtime. With a dynamic language, this probably isn't a major issue -- just assign the new values to the variable, and change the code to handle the new values where necessary. With a statically-typed language, you may have to redesign the entire static type structure of your program to change what were static "types" into dynamic "tags" and create new static types to handle those tags, which can mean major surgery.
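
A rough sketch of that "types into tags" step in a Haskell-ish setting (the names are made up):

    -- Before, the field was simply an Int everywhere it appeared.  Now it
    -- has to carry one of several runtime-determined alternatives, so a new
    -- sum type (a statically-checked encoding of a dynamic "tag") is
    -- introduced, and every signature that touched the old Int changes too.
    data Value = VInt Int | VText String

    describe :: Value -> String
    describe (VInt n)  = "int: "  ++ show n
    describe (VText s) = "text: " ++ s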

Sure, a well-designed type structure is important even with a dynamic language, but in real life there are so many cases when it just isn't worth the time to refactor your whole design just to add a new feature. More than that, big refactorings can introduce bugs, and who honestly has 100% test coverage?

In a large enough project with a large enough budget, redesigning to maintain a proper static type structure is always going to be the better choice when viewed over a long enough time frame. As another commenter put it, statically-typed languages are only "asymptotically superior". My sense is that, for finite projects/budgets/timeframes, ditching strict static typing is often a useful simplification, even if just to make a program easier to understand for busy programmers with finite available attention.

Just a thought that occurred while reading the comments -- I may not really know what I'm talking about.


"asymptotically superior" is exactly what my sense is as well.

I've bookmarked the page for later reading as the discussion is far too huge to process in a single sitting.


> if you doubt that, try coming up with some examples of operations you can perform on a value without knowing something about the type of that value.

Interestingly, the fact that this significantly reduces the domain of possible functions is precisely what lends static type systems incredible power, as described by this author, who has never run his own (deployed) Haskell libraries: http://r6.ca/blog/20120708T122219Z.html

For example, the only obvious function that goes from A -> A for all possible values of A is the identity function - and I'm struggling to think of any other non-obvious function that could do the same.

Of course, this only works with highly polymorphic functions, but that's still a huge statement.


Keep in mind that there are (infinitely) many ways to implement the identity function. One such example (in Haskell) is:

    foo = head . (:[])
This is the composition of putting an element into a list with removing it from the list. You can do this with any functor.


> You can do this with any functor.

Not to be pedantic, but being a functor has little to do with the fact that you can add and remove elements from lists.

Knowing that a particular type is an instance of `Functor` tells you only one thing about that type: that you can `fmap` functions over its contents. But look at `fmap`'s type:

    fmap :: Functor f => (a -> b) -> f a -> f b
There is no function with that type which can extract values from a functor, nor insert new ones. `fmap` can only modify the values already contained by the functor, leaving them in the container.

Extraction of contained values from a wrapper and lifting/insertion of values into a wrapper require pattern matching and constructor application respectively, which means the type must have exposed constructors, a property independent of whether that type is a functor.
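
A quick sketch of the distinction (fromJust' and wrap are just illustrative names):

    -- fmap transforms values inside the container without taking them out:
    --   fmap (+1) (Just 2)   ==>  Just 3
    --   fmap show [1, 2, 3]  ==>  ["1","2","3"]

    -- Extraction needs the exposed constructors, not the Functor instance:
    fromJust' :: Maybe a -> a
    fromJust' (Just x) = x
    fromJust' Nothing  = error "fromJust': Nothing"

    -- Insertion likewise uses constructor application:
    wrap :: a -> Maybe a
    wrap = Just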


Which ultimately doesn't matter, because

    head . (:[])
doesn't behave differently from (\x -> x) no matter what the input is. The only possible variation on the id function is in how efficient it is, from fastest to never terminating.


Hmm?

The set of functions with the type α → α is trivially infinite.

f(x) = x - 1 for α = Z

is an obvious example that's not the identity function.

Or do you mean the function

f(x) = x

because obviously there is only one such function (and it's called the identity function).


I think he meant the type ∀α. α → α. At least in Haskell, there is only one non-⊥ function of this type: id.

Moreover, there is no non-⊥ value of the type ∀α. α.

There is a tool called Djinn which can actually provide functions of such sufficiently constrained types. Another example would be the type ∀α.∀β. (α, β) → α, which only contains fst.

Even types that contain an infinite number of functions can have interesting constraints from polymorphism like this. For example, the type ∀α. [α] → [α] contains a whole bunch (for sufficiently infinite values of "whole bunch" :P) of different functions. However, you are guaranteed that the functions can only affect the structure of the list: the function cannot create or modify values of type α in the list. So the function reverse would have this type, but a function which added 1 to each element of the list could not. Moreover, a function of this type would always have to map [] to [].
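
For instance (rotate and bump are names I'm making up):

    -- Allowed: it can only rearrange the structure, never inspect or invent
    -- elements, and it has to send [] to [].
    rotate :: [a] -> [a]
    rotate []       = []
    rotate (x : xs) = xs ++ [x]

    -- Not of type [a] -> [a]: it inspects the elements, so it needs a Num
    -- constraint and is ruled out by the fully polymorphic type.
    bump :: Num a => [a] -> [a]
    bump = map (+ 1)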


> f(x) = x - 1 for α = Z

What? The whole point of what you're replying to is that α remains undefined. Once you specify α, of course you can specify more interesting functions.

IOW, write a function that works even if α is a composite datatype the details of which I refuse to disclose.


Ah, he didn't universally quantify the type.


That was his whole point. For all possible values and types of A, the only obvious function he could find that goes A -> A is the identity function.


What about generic programming and templates? I guess the type is filled in for you, but you still wrote the code without referencing the capabilities of any one specific type.


Hey cool :) it was me asking that question on LTU. I was never converted myself, but I did recently meet a pro Haskell hacker at TechMesh who said he'd been inspired to learn Haskell by that LTU discussion, so it would seem that good explanations were made.


"If someone could next show what makes type systems so interesting in terms that I'll understand then I'll be well on my way into the modern functional programming world.. :-)"

That question is very weird. Not all functional programming languages are statically typed (btw the answerer specifically mentions that we're talking about static typing here).

For example, I'm having lots of fun with monads in Clojure right now. Very functional. Not statically typed at all.



