> FORTRAN’s conflation of functions (an algebraic concept) and subroutines (a programming construct) persists to this day in nearly every piece of software, and causes no end of problems.
This is exactly what Haskell solves. Not by eliminating subroutines, but by separating the two concepts out again. In particular, functions are functions in the algebraic sense, and subroutines just become values in the IO type.
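For the unfamiliar, here is a minimal sketch of what that separation looks like (the identifiers are made up for illustration): double is a function in the algebraic sense, while greet is a subroutine held as an ordinary value of type IO ().
double :: Int -> Int
double x = x * 2

greet :: IO ()
greet = putStrLn "hello"

main :: IO ()
main = do
  print (double 21)   -- apply the pure function
  greet               -- run the subroutine value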
> Tracing compilers scratch the surface of reversing this mistake, but so far I know of no programming languages that are specifically designed around such a mechanism.
I'm not sure why he thinks tracing compilers help rectify the issue. Perhaps they can claw back some performance gains from knowing a function is pure, but static compilers can do that too.
And I think it's fair to say that Haskell was basically designed around "such a mechanism"; it's a shame the author doesn't know about it.
> ISWIM (which some programming language histories identify as the “root” of the ML family) is based on ALGOL 60 (source), which of course is based on FORTRAN
While ISWIM was influenced by ALGOL, it was based on the λ-calculus. And the λ-calculus, of course, precedes Fortran—and, in fact, computers in general—by quite a margin! It was originally developed in the 1930s.
Many modern functional languages are just thin layers over a typed λ-calculus: Haskell's Core intermediate representation is actually exactly that extended with some low-level primitives. This means that they are far closer to formal logic than they are to Fortran!
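As a toy illustration (my own sketch, not GHC's actual Core), the kind of "typed λ-calculus extended with some low-level primitives" that such a language can sit on top of is small enough to write down as a data type:
data Ty = TyInt | TyFun Ty Ty

data Expr
  = Var String          -- variables
  | Lam String Ty Expr  -- λ-abstraction with an annotated argument type
  | App Expr Expr       -- application
  | LitInt Int          -- low-level primitive: integer literals
  | PrimAdd Expr Expr   -- low-level primitive: machine addition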
This is a minor nit, but there are effects in pure Haskell functions, namely partiality and non-termination. (In other words, the sense in which "functions are functions" is actually a deep question)
The Haskell language is described in The Haskell Report via an informal presentation of its denotational semantics. Its types are all "lifted" from sets to something like Scott domains to account for partiality and non-termination, both of which are denoted by the value called "bottom", or _|_.
So, they are not strictly functions in the normal set-theoretic sense, but they are (mostly?) mathematically accurate continuous functions between Scott domains. As the semantics are not formally defined, there is a limit to what you can say about them, but there is an interesting paper entitled "Fast and Loose Reasoning is Morally Correct" that shows that treating Haskell as if it worked in terms of total set-theoretic functions is a reasonable thing to do in some practical circumstances when using Haskell.
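Two tiny examples (made up for illustration) of what _|_ looks like in practice: both definitions below have type Int, yet neither denotes an ordinary integer; semantically, each one is bottom.
looping :: Int
looping = looping        -- non-termination

partial :: Int
partial = head []        -- partiality: evaluating this calls error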
If you want really pure, total set-theoretic functions in a programming language, you will have to go to a total functional language such as Coq or Agda. You lose Turing-completeness when you stick to total functions, though, and most people only type-check their functions rather than actually running them (this is not as crazy as it sounds--these languages are primarily used to help formulate formal proofs, which are finished when they pass the type-checker).
In any case, the bit in the blog about FORTRAN and everything conflating procedures and algebraic functions strikes me as nonsense, at least without further explanation to clarify/justify it.
I always thought that algebraic functions are not guaranteed to be defined for certain parameters. It's just that perfectly algebraic functions don't throw errors; they silently return ±infinity, like the asymptotes in `tan x`.
To be pedantic, tan doesn't have a value at tau/4 (or pi/2, if you swing that way). Also, algebraic functions don't return, they simply are: cos(0) is 1; it doesn't return 1, it doesn't compute 1, it is not a kind of computer, nor a kind of program, nor any kind of thing that consumes resources and time and returns a value; it really, literally, is 1.
Algebraic functions are just syntactic notation. You can sit down and convert from one notation to another, like how cos(5) is a number quite close to 0.28366218546322625, and deriving one representation from the other does take resources and time, because it's a physical process performed by a person or computer.
But sin, cos, tan, cotan, log and all their friends by themselves don't compute, they are just a different kind of notation for numbers.
Which is why I find the desire to make functions in programming like algebraic functions silly: by definition they are two completely different things. One is a specification for a process that produces binary-encoded numbers; the other is a syntactic notation for real numbers.
I upvoted you because I think this is an interesting line of reasoning, but I disagree that it's a very useful one.
You seem to be implicitly defining "computation" as that which a physical computer does (machines, biological brains...). Something that literally consumes physical resources. By that logic, a Turing machine is not a computer and lambda calculus is not about computation. As I said, you could spin the semantics that way but is it useful? Does that give us useful insights?
Functions are not just syntactic notation. Functions are, by definition, mappings from set A to set B. They don't have anything to do with notation. "cos(x)" is merely a notation, yes, but it's a notation for a function, not for a number. This is an important distinction. "cos(5)" evaluates to a certain number, yes, but it's not just syntactic sugar for that number. Not to mention that functions don't need to operate on sets of numbers.
I agree with your point about functions versus algorithms (or computation if you prefer that term), but I disagree that your definition of functions isn't useful for computing.
It's precisely that sense of function that things like Haskell (or Erlang, or Prolog) try to introduce - a different syntax for writing the same value.
A pure function definition just explains that a particular value is the result of applying functions to other particular (stand-in) values (and gives a way to attach names to values). Such definitions are only useful for explaining which computation you'd like to happen when the program actually goes to carry one out: I'd like the number that is the cosine function applied to 5, which tells the compiler precisely how to compute that number (assuming it can build an algorithm equivalent to cosine). Of course, we could use another pure definition (that the cosine of x is the same as a series expansion about x) to allow the compiler to replace the desired cosine function with something the system has intrinsics for: evaluating a polynomial at the point 5.
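As a rough illustration (my own code, a truncated Maclaurin series rather than anything a compiler would literally generate), such a pure definition of cosine might look like:
cosApprox :: Double -> Double
cosApprox x = sum [ (-1)^n * x^(2*n) / fromIntegral (product [1 .. 2*n])
                  | n <- [0 .. 10 :: Integer] ]
With these eleven terms, cosApprox 5 agrees with cos 5 to several decimal places; the point is that the polynomial is given purely as an equation between values, and how it gets evaluated is left to the compiler.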
So in essence, we can use functions as a tool to describe what value we want from the program (or various structural properties), in such a way that the compiler can correctly infer how to build an algorithm to do so, and tie it correctly in to our (actually) algorithmic code.
Minor aside: functions (in the math world) apply in contexts besides the real numbers, including being syntactic notation for binary numbers (or fields of order 2^n).
To be really pedantic, "algebraic functions" are functions that can be defined as the roots of polynomial equations, i.e. solutions of equations of the form f(x_1, x_2, ..., x_n) = 0. There are other, more general notions of "function" in mathematics, though. I'm not sure how pedantic the author of the original post meant to be!
Most generally, a function is not a number, it is a relation between two sets. One can define such things in a number of ways, but one way is to describe the relation via a formal language such as the lambda calculus. The lambda calculus is not technically a "programming language", it is a formal system of meaningless symbols and a set of rules for forming, manipulating, and eliminating them.
Although there is no fixed relation between reduction rules in the lambda calculus and physical resources in a computer, one can still compare the number of reduction steps required to determine which form in the co-domain of a function defined in the lambda calculus corresponds to a specific form in its domain, and this will provide a reasonable stand-in for comparisons between resource usage of similar computer programs.
So, really, computation is not completely foreign to mathematics, and mathematical functions and "computing functions" are not completely different animals, just... distantly related? Some languages are more mathematical in their approach than others.
> But sin, cos, tan, cotan, log and all their friends by themselves don't compute, they are just a different kind of notation for numbers.
No, they are a notation for ideas. The fact that they have numerical values associated with them is true but misses the point that trigonometric functions are primarily defined with respect to each other and with respect to certain geometric concepts.
In other words, cos(0) equals 1, but 1 has many more meanings than just cos(0). I would much rather have a student tell me that tan(x) = sin(x) / cos(x) than to say tan(x) is just a number.
Actually, to be pedantic, functions are notations for relations between sets of numbers: sin and cos are relations between the real numbers and the closed interval [-1, +1], and so on.
Roughly speaking, partial functions are functions that might crash -- or that might throw exceptions, if you prefer. So even if such a function always terminates, it might do so at the cost of not returning a value of the expected type.
A bit more precisely, a partial function is a function that is not defined for some values of its domain (its input). A well-known instance is the division operator, which is not defined when the divisor is zero. Other common examples are head and last:
-- this promises to receive a list of elements of some
-- type t and return the first element
head :: [t] -> t
head (first:rest) = first
head [] = error "that list has no head, yo!"
Here error is analogous to throwing an unchecked Throwable (e.g. RuntimeException) in Java.
One solution is to use a safer version that returns Maybe a instead of a. This is analogous to using checked exceptions.
safeHead :: [a] -> Maybe a
safeHead (first:rest) = Just first
safeHead [] = Nothing
The solution above is common and idiomatic.
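For example, a caller of safeHead is then forced to handle the empty case explicitly (describeHead is just a made-up name, using the safeHead defined above):
describeHead :: Show a => [a] -> String
describeHead xs = case safeHead xs of
  Just x  -> "first element: " ++ show x
  Nothing -> "the list was empty"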
Another option is to accept only arguments of a different list type that's guaranteed to be non-empty.
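A sketch of that idea in plain Haskell (base ships essentially the same thing as Data.List.NonEmpty; this hand-rolled version is just for illustration):
data NonEmpty a = a :| [a]

neHead :: NonEmpty a -> a
neHead (x :| _) = x   -- total: a NonEmpty value always has a first element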