So here's a "folk theorem" that explains why you should care about such abstract structures.
Take a 2x2 matrix with real elements (a b)(c d). If I take the space of all such matrices and put a uniform probability distribution over it, what is the probability of getting a matrix that is not invertible? Not invertible means det M = 0, or ad - bc = 0. I can solve for a in terms of b, c, and d, which shows that the non-invertible matrices form a 3 dimensional surface in the 4 dimensional space, which has zero volume. So a matrix chosen at random is almost surely invertible.
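As a quick numerical sanity check of that claim, here is a sketch in Python; drawing the entries uniformly from [-1, 1] is an arbitrary choice of mine, but any distribution with a density makes the same point:

    import random

    # Sample 2x2 matrices (a b)(c d) with entries uniform on [-1, 1] and
    # count how often the determinant ad - bc comes out exactly zero.
    trials, singular = 1_000_000, 0
    for _ in range(trials):
        a, b, c, d = (random.uniform(-1.0, 1.0) for _ in range(4))
        if a * d - b * c == 0.0:
            singular += 1

    print(singular, "singular matrices out of", trials)  # almost surely 0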
The folk theorem is in analogy with this: say I have a set of elements {a, b, c, ...}, and I describe mappings of various kinds that take some subset of tuples of the elements into other subsets. You can draw it as a directed multigraph. If you consider the space of all such graphs, how many of them generate "rich" or "regular" structure? For example, having a system where my mapping is defined over all elements is actually pretty restrictive. For a set of N elements, there are (N-1) + (N-2) + ... + 1 ways of setting up a mapping that is not defined over all the elements. As N gets large, this dwarfs my one regular version. As I put in more operations, go to infinite sets of elements, and add more regularity properties, this imbalance grows. So systems that have rich, regular structure are a zero volume subspace of the set of all such systems.
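To see how lopsided this gets even if we restrict to total mappings, here is a brute-force count in Python of every binary operation on a small set, checking how many satisfy just one regularity property, associativity. This is my own variation on the counting above, not the same tally:

    from itertools import product

    def count_associative(n):
        """Enumerate every binary operation on {0, ..., n-1} as a table
        and count how many are associative."""
        elems = range(n)
        total = assoc = 0
        for values in product(elems, repeat=n * n):
            op = dict(zip(product(elems, repeat=2), values))
            total += 1
            if all(op[op[x, y], z] == op[x, op[y, z]]
                   for x in elems for y in elems for z in elems):
                assoc += 1
        return assoc, total

    for n in (2, 3):
        assoc, total = count_associative(n)
        print(f"n={n}: {assoc} associative out of {total} operations")
    # n=2: 8 of 16; n=3: 113 of 19683 -- the fraction collapses as n grows.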
Given this, suddenly the interest in the few dozen algebraic structures that algebraists of various kinds explore makes a lot more sense. They're following infinitely thin paths of structure through space. You start from a raw set with no operations. In one direction you trace through groupoids, semigroups, monoids, groups, rings, rigs, tropical rings, fields, modules, vector spaces, etc. In another you go through pre-orders, partial orders, chain-complete partial orders, semilattices, lattices, etc. In another you go through categories, topological spaces, natural transformations, monads, arrows, topoi...
Roughly, the groups-rings-fields path takes you through values whose regularity looks vaguely like numbers. Orders and lattices take you through things that look like decomposition into pieces. Categories to topoi take you through things that look like sequences of operations and transformations. That the latter might be of interest to a programmer is fairly obvious from this point of view. So when someone says a monad is interesting, what they are trying to tell you is that it is the relevant data structure for describing an imperative program, the way an array or list is the relevant data structure for describing an ordered sequence of numbers.
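To make that concrete, here is a deliberately tiny sketch in Python; the names Maybe, unit, and bind are mine, and the only point is that bind plays the role of the semicolon, sequencing one step after the next while threading the "effect" (here, possible failure):

    class Maybe:
        def __init__(self, value, ok=True):
            self.value, self.ok = value, ok

    def unit(x):                # wrap a plain value into the structure
        return Maybe(x)

    def nothing():              # the failure case
        return Maybe(None, ok=False)

    def bind(m, f):             # the "semicolon": run f only if the step before succeeded
        return f(m.value) if m.ok else m

    def parse(s):
        return unit(int(s)) if s.isdigit() else nothing()

    def reciprocal(n):
        return nothing() if n == 0 else unit(1 / n)

    # An "imperative" program -- parse, take the reciprocal, halve --
    # written as a chain of binds instead of a chain of statements.
    def program(s):
        return bind(bind(parse(s), reciprocal), lambda r: unit(r / 2))

    print(program("4").value)   # 0.125
    print(program("0").ok)      # False: the failure short-circuits the rest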
The reason you care, then, is that once you have traced out these paths, when you are looking at a problem your brain will automatically try to draw you back towards the thread of regular structures. Sometimes you don't go to familiar ones. I ended up building an odd variation of lattices to describe genome annotations because of this, and it was an exercise in finding what about the domain drew me out of strict regularity rather than trying to find my way into it, which is a lot easier.
Similarly, the entirety of the literature on eventual consistency in databases can be summarized as "make your merge operation the meet of a semi-lattice." If you've traced through that particular thread, then you can immediately think in a deep way about what kind of structure eventual consistency has and what the variations on it are.
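One concrete instance (my example, not drawn from that literature): a state-based grow-only counter whose merge takes a per-replica maximum. max is idempotent, commutative, and associative, which is exactly the semilattice law that makes replicas converge no matter how often or in what order they merge:

    def merge(a, b):
        """Semilattice merge: element-wise max of two replica states."""
        return {k: max(a.get(k, 0), b.get(k, 0)) for k in sorted(set(a) | set(b))}

    replica_1 = {"r1": 3, "r2": 1}
    replica_2 = {"r2": 4, "r3": 2}

    # Idempotent, commutative, associative -> the replicas converge.
    assert merge(replica_1, replica_1) == replica_1
    assert merge(replica_1, replica_2) == merge(replica_2, replica_1)
    print(merge(replica_1, replica_2))   # {'r1': 3, 'r2': 4, 'r3': 2}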
Thank you. A lot. This is one of the most well-founded, didactic, and insightful comments I have ever read on HN.
I happen to have a decent background in mathematics. Some people say it's useless for most contexts a programmer will meet (exceptions being vectors and matrices for 3D, and quaternions for some 3D APIs, for example), but I disagree. You just illustrated how abstract mathematics can be directly relevant to programming.
Even when the mathematics one knows is not directly applicable, I think having studied mathematics helps shape one's thinking at all times, making it more orderly and efficient: having correct intuitions about what will be a productive approach and what will not, loving and embracing simplicity.
That said, perhaps we should use different words for the different activities that look like programming. To paint a caricature: one is piling frameworks and libraries onto a language whose basics you haven't even mastered, filling in the blanks from examples and snippets found on the net. The other is writing well-thought-out programs on top of highly generic patterns like Reactive Programming, or even designing your own patterns and frameworks to be used by others.
The first activity does not need much mathematics; the second benefits greatly from it.
> What do you think? Should we use different words?
Maybe it's because I was up until midnight debugging a production problem on Christmas eve, but I can't make myself care much either way. My kneejerk reaction is to say no, because I hesitate to introduce categories without being quite certain of their operational reality.