Category theory has served mathematicians well, but only because they had a great mass of concrete detail which needed to be abstracted away.
An abstract framework like category theory can actually be harmful where there isn't much concrete detail to begin with. A personal example: I was overwhelmed by category-theory jargon when I started learning Haskell a couple of years back. My confidence returned only when I realized that there was just one category in play, with types as the objects and functions as the arrows. The jargon was unnecessary. Today Haskellers discuss many interesting issues about the language and its implementation that do not fit into the category-theoretic framework at all.
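For the record, that one category is exactly what GHC's base library captures with the `Category` class in `Control.Category` (the helper names `double`, `succ'`, and `composed` below are mine, just for illustration):

```haskell
import Prelude hiding (id, (.))
import Control.Category

-- The "one category in play": objects are Haskell types,
-- arrows are ordinary functions. base already provides
--   instance Category (->)
-- where id is the identity function and (.) is composition.

double :: Int -> Int
double = (* 2)

succ' :: Int -> Int
succ' = (+ 1)

-- Composing arrows in this category is just function composition:
composed :: Int -> Int
composed = succ' . double   -- first double, then succ'
```

Here `composed 5` is `11`, and the category laws (`id . f == f`, associativity of `(.)`) hold by definition of function composition.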
In mathematics, I would describe category theory as an offshoot of structuralism. The main idea is that instead of starting with some "premodel" of what you want to study ("I want to study geometry on surfaces, so I'm looking at subsets of R^3!") and then taking away structure ("...but what is the preferred center of the universe? I'd better only look at properties that are invariant under translations..."), you try to identify the structure in your problem and then work out its consequences. This is obvious in retrospect. If your models describe reality and everything is built out of sets, then is 0 an element of 2? (Under the von Neumann encoding it is, since 2 = {0, 1}, a "theorem" that tells you nothing about arithmetic.)
Category theory provides useful tools for identifying structure. It's not necessarily a good vocabulary to talk about these things. Forcing everything into a first-order mold is typically confusing. On the other hand, there has been a lot of work using category theory to solve real problems. This has resulted in some sophisticated tools and a very "scalable" way of looking at things (e.g. universal properties and adjunctions are everywhere).
Category theory is needlessly complicated. There are many redundant side conditions, and the definitions are ... let's be charitable and call them idiosyncratic. This just becomes more apparent when you apply category theory to programming languages. In semantics you already have a precise way of writing statements (type theory), and you know that everything you can write down is parametric. This eliminates almost all side conditions on most definitions in category theory. So yes, coming from Haskell, category theory probably looks extremely complicated and not worth the effort to learn.
However, some of the tools from category theory are as useful to computer science as differentiation is to engineering. There are plenty of questions to which you can calculate the answer using some elementary category theory. What is the dual concept to the reader monad transformer? What is the eta law for sum types? Can this data structure be a monad? If not, can I embed it into a monad (e.g. using codensity)? These are all simple questions, but if you don't know the general strategy for dealing with them it seems like I'm asking you to solve a puzzle instead of asking you to compute 7 * 4.
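To make the codensity question concrete, here is a minimal self-contained sketch of the construction (the kan-extensions package ships an equivalent `Control.Monad.Codensity`; the definitions below are a reimplementation, not that API): `Codensity f` is a monad regardless of whether `f` itself is one.

```haskell
{-# LANGUAGE RankNTypes #-}

-- Codensity f is a monad for *any* type constructor f.
newtype Codensity f a = Codensity
  { runCodensity :: forall r. (a -> f r) -> f r }

instance Functor (Codensity f) where
  fmap g (Codensity m) = Codensity (\k -> m (k . g))

instance Applicative (Codensity f) where
  pure a = Codensity (\k -> k a)
  Codensity mf <*> Codensity ma =
    Codensity (\k -> mf (\g -> ma (k . g)))

instance Monad (Codensity f) where
  Codensity m >>= g =
    Codensity (\k -> m (\a -> runCodensity (g a) k))

-- Embedding f into Codensity f needs some way to "bind" in f;
-- when f is an actual Monad the round trip is direct:
liftCodensity :: Monad f => f a -> Codensity f a
liftCodensity m = Codensity (m >>=)

lowerCodensity :: Monad f => Codensity f a -> f a
lowerCodensity (Codensity m) = m return
```

For a structure without a lawful bind (the usual example is a `Set` type constrained by `Ord`), you embed it the same way via whatever restricted bind it does support, and do all the monadic plumbing inside `Codensity`.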
I think type theory is really nice for writing down certain things. Quantified statements are a perfect example! But category theory has nice properties, too. You already mention universal properties and adjunctions; those are obvious examples going in the other direction.
Much of math is this way. Find a subject that you can see from two perspectives and abuse the best parts of each!
In type theory, any type constructor F together with a map f :: (a -> b) -> F a -> F b such that f id = id is automatically a functor (it respects composition for free, by parametricity).
In type theory, every equivalence is automatically natural.
There's a lot more and it's made worse by the fact that most of the side conditions are redundant even in normal category theory. For instance, you can specify an adjunction using two functions F : A -> B and G : B -> A, together with an equivalence phi : forall a b. Hom(F a, b) = Hom(a, G b) such that phi satisfies one additional equation. It follows from this that F, G are functors and that the equivalence is natural. Since a lot of definitions in category theory mention adjunctions you can usually simplify the definitions like this. For example the definition of a Cartesian closed category in most textbooks expands to something like 20 equations, but you can simplify it down to the usual few equations which specify a model of simply typed lambda calculus.
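A Haskell rendering of that "two functions plus one isomorphism" packaging (the record and field names below are my own, not a standard API): parametricity guarantees the isomorphism is natural, so none of the naturality side conditions need to be stated.

```haskell
{-# LANGUAGE RankNTypes #-}

-- An adjunction F -| G, specified only by the hom-set
-- isomorphism Hom(F a, b) = Hom(a, G b). By parametricity
-- the two directions are automatically natural in a and b.
data Adjunction f g = Adjunction
  { leftAdjunct  :: forall a b. (f a -> b) -> a -> g b
  , rightAdjunct :: forall a b. (a -> g b) -> f a -> b }

-- Unit and counit fall out by instantiating at the identity:
unit :: Adjunction f g -> a -> g (f a)
unit adj = leftAdjunct adj id

counit :: Adjunction f g -> f (g b) -> b
counit adj = rightAdjunct adj id

-- The classic example: pairing with s is left adjoint to
-- functions out of s, i.e. currying.
curryAdj :: Adjunction ((,) s) ((->) s)
curryAdj = Adjunction
  { leftAdjunct  = \h a s -> h (s, a)
  , rightAdjunct = \h (s, a) -> h a s }
```

The one equation left to check in this style is that `leftAdjunct` and `rightAdjunct` are mutually inverse; functoriality of `f` and `g` then comes for free, as described above.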
> the definition of a Cartesian closed category in most textbooks expands to something like 20 equations, but you can simplify it down to the usual few equations which specify a model of simply typed lambda calculus
Has that been written down somewhere convenient? I'd like to have a look.
By the way, I found lots of your comments on your post history also very interesting!
I agree that Haskellers tend to get ridiculously over-excited about category theory, but it's simply not true that there's only "one category in play". Categories abound in Haskell, for example the Kleisli category, and any instance of Category that isn't (->).
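For instance, here is a self-contained sketch of the Kleisli construction (base ships an equivalent newtype in Control.Arrow; this is a reimplementation for clarity): the objects are the same types, but the arrows are `a -> m b`, composed monadically.

```haskell
import Prelude hiding (id, (.))
import Control.Category
import Control.Monad ((<=<))

-- Kleisli arrows for a monad m: a -> m b.
newtype Kleisli m a b = Kleisli { runKleisli :: a -> m b }

-- A Category instance over the same objects as (->),
-- but with different arrows and different composition.
instance Monad m => Category (Kleisli m) where
  id = Kleisli return
  Kleisli f . Kleisli g = Kleisli (f <=< g)

-- Example arrows in the Kleisli category of Maybe:
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv n d = Just (n `div` d)

-- Compose: divide by 2, then by 3, short-circuiting on failure.
halfThenThird :: Kleisli Maybe Int Int
halfThenThird = Kleisli (`safeDiv` 3) . Kleisli (`safeDiv` 2)
```

Running `runKleisli halfThenThird 12` gives `Just 2`: the same types serve as objects, but `(.)` here is `(<=<)`, not function composition.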
One of the most applicable use cases I remember being constantly mentioned for category theory was the classification of some viruses.
Category theory makes sense when your business is large enough and you are looking to streamline everything. You could use some category theory to reorganize everything, from employees to customers to products, into better types.
But the idea that you absolutely need to understand this stuff to get going does not make sense to me.