I think it's because they solve a problem that people don't realize they have (and truth be told maybe they don't really have it), so the learning process has no motivation.
They solve a major problem that Haskell has (namely circumventing a type-anal non-strict compiler), so they're a big deal in Haskell, whereas in imperative languages straightforward code works just fine.
More to the point, compared to the case of for loops vs `map`, the use case for this kind of function chaining doesn't appear all that often (nor is it as syntactically localized as a for loop) so it's not as easy to abstract. I.e. the underlying pattern is not as pronounced, so it's not really recognized as a pattern/problem/something to DRY in the first place.
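To make the pattern concrete, here is a minimal Haskell sketch (the functions are hypothetical stand-ins): chaining steps that can each fail means writing the short-circuit plumbing by hand, which is exactly the boilerplate the Maybe monad gives a name to.

```haskell
-- Hypothetical example of the chaining in question: each step may fail,
-- and every failure has to short-circuit the remaining steps.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- Spelled out by hand, the short-circuiting is repeated boilerplate:
chainByHand :: Int -> Maybe Int
chainByHand x =
  case safeDiv 100 x of
    Nothing -> Nothing
    Just a  -> case safeDiv a 2 of
      Nothing -> Nothing
      Just b  -> Just (b + 1)

-- The Maybe monad DRYs exactly that plumbing into one operator:
chained :: Int -> Maybe Int
chained x = safeDiv 100 x >>= \a -> safeDiv a 2 >>= \b -> Just (b + 1)
```

Both versions behave identically; the second just stops restating the failure-propagation logic at every step.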
> They solve a major problem that Haskell has (namely circumventing a type-anal non-strict compiler), so they're a big deal in Haskell,
Type anal? lol. Would you want a type checker that wasn't anal and exact in what it expects?
> whereas in imperative languages straightforward code works just fine.
Except when you have to do things concurrently and end up using locks. Mixing pure and impure functions also makes the source of bugs harder to find, in my experience.
> More to the point, compared to the case of for loops vs `map`, the use case for this kind of function chaining doesn't appear all that often
Function chaining appears quite a bit in my Haskell code.
> (nor is it as syntactically localized as a for loop)
Huh?
> the underlying pattern is not as pronounced
But I can tell you that `map` will have no side effects and will return a specific type, whereas I can't guarantee either with a for loop.
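That guarantee is visible in the types alone. A small sketch (with the caveat that escape hatches like `unsafePerformIO` exist, so "no side effects" means no *declared* effects):

```haskell
-- map's type promises a pure, element-wise transformation:
--   map :: (a -> b) -> [a] -> [b]
-- No effect type (e.g. IO) appears, so the body cannot perform any
-- declared side effects, and the result is exactly a list of b.
-- An effectful traversal has to announce itself in its type, e.g.
-- mapM_ specialised to IO:
--   mapM_ :: (a -> IO ()) -> [a] -> IO ()
doubled :: [Int]
doubled = map (* 2) [1, 2, 3]
```

A for loop's body carries no comparable contract: it may mutate, print, or throw without anything in its "signature" saying so.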
> so it's not really recognized as a pattern/problem/something to DRY in the first place.
You realize that there are also `fold` and things like `concatMap`, right? Maybe I'm misunderstanding your argument.
I'm saying that in imperative languages, for loops are more "above the noise," syntactically and frequency-wise, than sequential error checking is, at least traditionally. It's not as obvious that there is anything there to give a name to, and so, lacking that underlying motivation, monads feel quite a bit more abstract than a lot of other FP concepts.
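For reference, here is what the "sequential error checking" pattern looks like once it has been named. This is a hedged sketch: the validators are hypothetical, and `Either String` stands in for any error type.

```haskell
-- Hypothetical validators; each step either fails with a message or
-- passes its result along.
checkAge :: Int -> Either String Int
checkAge n
  | n < 0     = Left "negative age"
  | n > 150   = Left "implausible age"
  | otherwise = Right n

checkEven :: Int -> Either String Int
checkEven n
  | even n    = Right n
  | otherwise = Left "odd age"  -- contrived, but it shows a second step

-- The imperative "check, bail out early, check again" ladder becomes a
-- flat sequence; the Either monad supplies the early exits:
validated :: Int -> Either String Int
validated n = do
  a <- checkAge n
  checkEven a
```

In imperative code, the same logic is scattered across `if (err) return err;` lines, so it reads as noise rather than as one reusable pattern.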
Actually, I think the "underlying motivation" for monads is precisely that they provide a useful abstraction with no real equivalent elsewhere. That does make it harder for people coming from other paradigms to form an intuition for the motivation than it is for abstractions with a close counterpart (an imperative one, say), but it is also why people with experience with monads in environments where they are widely used often look for ways to port them to other environments. Were there a clear, common equivalent, "monads in language foo" would be less popular.
"(namely circumventing a type-anal non-strict compiler)"
Monads don't circumvent anything.
"in imperative languages straightforward code works just fine"
Or seems to, but is actually deeply broken. Which is regularly the case in the code bases I deal with day-to-day.
"the use case for this kind of function chaining doesn't appear all that often"
Or it does, and you miss it. All of computation can be expressed as function chaining. Writing C or Python, I've often wanted to be able to constrain things at that level.
That's my point. Aside from not being amused by Haskell's type system, I'm not making any moral or qualitative judgments one way or the other. I'm talking about programmers in general. The parent's question was why monads are difficult to convey. My argument is that the pattern monads are DRYing is not obvious to people in the first place, and might not even be recognized as a pattern at all.
Hmm. I suppose there is some semantic ambiguity in "appear". If you just mean that any opportunities that arise go unnoticed, then I certainly agree with that part of your comment.