I truly think I am, but I'm still struggling to learn Haskell...
>the abundance of poor pedagogy is surely a big reason why so many people develop mathematical anxiety.
I'm one of them, but 28 years later, I'm finally realizing that mathematics actually isn't that hard.
>The programming world lately is inundated with tutorials that take their pedagogical style from the Teletubbies or at best Bob Ross. Maybe that is not a viable way of teaching Haskell.
I think you hit the nail on the head, here, and I would like to hazard the following opinion: this is a deeply cultural problem in the United States, where we operate under the implicit belief that learning should be fun.
I don't mean to suggest that learning isn't rewarding, deeply satisfying, and occasionally exhilarating, but beyond learning your ABCs, it's rarely fun in the traditional sense. The problem, as I see it, is that we're failing to tell the truth: learning non-trivial things is tedious, grueling, difficult, and tooth-grindingly frustrating while still being one of the most worthwhile things in life. Instead, we try to teach difficult or non-obvious concepts with the pedagogical equivalent of coloring books.
Stated differently, we simultaneously tell our youth that one has to "work hard" and "make sacrifices" for things like sports, while telling them that intellectual pursuits should be nothing less than blissfully entertaining.
As a general rule, I avoid anything labeled as being "for beginners", unless it's from a foreign publisher. I'm lucky to be a fluent speaker of French, and I've noticed that French textbooks systematically approach their subject matter in a rigorous, Cartesian, matter-of-fact way ... and this from grade 1!
You want to know how I finally grokked monads? I read this paper: http://repository.cmu.edu/cgi/viewcontent.cgi?article=2846&c...
It sucked. I mean it was awful. I spent weeks poring over this thing, fighting the kind of frustration I described above.
And I got a D+ in undergraduate pre-calculus. I'm not a "math guy", but like everybody else, I find that coloring books don't cut it after you start growing armpit hair. Big-boy concepts require big-boy methods.
Agreed, and thanks, that looks like an interesting paper.
In Gothenburg, Sweden, where I studied, Haskell is the first introductory language for students of CS and (I believe) CE. I know people who had barely even used a computer when they started there who are quite proficient with Haskell.
A side benefit of that is that the dorks like me who had been coding Perl and C and whatever for years before were thrown back to "oh shit, I have to actually study" mode and thus equalized with the people who were starting from scratch.
In terms of gender, when I started studying, women were way less likely to have been coding since the age of 6. In the exam for a later course on Advanced Functional Programming that included monad transformers, functional embedded languages, the basics of dependently typed programming, and a semi-large Haskell programming project, I was happy to see a female classmate who hadn't programmed at all before university get the best score of anyone. But I digress... (Shoutout to Elin if you're reading!)
I'm somewhat anxious that my way of talking about this stuff might make people think that Haskell is way more difficult to learn than other languages, which I don't think is the case. I think people probably forget how much trouble they had grokking concepts like "objects" and "superclasses" and "generators" and so on. Haskell might be difficult in some other ways. Maybe in some ways it forces you to really understand things more than other languages. But it's still very possible to learn to program in Haskell without being exceptionally good at math.
Also, some of what people find tricky about Haskell is also present in other ML-style languages. I learned about O'Caml and some SML, and also Scheme, before I got into Haskell. So I knew about functional recursive styles of programming, and algebraic data types, and probably the biggest new things were lazy evaluation and the complete lack of side effects leading to the need for monadic sequencing. Those are not insurmountable.
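To give a flavour of those last two ideas, here is a minimal Haskell sketch (the Shape type and the names are purely illustrative): an algebraic data type taken apart by pattern matching, and side effects sequenced in the IO monad with do-notation.

    -- A small algebraic data type: a Shape is either a Circle or a Rectangle.
    data Shape = Circle Double | Rectangle Double Double

    -- Pattern matching gives one equation per constructor.
    area :: Shape -> Double
    area (Circle r)      = pi * r * r
    area (Rectangle w h) = w * h

    -- Pure code has no side effects; effects live in IO and are
    -- sequenced monadically with do-notation.
    main :: IO ()
    main = do
      putStrLn "Enter a radius:"
      line <- getLine
      let r = read line :: Double
      putStrLn ("Area: " ++ show (area (Circle r)))

Nothing in that requires any deep math; it is just a different way of arranging the same program.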
Edit: By the way, I also appreciate how Haskell is becoming a sort of grass roots study group for coders to learn more about math, algebra, category theory, and what have you. It is not at all constrained to the universities anymore. This article is at the top of HN right now! Interesting times.
I think you're on to something here. It may be that programming languages that require you to 'reboot' are better suited to people totally new to programming than to people that already know how to program using some other methodology.
This is because, compared to your old, well-used and worn tools, the new tool starts off as a setback rather than an advantage, and we more or less expect an instant productivity boost from new technology. That setback then becomes a huge hurdle to overcome.
The question for a programmer faced with a problem to solve is always one of investment: should I invest my time and effort into this new technology that will make me x times more productive or simply a better programmer (or someone capable of tackling more complex problems) a year or more down the line, or should I break out my trusty tools and turn a buck today (or turn a buck for my employer)?
In most cases that equation works out in favor of the old tools, so the energy required would need to be offset by concrete short-term gains if a person is to switch from one technology to another.
That lack of concrete short-term gains is the big stumbling block for old hands, and if you manage to hack that then I think a lot of these better technologies would find traction.
People new to programming don't have that particular hang-up, and I'd expect them to do much better with such new or different tools. (But they'd have the same problem if they needed to switch to other tools; it's just that in this particular instance they have a blank-slate advantage, one that evaporates the moment they become proficient in that technology.)
I went to the same school (I presume) in Gothenburg. I came into the CS classes with a decent amount of math background and a small amount of programming classes (FORTRAN, C, Ada and other kinds of evil you use in chemistry and physics). Our CS intro classes were Java and Haskell. Haskell didn't feel like a reboot to me; it felt like using math to write programs, so at least for me, Haskell felt a lot more accessible than learning OO concepts in Java did. I am sure lecturers caring more about Haskell also mattered, but at least for my background, getting the Haskell concepts was easier than getting the Java concepts.
This comment and the parent comment were really interesting. They've made me think about my current path and some things I have done and learned that have incrementally pushed me towards functional programming over a long window of time.
I have played with Scheme, along with a half-completed SICP read-through. I have also played with Haskell, along with partial read-throughs of LYAH and Real World Haskell. Those experiences, although incomplete, have been important to me as a programmer. I don't think those actually 'sold' me on FP though.
I took a detour for some time and decided to become a better programmer using OO methodologies. I used TDD, I memorised SOLID and all that, read up on design patterns and then used them practically. All well and good. But after my toying with Scheme, Haskell and the associated literature, I wasn't entirely satisfied. The code I produced was always a bit... I'm not sure, but not entirely satisfying. It looked and worked better than what I get given to review when considering freelancers for an extra pair of hands - part of that is because of PHP, I think. I carried on this way for a while in any case.
In my own self-learning, I'd started looking at language design and compilers. I read some books and decided I didn't have enough mathematical skills to consume the texts. I turned to learning discrete mathematics. That was a big eye opener. As I was reading about formal logic, set theory, proofs, etc, my mind kept turning to the problem of applying this in my work. Haskell and FP came to mind too. I believe doing this helped it finally click that a functional approach can be taken successfully in most of the work I do.
Whilst I was doing this, I had a fairly simple project that, if designed correctly, could make my life easier in future projects. I took an OO approach, designed the UML, started writing tests, etc. I decided that it was over-engineered, or at least that it was a lot of work for what I was trying to do. So I decided to take an FP approach instead, and I completed the project, in the same language, with far less code that I was happier with. That was almost an 'epiphany moment'. I started getting seriously interested in functional programming then. I am now starting to see the flaws and bloat in the approaches I was taking before.
I recently read Out of the Tar Pit again. I had read it before but I didn't really get it. This time it excited me and made my brain whir with ideas, especially on how I can make these approaches compatible with the constraints of my day job.
I actually now digest as much as I can about FP. Haskell, as an important player, frequently comes up. After laying the groundwork, monads make a lot more sense. Haskell as a language excites me.
In my opinion, and I am self-taught when it comes to computers so it may not be 100% applicable, here is what I did to start my transition from imperative to declarative:
a) Experiment with functional programming. Not actually get it, and be OK with that. Carry on down the path.
b) Read HN and note articles posted about FP
c) Continue learning about imperative programming and consume the advice from OO thought leaders. Apply it to your work.
d) Learn a bit about maths and what it actually is, especially realising it's not just rote computation and application of algorithms from school.
e) Take a functional approach instead of an OO approach within the language you are working with. I did this in PHP. Not only will it show you how FP compares, it will also show you why Haskell/OCaml/etc. exist, by way of the deficiencies for FP in your chosen language (a short Haskell contrast follows this list). In PHP, the type system (arrays as your Everyman...), the inconsistent type hints, the explicit closing over of variables in outer scopes and the piss-poor support for modules are some of them. The biggest is that the only way to pass a function that lives in the function namespace is to use a fully-qualified string (so function name along with namespace - you get an undefined constant error otherwise). With Java, it might be that you cannot have standalone functions, which would suck.
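For contrast with the PHP pain points in (e), here is a tiny Haskell sketch (illustrative names only) of functions as ordinary values: you pass them by name with no strings involved, and closures capture surrounding variables implicitly, with nothing like PHP's explicit use (...) clause.

    import Data.Char (toUpper)

    -- Functions are ordinary values: pass them by name, no strings involved.
    applyTwice :: (a -> a) -> a -> a
    applyTwice f = f . f

    -- The returned lambda closes over n implicitly; there is no
    -- equivalent of PHP's explicit `use ($n)` clause.
    makeAdder :: Int -> (Int -> Int)
    makeAdder n = \x -> x + n

    main :: IO ()
    main = do
      print (applyTwice (+ 3) 10)          -- 16
      print (map (makeAdder 5) [1, 2, 3])  -- [6,7,8]
      putStrLn (map toUpper "first-class") -- "FIRST-CLASS"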
This helped a lot for me but I'm sure spending a long time learning about software down to the hardware in my spare time contributed a lot to this too.
I had similar moments of epiphany while taking a course on abstract algebra and writing down some theorems in the form of Agda definitions.
As you may know, Agda is a Haskell-inspired language with an even more powerful type system, one that makes it possible to define arbitrarily strong types, so that for instance you can define the type of a function that sorts a list, and then know confidently that any implementation of that type does indeed sort a list.
Anyway, when learning Agda, one comes across the notion that programs and proofs are in some sense the same thing. In other words that the code for a function can be read as a proof that the type of that function is inhabited. This seemed to me kind of bizarre and magical at first...
The more I thought about it, the more it made real sense, and not only that, it helped me understand proofs in a much more intuitive way. Before, if you had asked me what a proof is, I might have said "it's a very strict way to argue that a statement really is true." I didn't know that such proofs can be given formal structure... I couldn't really explain how to relate it to programming or functions...
But now that I've to some degree internalized the notion, if you ask me what a proof is, I might say it's an unambiguous description of how to transform an input into an output.
If you call the input P and the output Q, then P is a premise and Q is a conclusion, and the proof is the "arrow" P -> Q that through some well-defined sequence of "truth-preserving transformations" gives you knowledge of Q assuming nothing but P (and some fundamental axioms).
And in the world of programming, P is an input argument and Q is a resulting value, while the "arrow" P -> Q is a well-defined sequence of correct transformations that produces a Q-value given only a P-value (and some fundamental primitive functions).
So in this sense you can say that the implementation of a function is actually a kind of logical argument.
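To make that concrete in Haskell terms (the names below are my own, not library functions): read each type as a proposition and each total function of that type as a proof of it, with implication as the function arrow and conjunction as a pair.

    -- Modus ponens: from P -> Q and P, conclude Q.
    modusPonens :: (p -> q) -> p -> q
    modusPonens f x = f x

    -- Conjunction is commutative: P and Q implies Q and P.
    andComm :: (p, q) -> (q, p)
    andComm (x, y) = (y, x)

    -- The classic axiom "P implies (Q implies P)".
    weaken :: p -> (q -> p)
    weaken x _ = x

The caveat in the next paragraph is exactly why "total" matters here: in Haskell an exception or undefined inhabits every type, so such a "proof" only counts if the function cannot fail.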
If you want to make use of this insightful connection (the "Curry-Howard equivalence"), however, your programming language needs to be pretty strict. If your P -> Q argument suddenly throws a null pointer exception, it's not much of an argument—it means your claim of being able to produce a Q given P is false.
This connection really helped me to understand both proofs and programs in a deeper way.
1. I'd heard of Agda but never read any code. What a brilliant language. I've added it to my list of my languages to spend time with.
2. I think your experience with Agda, especially the realisation that programs = proofs leading to the Curry-Howard correspondence, is exciting and rightfully bumps Agda to the top of my list.
Functional programming has opened my eyes with regard to type systems too: an inconsistent and weak type system is a bigger liability than I once thought - I'm realising that daily, especially with null values and "trying to call method on non-object" errors.
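As a concrete (and hedged) illustration of that null-value point, with hypothetical user/lookup names: in Haskell a possibly-missing value is a Maybe, and the compiler forces you to handle the Nothing case before you can touch the result, which is exactly the "call a method on a non-object" class of error made impossible.

    import qualified Data.Map as Map

    type UserId = Int
    type Name   = String

    users :: Map.Map UserId Name
    users = Map.fromList [(1, "Ada"), (2, "Grace")]

    -- Map.lookup returns Maybe Name, not Name-or-null.
    lookupName :: UserId -> Maybe Name
    lookupName uid = Map.lookup uid users

    -- The type system refuses to treat a Maybe Name as a Name;
    -- the absent case has to be handled explicitly.
    greet :: UserId -> String
    greet uid = case lookupName uid of
      Just name -> "Hello, " ++ name
      Nothing   -> "No such user"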
> ... Our intention is not to use any deep theorems of category theory, but merely to employ the basic concepts of this field as organizing principles. This might appear as a desire to be concise at the expense of being esoteric. But in designing a programming language, the central problem is to organize a variety of concepts in a way which exhibits uniformity and generality. Substantial leverage can be gained in attacking this problem if these concepts are defined concisely within a framework which has already proven its ability to impose uniformity and generality upon a wide variety of mathematics.
(John Reynolds, "Using category theory to design implicit conversions and generic operators", 1980.)
This is completely unrelated, but I found this paper to be similarly brilliant in its simplicity, clarity, and rigorous explanation: https://ramcloud.stanford.edu/raft.pdf
I have a hard time already convincing myself that correct sequential imperative algorithms are indeed correct - which probably means that I'm a bad programmer, but I can't change who I am. So convincing myself that Raft, a concurrent algorithm, is correct (meets its specification) from an informal description is completely out of the question - where is the formal proof of correctness?