Hacker News

If you want to see how the Curry-Howard isomorphism works in practice, this is a very accessible introduction: https://groups.seas.harvard.edu/courses/cs152/2021sp/lecture...



Accessible for who, do you mean? That is in no way 'accessible' without solid exposure to quite a lot of discrete maths (which I have and it's still hard going).

λf:(τ1 × τ2) → τ3. λx:τ1. λy:τ2. f (x, y) has type ((τ1 × τ2) → τ3) → (τ1 → τ2 → τ3).

Yeah, that's accessible.


It's accessible to anybody with any programming experience who is willing to pay attention. You are just taking a formula out of context, omitting the plain English explanation that comes before it, and pretending it's some kind of alien math. It's not scary at all; in fact it's very simple.

Let's do this in Python. First, write a function that curries:

  def curry(f):
      def g(x):
          def h(y):
              return f(x, y)
          return h
      return g


  my_func = lambda x, y: 10 * x + y
  curried = curry(my_func)
  print(curried(5)(3))
  # output: 53
Now, let's type it in such a way that mypy --strict won't complain:

  from typing import TypeVar, Callable

  A = TypeVar('A')
  B = TypeVar('B')
  C = TypeVar('C')

  def curry(f: Callable[[A, B], C]) -> Callable[[A], Callable[[B], C]]:
      def g(x: A) -> Callable[[B], C]:
          def h(y: B) -> C:
              return f(x, y)
          return h
      return g
What's the type of curry? Let's hover our cursor over the function name, and lo and behold:

  Callable[[Callable[[A, B], C]], Callable[[A], Callable[[B], C]]]
That's it. That's all it's saying.
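And to see the isomorphism flavour of this, the conversion goes the other way too. Here's a sketch of uncurry in the same mypy-friendly style (my own names, not from the lecture notes):

```python
from typing import TypeVar, Callable

A = TypeVar('A')
B = TypeVar('B')
C = TypeVar('C')

def uncurry(f: Callable[[A], Callable[[B], C]]) -> Callable[[A, B], C]:
    # Inverse of curry: feed both arguments at once to a curried function.
    def g(x: A, y: B) -> C:
        return f(x)(y)
    return g

curried = lambda x: lambda y: 10 * x + y
print(uncurry(curried)(5, 3))
# output: 53
```

Its type is just the arrow of curry reversed: Callable[[Callable[[A], Callable[[B], C]]], Callable[[A, B], C]].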


This would be a really good (and much more accessible than the linked pdf) explanation without the condescending tone. This is some pretty advanced math, and pretending otherwise isn't constructive. If someone says "I'm having trouble understanding this", it's not their fault, and it's generally not laziness - it just means that the explanation isn't suited to them.


It's not any kind of advanced math at all! The quoted fragment above is just a line of computer code. Anyone who has done any programming in any typed language should be easily able to understand it, though they may need to look up the syntax.


Nope, not me. But I've only been programming for 40 years, so maybe I'll get it when I've got more experience.

In other words, baloney. If you don't do that kind of programming, that stuff is very opaque. I might be able to get it if I were willing to stare at it for another 15 minutes, but I'm not willing.


15 minutes is not a long time. "Easily understand" doesn't mean "instantly understand with no background reading".


You're just ignoring what he is saying.


What part am I ignoring?


> In other words, baloney. If you don't do that kind of programming, that stuff is very opaque.

That part.


Well, I interpret that as the assertion that it's more opaque than other types of programming. But I disagree: I think it's actually simpler in terms of the syntax and the amount of prior understanding required. My blunt reply is that asserting a particular example is inaccessible after dedicating only 15 minutes to it is silly. I'm sure most people who suggest that this stuff is opaque had no problem learning PCRE or complicated SQL joins, and didn't complain that those took more than 15 minutes either. Of course TT is a deep field and there are many complicated parts of it, but the syntax, the rules, and the example given are not opaque, and are very understandable to anyone who can tackle languages and abstraction.


Okay then. Please give some links to the following.

- Lambda calculus tutorial that gets you up to what you need to understand the referenced paper in 15 minutes

- A description of Hoare notation that I can also understand to adequate depth to understand the referenced paper, also in 15 minutes

- A description of classical logic etc. etc. 15 minutes

- A description of intuitionistic logic etc. etc. 15 minutes (presumably in how it differs from classical)

These would be very useful. If you can also explain them in a way that would allow me to use the information I've learnt in an hour to do something useful with programs like represent them, transform them, verify them against a specification, I would gladly spend a couple of weeks or a month doing that, genuinely.

If you can't do any of these things, please stop posting how easy it all is


Mate, the point I was making is that 15 minutes is not a reasonable time frame to learn new programming concepts in.


You're right, I should read more carefully. Nonetheless, my point still stands: if you can give me some resources to learn these things, with considerably more than 15 minutes allowed, bearing in mind I have no one to ask when I get stuck, and that my end aim is to actually use these things rather than just treat them as interesting (which I already find them), then I'd be grateful.


> You are just taking a formula out of context, omitting the plain English explanation that comes before it,

which, this?

"We saw earlier in the course that we can curry a function. That is, given a function of type (τ1 × τ2) → τ3, we can give a function of type τ1 → τ2 → τ3"

That helps no-one unless you have some serious hand-holding. I know the type of curry; I've written curry/bind/whatever in C#, Scala, JS and likely others. I have a background in this. It's not 'accessible' to mere mortals, and even I'm not familiar with some of the notation here. Stop pretending it's just laziness on other people's part.

(oh yeah, did I forget to put constructive logic in my list above? Brouwer's intuitionism? FFS get real, I never even heard of that until a few years ago)


It is accessible if you know a few bits of syntax, such as → being right-associative so τ1 → τ2 → τ3 means τ1 → (τ2 → τ3). And others such as "f: X → Y declares f as a function from X's to Y's." which you can find in any math textbook. The point of using such a terse notation rather than faffing around with Callable[…] is to make it possible to work with larger examples without getting bogged down in the verbiage. It's why we use symbols to solve equations in K-12 math.
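To illustrate that right-associativity in the thread's own Python spelling (a toy example of mine, not from the notes):

```python
from typing import Callable

# τ1 → τ2 → τ3 means τ1 → (τ2 → τ3); spelled out with Callable, the
# parenthesization is forced to be explicit:
IntToIntToInt = Callable[[int], Callable[[int], int]]  # int → (int → int)

add: IntToIntToInt = lambda x: lambda y: x + y

step = add(2)    # supply the first argument: what's left is int → int
print(step(3))   # supply the second
# output: 5
```

The terse arrow notation says the same thing with far fewer brackets, which is the point.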


The curse of intelligent people, the one thing they can't understand: never being able to understand that other people don't understand.

There's something of Cassandra about it.


If someone can learn C type syntax this is MUCH simpler. That doesn't mean you don't have to spend a little bit of time learning how it works, but it is not some kind of number-theory level maths construction only accessible to savants.



By trying to look smart while assuming this is simple, you just look stupid. You need to improve your soft skills.


It's just the 'mathy' type-setting which is hard to read. Types and terms are in the same font & colour and they're on the same line. To paraphrase:

You can curry a function (supply its arguments one-by-one instead of all at once).

You have:

  f(x, y) which has type (τ1 × τ2) → τ3
You want:

  f x y which has type τ1 → τ2 → τ3
Here's a function which converts what you have to what you want:

  λf. λx. λy. f(x, y)
It has type:

  ((τ1 × τ2) → τ3) → (τ1 → τ2 → τ3)
This type "Curry-Howard"s to the logical formula:

  (A ∧ B ⇒ C) ⇒ (A ⇒ (B ⇒ C))
Since this logical formula is a tautology, the above conversion function preserves the meaning of the function it converts.
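And "tautology" isn't hand-waving: there are only eight assignments of A, B, C, so a brute-force truth table settles it. A throwaway Python check (mine, not from the notes):

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    # Material implication: p => q is false only when p is true and q false.
    return (not p) or q

# (A and B => C) => (A => (B => C)) holds under every assignment.
is_tautology = all(
    implies(implies(a and b, c), implies(a, implies(b, c)))
    for a, b, c in product([True, False], repeat=3)
)
print(is_tautology)
# output: True
```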


> (A ∧ B ⇒ C) ⇒ (A ⇒ (B ⇒ C))

> Since this logical formula is a tautology

You forgot to say 'obviously' /s


Seems accessible, if you're willing to look up what you don't understand.


By that definition, "Baby Rudin"[1] is an accessible introduction to analysis, as long as you're willing to look up what you don't understand (i.e. probably everything, the first time) in some other source.

[1] https://maa.org/press/maa-reviews/principles-of-mathematical...


Rudin is not accessible if you're just looking things up. But it is if you have someone to explain stuff when you get stuck.

This one doesn't need that.


It doesn’t matter what you think is accessible, it only matters if the person trying to access it thinks it’s accessible. I don’t.


If you can learn how to program, you can learn how to understand this. You first must learn the notation, just as when you learned "def", or "func", or "[x for x in ...]", or "int main() { ... }", or whatever your favorite language's notation is.

"Accessible" doesn't necessarily mean "with just knowledge of English, you can jump to this example and immediately understand what it means". Some work is always required; accessible just means this work is not too hard.

For example, Python's notation has been occasionally described as "accessible", yet I wouldn't expect my mum to immediately understand a non-trivial Python snippet (so no "print('hello')") without any previous explanations.


Maybe if you view "programming" as one homogeneous block of understanding, but there is a difference between doing basic low-level programming and effectively writing abstract code in Haskell. I can only speak for myself, but layers of abstraction make things exponentially harder for me to follow. Compact mathematical notation on top of that feels a bit exhausting.

It can probably be called accessible for people already interested or talented in maths, but there's a reason most people don't just casually learn astrophysics for fun even though some introductory material might be considered accessible - not everyone can do it without unreasonable amounts of effort.


> Maybe if you view "programming" as one homogeneous block of understanding

Oh, I'm not saying it's the same as programming in your favorite language.

I'm saying if you have the mental machinery to understand a new language, you also have it to learn lambda calculus. Trust me, it's not that hard. It just looks extraneous out of context because you are not familiar with it.

> It can probably be called accessible for people already interested or talented in maths

I'm not particularly good at maths and I found introductory-level lambda calculus not very difficult. The notation is different from, say, a C program. You just learn this notation, which your professor will explain. Nothing too terrible.

> layers of abstraction

I'm confused. Which layers of abstraction? I mean, it's an abstraction just like C or Python or "1 + 1" are abstractions. There are rules and you learn how to apply them.


It's about the entire set of formalisms given in the pdf, not just lambda calculus. The argument here about LC was because I had just given a snippet of it, but it's not the whole paper.

Now, the next bit – so I've spent my 15 minutes reading up on each of these formalisms (lambda calculus, intuitionistic logic, Hoare notation, whatever else). So now I can read the paper, and I do, and now I have absolutely no way of making use of anything I've learnt. I personally spent a lot of time on the computer science side of things, and I have got frankly just about nothing out of it. So, frankly again, there's no point learning it (from a utilitarian perspective).

When I was a kid I picked up a book on Fortran 4 and tried to learn it from the book, without a computer. That's pretty much the situation here – a passive notation that, had I managed to internalise it, would have been completely useless because I had no computer to run it on. And I'm tired of this discussion of people saying it's easy and it's useful, because it's not easy, and it's not useful without a whole lot more study and application to bridge the abstraction of the formalism to the actual programming environment where I can use it.

Please just stop repeating stuff.


Ok, but now you are making a completely different claim!

You started out saying, "oh, these lecture notes require some advanced math background to understand", and quoted the curry function. And then a lot of people explained that no, there is no advanced math; this is just a function written in a programming language. And now it seems you agree that if you spend only 15 minutes you can understand the lecture notes. That's all I, and others, wanted to say.

Separately, it's the case that understanding this will not be very useful in a day-to-day programming job. That's just because it's not all that useful! (See other discussion on this same HN page.) I think it is very cool and philosophically interesting in itself, and it offers an interesting point of view if you are designing new programming language type systems, but it doesn't have very much "utilitarian" use.


> And now it seems you agree, if you spend only 15 minutes you can understand the lecture notes

that was sarcasm.

> I think it is very cool and philosophically interesting in itself, and it offers an interesting point of view if you are designing new programming language type systems, but it doesn't have very much "utilitarian" use.

It has phenomenal use is my point (eg. formal verification, an interest of mine), but only at a vastly higher level than "I've been to the lectures". What I keep saying is, most people aren't interested in the abstract, and neither am I. Give me a practical application and suddenly I'll 'get it'. Without that application it's valueless because I have no reason to engage with it, and actually cannot. Put another way: its application and its understanding are interlinked. You can separate them; I and others can't.

I know this stuff is useful and I literally spent years trying to understand it. For someone like me it just isn't easy, and you keep persistently not understanding that.


I'm going to be cynical and make the following observation:

Given that there's this correspondence between math and programming languages, what it says is just that if you understand types in a programming language, there's no point in understanding the notation in math.

And this is especially true if you take a cynical attitude towards theorem provers in the sense that real proofs are in the minds of the thinker, not in the formalism.

The fact that the Curry-Howard correspondence was proposed in the 1930s suggests that it was a relatively "primitive" analogy that might be restated in some obvious way in today's language of computation. Maybe... "you can write programs to be automated theorem provers for formal systems"?

I wonder what CS people would have to say about the efforts "to prove a program correct". If a program is equivalent to a proof, "proving a program correct" seems to be "proving a proof correct".

I don't really know what I'm talking about though.


You've got the right idea. A program is a proof for the proposition made by its type. Only for the right languages though, and not in the naive sense of `assert(1+1 == 2)` proves the assertion.
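A loose Python-flavoured sketch of "a program is a proof of its type" (informal, names mine; Python's type system is far too weak for real proof terms, so treat this as analogy only):

```python
from typing import TypeVar, Callable

A = TypeVar('A')
B = TypeVar('B')
C = TypeVar('C')

def identity(x: A) -> A:
    # A total function inhabiting A -> A: a "proof" of A => A.
    return x

def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    # From proofs of A => B and B => C it builds a proof of A => C
    # (hypothetical syllogism), by inhabiting the corresponding type.
    return lambda x: g(f(x))
```

In a language like Agda or Coq the "right languages" caveat becomes precise: only total, side-effect-free programs count as proofs.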


OK, not what I expected. Surely a spec is a spec, the program fulfils the spec hopefully, and the proof is that the program implements the spec. How do you see it? thanks


Curry-Howard shows that for a logical proposition (A) with a corresponding constructive proof (B), there will be a type (A') with a program (B'). It's not about proving desired properties of programs. However, you can use the CH principles to do this too, as long as you can encode the proposition about a program in its type. This will not often look like the dataflow type of the naive program, though. Look at Software Foundations for real detail; I'm no expert at this stuff, just aware of the basics.


A program is not a proof.


There's a lot of talk in these comments that assumes that certain people are just ‘built different’ and can grok complicated new concepts that are unrelated to anything they know by virtue either of being smart or of being nerds with no life.

I think there are three different types of 'background' required to read something, none of which are innate or unattainable without a particular kind of education (e.g. a university degree). This confuses discussions about whether something is 'accessible', because the person complaining that it is inaccessible might be complaining of a gap between their current state and the state assumed by the book in any of those three categories.

1) Factual background.

What facts do you need to know, what theories do you need to understand, in order to be able to follow what the author is saying? If the author is trying to teach you classical mechanics but you don't know about addition, you're not going to get very far until you go and learn to add and come back.

A source can address this (reducing the amount of background required) by providing more background in the material itself. But it obviously doesn't scale that well if every explanation of classical mechanics comes with an explanation of addition (and counting, etc.), and the background you do get, if it isn't the primary goal of the material, is often rushed and not as well-written as a dedicated source. So instead (or as well as providing a quick refresher for people who already have the background but might have forgotten it) authors will often refer to other materials that provide better explanations.

Inaccessibility arguments from this perspective look like ‘I don't know what this word means’ or ‘I don't know how to read this notation’.

2) Motivational background.

What applications do you need to know about in order to find out worthwhile to learn a topic? If you don't know about bicycles, levers, etc., learning about classical mechanics can seem like an abstract waste of time.

As a function of the complexity of a topic, it necessarily takes a certain amount of time before the author's thoughts are complete enough to relate to something that the reader will find compelling, though a good author will try to minimize it. People vary here in their trust of an author, and there are certain external factors that can influence this as well: e.g. if the material is very well-regarded by other people the reader might be happier to spend time figuring out what it's trying to say, or if the reader has a background in a similar topic they might be able to imagine some motivations that will keep them going.

That's not to say that the reader should always have this: indeed, most materials are actually useless to most people. One of the pre-tasks of reading something is deciding whether it's something that is (potentially) even useful to read, and for most things your answer should be ‘no’.

Inaccessibility arguments from this perspective look like ‘I don't care about this’ or ‘this is abstract nonsense’. Note that text is never actually completely abstract: authors, like readers, are always motivated by practical concerns. But there might be a long chain of problems and solutions from a problem that the reader is aware of to the solution that the author is explaining — in fact, in pure maths, the applications of a theory aren't always even known — but there's good reason to believe that they exist.

3) Attention span.

Difficult concepts just take a long time to learn. This can be addressed somewhat in the text by providing longer explanations in which the concept is spelt out more elaborately, but then you risk alienating readers who've lost track of your point by the time you've got to the end of it — there's a balance to be struck, and the correct point on the continuum depends on your intended readership. Also, on the reader's end, this is to some extent a ‘muscle’ that needs to be trained and regularly exercised in order to not lose it.

All of these things lead to valid arguments of inaccessibility, but the failure isn't inherent in the writing: instead it indicates a mismatch between the writing and the reader. If the writing means to target a certain type of reader, that can make it badly written; but no writing can target every possible reader. If some writing seems inaccessible for you, if you're interested in it you can work on those three things to make it more accessible (which will involve some combination of staring at the target writing until it makes sense, and reading around related writings to get knowledge and practice reading about that kind of thing). Alternatively, you might be able to find other writings on the same topic with prerequisites that are a better match to you. For example, I've been trying to write a series of posts explaining CS concepts that I think are cool or useful, assuming the knowledge and motivation of an enterprise programmer, and drawing the chain of motivation/explanation as far back as necessary to get to concepts they're likely to know already. My writing on the lambda calculus is here:

https://twey.io/for-programmers/lambda-calculus/

Please excuse the sloppy writing, this is an attempt to get back into technical/explanatory writing after many years — but criticisms (with that particular intended audience in mind) are very welcome!


A quick skim says this is a really thoughtful bit of writing; I'll try to do it justice tomorrow but right now I can't. So thank you, and I'll try to respond tomorrow in a bit more depth. Thanks.


Certain proofs about a program, implemented using types, are useful in high level programming, as well as low level programming. See Rust in the linux kernel [1], and Rust on Android[2].

One way to think about it, is that proofs about a program are a continuum, rather than one or the other. Sure, going all the way, disallowing mutability and requiring types for everything, might make the system more provable, but slower as well.

There are some different ways to look at a program:

Corporate developer: "Programs are to be run, and occasionally read or proved correct"

Donald Knuth: "Programs are to be read, and occasionally run"

Type Theorist: "Programs are to be compiled, and occasionally run or read"

[1] https://thenewstack.io/rust-in-the-linux-kernel/

[2] https://security.googleblog.com/2023/10/bare-metal-rust-in-a...


Quiz: what is the opposite of patronising? Because that's what you're being. Not talking down to people but assuming more than is typical.

You're clearly highly intelligent, evidently more so than I am. Additionally my mind doesn't work like yours, I'm not an abstract thinker. Plonk stuff like this in front of me and I can decipher it eventually, if I can get some clear statement of the notation and its semantics anyway, and there is stuff in there I'm not familiar with.

Most people aren't abstract thinkers. They are motivated by practical concerns. I spent an enormous amount of time reading up on stuff like this and I really can't see the value to it to me as a programmer. It's deeply frustrating because I know it has value, but I just can't use it. It's interesting, I know it's useful, but I can't use it.

Please allow that other people think differently and are motivated differently, and don't assume that what's easy for you is for them. I wish I had your cranium.


> Quiz: what is the opposite of patronising?

Humble, respectful, modest?

I respect that different people will find different things hard or easy, so that a generalization is never truly possible (except, uh, this sentence?). On the other hand, speaking with absolutely no generalizations whatsoever is simply not possible, because so many caveats and exceptions would make communication impossible.

I do stand by my initial assertion that, in general, if you have a mind capable of learning a new programming language, you can learn lambda calculus, and it won't be particularly hard.


> Most people aren't abstract thinkers. They are motivated by practical concerns.

Abstract thinking is motivated by practical concerns, namely being able to think comfortably about larger problems without being limited by one's capability for complex reasoning. Most people don't do it because they aren't used to, or they have never been taught how.


Under that constraint, what wouldn't count as an accessible exposition?


"I have a truly marvelous demonstration of this proposition which this margin is too narrow to contain."


So the only way for an explanation not to be accessible is to not provide it? Okay.


In the spirit of the topic: I ran the truth tables, and you've got yourself a valid argument. Feel free to pin this to the top.


Something involving implicit knowledge and conventions that is not spelled out.


Oh get real. I have a background in this. Lambda notation, Hoare notation (I think), 'standard' logic notation.

Don't talk rubbish about 'look it up'. You need a degree to get this. Not a metaphorical degree, the real thing.

Edit - and a brain bigger than mine.


> Don't talk rubbish about 'look it up'. You need a degree to get this. Not a metaphorical degree, the real thing.

You really don't need a degree. The Curry-Howard isomorphism is taught in undergraduate CS courses, to students who are simultaneously being introduced to lambda calculus.

While I wouldn't say it's trivial, it's not rocket science either. Most students get it and pass the exams.


It's an introductory CS concept, requiring a degree would make things a bit circular...


I would interpret "accessible" here as meaning relatively accessible compared to what it could be. The underlying concept is inherently difficult, and so it will always be somewhat inaccessible, but within the space of all possible explanations, I would agree that it's on the more accessible end rather than the less.


Nifty. That was a nice 4-page exposition of the correspondence. There are no general proofs, but the correspondence between notations is very suggestive and invokes interesting intuitions, I think. The example derivations with the uncurry example are quite fun to work through. The paragraph about continuations corresponding to double negation is also super neat!

Now I just wish there were something similar for the missing category theory third of the trinity mentioned in another comment.



