
If you can learn how to program, you can learn how to understand this. You first must learn the notation, just as when you learned "def", or "func", or "[x for x in ...]", or "int main() { ... }", or whatever your favorite language's notation is.

"Accessible" doesn't necessarily mean "with just knowledge of English, you can jump to this example and immediately understand what it means". Some work is always required; accessible just means this work is not too hard.

For example, Python's notation has been occasionally described as "accessible", yet I wouldn't expect my mum to immediately understand a non-trivial Python snippet (so no "print('hello')") without any previous explanations.




Maybe if you view "programming" as one homogeneous block of understanding, but there is a difference between doing basic low-level programming and effectively writing abstract code in Haskell. I can only speak for myself, but layers of abstraction make things exponentially harder for me to follow. Compact mathematical notation on top of that feels a bit exhausting.

It can probably be called accessible for people already interested or talented in maths, but there's a reason most people don't just casually learn astrophysics for fun even though some introductory material might be considered accessible - not everyone can do it without unreasonable amounts of effort.


> Maybe if you view "programming" as one homogeneous block of understanding

Oh, I'm not saying it's the same as programming in your favorite language.

I'm saying if you have the mental machinery to understand a new language, you also have it to learn lambda calculus. Trust me, it's not that hard. It just looks extraneous out of context because you are not familiar with it.

> It can probably be called accessible for people already interested or talented in maths

I'm not particularly good at maths and I found introductory-level lambda calculus not very difficult. The notation is different from, say, a C program. You just learn this notation, which your professor will explain. Nothing too terrible.

> layers of abstraction

I'm confused. Which layers of abstraction? I mean, it's an abstraction just like C or Python or "1 + 1" are abstractions. There are rules and you learn how to apply them.
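To make that concrete, here's a rough sketch in Haskell (the term and the names `twice` and `example` are just mine for illustration, not from any lecture notes): a lambda-calculus term is only an unfamiliar spelling for an ordinary higher-order function, and "applying the rules" is just substitution.

    -- The lambda-calculus term  (\f. \x. f (f x))  written in Haskell syntax.
    twice :: (a -> a) -> a -> a
    twice = \f -> \x -> f (f x)

    -- Beta reduction is substitution, step by step:
    --   twice (\y -> y + 1) 0
    --     => (\y -> y + 1) ((\y -> y + 1) 0)
    --     => (\y -> y + 1) 1
    --     => 2
    example :: Int
    example = twice (\y -> y + 1) 0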


It's about the entire set of formalisms given in the pdf, not just lambda calculus. The argument here about LC was because I had just given a snippet of it, but it's not the whole paper.

Now, the next bit – so I've spent my 15 minutes reading up on each of these formalisms (lambda calculus, intuitionistic logic, Hoare notation, whatever else). Now I can read the paper, and I do, and I have absolutely no way of making use of anything I've learnt. I have personally spent a lot of time on the computer science side of things, and frankly I have got just about nothing out of it. So, frankly again, there's no point learning it (from a utilitarian perspective).

When I was a kid I picked up a book on Fortran IV and tried to learn it from the book, without a computer. That's pretty much the situation here – a passive notation that, had I managed to internalise it, would have been completely useless because I had no computer to run it on. And I'm tired of people in this discussion saying it's easy and it's useful, because it's not easy, and it's not useful without a whole lot more studying and application to bridge the abstraction of the formalism to the actual programming environment where I can use it.

Please just stop repeating stuff.


Ok, but now you are making a completely different claim!

You started out saying, "oh, these lectures require some advanced math background to understand", and quoted the curry function. And then a lot of people explained that no, there is no advanced math, this is just a function written in a programming language. And now it seems you agree, if you spend only 15 minutes you can understand the lecture notes. That's all I, and others, wanted to say.
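For reference, the kind of curry function being argued about looks roughly like this in Haskell (I'm reconstructing it here, not quoting the notes; the primed name just avoids clashing with the Prelude's own curry):

    -- Turn a function on pairs into a function taking its arguments one at a
    -- time. No advanced math, just an ordinary higher-order function.
    curry' :: ((a, b) -> c) -> a -> b -> c
    curry' f x y = f (x, y)

    -- Usage: curry' fst 1 "two"  evaluates to 1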

Separately, it's the case that understanding this will not be very useful in a day-to-day programming job. That's just because it's not all that useful! (See other discussion on this same HN page.) I think it is very cool and philosophically interesting in itself, and it offers an interesting point of view if you are designing new programming language type systems, but it doesn't have very much "utilitarian" use.


> And now it seems you agree, if you spend only 15 minutes you can understand the lecture notes

that was sarcasm.

> I think it is very cool and philosophically interesting in itself, and it offers an interesting point of view if you are designing new programming language type systems, but it doesn't have very much "utilitarian" use.

It has phenomenal use, is my point (e.g. formal verification, an interest of mine), but only at a vastly higher level than "I've been to the lectures". What I keep saying is, most people aren't interested in the abstract, and neither am I. Give me a practical application and suddenly I'll 'get it'. Without that application it's valueless, because I have no reason to engage with it, and actually cannot. Put another way: its application and its understanding are interlinked. You can separate them; I and others can't.

I know this stuff is useful and I literally spent years trying to understand it. For someone like me it just isn't easy, and you keep persistently not understanding that.


I'm going to be cynical and make the following observation:

Given that there's this correspondence between Math and Programming Languages, what it says is just that if you understand types in a programming language, there's no point in understanding the notations in math.

And this is especially true if you take a cynical attitude towards theorem provers in the sense that real proofs are in the minds of the thinker, not in the formalism.

The fact that the Curry-Howard correspondence was proposed in the 1930s suggests that it was a relatively "primitive" analogy that might be restated in some obvious way in today's language of computation. Maybe... "you can write programs to be automated theorem provers for formal systems"?

I wonder what CS people would have to say about the efforts "to prove a program correct". If a program is equivalent to a proof, "proving a program correct" seems to be "proving a proof correct".

I don't really know what I'm talking about though.


You've got the right idea. A program is a proof for the proposition made by its type. Only for the right languages though, and not in the naive sense that `assert(1+1 == 2)` proves the assertion.
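A tiny sketch of what that means, with Haskell types standing in for propositions (the function names are mine):

    -- Read the type (a, b) -> a as the proposition "A and B implies A".
    -- Any total function with this type is a proof of it:
    projLeft :: (a, b) -> a
    projLeft (x, _) = x

    -- Modus ponens: "(A implies B) and A, together, imply B":
    apply :: (a -> b, a) -> b
    apply (f, x) = f x

This is only morally true in Haskell, since non-termination and exceptions let you inhabit any type; languages like Coq or Agda close that loophole.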


OK, not what I expected. Surely a spec is a spec, the program fulfils the spec hopefully, and the proof is that the program implements the spec. How do you see it? thanks


Curry-Howard shows that for a logical proposition (A) with a corresponding constructive proof (B), there will be a type (A') with a program (B'). It's not about proving desired properties of programs. However, you can use the CH principles to do this too, as long as you can encode the proposition about a program in its type. This will not often look like the dataflow type of the naive program, though. Look at Software Foundations for real detail; I'm no expert at this stuff, just aware of the basics.
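One toy illustration of "encoding the proposition about a program in its type" (my own example, nothing taken from Software Foundations): make non-emptiness part of the type, and "head never fails" becomes something the type checker enforces rather than a runtime check.

    -- The only way to build a NonEmptyList is to supply at least one element,
    -- so safeHead needs no error case: the proposition "this list has a head"
    -- is carried by the type itself.
    data NonEmptyList a = NonEmptyList a [a]

    safeHead :: NonEmptyList a -> a
    safeHead (NonEmptyList x _) = x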


A program is not a proof.


There's a lot of talk in these comments that assumes that certain people are just ‘built different’ and can grok complicated new concepts that are unrelated to anything they know by virtue either of being smart or of being nerds with no life.

I think there are several different types of 'background' required to read something, none of which is innate, and none of which requires a particular kind of education (e.g. a university degree) to attain. This confuses discussions about whether something is 'accessible', because the person complaining that it is inaccessible might be complaining of a gap between their current state and the state assumed by the book in any of these three categories.

1) Factual background.

What facts do you need to know, what theories do you need to understand, in order to be able to follow what the author is saying? If the author is trying to teach you classical mechanics but you don't know about addition, you're not going to get very far until you go and learn to add and come back.

A source can address this (reducing the amount of background required) by providing more background in the material itself. But it obviously doesn't scale that well if every explanation of classical mechanics comes with an explanation of addition (and counting, etc.), and the background you do get, if it isn't the primary goal of the material, is often rushed and not as well-written as a dedicated source. So instead (or as well as providing a quick refresher for people who already have the background but might have forgotten it) authors will often refer to other materials that provide better explanations.

Inaccessibility arguments from this perspective look like ‘I don't know what this word means’ or ‘I don't know how to read this notation’.

2) Motivational background.

What applications do you need to know about in order to find it worthwhile to learn a topic? If you don't know about bicycles, levers, etc., learning about classical mechanics can seem like an abstract waste of time.

As a function of the complexity of a topic, it necessarily takes a certain amount of time before the author's thoughts are complete enough to relate to something that the reader will find compelling, though a good author will try to minimize it. People vary here in their trust of an author, and there are certain external factors that can influence this as well: e.g. if the material is very well-regarded by other people the reader might be happier to spend time figuring out what it's trying to say, or if the reader has a background in a similar topic they might be able to imagine some motivations that will keep them going.

That's not to say that the reader should always have this: indeed, most materials are actually useless to most people. One of the pre-tasks of reading something is deciding whether it's something that is (potentially) even useful to read, and for most things your answer should be ‘no’.

Inaccessibility arguments from this perspective look like ‘I don't care about this’ or ‘this is abstract nonsense’. Note that a text is never actually completely abstract: authors, like readers, are always motivated by practical concerns. But there might be a long chain of problems and solutions leading from a problem the reader is aware of to the solution the author is explaining — in fact, in pure maths, the applications of a theory aren't always even known, though there's good reason to believe that they exist.

3) Attention span.

Difficult concepts just take a long time to learn. This can be addressed somewhat in the text by providing longer explanations in which the concept is spelt out more elaborately, but then you risk alienating readers who've lost track of your point by the time you've got to the end of it — there's a balance to be struck, and the correct point on the continuum depends on your intended readership. Also, on the reader's end, this is to some extent a ‘muscle’ that needs to be trained and regularly exercised in order to not lose it.

All of these things lead to valid arguments of inaccessibility, but the failure isn't inherent in the writing: instead it indicates a mismatch between the writing and the reader. If the writing means to target a certain type of reader, that can make it badly written; but no writing can target every possible reader. If some writing seems inaccessible for you, and you're interested in it, you can work on those three things to make it more accessible (which will involve some combination of staring at the target writing until it makes sense, and reading around related writings to get knowledge and practice reading about that kind of thing). Alternatively, you might be able to find other writings on the same topic with prerequisites that are a better match for you.

For example, I've been trying to write a series of posts explaining CS concepts that I think are cool or useful, assuming the knowledge and motivation of an enterprise programmer, and drawing the chain of motivation/explanation as far back as necessary to get to concepts they're likely to know already. My writing on the lambda calculus is here:

https://twey.io/for-programmers/lambda-calculus/

Please excuse the sloppy writing, this is an attempt to get back into technical/explanatory writing after many years — but criticisms (with that particular intended audience in mind) are very welcome!


A quick skim says this is a really thoughtful bit of writing; I'll try to do it justice tomorrow but right now I can't. So thank you, and I'll try to respond tomorrow in a bit more depth. Thanks.


Certain proofs about a program, implemented using types, are useful in high-level programming as well as low-level programming. See Rust in the Linux kernel [1] and Rust on Android [2].

One way to think about it is that proofs about a program form a continuum, rather than being all-or-nothing. Sure, going all the way, disallowing mutability and requiring types for everything, might make the system more provable, but slower as well.
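Here's a small sketch of that continuum in Haskell rather than Rust (all names are illustrative, but the idea is the same): a cheap type-level guarantee that a precondition was checked, well short of full formal verification.

    -- If the Validated constructor is not exported from its module, the only
    -- way to obtain a Validated value is through `validate`, so any function
    -- accepting Validated is statically guaranteed the check already happened.
    newtype Validated = Validated String

    validate :: String -> Maybe Validated
    validate s
      | not (null s) = Just (Validated s)
      | otherwise    = Nothing

    runQuery :: Validated -> IO ()
    runQuery (Validated s) = putStrLn ("running: " ++ s)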

There are some different ways to look at a program:

Corporate developer: "Programs are to be run, and occasionally read or proved correct"

Donald Knuth: "Programs are to be read, and occasionally run"

Type theorist: "Programs are to be compiled, and occasionally run or read"

[1] https://thenewstack.io/rust-in-the-linux-kernel/

[2] https://security.googleblog.com/2023/10/bare-metal-rust-in-a...


Quiz: what is the opposite of patronising? Because that's what you're being. Not talking down to people but assuming more than is typical.

You're clearly highly intelligent, evidently more so than I am. Additionally, my mind doesn't work like yours; I'm not an abstract thinker. Plonk stuff like this in front of me and I can decipher it eventually (if I can get some clear statement of the notation and its semantics, anyway), and there is stuff in there I'm not familiar with.

Most people aren't abstract thinkers. They are motivated by practical concerns. I spent an enormous amount of time reading up on stuff like this and I really can't see the value to it to me as a programmer. It's deeply frustrating because I know it has value, but I just can't use it. It's interesting, I know it's useful, but I can't use it.

Please allow that other people think differently and are motivated differently, and don't assume that what's easy for you is for them. I wish I had your cranium.


> Quiz: what is the opposite of patronising?

Humble, respectful, modest?

I respect that different people will find different things hard or easy, so that a generalization is never truly possible (except, uh, this sentence?). On the other hand, speaking with absolutely no generalizations whatsoever is simply not possible, because so many caveats and exceptions would make communication impossible.

I do stand by my initial assertion that, in general, if you have a mind capable of learning a new programming language, you can learn lambda calculus, and it won't be particularly hard.


> Most people aren't abstract thinkers. They are motivated by practical concerns.

Abstract thinking is motivated by practical concerns, namely being able to think comfortably about larger problems without being limited by one's capability for complex reasoning. Most people don't do it because they aren't used to it, or because they have never been taught how.



