
I think he was arguing that if programming consists of symbol manipulation and proofs, then writing proofs for large programs involves far more symbol manipulation (though of a much more tedious kind) than most proofs written by mathematicians in the past, something made worse by the need to describe each step precisely enough that a machine could execute it, rather than being able to skip over steps a human mathematician would consider "obvious".

I think he was especially thinking of mathematical logic - he referred to programming as Very Large Scale Application of Logic several times in his writing.




For the peanut gallery:

If you ever want to see what it's like for a mathematician to not hand-wave anything away, look at excerpts from Whitehead & Russell's Principia Mathematica (no, not the Newton book).

It takes 362 pages (depending on edition) to get to 1+1=2.

https://archive.org/details/principia-mathematica_202307/pag...

Of course, just like real mathematicians, in our everyday work we stand on the shoulders of giants and reuse prior foundational work (I've yet to personally write a bootloader, OS, or language + compiler, and I include third-party libraries), and then hope that any bugs in our proofs are caught during peer review. As in math, sometimes peer review for our code ends up being a rubber stamp, or our code/proofs aren't that elegant, or they work for the domain we're using them in but harbor latent bugs/logic errors that cause inconsistencies or require restricting the domain to work properly (e.g., code that only works with ASCII, or a theorem that only holds for compact sets).

And of course, the similarities aren't a coincidence

https://en.m.wikipedia.org/wiki/Curry%E2%80%93Howard_corresp...
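To make the correspondence concrete, here's a minimal sketch in Coq (my own illustration, not anything canonical): under Curry–Howard, a well-typed term *is* a proof of the proposition its type encodes.

```coq
(* Propositions-as-types: a term inhabiting a type is a proof
   of the corresponding logical formula. *)

(* Modus ponens: from A and A -> B, conclude B.
   The "proof" is just function application. *)
Definition modus_ponens (A B : Prop) (a : A) (f : A -> B) : B := f a.

(* Transitivity of implication is just function composition. *)
Definition impl_trans (A B C : Prop)
  (f : A -> B) (g : B -> C) : A -> C :=
  fun a => g (f a).
```

So a bug in a program and a gap in a proof really are the same kind of object: an ill-typed term.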


Note: there are today completely formal proof systems that are much more concise than what Russell had. You can now prove 1+1=2 after a few pages of rules (say, of Martin-Löf type theory), a couple of definitions (+, 1, and 2), and a very short proof by calculation.

In practice, one would use a proof assistant such as Agda, which works like a programming language. Then only the definitions are needed, and the proof is just a call to the proof checker to compute and check the result.


For example, here it is in Coq (not using -- but instead replicating explicitly -- the built-in definitions of 0, 1, 2, +, or =).

  (* A hand-rolled equality type: the only way to build eq' x y
     is when y is literally (convertibly) x. *)
  Inductive eq' {T: Type} (x: T) : T -> Prop :=
    | eq_refl': eq' x x.

  (* Peano naturals from scratch. *)
  Inductive nat' :=
    | zero
    | succ (n: nat').

  Definition one := succ zero.

  Definition two := succ (succ zero).

  (* Addition by recursion on the first argument. *)
  Fixpoint plus' (a b: nat') : nat' :=
    match a with
      | zero => b
      | succ x => succ (plus' x b)
      end.

  (* plus' one one computes to succ (succ zero), which is exactly two,
     so reflexivity closes the goal. *)
  Theorem one_plus_one_equals_two : eq' (plus' one one) two.
  Proof.
    apply eq_refl'.
  Qed.
As you allude to, there's also the underlying type theory and associated logical apparatus, which in this case give meaning to "Type", "Prop", "Inductive", "Definition", "Fixpoint", and "Theorem" (the latter of which is syntactic sugar for "Definition"!), and allow the system to check that the claimed proof of the theorem is actually valid. I haven't reproduced this here because I don't know what it actually looks like or where to find it. (I've never learned or studied the substantive content of Coq's built-in type theory.)
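For instance (a sketch reusing the eq', plus', one, and two definitions above): the tactic script is just one way to construct a term, and the same statement can be proved by writing the proof term directly in a Definition.

```coq
(* "Theorem ... Proof ... Qed" elaborates to a Definition whose body
   is a proof term; here is the same proof given as a term directly.
   (Assumes the eq', plus', one, two definitions from the parent comment.) *)
Definition one_plus_one_equals_two' : eq' (plus' one one) two :=
  eq_refl' (plus' one one).
```

The term type-checks because plus' one one and two are convertible, which is exactly the computation the kernel performs.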

We would also have to be satisfied that the definitions of eq', one, two, and plus' above sufficiently match up with what we mean by those concepts.


Don’t forget the source code of Coq itself. What you have written here is not a formal proof; it’s a series of instructions asking Coq to generate the proof for you. To know that you’ve proved it, you need to inspect the output; otherwise you’re trusting Coq’s source code.
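One way to do that inspection (a sketch, assuming the theorem from the comment above): Qed makes Coq's kernel re-check the elaborated term, and Print displays the term the kernel actually accepted.

```coq
(* Show the underlying proof term that the tactic script produced;
   the kernel type-checked this term when Qed was executed. *)
Print one_plus_one_equals_two.

(* List any axioms or assumptions the proof depends on
   (ideally: "Closed under the global context"). *)
Print Assumptions one_plus_one_equals_two.
```

Of course, this only shrinks the trusted base to Coq's kernel (plus the OCaml compiler and hardware underneath); it doesn't eliminate it.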



