A Computer Scientist Tells Mathematicians How to Write Proofs (scientificamerican.com)
230 points by xhevahir on Jan 29, 2015 | 101 comments



As a math major who went to grad school in CS theory, I find the article's title vexing. It projects a faux dichotomy of computer scientists v. mathematicians when, in reality, it's mathematicians v. mathematicians.

Many areas of computer science, at least the areas in which Leslie Lamport is best known, are strongly mathematical disciplines inspired by models of computing. Dr. Lamport probably considers himself a "computer scientist", but I bet it is really far from what many laypeople (even the HN crowd) imagine to be computer scientists.

Another issue is that mathematicians have as much to do with mathematical pedagogy in secondary education as computer scientists have to do with science experiments in high school. Again, the title is really misleading in this regard.

Anyway, I am glad that great minds like Leslie Lamport and David Mumford (definitely a mathematician) are thinking hard about how to make math education better.


> Dr. Lamport probably considers himself a "computer scientist", but I bet it is really far from what many laypeople (even the HN crowd) imagine to be computer scientists.

Sure, but I guess he is also really far from what most mathematicians imagine to be mathematicians.


I agree with the concerns about the false dichotomy between theoretical computer science and mathematics, but from the mathematician's perspective I think there's another issue not being discussed, due to the typical CS obsession with automated proof systems.

Mathematics is a social subject, invented by humans largely for their own amusement. The goal of a mathematical proof is to communicate insight to other humans. Of course the point is also to be correct, but somehow communicating insight takes priority. Relying on large bodies of latent knowledge, and omitting details, is an effective way to do that. Allowing for occasional mistakes is okay, because mathematics as a whole is fault tolerant (or at least, history gives evidence to that claim).

Lamport's field is a bit different, because there they study distributed protocols with the intent of creating value in the real world. So when they prove a theorem, they're not just proving a theorem to gain understanding about something, they're proving a theorem so they can improve the security or fault-tolerance (or whatever else) of some real system. Writing down a 200+ page TLA proof is measurably worth the investment because the users of their work (applied computer scientists and engineers) want error-free theorems and a clean CS literature much more than they want insight.

For mathematicians communicating to other mathematicians, a two-paragraph argument that is convincing and omits the details (that one can verify on their own) is much more valuable. I'm not trying to make a value judgement either way, just trying to explain the culture of math proofs and why I think something like TLA+ will never catch on among mainstream mathematicians. You might say it's mathematicians defending their job security, but it's also simply that mathematicians like exploring and inventing and hearing creative proofs and perspectives. Deferring things to computers removes the a-ha moment.

[Edit] And the idea that somehow high school students and undergraduates will magically start reading and understanding proofs because we write them using TLA+ is a joke.


What do you think of the idea of writing proofs in the hierarchical style in Lamport's section 3, but not using TLA? To me, the hierarchical proofs seemed very clear and easy to understand, but the TLA (although I only skimmed it) seemed to introduce too much formalism and thereby hurt readability.

Also, it would be easy to put paragraphs of text in between hierarchical proof statements, which would preserve the ability to give intuition and proof sketches. Here I'm thinking of things like analysis, where (at least in my experience) what you really want to do is see the picture of the proof in your head and understand it that way, and the written proof is always just a way of checking it.
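
To make that concrete, here's a rough sketch of the kind of hierarchical numbering Lamport describes, applied to the usual irrationality-of-sqrt(2) argument (my own reconstruction from memory, so the exact keywords are approximate, not copied from the paper):

    THEOREM  sqrt(2) is irrational
    <1>1. ASSUME sqrt(2) = p/q with p, q coprime positive integers
          PROVE  FALSE
      <2>1. p^2 = 2*q^2          BY <1>1 (square both sides and clear q)
      <2>2. p is even            BY <2>1 (2 divides p^2, hence divides p)
      <2>3. q is even            BY <2>1, <2>2 (write p = 2r, then q^2 = 2*r^2)
      <2>4. QED                  BY <2>2, <2>3, contradicting coprimality in <1>1
    <1>2. QED                    BY <1>1

Every step names exactly the earlier steps it uses, and you can collapse the <2> level if you only want the outline.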


I think people already do write things in a hierarchical way. They have one main theorem which is supported by multiple propositions and lemmas and corollaries. At least, this is how all good papers I read are organized.

I think Lamport's vision is that the hierarchy extends down to the very last detail. This is what I'm skeptical of, because no mathematician has time to do this. To paraphrase Newton, you need to stand on the shoulders of giants to make progress.


Could anyone with first-hand experience of TLA+ comment on how practical it actually is?

I've been toying with the idea of trying this for a not-quite-trivial (programming!) project I'm working on, but the documentation and UX have so far kept me quite skeptical of using this "for real". The fact that I also can't turn the proof into a program in a semi-automated fashion seems to indicate that there's at least some barrier where the "doubled" amount of work would be justified. (I'm more inclined towards something like Idris or LiquidHaskell, which seem to support practical programming better and will probably let you get arbitrarily close to "proof" in any practical sense where proof will matter.)

I take it that this was mostly about mathematics and not implementations of programs, which may not quite have the same trade-offs. I still would like to hear from anyone with direct experience :).


I was formerly in the same lab as Leslie. As a side project, we (Tom Rodeheffer, mainly) used TLA to verify the correctness of Naiad's coordination protocol. The protocol is very simple (distributed reference counting), and my initial "proof sketch" was about two paragraphs long (and very unconvincing, apparently :). Tom's TLA+ version was closing in on 200 pages, available at

http://research.microsoft.com/pubs/183826/paper.pdf

A short publication about it, with some reflective discussion of why it was hard to write down, is at

http://research.microsoft.com/pubs/199767/clock-verif2.pdf

My understanding is that part of the problem was that Tom needed to explain (re-prove) many meta-facts we take for granted.


Very interesting, thanks (and to the other sibling poster).

> Tom's TLA+ version was closing in on 200 pages, available at [snip]

This is one of those things that I forgot to mention, namely the sheer amount of ambient knowledge we have embedded in our (not our computer's!) background knowledge of math and logical reasoning. Of course, sometimes we're actually wrong, so maybe we could stand a little double-checking by computer-based proof assistants, but I digress...

One thing that's also been mentioned in previous discussions I've been involved in and which is slightly worrying is that these kinds of languages tend to be quite anti-modular in that the proofs required for one type of system often don't transfer very well to other types of systems. (In general "algebra" proofs should probably transfer pretty well, but we're usually interested in very specific proofs for our systems.)

Aside: I wonder if Russell & Whitehead's classic 1+1 proof has yet been (computer-)formalized?

EDIT: Russell & Whitehead


> I wonder if Russell & Whitehead's classic 1+1 proof has yet been (computer-)formalized?

http://us.metamath.org/mpegif/pm54.43.html (though using a slightly different set of axioms to R&W: http://us.metamath.org/mpegif/mmset.html#axioms)


Interesting. So perhaps we need some sort of "CTLAN" (Comprehensive TLA+ Tautology Archive Network?) kind of thing to be able to import/match obligations to already-established proofs from some common repository? Maybe future versions of TLAPS would benefit from such a thing?

(Sorry if this already exists; I'm just an interested layman.)


For the Isabelle theorem prover, I know of the Archive of Formal Proofs [0].

[0] http://afp.sourceforge.net/


Damn; this is amazing stuff... I had no idea: (from [1]) so just get a local copy, set the path, and voila you can import... Wonderful. Thank you for sharing this!

http://afp.sourceforge.net/using.shtml


For distributed protocols, you might want to try Alloy. Basically it allows you to find counterexamples (which in this case are scenarios). http://www2.research.att.com/~pamela/zave_podc.pdf is a well known example, working on the Chord join/leave protocol.


I've used it a bit for stuff inside Azure. It's difficult to find a problem where the model checker shines; state machines are far too simple (better to use P[1] or something similar), and something like cache coherence (the Hello World of real-world TLA+) isn't a problem I hit often.

TLA+ isn't only a model checker & theorem prover, though. The language itself is an excellent high-level way of expressing system design. Recently I've been applying it to eventually consistent systems, and writing an idea in TLA+ is a great way to flesh out the concept and spot hidden assumptions or edge cases. Lamport knows this; in the introduction to his TLA+ book, he quotes:

"Writing is nature's way of letting you know how sloppy your thinking is."

Followed by his own expansion on the concept:

"Our basic tool for writing specifications is mathematics. Mathematics is nature's way of letting you know how sloppy your writing is. ... The mathematics we use is more formal than the math you've grown up with. Formal mathematics is nature's way of letting you know how sloppy your mathematics is. The mathematics written by most mathematicians and scientists is not really precise. It's precise in the small, but imprecise in the large. Each equation is a precise assertion, but you have to read the accompanying words to understand how the equations relate to one another and exactly what the theorems mean. Logicians have developed ways of eliminating those words and making the mathematics completely formal and, hence, completely precise."

Most of the value I get from TLA+ is as a software design tool. It's also just a fun language to write stuff in.

You may also be interested in the AWS paper, which had excellent results.[2]

[1] https://plang.codeplex.com

[2] http://research.microsoft.com/en-us/um/people/lamport/tla/am...


> Most of the value I get from TLA+ is as a software design tool. It's also just a fun language to write stuff in.

This sounds very intriguing. I guess there's at least some learning curve, but maybe it'll be a fun experiment vs. P&P in the future. ;)


We use it at Amazon Web Services (http://research.microsoft.com/en-us/um/people/lamport/tla/am...) and have found it very practically useful.


Yes, thank you, that was very interesting and insightful -- I've read it before, in fact it sparked my initial interest in TLA+. Unfortunately this is not necessarily applicable for a "working programmer" aka "poor man without huge resources at his disposal". Maybe I should work on the latter problem and then start to apply TLA+. Or maybe it should be the other way 'round. :)


Pretty good read, but the OCD in me couldn't avoid pointing out that the example they give has a wrong solution:

x^2 + 10x = 39. Find x^2.

This actually has two solutions for x: 3 and -13, so x^2 is 9 or 169.

It is probably a good example of how referring to preconditions at every step of the proof would help catch errors. From my experience writing code, I'd also argue that this would make proofs more beautiful, because discovering that some steps require a lot of previous references might prompt the mathematician to restructure/refactor the proof to make it simpler to write, thus making it simpler overall.
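
For anyone who wants to check the arithmetic, here's a quick sanity check with the quadratic formula (plain Python, nothing to do with the article's method):

    # Solve x^2 + 10x - 39 = 0 via the quadratic formula
    import math
    a, b, c = 1, 10, -39
    disc = b * b - 4 * a * c                                  # 256
    roots = [(-b + s * math.sqrt(disc)) / (2 * a) for s in (1, -1)]
    print(roots)                                              # [3.0, -13.0]
    print([r * r for r in roots])                             # [9.0, 169.0]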


There are more solutions than just that; for example, we could take x to be the matrix [[387, 320], [-480, -397]] (so x^2 = [[-3831, -3200], [4800, 4009]]), or the split-complex number 8j - 5 (so x^2 = 89 - 80j), or 5 in the ring of integers modulo 36 (so x^2 = -11).

But al-Khwarizmi was presumably writing in the context of familiar quantities >= 0, which is a perfectly fine thing to do.
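
Both of the non-obvious ones are quick to verify numerically, by the way; a throwaway check (matrix entries copied from above):

    import numpy as np
    x = np.array([[387, 320], [-480, -397]])
    print(x @ x + 10 * x)           # prints [[39  0] [ 0 39]], i.e. 39 times the identity
    print((5 * 5 + 10 * 5) % 36)    # prints 3, which is 39 mod 36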


> we could take x to be the matrix [[387, 320], [-480, -397]]

If x is a 2x2 matrix, how can the left side be equal to 39 (a scalar)?


It's 39·x^0 = 39·I (the identity matrix).


I haven't checked your work, except over Z/36Z, but you made me :)


j? Electrical engineer?


In the context of electrical engineering, j is often used to denote the imaginary unit, that is, something such that j^2 = -1; in other words it's the same as what is usually called 'i'.

But, in the context of the split-complex numbers, j is something such that j^2 = 1, and I suppose it is written as 'j' to distinguish it from 'i'.

So, the 'j' used here is different from the 'j' used in electrical engineering.

( https://en.wikipedia.org/wiki/Split-complex_number )

For example, if j were the square root of -1, as in electrical engineering, then (8j - 5)^2 would equal -39 - 80j: 64j^2 - 80j + 25 = -64 -80j + 25 = -39 - 80j; but here, in the split-complex numbers, (8j - 5)^2 = 64j^2 -80j + 25 = 64 - 80j + 25 = 89 - 80j.
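
A quick numeric check of both conventions, if anyone wants to play with it (the split_square helper is just a throwaway illustration of the multiplication rule):

    # (a + b*j)^2 with j^2 = -1 (ordinary complex) vs. j^2 = +1 (split-complex)
    print(complex(-5, 8) ** 2)      # (-39-80j), the electrical-engineering j

    def split_square(a, b):
        """Square a + b*j in the split-complex numbers, where j^2 = +1."""
        return (a * a + b * b, 2 * a * b)   # (real part, j part)

    print(split_square(-5, 8))      # (89, -80), i.e. 89 - 80j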


Interesting: Never heard of 'split-complex numbers' before. Thanks for letting me know :)


Read al-Khwarizmi's writing again:

“A square and 10 roots are equal to 39 units. The question therefore in this type of equation is about as follows: what is the square which combined with ten of its roots will give a sum total of 39? The manner of solving this type of equation is to take one-half of the roots just mentioned. Now the roots in the problem before us are 10. Therefore take 5, which multiplied by itself gives 25, an amount which you add to 39 giving 64. Having taken then the square root of this which is 8, subtract from it half the roots, 5 leaving 3. The number three therefore represents one root of this square, which itself, of course is 9. Nine therefore gives the square.”

If you interpret square to mean a literal square, then x can't be -13 as that would give a square whose sides are negative in length.
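
His recipe is just completing the square for x^2 + b*x = c and taking the positive root. A literal transcription (the function and variable names are mine, obviously):

    def al_khwarizmi_root(b, c):
        """Solve x^2 + b*x = c by halving the roots, squaring, adding,
        taking the square root, and subtracting the half."""
        half = b / 2                # "take one-half of the roots": 5
        total = half ** 2 + c       # 25 + 39 = 64
        return total ** 0.5 - half  # 8 - 5 = 3

    x = al_khwarizmi_root(10, 39)
    print(x, x * x)                 # 3.0 9.0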


From Wikipedia[1]:

"In the 9th and 10th century AD, Islamic mathematicians were familiar with negative numbers from the works of Indian mathematicians, but the recognition and use of negative numbers during this period remained timid. Al-Khwarizmi in his Al-jabr wa'l-muqabala (from which we get the word "algebra") did not use negative numbers or negative coefficients, although al-Karaji wrote in his al-Fakhrī that "negative quantities must be counted as terms"."

So it would indeed seem that your interpretation is what al-Khwarizmi meant.

[1] http://en.wikipedia.org/wiki/Negative_number#History


The problem is that, for math papers in journals, there are charges based on the number of pages used.

You really would need an electronic format, and a wiki-style format that graduate students can use, to fill in the details of all the proofs.

When I was a grad student reading papers, it would sometimes take me a day or two to read a sentence, filling in all the details. I felt bad that there was no way for me to share my progress with others. Someone else reading the same paper (other than the author or a couple of other experts) would have to do the same thing as I did.

I thought it would be a neat project, but my advisor said it would be bad for my career if I spent time on it, so I didn't pursue it. Finding a job would be based on the papers I published, not on making other people's research easier to read.


Like a RapGenius of mathematical (or other rigorous) proofs? I think this would be amazing, especially for academics in fields related to the topic at hand but not deeply involved, or interested laymen.

As a teaching tool it would be equally useful, even from the first year of university.


This is a fantastic idea, something I'd use as a consumer. It dovetails nicely into the growing movement around democratizing access to scientific research as well.


The article starts with a false equivalence between archaic verbose language and mathematical notation.

For my part, I usually prefer the prose. Formulas are easier to read for people who are used to and have practice with mathematical notation, but even for those people they are easier to read only when they are sufficiently complete, and formulas often seem to come with huge leaps and unstated assumptions.

Especially in computer science papers I tend to see extensive use of formulas over prose or - preferably - code or pseudo-code as big warning signs: Often it turns out to mean the author is glossing over a massive amount of hugely important details. E.g. a common problem I saw when studying was papers that would describe processes, but omit any indications of sensible ranges for parameters that were essential to getting good results (I was working on techniques for image processing to improve OCR results), or greatly obscure details of algorithms that would have taken no more lines to write out in working code.


Does prose not also often come with huge leaps and unstated assumptions?


I think the solution is probably to have it all, according to your taste: code because you can run it as a working proof, and explanations in both comments and prose.


How do you prevent discrepancies between those? You already see it with comments on source code, which eventually get outdated.


I think the answer to this, if my experience as a maintenance programmer is worth anything, is: don't trust the prose. Trust the code (proof). This is why strong type systems are so insanely valuable in programming, especially if they encode proofs of side effects or lack thereof.

Maybe there's a sort of golden middle way here where all mathematical prose-proofs should be annotated[1] by the associated computer-checked proof. The trustworthiness of a particular proof could be assessed by the number of such references and how much coverage (of the prose) they provide...?

[1] Perhaps only by reference, as here. :)

EDIT: Quick edit, I say this as someone who -- earlier in xir career -- probably subjected a lot of people to somewhat verbose comments. In practice, my comments were usually right and the programming language wasn't powerful enough to capture the semantics of what I was doing. One hopes this is the distinction between good and bad comments. Sorry for veering off-topic.


Well I think you are talking about actual source code, and not the context of expository or academic writing. In that context, I would not trust comments even if companies had a strict policy of updating comments, because personally I just don't want to be susceptible to that.

It does make me wonder, though, about those computer-generated proofs which are so massive that no human can understand them. If you can run them...?


Hopefully, that won't be a problem with code written for a paper. You can freeze it then (for that version), unless errata are required.


It does! And that’s OK!

A proof is a proof if you’re convinced. Ideally, a proof is correct, but that’s not always the case.

(Except for undergraduates! Show your work! ;))


The archaic prose bit is there to point out that mathematical notation has evolved, in order to set up a presentation of a possible future evolution, namely hierarchical structure for proof parts. The article is not about prose vs. symbolic notation.


I have seen another instance of the talk summarized in the article. It is about machine-checked proofs (and even if you haven't seen the talk, have you seen who is giving it?). You won't see “huge leaps and unstated assumptions” in that format, although unimportant details may be relegated to fourth-level indentation or preliminary lemmas.


"I think most of us, including my math-phobic Facebook friends, would find the second version easier to follow and understand." I guess I'm in the minority. I preferred the words (though the translation was a bit ungainly).


I'd say the problem with math notation is that it simply has too many symbols. It has a different symbol for everything because it was designed as a handwritten system.

For you mathematicians, think what you would do if you woke up tomorrow and all the math symbols had been replaced with emoji? I think most normal people would just learn the four operators and that's all... you know: megaphone, pizza slice, backpack, and recycle.

Math is what it is, designed for a pre-computer world, but what really kills me is when programmers use untypeable symbols like α (alpha), which at least is an actual letter in one language, or worse yet random Unicode like · ("middle dot") or ∕ (the division slash, not /). There's just no excuse for that.


The reason for using different scripts (Greek etc.) is to sort symbols by broad categories. Done right, it actually makes things more readable (to the sophisticated reader, which is the mathematician's target audience anyway). It's almost like an implicit type system: (within a particular context) Greek letters mean ordinals, lowercase Roman letters mean predicate logic variables, uppercase German letters mean universes... It would be much less fun, and much harder, to follow an intense mathematical work if the variables were all in the same font.


Yes, that's the pre-computer world I was talking about. Sophisticated mathematicians should be concerned about whether it is machine readable or not. If it's machine readable then you can use whatever font, color, or any other style you want that helps you to read it. Your viewer can abbreviate descriptive names to single glyphs if you want.

Programming is much less fun, and harder to follow, without syntax highlighting. But it doesn't follow that we should manually be color coding syntax and variables, or that integers should always be blue. That's essentially the case with the "readability" of mathematics.


> Programming is much less fun, and harder to follow, without syntax highlighting.

That's your opinion.


Then there's Color Forth, where color is used as syntax (http://www.colorforth.com/cf.htm).


Why? I write mathematics to share with mathematicians, not machines.

Would you ask a poet to make the same sacrifice for machine comprehension?


A poem can mean whatever any reader wants it to. In math objective truth matters.


> A poem can mean whatever any reader wants it to. In math objective truth matters.

Maybe? Penelope Maddy, et al, have convinced me that this isn’t as clear as we’d like:

http://www.socsci.uci.edu/~pjmaddy/bio/DualistVol15_Maddy.pd...


>Greek letters mean ordinals... mean universes

Maybe in your particular subfield.

When I think of epsilon or delta I think of a small number. Capital Delta is a difference. Capital Gamma is the gamma function. lambda is a wavelength. pi is a constant. theta is an angle.

a, b, c denote known constants. x, y, z denote variables to solve for. k, n, m denote integers. i is the imaginary unit. f is a function. y is a function depending on x. Y is the integral (which in this context has nothing to do with integers), Laplace transform, or Fourier transform of y.


Math notation is OK; our ways to represent it digitally are bad (because math notation isn't a 1-dimensional string of symbols, it's actually 2-dimensional).

The solution is to make a way to represent it properly, not to throw it away (because we don't have a better 1-dimensional alternative).


> what really kills me is when programmers use untypeable symbols

Maybe the problem is that "untypeable" symbols should be easier to type. Mathematicians get by just fine with their typesetting system. Maybe the problem is that programmers are constrained by ASCII, and mathematicians have the upper hand because they have the freedom to define their own notation.

Maybe the "problem" is a matter of perspective.


In many languages it's normal for a single symbol to encode a whole concept. Alphabetic scripts are artefacts of cultures that didn't have access to decent paper, and had to use a small number of simple forms that could be scratched / carved / etc. and remain recognizable.


> Alphabetic scripts are artefacts of cultures that didn't have access to decent paper, and had to use a small number of simple forms that could be scratched / carved / etc. and remain recognizable.

This is largely incorrect. Outside of Han China and a few other areas in its historical cultural area of influence, alphabetic scripts won over ideographs, and won big time. The Egyptians developed paper-like papyrus thousands of years before the first uses of "paper" paper, and they largely abandoned their ideographic hieroglyphics in favor of the Hieratic and Demotic scripts, which are syllabaries, for all but formal usages.

Now if the Mongols or the Khitans or the Uyghur steppe empires had been able to impose their scripts on the Chinese after conquering them, instead of the assimilation going the other way, Unicode might be a considerably smaller clusterfuck today.


It’d be fine. Notation doesn’t replace prose.

The best feature of mathematical notation is its ambiguity. It needs to be parsed by a human, not a machine. Poetry is similar. There’s tradition, but few rules.


A combination of both, something like Knuth's idea for Literate Programming, would be the best I think. Of course that would be way more work for whoever has to type it all up, so I don't think it has much of a chance to catch on.


Leslie Lamport has been on this "how to write a proof" subject for a long time, at least since 1995: http://research.microsoft.com/en-us/um/people/lamport/pubs/p...


Since there seems to be some confusion: the article is not about prose vs. symbolic notation in proofs, or even degree of rigor in proofs; it introduces a potential way of evolving our presentation of proofs, so that their parts are hierarchically arranged. This better facilitates reading at variable levels of detail, promotes reuse of parts, and provides useful hints guiding comprehension generally.


This could have been more precise. The example at the start was great, but there's no example of the lecturer's technique outside of the link to the actual lecture.



After reading that, I think structured proofs should be written with an outliner [0] interface, where you can actually expand and collapse the hierarchy. Lamport also knows this. He repeatedly mentions it as "hypertext". However, he seems to be locked into LaTeX [1] and pdf generation.

[0] https://en.wikipedia.org/wiki/Outliner

[1] Not really surprising. Lamport invented LaTeX.


Right. It would have been nice to have a short yet more in-depth example in the article.


I just finished a course in Coq - it was pretty fun. Seems like something similar to what the guy's paper is about.


My discrete math and logic undergraduate course was in Coq. It was a brutal way to learn the proof process at first but it was worth the investment by the end. It's the logic equivalent of TDD.

If you are interested in computational logic you might enjoy Aaron Stump from UIowa's work: https://queuea9.wordpress.com/


The article has nothing to do with Coq:

> When I saw the title of the talk, I assumed Lamport would be talking about computer proof checking programs or even a proof creating program like the one Timothy Gowers has worked on and written about. But Lamport’s advice was much more down-to-earth.


I'm referring to the guy's paper, which is linked here in the comments somewhere. There TLA+ is mentioned, which is a tool to formally prove things. The same can be done in Coq - which is why I'm drawing a parallel.


While that parallel is genuine, it's hard to compare Coq and TLA+ since the mechanism and style of proof they embody is very different. TLA+ tries to ensure that you can exhaustively check implied models as its basis of proof (I believe) while Coq recognizes that types syntactically represent logics and we can thus construct programs in those logics to represent proofs.

So, one layer deeper they're dramatically different beasts.
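
If it helps to see the difference: "exhaustively check implied models" is essentially brute-force state exploration. A toy sketch of the idea (this is only an illustration of the principle, not how the TLA+ model checker actually works):

    # Enumerate every reachable state of a tiny two-variable system and
    # assert an invariant in each one.
    def next_states(state):
        x, y = state
        # two "actions": increment x mod 3, or copy x into y
        return {((x + 1) % 3, y), (x, x)}

    def check(init, invariant):
        seen, frontier = set(), {init}
        while frontier:
            s = frontier.pop()
            if s in seen:
                continue
            seen.add(s)
            assert invariant(s), f"invariant violated in {s}"
            frontier |= next_states(s)
        return len(seen)

    print(check((0, 0), lambda s: s[0] in (0, 1, 2)))   # 9 reachable states, all pass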


The TLA+ tools can do exhaustive model checking, as well as mechanically checked proofs. The proofs typically use an SMT solver to verify them. Of the two I've only used model checking in a serious way, but you can find Lamport's proofs of things like paxos and have it turn all the theorems green.


SMT is still a pretty far cry from type theoretic proof. At the end of the day it's just another kind of exhaustiveness checker.


You are correct. There is a huge difference between the two and, as you say, only Coq is able to express the proofs themselves and not just a coarse-grained summary of their structure.


Basically, do it the way Spivak does. The funny thing is, in freshman year we were taught to do just that, but then abandoned the formality pretty much immediately after.


My understanding, from the paper linked in another thread, is that it is not to do it the way Spivak does. That is, the article says Lamport "walked us through a proof of a corollary to the mean value theorem from that textbook, pointing out statements that he felt were not precise or rigorous enough and rewriting the proof using his technique."

My reading of that is that Lamport found faults in Spivak's approach. Did I misread that?

I will say that this really looks like a way to write proofs for computers to read, moreso than a way to write them for other people. Probably has great value, but not too surprising that it isn't widespread.


No, you didn't misread it. You can see the example for yourself in the paper. It's Section 2 here:

http://research.microsoft.com/en-us/um/people/lamport/pubs/p...


I also read through Lamport's critique and I'm not convinced by it. The 'errors' he points out are trivial, and the cultural assumption is that an active reader will fill in the omitted details.


Hear, hear. Spivak motivated me to emphasize the humanistic features of mathematics.

Thankfully, his view underpins elite mathematics programs, e.g.

http://www.hcssim.org

http://en.wikipedia.org/wiki/Math_55


What does Spivak have to do with Math 55? Math 55 assumes students can write a tolerable prose proof, it doesn't teach any particular style. Unless it changed in recent years.


> What does spivak have to do with Math 55?

> Math 55 assumes students can write a tolerable prose proof.


> He says that once you have a hierarchical proof, you can use pieces of it in new proofs very easily. For example, if you want to change one assumption slightly or focus on solutions that have different properties, you will be able to tell very easily which parts of the proof can stay the same and which parts will need to change.

If only that were true of a piece of LaTeX moved from one place to another, haha!


If you move a piece of LaTeX from your prose to another proof, there's nothing at all there to help you ensure you get it right. I think the point here is this hierarchical system allows you to very easily discover what parts of the proof depend on what other parts, so if you make a change to one part, you can "invalidate" everything that depended on it easily. With prose, you have no assistance in figuring out what pieces of the proof need to be updated for any given change.
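
A crude sketch of the bookkeeping this buys you, in case it's not obvious (the step names are made up for illustration):

    # Each step records which earlier steps it cites; changing one step
    # then tells you exactly which downstream steps must be re-checked.
    depends_on = {
        "<2>1": ["<1>1"],
        "<2>2": ["<2>1"],
        "<2>3": ["<2>1", "<2>2"],
        "QED":  ["<2>2", "<2>3"],
    }

    def invalidated_by(changed):
        stale, frontier = set(), {changed}
        while frontier:
            step = frontier.pop()
            for s, deps in depends_on.items():
                if step in deps and s not in stale:
                    stale.add(s)
                    frontier.add(s)
        return stale

    print(invalidated_by("<2>1"))   # {'<2>2', '<2>3', 'QED'} (order may vary)

With prose you'd be grepping and hoping.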


This is how I learned proofs:

http://i.imgur.com/omXqvos.png

Well, technically I learned proofs from Euclid first. But this book followed close behind.

Clear and comprehensible (once you learn the basic symbols). Very much structured. There's hardly any prose in the whole two volumes.


Care to share the title?



Ah, Russell. I've always been put off by people complaining about his somewhat pedantic approach. I guess I'll give it a try. Thank you.


"His method, which you can read about in more detail on his website (pdf), is a hierarchical structure that doesn’t seem entirely dissimilar from the two-column proofs that most of us learned in middle school or high school geometry class, although he points out that it can handle complex problems that would be unwieldy in that two-column format. Each line is numbered, and each assertion is justified with numbers referring to previous lines and assertions."

This is just too obvious. Every proof has this structure, though sometimes this structure is left implicit (which may or may not be a good thing).


I think the "you should try it either way since it may clarify your thought process or reveal problems" is important. It's often good to check your discoveries with different methods.

Off topic, but in a similar vein: in a completely different field I dabble in reconstructing existing theories in structuralism. There's some very valid criticism of the method and yet it still unearths issues with theories every now and then.

http://en.wikipedia.org/wiki/Structuralism_%28philosophy_of_...


As noted in some comments, writing a non-trivial proof by hand with TLA+ only seems possible if there exists a good repository of non-trivial propositions that one can start from.

Does anyone know of interesting initiatives out there to build an open repository of mathematical proofs?



Great link, thanks!


The problem isn't notation in itself; it's rarely used and non-self-explanatory notation that's a problem. You can achieve the same with words by inventing your own words and demanding that everyone learn them to understand you.


Which is just about the same as using a notation and demanding that everyone use the notation to understand you - except that a large number of people already know the notation.


No, words are better than notation, they're self descriptive; there's a reason we communicate with words instead of pictographs; words are more flexible and easier to create abstractions with than symbols.


Words are less precise, so they can be used to approximate ideas we don't clearly understand.


Words can be either less or more precise, that's their power.


How else can you describe something which does not exist except by making up a new word (or repurposing an old one)? Mathematicians do this every day.


And that isn’t a problem if you’re writing about something I care about …


If this takes off Lamport's paper is going to get more citations than any other paper.


As any mathematician knows, the difficult part is not writing the proof but finding it.


OK, try writing graph theory proofs algebraically.


To be fair, Leslie Lamport is also a Mathematician (his PhD was in Math).


And "computer science is one of the hardest branches of applied mathematics" (who's the author? Did Dijkstra said something like that?), so Lamport is in his own field.


I think it's about time theoretical computer science was considered a branch of pure mathematics. All we do in this field is prove theorems.



