You know, I'm wondering: could part of Dijkstra's reputed arrogance be due to a cultural difference? I'm Dutch, and bluntly calling out flaws in each other's work is not considered all that rude over here; it's almost the opposite: not calling someone out on their flaws implies we either consider them a lost cause or not worth the hassle of educating.
I almost got fired from a teaching position in Sweden because I told my 3rd-year bachelor students that many of them did not bother to add their names or the assignment number, or to use paragraphs and in some cases even basic punctuation in their assignments. And that this was well below the level required to pass secondary school, and that I expected better from them.
This was apparently too confrontational, and a few upset students later I got chewed out and almost fired. Meanwhile, from my point of view, I was just doing my job and already sugarcoating it by Dutch standards.
Having said that, yes, even by Dutch standards I would say that Dijkstra liked to troll people a bit.
PS: I really like the following insight from Dijkstra's review: "But whereas machines must be able to execute programs (without understanding them), people must be able to understand them (without executing them)."
a) Dijkstra is a bit of a personal "hero" of mine (hero in quotation marks, because I don't like to think of myself as a hero worshipper), because it was due to him I learned of textbooks written by guys like Eric Hehner and Roland Backhouse, which ended up changing my life significantly.
b) I do not really grasp the technical issues being discussed in the correspondence between Dijkstra and Backus.
Given this disclosure:
Regardless of the cultural issues (which I suspect you are right about), I found Backus' first response to Dijkstra full of ad hominems, and very light on technical rebuttals. If we subtract out the ad hominems, I suspect the technical rebuttals would take less than a page (which is a significant reduction, considering that the full letter is 4 (typewritten!) pages).
Calling Dijkstra arrogant seems to have been a popular insult (as Backus himself admits), so I wonder if Backus was simply jumping onto the bandwagon as a defensive reaction to the arguments presented in EWD 692.
I think Dijkstra's responses were indeed trollish. I wish he hadn't said anything further after he noticed the gratuitous personal attacks against his character in Backus' response, but he couldn't help but say something back. Oh well.
I personally found Backus to be way more of a dick in these letters, though I did gloss over some parts. It's strange how different people can get very different things from these letters. Maybe they just didn't have personalities that meshed well. I don't think it's cultural: I'm American and I find Dijkstra's demeanor to be perfectly acceptable, if a little colorful. Although, in the same vein, people often don't like Linus Torvalds for many of the same reasons.
I found Dijkstra's letters appalling. Writing - utterly without shame - that his negative response to Backus' paper was disingenuous and served a "political" purpose is despicable to me.
E. Hehner has a course called "Formal Methods for Software Design", which he presents completely openly online (including video lectures, a textbook, solutions, and so on): http://www.cs.toronto.edu/~hehner/FMSD/
I really liked it.
I also liked R. Backhouse's approach to it, encapsulated in two books:
No problem. You'll notice as you go through the links that none are about verification of correctness. Rather, they are about "correct by construction".
I have learned not to put much weight in Amazon reviews for technical literature. Borrow a copy of the books from a library (my city library actually had access to a digital version of Backhouse's latest book, and you could always Yarr! yourself a copy) and check them out, before shelling out the cash :)
Well, the one reviewer said it was only a start toward a method for practical software, with the practical part maxing out at a sorting algorithm. Is that true, or do you have examples of realistic stuff covered by the book?
Backhouse's more playful book focuses primarily on teaching one how to write and use proofs in the "calculational style" of Dijkstra/Hoare/Lamport/others.
Backhouse's less playful book covers the same topic, but has some more programming examples:
* cyclic code error correction
* simple sorting algorithms
* real number to integer conversion is used to introduce Galois connections
Again, this isn't a lot in terms of "practical examples". No production level software is built here. If you're looking for that sort of stuff, maybe you want to try out L. Lamport's book: http://www.amazon.com/Specifying-Systems-Language-Hardware-E...
Anyone who knows about all these resources should really scour them to put together a list of their applications to real-world problems. Any researchers who like the stuff should apply it to real-world problems. The reason is that we don't know whether such methods are a waste of time or useful until that happens. So, I'm interested in anything you have on that.
EDIT: Your recommendation was good as it clearly has practical advice. I can tell just from looking at the table of contents. And, as a gift, I found the whole book for free plus other things in the link below. :)
You're right that the technical rebuttals are fairly slim, but the rest of the exchange, which is all about political play and saving face[0], isn't pointless.
The whole argument Dijkstra makes boils down to "if everyone was as technically competent as you and I, then I wouldn't need to do this. But the sad truth is that the incompetent are not only the majority, they decide what gets funded and researched, so I have to." I think he is right about that, and probably even acted correctly on it, but even so: positioning himself like he's taking on a sacrificial trollish saviour role and just doing this to save programming research is pretty arrogant, so Backus is right as well!
And that's just the politics of it; saving public face hasn't even entered the equation here. Don't forget, Backus' reputation also directly affects the respect, and by extension the funding, that he receives for research that he wants to do. With that in mind I can very easily see how Backus must have been furious after reading EWD692, especially considering that it contained a few insinuations about him, which again I would partially attribute to Dutch culture, which has (or used to have) a severe case of tall poppy syndrome[1]. But from Backus' point of view, if they had been friends and colleagues before, it might have felt like backstabbing, both politically and personally.
So I think I understand Backus' ad hominems - I suspect he was sincere about trying to criticise Dijkstra from the position of a long-time friend; his writing style boiling down to: "Look, I respect you and all, but I have to point out that you're being a dick; oh, and have a taste of your own medicine on the side, because two can play that game."
Of course, it looks like he played right into the hands of Dijkstra, because Dijkstra doesn't appear to be as emotionally invested in saving public face, and seems to rather enjoy puzzling Backus a bit longer.
In general, I think it speaks for both of them that they managed to resolve these differences to what appears to be mutual satisfaction!
Hah, I'd say that both Dijkstra and Backus are "heroes" for me---scare quotes for the exact same reason. I'm mainly interested in ideology/vision in reading about the early pioneers, and I find theirs more interesting than, let's say, Knuth's.
I am Dutch; my father studied under Dijkstra (TUE) and I studied under some of Dijkstra's protégés. The humor Dijkstra uses is very Dutch to me, and to me it's hilarious. His hard stabs were considered good practice in a lot of high-end professional Dutch careers in the 70s/80s/90s, but I believe it became less common. In general we have fewer issues with telling it like it is, even if it hurts people, but when I still worked in NL this was definitely dying out.
That said, I admire Dijkstra for having such strong opinions and striving to create solid software no matter what. He used to write software for machines that didn't exist yet[1], and I for one wish I had the time/budget to write software proofs. Especially with the modern tools we have now (not the ones I learned from Dijkstra's works, but those were a good theoretical foundation which gave me understanding) it's very possible, but only for very expensive systems with very generous deadlines. I could say we will not deliver without that, but then I would be living under a bridge...
> I'm Dutch, and bluntly calling out flaws in each other's work is not considered all that rude over here;
quote: “Object-oriented programming is an exceptionally bad idea which could only have originated in California.” – Edsger Dijkstra
Backus, on the other hand, is a genuinely lovely man. If you haven't read/seen his interview with CHM [1], then I urge you to do so. (Pay attention to his closing remarks!)
I could never find a reliable source for this oft-quoted remark about Object-Oriented Programming.
As far as I know, Dijkstra never said this, not in this exact phrasing anyway.
It seems no one is able to find the source of the quote; in fact it's much more likely that he did not say anything like that, and that the quote is a fabrication.
>Reason is always a kind of brute force; those who appeal to the head rather than the heart, however pallid and polite, are necessarily men of violence. We speak of ‘touching’ a man’s heart, but we can do nothing to his head but hit it.
While I would agree with that in principle, I think the word "violence" is being stretched a bit. Being unforgiving, stubborn, demanding, hard-hearted, or anything else that adherence to logic might be claimed to imbue is not an act of violence, even if it is emotionally hurtful to others.
Emotional harm can be violent if it is forcibly inflicted on someone. If they volunteer to be emotionally harmed, that is hardly violence.
Rational thought is not violent, no matter how much it pains you to hear what it concludes. Disagreeing with someone is not violence, even if they can't bear to suffer it.
You're really not special, though. Why should I care if you asked me, your family asked me, your culture asked me, or nature itself forced my hand? As long as I have a justifiable reason for doing it, I should do it, whether you think it violent or not.
What does being special or not matter, where did that come from? Violence is violence. It may or may not be morally justifiable. Justifiable violence is still violence.
Is the issue that you think violence only occurs when it's somehow immoral by some code? Hitting someone is violent regardless of the reason why. Applying verbal and emotional force in a way that causes someone distress is violence regardless of the reason why.
Nothing in my posts says that violence itself is inherently wrong. It's a thing; the reason and effect in the situation in which it occurs determine the rightness or wrongness of the act. Scolding a child can cause temporary distress. This is violence, though possibly necessary and right. Telling a child they're worthless repeatedly is violence and also wrong.
Volunteering for violence also doesn't make it not violent. It's still violent. It just changes the calculus on the morality of it.
Seriously? Where the fuck does this even come from?
Emotional harm is violence. Like all violence it exists on a spectrum.
There's unintended harm, accidental insults and the like. These are easily brushed off by healthy people, but for others they can become one straw too many.
Sometimes it's intentional and "for their good". Honestly, honesty can be this sometimes. An honest answer may hurt someone but may be exactly what they need. Other times it's just taking it too far and being careless/heartless.
And then there's the truly heinous, deliberate form. Ask abused women and children. Did their abusers always hit them? Burn them? Starve them? No. Sometimes, oftentimes, they verbally and emotionally abused them. Keeping them from those who care about them. Creating a bubble around them to keep them despairing. Telling them they're worthless over and over until they believe it.
I don't see where there's any self-pity in my post to be ashamed of. I'm just stating the fact that emotional harm is violence.
Was there a meeting where someone took the explicit decision to have followers repeat this lie until other people believed it, or do people like you take it upon yourselves to repeat this lie out of the basic evil in your heart?
> You know, I'm wondering: could part of Dijkstra's reputed arrogance be due to a cultural difference?
Dijkstra got a lot of attention from his essay "Go To Statement Considered Harmful." Even though he did not come up with that title, he learned from it that a hyperbolic saying can grab attention, and be used for good.
So thereafter, he spent a while saying things in the most flame-bait way possible to gain attention, because it worked.
I'm a lecturer in Sweden, and I know how you feel :) To quote some students on a list: "What's this nonsense with independent thinking? Nobody ever told us we had to do that."
Ouch. As if it's a chore instead of a privilege to get to do so!
Well, luckily my story had a happy ending: my colleagues (who were checking the assignments with me) stood up for me, and after the course other students defended me by saying "finally, a teacher willing to call out the freeloaders! It's so demotivating to see them get away with not doing anything while I put in so much effort and get nothing in response."
The reason my former boss had reacted like an overprotective mother was that the complaining students had made me look like someone who motivated through bullying, intimidation and fear. He takes his responsibility for creating a safe space for students to learn and overcome their insecurities very seriously, and I respect that. Meanwhile, I have high expectations of my students precisely because I believe they are capable of so much more than they give themselves credit for, and he now understands and respects that, and has given me some pointers on how to get that across in ways that Swedes aren't allergic to. So we resolved our issues too.
But yeah, that was a pretty strong culture clash; in the end it was educational for everyone involved I think.
I can concur. The Swedes have turned into an absolute disgrace. Constant late-coming, talking and other distracting behaviour. They have become a nation of low moral fibre, pure and simple. I say become, because I refuse to believe this was the modus operandi of the likes of Linnaeus, Arrhenius, Berzelius, Celsius, etc.
I'm going through the motions myself, at an institution that shall remain nameless, knowing how kindly they take to criticism here.
Belgium here: it's a bit the same. However, HOW you say it matters too. The first time, you are expected to say it "without" emotion, i.e. not angrily. Subsequent times incrementally go nuts :D
Culture shock / calibration. Similar to the Russian approach to social politeness, or the lack thereof. They don't smile unless they have a real reason to be happy. To Westerners they appear cold-hearted.
Autistic people are a bit like that too. They have a very simple optimization scheme, so pointing out suboptimal things directly is seen as a gift, whereas most people will take a much broader view of being told off: shame, disrespect, etc.
Many, though far from all, Swedish students are way too pampered. I was called the Gestapo because I pointed out that it isn't acceptable behavior to be 40 minutes late to the lab session and not have read the instructions beforehand.
But it is definitely also a cultural thing. The contrast became very obvious when we had Russians in the seminars. The Swede says things like "I don't think I really understand how you mean, could you maybe elaborate that point a little bit further". The Russians: "You are wrong and it's very obvious to even a high school student what your mistake is. And besides, all this has already been done in the Soviet Union 40 years ago."
I have heard of the cultural difference you refer to--my son worked in the Netherlands for some months. Yet I don't think that is all of it. Is it simple frankness to say that the use of Fortran damages the brain?
I do think that a good deal of the internet--the great unwashed both correspondents refer to--remembers Dijkstra the insult comic better than Dijkstra the pioneer of computing. It doesn't improve discourse on the internet, I guess, or worsen it much either. I don't think it's just to Dijkstra's reputation, though that is no responsibility of mine.
In his biography of Alexander the Great, Peter Green quotes the Athenian Demades's reproach to Philip: "King Philip," he said, "Fortune has cast you as Agamemnon; but you seem determined to act the part of Thersites."
> I'm Dutch, and bluntly calling out flaws in each other's work is not considered all that rude over here; it's almost the opposite: not calling someone out on their flaws implies we either consider them a lost cause or not worth the hassle of educating.
Woah, I do that too! Maybe I should move to the Netherlands.
If it isn't possible to work with somebody while disagreeing with them, then you're the problem. If you spend long enough talking to anybody you'll eventually come across disagreements.
Many people take disagreements too personally.
I was just yesterday talking to a paramedic. I mentioned to him the article I saw on HN about medical error being the 3rd highest cause of death next to cancer and heart disease.
He immediately started to attack me, exclaiming that I was not an authority in this area, so how would I know what I was talking about, and that I therefore had no right to talk about something like this.
I explained that I was not the source of the information, that large numbers of people inside and outside the medical profession are concerned that a great deal of medical research isn't reproducible, and that in any case, if car accidents don't even touch the number of medical fatalities occurring, then it's a problem: car accidents are so problematic that they're one of the leading reasons we're trying to automate driving.
He blew his top. Ad homs everywhere.
Later I took to the Net to see if I was correct in my impression. It turns out my 'Master of Paramedics who reads lots of journals' can't have been paying much attention, because the British Medical Journal and the Lancet are in lockstep with everything I had been saying. In fact I was probably far too kind, judging from what the leading lights of the medical profession are saying themselves.
I expect he would accuse me of being arrogant.
I've said it before but tribalism is very powerful. It is far more dangerous to Science than Religion and it definitely gets in the way of technological change.
I don't know what your friend said, but the "errors are the #3 killer" claim is a twisted fact. It comes from assigning "error" as the cause of death for every injury or illness that might have been non-fatal if every perfect test, advance in practice, technology, and mode of care had been applied. It isn't a measure of how many otherwise healthy people were killed by errors that introduced new complications.
One of the highlighted examples from the abstract of a major study was an "error" where a doctor didn't tell a kid with diagnosed heart disease that strenuous exercise would be dangerous, and the kid died after collapsing during a run.
There's an interesting summary of the debate about this study on Medscape [0].
Failing to tell a patient that strenuous exercise might be fatal sounds like an error to me!
As it happens, my maternal grandmother was killed by a medical error (before I was born). She went in for some kind of routine surgery and didn't make it. Decades later, when the doctor involved died, among his effects were found a letter he had written to another doctor, explaining that he had made a stupid mistake that killed her. (I don't know any more details.)
Does that mean I believe the "#3 killer" claim? Not necessarily. But I'd bet that the problem is worse than most medical professionals would expect, and that there remains considerable room for improvement.
It happened with my father's birth I think. He was the oldest of seven children like typical Catholic families of the time. As a child I knew she walked kind of funny but I put it down to her being an old lady (she wasn't but to a child's eyes it's different). As with something you're familiar with you don't question it. Later my father joined a cult and developed an acrimonious relationship with my grandfather who believed the cult was trying to steal his property and so he disinherited my father. We moved away while I was still very young and so I didn't see my grandmother for a time. In my late teens my father and grandfather became more friendly and we visited occasionally.
My grandmother was becoming ill, first diabetes, then Alzheimer's. Around this time I learned from my mother that the doctors in the Bons (as we called the Cork hospital) had performed a symphysiotomy. My mother described it as a butcher shop, involving hacking and sawing. I think a radio show prompted her outburst on the topic; although she and my grandmother didn't get along, she obviously felt deep revulsion about the affair. As with most of the women, I don't think they asked for permission; they just wheeled them into surgery. I think what galled my grandparents the most is that they perceived the Bons to be a much superior hospital to all the others. It was private. It had reputable doctors. It was Catholic. They were farmers, so this would have been a considerable expense for them. I'm confident they didn't want to talk about it; it was probably too much.
The part marked '5' was severed using something like a wood saw or circular saw. Saying they broke the pelvis isn't an exaggeration.
The accounts from the wiki are grotesque. Catherine McKeever, a private patient at the Lourdes Hospital, Drogheda in 1969, told the Committee that she did not realise what had happened: 'I saw him [the doctor] with an instrument which I thought was a bit brace because my father was a wood turner. I felt a crack … Nobody answered me or said anything'. Margaret Conlan, who was operated upon in 1962 in St Finbarr's Hospital, Cork, testified that she had never been told anything about it: 'My baby’s head was perforated and the baby died… I did not find out [about the symphysiotomy] until I read it in the newspaper'.
It seems the practice was done to encourage more children. I'm not familiar with Catholic dogma, so I don't understand why doctors would foist this on their patients. I wonder today how the Muslim and African doctors are getting away with FGM in Irish and English hospitals, so in a way the barbarism continues. Don't trust religious fanatics or doctors, and especially not both.
The problem is that they both tend to come well dressed and respected by their communities. Taking them down makes you the bad guy, not them.
> Does that mean I believe the "#3 killer" claim? Not necessarily. But I'd bet that the problem is worse than most medical professionals would expect, and that there remains considerable room for improvement.
Yes exactly, that's my position.
I feel that doctors are overworked generally, don't sleep very much so it's not surprising they don't have the time for introspection on this subject. The error rate is probably largely caused by economic factors like the perpetual labour shortage (in Ireland/UK at least).
On a personal level my grandmother had her pelvis snapped by doctors.
In my own experience with a (not life threatening) medical procedure I went to three different doctors asking for help. None was given. They ignored my requests for analysis and a solution. I didn't even know what was wrong.
Finally I went online, found out what the problem was about, found out what the solution could be for it, and then I literally took a flight to England to get an experienced medical practitioner to sort it out which he did very professionally for a reasonable fee.
After this I complained to the medical ombudsman. Only then did they get into a flutter after literally years of waiting for them to solve the problem. A solution was provided that I no longer needed and I told them so. In fact I had already told them so but they weren't paying much attention. Pretty sure I could have sued them into the ground for malpractice.
I work with a coworker who broke his ankle falling from a ladder. A horrible injury; the bone shattered. That didn't even turn out to be the real problem, though, because after he went into the hospital a doctor didn't wash his hands and put them into the wound, infecting it with MRSA. My coworker, who was awake, literally asked him to wash his hands right there, and the doctor said "No, it's OK".
I'm just one guy in his twenties and I can think of 3 cases like this. That is not good!
Lastly, on an optimistic note: my uncle fell off a roof he was tiling. He fell two stories, and behind him a trolley containing roof tiles slipped down and onto his back, almost snapping his spine in two.
The doctors fixed him up with steel plates, taking them out years later. Now he walks around like nothing happened (still on roofs!) but it's an amazing feat of medical attention to detail that this is true given his spinal column was almost severed.
Yes, I realize that. The BMJ paper is using hyperbole to prove a point (just like Dijkstra's "GOTO considered harmful"). It's not really the #3 killer, but to be honest it should not be in the top ten of things likely to kill you, or even close to it.
The medical profession, much like the teaching profession has a nasty habit of victim blaming if things go wrong when it can get away with it. They need humbling because they are too confident. They are highly resistant to this because it diminishes their status and few people enjoy a loss of confidence even if introspection is required for improvement. Computer people tend to take this for granted because we're in a feedback loop with the compiler that tells us constantly when we're in the wrong. That we're fucking idiots all the time isn't news to us. It is news to some other professions that don't have such a tight feedback loop between success and failure.
It is true, for example, that people don't take their medicine on a timely basis, but the frustration of dealing with people has bled into other areas, and that is unhealthy.
>"That we're fucking idiots all the time isn't news to us. It is news to some other professions that don't have such a tight feedback loop between success and failure."
This is great, there is nothing like thinking "wtf is wrong, someone changed some library last update and that is the cause", then hours later realizing you are a moron.
How exactly can this happen in medicine without severe social consequences?
Instead of library updates imagine half the doctors and nurses don't speak English with a high degree of fluency and don't come from the native culture. That is so obviously a problem for communicating subtleties that it's infuriating that it has to be explicitly pointed out.
Speaking of reform: I think the only answer is closing the gap between action and success/failure, i.e. a tighter feedback loop.
Some futuristic thinking is required.
For surgery I would propose a VR sim in which they performed the same surgery again and again, with the details/features changing all the time. Then, at a high level of proficiency, the surgeon's real movements would be copied to a machine reproducing those actions on a real patient, with the visual feedback presented to the surgeon as part of the sim.
- This helps homogenize surgeons' movements.
- Increases confidence of a successful operation.
- Allows for high level practiced skill rapidly.
- Removes them from the theater directly to prevent disease.
- Calms the nerves so the surgeon is able to work without fear.
- System gets better over time with more data.
For diagnostics I would record the patient's voice describing her symptoms, with a visual of the person. I would then pattern match for related conditions in my database that appear to resemble the symptoms. The system would have some degree of common sense, as it is a traditional expert system. Audio/video inputs would be used to make requests, e.g. "can I see the arm that is paining you", for more detailed visual analysis using machine learning. Vocal analysis for other detectable diseases. Finally, a list of options would be offered up to the doctor as plausible possibilities. In the beginning these would be offered up after the doctor made an independent diagnosis, and a second opinion would also be made; this is to give feedback on the accuracy of the diagnosis system. Later on, the 'obvious cases' would be presented to the doctor in real time as the patient began describing his or her symptoms, either as a list or perhaps more appropriately as a mind map. (A toy sketch of the matching idea follows the list below.)
- Acts as a medical record.
- Logic behind choices can be retroactively justified.
- General practitioners can see patients faster because they won't need as many notes.
- Patients will be more confident in their diagnosis with a digital second opinion.
- Prescriptions can be autofilled to prevent error.
- System gets better over time with more data.
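To make the "pattern match for related conditions" step above a bit more concrete, here is a deliberately naive sketch in Python: it just ranks conditions by symptom overlap and hands back a short list of plausible candidates. All condition names and symptom sets are invented for the example; a real system would obviously need far more than set intersection.

    # Toy symptom matcher: rank known conditions by overlap (Jaccard similarity)
    # with the reported symptoms and return the top few as suggestions.
    def rank_conditions(reported_symptoms, condition_db, top_n=3):
        reported = set(reported_symptoms)
        scored = []
        for condition, known_symptoms in condition_db.items():
            known = set(known_symptoms)
            union = reported | known
            score = len(reported & known) / len(union) if union else 0.0
            scored.append((score, condition))
        scored.sort(reverse=True)
        return [(c, round(s, 2)) for s, c in scored[:top_n] if s > 0]

    # Made-up toy database, purely for illustration.
    condition_db = {
        "influenza":      {"fever", "cough", "fatigue", "muscle ache"},
        "common cold":    {"cough", "sneezing", "sore throat", "runny nose"},
        "food poisoning": {"nausea", "vomiting", "diarrhoea", "fever"},
    }

    print(rank_conditions({"fever", "cough", "sore throat"}, condition_db))
    # e.g. [('influenza', 0.4), ('common cold', 0.4), ('food poisoning', 0.17)]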
Now it is important to note I have no idea what I'm talking about. This took literally 30 seconds to come up with. So why hasn't something radically better already been done? I say it is very simple: it is not allowed. It is de facto illegal to do any of the above. You can't even digitize medical records properly. That is frightening.
The only way this is going to work is if we:
- Provide services where they would otherwise be impossible, e.g. the third world.
- Only provide this technology to private healthcare. Possibly even on Seasteads to avoid anti-tech litigation.
- The government creates a SMZ (Special Medical Zone) outside of the medical establishment's reach.
In addition to this you'd have to make the general public aware of how bad their healthcare really is. That's the only way to get "Tech companies screw 3rd world with experimental medicine" stories from whoever will be the next Gawker off your backs because the established order is going to do everything it can to blacken your name.
>"Now it is important to note I have no idea what I'm talking about. This took literally 30 seconds to come up with."
I think you have some great ideas.
>"So why hasn't something radically better already been done?...In addition to this you'd have to make the general public aware of how bad their healthcare really is."
Maybe at this point the consequences would be worse than the current problem, at least in the short term.
A doctor failed to tell a CHILD about an appropriate level of activity for his condition, and you honestly have the gall to put quotes around that error?
That child could have easily died running involuntarily from bullies or dogs. Or from having his heart race due to anything else: being startled, nervous, or excited on a first date or whatever.
He didn't die while undergoing a medical procedure; he died of a bad ticker.
The medical establishment didn't cause his heart ailment, it just neglected to inform him. That is unfortunate, but it is somehow not in the same category as, say, being treated for a broken leg and given a fatal overdose of some painkiller.
Honest accounting for deaths due to medical error must only count situations where someone dies of something irrelevant, not directly connected to the condition for which they are treated. (Certainly not of the condition for which they are treated, especially if that kind of condition itself carries a reasonable probability of causing death!) The cause of death must also not be a recognized risk factor of that kind of treatment (when the treatment is performed correctly).
The key reasoning is that without any medical intervention at all, the child would also have likely died, and of the same thing.
If we count as a "death due to medical error" a situation in which medicine merely failed to prevent a death, that has to be very well justified. This means that it was very probable that if the mistake had not been made, the patient would have made a full recovery: their condition would have cleared away to the point of that condition no longer being a death risk, and that this positive outcome is virtually guaranteed in such cases when the correct treatment is applied.
The advice not to ever strain yourself physically for the rest of your life or get excited is not a treatment which results in the condition being cured. The condition remains in place, along with the risk of dying.
The dude is a paramedic. His job is to drive like a madman through red lights to emergency situations to frantically try to save someone, not to weigh the merits of research.
This quote keeps on showing up out of context. Edsger and I got along quite well. He loved to be the way he was and pushed it. His friend Bob Barton (the great genius of the B5000 design) was very similar in loving to push buttons and being very critical of the not-quite-a-field that we have. When the U of Utah faculty complained to Dave Evans about Barton, Evans said "We don't care if they're prima donnas as long as they can sing!" This was a general attitude back then. Dijkstra was much more funny than annoying for anyone who had any sense of self. The two biggest problems in our not-quite-a-field were those who listened to him too carefully and those who didn't listen to him carefully enough. In any case, in the research and faculty worlds of the 60s -- where the funding from ARPA was very very different than funding is today -- consensus was not very important: really capable people explored without worrying so much about what others thought (this is hard to explain to people who've grown up in the intensely social and identity seeking world of the last 20 years or so). His comment about OOP (a rather different kind of thing at Xerox PARC than what the term means today) did not bother any of us at all -- it was rather a compliment even (which was also not a big deal) for those of us who liked the way California worked in the 70s. His comments helped when they helped, and they mattered not at all when they didn't.
I often conceptualize programming as a two-dimensional continuum, with Alan Kay on one dimension — flexibility, interactivity, composability, exploration — and Edsger Dijkstra on the other — correctness, efficiency, simplicity. We always need a certain amount of each virtue to make our systems: without enough EWD, we can't get them to work at all, and without enough AK, what they do isn't worthwhile.
Many design choices, of course, involve trading off one virtue against another. But that hardly seems to matter when so many of our current systems are so far from the efficient frontier where tradeoffs are necessary!
According to this https://www.quora.com/Why-did-Dijkstra-say-that-%E2%80%9CObj... the quote was spoken to Bob Crawford in 1988 or so and appeared in the journal TUG LINES produced by the Turbo user's group, Issue 32 (August September 1989). Bob Crawford is alive, I presume one could ask him but I would rather not even link his faculty page because I am afraid he will be inundated by an avalanche of HN readers.
Funny thing is that Alan Kay himself often comes across just as arrogant these days.
"Comes across", because more often than not I think it's more likely that people who are too pleased with themselves don't like to be told that, really, what they've achieved isn't good enough, or even misguided. Even if, or perhaps especially if, the person who tells them that (Dijkstra when it comes to programming, Kay when it comes to HCI) is right or at least has a very good point.
> Funny thing is that Alan Kay himself often comes across just as arrogant these days.
indeed! no tumblr posts. no videos on YouTube. no SnapChats. no downloadable apps on Steam, and no Kickstarters. no accepted PR's on GH. no certs. and a pretty low Influencer (TM) rating on FooBlamWooBlah.io. therefore, he's obviously a slacker! or a sock puppet. prob yet-another-teen-in-his-mothers-basement. guy probably needs to discover things like "Khan Academy" and Bitcoin and Node.js
What recent talks has Kay given? To be honest, my only criticism of him based on what I've seen the last decade is that he's reusing too much of the same material, even though he certainly has a lot more to say.
I'm bookmarking that. Appreciate the link. You're mischaracterizing it a bit, though. He usually gave specific counter-example(s) of what was there first so the person could look it up. The only one I disagreed with was Prolog for declarative programming. "Portman" was right that real-world use of it typically requires extra steps for the how as much as the what. I've read many horror stories, even about modern implementations.
That SO thread is so far above the SO average that I wouldn't feel right nitpicking anything about it. I appreciated that Kay complimented a few ideas he thought were good.
The whole "that idea was invented in the 60s/70s" thing is interesting because it so often triggers a meta-discussion about progress and what makes it difficult. I always want to take a different approach, drop everything, and really examine and dissect the thing and why the idea hasn't been advanced or abandoned. Depth-first search vs. breadth-first search, or something, I guess...
Kay is the reductio but forgot the absurdum. The natural numbers are the most recent invention in computer science. The rest is just implementation details.
Did not expect that! I hope he read my comments about him as the implied compliments I intended them to be; being an interaction designer he's one of my biggest heroes! :)
This is a series of cell phone shots of an exchange of letters between Backus and Dijkstra. There is no transcript and some of the letters are hard to read, especially the handwritten ones.
I only skimmed it but it is definitely an interesting read.
I didn't interpret their stance as arrogant though. In my book both are enthusiastic about their subject and think the other party is misguided. Both take some effort not to hurt the other's feelings while still bringing their point across.
The title "This guy’s arrogance takes your breath away" is taken directly from Backus's own description of this collection of letters. I've changed the title to make clearer that it is a direct quotation.
Unfortunately, the Library of Congress does not allow scanners without prior approval, so this was the only way I could make my own copies. It did not help that all these letters were written or typed on very thin mimeograph paper.
Thank you for going to the trouble of sharing this amazing material! I'd also like to hear about the historical paper you're working on.
We've taken the "arrogance" quote out of the HN title and replaced it with a neutral description of the letters—perhaps a bit too neutral, given how fabulous the post is. But HN readers can figure that out for themselves, especially once a post gets so high on the front page.
The paper is about, among other things, the history of the array data structure. It's far too early to advertise, but you can see a very early version on my GitHub account. :)
I was surprised to discover recently that the word 'array' prior to 1950 was used exclusively to describe two dimensional tables of numbers that one might find in a matrix or determinant. But by the advent of FORTRAN I in 1957 and ALGOL 58, 'array' now referred exclusively to a one-dimensional entity, as compared with 'n-dimensional arrays'. I was interested in digging through John Backus's papers from this era to see if I could find any clues.
I was able to narrow down the near window to 1952-1954, since the FORTRAN preliminary report of 1954 uses the word 'array' casually in the modern one-dimensional sense as interchangeable with 'subscripted variables', the latter being the more common terminology at the time. By comparison, a virtually unknown paper by Rutishauser in 1952 describing the "for" loop did not use the word 'array' at all, only 'subscripted variables'. (Rutishauser was an accomplished mathematician and quite possibly the world's first computational scientist.) A paper by Laning and Zierler at MIT in 1954 describing a formula compiler also used only the term 'subscripted variables'.
Backus's papers also have evidence showing that FORTRAN I was clearly written specifically to take advantage of the IBM 704's machine capabilities. Not only was the IBM 704 the world's first commercially successful computer, it was also an improvement over the preceding IBM 701 in providing index registers (3 of them) and floating point instructions which were fast for its era. Backus's papers describe how providing hardware support for indexing and floating point was revolutionary, as all programs up to that time had to write in all these instructions by hand (and for many programs was pretty much all they did).
So it is clear to me now that the changeover in the implied dimensionality of the word 'array' must be related to how the array developed as a data structure abstracting away indexing operations. By the time IAL (pre-ALGOL) came on the scene in 1958, the idea of indexable homogeneous containers was already well established. But I still haven't found any strong smoking gun evidence introducing the one-dimensional sense of the word. I suspect further digging into the description of the IBM 704 may be necessary. The 704 was not the first to provide index registers, but it may have been the first to call them as such. (The Manchester Mark I computer of 1948 appears to be the first computer with an index register, but it was called a B line. The [patent](https://www.google.com/patents/US3012724) claiming to cover index registers uses the term "control instruction" - no arrays mentioned - but it very cutely describes numbers as residing in known locations or "addresses" in quotes.)
Priority is tricky to nail down e.g. the EDSAC was operational a year before the Mark 1 (which actually was not operational until 1949). Because of the "B" Williams Tube which held the two index registers of the Mark 1, many other manufacturers -- e.g. Burroughs -- later called their index registers "B registers". (Also, I think the Univac I and II were successful commercially, and earlier than the 704.)
I started programming in earnest and for money in 1961 mostly in machine code on a variety of machines, and I'm pretty sure that the notions of single dimensional sequential locations of memory as structures (most often called arrays, but sometimes called lists), predated the idea of index registers. This is because I distinctly remember being first taught how to move through arrays sequentially just by direct address modification -- this was called indexing in the Air Force -- and later being introduced to the index register or registers -- depending on the machine.
My strong guess (from above) is that the array as sequential memory accessed by address modification of some kind was a primary idea in the very first useful computers in the late 40s, and that this idea pre-dated index registers.
It would be well worth your time to look at the instruction set of EDSAC, and try writing some programs.
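Since the thread keeps coming back to what "indexing by direct address modification" meant before index registers existed, here is a toy illustration in Python (a made-up three-instruction machine, not real EDSAC code): the loop walks an array by patching the address field of its own LOAD instruction each time around. It is only a sketch of the idea described above, under the assumption of an invented instruction set.

    memory = [0] * 32
    memory[20:25] = [3, 1, 4, 1, 5]      # the "array" lives at addresses 20..24

    # program: each instruction is (opcode, address)
    program = [
        ("LOAD",   20),   # 0: acc := memory[address]   <- this field gets patched
        ("ADDTO",  30),   # 1: memory[30] := memory[30] + acc  (running total)
        ("BUMP",    0),   # 2: increment the address field of instruction 0
        ("JUMPLT", 25),   # 3: if that address field is still < 25, jump to 0
        ("HALT",    0),
    ]

    acc, pc = 0, 0
    while True:
        op, addr = program[pc]
        if op == "LOAD":
            acc = memory[addr]
        elif op == "ADDTO":
            memory[addr] += acc
        elif op == "BUMP":
            opcode, a = program[addr]
            program[addr] = (opcode, a + 1)   # the program rewrites itself
        elif op == "JUMPLT":
            if program[0][1] < addr:
                pc = 0
                continue
        elif op == "HALT":
            break
        pc += 1

    print(memory[30])   # 14 == 3 + 1 + 4 + 1 + 5

An index register removes the self-modification: the LOAD would name a fixed base address plus the register, and only the register would be incremented.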
Yes, there are at least two separate stages of historical development here. The first is when people realized it was useful to repeat the same operation on different data in memory and viewed the collection of data as a variable in its own right. The earliest term I can find for this concept is "subscripted variable" (many examples prior to 1954, e.g. Rutishauser, 1952 in the "for" loop paper; Laning and Zierler, 1954)
but the idea appears to go all the way back to Burks, Goldstine and Von Neumann in 1946. Quoting p. 9, paras. 3.3-4:
"In transferring information from the arithmetic organ back into the memory there are two types we must distinguish: Transfers of numbers as such and transfers of numbers which are parts of orders. The first case is quite obvious and needs no further explication. The second case is more subtle and serves to illustrate the generality and simplicity of the system. Consider, by way of illustration, the problem of interpolation in the system. Let us suppose that we have formulated the necessary instructions for performing an interpolation of order n in a sequence of data. The exact location in the memory of the (n + 1) quantities that bracket the desired functional value is, of course, a function of the argument. This argument probably is found as the result of a computation in the machine. We thus need an order which can substitute a number into a given order-in the case of interpolation the location of the argument or the group of arguments that is nearest in our table to the desired value. By means of such an order the results of a computation can be introduced into the instructions governing that or a different computation. This makes it possible for a sequence of instructions to be used with different sets of numbers located in different parts of the memory.
"To summarize, transfers into the memory will be of two sorts:
"Total substitutions, whereby the quantity previously stored is cleared out and replaced by a new number. Partial substitutions in which that part of an order containing a _memory location-number_-we assume the various positions in the memory are enumerated serially by memory location-numbers-is replaced by a new _memory location-number_.
"3.4. It is clear that one must be able to get numbers from any part of the memory at any time. The treatment in the case of orders can, however, b more methodical since one can at least partially arrange the control instructions in a linear sequence. Consequently the control will be so constructed that it will normally proceed from place n in the memory to place (n + 1) for its next instruction."
The language is of course archaic, but the idea described clearly is that of indexing in 3.3 and arrays in 3.4. They use the word "sequence" but arguably this usage is in its ordinary mathematical sense.
The written historical evidence, at least, would confirm your strong guess that the idea of arrays itself is older than index registers.
There's a missing etymological link though: when did a sequence of data stored consecutively in memory become associated with the word "array"? Still, the earliest written reference I can find for this second stage of historical development is the 1954 preliminary report on FORTRAN.
Maybe the word "array" is somehow derived from the advent of RAM, which even in its earliest form in Williams tubes had memory locations arranged physically in two dimensions. So right from the start we have two dimensions physically, but only one dimension logically, since the earliest computer instructions only dealt with (one-dimensional) offsets, if at all. Furthermore, popular science accounts of magnetic core memory describe them in terms of arrays. To give one example,
the June 1955 issue of the Scientific American (no 192, pp 92–100) writes about "magnetic core arrays".
I missed that the quote was Backus's, it makes a lot more sense now.
The quality of your photos is excellent and I doubt a scanner would make them much better. I wish there was a transcript though.
As a start, here is the first letter transcribed:
to John Backus
International Business Machines Corporation
5600 Cottle Road
SAN JOSE CA 95193
U.S.A
Monday 29th of May, 1978
“To interrupt one’s own researches in order to
follow those of another is a scientific pleasure
which most experts delegate to their assistants.”
(Eric Temple Bell in
“Development of Mathematics”)
Dear John,
I do not claim to be more noble or sensible than “most
experts”; perhaps it is only because I have only one
assistant to whom I can delegate no more than one
man can do. But when you open your letter with:
“I am quite sure that you have never read
any paper I’ve sent you before”
it is my pleasure to inform you that - although
“quite sure” - you were very wrong. I very well
remember that you mailed me a sizeable paper
on reduction languages to which I owe my introduction
to that subject. I didn’t only read it, parts of it
were even studied. I also remember that it left me
with mixed feelings.
I can very well understand your excitement, although,
for lack of experience, I still cannot share it. I am
far from delighted with the state of the art of programming
today, and anyone suggesting an alternative has in
in [sic] principle my sympathy - until, of course, he loses it again,
like Terry Winograd did when he suggested natural language
programming - “natural descriptions”! - as an alternative-.
In the long run I have more faith in any rephrasing of the
programming task that makes it more amenable to mathematical
treatment. But you must have lots of patience, for the
run will be very long. It isn’t just mental inertia - that
problem can be solved generally by educating a new
generation of youngsters and waiting until the old ones
have died - . It is the development of a new set of
techniques needed to achieve a comparable degree of
proficiency.
Could you get me a copy of G.A. Mago’s (not
yet published) “A network of microprocessors to execute
reduction languages”? That might whet my appetite!
From various angles I have looked into such networks
and I am not entirely pleased with what I have
seen. Firstly I suspect our techniques for proving the
correctness of such designs: each next proof seems to
differ so much from all the previous ones, I suspect
that we haven't found the general patterns yet. Secondly
I discovered that all I could design were special
purpose networks, which, of course, made me suspect
the programming language in the Von Neumann style
which, already before you have chosen your problem,
seems to have set you on the wrong track.
Semantically speaking the semicolon is, of course,
only a way of expressing functional composition: it
imposes the same order that can also be expressed
with brackets - innermost brackets first -. In combination
with the distribution you can generate many innermost
bracket pairs, thus expressing very honestly that it is
really only a partial order that matters. I like that,
it is honest.
When you write “one can transform programs [....]
by the use of laws [...] which are _part of the program-
ming language_” etc. I am somewhat hesitant. I am
not convinced (yet?) that the traditional separation in
fixed program, variable data and assertions is a
mistake. The first and the last - program and assertions -
are somewhat merged in LUCID, in which correctness
proofs are carried out in (nearly) the same formalism
as the program is expressed in. On the one hand that
presents a tempting unification, on the other hand I
thought that mathematicians separated carefully and
for good reasons the language they talk about and
the metalanguage in which they do so. To put it in
another way: given a functional program, I feel only
confident if I can do enough other significant things
to it besides easy [struck out three times] carrying it out. And those I don’t
see yet. The almost total absence of redundancy
is another aspect of the same worry. In the case
of a traditional program we know how to make it
redundant: by inserting assertions, combination
of text and assertions makes it into a logically
tightly knit whole, that gives me confidence. How
do we do this with functional programs? By supplying
two of them and an argument that demonstrates
their equivalence?
What about the following example? (My notation
because I lack the fluency in yours.)
(1) Consider the function f defined by:
f(0) = 0, f(1) = 1, f(2n) = f(n), f(2n+1) = f(n) + f(n+1)
(2) Consider the following program (“peven” = “positive and even”,
so for “podd”)
{N>=0} n, a, b := N, 1, 0;
_do_ peven(n) -> a, n := a + b, n/2
[] podd(n) -> b, n := b + a, (n-1)/2
_od_ {b=f(N)}
Here (1) gives a recursive definition of f, (2) gives
a repetitive program. Both definitions can be translated
in a straightforward manner into functional programs.
What would be involved in the demonstration of their
equivalence?
The above seems to me a little example of the appropriate
degree of sophistication to try your hands on. (I am
not going to try it myself, as I fear that my past
experience would misguide me: I am afraid that I
wouldn’t rise above the level of translating (2)’s
traditional correctness proof - with which I am very
familiar - in an unfamiliar notation. Good luck!)
With my greetings and warmest regards,
yours ever
_Edsger_
P.S. Please note my new postal code: 5671 AL
(capitals obligatory) _in front of_ the village name
EWD.
Maybe someone with OCR software at hand can give the typed ones a try?
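Not a substitute for the formal equivalence argument Dijkstra is asking for, but for anyone who wants to play with the example in the letter: here is a minimal sketch in Python of both definitions, (1) the recursion and (2) a transliteration of the guarded-command loop, with a brute-force check that they agree on a range of inputs.

    def f_recursive(n):
        # (1)  f(0)=0, f(1)=1, f(2n)=f(n), f(2n+1)=f(n)+f(n+1)
        if n == 0:
            return 0
        if n == 1:
            return 1
        if n % 2 == 0:
            return f_recursive(n // 2)
        return f_recursive(n // 2) + f_recursive(n // 2 + 1)

    def f_iterative(N):
        # (2)  {N>=0} n, a, b := N, 1, 0;
        #      do peven(n) -> a, n := a+b, n/2
        #      [] podd(n)  -> b, n := b+a, (n-1)/2
        #      od {b=f(N)}
        n, a, b = N, 1, 0
        while n > 0:
            if n % 2 == 0:
                a, n = a + b, n // 2
            else:
                b, n = b + a, (n - 1) // 2
        return b

    assert all(f_recursive(N) == f_iterative(N) for N in range(2000))

The loop maintains the invariant f(N) = a*f(n) + b*f(n+1), which is exactly the kind of fact a calculational equivalence proof would have to make explicit.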
As an addition to your transcription, here is the last letter transcribed:
to John Backus
91 Saint Germain Avenue
SAN FRANCISCO, California 94114
U.S.A.
12th July 1979
Dear John,
My schedule is very tight, and I am afraid that I
must disappoint John Williams; Monday 20 or Tuesday 21
August seem the only two slots that are really available.
I shall arrive in the USA on Sunday 29th July, and
had already committed myself for the week of Monday
30 July to Burroughs in Mission Viejo. My wife and
the two youngest children will join me for the first
three weeks of the trip, and the week of 6th August
was planned as "the real holiday." Since then Bill
McKeeman has twisted my arm: the programming
course to be given that week was so heavily over-
booked that he asked me to give a second course,
in parallel to David's. Under those circumstances
I would like to avoid further commitments in
the week starting on Monday 13: it is their last
week in the USA, and I have then been working
almost all the time!
Immediately after WG2.3 I shall go to Austin,
Texas (Burroughs and University), and then home!
If I have a completely free choice, I think I
would prefer Monday 20 slightly over Tuesday 21.
New title and abstract:
Title: Termination Detection for Diffusing Computations
Abstract: The activity of a finite computation may
propagate over a network of machines, when machines
may delegate (repeatedly) subtasks to their
neighbours; when such a computation is fired
from a single machine, we call it "a diffusing
computation." We present a signalling scheme
--to be superimposed on the diffusing computation--
enabling the machine that fired it to detect its
termination. Our signalling scheme is perfectly
general in the sense that, independently of the
topology of the network, it is applicable to
any diffusing computation. (End of abstract.)
Please give my regards to John Williams and tell
him how sorry I am that I shall miss him. I
am not familiar with distance and transport facilities
between Santa Cruz and San Jose. If it is better
when I come to San Jose the day before that
is fine with me; may I leave the logistics to
you? (Wife and children leave on Friday 17th of August.)
With my greetings and best wishes,
yours ever
_Edsger_
What do you say? Shall we divide the task among a few of us and type this up? (Or do people perhaps do this with OCR nowadays?) Would there be enough interest in having these letters in searchable form to motivate the effort?
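As far as I can tell, the new title and abstract in this letter describe the scheme Dijkstra later published with Scholten as "Termination Detection for Diffusing Computations" (Information Processing Letters, 1980). For anyone curious what the signalling scheme looks like in practice, here is a small toy simulation in Python; the network, the random delegation of subtasks, and all names are invented for the example, and it only captures the basic deficit/engagement-tree idea, not the paper itself.

    import random
    from collections import deque

    class Node:
        def __init__(self, name, is_root=False):
            self.name = name
            self.is_root = is_root
            self.parent = None   # node whose message engaged us (the root has none)
            self.deficit = 0     # basic messages sent but not yet acknowledged

    def try_to_retire(node, in_flight):
        # A non-root node returns the deferred acknowledgement to its parent only
        # once every message it sent has itself been acknowledged.
        if node.parent is not None and node.deficit == 0:
            in_flight.append(("ack", node, node.parent))
            node.parent = None

    def simulate(num_nodes=8, seed=3):
        rng = random.Random(seed)
        nodes = [Node(i, is_root=(i == 0)) for i in range(num_nodes)]
        root, others = nodes[0], nodes[1:]
        in_flight = deque()          # (kind, sender, receiver)

        # the root fires the diffusing computation by delegating a few subtasks
        for _ in range(3):
            in_flight.append(("msg", root, rng.choice(others)))
            root.deficit += 1

        while in_flight:
            kind, src, dst = in_flight.popleft()
            if kind == "msg":
                if dst.parent is None and not dst.is_root:
                    dst.parent = src                      # this message engages dst
                else:
                    in_flight.append(("ack", dst, src))   # already engaged: ack at once
                # "perform the subtask", possibly delegating further subtasks
                for _ in range(rng.choice([0, 0, 1, 2])):
                    in_flight.append(("msg", dst, rng.choice(others)))
                    dst.deficit += 1
                try_to_retire(dst, in_flight)
            else:                                          # an acknowledgement arrives
                dst.deficit -= 1
                try_to_retire(dst, in_flight)
                if dst.is_root and dst.deficit == 0:
                    print("root detects termination; messages still in flight:",
                          len(in_flight))

    simulate()

The point of the scheme is exactly what the abstract claims: detection does not depend on the topology or on what the computation does, only on the message/acknowledgement bookkeeping.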
> Would there be enough interest in having these letters in searchable form to motivate the effort?
Yes, yes, yes.
> Shall we divide the task among a few of us and type this up?
I did the first one, see my other comment. It took me about 20 minutes to type and another 20 minutes to proofread. It is one of the longer ones and handwritten. If a few others join in, it is absolutely doable.
I'm willing to do at least one, but would like to have some assurance that I'm not working on the same letter as someone else before doing so. I'm more than willing to take on one of the handwritten ones, as I intend to just transcribe it with no effort at OCR anyway.
Edit: I'm going to just do Dijkstra to Backus, 1978–05–29 right now, update when done
I only just noticed that weinzierl did this one four hours ago.
Dr. Edsger W. Dijkstra
to John Backus
International Business Machines Corporation
5600 Cottle Road
SAN JOSE CA 95193
U.S.A.
Monday 29th of May, 1978
"To interrupt one's own researches in order to follow those of another is a scientific pleasure which most experts delegate to their assistants."
(Eric Temple Bell in "Development of Mathematics")
Dear John,
I do not claim to be more noble or sensible than "most experts"; perhaps it is only because I have only one assistant to whom I can delegate no more than one man can do. But when you open your letter with: "I am quite sure that you have never read any paper I've sent you before" it is my pleasure to inform you that - although "quite sure" - you were very wrong. I very well remember that you mailed me a sizeable paper on reduction languages to which I owe my introduction to the subject. I didn't only read it, parts of it were even studied. I also remember that it left me with mixed feelings.
I can very well understand your excitement, although, for lack of experience, I still cannot share it. I am far from delighted with the state of the art of programming today, and anyone suggesting an alternative has in principle my sympathy - until, of course, he loses it again, like Terry Winograd did when he suggested natural language programming - "natural descriptions"! - as an alternative -. In the long run I have more faith in any rephrasing of the programming task that makes it more amenable to mathematical treatment. But you must have lots of patience, for the run will be very long. It isn't just mental inertia - that problem can be solved generally by educating a new generation of youngsters and waiting until the old ones have died -. It is the development of a new set of techniques needed to achieve a comparable degree of proficiency.
Could you get me a copy of G. A. Mago's (not yet published) "A network of microprocessors to execute reduction languages"? That might whet my appetite! From various angles I have looked into such networks and I am not entirely pleased with what I have seen. Firstly I suspect our techniques for proving the correctness of such designs: each next proof seems to differ so much from all the previous ones, I suspect that we haven't found the general patterns yet. Secondly I discovered that all I could design were special purpose networks, which, of course, made me suspect the programming language in the Von Neumann style which, already before you have chosen your problem, seems to have set you on the wrong track.
Semantically speaking the semicolon is, of course, only a way of expressing functional composition: it imposes the same order that can also be expressed with brackets - innermost brackets first -. In combination with the distribution you can generate many innermost bracket pairs, thus expressing very honestly that it is really only a partial order that matters. I like that, it is honest.
When you write "One can transform programs [....] by the use of laws [...] which are _part of the programming language_" etc. I am somewhat hesitant. I am not convinced (yet?) that the traditional separation in fixed program, variable data and assertions is a mistake. The first and the last - program and assertions - are somewhat merged in LUCID, in which correctness proofs are carried out in (nearly) the same formalism as the program is expressed in. On the one hand that presents a tempting unification, on the other hand I thought that mathematicians separated carefully and for good reasons the language they talk about and the metalanguage in which they do so. To put it in another way: given a functional program, I feel only confident if I can do enough other significant things to it besides carrying it out. And those I don't see yet. The almost total absence of redundancy is another aspect of the same worry. In the case of a traditional program we know how to make it redundant: by inserting assertions, combination of text and assertions makes it into a logically tightly knit whole, that gives me confidence. How do we do this with functional programs? By supplying two of them and an argument that demonstrates their equivalence?
What about the following example? (My notation, because I lack the fluency in yours.)
(1) Consider the function f defined by:
f(0)=0, f(1)=1, f(2n)=f(n), f(2n+1)=f(n)+f(n+1)
(2) Consider the following program ("peven" = "positive and even", so for "podd") [1]
{N >= 0} n,a,b := N,1,0;
do peven(n) -> a,n := a+b,n/2
| podd(n) -> b,n := b+a,(n-1)/2
od {b=f(N)}
Here (1) gives a recursive definition of f, (2) gives a repetitive program. Both definitions can be translated in a straightforward manner into functional programs. What would be involved in the demonstration of their equivalence?
The above seems to me a little example of the appropriate degree of sophistication to try your hands on. (I am not going to try it myself, as I fear that my past experience would misguide me: I am afraid that I wouldn't rise above the level of translating (2)'s traditional correctness proof - with which I am very familiar - in an unfamiliar notation. Good luck!)
With my greetings and warmest regards,
yours ever
Edsger
P.S. Please note my new postal code: 5671 AL (capitals obligatory) _in front of_ the village name.
EWD.
Transcriber's notes:
[1] This was a little difficult to transcribe cleanly into pure text. The "od" on the last line is clearly the ending delimiter for the opening "do" (both of which were lowercase and underlined). The pipe on the second-last line was, I think, meant to clearly show that the line beside it was within the do loop.
In a few places Dijkstra underlined text. With some I have not carried that over, with others I've surrounded the text with underscores.
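An aside, not part of the letter: a direct rendering of Dijkstra's (1) and (2) as functional programs might look roughly like the Haskell sketch below (my own, hedged translation). The interesting part of his challenge - demonstrating their equivalence by calculation rather than by testing - is of course left exactly where he left it.

    -- A hedged sketch, not part of the correspondence: the two definitions
    -- from the letter rendered as Haskell functions, so the equivalence
    -- question can at least be stated executably.

    -- (1) the recursive definition of f
    f :: Integer -> Integer
    f 0 = 0
    f 1 = 1
    f n
      | even n    = f (n `div` 2)                              -- f(2m)   = f(m)
      | otherwise = f ((n - 1) `div` 2) + f ((n + 1) `div` 2)  -- f(2m+1) = f(m) + f(m+1)

    -- (2) the repetitive program, with the loop state (n, a, b) made explicit
    g :: Integer -> Integer
    g bigN = loop bigN 1 0
      where
        loop n a b
          | n > 0 && even n = loop (n `div` 2)       (a + b) b        -- peven(n)
          | n > 0           = loop ((n - 1) `div` 2) a       (b + a)  -- podd(n)
          | otherwise       = b                                       -- {b = f(N)}

    -- a finite sanity check; testing is, of course, not the demonstration
    -- of equivalence Dijkstra asks for
    main :: IO ()
    main = print (all (\n -> f n == g n) [0 .. 1000])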
update - not working on this one now. The pages are pretty readable, if you save them. The image display stuff that Medium is using is what screws them up to the point of unreadability.
I found their prose to be quite good. It's sad that it seems long-winded to modern readers. These letters are not long. Maybe attention spans are too short.
Some of the "verbosity" seems to arise from their apparently mutual concern to preserve their relationship by delivering criticism of ideas or behaviors without attacking the person. We could certainly use more of this style of communication today, but all it earns now is a "TL;DR" at the beginning or the end.
It's amazing, isn't it, the language that surrounds the detailed arguments: the constant attention to recognizing the effort put into the designs and ideas, trying not to offend, but clearly still managing to do so.
My technical correspondence is nowhere near this kind, and I know that it offends in some cases, but I also find that anything over 2-3 paragraphs these days is generally ignored.
Email, texts, slack, etc. have greatly abbreviated our conversations, allowing for many more, but reducing their quality.
I'm sad that in this thread and https://news.ycombinator.com/item?id=11786193, nobody is actually talking about the technical arguments at play. I've only read the EWD692 "opening salvo" but already there are interesting things to point out.
EWD1303:
> The profound significance of Dekker's solution of 1959, however, was that it showed the role that mathematical proof could play in the process of program design. Now, more than 40 years later, this role is still hardly recognized in the world of discrete designs. If mathematics is allowed to play a role at all, it is in the form of a posteriori verification, i.e. by showing by usually mechanized mathematics that the design meets its specifications; the strong guidance that correctness concerns can provide to the design process is rarely exploited. On the whole it is still "Code first, debug later" instead of "Think first, code later", which is a pity, but Computing Science cannot complain: we are free to speak, we don't have a right to be heard. And in later years Computing Science has spoken: in connection with the calculational derivation of programs —and then of mathematical proofs in general— the names of R.W. Floyd, C.A.R. Hoare, D. Gries, C.C. Morgan, A.J.M. van Gasteren and W.H.J. Feijen are the first to come to my mind.
Backus as quoted in EWD692 (Dijkstra attacks this):
> One advantage of this algebra over other proof techniques is that the programmer can use his programming language as the language for deriving proofs, rather than having to state proofs in a separate logical system that merely [sic!] talks about his programs.
From the perspective of a PL person like myself, these ideals are very compatible. Without correctness by construction there is not enough proof reuse, so formality will forever be doomed to niche applications (aka computers embedded in dangerous things). Likewise, after trying out intuitionistic-type-theory-based things (e.g. Agda), separating the program and proof languages just seems clumsy. Overall, separating programming and proof, whether spatially (separate languages) or temporally (correctness proved after the fact), is bad, and the reasons hardly depend on the dimension of separation.
IIUC, Backus also talks about correctness proofs simply being a lot harder for "Von Neumann model" languages, a separate point from the language the proof itself is stated in. Well, that's a huge selling point for PL research to this day; Dijkstra evidently came fully on board years later, as evidenced in https://www.cs.utexas.edu/users/EWD/transcriptions/OtherDocs...:
> A fundamental reason for the preference is that functional programs are much more readily appreciated as mathematical objects than imperative ones, so that you can teach what rigorous reasoning about programs amounts to.
> separating the program and proof languages just seems clumsy.
Lamport's TLA (which serves as the basis for TLA+) is simpler than Agda because in Agda, even though you use the same syntax for both types (i.e. propositions) and programs, they are conceptually separate, while in TLA they are the same. Both the program (well, its specification) and its properties are just logical propositions. TLA+, which is untyped (and based on TLA and ZFC), has the added advantage of being far, far (far) easier to learn than any dependently-typed language. TLA was introduced in this paper[1], which begins thus:
> Correctness of the algorithm means that the program satisfies a desired property. We propose a simpler approach in which both the algorithm and the property are specified by formulas in a single logic. Correctness of the algorithm means that the formula specifying the algorithm implies the formula specifying the property, where implies is ordinary logical implication. We are motivated not by an abstract ideal of elegance, but by the practical problem of reasoning about real algorithms. Rigorous reasoning is the only way to avoid subtle errors in concurrent algorithms, and we want to make reasoning as simple as possible by making the underlying formalism simple.
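To give that a concrete shape (my own gloss, not from the paper): a TLA specification of an algorithm is typically a single temporal formula, a desired property such as an invariant is another formula, and "the algorithm is correct" is then literally one implication to be proved valid:

    Spec  ≜  Init ∧ □[Next]_vars    (the algorithm, as one formula)
    Prop  ≜  □Inv                   (the desired property)
    correctness:  ⊨  Spec ⇒ Prop    (ordinary logical implication)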
However, there is a debate between the PL approach (championed by Backus, Milner) and the specification approach (championed by Dijkstra, Lamport) over utility. The question is whether proving at the code level is empirically viable (at a reasonable cost) for all but the simplest of programs. Indeed, so far the answer seems to be no. There has been only one program ever written and verified using dependent types (CompCert), it is certainly non-trivial but rather small, yet it required a world-class expert, took a lot of effort, and even then required corner-cutting (Leroy says he gave up on proving termination, simply adding a counter and throwing a runtime exception if it runs out). TLA+, OTOH, is used extensively and regularly by Amazon (and Oracle, Microsoft and others), by "plain" engineers, on real-world large systems (far larger than CompCert), and management loves it because it actually seems to save them time and money. This is only possible because TLA+ is not the programming language.
Lamport doesn't reject the theoretical possibility of a PL that could provide large-scale verification of some sort (end-to-end verification will always be extremely expensive due to simple complexity arguments), only points out that no PL has so far come anywhere close to fulfilling this promise. Of course, TLA+ doesn't provide end-to-end verification, but that is too expensive -- and unnecessary -- for 99% of the industry anyway. I think that the ideas are compatible, but so far in theory only, while in practice they are not (yet?).
Has there ever been another non-trivial real-world program written in Coq (or another dependently-typed language for that matter) other than CompCert?
> Didn't know people actually used TLA.
Oh, far more than Coq; possibly more than Isabelle, even. TLA+ was designed for engineers, and evolved along with careful observation of how it is used by them. Coq is mainly used by type theoreticians to explore proving various mathematical theorems using ITT, and by PL people exploring various PL theories. TLA+ is used by engineers for large-scale software[1][2][3].
> I'd still want a it embedded in a hosting type-theory so I can use it both for proofs and macros.
I'm not sure what you mean by macros (I'm not a type-theory person in the least), but what does it matter if it's a type theory or a set theory? Lamport's thesis was that just as virtually all math proofs don't require type theory, neither do program proofs; indeed virtually none of the algorithm proofs in computer science employ type theory at all. TLA+ is a formalization of such proofs.
The main advantage of TLA+ is that it's as powerful as Coq for verifying algorithms (though it is not designed for proving general mathematical theorems), yet any engineer can master it in weeks, without learning any new (not to speak of rather exotic) math. Also, it is amenable to (and indeed supports) model checking, which alone is enough to make it practical. Deductive machine-checked proofs (which TLA+ also supports) are just too time-consuming, and have only ever been used on full programs a handful of times. Their main use in industry is to tie together model-checked programs[4].
TLA+'s disadvantage is that it's very hard to extract a program from a TLA+ spec. Translations going the other way (from code to TLA+) have been done a number of times (for C and Java) but only in academia AFAIK. But this is unsurprising, as Lamport's (and Dijkstra's) thesis is that effective, practical program verification can only be done at a level higher than program code (unless it's a small program, specifically written in a simplified manner for the sake of verification, and the very large effort is acceptable -- all three conditions that have been met in the few cases where deductive, whole-program verification has ever been used).
Dijkstra had a very good command of English for a Dutchman; far better than the average American. It seems he could easily have taught first-year English. Look at the way he nicely incorporates quotes, and at his diction and sentence variety.
From this we can tell where he placed his efforts; if you want to troll computer science on an international level, you better beef up your English!
He would have made a great asset to any software team, as a documenter; I wouldn't have had him write much code, though.
My understanding is that English is a natural second language for the Dutch and is taught extensively in schools (i.e. learned by everyone). Also, they are close enough to the U.K. to receive broadcast television, etc. so they also have that level of immersion.
Every Dutch person I've met has had a terrific command of the language, much better than average native speakers. I suspect that at least some of this is due to the fact that as second-language students they actually take time to learn the rules of grammar, etc. Most native speakers pick it up "on the street", so the Dutch (that I've met) tend to sound more formal and educated, especially when there's no discernible accent.
"Dikjstra had very good command of English for a Dutchman; far better than the average American."
Once upon a time, I read a thriller of the Cussler style where a Russian translator for the CIA or something goes to a conference and gets to exchange knowing glances with two colleagues over some mistake that only they recognized. Then I realized that there would have likely been more than a few native Russian speakers in the room.
Dear John,
...
But when you open your letter with: “I am quite sure that you have never read any paper I’ve sent you before” it is my pleasure to inform you that - although “quite sure” - you were very wrong.
Could anybody explain why this comment by "internaut" was downvoted so heavily?
I found internaut's comment to be insightful, well-conceived and on-topic. What's wrong with this comment?
EDIT: Moreover, who downvoted me so quickly for asking this question? This happened almost immediately after I posted this. Are there some nasty bots at play here?
I don't know what to make of this. This hint is not useful at all. The guideline in question ...
> Please resist commenting about being downvoted.
... does not explain anything here.
1. This guideline does not explain the downvote of internaut's comment.
2. This guideline does not explain the downvote of my initial comment, where I complained about another person being downvoted, not about myself being downvoted (which I wasn't at that point in time).
3. This guideline does not explain the situation after my EDIT. Although it violated that guideline, it did not receive heavy downvotes, but received multiple upvotes instead.
It simply asks you not to go on (and continue to go on) about downvoting. The forum is better if everyone does as it asks. If you're really worried about being targeted by some evil bot, you can just email the site admins.
You were asking me to follow the guidelines. Which is perfectly fine. I wasn't aware that I violated them, I'm thankful for the hint, and I'm more than willing to respect and follow them.
I just didn't get that from your very short initial comment, because I misunderstood it.