Some of those are classics. The Lipton-Perlis-DeMillo argument against program verification is a good one.[1] They demonstrated that manual program verification via hand theorem proving is itself error-prone. Automated theorem proving was new back then.
This was an issue back in the 80s for a philosophical reason in mathematics. Mathematics classically approves of short, clever, elegant proofs. The paper-and-pencil crowd was trying to do that for code. In practice, program verification is about mechanized grinding through massive numbers of dumb assertions to catch off-by-one errors.
This used to bother mathematicians. When the four-color theorem was originally proved, it was done partly by hand and partly with computer case analysis, and the computer part bothered many mathematicians. The culture has changed. More recently, the proof was redone with the parts originally done by hand also checked by computer. Since that eliminated some possible human errors, it was a step forward in solidifying the proof.
We also know now how to check a theorem prover. Some theorem provers emit a proof trace: huge files of "Step 2437: apply rule 42 to change 2*X to X+X". A proof trace can be fed into a dumb proof checker, which performs the indicated rewrite and checks that the expected output is obtained. So progress has resolved that issue.
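To make the "dumb checker" idea concrete, here is a minimal sketch in C. The trace format and rule table are hypothetical (real provers use schematic rules applied to subterms, not literal strings); the point is only that the checker needs to know how to apply a named rule and compare the result with what the trace claims.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical rewrite rules: each maps one input form to one output form.
 * A real prover's rules are schematic (patterns with variables, applied to
 * subterms); literal strings keep this sketch short. */
struct rule { int id; const char *from; const char *to; };

static const struct rule rules[] = {
    { 42, "2*X", "X+X" },   /* doubling rewritten as addition */
    {  7, "X+0", "X"   },   /* additive identity */
};

/* One trace step: "apply rule <id> to <input>, expect <output>". */
struct step { int rule_id; const char *input; const char *expected; };

/* The checker is deliberately dumb: look up the rule, perform the rewrite,
 * and confirm that the prover's claimed output really is what comes out. */
static int check_step(const struct step *s) {
    for (size_t i = 0; i < sizeof rules / sizeof rules[0]; i++) {
        if (rules[i].id != s->rule_id)
            continue;
        if (strcmp(s->input, rules[i].from) != 0)
            return 0;                                 /* rule does not apply */
        return strcmp(rules[i].to, s->expected) == 0; /* claim matches rewrite? */
    }
    return 0;                                         /* unknown rule */
}

int main(void) {
    const struct step trace[] = {       /* e.g. "Step 2437" from above */
        { 42, "2*X", "X+X" },
        {  7, "X+0", "X"   },
    };
    for (size_t i = 0; i < sizeof trace / sizeof trace[0]; i++)
        printf("step %zu: %s\n", i, check_step(&trace[i]) ? "ok" : "REJECTED");
    return 0;
}
```

Trusting the whole chain then reduces to trusting this small checker (and the rule list), which is far easier to audit than the prover itself.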
They start with small logics you can do by hand in verified code. They gradually build on them, one layer at a time, with increasingly sophisticated logics, until they have a useful theorem prover for first-order logic. I also found a HOL-to-FOL converter. Since most of their work is in HOL, the verification chain is nearly complete at that point.
This was the one I was going to bring up as well. USC-ISI had several students working in this space in the early '80s. The other one that comes to mind is the argument about AI and whether it was a real thing or an artifact of better algorithms.
While not exactly a debate -- more of a fundamental difference in outlook -- these are interesting, completely opposite claims:
Bob Harper[1]:
> There is an alternative… without… reference to an underlying machine… [W]e adopt a linguistic model of computation, rather than a machine model, and life gets better! There is a wider range of options for expressing algorithms, and we simplify the story of how algorithms are to be analyzed.
Leslie Lamport[2]:
> Thinking is not the ability to manipulate language; it’s the ability to manipulate concepts. Computer science should be about concepts, not languages. … State machines… provide a uniform way to describe computation with simple mathematics. The obsession with language is a strong obstacle to any attempt at unifying different parts of computer science.
So far, the evidence is leaning toward the latter view, where physics imposes hard constraints on how we build our CPUs, etc., for maximum performance. The same goes for software, where certain things keep showing up (e.g., caching) due to mechanical constraints. The best result would seem to be a combination of abstract models for mathematical analysis and high-level languages that map easily to both those models and the machines. Of course, many people have produced such tools. We're going in what appears to be the objectively correct direction. :)
Lamport also advocates abstract models and high-level specification, but in program logics (based on abstract state machines) rather than in programming languages.
That's what I said, but in different words. That's also been where the payoff has been in just about any automated activity on software: generally some math notation that's easy to analyze, or something close to CPU/memory models.
Langsec is a notable exception where Chomsky's hierarchy paid off big time. The notations and implementations are still more consistent with Lamport, though. So, a semi-exception.
Pardon my anecdote, but after years of imperative and OOP, ad nauseam, I went into LISP/ML to be stateless. But then structural parametric types and "state" got closer and closer at the conceptual level, just not "mutable".
I was a co-author on one of the contentious papers [0] in log-structured file systems, addressing the question of whether log cleaning could be done during idle times (yes, for any reasonable workload).
There was a surprising amount of drama and personal rivalry. While that was distracting, knowing that our results would be picked apart motivated us to be really rigorous.
In the end, modern file systems take the best aspects of both. Like the RISC-CISC debate, it's hard to draw a firm conclusion about the two ideas because they're tied up with the quality of implementations, and the optimal answer involves a synthesis of the ideas.
One debate that I thought would be mentioned is the exchange of "Go To Statement Considered Harmful", "'GOTO Considered Harmful' Considered Harmful", and "'"GOTO Considered Harmful" Considered Harmful' Considered Harmful?". I think it's somewhat disappointing that a lot of CS education always mentions Dijkstra's original argument but not the other side. The latter two can be found here:
That's because Dijkstra's original argument was part of a research line that created structured programming. The essay (and the other papers) really has nothing against the usage in your first link. The real problem was goto into loops or subroutines, not out of them, and not goto used as a switch statement.
Current languages mostly don't even allow the bad kinds anymore, so people can ignore history, and complain that "goto isn't so bad, I've never seen anybody abusing it like he claims in the essay."
If the bad kinds of `goto` aren't even allowed by compilers any more, then there is precious little to be gained by teaching youngsters that gotos are bad.
But we do teach them that. As a result I recently had to review changes made by a (very capable) junior colleague who failed to actually implement the desired feature, but did get so upset by the goto-based error handling that he replaced it with incorrect exception-based code.
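For readers who haven't seen it, this is roughly what goto-based error handling looks like in C. It is a hedged sketch with made-up names (`load_config` and the rest are mine, not the colleague's actual code): every goto jumps forward to a single cleanup path, which is exactly the disciplined usage Dijkstra's letter was not aimed at.

```c
#include <stdio.h>
#include <stdlib.h>

/* Conventional C cleanup pattern: acquire resources in order, and on any
 * failure jump forward to the point that releases everything acquired so far. */
static int load_config(const char *path, char **out) {
    int rc = -1;
    FILE *f = NULL;
    char *buf = NULL;
    size_t n;

    f = fopen(path, "rb");
    if (!f)
        goto done;

    buf = malloc(4096);
    if (!buf)
        goto close_file;

    n = fread(buf, 1, 4095, f);
    if (ferror(f))
        goto free_buf;

    buf[n] = '\0';
    *out = buf;
    buf = NULL;              /* ownership transferred to the caller */
    rc = 0;

free_buf:
    free(buf);               /* free(NULL) is a harmless no-op */
close_file:
    fclose(f);
done:
    return rc;
}

int main(void) {
    char *text = NULL;
    if (load_config("example.conf", &text) == 0) {  /* hypothetical file name */
        printf("%s\n", text);
        free(text);
    }
    return 0;
}
```

Without exceptions, C has no cheaper way to guarantee that fclose() and free() run on every early exit, which is why this pattern survives in otherwise goto-averse codebases such as the Linux kernel.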
When teaching a language that supports bad gotos, or when teaching compilers. Otherwise, you can count a lot of people (including me) out of that undefined "we" pronoun.
There's little point in teaching newbies about goto at all. The few modern uses are for experts, because it can still lead to some bad code, just not the kind of "bad" Dijkstra was talking about. Yet there's a group of people who will evangelize about any subject you can think of, usually people with very shallow knowledge of the subject.
This is actually the main thing I like about Knuth. Things are either proven, in very strict terms, or they are studied empirically, and typically by rigorous means.
People usually present his work in terms of proofs only, typically with big-O considerations. But when you actually read him, he very quickly warns of the dangers of big-O analysis. (He is still a fan of it; at one point he even encouraged it as a math aid for grade-school work.)
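A hedged illustration (my example, not Knuth's) of one thing big-O analysis hides: constant factors. An O(n^2) algorithm can beat an O(n log n) one on small inputs, which is why production sorts typically bottom out in insertion sort.

```c
#include <stdio.h>
#include <string.h>

/* Sketch: mergesort that hands small subarrays to insertion sort.
 * Insertion sort is O(n^2) but has tiny constants and great locality,
 * so below some cutoff it wins despite the "worse" asymptotics. */
#define CUTOFF 16   /* typical threshold; found by measurement, not by big-O */

static void insertion_sort(int *a, size_t n) {
    for (size_t i = 1; i < n; i++) {
        int key = a[i];
        size_t j = i;
        while (j > 0 && a[j - 1] > key) { a[j] = a[j - 1]; j--; }
        a[j] = key;
    }
}

static void merge_sort(int *a, int *tmp, size_t n) {
    if (n <= CUTOFF) {      /* asymptotically worse, faster in practice here */
        insertion_sort(a, n);
        return;
    }
    size_t mid = n / 2, i = 0, j = mid, k = 0;
    merge_sort(a, tmp, mid);
    merge_sort(a + mid, tmp + mid, n - mid);
    /* merge the two sorted halves into tmp, then copy back */
    while (i < mid && j < n) tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid) tmp[k++] = a[i++];
    while (j < n)   tmp[k++] = a[j++];
    memcpy(a, tmp, n * sizeof *a);
}

int main(void) {
    int a[40], tmp[40];
    for (int i = 0; i < 40; i++) a[i] = (i * 7919) % 101;  /* pseudo-shuffled input */
    merge_sort(a, tmp, 40);
    for (int i = 0; i < 40; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}
```

The cutoff itself is exactly the kind of constant that big-O throws away; it can only be chosen by measuring.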
I recommend that anybody look at Dijkstra's notes on structured programming¹. They are a very worthwhile lesson in how to conduct impartial research and transform a field of knowledge.
But TL;DR: he dug up plenty of recurring cases of bad code, and proceeded to fix every one of them with very few coherent and systematic changes.
The best modern analysis of Dijkstra's original letter I've seen is David Tribble's "Go To Statement Considered Harmful: A Retrospective", which dissects it line by line and explains it in the historical context in which Dijkstra wrote it.
And of course the last one in the sequence is "Goto Considered Harmful Considered Harmful Considered Harmful". Yet another illustration of Ultrafinite Recursion!
I had heard vaguely of the 'microkernel debate' in the past, but never checked it out. Didn't realize the other main participant (aside from Linus Torvalds) was Andrew Tanenbaum. Very entertaining/interesting read. (The linked article refers us here: http://www.oreilly.com/openbook/opensources/book/appa.html)
The Shapiro article in Tanenbaum's link is good too. For anyone who doesn't know, he's a bright guy who did the EROS project. Here it is on archive.org:
To be fair, these are debates about issues that are rather insignificant from a broader CS perspective. Engineering problems (methodology, network protocols, file systems, kernel implementation details) or debates that are infamous mostly due to their polemic nature (Dijkstra-Backus, or Dijkstra-Anything, for that matter).
I think it's true. In CS there are no deep fundamental or philosophical debate-evoking schisms of the likes you sometimes find in other fields. Consider economics, philosophy, physics (theory of everything) or the soft sciences.
Even the hottest problem in CS (P vs NP, of course) doesn't seem to evoke much debate among computer scientists.
For example: someone in the nineties or noughties once experimentally "proved" that Java was faster than C for matrix maths -- because of the overhead of `free()`! If that sounds ridiculous to you, then understand that this really matters for certain implementations of certain matrix problems.
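The mechanism is easy to sketch (hypothetical code, not the benchmark in question; `accumulate_products` and the sizes are made up): if every intermediate matrix gets its own malloc/free inside a hot loop, the allocator ends up on the profile, while a GC'd runtime can defer and batch that reclamation. The usual C fix is equally simple: hoist the temporary out of the loop.

```c
#include <stdlib.h>

/* Naive C translation of "R += A*B, repeated": a fresh temporary per
 * iteration.  The malloc/free pair inside the loop is where a C version
 * can lose to a runtime that defers reclamation. */
static void matmul(double *out, const double *a, const double *b, int n) {
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            double s = 0.0;
            for (int k = 0; k < n; k++)
                s += a[i * n + k] * b[k * n + j];
            out[i * n + j] = s;
        }
}

static void accumulate_products(double *r, const double *a, const double *b,
                                int n, int iterations) {
    for (int it = 0; it < iterations; it++) {
        double *tmp = malloc((size_t)n * n * sizeof *tmp);  /* per-iteration alloc */
        if (!tmp)
            return;
        matmul(tmp, a, b, n);
        for (int i = 0; i < n * n; i++)
            r[i] += tmp[i];
        free(tmp);                                          /* per-iteration free */
    }
}

int main(void) {
    enum { N = 64, ITERS = 1000 };
    double *a = calloc(N * N, sizeof *a);
    double *b = calloc(N * N, sizeof *b);
    double *r = calloc(N * N, sizeof *r);
    if (!a || !b || !r)
        return 1;
    accumulate_products(r, a, b, N, ITERS);   /* allocator cost is paid ITERS times */
    free(a); free(b); free(r);
    return 0;
}
```

Whether that overhead is visible depends on the matrix size, the iteration count, and the allocator, which is the point: the result says more about one implementation strategy than about "C vs. Java" as such.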
So the take-away hangs not on the nature of C vs. Java, but on whether the study matters in the real world. Empirics in CS are good for engineers quietly doing their own job, but for debates they are as fraught as in the soft sciences.
Indeed CS is a soft-science. Or rather it is not a science at all, but a mathematical art, with bonus people issues.
[1] http://www.yodaiken.com/papers/p271-de_millo.pdf