On the cruelty of really teaching computing science (1988) (utexas.edu)
168 points by torstenvl 5 months ago | 136 comments



This is obviously a snapshot in time (the title includes 1988). I can’t attest to the state of computer science education in the late 1980s. He laments the term "software engineering" because software development does not fit the traditional, more mechanical sense of engineering.

The more mathematical, proof-oriented style that he is advocating for is sometimes used in aerospace (in particular for airworthiness) and in medical devices.

Some might say that his proposed methods only apply to software requiring that level of rigor. They are also really expensive.


We are slowly getting there; SE is just a bit over 50 years old. For example, Dafny (from Microsoft Research) is relatively easy to use and scales nicely to systems of 20-30 KLOC with some formal guarantees. It's a nice imperative language where specifications are encoded as contracts, so it's quite familiar to developers. Costs are obviously still relatively high, and it won't scale to bigger systems.
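
For anyone who hasn't seen the contract style: roughly, you attach a precondition ("requires") and a postcondition ("ensures") to each method and the verifier discharges them statically. Python has no such verifier, so the sketch below (my own toy example, not Dafny code) only mimics the shape of a contract with runtime assertions:

  def max_of(xs: list[int]) -> int:
      # "requires": the input is non-empty (a verifier like Dafny would prove this
      # holds at every call site; here it is merely checked at runtime)
      assert len(xs) > 0
      m = xs[0]
      for x in xs[1:]:
          m = max(m, x)
      # "ensures": the result is a member of xs and an upper bound for every element
      assert m in xs and all(x <= m for x in xs)
      return m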

I think LLMs could help a lot to lower cost by providing some automation to turn specifications into code. DeepMind has already shown a proof of concept for mathematical theorems using Lean. I have a toy implementation for Isabelle oriented towards SE that works quite well.


If you look at the problems that Google's system solved, the fast one was similar to problems in Knuth, and thus common in the corpus, while the other two were reducible to exhaustive search, especially with Lean 4 feedback, which is why they took more time than a human was given.

The results were impressive, but not sufficient for SWE.

Note that it completely failed at the combinatorial problems.

SW engineering is details, nuance, and tradeoffs.

The specification problem that led to an AI winter for expert systems still applies.

If we could create good, exhaustive contracts, programming would be easy.

The failure of UML as a code generation tool is a good example.

As LLMs are good at pattern finding, at least within the limits of the simplicity bias problem, I do think that they will have utility.

But as an Enterprise Architect, formal, complete specifications would completely remove the need for an LLM assistant, as I could just write fitness functions and let the developers run with what they wanted.

As we know, LLMs are in-context learners built of threshold circuits; they simply don't have the ability to consider tradeoffs and details.

Funnily enough, this post has the smell of an author who wasn't aware of papers that came out in the preceding couple of years describing the specification problem.


I agree, I am also not interested in a system that replaces humans. I am interested in a system that automates the boring parts and makes SWE simpler and closer to classical engineering disciplines while keeping humans in the loop.

That is, an engineer would design and decompose the problem and the system would help to fill in the gaps in proofs. Sometimes this won't be possible, so the human engineer will need to redefine specifications or insert some intermediate lemmas.


LLMs are really, really bad at proofs so far in my experience. Especially proofs in a proof assistant, since those are machine-checked and can't be faked.


That's what the AI would say.


that's exciting news! what does your toy implementation do?


It translates natural language into Isabelle/HOL specifications, and then tries to fill these up to prove things. One can then use Isabelle's code generation facilities to extract Scala, OCaml, etc. I think the trick is to have a programmer guide the whole process, especially the decomposition into different modules and abstractions.


Nice. I agree that LLMs will be game-changing for easing the writing of specifications/contracts for verified code; it could become the standard. Interesting that you're using Isabelle/HOL for code generation. I'm working towards auto-formalization of maths using LLM translation to an intermediate language between English and the proof assistant (for which I'm leaning towards Isabelle/HOL) and then a toolbox of techniques (i.e. ATP) for the remaining translation. Formalization is too painful at the moment. It's nearly the same problem as yours. But I'm starting out on simpler problems.

So I'm really interested in the "filling up" part, if you have anything to say about it.


what have you been able to get it to write so far?


(i hope this comment didn't sound skeptical or dismissive)


No worries! The comment was fine, I was simply gone so I couldn't reply.

TBH, I haven't written anything impressive, just toy examples that are formally verified.

I imagine making it practical for larger systems would require a lot more work.


i'm curious to know what you mean by toy examples! are we talking about an assignment statement, an arithmetic expression, finding the maximum of a list, insertion sort, mergesort, a parser for a context-free language, a cooperative multithreading system?


Some simple imperative data structures, including simple operations to add, remove, test membership, etc. with formal guarantees.


that sounds awesome! what does the process of getting to one of those proofs look like? is there a lot of back and forth, or is it mostly autonomous?


Mostly autonomous, with a bit of guidance, but this might be because the training set has examples that are similar enough, making interpolation easy.


fantastic! i hope you decide to share more details; i'd love to hear them


Thanks. It's a toy at the minute, there is nothing super impressive.

Taking this forward would require a lot more work and funding to pay for computation to fine-tune models further.

It's an interesting topic, perhaps good for a startup or a research lab in case one finds a decent monetization stream.


pretty sure you'll be able to monetize writing correct software at scale with an llm


Very few domains where software is used now make sense for that level of rigor, in practice.

At least with current levels of available staffing. The vast majority of software is being developed with levels of investment more akin to ‘residential construction’ than even ‘commercial construction’, let alone ‘major engineering projects’.

Which makes sense. Rules of thumb work well enough most of the time, and when it’s actually important (say, a structural member or, ahem, a crypto library) then it makes sense to get it looked at more carefully by someone who more deeply understands what is going on.

Though I don’t think we have a solid idea of what those areas are yet, let alone have codified them. So YOLO.


Great post. Yes when your GC or carpenter needs to remove a wall, they bring in a real structural engineer. These roles exist for software engineering also. Only thing we are missing is a real licensing and governance body.


Well, and some sort of stable target to aim at regarding ‘important’. There is pretty solid consensus in construction that things like foundation design, structural walls/supports, projects over a given size, etc. need engineering sign off.

There is no such consensus I can see right now on the software side, and software projects are also a lot more complex than a typical construction project in ways that are hard to quantify.

How would you even define a licensing test that wouldn’t be obsolete in a year or two even?


It depends on which parts of software engineering you work in. If you work anywhere where performance matters, people tend to know how computers work. If you work in the other 90% of software engineering, you’re likely actively writing code which is terrible from an engineering perspective. Look at how popular things like SOLID and Clean Code in general are, when they are a direct path to horrible performance as you enter the area of L1 cache misses and vtables on even an extremely simple function. It’s so weird to switch between working close to the hardware and working high above. Especially because Clean Code doesn’t actually seem easier to read or indeed maintain.

As an external examiner for CS students I can’t say I’m surprised. They aren’t taught science anymore, they’re taught practices and patterns, and since nobody knows how a computer actually works or how to write performant code it’s easy for various grifters to sell them nonsense.

I mean, how well would Clean Code, SOLID and to some degree Agile really sell if people knew the key people behind these things haven’t worked in software engineering since 20 years before Python was even invented? Probably not so well.


> When they are a direct path to horrible performance as you enter the area of L1 cache misses and vtables on even an extremely simple function. It’s so weird to switch between working close to the hardware and working high above. Especially because Clean Code doesn’t actually seem easier to read or indeed maintain.

If it's not on the critical path, what difference does it make? Code outside the critical path should still be optimized for something, and maintainability is as good a target as anything.


GP said

> Especially because Clean Code doesn’t actually seem easier to read or indeed maintain

If that is true, then there is indeed no point in applying Clean Code. But I disagree with GP that Clean Code automatically leads to badly performing code. That depends on your language and execution environment. A JVM is very good at effectively removing vtable indirection when it is not needed at runtime.


I don't think a compiler could ever reliably transpose an array-of-structs representation to a struct-of-arrays one.

There's also other issues with this paradigm, like creating objects out of the arguments to a function. That not only makes your code less maintainable (what if you need a subset of the variables), but now you'll have to drag all that data to each function you create.

Clean Code is, IMO, worthless; data- and domain-driven design practices accomplish the goals of Clean Code better than the paradigm itself does, and they also improve other considerations like correctness and performance.


>I don't think a compiler could ever reliably transpose an array-of-structs representation to a struct-of-arrays one.

An interesting question, but an SoA isn't necessarily better than an AoS: it depends on the access pattern, so the question is whether this optimisation could be added as part of PGO.
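
For anyone following along, here is a toy sketch of the two layouts (my own illustration, in Python only for brevity; the performance point of course only bites in compiled languages, where the SoA layout keeps each field contiguous in memory so a hot loop over one field doesn't drag the others through the cache):

  from dataclasses import dataclass

  @dataclass
  class Particle:        # AoS: one record per element, fields interleaved per element
      x: float
      y: float
      mass: float

  aos = [Particle(float(i), float(i), 1.0) for i in range(1000)]
  total_x_aos = sum(p.x for p in aos)   # in a compiled AoS layout, reading x also pulls
                                        # y and mass through the cache

  # SoA: one sequence per field; a loop over xs only touches the data it actually needs
  xs = [float(i) for i in range(1000)]
  ys = [float(i) for i in range(1000)]
  mass = [1.0] * 1000
  total_x_soa = sum(xs)

  assert total_x_aos == total_x_soa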


I'd also argue that even if vtable indirection can't be removed, it's unlikely to be a notable part of your performance profile, at least in the general case. Profile! You might instead see that the hasher for your hash table has a much bigger impact.


The critical path can often be very wide and long, and is not some small for loop deep down. Not all software is a video codec. Maintainability also does not necessarily exclude performance; smaller code can be easier to grasp and fits better into caches.


I’d argue that splitting what is essentially a four line check or default into something like three different functions makes it a lot harder to grasp.

I think my issue with clean code is primarily that it doesn’t actually make code more maintainable. As another poster mentioned, data or domain driven design does the same thing better.


> The programmer is in the unique position that his is the only discipline and profession in which such a gigantic ratio, which totally baffles our imagination, has to be bridged by a single technology. He has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before. Compared to that number of semantic levels, the average mathematical theory is almost flat. By evoking the need for deep conceptual hierarchies, the automatic computer confronts us with a radically new intellectual challenge that has no precedent in our history.

I am no biographer of Dijkstra’s, so is he being unrealistic about programmers here, or does he not have exposure to what a mathematician would consider Mathematics (Wikipedia entry claiming him a mathematician or no)?


> He has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.

This is not really true. There were complex multilayered systems before computers. In large systems, Western Electric #5 Crossbar was comparable to a large real-time program. General Railway Signal's NX system had the first "intelligent" user interface. But that level of complexity was very rare.

Both mechanical design and electronic design are harder than program design. The number of people who did really good mechanism design is tiny. There were only two good typesetting machines over most of a century - Mergenthaler's Linotype and Lanston's Monotype. Everybody else's machine was a dud. In the printing telegraph/Teletype business, Howard Krum and Ed Kleinschmidt designed the good ones, and the other twenty or so designs over many decades were much inferior. There have been lathes for centuries, but all modern manual lathes strongly resemble Maudslay's design from 1800.

There are far more good programmers than there were good mechanism designers or electronics engineers. Programming is not really that hard by the standards of other complex engineering.


> Both mechanical design and electronic design are harder than program design. The number of people who did really good mechanism design is tiny. There were only two good typesetting machines over most of a century - Mergenthaler's Linotype and Lanston's Monotype.

That’s a good example and of course it immediately brings to mind TeX, which is an equally monumental if not greater achievement. Certainly there’s no denying that TeX has considerably higher dimensionality than the pre-computerized hot type setting machines. Especially when you include all the ancillary stuff like Metafont.

Also recall that Dijkstra was a systems programmer in his industry career. He was well aware of the complexity of the computing hardware of the day—which was cutting edge electronic design. The semaphore wasn’t invented as a cute mathematical trick; he needed it to get hardware interrupts to work properly. Something which THE managed and Unix, among others, never quite did (although it did get to mostly good enough if you don’t mind minefields).

> There are far more good programmers than there were good mechanism designers or electronics engineers. Programming is not really that hard by the standards of other complex engineering.

Most programmers are incapable of writing a correct binary search, let alone something the size and complexity of TeX with only a handful of relatively minor errors. Programmers capable of that level of intellectual feat are indeed few and far between. I suspect they’re more rare than competent EEs or MEs.

Most programmers are more comparable to the guys cleaning the typesetters, not the ones designing them.


> TeX, which is an equally monumental if not greater achievement.

TeX didn't come out of nowhere. It's a successor to the macro-based document formatting system which began with RUNOFF and went through roff, nroff, tbl, eqn, MM, troff, ditroff, and groff. The last remaining usage of those tools seems to be UNIX-type manual pages. There was so much cruft a restart was required.


Linotype didn’t come out of nowhere either. Printers used to cast type manually.

And don’t just gloss over TeX’s astounding correctness. It’s a truly remarkable feat of the human intellect to design something so large with so few errors.


For those who haven't seen Knuth's own analysis of his errors while writing TeX, it's well worth reading his 1989 article "The Errors of TeX" [1] and glancing through the full chronological list of errors [2].

[1] https://yurichev.com/mirrors/knuth1989.pdf

[2] https://ctan.math.utah.edu/ctan/tex-archive/info/knuth-pdf/e...


> Most programmers are incapable of writing a correct binary search

If you exclude the programmers who are incapable of writing FizzBuzz (who I would consider "not programmers", no matter what job title they managed to acquire), then I'm pretty sure your statement is false.

If you mean "could sit down and write one that worked the first time without testing", then yes, you could be right. But could not write one at all? I don't buy it.


> If you exclude the programmers who are incapable of writing FizzBuzz

FizzBuzz really is trivial. Binary search on the other hand is deceptively tricky[1]. It was well over a decade from its discovery to the first correct published implementation! No doubt if asked to write it, you'd look it up and say that's trivial, all the while double checking Internet sources to avoid the many subtle pitfalls. You might even be familiar with one of the more famous errors[2] off the top of your head. And even then the smart money at even odds would be to bet against your implementation being correct for all inputs.

And if you had to do it just from a specification with no outside resources? Much harder. At least unless you know how to formally construct a loop using a loop invariant and show monotonic progress toward termination each iteration. Which brings us back to the original submission. There are some programs that are pretty much impossible to prove correct by testing, but that can, relatively easily, be shown to be correct by mathematical reasoning. Since this is a comment on a submission by Dijkstra, here[3] is how he does it in didactic style.
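
For the curious, here is roughly what the invariant-driven construction looks like (my own sketch in Python, not Dijkstra's code from [3]): the invariant is that the key, if present at all, lies in the half-open range [lo, hi), and each iteration strictly shrinks that range, which gives termination. The midpoint is written as lo + (hi - lo) // 2; Python integers don't overflow, but in a fixed-width language that form avoids the famous bug from [2].

  def binary_search(xs, key):
      """Return an index i with xs[i] == key, or -1 if key is absent; requires xs sorted."""
      lo, hi = 0, len(xs)
      while lo < hi:                  # invariant: key, if present, lies in xs[lo:hi]
          mid = lo + (hi - lo) // 2   # overflow-safe midpoint in fixed-width arithmetic
          if xs[mid] < key:
              lo = mid + 1            # key, if present, is to the right of mid
          elif xs[mid] > key:
              hi = mid                # key, if present, is to the left of mid
          else:
              return mid
          # hi - lo strictly decreased, so the loop terminates
      return -1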

> If you mean "could sit down and write one that worked the first time without testing", then yes, you could be right. But could not write one at all? I don't buy it.

Yes that's what "correct" means. Code that only works sometimes is not correct.

[1] https://reprog.wordpress.com/2010/04/19/are-you-one-of-the-1...

[2] https://research.google/blog/extra-extra-read-all-about-it-n...

[3] https://www.cs.utexas.edu/~EWD/ewd12xx/EWD1293.PDF


It wouldn’t surprise me if there were fewer programmers who know of the overflow edge case (https://research.google/blog/extra-extra-read-all-about-it-n...) and will think of it when writing a binary search than programmers who have never heard of it.

If the implementation language doesn’t automatically prevent that problem (and that is fairly likely), the latter group likely would introduce a bug there.


> There are far more good programmers than there were good mechanism designers or electronics engineers.

But that doesn't mean a thing. The barrier to entry is much smaller.


Yeah, if I wanted to just get started with electronics engineering, the easiest, cheapest way would be… to use software. Programming / digital engineering is lower-stakes than physical stuff.


I think he was arguing that if programming consists of symbol manipulation and proofs, the task of writing proofs for large programs consists of a lot more symbol manipulation (although probably a lot more tedious in nature) than many proofs written by mathematicians in the past, something made worse by the need to precisely describe each step so that a machine could conceivably execute it instead of being able to skip over proof steps considered “obvious” to a human mathematician.

I think he was especially thinking of mathematical logic - he referred to programming as Very Large Scale Application of Logic several times in his writing.


For the peanut gallery:

If you ever want to see what it's like for a mathematician to not hand-wave anything away, look at excerpts from Whitehead & Russell in Principia Mathematica (no, not the Newton book).

It takes 362 pages (depending on edition) to get to 1+1=2

https://archive.org/details/principia-mathematica_202307/pag...

Of course, just like real mathematicians, in our everyday work we stand on the shoulders of giants, reuse prior foundational work (I've yet to personally write a bootloader, OS, or language+compiler, and I include 3p libraries), and then hope that any bugs in our proofs are caught during peer review. Like in math, sometimes peer review for our code ends up being a rubber stamp, or our code/proofs aren't that elegant, or they work for the domain we're using them in but there are latent bugs/logic errors which may cause inconsistencies or require a restriction of domain to properly work (e.g. code only works with ASCII, or your theorem only works for compact sets).

And of course, the similarities aren't a coincidence

https://en.m.wikipedia.org/wiki/Curry%E2%80%93Howard_corresp...


Note: There are today completely formal systems of proof which are much more concise than what Russell had. You can now prove 1+1=2 after a few pages of rules (say, of Martin-Löf type theory), a couple of definitions (+, 1 and 2) and a very short proof by calculation.

In practice, one would use a proof assistant, which is like a programming language, such as Agda. Then it is just the definitions and the proof is just a call to the proof checker to compute and check the result.


For example, here it is in Coq (not using -- but instead replicating explicitly -- the built-in definitions of 0, 1, 2, +, or =).

  Inductive eq' {T: Type} (x: T) : T -> Prop :=
    | eq_refl': eq' x x.

  Inductive nat' :=
    | zero
    | succ (n: nat').

  Definition one := succ zero.
  
  Definition two := succ (succ zero).
  
  Fixpoint plus' (a b: nat') : nat' :=
    match a with
      | zero => b
      | succ x => succ (plus' x b)
      end.
  
  Theorem one_plus_one_equals_two : eq' (plus' one one) two.
  Proof.
    apply eq_refl'.
  Qed.
As you allude to, there's also the underlying type theory and associated logical apparatus, which in this case give meaning to "Type", "Prop", "Inductive", "Definition", "Fixpoint", and "Theorem" (the latter of which is syntactic sugar for "Definition"!), and allow the system to check that the claimed proof of the theorem is actually valid. I haven't reproduced this here because I don't know what it actually looks like or where to find it. (I've never learned or studied the substantive content of Coq's built-in type theory.)

We would also have to be satisfied that the definitions of eq', one, two, and plus' above sufficiently match up with what we mean by those concepts.


Don’t forget the source code of Coq itself. What you have written here is not a formal proof, it’s a series of instructions to Coq requesting it to generate the proof for you. To know that you’ve proved it you need to inspect the output, otherwise you’re trusting the source code of Coq itself.


>> He has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before. Compared to that number of semantic levels, the average mathematical theory is almost flat.

> ... is he being unrealistic about programmers here, or does he not have exposure to what a mathematician would consider Mathematics?

As with any sweeping statement, Dijkstra's assertion is not universally applicable to all programmers. However, for some definition of sufficiently skilled programmer, it is correct if one considers the subset of mathematics applicable to provably correct programs. To wit:

https://bartoszmilewski.com/2014/10/28/category-theory-for-p...


You’ve given a hint of the complexity on the programmer’s side of things but for Dijkstra’s claim to hold we also need to take a look at the mathematician’s. I think most people who are not mathematicians have no idea what they’re working on.

Take for example a single theorem: Classification of Finite Simple Groups [1]. This one proof, the work of a hundred mathematicians or so, is tens of thousands of pages long and took half a century to complete.

Fermat’s Last Theorem [2] took 358 years to prove and required the development of vast amounts of theory that Fermat himself could scarcely have imagined.

[1] https://en.wikipedia.org/wiki/Classification_of_finite_simpl...

[2] https://en.wikipedia.org/wiki/Fermat%27s_Last_Theorem


> This one proof, the work of a hundred mathematicians or so, is tens of thousands of pages long and took half a century to complete.

Now compare that to google3 or any other large software. It’s absolutely tiny. A paltry edifice in comparison both in pages and man hours as well as mathematical complexity. Boolean structures get monstrously huge.

On the subject of proofs and the verbosity of traditional mathematical methods, this note[1] is interesting. It provides two fun little examples of shorter than normal proofs.

It amuses me that just as mathematicians persisted in writing out “is equal to” for decades after Recorde gave us =, there will probably continue to be holdouts who write out “if and only if” instead of using ≡ for many years to come.

[1] https://www.cs.utexas.edu/~EWD/transcriptions/EWD10xx/EWD107...


I also think "if and only if" is only surpassed by "iff" in badness of notation, though I'd like to put in a good word for classic '='. There's always some implicit inference being made about what kind of equality we're considering, and calling two formulas of logic "equal" if they have the same (truth) value in all cases is reasonable in my book.

https://en.wikipedia.org/wiki/Logical_biconditional has some nice comparisons of notation, including a mention that my old friend George Boole also used '='.


Maths has also evolved to be complex now, with the proof of the four-color theorem being a computer proof. So, the statement is not as true anymore. But there was a short era when proving that your "static" program was correct was impossible because the number of possible combinations of inputs approached the size of the earth (in terms of how many bits we would need to represent them). At the same time, almost any single mathematical theorem could be verified by a person to be correct over the course of a couple of years.


https://celebratio.org/Haken_W/article/794/

It was first published in 1976. It is _highly_ unlikely Dijkstra didn't know about it.


Dijkstra’s degree was in mathematics.

And yes the average mathematical theory is indeed flat compared to a large monolith like google3.


All right, but the average program is also flat compared to google3. So that argument doesn't prove much.


> the business community, which, having been sold to the idea that computers would make life easier, is mentally unprepared to accept that they only solve the easier problems at the price of creating much harder ones.

So true today in the age of identity theft, data breaches, privacy violations, deep fakes, surveillance, copyright violations, tracking cookies, fake news, addictive social media, computer viruses, ransom attacks, ...

But, hey, at least we have ChatGPT now that can write your homework for you.


Dijkstra was advocating for using formal methods...

Do the people agreeing with the article agree with its actual point? Especially whether it was wise before AI (maybe) improved the practicality of such an undertaking?


Expressive type systems have all the hype nowadays.

Those absolutely global formal systems that can explain any kind of behavior would never be practical in general. But specialized formal systems are just great.


How much value is there in formal proof that a program optimally meets precisely-described requirements if the requirements themselves are sub-optimal for the problem area of concern?

In other words, "software is hard": https://www.gamearchitect.net/Articles/SoftwareIsHard.html "The difference is that the overruns on a physical construction project are bounded. You never get to the point where you have to hammer in a nail and discover that the nail will take an estimated six months of research and development, with a high level of uncertainty. But software is fractal in complexity. If you're doing top-down design, you produce a specification that stops at some level of granularity. And you always risk discovering, come implementation time, that the module or class that was the lowest level of your specification hides untold worlds of complexity that will take as much development effort as you'd budgeted for the rest of the project combined. The only way to avoid that is to have your design go all the way down to specifying individual lines of code, in which case you aren't designing at all, you're just programming. Fred Brooks said it twenty years ago in "No Silver Bullet" better than I can today: "The complexity of software is an essential property, not an accidental one. Hence, descriptions of a software entity that abstract away its complexity often abstract away its essence.""

I prefer a conceptual model more like "Software as Gardening". https://github.com/pdfernhout/High-Performance-Organizations...

"Andy Hunt: There is a persistent notion in a lot of literature that software development should be like engineering. First, an architect draws up some great plans. Then you get a flood of warm bodies to come in and fill the chairs, bang out all the code, and you're done. A lot of people still feel that way; I saw an interview in the last six months of a big outsourcing house in India where this was how they felt. They paint a picture of constructing software like buildings. The high talent architects do the design. The coders do the constructing. The tenants move in, and everyone lives happily ever after. We don't think that's very realistic. It doesn't work that way with software. We paint a different picture. Instead of that very neat and orderly procession, which doesn't happen even in the real world with buildings, software is much more like gardening. You do plan. You plan that you're going to make a plot this big. You're going to prepare the soil. You bring in a landscape person who says to put the big plants in the back and short ones in the front. You've got a great plan, a whole design. But when you plant the bulbs and the seeds, what happens? The garden doesn't quite come up the way you drew the picture. This plant gets a lot bigger than you thought it would. You've got to prune it. You've got to split it. You've got to move it around the garden. This big plant in the back died. You've got to dig it up and throw it into the compost pile. These colors ended up not looking like they did on the package. They don't look good next to each other. You've got to transplant this one over to the other side of the garden.

Dave Thomas: Also, with a garden, there's a constant assumption of maintenance. Everybody says, I want a low-maintenance garden, but the reality is a garden is something that you're always interacting with to improve or even just keep the same. Although I know there's building maintenance, you typically don't change the shape of a building. It just sits there. We want people to view software as being far more organic, far more malleable, and something that you have to be prepared to interact with to improve all the time."

And it helps to keep things simple. https://www.infoq.com/presentations/Simple-Made-Easy/ "Rich Hickey emphasizes simplicity’s virtues over easiness’, showing that while many choose easiness they may end up with complexity, and the better way is to choose easiness along the simplicity path."


Love the gist of this, but just wanted to point out that there's no need to draw the line between buildings and gardening. Anyone who has built a house or done a major remodel knows that it too suffers from fractal complexity. It may not be a nail that becomes a wormhole of complexity (just as it is rarely a simple arithmetic operation in programming), but all kinds of things can crop up. The soil has shifted since the last survey, the pipes from the city are old, the wiring is out of date, the standards have changed, the weather got in the way, the supplies changed in price / specification, etc. Everything in the world is like that; software isn't special in that regard. In fact, software only has such complexity because it's usually trying to model some real-world data or decision. For totally arbitrary toy examples, the code is usually predictable, simple, and clean; the mess starts once we try to fit it to real-world use cases (such as building construction).


I have met some developers in my career that can communicate as effectively as this with equally brutal criticality, but those people are astonishingly rare. Maybe 2% of the developer population, if I am being gracious, can be described this way.

Most developers I have worked with are cowards exactly as he used that word. Now in all fairness my career is largely limited to large corporate employers that only hire Java developers and, god forbid, JavaScript developers. It’s frameworks, Maven, and NPM for absolutely everything.

The hiring managers always claim to look for innovators, but then you get in and everyone is just the same. Thousands of developers just retaining their employment doing the same shit day after day, fearing any changes coming down the pike.


> The hiring managers always claim to look for innovators,

I can't recall the last time I saw a hiring manager looking for an innovator.

Most hiring managers want people who can just get the job done with as little supervision and involvement as possible.

Most of the time when I see a coworker going off and innovating, it's a questionable exercise designed for fun and entertainment rather than getting the job done.

My last job had someone spend months "innovating" an all-new custom CI/CD system. It brought no benefits to the team and was a huge waste of time. They had fun and used it as a major accomplishment on their resume and LinkedIn. You could say it was "innovative", but the rest of us really wished they had helped us out with the work that had to be done instead of "innovating" off in the weeds.


As an SE who is now working in Product, I can tell you that there is always an endless list of features to build to support the business goals. Some are pretty standard but some are true business innovations. As in, software features that support an improved and innovative business process that saves millions or improves quality or efficiency. These features are rarely innovative from a technical perspective, but from a business point of view they are true innovations. The business problem space should always push the innovation, not some fun technology, UNLESS a technical capability changes the landscape and encourages a new process innovation.


Given that it brought no benefits to the team, I would probably not say it was innovative.


I think the poster is making the division that getting shit done unsupervised is what most people mean as innovators and that not many people want true "innovators" that just spin off doing random crap just to see what happens.


It's fair to say that you need to spin off and do random crap in order to innovate. But I would not say that "spinning off doing random crap == innovation". It dirties the word a little bit.


The fact you're using quotes around innovative indicates, even to yourself, that you're replying to a straw-man.


Industry use of "innovation" is using 3 year old tech instead of 10 year old tech.


Frameworks aren't necessarily a bad thing if you can think critically - why reinvent the wheel from scratch every time you're trying to make a newer version of a car? But if you're only a framework monkey who cannot communicate design decisions, architecture, and/or design, you're definitely screwed in the job market.

A lot of people (especially newer profiles I've seen on HN) think just being able to glue libraries together is enough to justify being a developer with a 6 fig salary, when in reality the actual value add is the architecture, design, and other critical thinking actions.

Hiring managers do try to hire the archetype developer who is both eloquent and a critical thinker, but it's hard and those who can do both know their value.


Then again, just being able to glue libraries together ought to be enough to justify a 6 fig salary, because inflation adjusted that's potentially less than the 5 fig salaries I was offered last century, fresh out of school and still wet behind the ears.


And the only reason (I assume) you're still in the tech industry despite the dot com bust and the Great Recession is because you can at least show value to employers, and you most likely have some communication and critical thinking skills, not just gluing stuff together.

Plenty of code monkey types flamed out or remained underemployed.

> the 5 fig salaries I was offered last century, fresh out of school and still wet behind the ears

And there were also fewer developers in the 1990s/2000s than in the 2020s, and the hiring market was not yet fully globalized and async compared to the post-COVID WFH/Remote market.


Most companies don't offer WFH/remote though.


WFH/remote during the pandemic normalized the opening of foreign offices, because it successfully proved the operations can continue in an async model.


by any measure you contributed a lot more value than that, though


I've always heard that tech employers value all the things you'd expect them to value, but I've only ever seen them value LeetCode.


The worst thing about frameworks is that they are an excuse to qualify cowardice, people scared shitless to write original software. There are multiple second order consequences and ethical dilemmas that come from that.


Why are there always these smug "Dijkstra's right that everyone else is an idiot" replies to Dijkstraposts? Do people understand exactly what Dijkstra was advocating for?

> I have met some developers in my career that can communicate as effectively as this with equally brutal criticality, but those people are astonishingly rare

The fact that it's rare to find arrogant people who argue with "brutal criticality" that all programmers should use formal methods all the time (like Dijkstra did), is not a bad thing IMO.


> The hiring managers always claim to look for innovators,

I thought that they were looking for "rock stars". Not the same thing...


From the article: 'In the discrete world of computing, there is no meaningful metric in which "small" changes and "small" effects go hand in hand, and there never will be.'

As brilliantly composed as the piece may be, it exhibits the same resistance to radical novelty that it condemns. Here we are not 40 years later, and small changes to big networks produce small effects. At sufficient scale, the digital reapproximates the analog.


Complete with feedback loops and chaos dynamics. Small changes to almost all code would yield small effects. A small change to libcurl or SQLite, creating an application-crashing bug and somehow slipping through tests, would likely halt the world in its tracks.


> it exhibits the same resistance to radical novelty that it condemns

You're talking about bio-mimetic systems that wouldn't arise for a generation, he was talking about "the discrete world of computing". There's no context there to resist.

We built large statistical systems out of the "discrete world" components. Different regimes, if you will.

The fascinating thing is that chemical/biological systems evolved discrete interactions (e.g. nervous systems that can be deranged by, say, a few micrograms of LSD.)

- - - -

Also, CrowdStrike/Clownstrike: small change, large effect?


In the (is this shape fillable by 1x2 dominoes?) example given by EWD to demonstrate the power of formal reasoning methods, a slight change to the problem statement:

   For the shape Q, instead of clipping opposite corners of the 8x8 square board, one (what would lie under a) white square and one black square, which are non-adjacent, are randomly removed. 
makes the elegant proof argument fail.

Real-world programming is usually like this: it is hard to cast the problem in the framework of a formal language, like first order predicate logic, and manipulation of uninterpreted formulae (i.e. the problem now mapped into the domain of first order logic) using the rules of first order logic might not lead to anything useful.

It seems to me that EWD is showcasing some example programming problems that are elegantly handled by his formal proof techniques, while ignoring the vast swathes of programming problems that might not be well handled by these techniques.


There is an extremely elegant solution to this modified question as well. In fact, it is always possible to fill a chessboard with dominoes with two squares of opposite color removed, by a very straightforward method (Gomory's Theorem). See Problem 2 at https://www.cut-the-knot.org/do_you_know/chessboard.shtml and note that the result generalizes to any bipartite Hamiltonian graph, such as any rectangular grid with an even number of cells.

(Much more generally, Hall's marriage theorem characterizes which finite bipartite graphs have a perfect matching, and there are polynomial-time algorithms to compute maximum cardinality matchings in arbitrary finite graphs.)


Essentially what you're arguing for is the desirability of untrustworthy software.

Typically people have insisted that it's too expensive to prove software correct, but as the machines and techniques improve proven-correct software becomes cheaper and cheaper.

The last argument standing is that it's unpopular.


On my current project, totals and taxes are applied differently on two different screens, leading to slightly different totals. No one knows which one is actually correct or used by the customers in practice. Thus, you could say there is no correct answer.

Most software development is wrestling with malleable requirements.

As the old joke goes: writing software from requirements is like walking on water, both are easy when frozen


> No one knows which one is actually correct or used by the customers in practice.

Automating BS is bad.

> Thus, you could say there is no correct answer.

I object to such fatalism.

Kind of the whole point of computers is to find and eliminate such ambiguities.

I don't mind playing irrational games for entertainment, but they are "no basis for a system of government", eh?

> Most software development is wrestling with malleable requirements.

Sure but that's no excuse for automating irrational systems. There's no written requirement that the math to compute totals and taxes shall be mysterious?


Note that in the adjacent case, a nice recursive algorithm can solve the problem.

What's the general solution in the random non-adjacent opposite-colour case, out of only the slightest curiosity?


Write down a Hamiltonian cycle for the (undirected) graph where the nodes are given by the squares of the chessboard and are connected by edges if they are (orthogonally) adjacent. Notice that because the two removed squares are different colors, deleting them from the Hamiltonian cycle partitions it into two even-length paths. Cover each even-length path with dominoes.

The part where you notice the two paths are even length feels a bit like dark magic the first time you see it. Notice that if you have two different-coloured squares which are consecutive on the cycle, the paths you get have lengths 64-2 and 0; then, as you move one of them further and further along the cycle, you have to move in steps of 2 to keep them opposite colours.


Holy shit that's slick. Wow.

You have to verify though that the Hamiltonian cycle exists. An induction proof seems to do the job.


You can draw a big "C" shape that goes around 3 edges of the board and then fill in the middle with wiggles. This works for any rectangular board where one of the edge lengths is even. You already need one of the side lengths to be even to solve the problem because if both sides are odd then the number of squares is odd, and good luck covering an odd number of squares with dominoes.
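
If anyone wants to see the construction end to end, here is a small sketch (my own toy code, nothing from a library) that builds the snake-plus-first-column cycle described above, removes one square of each colour, and pairs up consecutive cells of the two resulting even-length paths into dominoes:

  def hamiltonian_cycle(n=8):
      """Snake through columns 1..n-1 row by row, then come back up column 0."""
      cycle = []
      for r in range(n):
          cols = range(1, n) if r % 2 == 0 else range(n - 1, 0, -1)
          cycle.extend((r, c) for c in cols)
      cycle.extend((r, 0) for r in range(n - 1, -1, -1))
      return cycle   # consecutive cells (and last-to-first) are orthogonally adjacent

  def tile_without(a, b, n=8):
      """Dominoes covering the n x n board minus cells a and b (which must differ in colour)."""
      assert (a[0] + a[1]) % 2 != (b[0] + b[1]) % 2
      cycle = hamiltonian_cycle(n)
      i, j = sorted((cycle.index(a), cycle.index(b)))
      # removing two opposite-coloured cells splits the cycle into two even-length paths
      paths = [cycle[i + 1:j], cycle[j + 1:] + cycle[:i]]
      dominoes = []
      for path in paths:
          assert len(path) % 2 == 0
          dominoes.extend(zip(path[0::2], path[1::2]))   # pair up consecutive cells
      return dominoes

  print(len(tile_without((0, 0), (3, 4))))   # 31 dominoes covering the remaining 62 squares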


Mathematics is inferior to programming, in a way, because the entire hierarchy of mathematics relies on human beliefs at every stage, all the way down to its axioms. The beliefs of many highly intelligent humans, but beliefs of biased individuals nonetheless. Humans simply can't match the accuracy of a compiler when it comes to finding flaws.

On the other hand, a program's correctness doesn't depend on human beliefs. It can be proven to work perfectly on certain ranges of inputs by actually executing it on those inputs. Human subjectivity can be factored out to an increasing degree by way of automated tests. This level of concrete proof exceeds the level of social proof which mathematics relies on. The electronics upon which programs execute do not have biases as humans do. Evaluating a complex mathematical proof is as prone to errors as humans evaluating a complex computer program with their own minds. The compiler will always beat the human when determining correctness of a program for a well defined range of inputs.

Although automated tests can rarely prove universal truths, they rarely need to, as the logic they test only needs to handle a limited number of inputs and requires a limited number of guarantees. For many programs, the degree of proof that a well written automated test can provide far exceeds the degree of proof that a mathematical proof (based on human consensus) can provide.

This is why programming can accumulate complexity at a rate which is unfathomable in mathematics. With AI, programming may exceed the capabilities of mathematics to such extent that the entire field of mathematics will become a historical relic; showcasing the desperation of feeble human minds to grasp truths that are well outside of their reach.

The 'importance' of the field of knowledge comes down to the scale of the information and growth speed of the field. For all practical purposes, it seems that the programmatic, exhaustive approach will beat out the mathematical approach of trying to solve problems by uncovering universal truths.


If mathematics is so vast that the human mind is feeble before it, then it’s also too vast for exhaustive search. Axioms aren’t even beliefs. They are first principles at the service of solving mathematical problems. If we want to solve other problems, we can change the axioms. That goes all the way back to Plato. We treat them as if they were true to solve the problems at hand. (It’s called as-ifism if you want to google it). Since logical and mathematical space is infinite, exhaustive search is the wrong approach.


I've read this (and similar diatribes by "real" computer scientists) before and although it's compelling - and, hell, I'd be willing to try the whole "formal methods" thing if it meant I spent less time debugging - it kind of overlooks the whole "we are successfully using computers to solve real-world problems by letting us non-mathematical 'dummies' at them" aspect. He seems to be reasoning that "yes, you're achieving results, but it would be more efficient if you spent two or three decades mastering formal methods before you produced a line of code" which sounds suspiciously to me like the way project managers reason: "yes, you can produce a program that solves the business problem if you just get started, but you would do so much more predictably if you spent 6 months 'estimating' it before you embark on a couple of weeks of programming".


To be fair, we do have computer-verified proofs now, although they don't encompass everything.

Also programming is based on mathematical ideas. I don't think you can do away with mathematics just because of AI.


Agreed, though mathematics and programming as we understand them are both ways for humans to express logic. As much as mathematicians want to believe it, math is not the same as logic. The former serves to express logic to other humans while the latter serves to express logic to both other humans and computers. It's not like in the business world; mathematics doesn't get to have a monopoly over logic just because it came first.

Math and computer science just happen to be two fields which concern themselves with different aspects of logic. Math being slow-moving and focused on solving universal problems and computer science being fast-moving and focused on solving concrete problems.

Once in a while, advancements like cryptography and LLMs show us that focusing on solving concrete problems can also expand our knowledge and capabilities in a radical (and useful) way.

I resent articles which try to present one as more important than the other. What does that even mean? Appeal to academic authority? Utility value? Difficulty? Degree of abstraction? They're both just languages which solve problems in different ways and which have different target audiences and have different scalability constraints.


For those who don't know, the author is Edsger W. Dijkstra


He of all people should have been able to find a shorter way to get to the point!


Took 2 paragraphs for me to guess the author ;)


Me too! I think it's the pontification presented in a self-aware, humorous way.

I strongly agree with about half of the essay, while I want to agree with the other half in principle, but instead I think it is indeed "...so ridiculous that [he is] obviously out of touch with the real world".


~1300 EWDs and this one is ~70 paras, so say 100K paras worth of EWDs (+ a similar amount for his published work?) suggesting that 2 paras are still better measured in microdijkstras than in nanodijkstras...


The handwriting was a split second giveaway.


2 sentences for me. The enigmatic humorous dirge was a dead giveaway.


CACM gave most of an issue to this, the bulk of the responses strongly disagreeing. The dissenters were not, as I recall, people generally in favor of sloppy coding.

In Peter Seibel's Coders at Work, the interview of Knuth includes

""" Seibel: Yet Dijkstra has a paper I'm sure you're familiar with, where he basically says we shouldn't let computer-science students touch a machine for the first few years of their training.; they should spend all their time manipulating symbols.

Knuth: But that's not the way he learned either. He said a lot of really great things and inspirational things, but he's not always right. Neither am I... """

[Much more of interest, but I don't have the time to type it out now.]


Somehow I guessed it was Dijkstra just from the title.


> the Middle Ages

> total intellectual stagnation

Wrong.

> Instead of teaching 2 + 3 = 5 , the hideous arithmetic operator "plus" is carefully disguised by calling it "and", and the little kids are given lots of familiar examples first, with clearly visible objects such as apples and pears, which are in, in contrast to equally countable objects such as percentages and electrons, which are out. The same silly tradition [...]

This guy would make a terrible elementary school teacher and getting this wrong leads me to believe he's also wrong about the point he's trying to make afterwards.

> Unfathomed misunderstanding is further revealed by the term "software maintenance", as a result of which many people continue to believe that programs —and even programming languages themselves— are subject to wear and tear.

If your software interacts with a changing external world, then software does rot. Security vulnerabilities, new protocols, etc. all require a change. Your webserver can't serve insecure http in 2024. At least not if you want to build a business on it.


> To further sever the links to intuition, we rename the values {true, false} of the boolean domain as {black, white}.

This is indeed sensible. However, you don't have to start with this and you don't have to stick with the easy {black, white}. You could rename the values to "banana" and "left" for increased, although needless, difficulty.

My proposal: Have students write a simple correct program first and then show that it still works even if you rename all "user" variables to "comment" variables. Now "comments" have a "favorite pet", but the software still works. Magic! That shows how semantics and syntax are different, plus how "stupid" computers are, even if you "tell" them "what you want."
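
A minimal version of that exercise, just to make the point concrete (my own toy example, not from the essay):

  def average_user_age(users):
      return sum(user["age"] for user in users) / len(users)

  # the same program with every "user" renamed to "comment": the machine doesn't care
  def average_comment_age(comments):
      return sum(comment["age"] for comment in comments) / len(comments)

  data = [{"age": 20}, {"age": 40}]
  assert average_user_age(data) == average_comment_age(data) == 30.0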


I was teaching an adult friend some programming-- this was someone who could program "a little"-- mostly smashing stuff copied and pasted from the internet with some brute force until it kinda worked, so I started our exercise at perhaps too advanced a level.

At some point I realized that he was ascribing all kinds of meaning to variable names and also felt totally lost at how I "knew" what to call various things.

I first tried to explain that the names were just labels but he wasn't getting it. I immediately changed every single variable name to the most profane and insane random junk I could think of, then showed the program still worked the same way. Making it obscene was important; simply changing from one set of reasonable names to another would just leave the impression that there were aliases in the list of Magic Words. He was enlightened.

From there we were able to get into what things actually were Magic Words (like language keywords)-- or close enough (variable name set by system or library code that we were using, arbitrarily set by their authors but obligatory for us users). Knowing not everything was magic made a big difference.

It's not just AI that carries the risk of overfitting. Sometimes you need some "out of distribution" material to separate convention from the underlying truth.


My reply is borderline so I understand any perceived hypocrisy, but it's so weird reading a response that starts with

> quoted text

Wrong.

> additional quoted text.

It's such an off putting way to start an interaction.


I'm always torn on this. As someone that thinks they are decent at programming, it can be tempting to think that all of the sloppy ways of programming that I learned on the way were bad habits that need to be shed.

However, I also think that anyone that taught math by jumping straight to the advanced maths would almost certainly find that they lose more than they gain as far as student progress.

Note that this is not to say that you should avoid the advanced stuff. Attempts at hiding complexity never seem to pan out to results. Instead, honesty with the students goes a long long way.

I would also be interested in knowing about all of the things that were a bit more normal in the early days that people today have likely never seen. Call semantics that are not stack based, as an easy example.


Dijkstra ironically makes the same error wrt software testing that he is warning against - he didn't envision software becoming so complex as to be impossible to reason about. So today we write small programs to exhaustively test the big program. Automatic tests, fuzzers, tests that click on every possible pixel in every combination, etc. - ultimately the program has a finite number of states owing to its digital nature, so you can prove the absence of certain classes of bugs with enough tests. You can prove it doesn't crash, it doesn't leak memory, it doesn't break authorization invariants, etc.
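
A tiny instance of that idea, for concreteness (my own toy example; real programs obviously have state spaces far too large to enumerate naively, hence the fuzzers and model checkers): when the input domain is finite and small enough, a test really is a proof by exhaustion.

  def is_power_of_two(n: int) -> bool:
      return n != 0 and (n & (n - 1)) == 0   # bit trick under test

  def reference(n: int) -> bool:             # slow but obviously-correct specification
      p = 1
      while p < n:
          p *= 2
      return n != 0 and p == n

  # exhaustively check every 16-bit input: absence of a mismatch is proven, not sampled
  assert all(is_power_of_two(n) == reference(n) for n in range(1 << 16))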


> You can prove it doesn't crash, it doesn't leak memory, it doesn't break authorization invariants, etc.

You can but nobody does. It's enough if it doesn't usually do these things.


So I read this, and got inspired to learn a bit more about formal methods. I fell asleep listening to this[1] (not because it's boring... just because it was bed time)

It appears to me to be the worst part of high school math (proofs) without much gain.

I'm really, REALLY, REALLY hoping that AI can help with this stuff, because doing it by hand seems insane.

[1] https://www.youtube.com/playlist?list=PL9o9lNrP1luXgu97NZnQH...


A few paragraphs in, the author seemed very pretentious in their writing style.

> By now, you may well ask why I have paid so much attention to and have spent so much eloquence on such a simple and obvious notion as the radical novelty.

Maybe this is because English isn't my first language, but doesn't this come off as pretty pretentious? Calling your own writing eloquent, in that same writing?

Anyway, then I scrolled down to the bottom to see who the author was. I suppose he has the right to some pretentiousness.


Hard to say; as a native speaker, "eloquence" here doesn’t seem to refer to the author’s own eloquence but rather to the amount of words and roundabout explanations they’ve dedicated to the topic. I.e., in contrast to a simple explanation for the obvious things. If anything it reads a touch self-deprecating.


Ah, that sounds fair.


loquacity was the right word


I just went to read a random part of the text

> In what we denote as "primitive societies", the superstition that knowing someone's true name gives you magic power over him is not unusual

That's funny, because it's not about magic powers but about a psychological trick that makes someone seem more trusted when they say your name. It's not about superstition but about being able to understand things in more than a direct, blunt way.


In this context, true name does refer to magic powers: https://en.wikipedia.org/wiki/True_name


The next line kind of makes the connection between the two:

"We are hardly less primitive: why do we persist here in answering the telephone with the most unhelpful "hello" instead of our name?"

Implying that giving away your name to someone who doesn't know it yet is a similar kind of superstition. But it's definitely a sane safety measure, as we know today from all the crazy examples of what social engineering and elaborate scamming can do.


Was that a talk at first? (EDIT: because I think it would be very hard to follow it "live".) I had trouble going through the points he was making, and I had the luxury of being able to read it at my own pace, going back to earlier paragraphs, and so on.

Did anyone have the same impression, or am I just not used to reading harder texts?


What an amazing educator at all levels. This is worth reading repeatedly until embedded in neurons and practice.


> The programmer is in the unique position that his is the only discipline and profession in which such a gigantic ratio, which totally baffles our imagination, has to be bridged by a single technology.

I think that, for example, environmentalists, who coined the phrase "think globally, act locally" and contemplated an already existing world population on the order of 10^9, might beg to differ. Or, say, astronomers. Or physicists. Yes, people often have poor intuition about large numbers. No, the resulting problem isn't remotely unique to CS.

> The second radical novelty is that the automatic computer is our first large-scale digital device.

Yet small-scale digital devices such as mechanical relays - or, for that matter, light switches and push buttons - were well established by Dijkstra's time. The scale isn't relevant to that concept; experience is, and students in the 1980s had plenty of reasonable mental models for a bit of data as an abstraction (or the hardware storing it, or a boolean value in a program).

> To do so, however, is highly dangerous: the analogy is too shallow because a program is, as a mechanism, totally different from all the familiar analogue devices we grew up with.

As if students would never encounter stepwise functions in math class (or much stranger beasts for that matter)... ?

Metaphor and analogy are simply at the core of how people naturally learn. We are not machines that can be deliberately and directly programmed with an understanding of novel systems. The imperfections of the metaphors we use are no more a problem than the leaks in the abstractions we create in our programs.

> Unfathomed misunderstanding is further revealed by the term "software maintenance", as a result of which many people continue to believe that programs —and even programming languages themselves— are subject to wear and tear.

I can only imagine what Dijkstra would think of today's "ecosystems".


What is "chrysocosmic" ??


I assume it's related to Chrysopoeia: https://en.wikipedia.org/wiki/Chrysopoeia


It seems like the author's conception of medieval thinking is pervasive to this day with LLMs: systems complex and "radically novel" enough that most experienced programmers have a hard time discussing them without resorting to analogy and anthropomorphism.

I wonder how evergreen this essay will be.


Computer sciences:

- pure maths: algorithms

- physics and chemistry: how to build physically from "scratch" a computer.

Namely, without a strong scientific background, you are just a script kiddie until you decide to go through years of really tough learning. Why do you think high school kids can write assembly that gets the job done?

What we call "tech" is just "usage implementation", but there you get hit full frontal with the worst humankind can deliver in falsehood, toxic excess, etc.

So when you come from the "clean and pure" computer science field (well, it has its smart and evil people too) and hit this toxic diarrhea, this "usage implementation", because you need to get the job done... ooooof!

My copium: stay as close to the bare metal as possible using a near-zero SDK and aim for the very long run. But you will have to fight complexity where it does not belong, like the JavaScripted web and many file formats (for instance runtime ELF), and for that, better get your lawyer ready and start pushing your administration for regulations, because you will have to deal with the (big) tech mob.


I'm sure that if the author read today what he wrote back then, he would ask himself, "what the heck was I thinking?"

Each of the dozens of ECUs in today's cars has many times the complexity of any system that existed in 1988.

Still, the majority of people manage to operate such a vehicle, and their biggest aid is exactly the kind of naive abstraction he lamented. The highly simplified mental model we create of anything we interact with is the greatest human strength, while he seemed to advocate quite the opposite.


I'm actually somewhat curious how starting a tertiary computer science curriculum with formal methods would impact students. My intro undergrad 101 course was essentially functional programming and lambda calculus (with Scheme), and I felt that helped establish a fundamentally different way of thinking about computer science than a more basic procedural introduction would have. When they moved the intro course to OOP instead, I felt it was a travesty.


I did follow a curriculum set by Dijkstra's students. In the first trimester, we learned to program in Pascal, so we knew what programming was. In the next two programming courses, in the second and fourth trimesters, "programming" meant proving programs correct. Using pen and paper. Often in one-on-one sessions with our teachers where we had to demonstrate our proofs. Or the teacher would state a problem and then proceed to derive a correct program by construction, writing slowly on the blackboard. These programming courses were supplemented with logic, discrete mathematics, and other formal methods. And we had continuous mathematics courses as well, practical labs, electronics, databases, and whatever else you'd expect in a computer science and engineering curriculum.

For most students this wasn't easy, particularly compared to the way most of them were comfortable programming on their own by trial-and-error hacking away at a problem. Proving programs correct by construction takes a different skill. At the same time, it wasn't particularly hard either once you got going.

I don't think this way of teaching and learning programming was very useful or practical. With Dijkstra's students leaving the university, or otherwise losing primacy at the computer science faculty, Dijkstra's ideas faded from the curriculum. Since then (I returned twenty years later to teach at this university), the curriculum has become much like any other computer science / engineering curriculum. And students seem to have as much trouble with it as before.

What I missed when it was gone was the consistency it brought: the curriculum felt like one continuous track toward a clear idea of what it meant to be a programmer in Dijkstra's style. If you liked that idea, the curriculum was a great guide. If you didn't, it felt like a waste of time.
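
For readers who never saw that style, here's a rough flavour of those blackboard derivations, transcribed into executable form (this summation example is my own illustration, not taken from the actual course material): you write down the invariant first and let it dictate the loop body.

    # Correctness-by-construction flavour (illustrative example only):
    # the loop body is chosen so that the stated invariant is preserved.
    def sum_upto(a):
        """Return a[0] + ... + a[len(a)-1].
        Invariant P: s == sum(a[:i]) and 0 <= i <= len(a)."""
        s, i = 0, 0                    # establish P: sum(a[:0]) == 0
        while i != len(a):             # on exit, P and i == len(a) give the result
            assert s == sum(a[:i])     # P holds at the top of every iteration
            s, i = s + a[i], i + 1     # chosen precisely so that P is preserved
        assert s == sum(a) and i == len(a)
        return s

    print(sum_upto([3, 1, 4, 1, 5]))   # 14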


> compared to the way most of them were comfortable programming on their own by trial-and-error hacking away at a problem.

This is only a viable strategy insofar as the tool which lets them hack away is itself correct. You need computer scientists producing these tools.


Sounds like projection to me.

- DJ

Edit: "Then reduce the use of the brain and calculate!" This is literally the worst advice I have ever read.

- DJ


have you never used a calculator?


All useful programs are run on machines with devices whose operation makes that program useful.

"Programs" without computational models and operational semantics are, for sure, discrete mathematics. But almost no one cares about those abstract programs, except for discrete mathematicians.

I'd say it's Dijkstra's view here which is the mystification. He wishes to pretend that the state of an LCD screen as it plays a video game can be defined symbolically, and hence transformed in its definition.

This is nonsense.

Programming has never been discrete mathematics. And all those discrete mathematicians ("Computer Scientists") who wish it so are the origin of all this dumb mystery around it. Computers are useful because they are electrical devices that, through their operation, transmit power to connected devices whose physical state has relevance to us. I move a joystick and the electrical state of the LCD screen (etc.) changes, and so on.

Pretending that this can be given a denotational semantics is the charlatanism by which AI zealots also claim the world is some abstraction.


> Pretending that this can be given a denotational semantics is the charlatanism by which AI zealots also claim the world is some abstraction.

Huh?


denotational semantics = the meaning of a program is a mathematical object
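
To make the term concrete, here is a toy sketch of my own (not anything from the essay or the parent): the denotation of a tiny expression language is just an ordinary mathematical function from environments to numbers.

    # Toy denotational semantics (illustrative only): the "meaning" of an
    # expression is a plain function from environments to numbers.
    def denote(expr):
        kind = expr[0]
        if kind == "num":                 # ("num", 3)       ~>  env -> 3
            _, n = expr
            return lambda env: n
        if kind == "var":                 # ("var", "x")     ~>  env -> env["x"]
            _, name = expr
            return lambda env: env[name]
        if kind == "add":                 # ("add", e1, e2)  ~>  env -> [[e1]]env + [[e2]]env
            _, e1, e2 = expr
            d1, d2 = denote(e1), denote(e2)
            return lambda env: d1(env) + d2(env)
        raise ValueError(f"unknown expression: {expr!r}")

    meaning = denote(("add", ("var", "x"), ("num", 3)))   # [[x + 3]]
    print(meaning({"x": 4}))                              # 7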

If programs, such as video games, are mathematical objects, then the world is one too. But, of course, they are not. They are physical objects distributed in space and in time -- hence why people play them.

Dijkstra's discrete-mathematics idealism here is the windmill he's tilting at. This is the origin of the very confusions he's annoyed with.

No, programs are not abstractions, and that is why almost every programmer bothers to write one.


There are some good ideas here, and this is a historical document for CS like the Gettysburg Address is for the USA, but your average freshman English instructor could offer a lot of suggestions for improvement if this were a class assignment. It could be said in about half the length (something Lincoln was good at!), and you don't need to invoke Galileo every time you're criticizing "the way things are done around here".

On some substantial points, the author is clearly correct: mathematics has proof checkers and proof assistants, and may soon have AI support as well. Computers are a thing. (Yes, I know who the author is. But I'm critiquing the text, not the person.)

I'll give the author what I think is their central argument: programming is a mathematical-intellectual discipline, not some form of glorified HVAC maintenance where you simply have data instead of water flowing through your pipes. I've never yet had to clean rust off a shell pipeline. More formalised methods for programming (even if they are called 'Rust') and other structures such as type systems absolutely need to be taught and used, where appropriate. Full-on formal methods have their place, but that place is a small minority of all code.

I don't fully agree with the criticism of reasoning by analogy. Surely what matters is whether there really is an analogy or not, or, as a categorist would say, whether you have a morphism and not just a mapping. Reasoning about cars from horses doesn't work because there is not much of an analogy there. Reasoning about the properties of the as-yet-undiscovered element 31 from the properties of the known element 13 (Aluminium), and the fact that elements N = 14, 15, etc. seemed similar to elements N + 18, worked out well. At the time, of course, the analogy was phrased in terms of atomic weights rather than element numbers; the fact that these analogies worked out more often than not led to the hypothesis that something deeper was going on, which led to the invention of the periodic table (and element numbers). This is all proper Science (TM), not mediaeval scholasticism.

"Programming tools" - look, I like VS code with old-fashioned intellisense, even if you have to turn off the AI crap nowadays. I also like git, for that matter. And syntax highlighting, and visual diff, and compiler errors being red underlined in the source code, and Control+Click on a function name taking me to its definition (usually). I can program in nano when I need to, but not when I have the choice. Maybe for some of the younger generation, they won't be able to imagine programming without an AI copilot. The author here is falling into their own trap, first explaining why mathematics can be revolutionized by tools (such as computers) but then asserting that of course programming cannot. Oops.

I'll give them that perhaps 80% of proposed programming (teaching) tools are rubbish, and maybe that was closer to 100% back in the day, and every few months some new low-code/no-code "solution" appears, but that doesn't mean there aren't good ones out there.

I usually yawn at any mention of "testing can only reveal the presence of bugs". Yes, and then you can fix all the bugs that were revealed! Well-tested code (which requires some thinking about what to test; see "A QA engineer walks into a bar") has far, far fewer bugs than untested code. Some coding interviews expect you to test your code without being told explicitly to do so. It turns out you can get something like 80% of the benefit of "formal methods" with 20% of the effort, and without having to switch your programming language.
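
To be concrete about that "80% of the benefit" claim, this is the kind of cheap check I have in mind (a sketch; the sort function and the two properties are made up for illustration): random inputs plus a couple of stated properties, no proof assistant required.

    # A cheap property-based test (illustrative sketch, standard library only):
    # the output must be ordered and be a permutation of the input.
    import random
    from collections import Counter

    def my_sort(xs):
        # Stand-in for the code under test (imagine a hand-rolled sort here).
        return sorted(xs)

    for _ in range(10_000):
        xs = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
        ys = my_sort(xs)
        assert all(a <= b for a, b in zip(ys, ys[1:]))  # property 1: ordered
        assert Counter(ys) == Counter(xs)               # property 2: permutation
    print("10,000 random cases passed")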


fantastic.



