The trouble is that the tinkerers learn to cope with the idiosyncrasies of the legacy systems and build their shiny new stuff on top of them. In the end, we get software that solves a relatively simple problem but depends on layers upon layers of other systems and abstractions.
IMO, we need to fundamentally rethink the design of our computing infrastructure so that it suits our current way of using computers. This would start from the CPU microarchitecture and instruction set, through the programming interface of the operating system, up to the middleware used by most programs.
I'm sympathetic to your point of view, but Big Design Up Front never works, in my experience. It's impossible to anticipate future uses for tech that we currently consider vestigial, unnecessary, or inefficient.
Mainframes and terminals were the future until the PC displaced them, until the web came along, until mobile and cloud computing smashed it all and brought us full circle (sorta). Designing to optimize the full stack for one of these iterations would have made the transition to the next phase much harder. Back in the early 2000s, when Moore's Law was still gospel, fewer people than you'd think foresaw the IoT, FPGA, Arduino, processors-in-everything revolution currently underway.
In the end, I take a sort of Buddhist view toward the fuckedupedness of software: It sucks, but it's all we've got and must be accepted on its own terms. Find a strategy that works for you and that minimizes global pain. Do no harm. Attempting to force your way into Nirvana and out of Samsara only wedges you deeper inside it.
And then we end up with an industry that continuously rediscovers and rebrands shittier versions of solutions that were already created in the 70s.
The problem with BDUF is not that you can't predict things well in advance. Indeed you can, as has been done countless times in our industry's past. The problem is, doing the Right Thing doesn't make you the first to market, doesn't make your solution the cheapest or the most viral. Hence we end up building towers of shit instead.
> The problem is, doing the Right Thing doesn't make you the first to market, doesn't make your solution the cheapest or the most viral.
There's definitely that, yes. But I'd still contend that we're also crappy at predicting trends and future uses of technology. Yes, some people have predicted things well in advance. Some have done so accurately. Some have done so consistently. But none have been both accurate and consistent. It's maddeningly difficult, even at short time scales, and second-order effects quickly take over. The same applies to even tiny projects.
BDUF just has too much in common with communism: They're both forms of well-intentioned central planning, and they're both symptoms of the hubris that makes each of us think we're more of an expert than we really are. And they both break down when unforeseen forces slam into their base assumptions.
I hear you. I really do. I've BDUF'ed my fair share of systems, and witnessed countless other people do the same. It just never works out. There's a damn good reason that "Worse is Better" keeps winning: It's got serious evolutionary advantages over BDUF.
And yet people praise SpaceX for doing things much better and cheaper with their iterative refinement.
BDUF can work if you have a single organisation with a single stable set of requirements that's reasonably compact. The Kennedy "man on the moon" speech was such an example.
Where it falls down is trying to meet the needs of the 7 billion distinct human individuals, which are inevitably vague and shifting, and which change in response to the publication of the software itself.
They have much larger budgets and tighter tolerances than all but a few domains. Nevertheless, their approach is more modular than you think. They do a lot of designing to interfaces, especially where the necessary tech doesn't currently exist.
EDIT: Forgot to mention how little engineering (relatively speaking) NASA does these days. They farm a lot of projects and sub-projects out to contractors, devoting most of their engineering expertise to the requirements phase. Now imagine if your clients spent that kind of time communicating upfront! Software would be a lot better across the board.
In general, it works if you decide what you want before starting the development. I'm not talking about knowing what you want: I'm talking about deciding what you want.
Much like buying a car: there's no way you can think of all your possible detailed preferences before choosing a new car, but when you sign the check you voluntarily give up any further discussion.
> IMO, we need to fundamentally rethink the design of our computing infrastructure so that it suits our current way of using computers. This would start from the CPU microarchitecture and instruction set, through the programming interface of the operating system, up to the middleware used by most programs.
Do you know if there's anyone seriously working on this? I feel like there are a lot of people on this website who want something like this and would love to help out.
VPRI[1], the group around Alan Kay, had an NSF-funded project to reproduce "Personal Computing" in 20KLOC. The background was that way back at PARC, they had "Proto Personal Computing" in around 20KLOC, with text processing, e-mail, laser printers, and programming. MS Office by itself is 400MLOC. Their approach was lots of DSLs and powerful ways of creating those DSLs.
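To make the DSL angle concrete, here is a toy Python sketch (my own illustration, not VPRI's actual code; the styling rules are made up): a few declarative lines plus a small interpreter stand in for what would otherwise be repetitive general-purpose code, which is roughly where the claimed compression comes from.

    # Toy "styling" DSL: the spec is data, a small interpreter gives it meaning.
    SPEC = """
    title -> bold, centered
    quote -> italic, indented
    code  -> monospace, indented
    """

    # A few lines of interpreter replace a pile of hand-written formatting code.
    STYLES = {}
    for line in SPEC.strip().splitlines():
        name, attrs = line.split("->")
        STYLES[name.strip()] = [a.strip() for a in attrs.split(",")]

    print(STYLES["quote"])  # ['italic', 'indented']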
This group then moved to SAP and now has found a home at Y-Combinator Research[2].
One of the big questions, of course, is what the actual problem is. I myself have taken my cue from "Architectural Mismatch"[3][4][5]. The idea there is that we are still having a hard time with reuse. That doesn't mean we haven't made great strides with reuse (we actually have some semblance of it), but the way we reuse is suboptimal, leading IMHO to excessive code growth, both as the number of components increases and over time.
A large part of this is glue code, which I like to refer to as the "Dark Matter" of software engineering. It's huge, but largely invisible.
So with glue being the problem, why is it a problem? My contention is that we tend to have only fixed/limited kinds of glue available (biggest example: almost everything we compose is composed via call/return, be it procedures, methods, or functions). So my proposed solution is to make more kinds of glue available, and to make the glue adaptable. In short: allow polymorphic connectors.[6][7][8]
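To illustrate what I mean by "kinds of glue", here is a minimal Python sketch I'm adding purely for illustration (it is not the connector work referenced above): the same two components, composed once with hard-wired call/return glue and once through a tiny pipeline connector, so the composition itself becomes a first-class, swappable object.

    # Two ordinary components that know nothing about each other.
    def tokenize(text):
        return text.lower().split()

    def count_words(tokens):
        counts = {}
        for t in tokens:
            counts[t] = counts.get(t, 0) + 1
        return counts

    # 1) Call/return glue: the composition is hard-coded in the caller.
    def report(text):
        return count_words(tokenize(text))

    # 2) A connector object: the glue is now a value you can swap, inspect,
    #    or replace with a different kind of connector (async, streaming, ...).
    class Pipeline:
        def __init__(self, *stages):
            self.stages = stages
        def __call__(self, value):
            for stage in self.stages:
                value = stage(value)
            return value

    report2 = Pipeline(tokenize, count_words)
    assert report("to be or not to be") == report2("to be or not to be")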
So far, the results are very good, meaning code gets a whole lot simpler, but it's a lot of work, and very difficult to boot because you (well, first: I) have to unlearn almost everything learned so far, because the existing mechanisms are so incredibly entrenched.
The problem is that tech is always built with hidden assumptions. ALWAYS. Anyone who tells you otherwise is a liar or naive.
Not everyone can work with the assumptions that this tech demands, and not all of the assumptions are apparent (behavioral assumptions especially), so we end up either solving the same problem with a different set of assumptions or using glue code to turn things into a hacky mess that works.
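A hypothetical Python example of that kind of glue (the components, units, and field names are invented for illustration): each side works fine under its own hidden assumptions, and the adapter in the middle is where they quietly collide.

    # Upstream component: hidden assumptions are Fahrenheit and seconds.
    def sensor_reading():
        return {"temp": 72.5, "ts": 1700000000}

    # Downstream component: hidden assumptions are Celsius and milliseconds.
    def store_reading(temp_c, ts_ms):
        print(f"storing {temp_c:.1f} C at {ts_ms}")

    # The glue: documented nowhere, easy to get wrong, and the first thing
    # that breaks when either side's assumptions change.
    def glue():
        r = sensor_reading()
        store_reading((r["temp"] - 32) * 5 / 9, r["ts"] * 1000)

    glue()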
Right, it's definitely the hidden assumptions that are currently killing us.
That's why the process is so difficult, because questioning everything is not just hard, it's also very time-consuming and often doesn't lead to anything. Or, worse, doesn't seem to lead to anything, because you stopped just a little short.
> This would start from the CPU microarchitecture and instruction set, through the programming interface of the operating system, up to the middleware used by most programs.
Perhaps better to start from the top, and work down to the CPU architecture?