Against Generality (2016) (novalis.org)
38 points by luu on Aug 26, 2022 | 8 comments



I've had this conversation with many colleagues over the years. There's something about software development that makes them think a "simple" system is one with fewer concepts -- fewer data types[0], fewer separate libraries, fewer services[1].

If I'm lucky I can shift them into thinking about inputs and outputs instead of execution. One exercise that was sometimes successful was to reason by analogy to sculpture, and the concept of "negative space" in painting. In this model the program starts out being able to do anything, and it's the programmer's job to carve away the probability space until the only remaining behavior is the correct[2] one.

Another was forcing them to diagram the system with unlabelled boxes, and every line labelled by the data it carried. I would ask questions like "for this RPC, is it possible to determine if the response is correct?" -- you'd be amazed (or maybe not) how often the architecture was too generic to allow that. I think this is one of the less-advertised benefits of automated testing: even a trivial test imposes some restriction on the domain of input/output values (see the sketch after the footnotes).

[0] Most memorably, a monitoring system where all data (including numeric metrics and log lines) is represented as a frame of a distributed stack trace.

[1] A common outcome of this thinking is a big-ball-of-mud, where all of the logic lives in some sort of giant central executable. You see, it's "simple" because there's only one service (that does everything) and only one build artifact (that depends on all code), and only one test suite (which must run in totality on each change), and so on.

[2] "Correct" being an intentionally flimsy word. Some times it meant a 100-page spec with official test vectors, some times a unit test suite, some times just eyeballing the output in a terminal.


How does one model a program that can do anything, to begin paring it down to the desired execution?


Generality gives us a lot of flexibility and the ability to build generic tooling. Once I know how to use Unix pipes, I can work with things I have never seen before because they speak the language of pipes. But a system being general doesn't mean it has to turn into a Turing tar pit. You can layer specifics on top of the generality. Just because I can pipe something to a process doesn't mean that it will accept anything on the pipe. It has an interface, and that interface can be strongly typed.
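
Something like this, say, in Python (the record shape is made up; this is just a sketch of the layering). The pipe itself accepts arbitrary bytes, but the program immediately parses them into one specific type and rejects everything else:

    import json
    import sys
    from dataclasses import dataclass

    # Generic layer: any bytes can arrive on stdin.
    # Specific layer: only records that parse into this type are accepted.
    @dataclass
    class Metric:
        name: str
        value: float

    def parse(line: str) -> Metric:
        raw = json.loads(line)  # raises on anything that isn't JSON
        return Metric(name=str(raw["name"]), value=float(raw["value"]))

    for line in sys.stdin:
        metric = parse(line)
        print(f"{metric.name}={metric.value}")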


Programmers love "one clever trick".

Reality does not; it's irreducibly complex, and as a result that trick quickly becomes an annoyingly overstretched metaphor.


Generality leads to bloat in software systems and increases complexity unnecessarily. As a general rule of thumb, software should be designed around the specific use case that meets the user's current needs. Generalisation should be delayed as long as possible in the design process, because the process itself reveals what would benefit from generalisation.


> Generality leads to bloat in software systems and increases complexity unnecessarily.

This is missing some qualifiers, methinks. An interface is generic in that you don't have a concrete implementation; surely that can be used to decrease complexity rather than bloat it?


I'm thinking more in terms of what you put into that interface. Do you add features that you might need later? My approach would be no. You design your first interface with only what you need right now. And if you don't need an interface to do that, you bring in the interface later.
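
For instance (a made-up storage interface, sketched in Python): the first cut exposes only the single operation today's callers use, and anything else waits until a caller actually needs it.

    from typing import Protocol

    class Store(Protocol):
        # Only what today's callers use. No put/delete/list
        # until something actually needs them.
        def get(self, key: str) -> bytes: ...

    class DictStore:
        def __init__(self, data: dict[str, bytes]):
            self._data = data

        def get(self, key: str) -> bytes:
            return self._data[key]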


I'm against generality in general, but not in particular. Unless I have that the other way around.



