The unfortunate side effect of this (very good) advice is that all code that's easy to delete will eventually be replaced with code that's hard to delete (and thus will eventually be impossible to delete in order to be replaced with something better).
Many people argue systemd is an example of code that’s easy to delete being replaced with code that’s hard to delete.
They’ve deleted init, the DNS client, dhcpd, the whole xdm family, various small open desktop protocols, kernel-level file permissions enforcement on certain device files, rsyslog, countless shell scripts for running background tasks via ssh, and I’m sure hundreds, if not thousands, of other well-modularized programs. None of the collateral damage is in subsystems related to init. Instead, it is in subsystems that worked well, but that were easy to delete.
On the other side of the coin, look at all the effort people are spending to rip systemd out. Multiple Linux distributions exist solely to contain the damage it’s doing.
It’s unclear if GNOME will even survive the war if systemd loses.
It’s also wasting the time of end users, so the damage can greatly exceed the total resources put into building Linux distributions.
A few days ago, I ran an “apt-get full-upgrade” on my headless Raspberry Pi, and some systemd subsystem wedged during the upgrade. Now networking is broken. I want to use this Raspberry Pi in an embedded I2C application that should run for decades. So, I need to find an operating system that:
(a) doesn’t use systemd - fool me once, shame on you; fool me twice... well, this is well past the second time.
(b) runs on raspberry pi
(c) has userspace tools to work with the i2c bus on the pi
(d) has a working upgrade path.
This is a huge pain, and it’s all to delete one software package that I don’t even care about, and that is irrelevant to the use case for this machine.
You might be able to put something together using Raspup or Ultibo. IIRC, both should be able to run on all Pi models. Both are pretty lightweight, simple, and they could be a good base for building what you are looking for.
Raspup is a Puppy Linux for Raspberry Pi. It uses Raspbian Buster as its base. I haven't messed with I2C on my Pi, but if there are any userspace tools in Raspbian Buster's repo then you can use them on Raspup. The working upgrade path is the biggest annoyance here. Package management on Puppy in general is a pain for anything but simple installs and removals. It seems like the Puppy community either doesn't upgrade or upgrades by installing the next Puppy version (which is usually fast).
Ultibo is not Linux but a Free Pascal kernel that doesn't implement a complete OS. It is for using a Raspberry Pi board more like a microcontroller, but you can use it as a base for anything. For example, someone made a Z80 CP/M emulator that runs pretty fast. Besides the Free Pascal library, the tools here are pretty much what you write yourself. This is probably the simplest option, and it may be alright if you don't plan on using the full functionality of a Linux machine or you want to avoid the Linux kernel scheduler.
Raspberry Pi hardware is not really suited for "decades" of reliability. If you want to use it for an application like this, you need to arrange to recover/replace the devices in the field and plan how to upgrade away from out-of-production models in a controlled way. (And you need to rehearse/test doing it too, so it works when you need to do it, especially as things change between RPi hardware generations.)
Check out Buildroot. I have had very good luck with it for professional embedded applications. It doesn't have the same annoyances that other common embedded Linux distros have. It is easy to hack on, easy to customize, and, if you must, it is easy to understand.
> some systemd subsystem wedged during the upgrade. Now networking is broken.
All software has problems. Without specific details, blaming your problem on systemd specifically seems just as unreasonable as blaming “Linux” or even “Unix”. In fact, in past years during the old OS wars, there were many such rants, blaming “Unix”. (See for instance The Unix-Haters Handbook.) Become a curmudgeon, and idolize the past, at your own peril.
I recommend, instead, to live in the present, to use currently normal software, and to fix every problem as it appears. Ceasing to upgrade permanently (possibly by moving to an obviously dead-end fork) is never a sensible option in the long run.
Don't bring the Unix-Haters Handbook into this. A fun, self-consciously curmudgeonly romp like that has about as much in common with the systemd debate as P.J. O'Rourke does with Bill O'Reilly.
I think the point of the parent was that systemd will be much harder to replace than other init mechanisms once something better comes along, because it grows into every part of a Linux setup. And if that's true, then systemd actually hinders progress and innovation, because the cost of replacement will be too high.
I think the one thing we do wrong with code is that when we document it, we write what the function does. Code is largely self-documenting, and this kind of comment goes stale quickly anyway.
IMO, the thing we should be doing is documenting WHY this function needs to exist. That is the question that is hard to answer three years later.
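Something like this (an invented function and an invented business reason, just to show the shape):

    interface Order { total: number }

    // WHAT (restates the code, goes stale fast): "multiplies total by 0.9".
    // WHY (still useful three years later): loyalty discounts have to be
    // applied before tax in some markets, so this can't live inside the
    // tax calculation.
    function applyLoyaltyDiscount(order: Order): Order {
      return { ...order, total: order.total * 0.9 };
    }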
I like your comment, but for the wrong reason: explaining what a method DOES goes against encapsulation principles. It creates coupling to implementation details, when what should be provided is just the interface. Any explanation can be exposed in business domain logic and documentation.
There are multiple pieces of documentation. Documenting what a method does is one of them, but primarily intended for maintainers of the system. Documenting its interface is meant for its users.
Both have great value, but at different times and to different people.
Indeed, was thinking the same thing while posting.
In practice though, I've mostly encountered a lack of comprehensive developer docs, or docs that became outdated years ago. Most devs will point to the source itself as documentation, except for critical or certified software. Outside of those cases, the internal logic is usually scattered and not cohesive, so you need to check dozens of files for each hypothesis about the original intentions and context.
I am working on some code I last touched 12 years ago. I think good documentation is essential, it saves time in the long run. Knowing the why is very helpful.
Having just completed a change that should have been small, but ended up spanning ~75 files, I can attest to this. It would have been small, up until the point where one of the excitable members of the team discovered Uncle Bob and got excited.
Which, disclaimer, I generally like what Uncle Bob has to say. But there's this thing, and I don't quite understand how it happens, where it seems to be really easy to implement the cosmetic parts of the programming style he advocates while simultaneously achieving the diametric opposite of the fundamental goals that these techniques are supposed to achieve.
> But there's this thing, and I don't quite understand how it happens, where it seems to be really easy to implement the cosmetic parts of the programming style he advocates while simultaneously achieving the diametric opposite of the fundamental goals that these techniques are supposed to achieve.
All of OOP is like this. The most enthusiastic OOP adherents create the biggest OOP messes. Maybe every style of programming suffers from this problem eventually? The style has a go-to form of abstraction and a characteristic kind of mess that results from overapplying that form of abstraction.
Once people get comfortable dealing with that kind of mess, they realize, if I program zealously and dogmatically in this style, this is the only kind of mess I will ever have to deal with, and the comfort of always dealing with a familiar kind of mess they know they can slog through outweighs every other consideration.
I once encountered a take on this that rang quite true to me, though I can't for the life of me find where I read it.
The observation was that good object-oriented design is inherently unstable. With even slight perturbations, it can quickly spiral away into a mess. And those perturbations tend to happen almost constantly in real life, because writing SOLID code requires vastly more skill, knowledge and effort than not writing SOLID code. So keeping the code clean requires a constant, almost aggressive effort by some (probably self-) designated caretaker who understands and can defend the design. The social factors there are terrible, though, because now you've got a person on the team whose very job is more or less to nitpick and have arguments with the rest of the team. Frankly, it might be better to let the code be messy than it is to risk creating that kind of work environment.
I play this role in my team, and as a team lead I take it as one of my responsibilities. What works for us is that when I don't approve something I explain why and work out an alternative implementation plan with them.
If we cannot come up with an alternative or cannot convince ourselves that it is indeed better, then the original implementation goes in. Otherwise we move forward with the new plan. I'd say that (when I challenge an implementation) around 4 out of 5 times we end up with a new implementation.
The team is very happy with this approach, or so I've been told. It wastes some time in the short term (hey the thing worked, why are you overhauling it?) but our manager's perception of good team motivation and resulting quality is what buys me the leeway to keep doing it.
I have seen many a TDD enthusiast create many more problems in terms of test maintenance than they solved in code quality. That doesn't mean TDD is bad; it just means that, like anything, it is not a silver bullet.
I certainly don't think TDD is bad; I do it myself. Though I will say that the Classicism vs Mockism debate is alive and well, and it is my (a classicist's) opinion that test-induced design damage is largely a by-product of getting so caught up in the red-green-refactor flow that the tests start to become an end in and of themselves. At which point maintainability has taken a back seat to mockability.
I'd rather have a slow test that doesn't unnecessarily concern itself with implementation details, than a test that is fast, but achieves its speed by getting its dirty little fingers all over the implementation details, and throws a tantrum and refuses to let go of them every time you attempt some spring cleaning.
I was thinking this too, but on closer scrutiny I don’t think it’s true unless the “goal” of code is to be rewritten. If the goal of code is to work as well as needed and no more, then it won’t move in that direction.
The “goal” of employees in a corporate environment, on the other hand, can be to be promoted and get paid more, leading to the Peter principle.
I have had pretty good luck with just dropping a comment where something would need to be expanded. I find for myself that sometimes it is easy to add bunches of code for cases that never happen. Mostly because, in the moment, you are thinking about all the different cases and have somehow missed the specific case you are actually working on. Sometimes (not always) it is better just to acknowledge that you are over-engineering.
A good example of that is adding Boost to your code base. Once you've added Boost to implement one simple feature, eventually someone will use Boost to do preprocessor magic or other metaprogramming. Then there is no going back, because there is (almost) no way you will be able to reverse that decision.
I don't find the analogy works for software (a large structure is hard to change).
Software is always easy to alter: hack in some code here and there, move some functions around, add and rename files. The larger the software, the more places there are to make changes.
It's unlike a physical structure, where a 1000-ton wall really can't be moved.
Funnily enough, I usually experience the opposite. I've worked with very heavy systems and very light systems, and I always find more stability with the really lightweight stuff. Obviously there are more bugs at first, but once you go through your first wave of code fixes I'm usually sitting pretty for a while, until I change more stuff.
I don't see your point. Code that's more modular and pluggable is by definition easier to delete. That should not be a reason to stop us from writing easy-to-delete code.
This was part of the initial idea of extension. David Parnas's 1979 paper that popularised the term was called Designing Software for Ease of Extension and Contraction, where "ease of contraction" refers to subsettability, i.e. removing parts of the code without having to change other parts.
I think that one possible problem with the 'O' in SOLID (the open/closed principle) is spending excess time second-guessing future modifications to the code. The examples are always clear-cut, but in the real world it sometimes doesn't work out that way.
OTOH I'd say that the 'L' (the Liskov substitution principle) is worthwhile, as inheritance can be abused as a kind of 'version control' for functionality, and LSP helps guard against that.
Some time ago I worked on exposing an API that was constantly evolving, and since different partners adapted to it at different paces, we had to support a wide range of versions of it, which was basically what you describe as version control.
I'm not gonna say it was pretty; actually, from a design perspective it was disgusting. But from the perspective of handling a dozen versions of the same code in the same application, it was certainly very sane.
For example, at some point we supported versions 2.0, 3.0, 4.0, 4.1, 4.2, 4.3 and the freshly new 5.0. They inherited this way: 2.0 -> 3.0 -> 4.0 -> 4.1 -> 4.2 -> 4.3 -> 5.0, easy enough.
One day we decide to deprecate version 2.0.
What happens is we consolidate the 2.0 code into 3.0:
1. If a 3.0 method calls super, copy-paste the 2.0 implementation into its place.
2. If it's overridden, there's nothing to do.
3. Run unit and regression tests on the remaining versions and fix anything that might have been missed.
4. Delete 2.0 code.
The end.
If we need to fix a bug introduced in 4.1, we fix it there and it gets fixed automatically in all the higher versions.
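In code, the shape was roughly this (toy names and methods, not our real API):

    // Each version subclasses the previous one and overrides only
    // what changed in that version.
    class ApiV2 {
      greet(): string { return "hello"; }
      status(): string { return "ok"; }
    }

    class ApiV3 extends ApiV2 {
      greet(): string { return "hello, v3"; } // changed in 3.0
    }

    class ApiV41 extends ApiV3 {
      status(): string { return "OK"; } // changed in 4.1
    }

    // A fix in ApiV3.greet is automatically picked up by 4.1 and later.
    // Deprecating 2.0 means copying anything 3.0 still inherits from
    // ApiV2 down into ApiV3, then deleting ApiV2.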
Sounds reasonable. I should really have said 'informal version control', where it's used to get stuff out the door and piles on technical debt as well :-o
The L really doesn't fit in with the others. It's super weird that people even try to compare it with the other "SOLID rules".
The Liskov substitution principle isn't advice on architecture, it's a statement of what the mathematical requirements are for inheritance to be compatible with a type system. If you violate it, you are no longer in compliance with the expectations your compiler has of you, and your compiler may fail you. It's like allowing UB into your C code. You might very well get away with it, but if you end up with a run-time type error, even though you have an allegedly static type system, well it's your OWN FAULT.
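A tiny made-up example of the kind of violation I mean:

    class Bird {
      fly(): string { return "flying"; }
    }

    // LSP violation: Penguin strengthens fly()'s contract by throwing,
    // so it is not substitutable where a Bird is expected.
    class Penguin extends Bird {
      fly(): never { throw new Error("penguins can't fly"); }
    }

    function migrate(bird: Bird): string {
      return bird.fly(); // type-checks fine...
    }

    migrate(new Penguin()); // ...and blows up at runtime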
The other principles are all recommendations that have to do with whether or not other future developers will be happy building on your work. It is very easy to imagine a perfect-until-you-maintain-it piece of software that wildly violates every single one of those principles. I'm personally not convinced by most of them, but as someone with only 10-ish years of experience, I'm definitely still learning, and it won't surprise me if I'm absolutely wrong about some or all of S/O/I/D. But either way, they're rules of thumb, not mathematical laws, and they're rules of thumb about future work, not rules of thumb about whether the software will work right this moment.
Microservices tip for this: don't put code into a shared library without careful consideration of the consequences. Even if it is used in multiple services, consider duplicating it instead, and weigh the risks and trade-offs against those of putting it in a shared library.
Common scenario:
New developer: "I'm done refactoring the implementation of the BingBong class in the shared library. It was a lot of work to change and test all the frobnicateXxxx methods!"
Manager: "Wait, I thought none of our services do any frobnification?"
The old hands discuss:
"They don't. Not anymore."
"Are we sure? Perhaps the Foo service we haven't touched since last year?"
"No, the Foo service never did."
"Let's search the repos. Maybe we can delete some of these methods."
"This is weird. We have seven different frobnicate methods, and we never needed to support that many different kinds of frobnification."
"Of course. It's such a pain to roll out a new major version of a library, you don't want to change anything existing."
"Do you remember when the rendering service was stuck on version 3 of the auth library? It was because we removed a method from the auth library and rolled out 4.0 without it. Turns out it was being used on a feature branch in the rendering service that got merged right before the release."
"I remember. I'm the one who had to roll out version 4.1 the next day with exactly the same code as version 3.7."
There's a simple way of understanding how stories like this happen, and a simple conclusion: Once functionality is added to a shared library, changing it requires much more care. Therefore, shared code is much more expensive to maintain than non-shared code. This is why it makes sense to ask a question like, "Which is cheaper in the long run, maintaining three non-shared copies of this code, or maintaining one shared copy?"
Yup, I'm a big believer in applying the rule of three ( https://en.wikipedia.org/wiki/Rule_of_three_(computer_progra... ) for moving code into a library as well. I find if I create an API with just one or two examples in mind, it often doesn't turn out to be as general as I thought it would be. By the time I've done something three times, I have a much better idea of what the different use cases will be.
I wholeheartedly agree with "don't immediately jump on the modularize and abstract everything right away".
I think that "modularization and abstraction is always and uniformly good" is one of the big lies of our profession. It’s easy to see how it’s attractive : programming is intellectual work, and displaying capacity of abstraction is rewarding. I was extremely enthusiastic about that stuff when I was young, too. Then I got to school to get a CS degree, and it only reinforced it.
Now that I have some years of experience, let me tell you about my last job. It’s a web app, like most of us (I think) are building. Backend in node. Frontend in react. The client was extremely unhappy about the delays and cost of their current contractor (weeks and 4-digit invoices for simple features), and they wanted us to take charge of the app.
As soon as I received the source code, I looked at the backend part. It was incredibly clean by the standards of my younger self and my CS teachers. Each class in its own file. You had one routes/_index.ts file that just included routes/users/_index.ts for /users, routes/blog/_index.ts for /blog, and so on (recursively: you would have routes/blog/comments/_index.ts too). The routes/*.ts files would be just that, declaring routes. Code was cleanly separated and implemented in controllers (controllers/users/_index.ts, and so on). Of course the controllers didn't directly use the ORM; there were intermediate DAO classes for that.
Very clean. Also, changing the slightest thing required going through 5-6 files. May God save your soul if you wanted to change more complex stuff.
I discarded everything. Most of the code now lies in /app.ts, which contains all the routes. Each route is directly implemented in this file (no separate controllers) and directly calls the ORM (no DAO), except one route which does complicated stuff and has its own implementation file.
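For illustration, the shape is roughly this (an Express/Sequelize-style sketch with an invented route, not the actual client code):

    import express from "express";
    import { User } from "./models"; // ORM model

    const app = express();
    app.use(express.json());

    // Route declared AND implemented in one place, calling the ORM directly.
    app.get("/users/:id", async (req, res) => {
      const user = await User.findByPk(req.params.id);
      if (user) res.json(user);
      else res.sendStatus(404);
    });

    app.listen(3000);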
Loccount on my "ugly, not modular" backend, which, incidentally, has many more features (that the client wanted for months but the previous contractor couldn’t implement for a reasonable price/time). And ~1/3rd of those lines are just some CSS assets moved from the frontend to the backend because we needed them for the mailing list:
all SLOC=2284 (100.00%) LLOC=0 in 13 files
Loccount on the "very clean" backend of the previous contractor:
all SLOC=12507 (100.00%) LLOC=0 in 262 files
Now, don’t get me wrong. Abstraction and modularization are not uniformly and always bad. But they have to make sense. Don’t write and organize a 2k LOC project the same way you would write and organize a 100k LOC project — or you’ll end up with a 500% overhead in LOC that WILL translate to a 500% higher burden of maintenance.
I just wish school taught me that, or that my younger self could have been clever enough to see it by himself.
> Also, changing the slightest thing required going through 5-6 files.
That is a sign it was incorrectly modularised. Good modularisation means high cohesion and low coupling. Having to make changes in multiple files means the opposite.
Don't rag on modularisation (a great concept!) when your only experiences are of bad implementations of it. (Which is not surprising, because most people do get it wrong.)
Maybe the takeaway is "better not do it at all than do it incorrectly."
More concretely, the code you describe seems to have suffered from the awfully common implementation of the model-view-controller pattern, where module boundaries are based on technical implementation details (routes are their own module, pages their own, controllers their own, etc.). This looks "clean" when you are inexperienced, but it is contrary to all good modularisation.
Module boundaries should be based on business domain concepts, not implementation details.
The ultimate source is David Parnas's 1970s articles: On the Criteria to Be Used in Decomposing Systems into Modules, and Designing Software for Ease of Extension and Contraction by the same author. The 1960s NATO conferences on software engineering also touch on this.
These articles essentially introduced modularization, and they argue very well for why their approach is superior to the misunderstanding we frequently see today.
There was a lot of good research back in the '60s and '70s into how to build software right, and it was mostly ignored or forgotten during the '90s, which is when many of the mistaken patterns we see today were invented.
Not much of an elaboration, for which I'm sorry, but I think the original sources are a more efficient use of time than my retelling, for anyone curious.
Edit: I can say one thing in different words than the original papers: one thing you want from your modules is that their interfaces are stable. (This is what's known as "information hiding", i.e. the modules themselves can change as much as you want, but since their interface is stable those changes don't propagate to other parts of the code.)
If you define a module for your routes, then the interface of that module will have to change for literally anything you add or remove to the site. This is the opposite of a stable interface.
Figuring out how to design a stable interface is tricky -- this is why good engineers are worth so much -- but one rule of thumb is to base modules on business domain concepts. The business domain has likely existed for longer than your application, and its concepts will likely outlive your application. They tend to be stable, compared to whatever implementation details seem reasonable today.
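To make that concrete, here's a toy sketch (module and method names invented):

    // A module cut along implementation lines: its interface has to
    // change every time any feature is added anywhere in the site.
    interface RouteRegistry {
      addUserRoutes(): void;
      addBlogRoutes(): void;
      addCommentRoutes(): void; // ...one more method per feature, forever
    }

    // A module cut along a business concept: the interface can stay
    // stable while the implementation churns behind it.
    interface Billing {
      invoice(customerId: string): Promise<void>;
      refund(invoiceId: string): Promise<void>;
    }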
You have 5 places in code where you draw a blue rectangle:
drawRectangle("blue", x0, y0, x1, y1);
Do you refactor them into drawBlueRectangle(x0, y0, x1, y1)?
It seems like you removed the duplication, but you didn't. Because if you now get the requirement to draw red rectangles instead, you surely won't leave it as:
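    // the name still says blue, but it now draws red
    function drawBlueRectangle(x0, y0, x1, y1) {
      drawRectangle("red", x0, y0, x1, y1);
    }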
So instead of changing 5 places you now have to change the invocation in 5 places and implementation in 1 place.
You can argue it's because you named it wrongly, it should be "drawWhateverThingTheyHaveInCommonRectangle" instead, for example drawHighlightingRectangle. And you'll be right.
But you don't know if they will have that thing in common forever. And splitting the code back up is harder than refactoring it, so it often leads to code like this (roughly; the exact flags are invented):
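    // one flag per new caller, and business logic starts creeping in
    function drawHighlightingRectangle(x0, y0, x1, y1, isWarning, isSelected) {
      let color = "blue";
      if (isWarning) color = "red";
      if (isSelected && !isWarning) color = "green";
      drawRectangle(color, x0, y0, x1, y1);
    }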
This code may seem OK, but it tends to grow business logic inside, and you don't immediately know what combinations of the deciding factors are actually possible without looking at all the invocations. So either you look at all the invocations before implementing your change (and then the refactor didn't actually save you any work: what does it matter if you look at code and change it, vs. just look at code and change stuff elsewhere), or you ignore the invocations and add your change in isolation (probably writing code that is redundant and overcomplicated, because you handle cases that cannot happen).
This is an obviously oversimplified example, but I've made these exact mistakes several times :)
I think you should ask yourself why you are drawing blue rectangles. What is their purpose? Do they just happen to be blue? Or are they blue because they all "do the same job"? Maybe you should have
drawInlineHelpBox(x0, y0, x1, y1)
or
drawEnergyShield(x0, y0, x1, y1)
We can see from the language that, unlike your example, this is a true abstraction. Your example went from a color parameter to a specific color; both are still phrased in terms of colors.
But here we go from a color to UI elements. The terminology is completely different.
Yeah, the idea of DRY is simple, but it doesn't change the fact that you have to use your judgement, make mistakes and learn from them, etc. Nothing ever does.
then call separately? Depends on the number of calls to drawHighlightingRectangle I guess.
Something I've learned recently is to try to think in terms of behaviours, then compose them instead of mixing them together. Also, it's easier to juggle the pieces by erring on the side of decomposition initially, then recomposing a bit once they're organised, but only if it makes more sense.
DRY is not very useful as a concept. It doesn't define any rules for how to identify bad repeated code, nor how to change it, and it doesn't define the limitations of the concept or its application.
I know this sounds wild and harsh, but we have to overcome the habit of explaining non-trivial or even unsolved problems with truisms.
In my last interdisciplinary role, I tried to simplify this as "you drew circles around the wrong parts". If many lines still cross the module boundary...
I would question your assumption that "clean code" that you describe is really modularization and abstraction. (In particular, separating everything into small scattered pieces, which is IMHO a horrible Java habit that I have to endure now.)
I mean, both modules and abstractions should have purpose.
The purpose of modules is to "do one task well" so to speak. So if for a typical change you need to modify several of them, you're not modularizing it right.
The purpose of abstraction is to provide another "language" which lets you forget certain technical details, while clarifying the bigger picture. Again, unless this language has been designed wrong, you shouldn't have to touch the abstraction.
> Very clean. Also, changing the slightest thing required going through 5-6 files.
Martin Fowler's "Refactoring" is a great book (I only know the 2nd edition from 2019). From a naive understanding of the "clean code" school of thought one might assume that splitting up everything into small functions, classes, modules is always the way to go. But Fowler's advice is much more nuanced than that. His list of bad code smells includes the aptly named "Shotgun Surgery", which describes just your quoted situation. The suggested way to go is then to first inline all the scattered stuff, next to extract parts such that the logic is more contained.
Your rewrite sounds very similar. And of course you are right: school and books probably don't do a good job of transferring this knowledge. That "Shotgun Surgery" paragraph is easy to dismiss for a reader who hasn't experienced the pain themselves.
I remember coming across Refactoring after having already adopted the practice on my own for some years. I wish I had come across it sooner. But with my initial experience under my belt, I was able to realize what a gold mine it is. Anyone working in a higher-level OOP language especially should give it a read.
”You aren’t gonna need it” (YAGNI)[1] is good to keep in mind.
When you are building the abstractions too early you are guessing the future needs. If you never need to change X then the code to make that change possible is just extra weight.
I think there's a subtle trap where programmers think their code isn't nicely organized unless they themselves have written all those abstraction layers.
But one of the virtues of picking up a framework is precisely that it's already there for you. Hopefully it matches what you need. But within reason, if you pick a good framework for your task, you don't then need to layer on another set of abstractions... it's already got them!
It's completely sensible to pick up a framework and bash out 2000 lines of code that simply spends the abstraction budget of the core framework. It's half the value of the framework in the first place. Obviously if you're going to build another 100K of lines on the framework you may need to bring some additional organization to the party, but the space of "framework + 2K lines of code" is a pretty rich one.
Sandy Metz' book "99 bottles" (there was a second edition recently, now you can choose between beer and milk and between Ruby and Javascript) tries to hammer those points in. I'd say it basically dedicated to the topic. You have one problem and explore differnt ways to approach it with pros and cons revealed and discussed along the way. And "do not rush with an abstraction" is something that Sandy tries to get accross. The wrong abstraction will cost you more in a long run than some code duplication.
Highly recommended.
Just as a counterexample to this, my last gig (I got out after 4 months, thankfully) was written with some of these "do copy and paste" and "write a big lump of code" principles, and it was an absolute nightmare to work with. Most functions were hundreds of lines long and the files were huge. The code had no architecture and was very hard to reason about or modify. Most of the work was fixing a steady inflow of bugs which had no concrete repro steps and happened sporadically.
People rag on premature optimization but I will take it over no optimization at all. Someone who prematurely optimizes at least has good intentions in their heart and is trying to do better.
At the end of the day, every day, even every hour, as a developer you constantly have a choice: be lazy or be good. If you are lazy, you will add one more if-then statement to the already huge function. If you are good, you will refactor it. It's OK to be lazy every once in a while, but that's a very slippery slope, and years of laziness will crush a codebase under its own weight.
There is also the case where even a well-abstracted codebase will suffer because of the particular feature requested. Even the best codebases will need changes in 5-6 files, if not more, if the feature doesn't fit with the rest of the system. It is trivial for stakeholders to request something that doesn't fit with the rest of the system, where one can then declare the system 'poorly abstracted'.
> Loccount on my "ugly, not modular" backend[...]
>
> all SLOC=2284 (100.00%) LLOC=0 in 13 files
>
> Loccount on the "very clean" backend of the previous contractor:
>
> all SLOC=12507 (100.00%) LLOC=0 in 262 files
This raises the question: so what?
The comparison will only make sense when another poor soul tries to change something in the likely mess you seem so proud to have created. What you found was not very maintainable, but will your work be?
It is arguable whether a 2x LOC reduction is worth it. But a 5x LOC reduction on the same technology stack changes the complexity category of the code base. 2kLOC (a borderline trivial application) is almost always easier to maintain than 12kLOC (a small application), just as 12kLOC would be easier to maintain than 60kLOC.
This is valid, of course, only for comparable code, not taking into account obfuscation, complete removal of tests, etc.
Fun read, but does anyone actually change their coding practices based on high-level essays like this? HN seems to be filled with a highly opinionated bunch, and I question the point of these musings.
I find value in essays like that even if they don't make me write completely different code tomorrow. Having design consideration clearly laid out and discussed is really helpful, imo. (Of course that varies with how effective the essay is at conveying its points and how relevant the points are to me to begin with.)
Maybe I've been thinking about similar things myself but never got around to thinking it all the way through or formulating actionable conclusions. Then reading the essay is just like having bits and pieces fall into place in my mind and having disorganized thoughts finally make sense, like a shortcut through the whole mental bureaucracy of forming opinions.
Maybe I've been having a disagreement with coworkers about something the essay discusses. Then reading the essay may open up a new perspective on the issue, or introduce terminology that helps us discuss things more clearly, or at least validate that we're discussing something meaningful that other people have also had to think about.
Maybe I've already been doing exactly what the hypothetical essay advocates for! But I've always felt kind of bad about it because I couldn't convince myself that I'm doing it for a good reason, and now the essay is making me feel better about myself.
Maybe the essay is completely wrong, but it's wrong enough that it provokes an expert into writing an enlightening comment on the details on why and how it's wrong, and how to be less wrong.
More people should write more essays, even if they don't end up as seminal blogposts that are cited for decades to come.
Perhaps you are using your intuition. I let it guide me without overanalyzing every decision, more like feeling it. Programming is also an art, and intuition can play a bigger role in it. Engineering is the organized form, but built on pre-existing ideas enforcing helpful boundaries.
I think when faced with too many slices in decision-making and decision considerations, as you put it, it can become overwhelming, especially if one is not completely sold on a certain ideology/opinion to fight for. That's where intuition can be a good guide. However, intuition can only do so much; it needs to feed off some external input at times, such as this type of opinionated article.
I'll admit I haven't necessarily changed my coding practices directly based on posts like this, but they do help me formalize the things I've learned in my own experience.
That sounds dismissive, did you intend that? It's my favorite type of writing. It tickles your brain. Does he mean it like he says? Do I agree? It makes you think, even when it sounds like he's telling you what to do.
It most definitely isn't. It's written in a bit of an absurdist tone, but beneath all that it's a lot of solid advice about not over-engineering things from the beginning and working iteratively.
It's not satire; the technique of peppering a work like this with slightly-modified-famous-out-of-domain-quotes is not a tool of satire but of implicit metaphor.