> Turns out all the big ones with strict architectural (n=3) pattern usage, although “clean”, the code is waaaay too complex and unnecessarily slow in tasks that at first glance should have been simple.

My last job had a Python codebase just like this. Lots of patterns, implemented by people who wanted to do things "right," and it was a big slow mess. You can't get away with nearly as much in Python (pre-JIT, anyway) as you can in a natively compiled language or a JVM language. Every layer of indirection gets executed in the interpreter every single time.
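A hypothetical micro-benchmark (all names are mine, not the parent's) makes that concrete: each delegation layer is one more interpreted call on every single invocation, so the overhead grows linearly with the depth of the layering.

    import timeit

    def compute(x):
        return x * 2 + 1

    class Layer:
        # One level of "clean" indirection: just delegate inward.
        def __init__(self, inner):
            self.inner = inner
        def __call__(self, x):
            return self.inner(x)

    wrapped = compute
    for _ in range(5):  # five layers of indirection
        wrapped = Layer(wrapped)

    print(timeit.timeit(lambda: compute(10), number=1_000_000))
    print(timeit.timeit(lambda: wrapped(10), number=1_000_000))

On CPython the wrapped version runs several times slower; a C++ compiler or the JVM's JIT would inline most of this away.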

What bothers me about this book and other books that are prescriptive about application architecture is that they push people towards baking in all the complexity right at the start, regardless of requirements, instead of adding complexity in response to real demands. You end up implementing both the complexity you need now and the complexity you don't need. You implement the complexity you'll need in two years if the product grows, and you place that complexity on the backs of the small team you have now, at the cost of functionality you need to make the product successful.

To me, that's architectural malpractice. Even worse, it affects how the programmers on your team think. They start thinking that it's always a good idea to make code more abstract. Your code gets bloated with ghosts of dreamed-of future functionality, layers that could hypothetically support future needs if those needs emerged. A culture of "more is better" can really take off with junior programmers who are eager to do good work, and they start implementing general frameworks on top of everything they do, making the codebase progressively more complex and harder to work in. And when a need they anticipated emerges in reality, the code they wrote to prepare for it usually turns out to be a liability.
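A contrived Python sketch of the failure mode (both versions invented for illustration): the two do identical work today, but the second drags along a plugin registry for export formats nobody has asked for yet.

    import csv

    # What the feature actually needs today: save a report as CSV.
    def save_report_csv(rows, path):
        with open(path, "w", newline="") as f:
            csv.writer(f).writerows(rows)

    # The speculative version: a registry "for when we add formats",
    # even though only CSV is ever used.
    EXPORTERS = {}

    def register_exporter(fmt):
        def decorator(cls):
            EXPORTERS[fmt] = cls
            return cls
        return decorator

    @register_exporter("csv")
    class CsvExporter:
        def export(self, rows, path):
            save_report_csv(rows, path)

    def save_report(rows, path, fmt="csv"):
        EXPORTERS[fmt]().export(rows, path)

And when a real second format arrives, it rarely fits the imagined seam anyway.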

Looking back on the large codebases I've worked with, they all have had areas where demands were simple and very little complexity was needed. The ones where the developers accepted their good luck and left those parts of the codebase simple were the ones that were relatively trouble-free and could evolve to meet new demands. The ones where the developers did things "right" and made every part of the codebase equally complex were overengineered messes that struggled under their own weight.

My preferred definition of architecture is the subset of design decisions that will be costly to change in the future. It follows that a goal of good design is minimizing architecture, avoiding choices that are costly to walk back. In software, the decision to ignore a problem you don't have is very rarely an expensive decision to undo. When a problem arises, it is almost always cheaper and easier to start from scratch than to adapt a solution that was created when the problem existed only in your head. The rare exceptions to this are extremely important. Optically, it always looks smarter and more responsible to have solved a problem incorrectly than not to have solved it at all. But we shouldn't make the mistake of identifying our worth and responsibility solely with those exceptions.



> What bothers me about this book and other books that are prescriptive about application architecture is that they push people towards baking in all the complexity right at the start, regardless of requirements, instead of adding complexity in response to real demands.

The trouble is that if you strictly wait until it's time, basically everything requires some level of refactoring before you can implement it.

The dream is that new features are just new code, rather than refactoring and modifying existing code. Many people are already used to this idea: if you add a new "view" in a web app, you don't have to touch any other view, nor do you have to touch the URL routing logic. I just think more people are comfortable depending on frameworks for this kind of thing rather than implementing it themselves.
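Flask is one concrete example of that property (the views here are made up): registering a new view is purely additive, and neither the existing views nor any central routing table gets touched.

    from flask import Flask

    app = Flask(__name__)

    @app.route("/orders")
    def list_orders():          # existing view, never modified below
        return "orders"

    # A new feature is just new code: the decorator registers the
    # route as a side effect, so no routing logic is edited.
    @app.route("/invoices")
    def list_invoices():
        return "invoices"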

The trouble is a framework can't know about your business. If you need pluggable validation layers or something you might have to implement it yourself.
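A minimal sketch of what implementing it yourself could look like (all names hypothetical): validation rules as callables in a registry, so adding a rule means appending code rather than editing a central validator.

    # Hypothetical pluggable validation: each rule raises ValueError.
    VALIDATORS = []

    def validator(fn):
        VALIDATORS.append(fn)
        return fn

    @validator
    def require_email(order):
        if "@" not in order.get("email", ""):
            raise ValueError("invalid email")

    @validator
    def require_positive_total(order):
        if order.get("total", 0) <= 0:
            raise ValueError("total must be positive")

    def validate(order):
        for check in VALIDATORS:
            check(order)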

The downside, of course, is that we're not always great at seeing ahead of time where the application will need to be flexible and grow. So you could build this into everything, leading to unnecessarily complicated code, or into nothing, leading to constant refactors that get worse and worse as the codebase grows.

Your approach can work if developers spot what's happening early and actually do what's necessary when the time comes. Unfortunately in my experience people follow by example and the frog can boil for a long time before people start to realise that their time is spent mostly doing large refactors because the code just doesn't support the kind of flexibility and extensibility they need.


> The dream is that new features are just new code, rather than refactoring and modifying existing code

I don't just mean new features. I mean new cross-cutting capabilities. I mean emitting metrics from an application that has never emitted metrics. I also mean adding new dimensions to existing capabilities, like adding support for a second storage backend to an application that has only ever supported one database.
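As a rough sketch of that second-backend retrofit (the app and every name here are invented): the seam is extracted at the moment the second backend actually shows up, shaped by the methods the code really calls rather than an interface imagined years earlier.

    import sqlite3
    from typing import Protocol

    class UserStore(Protocol):
        # Extracted from real call sites: only two methods were ever used.
        def get_user(self, user_id: int) -> dict: ...
        def save_user(self, user: dict) -> None: ...

    class SqliteUserStore:
        # The original direct-to-sqlite code, now behind the seam.
        def __init__(self, conn: sqlite3.Connection):
            self.conn = conn
        def get_user(self, user_id):
            row = self.conn.execute(
                "SELECT id, name FROM users WHERE id = ?", (user_id,)
            ).fetchone()
            return {"id": row[0], "name": row[1]}
        def save_user(self, user):
            self.conn.execute(
                "INSERT OR REPLACE INTO users VALUES (?, ?)",
                (user["id"], user["name"]),
            )

    class InMemoryUserStore:
        # The new backend only has to match the two methods in use.
        def __init__(self):
            self.data = {}
        def get_user(self, user_id):
            return self.data[user_id]
        def save_user(self, user):
            self.data[user["id"]] = user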

These are changes that I was always taught were important to anticipate. If you don't plan ahead, it'll be near impossible to add later, right? After a couple of decades of working on real-life codebases, seeing the work that people pour into anticipating future needs, making things pluggable, all that stuff, seeing exactly how helpful that kind of up-front speculative work turns out to be in practice when a real need arises, and comparing it to the work required to add something to a codebase that was never prepared for it, I have become a staunch advocate for skipping almost all of it.

> Unfortunately in my experience people follow by example and the frog can boil for a long time before people start to realise that their time is spent mostly doing large refactors because the code just doesn't support the kind of flexibility and extensibility they need

If the engineers are doing large refactors, what in the world could they be doing besides adding the "kind of flexibility and extensibility they need?"

One thing to keep in mind when you compare two options is that unless the options involve different hiring strategies, the people executing them will be the same. If you have developers doing repeated large refactors without being able to make the codebase serve the current needs staring them in the face, what do you think will happen if you ask them to prepare a codebase for uncertain future needs? It's a strictly harder problem, so they will do a worse job, or at least no better.


+100.

Patterns and abstractions have a HUGE cost in Python. They can be zero-cost in C++ thanks to the compiler, or very low cost thanks to the JVM's JIT, but in Python the cost is very significant, especially once you start adding I/O ops or network calls.



