The concept of clarity is why I don’t like the consequence of SOLID, where you have lots of tiny classes.
It's easier to understand complex functions within a simple class structure than the other way around: jumping between multiple files and classes incurs a high comprehension cost, whereas even a complex function typically fits on your screen.
I typically find it very difficult to understand complex functions (100+ lines of code, ~3 or more nesting levels deep), and even simpler complex functions that mix 2-3 concepts into a 15-line function.
I've only just started doing TDD the "Growing Object Oriented Software guided by tests" way, and I find it incredibly helpful that each and every class does just _one_ thing, even splitting up those 15 line functions into two or three separate classes implementing an interface -- single responsibility -- helps me a lot in reasoning about the code.
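For flavor, here's the kind of split I mean, as a minimal Java sketch (all names invented):

    import java.util.List;

    // Invented names, just to show the shape of the split:
    interface ReportSink {
        void deliver(String report);
    }

    class ConsoleSink implements ReportSink {
        public void deliver(String report) { System.out.println(report); }
    }

    class ReportFormatter {
        String format(List<String> rows) { return String.join("\n", rows); }
    }

    // Each class now has one reason to change; the caller composes them:
    //   new ConsoleSink().deliver(new ReportFormatter().format(rows));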
I _have_ experienced the dependencies issue myself already, though. It's very annoying to click on a method in my IDE and get shown the interface definition of that particular method; I'll then have to trace my way through a couple of files to find the concrete dependency.
This is widely believed and repeated, but empirical evidence actually runs the other way: according to studies cited in the book Code Complete, functions in the range of 100 to 150 LOC are more maintainable than shorter ones.
Code Complete speaks about subroutines, if I understood correctly.
I think that in functions, or even objects, the results would be very different.
I usually find the shortest functions more powerful and clear.
For example, the pipe operator in F#, which is nothing more than:
let (|>) x f = f x // so "x |> f |> g" reads left to right
yet it has huge benefits for the overall readability of the language.
At the end of the day, a simple measure like LOC can never capture readability, good or bad, and you get eaten by Goodhart's Law if you focus too much on it.
No one would argue otherwise. Indeed, you can trivially take any readable function and transform it into an unreadable one of exactly the same length. But this doesn't seem like a valid reason to dismiss specific findings of specific studies. Don't you think it's interesting that such research as we have contradicts the most often-repeated claim about this aspect of programming?
I don't know. I've learned over the years that there is always a study confirming or contradicting whatever point you want to make. "Beware The Man Of One Study".
I say that without even looking at those studies, which is perhaps unfair. But there are So Many Studies...
My personal experience is that when I was exposed to shorter, simple, and (so important!) well-named functions, my work became so much better. And that is now the school I subscribe to.
That's not - at all - to say you can't also find very good practices doing different things. But that's not where I found it.
Still, one study is an important piece of evidence to consider when all you had before was a gut feeling and no studies at all.
My personal experience differs from yours somewhat. I believe it's not the length or the number of methods that matters, but what language (i.e. abstraction) they create. You try to subdivide the function into functions that are a natural fit for the task being done, but no further. If you still end up with a long block of code - as you very well might - consider comments instead. A comment telling what the next block of code will do is kind of like an inlined function, except you don't have to jump around in the file and you don't lose the context. Much easier to read.
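A made-up sketch of what I mean, in Java (the function name, rules, and thresholds are all invented; each comment carries the name an extracted helper would have had):

    static int checkout(int[] quantities, int unitPriceCents) {
        // "validateQuantities": every quantity must be positive.
        for (int q : quantities) {
            if (q <= 0) throw new IllegalArgumentException("bad quantity: " + q);
        }

        // "computeSubtotal": plain sum of line totals.
        int subtotal = 0;
        for (int q : quantities) subtotal += q * unitPriceCents;

        // "applyVolumeDiscount": 5% off past 100 units (invented rule).
        int units = 0;
        for (int q : quantities) units += q;
        return units > 100 ? subtotal * 95 / 100 : subtotal;
    }

You read it top to bottom, never leaving the function, and each comment gives you the name without the jump.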
I used to write code where essentially every piece of code longer than 3-5 lines got broken out into its own private function. The amount of jumping I had to do when reading the code, and the amount of work maintaining and de-duplicating small private functions, was overwhelming.
When I was shown that you can break out a function that's only used once, just in order to name it (2005, or so), it was one of the greatest revelations in my career.
It also serves as a way to tell you what that code does, without you having to know details of how it does it, until the rare day when it's important.
But I only do it when that code is genuinely hard to follow, not because my function is "over 10 lines, and that's our policy".
agumonkey's mention of cyclomatic complexity in a parallel comment made me remember yet another realization wrt. breaking functions out: if you work with languages without local functions and start breaking your large function into smaller functions or private methods, you run into a readability/maintenance problem. The next time you open the file and start reading through all the little helper functions, you start wondering - who uses which one? How many places use any one of them?
With IDE support, answering the question for a single function is just a matter of a key combination, but that still adds friction when reading. I found that friction particularly annoying, and a file with lots of small helper functions tends to be overwhelming for me to read (it's one reason I like languages with local functions). Whereas if you didn't break the code out, and maybe only jotted a comment inline, you can look at it and know it's used only in this one place.
> I typically find it very difficult to understand complex functions
It seems to me like "complex" and "ability to understand" mean the same thing, so this phrase doesn't have much meaning.
It's difficult to define "ability to understand" / "complex" without using either of those words in the definition. For example, you mention lines of code, nesting and multiple concepts.
I tend to agree with your examples, though not necessarily about the lines of code. I've seen a single large function that represented an algorithm in a way that was easier to understand than an implementation that broke it up into tens of little functions. It made liberal use of comments to explain each section of code in the function. I believe its advantage was that when reading, you could simply scroll down the function line by line rather than having to jump all over the file.
Sure, all I mean is that I get frustrated and lose an unnecessary amount of time keeping a model of how the function works, and how its variables interact, in my head. I have ADHD, so my working memory isn't fantastic for these kinds of functions.
I suppose you could take my definition of complexity to be approximately cyclomatic complexity.
Yeah, but I would say it depends on context; clarity is very context dependent. Do what makes the code easiest to read, taking human limitations into account.
Sometimes splitting out code makes sense by making the underlying structure clearer; sometimes it makes it bothersome to find the actual important details. For instance, with logic that filters which jobs to run based on conditions, it might make sense to abstract each condition into its own class, to make the filtering logic clearer.
> Sometimes splitting out code makes sense by making the underlying structure clearer; sometimes it makes it bothersome to find the actual important details.
Yeah, that seems to be something I'll have to spend some time learning about. Right now I'm just mindlessly splitting off everything, and it kind of works, but it's annoying to navigate all over the place just to find some detail.
> It's very annoying to click on a method in my IDE and get shown the interface definition of that particular method; I'll then have to trace my way through a couple of files to find the concrete dependency.
The IDE should be able to show you the implementations. In Eclipse with Java, for example, you hold Ctrl and hover over a method call; in the dialog that appears you select "Go to implementation" instead of "Go to definition" (or similar), and Eclipse finds all classes that override that method and lets you jump straight to the specific overriding method.
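To illustrate why the two commands land in different places (a minimal sketch, hypothetical names):

    interface PaymentGateway {                     // "Go to Definition" lands here
        void charge(int cents);
    }

    class FakeGateway implements PaymentGateway {  // "Go to Implementation" finds this
        public void charge(int cents) { /* test double, does nothing */ }
    }

    class Checkout {
        void pay(PaymentGateway gateway) {
            gateway.charge(500);                   // Ctrl-clicking here
        }
    }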
I have a theory about why this functional atomization has become so popular: small design topics (functions, classes) are easy to talk about, so the various wannabe OOP gurus like Robert Martin keep talking about them the most, becoming more and more extreme, to the point that we end up with hundreds of one-line functions which can only really be put together in one way, so that they fulfill the arbitrary criterion of reading like prose.
This is an excellent example of local optimization - local design optimization - that ends up harming the overall design and maintainability. You've discovered this yourself, but still cling to the dogma of tiny functions.
I've never really understood why those particular 5 design principles became a sort of "top 5". Then I grokked dependency inversion and it really changed how I write code.
That article bashes (among other things) IOC containers and mocking frameworks. I absolutely hate Spring IOC and mockito. But dependency inversion is not dependency injection. It wasn't until I got sufficiently fed up with Spring being everywhere in my Java code that I figured out how to hide Spring as an implementation detail.
These are just a bunch of ideas that were cobbled together by Robert Martin and accidentally ended up as folklore in a world without rigor and order.
Kevlin Henney basically refuted the idea that SOLID has any meaning or use in software development.
SOLID is rather old, and never really grabbed me that much. Well, rather, I kinda agreed with it (but I had been in OOP for about a decade before it came along) for a bit, but no longer. Most of it is rather obvious, and I find the terms just buzzwords. "Dependency Injection" particularly drives me crazy, as we've been passing in references to objects forever, and suddenly we need a Special Term for it. What next, instead of pointers we call them Dependency Actors?
I also agree that I'm tired of tiny classes and tiny functions. I went on a rant to my coworkers the other day about them, after spending an afternoon figuring out how some code worked whose author fully believed in small functions.
Nothing makes 100 lines of code more readable than splitting them across 100 functions in 10 classes in 10 files! /s
I agree with another poster, though, that VisualWorks Smalltalk made it a lot more palatable. I'd honestly really like to try a system that treats a program as a database of functions rather than a directory structure of files. I don't think the directory structure adds anything that a tagging system couldn't do.
> "Dependency Injection" particularly drives me crazy, as we've been passing in references to objects forever
Might have been old news to you, but not to everyone. I have seen plenty of codebases which inject nothing and where all objects create their dependencies themselves. Lower-level code is especially susceptible to this - the amount of C code I've seen that made use of some kind of DI tends toward zero. And with that, often, also the amount of tests available for the codebase (with claims like: "This is not testable").
I find the term "Dependency Injection" also still unnatural and academic, but I really like the core idea, which isn't that hard to teach (pass dependencies instead of creating them yourself).
Whether one also needs DI frameworks is another topic, on which I have no strong opinion.
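A minimal Java sketch of that core idea (invented names, no framework involved):

    import java.time.Clock;

    // Creates its own dependency; a test can't control time:
    class HardwiredJob {
        private final Clock clock = Clock.systemUTC();
        boolean isDue(long deadlineMillis) { return clock.millis() >= deadlineMillis; }
    }

    // Takes the dependency as a constructor argument instead:
    class InjectedJob {
        private final Clock clock;
        InjectedJob(Clock clock) { this.clock = clock; }
        boolean isDue(long deadlineMillis) { return clock.millis() >= deadlineMillis; }
    }

    // Production passes Clock.systemUTC(); a test passes Clock.fixed(...).

That's the whole idea; everything beyond this is framework decoration.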
This. It makes it sound like more than it is, and while the basic idea should be taught (briefly), it isn't some amazing concept worthy of enshrining in some holy Acronym.
The complexity of tests tends to go up faster than the complexity of the functions under test.
One of the thought experiments I use when teaching testing is to remind people that nobody wants to read your code. The next person to read your test is probably going to be reading it because it's failing. They are, in effect, already having a bad day. Don't make it worse. They will assign that negative emotion to you.
This has started to affect all of my design thinking. I’m probably only reading your code because I’m trying to hunt down a bug (or my attempt to add new functionality has failed spectacularly). Every time I run into a function that contains code smells, I have to stop long enough to figure out if those smells are the bug I’m looking for or something else. In a particularly bad codebase, like the one I inhabit now, by the time I finally find what I’m looking for, I no longer recall why I was looking for it in the first place.
This is not good code. It’s shitty code written by people who aren’t emotionally secure enough to write straightforward code. Or are running from one emergency to the next, all day every day. Or both. Or are young developers just copying what they see around them.
All interesting points, the reason I hate class clutter is especially because of debugging/understanding. I don't want to have to spend a long time building a mental hierarchy of classes and what they do just to figure why a link doesn't have the right URI. Essentially, people use classes to achieve magic code. Code that's very, very convenient when it works, but is monstrous to debug when it doesn't. Especially if the code isn't something you're familiar with.
This is a subjective opinion. I’ve had this debate with countless coworkers as I take the exact opposite stance as you. Neither of us is correct, neither of us is wrong.
The problem with declaring simplicity and clarity to be the final goal is that it's not an objective truth.
This is the very issue I have with a lot of SE dogmas and those that subscribe to one particular set, not just the one described here. Objective truth is an afterthought in most cases.
Most of these dogmas are anecdotal at best, with little to no empirical backing in their surrounding context, but people will stand by certain approaches as if they were well-tested theories like general relativity, Newtonian mechanics, or QED.
> The concept of clarity is why I don’t like the consequence of SOLID, where you have lots of tiny classes.
tl;dr - Context!
The problem with a lot of the principles developed from the Smalltalk days is that different environments have different cost/benefit results for different tasks. This means that many practices which are awesome in a dynamic, programmer-is-god-of-runtime environment like Smalltalk are going to bog down in other environments. (see below)
> It's easier to understand complex functions within a simple class structure than the other way around: jumping between multiple files and classes incurs a high comprehension cost, whereas even a complex function typically fits on your screen.
What this reveals about the above observation is that reading and conceptualizing large chunks of the code's operation has a higher cost/benefit in the accustomed environment. My day job is in C++, and because of the compile times, that's certainly the case in that environment. However, in an environment like VisualWorks Smalltalk, where the debugger is 10X nimbler and has literally made students in my Smalltalk classes cry out, "This debugger is GOD!", the cost/benefit trade-offs are very different. Most of the time, code is read and edited in the debugger, and flipping between multiple classes in that context happens automatically by navigating. Also, the work of navigating relationships in the object model in a clean code base is of an entirely different order of magnitude: instead of O(2^n), where n is the distance in references, it's O(n), because there's no fan-out across different kinds of references. It's all done by "message sends", i.e. calls of a method.
A lot of damage over the past 3 decades has been done by very smart OO people who tried to transplant methodologies from Smalltalk to C++ and Java. If one wants to be a step above, then look at the largest 7 or so cost/benefit trade-offs that affect the methodology. Then adjust accordingly.
Similarly, probably the main reason I really like JS... I can apply different paradigms to different areas of code where a given approach makes more sense in terms of understanding or cognitive overhead. JS debugging is relatively nice as well, though in some cases the tooling leaves a bit to be desired.
When I'm in C# by contrast, I often get irritated that I can't just have a couple of functions to reuse; no, I have to create a static class, ideally in a namespace, and do a bit more. I do try to minimize the complexity of my classes and separate operational classes that work on data from data classes that hold values.
I wish we named all kinds of ECS-es. It's a total confusion of terms now. Your view is one kind of ECS. Another view is "structuring data to be cache-friendly". Another is "data feels better when modeled and accessed in relational fashion". Yet another is "composition over inheritance taken up to 11". People mix and match these views, which is why every single article on ECS seems to be talking about something different from every other article.
> Your view is one kind of ECS. Another view is "structuring data to be cache-friendly". Another is "data feels better when modeled and accessed in relational fashion". Yet another is "composition over inheritance taken up to 11".
I believe you can, though I've not been there myself. But all the articles I've read and all the ECS implementations I've studied so far usually focus on one, maybe two of those aspects at a time, so every one looks different from another.
(In my current side project, I have relaxed performance requirements, so I'm experimenting with taking the relational aspect up to 11.)
Maybe I'm not so sure what you mean by "taking the relational aspect up to 11." My side project is in golang, so composition over inheritance isn't really an issue. I guess that leaves me with just 2 of the aspects.
Classes and methods are abstractions. They hide code. It's like putting your dishes in the cupboard instead of leaving them in a mess on the counter. There is a small initial payment to learn where they are located, but for the rest of time you benefit from the mental space of having them out of mind until you need them.
With modern IDEs, going to class/method definitions is a breeze. In my experience people who write big walls of code are often those who don't know how to leverage modern IDEs.
What you should really aim for is a simple class structure and simple functions within. Complexity at either level is not your friend and certainly not something the next developer to maintain the code will thank you for. That’s the person you should be designing for.
> whereas even a complex function typically fits on your screen
Are the complex functions unit-testable? Do they depend on other units of work or other libraries? Do they have multiple responsibilities? You are probably already following most of what SOLID entails.
I find it funny that HN consistently bashes SOLID; I feel like SOLID has been misrepresented. They are _guidelines_ for development, they do not dictate everything. They might influence or support a decision.
Those who bash SOLID: have you worked on gigantic projects that are in active development for decades? I advocate for SOLID because I have witnessed its great benefits first-hand. I have built and worked on plenty of projects that apply these principles, and I have seen wonderful open-source projects that embrace them. And of course I have seen adverse effects from it (e.g. your linked article complains of innumerable, nonsensical interfaces), but that is mostly due to inexperienced developers who don't get it. And of course there are some devs/architects who go overboard, introducing premature abstractions, etc. To those people I say YAGNI. The point being: following SOLID doesn't guarantee nice design. It is easy to produce shit code following SOLID, but it is even easier without it.
At the end of the day, there are trade-offs. I think using SOLID as development guidelines produces a scalable codebase divided cohesively into units of work.
> I find it funny that HN consistently bashes SOLID; I feel like SOLID has been misrepresented. They are _guidelines_ for development, they do not dictate everything. They might influence or support a decision.
> And of course I have seen adverse effects from it...but that is mostly due to inexperienced developers who don't get it.
Whether SOLID is seen to pay off in the medium term, or the long term, or the very long term, is dependent on environment. In some environments, the payoff is apparent sooner. In others, it's only longer term. This is why inexperienced developers may not get it.
Of course, that brings up the question, "How can we better communicate the benefits?" Can we document and present those, from the actual history of the project?
EDIT: To better relate this to my other comment in this thread: a problem with SOLID in environments where there's a lot of bookkeeping for the compiler's sake, and where compile times slow down the edit-test cycle, is that less experienced developers are going to notice "Hey, this stuff makes me flip back and forth between files!" first. If they never see the benefits, they're naturally going to conclude it's a bad thing.
I'm a big fan of SOLID too. It actually makes a great deal of sense.
Re: your comment about "And of course I have seen adverse effects from it (eg. your linked article complains of innumerable, non-sensical interfaces) but that is mostly due to inexperienced developers that don't get it"
While this is true, there is a deeper implication here. Assume programming talent is normally distributed. Now ask yourself: above what percentile do you have to be in that distribution to truly grasp the how/why of SOLID and to wield it to solve problems? And below what percentile do you just go crazy creating nonsensical interfaces and thousands of awfully named classes with single responsibilities such as "CustomerCommandMapEmbelisherConverter"?
The real problem is that code bases tend to be horrible (also normally distributed!) because a lot of talent doesn't meet the bar and can't actually produce programs that aren't rubbish. In any org you'll find the quality of the code base is somewhere on a normal distribution. And you will find all the engineers somewhere on a normal distribution. You'll have a couple of brilliant people, a couple of horrible people, a lot of average people.
The only time you truly see exceptional code bases that everyone stops and goes "wow, this is nice!" are the rare times the stars aligned.
SOLID is contradictory and flawed when used with Object Oriented Programming.
1. Single-responsibility Principle. Objects should have only one responsibility.
Objects by default often have two responsibilities: changing their own state and holding their own state.
2. Open-closed Principle. Objects or entities should be open for extension, but closed for modification.
The very concept of a setter or update method on an object is modification. Primitive methods promoted by OOP immediately violate this principle.
3. Liskov substitution principle. Subtypes can replace parent types.
This principle represents a flaw in OOP typing. In mathematics all types should be replaceable by all other types in the same family, otherwise they are not in the same type family. The fact you have the ability to implement a non-replaceable subtype that the type checker identifies as correct means that OOP or the type checker isn't mathematically sound... or in other words the type system doesn't make logical sense.
4. Interface Segregation Principle. Instead of one big interface, have functions depend on smaller interfaces.
I agree with this principle, though many composable types lead to high complexity. I don't think it's an absolute necessity.
5. Dependency Inversion Principle. High-level modules must not depend on low-level modules; both should depend on abstractions.
This is a horrible, horrible design principle. Avoid runtime dependency injection always. Modules should not depend on other modules or abstractions; instead they should just communicate with one another.
If you are creating a module that manipulates strings, do not create the module in a way such that it takes a Database interface as a parameter and then proceeds to manipulate whatever the database object outputs.
Instead, create a string manipulation module that accepts strings as input and outputs strings as well. Have the IO module feed a string into the input of the string manipulation module. Function composition over dependencies... do not build dependency chains.
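In Java-ish terms, a minimal sketch of the shape I mean (all names invented):

    // The shape argued against: the module drags a dependency along.
    //   String manipulate(Database db) { return transform(db.fetchRow()); }

    // The shape argued for: pure input -> output, composed by the caller.
    class StringOps {
        static String shout(String s) { return s.toUpperCase() + "!"; }
    }

    class Program {
        public static void main(String[] args) {
            String fromIo = "hello";                     // whatever the IO module produced
            System.out.println(StringOps.shout(fromIo)); // IO output piped in: no dependency
        }
    }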
It seems you interpreted SOLID with a functional mindset and then turned around and found OO lacking.
1. Single-Responsibility Principle: Whether or not an object can change its own state has nothing to do with how many responsibilities it has. Even a pure function that takes a single argument can have multiple responsibilities. To give a silly example, a spell-check-and-update-wordcount function/object would violate SRP.
2. Open-Closed Principle is about modification of the code. It means the function/object should do its thing so well, you never have to touch its code. But if you want to modify the behavior of your program you should have a way to insert your new function/object so the new behavior is added.
3. Liskov Substitution Principle: "the type checker isn't mathematically sound" - no type checker is mathematically sound; that's an obviously correct statement, since even math itself cannot be automatically proven. However, what LSP basically warns against is saying "a square is a special type of rectangle". It's not, because if you take this 'rectangle' and multiply its width by 2 and its height by 3, you either end up with a 'not a square', which is unexpected, or you don't end up with 2x width by 3x height, which is also unexpected.
4. Interface Segregation Principle: agreed
5. Dependency Inversion Principle: "modules should just communicate with one another" is exactly what DIP warns against. Your monthly-activity-calculator shouldn't 'just' communicate with the user-database module. It should take a user-collection interface and let another part of the program that is responsible (SRP!) for setting up that system provide it. That way this program-setup can decide based on configuration / the environment to pass it a redis-user-collection instead of an oracle-user-database.
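A minimal Java sketch of that last point (hypothetical names, matching the example above):

    import java.util.List;

    interface UserCollection {                    // the abstraction both sides depend on
        List<String> activeUserIds(int month);
    }

    class MonthlyActivityCalculator {
        private final UserCollection users;
        MonthlyActivityCalculator(UserCollection users) { this.users = users; }
        int activeCount(int month) { return users.activeUserIds(month).size(); }
    }

    // The setup code (whose single responsibility is wiring) decides:
    //   new MonthlyActivityCalculator(new RedisUserCollection());
    //   new MonthlyActivityCalculator(new OracleUserDatabase());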
1. Depending on what layer you analyze things at, in OOP it may not be possible to maintain single responsibility. In OOP, SOLID refers to the business layer. However, with FP you can take single responsibility all the way down to types: a function returns one type. It interfaces with the universe through a single type, and that is one responsibility.
2. Why does the open-closed principle only have to apply to code? What if it could apply to everything? You gain benefits when you apply this concept to code; what is stopping the benefits from transferring over to runtime structures? SOLID for OOP is defined in an abstract, hand-wavy way; for FP many of those guidelines become concrete laws of the universe.
> No type checker is mathematically sound.
3. A type checker proves type correctness. Languages can go further with proof assistants like Coq or Agda; those are mathematically sound. Your square example just means that types shouldn't be defined that way - the type checker isn't compatible with that method of defining types.
4. -
5. I highly disagree. There should only be communication between modules NEVER dependency injection. The monthly activity module should not even accept ANY module, or module interface as a parameter. It should only accept the OUTPUT of that module as a parameter. This makes it so that there are ZERO dependencies.
For example, don't create a Car object that takes in an Engine interface. Have the engine output joules as energy and have the car take in joules to drive. Function composition over dependency injection. (Also think about how much easier it is to unit test Car without a mock engine - see the sketch after this comment.)
If you get rid of dependency injection, you get rid of the dependency inversion principle. DIP builds upon a very horrible design principle which makes the entire principle itself horrible.
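A minimal Java sketch of that Car/engine composition (all names and figures invented):

    // Composition instead of injection: Engine produces joules,
    // Car consumes joules; neither type mentions the other.
    class Engine {
        long burnFuel(long liters) { return liters * 34_000_000L; } // J per liter, invented figure
    }

    class Car {
        double drive(long joules) { return joules / 2_000_000.0; }  // km per joule budget, invented
    }

    class Trip {
        public static void main(String[] args) {
            Engine engine = new Engine();
            Car car = new Car();
            System.out.println(car.drive(engine.burnFuel(10)) + " km"); // the composition point
            // Car's unit test just passes a number; no mock Engine needed.
        }
    }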
...I agree with your comments on Liskov Substitution Principle, that the fact it's even possible is a weakness in OOP type systems.
But the rest of your examples are very far off base.
I somewhat agree that Single Responsibility is perhaps not quite right, as "single" isn't always desired, appropriate, or possible. But the general philosophy is absolutely on point. It's an instruction to carefully consider whether a component should be responsible for something or not, and if not, to think about where else that responsibility should lie. It gets pretty gnarly when you see things that just have way too many responsibilities. They become unwieldy. An object being responsible for holding and manipulating its own state isn't what I would class as a responsibility. That is below the line. That's thinking far too granularly about what a responsibility is.
Same for open/closed. There is a great picture representing open/closed: a human body (the closed system) that you can put different layers of clothes on (open for extension), which I think beautifully captures the essence of the principle. When this is done right, it's an absolute blessing. You mostly find it in frameworks that have a life-cycle, and at certain points (say, before anything happens, or after everything has happened) they provide an overridable method with no behavior. That method allows you to insert logic the framework designers didn't think to cater for, while keeping the framework life-cycle intact.
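That life-cycle shape, as a minimal Java sketch (names invented):

    // A framework life-cycle with a deliberately empty hook:
    abstract class JobRunner {
        final void run() {                  // the life-cycle itself is closed
            setUp();
            beforeExecute();                // ...but open for extension here
            execute();
            tearDown();
        }
        void beforeExecute() {}             // empty by default
        abstract void execute();
        private void setUp() { /* framework bookkeeping */ }
        private void tearDown() { /* framework bookkeeping */ }
    }

    class MyJob extends JobRunner {
        @Override void beforeExecute() { System.out.println("warming cache"); }
        @Override void execute() { /* the actual work */ }
    }

You extend behavior without ever touching run().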
The example of dependency inversion just doesn't make sense. If you're creating a string manipulation library it should take strings and nothing else. It doesn't need anything else. If you're creating a string manipulation library in the first place you probably should just use the standard library. Maybe that's just a bad example, but I still don't agree with your sentiment with always avoid runtime dependency injection.
Forgetting the string manipulation example - I'm curious what you have in mind when you say "modules should instead communicate with one another". How does this communication take place? What language are we talking about and what does some code look like? Mostly what comes to mind when I think of that are either newing up an instance of a class or calling a static method, or perhaps making some kind of http/tcp request?
For the first two principles... in OOP they are just guidelines operating at the layer of business logic. There are programming languages/styles that implement these "principles" as laws all the way down to primitive components.
See what I wrote about composition. I also have an example about a Car and engine class later in the thread.
Function Composition > Dependency Injection.
> ...I agree with your comments on Liskov Substitution Principle, that the fact it's even possible is a weakness in OOP type systems.
It's not actually a weakness in the type system; it's a weakness in the language. The language should never allow such types to be constructed. Basically, inheritance is not compatible with the type checker: get rid of inheritance and you get rid of this problem.
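For readers following along, the square/rectangle case from upthread in compiling Java (the compiler accepts all of it; the substitution contract breaks anyway):

    class Rectangle {
        protected int w, h;
        void setWidth(int w)  { this.w = w; }
        void setHeight(int h) { this.h = h; }
        int area() { return w * h; }
    }

    class Square extends Rectangle {        // type-checks, breaks substitution
        @Override void setWidth(int w)  { this.w = w; this.h = w; }
        @Override void setHeight(int h) { this.w = h; this.h = h; }
    }

    class Demo {
        static int stretch(Rectangle r) {
            r.setWidth(2);
            r.setHeight(3);
            return r.area();                // 6 for Rectangle, 9 for Square
        }
    }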
> I think using SOLID as development guidelines produces a scalable codebase divided cohesively into units of work.
The problem with object-oriented programming is not any of these things. The problem is that an object is a bad choice for a unit of work. A good analogy is bricks and construction: if a brick represents a unit of work used to construct a wall, object-oriented programming gives you bricks with jagged faces.
This is why, no matter how deeply you follow these guidelines, you will always have to build custom "interface bricks" (aka glue code) to compose jagged bricks together.
GoLang solves the problem with objects by getting rid of objects altogether, but the procedural function it uses as its primitive of composition is also jagged in a way: GoLang procedures do not compose very well.
There is a deeper primitive that programmers should model their code around that gets rid of the usage of misshapen bricks as the building block of programs. Bricks that compose with other bricks without glue. I leave it to you to find out what this primitive is, as you use it everyday to build misshapen objects.
The original article talks about readability and simplicity. It does not talk about compose-ability and modularity. Both of the aforementioned traits have a strange relationship with readability and clarity. More modularity does not necessarily mean less readability in all cases but it certainly changes readability.
> Those who bash SOLID: have you worked on gigantic projects that are in active development for decades?
Yes, and by far the worst part of them is the dependency hell problem of a sufficiently mature front-end. It gets sand in your cornflakes during development, testing, and debugging.
Imagine you're writing a front-end in this mature codebase. What injection bindings do you need to instantiate a FooUIWidget, which contains a BarUIWidget and BazUIWidget, and a few new data types, relevant to the business logic of FooFeature?
Who the fuck knows! You have a rat's nest of nested dependencies, you have no idea what part of the system owns which data change, or what cascading effects that data change has. Oh, and when you decide to move FooUIWidget out of ParentUIWidget into UncleUIWidget, good luck figuring out which dependencies it needs, which need to be removed from Parent, which need to be added to Uncle, and which need alternative bindings added (because Uncle already provides them, but they are not what Foo needs - your code compiles and gets no run-time dependency injection errors, but your values are silently bound wrong behind the scenes[1]).
Unless, of course, you do something sensible, and instead of having each bit of your system depend on 20 things provided by dependency injection, just build the bloody thing right the first time, by using event listeners and MVVM.
[1] Oh, and of course, neither your compiler, nor your DI framework is mathematically capable of telling you that half of the dependencies you're providing for Parent are no longer used for anything. Go get your coal miner's hard-hat, finish up your will, sign the waiver about black lung, and go delving through your dependencies.
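For contrast, the event-listener shape I mean, as a toy Java sketch (hypothetical; a real bus also needs unsubscription, ordering, threading, etc.):

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.function.Consumer;

    // Widgets publish and subscribe by event type instead of
    // holding injected references to one another.
    class EventBus {
        private final Map<Class<?>, List<Consumer<Object>>> handlers = new HashMap<>();

        <T> void subscribe(Class<T> type, Consumer<T> handler) {
            handlers.computeIfAbsent(type, k -> new ArrayList<>())
                    .add(event -> handler.accept(type.cast(event)));
        }

        void post(Object event) {
            handlers.getOrDefault(event.getClass(), List.of())
                    .forEach(h -> h.accept(event));
        }
    }

    // FooUIWidget just posts an event; whoever cares subscribes.
    // Moving Foo from ParentUIWidget to UncleUIWidget changes no bindings.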
Front-end (web?) is a more narrow domain than what I was speaking to. It sounds like you are pointing out specific issues that you encountered when working on a particular project.
How is SOLID responsible for these issues? The acronym represents guidelines.
What you are describing sounds awful. IoC can get nasty when developers are inexperienced and off-the-leash.
Keep in mind that everyone is ignorant. We all have different experiences with different technologies on different codebases.
> Front-end (web?) is a more narrow domain than what I was speaking to.
It's the domain where proper architecture matters the most, because it's hard to get it right.
> It sounds like you are pointing out specific issues that you encountered when working on a particular project.
If by 'particular project', you mean every single FE project that I've worked on, that made unopinionated use of dependency injection, sure.
> How is SOLID responsible for these issues?
'LI' doesn't add any value for these problems (you don't use all that much inheritance, or define very many interfaces, when working on front-ends), and 'D' is actively harmful, because it paves the road to dependency injection. I find posting events to a bus a lot easier to deal with than a spaghetti of objects interacting with injected dependencies.
> [1] Oh, and of course, neither your compiler, nor your DI framework is mathematically capable of telling you that half of the dependencies you're providing for Parent are no longer used for anything. Go get your coal miner's hard-hat, finish up your will, sign the waiver about black lung, and go delving through your dependencies.
Yes, they are?
If it's not referenced, it's not needed. That's pretty straightforward?
The compiler can tell if something isn't referenced - but it can't tell if a provider that goes into a DI framework is never invoked.
The DI framework can tell (at run-time) that you're asking for something that is missing a provider. It, quite obviously can't tell (at run-time) that you're never going to ask for something in the future.
> The compiler can tell if something isn't referenced - but it can't tell if a provider that goes into a DI framework is never invoked.
It sort of can, though. It depends on the circumstance. If there is an interface with one implementation (or even multiple implementations), and that interface isn't referenced anywhere, nor are any of its implementations, then you can reason that those dependencies might be provided to the DI container but will never be requested, as they can't be. In that case - delete them.
In the case where you have one interface which has multiple implementations and the interface is referenced, I agree. Nothing will tell you if there is one implementation sitting there entirely unused forever.
If you wanted to solve that problem you probably could. In practice I don't find it a big issue.
1. The interface may not be referenced within the particular scope of an injector. Scopes (and the modules that make up an injector) are determined at run-time, so the compiler has no idea whether or not the injector that generates ParentWidget needs FooModule, or not. As long as an object sharing an interface with something that FooModule produces is injected anywhere else in your application, for any reason, you can't statically figure out that you should remove it from the injector that creates ParentWidget.
2. The interface is referenced, the implementations might not even be bound to it, depending on run-time conditions.
Even trivially scoped dependency injection is a fantastic way to make it impossible for your compiler, and very hard for a human, to reason about your dependencies.
> It's easier to understand complex functions within a simple class structure than the other way around: jumping between multiple files and classes incurs a high comprehension cost, whereas even a complex function typically fits on your screen.
And reading some more about it, I found this good article: http://qualityisspeed.blogspot.com/2014/08/why-i-dont-teach-...