So, getters? We need X, so we call getX(). getX needs A and B, so it calls getA() and getB(). And so on, until getSomething() just reads input. Please correct me if I'm reading this wrong.
This is the Inversion of Control pattern, where you inject A and B into X via some external mechanism (such as Dependency Injection). The advantage you gain is that you can swap out A and B for objects with different behavior (such as mocks) without needing to change the implementation of X. This leads to an architecture with a high degree of composability and a low degree of coupling, which is great for large systems.
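To make that concrete, here's a minimal sketch of constructor injection (the interface and class names are mine, purely illustrative): X receives A and B from the outside instead of fetching them itself, so a test can hand it stubs or mocks without changing X at all.

```java
// Hypothetical collaborators; names are illustrative only.
interface A { int readA(); }
interface B { int readB(); }

// X declares what it needs; it never constructs A or B itself.
class X {
    private final A a;
    private final B b;

    X(A a, B b) {          // dependencies are injected from outside
        this.a = a;
        this.b = b;
    }

    int compute() {
        return a.readA() + b.readB();
    }
}

class Demo {
    public static void main(String[] args) {
        // Production wiring: real implementations are chosen externally.
        X x = new X(() -> 40, () -> 2);
        System.out.println(x.compute()); // 42

        // Test wiring: swap in stubs/mocks without touching X at all.
        X underTest = new X(() -> 0, () -> 0);
        System.out.println(underTest.compute()); // 0
    }
}
```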
When, besides mocking, would you want to swap out a dependency using an overarching dependency provider rather than an intermediary processor/getter with conditional delegation?
Test/dev environments are another case I can think of. For example, you might want a different dependency in a dev environment, but you don't want to write tedious "if (env == dev) return foo else return bar" code in your getter, which also couples the hosting class to the environment it's running in.
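As a rough sketch (the environment variable, sink classes, and service are all hypothetical), the "which environment am I in?" decision happens exactly once at the composition root, and the hosting class never sees it:

```java
// Hypothetical dependency with a real and a dev/test flavor.
interface MetricsSink { void record(String event); }

class StatsdSink implements MetricsSink {
    public void record(String event) { /* send to the real metrics backend */ }
}

class ConsoleSink implements MetricsSink {
    public void record(String event) { System.out.println("metric: " + event); }
}

// The hosting class never asks which environment it is in.
class CheckoutService {
    private final MetricsSink metrics;
    CheckoutService(MetricsSink metrics) { this.metrics = metrics; }
    void checkout() { metrics.record("checkout"); }
}

class Main {
    public static void main(String[] args) {
        // The "if (env == dev)" branch lives here, at wiring time, not in a getter.
        boolean dev = "dev".equals(System.getenv("APP_ENV"));
        MetricsSink sink = dev ? new ConsoleSink() : new StatsdSink();
        new CheckoutService(sink).checkout();
    }
}
```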
Sure they are. Instead of the hosting object determining which dependency to return, some external mechanism determines it. IoC doesn't necessarily apply only to initialization.
No. It solves the problem of having to reason about data dependencies between functions at scale, typically in the service of doing the minimum amount of work required to get a certain result. As "Steve Wampler" points out in the comments, it's a bit like Inversion of Control / Dependency Injection, except for transient data between functions as opposed to services, connection pools, etc.
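A toy sketch of what "transient data between functions" might look like (my own example, not the article's design): each derived value declares the values it's computed from, and asking for one output only triggers the work it actually depends on.

```java
import java.util.function.Supplier;

// Minimal memoized lazy value: computed at most once, on demand.
class Lazy<T> {
    private Supplier<T> supplier;
    private T value;
    private boolean computed;

    Lazy(Supplier<T> supplier) { this.supplier = supplier; }

    T get() {
        if (!computed) {
            value = supplier.get();
            computed = true;
            supplier = null; // release upstream references once computed
        }
        return value;
    }
}

class Pipeline {
    public static void main(String[] args) {
        // Leaf inputs.
        Lazy<Integer> a = new Lazy<>(() -> { System.out.println("read a"); return 40; });
        Lazy<Integer> b = new Lazy<>(() -> { System.out.println("read b"); return 2; });
        Lazy<Integer> c = new Lazy<>(() -> { System.out.println("read c"); return 999; });

        // Derived values declare their data dependencies explicitly.
        Lazy<Integer> x = new Lazy<>(() -> a.get() + b.get());
        Lazy<Integer> unusedPath = new Lazy<>(() -> c.get() * 2);

        // Asking for x computes a and b, but never touches c: minimum work for the result.
        System.out.println("x = " + x.get()); // prints "read a", "read b", then "x = 42"
    }
}
```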
My understanding from briefly skimming the article is that they've formed a "composition" of service orchestration to get from an input to a particular output. From a product standpoint, a particular "experience" can be broken down into a set orchestration of services (depicted as input/output functions). This lets you "tweak" your experience/product by mixing up orchestrations, "stitching" together different services (much like assembling Lego blocks) via some rule engine. Kinda like "dependency injection", but from a system-architecture point of view as opposed to just application code architecture.
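To illustrate the Lego-block idea (again my own hedged sketch, not the article's actual API), each service is just an input-to-output function, and a trivial "rule engine" picks which ones get chained for a given request:

```java
import java.util.List;
import java.util.function.UnaryOperator;

class Orchestrator {
    // Each "service" is just a function from content to content (names made up).
    static UnaryOperator<String> decode   = s -> s + " -> decoded";
    static UnaryOperator<String> subtitle = s -> s + " -> subtitled";
    static UnaryOperator<String> encode4k = s -> s + " -> encoded(4k)";
    static UnaryOperator<String> encodeSd = s -> s + " -> encoded(sd)";

    // A toy "rule engine": choose the orchestration based on the request.
    static List<UnaryOperator<String>> plan(boolean wants4k) {
        return wants4k
            ? List.of(decode, subtitle, encode4k)
            : List.of(decode, encodeSd);
    }

    public static void main(String[] args) {
        String result = "source";
        for (UnaryOperator<String> step : plan(true)) {
            result = step.apply(result);   // stitch the chosen services together
        }
        System.out.println(result);
        // source -> decoded -> subtitled -> encoded(4k)
    }
}
```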
I don't think you can expect to gain that understanding from this piece. It's part of a series, and presumably forthcoming posts will explain how the rules engine drives which components are selected to process specific content requests.
This is a far simpler problem than SAT. There aren't enough details in this post, but one would expect that 'processors' would not impede one another. Then it's dependency resolution, of the sort that package managers do every day.
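For what it's worth, a minimal sketch of that kind of dependency resolution (a plain topological sort; the processor names are made up), assuming the processors really are independent apart from their declared inputs:

```java
import java.util.*;

class Resolve {
    // Each processor lists the processors whose output it needs (hypothetical graph).
    static Map<String, List<String>> deps = Map.of(
        "render", List.of("layout", "assets"),
        "layout", List.of("parse"),
        "assets", List.of("parse"),
        "parse",  List.of()
    );

    // Depth-first topological sort: dependencies are emitted before dependents.
    static void visit(String node, Set<String> done, Set<String> inProgress, List<String> order) {
        if (done.contains(node)) return;
        if (!inProgress.add(node)) throw new IllegalStateException("cycle at " + node);
        for (String dep : deps.getOrDefault(node, List.of())) {
            visit(dep, done, inProgress, order);
        }
        inProgress.remove(node);
        done.add(node);
        order.add(node);
    }

    public static void main(String[] args) {
        List<String> order = new ArrayList<>();
        visit("render", new HashSet<>(), new HashSet<>(), order);
        System.out.println(order); // e.g. [parse, layout, assets, render]
    }
}
```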