No. It solves the problem of having to reason about data dependencies between functions at scale, typically in the service of doing the minimum amount of work required to get a certain result. As Steve Wampler points out in the comments, it's a bit like Inversion of Control/Dependency Injection, except for transient data flowing between functions rather than for services, connection pools, etc.
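To make that concrete, here's a minimal sketch (not the article's actual implementation; all names here are invented) of what "dependency injection for transient data" can look like: each step declares the named values it needs, and a resolver walks the dependency graph, computing only the chain required for the requested output.

```python
from typing import Callable

class Resolver:
    """Registry of producer functions keyed by the named value each one produces."""

    def __init__(self):
        self.producers: dict[str, tuple[Callable, tuple[str, ...]]] = {}

    def provides(self, output: str, needs: tuple[str, ...] = ()):
        """Decorator: register fn as the producer of `output`, depending on `needs`."""
        def register(fn: Callable):
            self.producers[output] = (fn, needs)
            return fn
        return register

    def resolve(self, output: str, cache: dict | None = None):
        cache = {} if cache is None else cache
        if output not in cache:
            fn, needs = self.producers[output]
            # Recurse only into declared dependencies: producers not on the
            # path to the requested output never run (the "minimum work" part).
            args = [self.resolve(n, cache) for n in needs]
            cache[output] = fn(*args)
        return cache[output]

r = Resolver()

@r.provides("user_id")
def user_id():
    return 42

@r.provides("profile", needs=("user_id",))
def profile(uid):
    return {"id": uid, "name": "Ada"}

@r.provides("greeting", needs=("profile",))
def greeting(p):
    return f"Hello, {p['name']}!"

print(r.resolve("greeting"))  # runs user_id -> profile -> greeting; nothing else
```

The point is that `greeting` never has to know how a profile gets built; the framework wires the data path, which is what makes the dependencies tractable at scale.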
My understanding from briefly skimming the article is that they have forced a "composition" of service orchestration to get from an input to a particular output. From a product standpoint, a particular "experience" can be broken down into a fixed orchestration of services (depicted as input/output functions). This lets you tweak your experience/product by mixing up the flavor of orchestrations, "stitching" different services together (much like assembling Lego blocks) via some rule engine. Kinda like dependency injection, but at the system-architecture level rather than just within application code.
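As a hedged illustration (the services, rules, and request shape below are all made up, not taken from the article), a rule engine picking among orchestrations might look roughly like this:

```python
from typing import Callable

Service = Callable[[dict], dict]

def fetch_article(ctx: dict) -> dict:
    ctx["article"] = {"title": "Hello"}
    return ctx

def add_full_body(ctx: dict) -> dict:
    ctx["article"]["body"] = "Full text..."
    return ctx

def add_paywall(ctx: dict) -> dict:
    ctx["article"]["body"] = "Subscribe to read more."
    return ctx

# The "rules engine": predicates over the request, each mapped to an
# orchestration (an ordered list of services to stitch together).
RULES: list[tuple[Callable[[dict], bool], list[Service]]] = [
    (lambda req: bool(req.get("subscriber")), [fetch_article, add_full_body]),
    (lambda req: True,                        [fetch_article, add_paywall]),  # default
]

def orchestrate(request: dict) -> dict:
    for matches, pipeline in RULES:
        if matches(request):
            ctx = dict(request)
            for service in pipeline:  # run the selected services in order
                ctx = service(ctx)
            return ctx
    raise ValueError("no rule matched")

print(orchestrate({"subscriber": True})["article"]["body"])   # Full text...
print(orchestrate({"subscriber": False})["article"]["body"])  # Subscribe to read more.
```

Swapping the product experience is then just editing the rule table, not the services themselves.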
I don't think you can expect to gain that understanding from this piece. It's part of a series, and presumably forthcoming posts will explain how the rules engine drives which components are selected to process specific content requests.