Not just routing. When a plane or truck arrives and is unloaded, thousands of packages have to be scanned, which triggers a cascade of messages to different systems. In addition to the routing and whatnot, customs declarations might have to be sent, track-and-trace gets updates (both from the arrival scan and from customs), there's billing, etc. All this flurry of activity happens within an hour or so after the plane lands, and then it quiets down until the next plane or truck.
I could easily see FedEx saving money by dynamically scaling capacity to track the arrival of their planes and trucks.
May I point you to the DynamoDB docs? Costs scale along two dimensions: storage and activity. A one-entry table is essentially free if you use on-demand capacity instead of provisioned RCUs/WCUs.
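To make that concrete, here's a rough boto3 sketch (the table name and key are made up for illustration); with on-demand billing you pay per request and per GB stored, so a tiny, mostly idle table costs next to nothing:

    # Hypothetical sketch: create an on-demand DynamoDB table with boto3.
    # "package-events" and "package_id" are invented names for illustration.
    import boto3

    dynamodb = boto3.client("dynamodb", region_name="us-east-1")

    dynamodb.create_table(
        TableName="package-events",
        AttributeDefinitions=[
            {"AttributeName": "package_id", "AttributeType": "S"},
        ],
        KeySchema=[
            {"AttributeName": "package_id", "KeyType": "HASH"},
        ],
        # On-demand mode: no provisioned RCUs/WCUs, you are billed per request.
        BillingMode="PAY_PER_REQUEST",
    )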
Ever heard of the traveling salesman problem? That one is worse than linear; it's factorial. Granted, that is probably overkill for most people, and my team (not FedEx) just does N×N, so it is only quadratic, but it is definitely not constant, nor even linear.
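Just to make those growth rates concrete, a toy sketch in Python (the coordinates are invented): building the N×N pairwise matrix is quadratic, while brute-forcing the tour enumerates (N-1)! orderings and blows up almost immediately.

    # Toy illustration of quadratic vs. factorial growth; stop coordinates are made up.
    import itertools
    import math

    stops = [(0, 0), (2, 1), (5, 4), (1, 3), (4, 0)]

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # Quadratic: one matrix entry per pair of stops (N*N of them).
    matrix = [[dist(a, b) for b in stops] for a in stops]

    # Factorial: brute-force TSP tries every ordering of the remaining stops.
    best_order = min(
        itertools.permutations(range(1, len(stops))),
        key=lambda order: sum(
            matrix[a][b] for a, b in zip((0,) + order, order + (0,))
        ),
    )
    print(best_order)  # cheapest tour starting and ending at stop 0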
> The same routing algorithms have to run whether they have one package or 1 million packages.
Certainly you know that running a routing algorithm 1 million times, because you need to route 1 million packages, is going to need 1 million times more CPU time than routing one package, right?
No, it will take as much time and as many CPUs as you have available, unless you manage to find the globally optimal solution earlier (which practically never happens for the problems FedEx solves).
The number of variables is not a good predictor of solve time or average-case difficulty for integer programming.
We can solve some problems with millions of binary variables in minutes. There are other problems with a couple hundred binary variables that we absolutely cannot solve to optimality.
I'm wondering if we're talking about different things.
If it takes 1 second of CPU time to compute the path for 1 package, then it will take 1 million seconds of CPU time to compute paths for 1 million packages. You can add more CPUs to run multiple path computations in parallel (i.e., use 4 CPUs to cut the wall time to 250,000 seconds), but there's still 1 million core-seconds being spent.
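A quick back-of-the-envelope sketch of that point, using the same hypothetical numbers: adding CPUs shrinks the wall time, but the total core-seconds stays put.

    # Hypothetical numbers from the comment above: 1 CPU-second per package.
    cpu_seconds_per_package = 1
    packages = 1_000_000

    total_core_seconds = cpu_seconds_per_package * packages  # always 1,000,000
    for cpus in (1, 4, 64):
        wall_seconds = total_core_seconds / cpus
        print(f"{cpus:>3} CPUs -> wall time {wall_seconds:>10,.0f} s, "
              f"core-seconds spent: {total_core_seconds:,}")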
The same routing algorithms have to run whether they have one package or 1 million packages.
Plus you need to keep the history of operations for months if not years (meaning that you will store both peak-holiday and random-Tuesday data anyway).
So where is this flexible cost exactly?