
The cost does not scale linearly with the number of packages. A table with 1 entry is not cheaper than a table with 1 million entries.

The same routing algorithms have to run whether they have one package or 1 million packages.

Plus you need to keep the history of the operations for months if not years (meaning that you will store both holiday and random Tuesday data anyway).

So where is this flexible cost exactly?



Not just routing. When a plane or truck arrives and is unloaded, thousands of packages have to be scanned, which triggers a cascade of messages to different systems. In addition to the routing and whatnot, customs declarations might have to be sent, track and trace gets updates (both from arrival scan and customs), there's billing etc. All this flurry of activity happens within an hour or so after the plane lands, and then it quiets down until the next plane or truck.

I could easily see FedEx saving money by dynamically scaling capacity to track the arrivals of their planes and trucks.


May I point you to the DynamoDB docs? Costs scale along the dimensions of storage and activity. A one-entry table is essentially free if you use on-demand capacity instead of provisioned RCUs/WCUs.
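For reference, this is roughly what that looks like with boto3 (table name and key are made up for illustration); with on-demand billing you don't provision RCUs/WCUs at all and only pay per request plus storage:

    import boto3

    dynamodb = boto3.client("dynamodb")

    # On-demand billing: no provisioned RCUs/WCUs, you pay per request,
    # so a tiny or idle table costs essentially nothing beyond storage.
    dynamodb.create_table(
        TableName="package-events",  # hypothetical name
        KeySchema=[{"AttributeName": "package_id", "KeyType": "HASH"}],
        AttributeDefinitions=[{"AttributeName": "package_id", "AttributeType": "S"}],
        BillingMode="PAY_PER_REQUEST",
    )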


Ever heard of the traveling salesman problem? That is worse than linear, it is factorial. Granted, that is probably overkill for most people, and my team (not FedEx) just does NxN, so it is only quadratic, but it is definitely not constant, nor even linear.


Exactly because TSP variants blow up combinatorially, we solve them with a strict time limit and use the best solution found.

So for practical applications they are constant time.
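Concretely, that usually means an "anytime" setup: build a feasible tour quickly, then improve it until the clock runs out. A toy sketch (nearest neighbour plus random 2-opt moves, nothing like what FedEx actually runs) of what "strict time limit, best solution found" looks like:

    import math, random, time

    def tour_length(tour, pts):
        return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
                   for i in range(len(tour)))

    def nearest_neighbour(pts):
        # Construction heuristic: always visit the closest unvisited stop next.
        unvisited = set(range(1, len(pts)))
        tour = [0]
        while unvisited:
            nxt = min(unvisited, key=lambda j: math.dist(pts[tour[-1]], pts[j]))
            tour.append(nxt)
            unvisited.remove(nxt)
        return tour

    def solve(pts, time_limit_s=1.0):
        # Anytime loop: apply random 2-opt moves until the budget is spent,
        # then return the best tour found so far.
        deadline = time.monotonic() + time_limit_s
        tour = nearest_neighbour(pts)
        best = tour_length(tour, pts)
        while time.monotonic() < deadline:
            i, j = sorted(random.sample(range(len(tour)), 2))
            candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
            cand_len = tour_length(candidate, pts)
            if cand_len < best:
                tour, best = candidate, cand_len
        return tour, best

    stops = [(random.random(), random.random()) for _ in range(200)]
    tour, length = solve(stops, time_limit_s=1.0)  # same wall time for 20 or 2000 stops
    print(round(length, 3))

The wall time is fixed by the budget; what changes with instance size is how good the answer is when the time runs out.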


> So for practical applications they are constant time.

So they should be able to handle all of this on a Raspberry Pi, if the size of the input has no effect on processing time.


Yep, a Raspberry Pi can give you *a* solution.

It is not difficult to find a solution to routing problems; the difficult part is to get a provably near-optimal solution (<1% optimality gap).
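For anyone who hasn't seen the term: the optimality gap is how far the best solution found could still be from the true optimum, relative to the best bound the solver has proven so far. A sketch with made-up numbers (solvers differ slightly on whether they divide by the incumbent or the bound):

    # Optimality gap for a minimisation problem (made-up numbers).
    incumbent = 1042.7    # cost of the best route plan found so far
    lower_bound = 1035.1  # best bound proven so far, e.g. from an LP relaxation

    gap = (incumbent - lower_bound) / incumbent
    print(f"{gap:.2%}")   # ~0.73% -> provably within 1% of optimal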


So it scales with optimality but not size?


It scales with the number of feasible routes for the particular problem instance you are looking at.

The issue is that there are exponentially many of them, and you cannot easily prove that there is no better solution than the one you have on hand.

That is why we solve these problems for as long as possible with as many resources as possible.
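To make "exponentially many" concrete: even the plain single-vehicle case with n stops has (n-1)!/2 distinct tours, so brute force stops being an option almost immediately:

    import math

    # Distinct tours for one vehicle visiting n stops:
    # fix the starting stop and ignore direction -> (n - 1)! / 2
    for n in (5, 10, 15, 20, 25):
        print(n, math.factorial(n - 1) // 2)

    # 20 stops is already ~6e16 tours; multi-vehicle instances with time
    # windows and capacities are far worse.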


> The same routing algorithms have to run either they have one package or 1 million packages.

Certainly you know that running a routing algorithm 1 million times, because you need to route 1 million packages, is going to need 1 million times more CPU time than running it for one package, right?


No, it will take as much time and as many CPUs as you have available, unless you manage to find the globally optimal solution earlier (practically never, for the problems that FedEx solves).

The number of variables is not a good predictor of running time or average-case complexity for integer programming.

We can solve some problems with millions of binary variables in minutes. There are other problems with a couple hundred binary variables that we simply cannot solve to optimality.

Sorry, but this is not your typical sorting problem.


I'm wondering if we're talking about different things.

If it takes 1 second of CPU time to find the path for 1 package, then it will take 1 million seconds of CPU time to find paths for 1 million packages. You can add more CPUs to run multiple path finds in parallel (i.e., use 4 CPUs to cut the wall time to 250,000 seconds), but there's still 1 million core-seconds being spent.
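Spelled out with the numbers from the comment above (assuming the work really is one independent path computation per package):

    cpu_seconds_per_package = 1
    packages = 1_000_000
    cores = 4

    core_seconds = cpu_seconds_per_package * packages  # 1,000,000 core-seconds regardless of core count
    wall_seconds = core_seconds / cores                 # 250,000 s of wall time on 4 cores
    print(core_seconds, wall_seconds)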


Seasonal demand



