The clue is probably in the 700-1,000-machine interaction. If they have to display 200% more results (30 vs. 10), they're probably interacting with a lot more machines in their cloud.
Perhaps -- but even if that is true, interacting with more machines is presumably a trivially parallelizable operation: the query can fan out to all of them at once, so the overall time is governed by the slowest responder rather than the sum. Extra machines alone shouldn't double the response latency.
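To make that concrete, here's a minimal back-of-the-envelope simulation (not anything from the article): per-machine latencies are assumed log-normal with a roughly 50 ms median, and the overall fan-out latency is taken as the max across machines. All the parameters are made up; the only point is that the max of N samples grows very slowly with N, so going from 700 to 1,000 machines barely moves the needle.

```python
import math
import random
import statistics

def fanout_latency_ms(n_machines: int, trials: int = 2000) -> float:
    """Median overall latency of a parallel fan-out query.

    Assumes each machine's response time is log-normal (median ~50 ms,
    sigma 0.3 -- made-up numbers) and that the query waits for the
    slowest machine, so per-trial latency is the max across machines.
    """
    per_trial_max = []
    for _ in range(trials):
        slowest = max(
            random.lognormvariate(math.log(50), 0.3) for _ in range(n_machines)
        )
        per_trial_max.append(slowest)
    return statistics.median(per_trial_max)

if __name__ == "__main__":
    for n in (700, 1000):
        print(f"{n} machines: ~{fanout_latency_ms(n):.0f} ms")
```

Under those assumptions, the slowest-of-1,000 is only a few percent worse than the slowest-of-700, which is why the wider fan-out by itself doesn't explain a doubled response time.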