Yes. Light travels 839 km or 521 miles in 2.8 ms. If you need a round-trip to send a query and get a response, then you would have to be within a few hundred miles of the datacenter, assuming perfect efficiency.
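The arithmetic above is easy to check: at the speed of light in a vacuum, a 2.8 ms budget buys ~839 km one-way, or roughly half that radius if you need a round trip. A quick sketch:

```python
# Back-of-envelope: how far can light travel in 2.8 ms,
# and what radius does a round trip (query + response) allow?
C_KM_PER_MS = 299_792.458 / 1000  # speed of light in vacuum, km per millisecond

budget_ms = 2.8
one_way_km = C_KM_PER_MS * budget_ms       # distance if the whole budget is one-way
round_trip_radius_km = one_way_km / 2      # max distance if you need a reply back

print(round(one_way_km))             # ~839 km
print(round(round_trip_radius_km))   # ~420 km, i.e. "a few hundred miles"
```

Note this assumes vacuum propagation; light in fiber is slower, so the real radius is tighter still.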
Supporting every region in every public cloud infrastructure provider is absolutely the goal. But we can change the little widget to be less confusing, too.
I find this interesting. Do you plan to have every database record replicated across every one of your datacentres? That would be some crazy high availability.
Yes, that is the model. Each region has a full copy of the data and has its own internal replication factor. The customer can select which cloud regions and providers to replicate to, and pay accordingly. So you will be able to choose how many replicas you have and where they live.
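The placement model described (a full copy per selected region, each with its own internal replication factor) could be expressed as something like the spec below. All names and fields here are hypothetical, just to illustrate the shape of the choice the customer makes:

```python
# Hypothetical placement spec: each selected region holds a full copy
# of the data, replicated internally by its own factor.
placement = {
    "regions": [
        {"provider": "aws",   "region": "us-east-1",     "internal_replicas": 3},
        {"provider": "gcp",   "region": "europe-west1",  "internal_replicas": 3},
        {"provider": "azure", "region": "australiaeast", "internal_replicas": 2},
    ],
}

# Total physical copies of any record = sum of per-region replica counts,
# which is what the customer would ultimately be paying for.
total_copies = sum(r["internal_replicas"] for r in placement["regions"])
print(total_copies)  # 8
```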
You are assuming that the database is being accessed directly by the end user. If the database is being accessed from within an application running in a datacenter, then you only need a database in that datacenter.
2.8 ms to access data within a datacenter is both good and believable. And if I'm building an application, it is a latency figure that matters to me a lot.
If it's "global" then I should see the same latency no matter which service provider I choose. If my datacentre is in Brisbane and their closest is Sydney, the latency I see will be about 40 ms.
Yes, OP is assuming that. The popular use-case for a "global DB" is end-users.
And the popular definition of geographic network latency is a function of distance and the speed of light, not the speed at which an app talks to the DB.
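That distance-based floor is easy to estimate: light in fiber travels at roughly two-thirds of c, so geography alone sets a hard lower bound on round-trip time. A sketch using the Brisbane-Sydney example from upthread (the ~730 km figure is an approximate great-circle distance, and real fiber routes are longer, which is part of why observed latency exceeds this floor):

```python
# Lower bound on RTT from geography alone: light in fiber moves at
# roughly 2/3 c, and actual routes are longer than great-circle distance.
C_KM_PER_MS = 299.792458   # speed of light in vacuum, km per millisecond
FIBER_FACTOR = 0.67        # typical slowdown from fiber's refractive index

def min_rtt_ms(distance_km: float) -> float:
    """Idealized round-trip time over straight-line fiber, in milliseconds."""
    return 2 * distance_km / (C_KM_PER_MS * FIBER_FACTOR)

# Brisbane to Sydney is roughly 730 km great-circle (approximate).
print(round(min_rtt_ms(730), 1))  # ~7.3 ms floor; routing and switching add the rest
```

The gap between this ~7 ms floor and a ~40 ms observed figure is route indirection, queuing, and switching, not physics.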