> To work around these complexities, Google built Spanner, a database that provides strong consistency with wide replication at global scale. Spanner achieves this through TrueTime, a custom API backed by GPS receivers and atomic clocks in Google's datacenters, which lets it provide both consistency and high availability for replicated data.
I find it interesting that most of this complexity simply falls away if users host their own data. In my estimation, most people's computing needs would be best satisfied by a smartphone plus a Raspberry Pi in their house hosting their data, protected by a simple auth scheme and accessed using simple protocols built on HTTP. That would be more than enough to serve all their photos, videos, documents, and social feed for their few hundred friends to consume. Things like email would probably still be best handled by the one cousin in the family who works in IT, to manage spam etc.
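As a hedged sketch of the setup described above: Python's standard library already covers the "simple auth over HTTP" part. The credentials and port here are illustrative placeholders, and a real deployment would want TLS and hashed secrets rather than a plaintext password.

```python
# Sketch of a self-hosted data server: static files behind HTTP Basic auth.
# USERNAME/PASSWORD and the port are hypothetical placeholders.
import base64
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

USERNAME, PASSWORD = "me", "change-me"
EXPECTED = "Basic " + base64.b64encode(f"{USERNAME}:{PASSWORD}".encode()).decode()

class AuthHandler(SimpleHTTPRequestHandler):
    """Serve files from the current directory, but only to authenticated clients."""
    def do_GET(self):
        if self.headers.get("Authorization") != EXPECTED:
            self.send_response(401)
            self.send_header("WWW-Authenticate", 'Basic realm="home"')
            self.end_headers()
            return
        super().do_GET()

# To run on the Pi (blocking call):
#     ThreadingHTTPServer(("0.0.0.0", 8080), AuthHandler).serve_forever()
```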
If only the technical side were the actual problem.
This depends on the user; tech-savvy users may prefer a self-hosted version, especially if it installs in a few clicks. But they are outnumbered by IT-naive users whose only realistic option is vendor hosting.
> tech-savvy users may prefer a self-hosted version
Not necessarily. I'd hate to self-host. It's a maintenance burden: time wasted getting everything to work with all its complexities, and once that's done, you need to make sure it keeps working. You're responsible for drive failures, backups, paying the bills, missing a bill, etc. And that's despite my software and sysadmin experience.
I don't think it has anything to do with "tech-savvy". Hosted is just the better option in almost all cases, especially at individual or small-medium scale.
I actually think the converse of the typical self-hosting claim is true: only a small number of tech-savvy users like to self-host. They just happen to be a vocal minority.
We could improve the convenience of self-hosting. And we should. The pendulum always swings; we should start thinking about what we want it to look like, or somebody else will.
A smartphone + Raspberry Pi provides no backup at all, so it is a disastrous setup for important data, such as pictures. Adding backup (and restore) functionality is non-trivial; adding high availability is harder still.
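To give a feel for why even the basic version is non-trivial, here's a minimal sketch of a versioned backup with a restore-side integrity check (the function names and snapshot layout are assumptions, not a real tool). Note everything it still ignores: off-site copies, encryption, scheduling, retention, and partial restores.

```python
# Sketch: timestamped backup snapshots with a SHA-256 manifest for verification.
import hashlib
import shutil
import time
from pathlib import Path

def backup(src: Path, dest_root: Path) -> tuple[Path, dict[str, str]]:
    """Copy src into a timestamped snapshot; return it plus a checksum manifest."""
    snap = dest_root / time.strftime("%Y%m%d-%H%M%S")
    shutil.copytree(src, snap)
    manifest = {
        str(p.relative_to(snap)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in snap.rglob("*") if p.is_file()
    }
    return snap, manifest

def verify(snap: Path, manifest: dict[str, str]) -> bool:
    """Restore-side check: every recorded file exists and matches its hash."""
    return all(
        (snap / rel).is_file()
        and hashlib.sha256((snap / rel).read_bytes()).hexdigest() == digest
        for rel, digest in manifest.items()
    )
```

Even this toy version only detects corruption; recovering from it needs a second, independent copy, which is exactly the part a lone Raspberry Pi doesn't give you.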
Not to mention, a lot of computing is not done with your own data. Sure, my pictures could stay in my house, and I could access my friends' pictures with relatively simple APIs. But I also need to work with Wikipedia, with StackOverflow's database of answers, with Amazon's database of products, with YouTube's and Netflix's video databases etc.
Maybe there's room for a middle-ground compromise: "street-hosted data", some kind of turnkey micro-datacenter deployed per neighborhood that only accepts traffic from physical networks within a fixed radius / whitelist, etc.
So you get some economies of scale ... I wonder what the "breakeven topology" would look like.