There are already multiple implementations of this:
- nextcloud (https://nextcloud.com/), more like a "dropbox-like" with added applications
- sandstorm (https://sandstorm.io/), with a high focus on security
- yunohost (https://yunohost.org/#/), a full-on Debian distribution with all the scaffolding to one-click install known applications (mail server, file storage, IM server and client, ...). Any Linux software can be turned into an "app"; the process is essentially the same as creating a package for a distribution, just targeted at this one. The good thing is that installation is way easier than using a package manager and then configuring the database for the user, etc. It seems to be the most in line with the author's vision.
“If you were using federated Kubernetes with node auto-scaling and the latest cloud-native AI-enabled service discovery OSS tools for geographically aware traffic distribution your static site wouldn’t have gone down.”
I think the most prototypical "hugged to death" personal website is a stock WordPress setup on shared hosting or a low-spec VM, without any caching plugins, and perhaps with some popular plugins that happen to be database-heavy.
It's easy to click a few buttons, install some plugins and themes, and wind up running 100+ SQL queries on every page load. The various caching plugins work very well, but it's not necessarily something everyone thinks of turning on in advance of getting a lot of traffic all at once.
Can't say if that's what happened here, but it's super common when personal sites linked here go down.
Ah, thanks. This makes sense now. I had mine entirely frontend and even had a decent-sized JS game on it, but never had any trouble posting. Likely because I never needed to deal with SQL (or just low traffic).
People don't take advantage of caching headers. Put Cloudflare in front of your blog with 10-minute caching headers and it isn't going down for almost anything.
They don't even need Cloudflare. Just a typical cache plugin would allow a WordPress/Django/Rails/whatever site to survive HN. Every page request being 90 database calls to satisfy "related posts", "word clouds", "previous", "next", "related" and whatnot just doesn't scale :) Yay abstraction.
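To make the "typical cache plugin" point concrete, here's a minimal sketch of the idea in Django (one of the stacks named above); the view, template, and get_post helper are hypothetical, just for illustration:

```python
# Minimal sketch of per-page caching in Django; the view, template, and
# get_post helper are hypothetical and only here for illustration.
from django.shortcuts import render
from django.views.decorators.cache import cache_control, cache_page

@cache_control(public=True, max_age=600)  # tell Cloudflare/browsers to reuse it for 10 min
@cache_page(600)                          # also cache the rendered page server-side
def blog_post(request, slug):
    # Without caching, this view (and any "related posts" queries inside
    # get_post) runs on every single request; with it, a traffic spike
    # mostly hits the cache instead of the database.
    post = get_post(slug)  # hypothetical data-access helper
    return render(request, "post.html", {"post": post})
```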
You can read about his tech stack here[1]. Curiously, his "About" page loads fine. He is using a Django REST API with DynamoDB and the "whole site is hosted serverlessly on AWS Lambda."
DynamoDB can be expected to adjust capacity every 5 minutes when set up with an auto-scaling configuration or on the pay-per-request model.
Lambda will scale up in seconds.
Dynamo, however, can be set not to scale (if you wish to stay within free-tier limits). The DynamoDB free tier allows 5 reads/second of up to 4KB each, with a buffer of about 5 minutes' worth of "tokens" at this rate.
If poorly designed (e.g. using Dynamo like a relational database), you chew through this with app-side joins very quickly. Even if well designed, looking up related articles or re-loading records on page transitions will eat up DB time.
As someone else mentioned, caching is critical for this setup to survive this load, especially if you want to stay in the free tier... And CloudFront costs pennies compared to Dynamo and Lambda... though both can easily be cheaper than $5/month.
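For concreteness (and not claiming this is the author's setup): switching a table off the free tier's provisioned capacity to on-demand billing is a one-call change with boto3, though it also means leaving the free tier. The table name here is made up:

```python
# Hypothetical table name; this is a sketch, not the author's configuration.
import boto3

dynamodb = boto3.client("dynamodb")

# Switch from provisioned capacity (the free-tier 5 reads/sec discussed above)
# to on-demand billing, which scales with traffic but is billed per request.
dynamodb.update_table(
    TableName="blog-posts",
    BillingMode="PAY_PER_REQUEST",
)
```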
Scotty or even McCoy might be a better fit, but not having seen enough TOS, my impression is that Spock is prominent enough at out-technobabbling his captain for this context.
It's a static site FFS! Erols and ServInt were serving 3k requests per second on a Pentium in the nineties, using Apache and SCSI disks with no acceleration, no HTTP cache, and no memcache, because that 2KB page fits into the OS disk cache!
I wrote an article here, and the traffic I got was 1.3 million web requests. All I did was use memcache and have all my static assets served by nginx.
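The memcache side of that is roughly this pattern; the sketch assumes a Python app with the pymemcache client (I don't know what the parent actually used), and render_article is a hypothetical stand-in for whatever builds the page:

```python
# Sketch of page caching with memcached via pymemcache; render_article is a
# hypothetical stand-in for whatever actually builds the page.
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))

def render_article(slug: str) -> bytes:
    # Placeholder: in a real app this would hit the database and templates.
    return f"<html><body>{slug}</body></html>".encode()

def get_article_html(slug: str) -> bytes:
    key = f"article:{slug}"
    cached = cache.get(key)
    if cached is not None:
        return cached                    # serve the flood from the cache
    html = render_article(slug)          # expensive path, runs at most once per expiry
    cache.set(key, html, expire=600)     # keep it for 10 minutes
    return html
```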
Just like the article implies, if it requires any tweaking and configuration, most people won't/can't do it.
Me too. I like Twilio, but Anveo.com is worth checking out as well. They have a visual call flow handler that's pretty cool/capable, and like Twilio, are pretty cheap.
I'm sure everyone's setup is different, but I did this for about 8 years and had to stop recently due to a variety of problems. The first was the lack of support for group messaging. I couldn't send group messages at all, but what was even worse was that when I was part of group messages I would receive them individually, as if the person were texting me directly. My system could mark them as group messages, but I couldn't see the other recipients, so there was no way of knowing who else was in a thread until other people started texting and I could piece the group together from context.
Also some services straight up refuse to send SMS to cloud phone providers meaning you can't sign up for certain services that needed a verified phone number (unless they had an option to receive a call with the verification code which worked like 25% of the time).
Dialing was another huge issue: you can somewhat intercept outbound calls on Android, but the system is buggy, so I had to find other methods. Since I had integrated my SMS/MMS messaging with Slack (one Slack channel per phone number), I created a /dial command that would call my phone and, when I answered, transfer me to the person I wanted to call.
Happy to answer more questions about it but I highly recommend people think about all the consequences before moving their main number over.
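For illustration, a call-my-phone-then-bridge /dial flow might look roughly like this with Twilio's Python SDK (I don't know which provider the parent used; the numbers and function name are placeholders):

```python
# Sketch only: numbers and the Slack wiring are placeholders.
from twilio.rest import Client
from twilio.twiml.voice_response import Dial, VoiceResponse

MY_PHONE = "+15550001111"      # hypothetical: my actual cell phone
CLOUD_NUMBER = "+15550002222"  # hypothetical: the number hosted with the provider

def slash_dial(target_number: str) -> None:
    """Handle `/dial +1555...`: ring my phone first, then bridge to the target."""
    client = Client()  # reads TWILIO_ACCOUNT_SID / TWILIO_AUTH_TOKEN from the environment

    # TwiML executed once I pick up: dial out to the target, presenting my cloud number.
    response = VoiceResponse()
    dial = Dial(caller_id=CLOUD_NUMBER)
    dial.number(target_number)
    response.append(dial)

    client.calls.create(to=MY_PHONE, from_=CLOUD_NUMBER, twiml=str(response))
```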
> Also some services straight up refuse to send SMS to cloud phone providers meaning you can't sign up for certain services that needed a verified phone number (unless they had an option to receive a call with the verification code which worked like 25% of the time).
I've run into the same issue on Google Voice. Though sometimes I wonder if I could have gotten the best of both worlds by just porting a number from a traditional carrier to Google Voice/Twilio.
Are these kinds of checks only against the number itself? Or is there some kind of dynamic registry?
"Are these kinds of checks only against the number itself? Or is there some kind of dynamic registry?"
In general it is much simpler than that. These companies are sending you these SMS messages not from a "normal" phone number (xxx-yyy-zzzz) but from a shortcode (xxxxx). The "from" is a shortcode and only mobile numbers can receive SMS from shortcodes.
So if your number is not a "mobile" number, you might still receive SMS from other real phone numbers, but you cannot receive SMS from shortcodes.
Twilio, for instance, does not provide mobile numbers. Period. So even if you port a mobile number to Twilio, as soon as it is theirs, you cannot receive SMS from shortcodes.
Your assertion that only mobile phones can receive SMS from shortcodes is not true in general, and additionally many verification SMS are not sent from shortcodes.
I'm almost certain the check is against whatever provider the number is currently behind. If it were against the number itself, the opposite situation would also hold: Google Voice to carrier would still cause issues while carrier to GV was fine. That's not the case, however.
The number needs to be non-VoIP. Off the top of my head, Uber, Lyft, and Craigslist all require non-VoIP, which means no Google Voice.
They have databases that they look up, but they're notoriously inaccurate. If you can use a number from another country (not always possible since some services require an in-country number, but a surprising number don't) you'll often find it works better, especially if you choose a country where the database provider might have less access to information about which ranges are assigned to which providers.
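As one example of such a database, Twilio's Lookup API can classify a number's line type; other vendors exist, and as noted the data is often wrong. A rough sketch:

```python
# Sketch of a line-type check; which vendor a given service actually uses is unknown.
from twilio.rest import Client

client = Client()  # credentials from TWILIO_ACCOUNT_SID / TWILIO_AUTH_TOKEN env vars

def looks_like_voip(number: str) -> bool:
    """Return True if the carrier database classifies the number as VoIP."""
    info = client.lookups.v1.phone_numbers(number).fetch(type=["carrier"])
    # `carrier` is a dict like {"name": ..., "type": "mobile" | "landline" | "voip"}
    return (info.carrier or {}).get("type") == "voip"
```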
I have a number in Google voice that I ported from a regular old T-Mobile SIM years ago. I haven't seen many that won't send me an SMS message, but when I do, they are usually banks. I don't think it much matters where the phone number originates.
This is a really cool idea. In fact, I was thinking about it but in the form of a PC [1].
Most desktop computers today are an order of magnitude more powerful than the small VMs we rent in the cloud. While you go on with your day, your desktop computer can be put to work in your service.
The website is a bit misleading since I haven't had the energy to really revamp it to reflect the current state of the project. A lot of the marketing material is still accurate in the details (e.g. features), but the overall presentation makes it look like a product for sale rather than an open source community...
The current state is that it's "just" an open source project, not a business. There are a few people working on it in our spare time. The paid hosting service has shut down.
Oh I see, thanks. I had heard of it before but wasn't familiar enough to know it had previously been paid-for/hosted option.
I actually thought my 'open for business' turn of phrase was going to draw comments like 'it's open source, what are you talking about', when I only meant that it existed.
---
That's a shame it didn't work out, but I think for now the vast majority of people that want 'personal clouds' are so technically minded that they will favour at least running it themselves, if not ensuring that everything is OSS.
Perhaps a critical point can be passed after which privacy-minded but not technically minded folks become interested in the by-then-popular concept, and will pay for the cloud device and then further for apps in its app store.
I think it's a solid idea, but that the money at the moment is in commodity infrastructure or hardware that happens to be used for it, and to a lesser extent software licences.
Did you consider selling Sandstorm licences, as a sort of iOS for a BYO phone, in the analogy?
> Did you consider selling Sandstorm licences, as a sort of iOS for a BYO phone, in the analogy?
We tried for a while. But the kinds of "enterprise" customers who would pay for an on-prem/private-cloud version of Sandstorm are also the kind that don't just wander onto your web site and plug in their credit card... you have to do the enterprise sales dance, which we never had any clue how to do. :/
Sandstorm the company most definitely failed -- we ran out of money, couldn't raise more, and had negligible revenue, hence could no longer pay employees. We looked for an acquisition, but while we received multiple offers to hire the people, we received no offers to acquire the company along with them. (Well, some companies offered to buy Sandstorm for $0 and wind it down for us.)
Most of the team ended up taking offers from Cloudflare, but Cloudflare did not acquire Sandstorm. Sandstorm technically still exists as a company, but with no paid employees -- it's basically just me keeping things operating in my spare time. But the vast majority of my coding energy has gone into Cloudflare Workers.
OTOH in the last couple months a bunch of people in the community have started more actively contributing to Sandstorm as an open source project. So that's cool!
Thanks for that story, it was like reading "2 years as solo developer" in a minute.
Especially curious, as making something along the lines of Sandstorm is what I'd actually be doing if I weren't chasing the AI hype. Though I thought of it more as something you could "install" into your free-tier cloud account, with typical mobile apps (like Calendar, Contacts, Mail, Music and Drive) coming by default.
When openssl has a bug, who notices, decides, installs the new package and restarts all the dependent services?
That person or group has the actual control over your personal cloud server. And it's more efficient, and thus cheaper, if they are a large scale organization. Once again it will be more attractive to most people to centralize.
I like the sentiment (and run my own Synology NAS to sort of achieve this), but I'm trying to understand the difference between saying "my second phone is in the cloud" and "I want my phone to be a thin client for a server that I run using open standards".
Is there any meaningful difference? I feel like the second idea has been around for a while.
Incidentally, because I've read so many articles recently bemoaning the state of the iPad, I think if Apple really embraced the "thin client" model for the iPad, they would have a much better shot at product market fit. For example, if the iPad could seamlessly view and edit files from, say, any SMB server as easily as it can from iCloud, I think that would get people excited. It would be decoupling the interface from the underlying standard, but also make sure that they work well together. Who knows, it also might lead to having a standard file format where we're missing them today, e.g., for handwritten notes?
> For example, if the iPad could seamlessly view and edit files from, say, any SMB server as easily as it can from iCloud, I think that would get people excited.
This is actually an advertised feature of iOS 13, and I did get very excited about it when it was released last year. Unfortunately, it’s completely unusable; browsing a directory with more than 100 children or copying more than a few megabytes of data is likely to freeze or crash Files, and I’ve even managed to get the app into a broken state that can’t be fixed with an iOS reboot. The same applies to the USB drive “support.” Don’t trust anyone who tells you the iPad Pro is a viable laptop replacement if you need a working filesystem.
This is a great point, and I should clarify: I do this and it works. However, it feels like a second class feature, and there are many rough edges.
For example, some apps don’t seem to be aware of this possibility and throw errors if you try to open a file on this way. You also can’t add directories as favorites like you can for other sources. It’s just annoying enough in some cases that it feels like Apple doesn’t really want you to do it.
This is great to see and is pretty much the same reason why we initially started Cloudron [1]. There are not many technical reasons why running your own apps on a server has to be harder than using a phone. There are tons of great apps already out there, and from my perspective the biggest issue, as mentioned in the blog post, is the onboarding and the ease of installation/maintenance of those apps, including the server itself. The building blocks are all available, but as a complete solution they are hard to use unless one is something of a sysadmin. This makes it just very exclusive. If you have ever tried to set up your own email server, there is so much grunt work to be done and so many things to be learned. Learning about this is in itself a great opportunity to understand the underlying technology and how things work together, but it is also a huge barrier.
I tried Cloudron recently and the experience was pretty smooth. However, the $30/month price tag put me off. Yunohost offers the same for free (please correct me if Cloudron has features Yunohost doesn't have), and it's hard to compete with free.
Agreed the price is steep, but it's probably good they're focusing on profitability. IMO what killed sandstorm.io was the lack of resources to keep apps integrated and updated. It's awesome tech but just didn't hit critical mass.
- The "appstore"-like experience already exists in the ecosystem, in a sense. You have package managers with post-install and configuration scripts (Installing slapd for example yields a perfectly functioning instance within 2 minutes), or maybe Ansible playbooks that run the instance with a simple file configuration and one command.
For package managers, it all depends on the packaging.
- Again, the ecosystem somewhat already exists if you consider Docker, for example. All the dependencies are neatly packaged, and if you think Kubernetes, then you have Helm as the app store. It even solves the possible problems with ports and shenanigans that the user mentions. (But again, post-install triggers in debconf for ufw on Ubuntu, for example, already do this. It all depends on the packaging.)
But as the author points out, not everyone is keen on hitting the terminal, but that has been a problem (or rather a business opportunity) for long enough that there are solid, uncomplicated and perhaps way more convenient (and eco-friendly) ways to achieve what the author is thinking of accomplishing.
One thing that comes to mind is the amazing Synology products. In fact, they implement the exact vision of the author: run your software on your NAS, and have a dynamic DNS graciously provided by Synology to access it. Email server, file sharing, photos app, even photo recognition, and a plethora of other stuff. You can even run VMs or Docker on it! And all within the Synology app store (with some magic NAT traversal, I presume, because it does not need any port forwarding).
And that is just one example of something I've seen and witnessed working very well and reliably.
I guess what I'm saying is, the author's vision is shared amongst a lot of people and there may even be commercial solutions out there that implement it. But I would for sure love to see an open-source attempt. In fact, I think there's already one that I'm just not aware of.
I'm trying to create EXACTLY this with my latest project (https://www.aspen.cloud) and it is challenging to find the right balance between ease of use and decentralization. The path we're taking is a service that you can pay for and we provide everything you need or you host it yourself. Because sometimes letting experts manage your system is more reliable and you can avoid getting the hug of death like this site!
The idea of owning your personal computer in the cloud with instant, always updating apps completely in your control seems inevitable but making something easy for anyone to use will take time. That's why we're focusing on the developer experience to make it simple and cheap to make apps in this new paradigm while shaping the experience for the end user so it does ultimately feel as simple as using the App Store.
Would you consider open sourcing it in the future? If not, the hosted option not only competes with products such as G Suite, but it'll also be hard to get adoption from the self-hosted crowd.
That's the plan! We are building it in a way that also means you don't have to choose between self-hosting and paying for the service. You could use both and have one as the failover, i.e. primarily use the hosted version, but if your internet goes out you are now working over your LAN to a computer you have in the other room.
So... VDI, but for Phones? VPI? Seems like a cool idea worth some effort to streamline.
These days, it's fairly inexpensive to get a home server, set up a VPN server, and run stuff like a web browser through RDP. Guacamole (https://guacamole.apache.org/) makes desktop apps through a web browser even simpler.
Again, setting all this up is far from trivial, but I believe it's something that could be a "one-click app" for a VPS. See https://marketplace.digitalocean.com/
I've been using NextcloudPi on an RPi for 3 years to achieve most of what the author describes. It's been awesome and I'd suggest people give it a try.
I've been thinking about and working on the same problem for a while. My conclusion is that this has to be done with a physical device of some kind over which the user has physical control. Otherwise whoever provides the personal cloud servers ends up being the centralized lock-in element in the system.
Am I right in thinking that even if a big tech company did this it could still be good? Because they are just providing the 'device', like Apple does now. I guess they would still have power over what you could / could not install, though.
I get wanting to be serverless, but hosting the static parts using S3 and doing clever database things using Lambda and DynamoDB also achieves that goal, while surviving the hug of death.
DynamoDB does not survive the hug of death unless configured to do so, and can be very expensive if you don't design the data access pattern well.
The author is trying to stick to the free tier, so if it's set to the free-tier max (5 reads/sec) with no caching, it can probably only handle about 1 visitor every 2 seconds once the bucket of ~1500 read tokens is used up.
If it's doing scans at all, the DB will be shot in seconds.
You really need CloudFront in front of something like this to survive and to lighten the DB access.
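Not the author's actual code, but the idea looks roughly like this: have the Lambda handler return a Cache-Control header so CloudFront can serve repeat visitors from its cache instead of hitting DynamoDB on every page view (load_post and the event shape are illustrative assumptions):

```python
import json

def load_post(slug):
    # Hypothetical stand-in for the DynamoDB read (e.g. a table.get_item call).
    return {"slug": slug, "title": "...", "body": "..."}

def handler(event, context):
    post = load_post(event["pathParameters"]["slug"])
    return {
        "statusCode": 200,
        "headers": {
            "Content-Type": "application/json",
            # Let CloudFront (and browsers) reuse this response for 10 minutes.
            "Cache-Control": "public, max-age=600",
        },
        "body": json.dumps(post),
    }
```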
Thought this was going to be more literal: a phone virtual machine, for stuff like your robot vacuum cleaner, to keep it from infesting your real phone and spying on you.
If apps on personal servers can't collect data, then they will need to charge money in order to have an economic incentive to support the app, ship updates, upgrade the UI, etc. If there is a competing "free" option from a centralized monolith, it's going to be very difficult for the PCC app to compete.
Paid apps are also inherently less viral and can't tap into network effects as well, due to the friction resulting from payments. In order to overcome this, PCC apps need to develop a viable revenue model without putting up a paywall.
I see the PCC model working for applications where neither the consumer nor the business wants to store sensitive data because the security overhead is too high.
E.g. trading bot that doesn’t want to store user’s API keys. In fact, running private servers is something that is pretty common in that world.
Second, I think people in general are becoming more aware that free online stuff usually comes with a hidden cost to privacy.
Third, there is a whole swath of online services that can't thrive in the current situation: services that are too uninteresting (from a data-harvesting POV, or with no tie-in to a larger offering) to run for free, not valuable enough for the user to pay $7/month for, and not worth running as a SaaS at a lower price. Yet if a user had a cloud OS with metered resources, they could run such an app for a few bucks a year (plus a few bucks donated/paid to the app dev). And they'd never have to worry about it shutting down because the dev wants to do something else with their lives.
> I see the PCC model working for applications where neither the consumer nor the business wants to store sensitive data because the security overhead is too high.
I agree. I think apps that have both a basic and premium version can really thrive on a platform like this where useful apps are not competing with VC-fueled or ad-driven apps and the utility to the user is very clear. Data mining has really gotten out of hand and caused the value-add to the user to become extremely distorted. User attention has been the only metric that free apps are fighting for which is a zero-sum game and not good for anyone.
I'm most excited by the idea of applications being able to be more closely connected and even create new types of apps that just layer on top or extend existing app data.
There's always the App.net business model where some of your hosting fee gets passed on to the apps that you run but the resulting revenue would probably be small.
Solid worries me because the apps don't run on your server. That means Solid apps can do all kinds of bad things and your only recourse is to stop using it (if you even know about the badness). For the same reason, Solid's privacy expectations aren't really enforceable; apps could mine your data and you'd have no way of knowing and they could cache your data even after you withdraw permission.
Check the Hacker News Guidelines, particularly "please don't post shallow dismissals". I definitely think the blog author is onto something. This is not just a virtual desktop, this is a VPS but "administered" like any regular smartphone. Such a thing doesn't exist yet, while all the parts are there.