Hacker News

Is there a reason why successfully running software on a server is so much harder than running software on your phone?

I don't think it needs to be this way. Someone needs to figure out server software for consumers.

Just like PCs became more accessible, so should servers!

Edit: Brainstorming here: specifically, I'd like a more accessible UI, automatic updates, sensible defaults on all apps, an easier way to get started and so on.




Because the correct way to handle downtime and respond to problems is vastly different when you're serving one person, who will generally be actively using the device less than half the day (often far less), versus serving thousands or millions of people, where at any given moment multiple people may be actively using it.

> Edit: Brainstorming here: specifically, I'd like a more accessible UI, automatic updates, sensible defaults on all apps, an easier way to get started and so on.

To some degree, that's sort of like saying "I can drive a car, and cars are simple, so why isn't driving an 18-wheeler truck as simple, or a cargo ship, or a cargo plane? I don't see why it has to be more complicated than a car."


I think of it more like a pickup truck: it's easy to drive and gives consumers all the cargo space they would ever need. Not everyone needs to run Netflix's backend. But for things like a blog, an email server, a Matrix server, controls for your smart home, and so on, it would be preferable to have users control them.


Cloudron, Sandstorm, Synology, etc. exist for exactly those people, as do Canonical's Snaps for those more comfortable with a CLI.


I claim that it's possible to design cargo ships and planes that anyone could operate after short training, comparable to getting a driving license.

It all comes down to how much you want to invest in being as safe as possible from accidents. The cost of a cargo ship or plane accident is very high, and there is no compelling reason why everyone should be able to drive those vehicles. Therefore it makes sense that they are driven by professionals and designed for professionals.

If your server has millions of users and provides such value that downtime is not an option, maintaining it should be done by professionals, and the maintenance tools should be designed for professionals.

However, that's not the case for every server, and the cost of a failing "empty" server is basically zero (unlike an empty cargo plane).

I think it would be interesting to see software designed not around centralized servers or PCs, but around PSs = Personal Servers, where user data lives on the user's own server and services only link and communicate between them.


This is exactly what we do :) Check out our demo at https://cloudron.io. Another user commented that there is no market for this, but that's not true, since we exist and are doing well. It's a niche, but that's expected since this is a developing market.


Just wanted to pop in and say that Cloudron is excellent and I really, really love it. I discovered it a few years ago and it's just fantastic.

CapRover is excellent too but of all the various tooling I've tried over the years, Cloudron is hands-down the most polished option I've seen/used!


I started using Cloudron recently and I really like it! I came to this thread to mention it, but I see you're already here.


But it's not as simple as that, though, is it? Does this take into account things like HA? Or offsite backups? What about security? I guess this works for someone who sets it up on a single server, perhaps just for themselves. But running services is a lot more complex than a one-click install can cover.


Cloudron can backup to a variety of destinations including S3/GCS/Backblaze (https://docs.cloudron.io/backups/#storage-providers). I guess it depends on what you mean by "service". If you mean running something for a million users, we are not the right product. Cloudron is meant to setup services for personal use and businesses. Our target audience has 10k users max.


What is a good option for doing this on kubernetes?


I am not aware of any. Cloudron itself might run on k8s some day but so far we haven't found the right customers for that scenario yet!


Cloudron.io and Sandstorm.io both do this well. (Cloudron is very actively maintained with a lot of package updates and a very wide package library and feature set, Sandstorm is open source and adds a layer of security in assuming apps are evil or compromised.)

I do think there's probably a good market for a "just plug it into the back of your router" box that has one of these pre-installed and ready to go.

Windows Home Server was Microsoft's attempt to make a home server accessible to the average user. Unfortunately, Microsoft no longer feels consumers (or businesses, if we're being honest) should have on-premise servers, and has deprecated both Home Server and Small Business Server, along with the UI features that made them more accessible to the layman.


Thanks, I did not know about either of these! They do seem like a step in the direction I have in mind!


Unfortunately things are moving very slowly in this area. Sandstorm is basically abandoned and Cloudron is not even open source. Bitnami and Yunohost are also major options.

You might like some of the pointers here: https://github.com/awesome-selfhosted/awesome-selfhosted#sel...


> Sandstorm is basically abandoned

This is untrue. Sandstorm gets monthly releases, and new features tend to show up every couple months or so. Several major improvements to the platform are in the works at the moment. It's definitely true we could use more help, but it's still probably the most secure way to self-host cloud services for personal use.

> Cloudron is not even open source

True, though it's probably the best "successful" approach right now, in that the Cloudron devs have a functional business that allows them to very actively support the platform. (Sandstorm failed here, so as a Sandstorm contributor, I can't really knock their approach.)

> Bitnami and Yunohost are also major options.

Yunohost has zero isolation between applications; a single compromised app can hose your entire server. I would bear that in mind when recommending it widely.

Bitnami is going to leave you mostly on your own to decide how you're going to host its app packages. I'm not sure it's directly comparable; it's more like a Docker Hub than a managed self-hosting platform.


I'm not recommending any of these; I think it's best to just run services manually. With one VPS, Docker, and something like Caddy, the pain isn't that large.
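To give a sense of scale for the Caddy route: the entire reverse-proxy config for a Dockerized service can be a few lines, with TLS handled for you. A minimal sketch (hypothetical domain and port):

```Caddyfile
blog.example.com {
	# Caddy obtains and renews a Let's Encrypt certificate automatically
	reverse_proxy localhost:8080
}
```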

It would be great if Sandstorm could get enough attention to thrive, because it looked promising, but the activity of the blog [1] doesn't give me much hope. Since 2019, when it announced the hosting was shutting down, there have been just four posts and apparently no major releases.

[1] https://sandstorm.io/news/


We probably should write another blog post or two. We generally try to post only substantial content to the blog; our mailing list tends to carry more of the mundane, and of course there are our GitHub issues/PRs. Many projects make blog posts for releases, but we generally do not.

For what it's worth, the "just four posts" constitute some major things:

1. Continuing Sandstorm as a community project

2. The 1.0 release of the main app packaging tool

3. Let's Encrypt support built-in

4. A major security improvement in disabling apps from making outgoing HTTP requests without permission


> apparently no major releases

I push at least one release each month. The change log is here:

https://github.com/sandstorm-io/sandstorm/blob/master/CHANGE...

But it's certainly true that development is much slower today than it was in 2016 when there were seven people working full-time on it.


NAS products from the likes of Synology or QNAP are viable home servers for now.


Well... I think the common consensus is that servers and automatic updates don't go together. For the most part, you don't want your server deciding to upgrade its database, etc., at random times.

The same goes for the "accessible UI" bits, particularly if you're managing more than a single server and want to, say, upgrade a few thousand of them at the same time.

In both cases it's pretty "easy" to configure a server-management tool to do those operations (hence Cockpit, or Ansible GUIs, etc.). If you're looking for a more Android-like level of software management, it's pretty easy to install on Fedora Server the desktop tooling that comes with Fedora Workstation. With that you get nice app stores layered on top of both the traditional dnf/rpm package management as well as Flatpak and various other container technologies. For a single home server, just install something like Fedora Server and group-install one of the desktop profiles during setup (or later if it suits you).
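The "layer a desktop on later" route boils down to a couple of commands. A sketch, assuming a stock Fedora Server install (check `dnf group list` for the exact group name on your release; the Flathub URL and app ID are the standard public ones):

```shell
# Add the graphical desktop environment on top of Fedora Server
sudo dnf group install "Fedora Workstation"

# Enable Flathub so apps install app-store style, like on a phone
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install flathub org.mozilla.firefox
```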


I think you're asking a perfectly valid question. IMHO authentication is still an enormous mess and new standards like OAuth did nothing to improve it on the server side.

On your phone app store there's a strong and trusted source of identity coming from Apple or Google. They know who you are, what you're allowed to do, etc. and can delegate that authority to your apps.

On the server though... welcome to the wild west. How does your server know the person on the other end of a TCP connection is really you, or the person you shared a document to view, etc? You can put your trust in a third party authority like Google, Facebook, Auth0, Okta, etc. but that usually comes with a financial cost. You can roll your own auth or self-host an auth server, but then you're taking on a huge security burden and it's a big leap in complexity to manage something like Keycloak, an LDAP server, etc. It's just not an easy problem to solve with the tools the web gives us today.


It seems to me you are making it needlessly complicated (LDAP...). There are many tools for authenticated access to a server with minimal cost in terms of administration: TLS + Let's Encrypt + HTTP Basic auth, SSH, OpenVPN, WireGuard, etc.
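To illustrate how little administration the first option takes, here is a minimal Caddyfile sketch (hypothetical domain and port; the directive is `basic_auth` in recent Caddy, `basicauth` in older 2.x releases, and the hash is a placeholder you'd generate with `caddy hash-password`):

```Caddyfile
example.com {
	basic_auth {
		# replace the hash with the output of `caddy hash-password`
		alice $2a$14$hkzNZDyApvMb4MpU0z.Ple6cSmv4GGR67SmmBTDsp0iXxyB1RB8Sq
	}
	reverse_proxy localhost:3000
}
```

Caddy fetches and renews the Let's Encrypt certificate on its own, so the whole "authenticated access over TLS" setup is those few lines.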


If you’re using N+1 servers that have multiple users, then you definitely want some kind of centralized user management. It doesn’t matter how you connect to the server (ssh, etc). Those won’t solve the problem of keeping user account information in sync between the servers. You still need some way to keep account information (username, password, public keys) consistent between the servers.

That’s what the GP post was comparing to.

I use LDAP to manage access to multiple servers, and it's more work to set up than /etc/passwd, but much easier to keep things in sync.


Does LDAP have some advantage over a generic configuration management system like Ansible?


> How does your server know the person on the other end of a TCP connection is really you, or the person you shared a document to view, etc?

Client certs solve this problem quite nicely.
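For anyone curious what that looks like in practice, here is a sketch of issuing a client certificate from your own private CA (hypothetical file names and subjects; the server would be configured to trust ca.crt):

```shell
# Create a private CA key and self-signed CA certificate
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=My Private CA" -keyout ca.key -out ca.crt

# Create a client key and certificate signing request
openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=alice" -keyout client.key -out client.csr

# Sign the client certificate with the CA
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 365 -out client.crt

# The client then proves its identity on every TLS handshake, e.g.:
# curl --cert client.crt --key client.key https://example.com/
```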


There's no demand for it. There are some small and generous groups (e.g. linuxserver.io) who cater to the tiny niche of "homelabbers", but even that is a somewhat passionate and skilled group of people. It's mostly a waste of time to cater to anyone below that. Like most niche hobbies, no one is getting paid enough to care about the lowest common denominator.


Indeed, and the reason for that, I think, is that if you are at a level of capabilities where you are able to provide services, you have specific needs and ideas about how your business works. You can't outsource your business logic and customer care to some mass product and stay in business.


> Is there a reason why successfully running software on a server is so much harder than running software on your phone?

Sure (exaggerating)

phone - reboot is your FIRST option

server - reboot is your LAST option


Huh? We reboot our servers all the time. We would never design an app that isn’t tolerant to losing servers somewhat arbitrarily. How else could you ever handle hardware failures?


There is probably a niche for unreasonable customers who require 99.9+% uptime but do not want to pay for clustering and redundant servers. The best way to achieve that is to have a physical server that never gets rebooted. HW failures happen, but you can explain to the customer that it isn't your fault.


The affected user base is considerably different.

When I update an app on my phone, if it fails or changes considerably, it only affects me. When I update anything on a server, it's going to affect hundreds, thousands, or even millions of other people, and getting back to the state it was in before the update can cost hundreds of hours and thousands of dollars, not to mention the potential revenue lost from the affected users.


Exactly the same can happen by rolling out a buggy app to millions of other people. You can "brick" their app, and then have to work out how to get the fix rolled out to every user.

Updating a server is much cheaper than that.


Because you don't run your own apps on your phone; it's some developer's app on a phone paid for by you.


This. You want control? You do the work. On a phone, the only easy part is being a user/consumer.


> specifically, I'd like a more accessible UI, automatic updates, sensible defaults on all apps, an easier way to get started and so on.

Isn't this kind of what Ubuntu does with the snaps? You just snap install someapp and there it goes. It will helpfully restart to update whenever the developer publishes a new version, etc.

There's an infamous, quite involved debate over exactly this, with a bunch of people wanting an option to disable this behavior and another bunch arguing why that's not a good idea [0].

I guess the issue with servers is that there isn't really a one-size-fits-all, or at the very least it's not easy to figure out what it is. Combined with the fact that one argument in favor of running Linux is customizability, I'm not sure that many people are looking for such a solution.

---

[0] https://forum.snapcraft.io/t/disabling-automatic-refresh-for...


Once you get the infrastructure set up and cloud-init-enabled template images, servers can be very turnkey. I can spin up a new DNS-addressable, auto-updating Ubuntu VM in two or three clicks, then deploy software like this on it from the CLI. There's even a system for easy LXC container apps called TurnKey Linux. Works great with Proxmox.
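For illustration, the cloud-init user-data behind such a template image can be tiny. A sketch (package list, username, and SSH key are placeholders):

```yaml
#cloud-config
package_update: true
package_upgrade: true
packages:
  - docker.io
users:
  - name: admin
    groups: sudo
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... admin@example
```

Every VM cloned from the template picks this up on first boot, so new servers come up updated, Docker-ready, and reachable over SSH without manual steps.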


> Is there a reason why successfully running software on a server is so much harder than running software on your phone?

Yes. "running software on a server" is actually done when running a business, which requires careful attention to detail, quality of service, actual work and dedication to customers. "running software on a phone" is just being a consumer, the hard part of that is done by Google/Apple. In short, provider vs. consumer, it can't be easier to provide than to consume.

EDIT: Yes, one can run software on a server as easily as on a phone; it's just a few clicks or an installation command away. But most people running software on servers do not want that, because they want control and understanding and security etc... That is, not yet; maybe in the future every family will have their own home NAS server with apps.


I personally think it’s due to how the internet is designed, where applications are an afterthought. It’s after all easy to just get html pages published online, but anything beyond that is a hack.

With a better protocol and set of primitives it shouldn’t be as complicated (even though the challenges of scale can be unique to server software)


It's not a trivial thing - docker and sandstorm.io might be two examples of making some decent headway here.

Just today I set up a Postgres and an MS SQL server for testing, pretty much identically, each running out of its own named Docker container (for those not aware, it's even more similar than it sounds: MS SQL runs on Linux now).
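Roughly what such a test setup looks like (container names, tags, and passwords are illustrative; both images are the official ones):

```shell
# Postgres in a named container
docker run --name pg-test -e POSTGRES_PASSWORD=devpass \
  -p 5432:5432 -d postgres:16

# MS SQL Server on Linux in a named container
docker run --name mssql-test -e ACCEPT_EULA=Y \
  -e MSSQL_SA_PASSWORD='Dev!Passw0rd' \
  -p 1433:1433 -d mcr.microsoft.com/mssql/server:2022-latest
```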


Why do you want to run MS SQL on Linux? What will you do when the db gets locked or crashes? You call Microsoft and wait?


For development - I was waiting for a DBA to sort out access to the new prod server, and needed to check that the current app build was minimally working when talking to an MS SQL instance. It might be interesting to run in CI as well.

Running any RDBMS in production in Docker isn't a great idea. But for dev and test it can be great.

For my use case, we deploy mostly to a traditional setup (dedicated SQL Server), but I could also see it being useful for prototyping deployment to MS SQL in the Azure cloud.


> What will you do when the db gets locked or crashes? You call Microsoft and wait?

Actually, it would appear ms is quite serious about sql server on Linux:

https://docs.microsoft.com/en-us/troubleshoot/sql/linux/choo...

So I guess, similar to running sql server on windows?


It seems like an order of magnitude harder to run something on a phone, personally! I've never had to beg a company to let me run software on a server, at least not yet... let alone process payments through the platform.


Besides cloudron and sandstorm, yunohost is another alternative for this: https://yunohost.org/


This is what Canonical tried with snap packages. We need an open source variant of this system. I don’t think this exists.


The sandboxing model of app stores makes apps completely independent and also difficult to configure incorrectly.


Many modern server platforms do this too. Be it something that pushes a web-based app store model, like Cloudron and Sandstorm, or a plain infrastructure tool like Docker.


This is a fascinating thread.

Someone starts a "brainstorming" and everyone else poo-poos it. This is how:

1. Ideas are killed

2. Entrepreneurs are forged

Related reading: https://www.macleans.ca/society/science/scientists-mrna-covi...



