Ask HN: What's on your home server?
519 points by _moof on Jan 6, 2023 | 456 comments
It's been years (over a decade?) since I've had a server at home but I'm setting one up for media and I got to thinking: what else should I do with this box? So I was wondering what cool/nerdy/weird stuff you all are using home servers for. DNS and file sharing seem like obvious applications I could set up. I already run email and web on a VPS so that's taken care of. What are you doing with your home server?



On a cloud server (hidden from the public):

  www.keycloak.org  - auth mostly for outline
  www.getoutline.com - my personal "notion"
  nginxproxymanager.com - to proxy things
  Wireguard - remote access and interconnection between zones
  cockpit-project.org - to manage VMs
  github.com/coder/code-server - for remote development
  2x of docs.paperless-ngx.com/ (one for me and one for my partner) - I scan and destroy most of the letters I get.
  snibox.github.io/ - my terminal companion
  pi-hole (together with wireguard I don't have ads on my devices)
  uptime.kuma.pet - to be sure that things are online
  mailcow.email/ - for non-priority domains
  docs.postalserver.io/ - mail server for apps and services
At home (small 6 W NUC) with:

  HomeAssistant - To control home lights
  CUPS - to share printers
  Wireguard - (connected to the cloud)


Even though I sold mailcow and stopped working on it in the past months, it warms my heart SO MUCH to see people using it. :) Really, thank you.

I have some ideas for a proper successor, which would be much more scalable, modern and flexible, and would focus more on using existing transports for piggybacking mail. Fully and safely encrypted mail storage, as well as a nice interface to invite people to encrypted sessions etc. Smtpd and imapd using modern Python modules.

Providers found out it was easier to block private senders instead of developing good filtering mechanisms. I kind of understand this decision, but spammers simply abuse services like Sendgrid or create spam accounts on MS.

So… nothing has really changed with regard to spam. We just lose actually *wanted* mails or have to look into the Junk folder more often, all while newsletters get the priority lane.

That makes me unsure about developing anything in this regard. :(

André


Hi André! I just wanted to thank you for mailcow! We used it a few years ago at my previous job to get a new domain up & running and got a professional, best-practices-configured email server in no time. Great experience for day-2 operations as well! Nothing like the other mail solutions available at that time.

Wishing you the best in your next endeavors!


Thank you! :)


To the new owners: I googled mailcow, got to the web site and found nothing to explain what it is, not even in the docs. Eventually I noticed the "People also ask" section in the Google results page:

> What is Mailcow?

> fully managed by Elestio. Mailcow is a Docker-based email server, based on Dovecot, Postfix and other open-source software, that provides a modern web UI for administration.

Hopefully this is correct but who knows.


Hi, it is now owned by tinc.gmbh

I sold it in 04/2021


Hi André.

First of all, thanks. I hosted my own email 20 years ago, and when I saw how easy it was with mailcow, I decided to try again.

Looking forward to the mailcow successor. Is there anything up already?

Cheers.


Hi :) Not yet. I’m in a very burned out mode right now. Need to settle a bit.


Oh, and thank you for your kind words!!


I absolutely _loved_ mailcow. Worked out of the box way better than my hand rolled solution, and ended up using it for some small orgs too.

Thanks for all your work on it!


Thank you very much. :)


Wait, what? Was mailcow sold? What does it mean for future development and features? Should we start to look for an alternative?


> 2x of docs.paperless-ngx.com/ (one for me and one for my partner) - I scan and destroy most of the letters I get.

Ah, interesting. I have been considering finding a solution like that. How do you like it? Are there other alternatives you considered?


There are alternatives. I chose paperless because it was the most mature open-source solution.

Be aware that there is paperless, paperless-ng and paperless-ngx.

When I did the setup last year, paperless-ngx looked like the most maintained.


> When I did the setup last year, paperless-ngx looked like the most maintained.

Just FYI, neither of those was a hostile fork; in both cases the community took over after the former maintainers abandoned the project. NGX is indeed the active version.


That's clearer. Thanks


I used paperless for a while but ended up just saving pdfs to directories instead. I can find what I need without fancy OCR features by organizing and naming files sensibly. And I am fairly certain this will still work in 40 years, while paperless, or even Linux, might no longer exist.


That was sort of the approach I had been planning to go for (manual scanning and putting it in directories that get backed up to the cloud). I have a pretty good scanner app on my phone, so scanning an important document or invoice right after opening the mail, uploading it to the document storage, and then shredding seemed pretty low-fuss.


Which scanner app are you using?


In the end, paperless also stores the original PDFs in a directory. You get some more goodies from it, like automatic OCR and a web interface, but your baseline is also included.


Why not do both? Use rsync with a filter list to do a one-way sync into the paperless consume directory, so whenever you drop a new document into your original folder structure, it gets copied to the consume directory and paperless picks it up.
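
For illustration, a minimal sketch of that one-way sync (paths, the PDF-only filter and the duplicate-handling note are assumptions, not anyone's actual setup):

  #!/bin/bash
  # Copy matching documents from the normal folder structure into the
  # paperless consume directory, never the other way around.
  SRC="/data/documents/"
  DEST="/data/paperless/consume/"

  rsync -av --prune-empty-dirs \
        --include='*/' --include='*.pdf' --exclude='*' \
        "$SRC" "$DEST"
  # paperless removes files from the consume directory after importing them,
  # so a later run may copy the same file again; paperless-ngx de-duplicates
  # by checksum, so re-consumed files should simply be skipped.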


I can recommend paperless. I am running it locally in a Docker container and sync database/files with iCloud (so they are backed up). I've been thinking about putting them on the actual web to access them anywhere, but so far having paperless "just" on my computer was enough.


Thanks for sharing this list! I wasn't aware of nginxproxymanager. This is something I've been doing semi-manually for years. It looks like it'll slot right in and save me some work.


How do you keep the server hidden from the public?


I just don't assign public IP addresses to the private VMs.

Public (with different public IPs) I have:

  * Mailcow
  * Postal
  * Code server (just a dev machine with public ip)
  * Wireguard
  * NginxProxyManager
Private (without public IPs) I have:

  * PiHole
  * containers (where I run most of the stuff)
  * Monitoring


One option is to use Tailscale or plain Wireguard.


You can run pihole in the cloud? That is very useful information.


You could also use https://nextdns.io. It’s basically pi-hole in the cloud.


I pay $20/yr for their service, it's so good, and I can quickly turn it on/off per device when I need normal DNS to work, instead of having to SSH in and tweak pi-hole or whatever across the whole network.


Why wouldn’t you be able to? Pihole is just dnsmasq with a frontend.


The biggest curse of the Raspberry Pi is a whole heap of stuff that has gone from “needs a UNIX server” to “but surely that requires an underpowered UNIX server with a very specific distribution of Linux”.


Pi-hole runs just fine in Docker [1]. You don't need a Pi to run it.

[1]. https://github.com/pi-hole/docker-pi-hole/


You should not open its port to the internet at large (look up: open resolvers), but yes, you can run it anywhere and access it over VPN, etc.
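
For example, a minimal sketch with ufw (interface names wg0/eth0 are assumptions) so the resolver only answers over the VPN, never from the public internet:

  ufw allow in on wg0 to any port 53   # WireGuard clients may query Pi-hole
  ufw deny  in on eth0 to any port 53  # refuse DNS queries from the public interface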


I've been using a Raspberry Pi as a home server, and it's been holding up amazingly well, given everything I've thrown at it:

- The excellent Home Assistant, for unifying across Homekit and Google Home and tracking historical temperatures and a couple of automations. The RPi has Bluetooth built in, so I can capture the data from a few Bluetooth thermometer/hygrometers running custom firmware (https://github.com/pvvx/ATC_MiThermometer) without a 802.15.4 bridge or similar.

- An AirPlay to Google Cast bridge, mainly for listening to Overcast or the occasional YouTube video on Google speakers (without subscribing to Youtube Premium/Music)

- An SMB server, for file storage and potential Time Machine backups (but I don't currently have enough storage, and locally attached SSDs are just hard to beat in terms of performance)

- A DLNA server, for watching photos and videos on my TV

- Tailscale, for the occasional use of my home connection as a VPN when traveling (really glad to have symmetric fiber for this!)

- Caddy, as a frontend for everything web facing, to benefit from its excellent Let's Encrypt integration for automatic certificate requests and renewals

Most of this is running in Docker containers and configured via Ansible, so that if the microSD card burns out (or I botch an OS update), I can just flash a new one with an empty image and recover from there.


All of my Raspberry Pis netboot, which means I never have to worry about a card burning out, and I can change what they boot into by just renaming a symlink on the server.


Seems like a smart way to do it, but it relies on having another always-on system acting as the server. OP's solution only requires the one device, the RPi, unless something needs to be changed.


If you run an open source router distro like OpenWRT or OPNSense, you can use it as the PXE Boot host. That's a device that needs to be running anyway.

I've seen a lot of people running their routers as a VM on something like Proxmox, and that gives you even more flexibility, but it does require a beefier server - one that could potentially replace all the RasPis, making the PXE boot redundant. :D


That's true. In theory you could use an extra Raspberry Pi itself as the netboot server. But since my home network already has a fileserver it was an easy choice.


Can you netboot from a desktop computer that's not always on? If the desktop is running when the Pi boots, does the Pi then not need the "server" (desktop PC) again until it needs to reboot?


No, because the root filesystem is mounted by NFS.
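
A rough sketch of what the server side of such a setup can look like (paths, addresses and the Pi serial number are placeholders; this is the generic Raspberry Pi TFTP-plus-NFS-root approach, not necessarily the parent's exact layout):

  # Each Pi fetches its boot files over TFTP from a directory named after its
  # serial number; repointing that symlink changes what the Pi boots into.
  ln -sfn /srv/netboot/images/raspios/boot /srv/tftp/a1b2c3d4

  # cmdline.txt inside that boot directory points the kernel at an NFS root:
  #   console=serial0,115200 root=/dev/nfs nfsroot=192.168.1.10:/srv/netboot/images/raspios/root,vers=3 rw ip=dhcp rootwait

  # /etc/exports on the server makes that root filesystem available:
  #   /srv/netboot/images/raspios/root 192.168.1.0/24(rw,no_root_squash,no_subtree_check)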


Which AirPlay to Google Cast bridge are you using? Thanks


This one: https://github.com/philippe44/AirConnect

And here's a containerized version that works with the Pi: https://github.com/1activegeek/docker-airconnect


I'm trying to buy an RPi but can't decide which configuration. What are the minimum requirements to run the AirPlay bridge and Home Assistant?


If you want a Pi (or alike) for specific reasons, go for it!

But, if you are looking for cheap and relatively low-power compute, I strongly recommend looking at used ultra small form factor PCs. You can get much more computational power and expansion, often for cheaper than a Pi. And eBay is riddled with these things, unlike recent Pi availability.

https://www.ebay.com/sch/171957/i.html?_from=R40&_nkw=%28len...

The pricing gets even better if you want to buy them in a lot.


I'm using a Model 4 with 4GB of RAM, but 2GB would probably also be ok. (I haven't measured peak memory usage, but I imagine that having some spare memory for file caching reduces read contention for the SD.)

Home Assistant can take a minute or two to start up, keeping all four cores quite busy, so I wouldn't recommend trying this on an older model myself.

If you want to use the Pi's internal Bluetooth module, obviously you'll also need one that has one and supports Bluetooth LE. Again, I can only speak for the Model 4 here, which works great for that.


In terms of minimum requirements, you could even run that on a 0 or older.

In practice, you probably want at least 3b+ for reasonable performance. If you're buying new anyway and can get them for MSRP I don't see why you wouldn't get a 4.


Rather than listing everything I'm hosting at my home server, I'll just share what saved me the most time, repeatedly:

Mirrors of various package repositories I use.

I'm currently mirroring npm, crates, Arch packages, clojars, maven and some other things, and all my machines (desktop, servers and laptops) point to the mirror rather than directly to upstream. Some of them mirror dynamically (basically a cache at that point; I do this for npm, for example), while for others I fetch the entire repository and keep it on disk, cleaning out old packages when needed (I do this for Arch packages, for example).

The biggest benefit is that downloading stuff and updating my machines takes seconds now, even if there are multi-GB updates to do, and a secondary effect is that I'm not impacted by any downtime from npm et al. Saved my bacon more than once.


I would be interested in doing some sort of hybrid mirror / cache of a few repos, if I could do them just in time style. I don't need all of pypi (nor do I want 13.5TB of packages). I probably only ever use a few hundred packages at most.

I would like to point all my systems at my server. And if I `pip install pandas` and it's not on my server, the server grabs it, passes it through, and syncs that package locally from then on. Same with yum, npm, docker, or whatever.

And I just realized I could use Artifactory as a caching proxy and at least save some time there. However, that doesn't mirror the package; it just caches that specific version. I would be very interested in something where the system sees that I use `pandas` now and will mirror it. Or give it a requirements.txt and it flags all those packages and dependencies for mirroring.



It looks like DevPi may just work as a caching proxy as well. I also found Bandersnatch (https://github.com/pypa/bandersnatch), which is a configurable mirror with allow/block lists.

Essentially I want something like DevPi, but one that adds the package to the Bandersnatch allow list and mirrors it from then on, with some extra-large packages in the deny list.

Probably possible to wire that all up reasonably well, but just using a caching proxy is probably 90%+ of the improvement anyway, so I may just stick with that.


I don't know of any particular solution for the python ecosystem, I don't use it often enough to justify dealing with that can of worms.

But for npm, there is Verdaccio which does exactly what you want, and is what I'm using.


Can you share a bit about how you set this up?

Also curious if anyone has taken this a step further and MITM Squid proxied their whole home to cache all responses >100mb or something like that.


Basically just a bunch of shell scripts for the most part, run with systemd timers on a PC-like server (consumer components) and served via Caddy.

The npm registry uses Verdaccio, Arch packages use https://gitlab.archlinux.org/archlinux/infrastructure/-/blob... plus what is outlined at https://wiki.archlinux.org/title/DeveloperWiki:NewMirrors, and clojars just reads the list of packages and downloads them one by one each day.

Not a unified or nicely done setup by any means, just thrown together to solve the problem at hand.
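
As a concrete illustration, the Arch side boils down to something like this sketch (the upstream mirror and paths are placeholders), run from a systemd timer or cron and served by Caddy:

  #!/bin/bash
  # Pull the full package repository from an upstream mirror that allows rsync.
  UPSTREAM="rsync://mirror.example.org/archlinux/"
  TARGET="/srv/http/archlinux/"

  # --delete-delay removes obsolete packages only after the new ones are in
  # place, so clients never see a half-updated repo.
  rsync -rlptH --safe-links --delete-delay --delay-updates "$UPSTREAM" "$TARGET"

  # Clients then point /etc/pacman.d/mirrorlist at the locally served directory:
  #   Server = http://mirror.lan/archlinux/$repo/os/$arch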


I set up Apache Archiva [0] to cache Maven binaries at work. It was okay.

[0] https://archiva.apache.org/


Gitea works really well for this - just choose "new migration" and set the remote up as a mirror - super easy.


I used to do that for my servers. It works quite well but packages update so frequently you don’t see a major benefit


How much space does it take to mirror npm currently? Tried finding that online but not sure where to look.


I last did it some years ago, in order to dig out some statistics; it was around 1.8 TB then, just counting the latest available version of each package (not every version of every package).

But as another commenter said, I'm now only caching packages that have already been fetched, as I have no need for everything in the registry (most of it is junk, to be honest).


It sounded like they're essentially caching npm, not mirroring the whole thing

> Some of them are mirroring dynamically (basically a cache at that point, do this for npm for example)


NextCloud, Home Assistant, Paperless NGX, Minecraft, Caddy for some Hugo sites/blogs, UniFi Controller, Vaultwarden, Traefik, Sabnzbd. Used to do WireGuard but now I use Tailscale, plus AdGuard Home and FoundryVTT.

It's all docker-compose. I'm thinking of taking some services off the internet using Tailscale; some are already just on the tailnet (Home Assistant and Paperless NGX), and all my SSH ports are now only open to the tailnet as well. I love Tailscale, except for the battery drain on my iPhone (which wasn't an issue with plain WireGuard)...

Btw, this is my hardware: https://blog.hmrt.nl/posts/personal-cloud-infrastructure/#ha... (beware, the rest of the post is somewhat dated).

Oh, I moved Paperless and Home Assistant to a NUC, mainly because I'm working on the house a lot and I really need those online. Those were also set up with Tailscale in mind, and the advantage is that whether that NUC is plugged in, on WiFi or at my parents' place, all services (incl. SSH) are still available at the same IP address (of course sensors drop when I move the NUC out of my network; there is no Tailscale for the Shelly plugs etc. :)). I'm thinking of a similar setup for my NextCloud now. Always innovating, it's a nice hobby.


>I love Tailscale, except for the battery drain on my iPhone (which wasn't an issue with plain WireGuard).

Netmaker is a good self-hosted alternative that utilizes Wireguard.


> I love Tailscale, except for the battery drain on my iPhone

Same, I hope they can fix this soon. :(


I've run into this too, and there's been an open issue on this for some time now:

https://github.com/tailscale/tailscale/issues/3363


Yah it really does destroy my battery on iOS.


“It's all docker-compose.”

Are you using a utility to manage docker-compose or just setting it up on the CLI (e.g. started with a systemd service)?

I run docker-compose via CLI on my server but it’s not always convenient to ssh in to check on something.


Since it's all a YAML file, I just use VS Code's Remote-SSH to check (files open in VS Code, and one click opens a shell at the bottom, e.g. for docker-compose ps; the nice thing is that Remote-SSH also allows dragging and dropping files to the server without needing anything else, though I rarely need that). If you add the `restart: unless-stopped` line, all containers just come up again after crashes/reboots.

On Arch I just `sudo systemctl enable docker` (after `sudo pacman -S docker docker-compose`); I wrote a bit about it here: [0]. This starts Docker automatically, and Docker automatically starts any containers with the above-mentioned line in their service entry.

[0]" https://blog.hmrt.nl/posts/wordpress_using_swag/


Look into portainer


Very cool! Since this is all exposed to the internet, how do you keep it secure? I’ve got a spare laptop and a static IP, but I’m concerned about exposing my home server to attacks. Right now I’ve just got it all running in Tailscale, but I’d like to safely host public-facing apps too.


I religiously keep everything up to date. For anything exposed to the public I use HTTPS (Traefik does Let's Encrypt, Caddy as well) and set up 2FA, for Nextcloud for example; there is also a brute-force protection app for NC.

Some services (HA, my Minecraft servers, Paperless, in fact most of them) I would indeed feel less comfortable exposing, and I don't. But I use NextCloud to share large files with friends, so it needs to be internet-facing, as do the blogs; Hugo generates static sites, though, so those are quite secure (like the earlier-mentioned blog).


Thank you for the explanation! It sounds like a pretty solid system.


One extra layer I put on my externally facing sites is a simple auth prompt (after the redirect to HTTPS!) as an unlikely-to-have-a-compromise gate before any logon for a self-hosted service. You can make it a fairly easy-to-remember username/password for anyone you want to share your self-hosted apps with, since it's a mostly irrelevant extra step just to guard against exploits in more complicated software stacks.
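
A hedged sketch of that front gate, assuming an nginx reverse proxy (the commenter doesn't say which proxy they use; names and paths are placeholders):

  # Create the shared credential file (htpasswd comes from apache2-utils).
  sudo htpasswd -cB /etc/nginx/.htpasswd shareduser

  # Then, in the server block that proxies the self-hosted app:
  #   location / {
  #       auth_basic           "private";
  #       auth_basic_user_file /etc/nginx/.htpasswd;
  #       proxy_pass           http://127.0.0.1:8080;
  #   }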


I started using Traefik as my load balancer, which supports authentication middleware. I rigged up Keycloak and forward-auth to handle external services that either do not support authentication or have a weak security profile. A poor man's zero-trust setup.

Here is the blog I used to get things started: https://geek-cookbook.funkypenguin.co.nz/docker-swarm/traefi...


Neat! Thanks for the tip. I might integrate this into some of my auth, but I'll probably keep using simple auth at the very front; its age and absolute simplicity make exploits unlikely.


In addition to the other comment, look into fail2ban: it's brute-force protection that isn't application-specific; it can be configured to protect from brute force any service that logs login attempts somewhere.


> docker-compose

How do you deal with backups? That’s my main struggle with docker.


I use Borg (https://borgbackup.readthedocs.io/en/stable/) with some Python and Bash scripts.

All my containers write to the same volume mount (e.g., /mnt/docker_share/$service_name); the scripts shut down the LVM, run the Borg backups, sync the files to rsync.net, and turn the LVM back on when the backups finish.
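
A rough sketch of that flow (the repository path, source mount and rsync.net target are placeholders; the service stop/start and LVM handling around the snapshot are omitted):

  #!/bin/bash
  set -euo pipefail
  export BORG_REPO=/mnt/backups/borg

  # Snapshot everything the containers write into one dated archive.
  borg create --stats --compression zstd \
      ::'docker-{now:%Y-%m-%d}' /mnt/docker_share

  # Thin out old archives, then push the whole repository off-site.
  borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6
  rsync -a --delete /mnt/backups/borg/ user@rsync.net:borg/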


Not my home but my parents' (as I'm a "nomad"). They recently built a new house and put me in charge of the tech with a handsome budget, so I built a rack just as if it were my own place. It ended up costing a quarter of what other "smart home" companies quoted, and we got way better bang for the buck and software capabilities.

On the server/NAS, a QNAP TS-464eU (4x 4 TB HDD in RAID 5 + 2x 1 TB SSD in RAID 1), I'm running Container Station with 4 Docker containers:

  - Home Assistant => For all the home automation, displayed in the kitchen on a tablet.
  - Adguard => To remove internet trash and protect my parents when browsing the web.
  - NextCloud => For contact, calendar and file sharing in the family as well as backups.
  - Caddy => Reverse proxy to make NextCloud available from the outside.
Computers (including mine) are backed up daily via NextCloud. The NAS is also backed up off-site with a cloud provider.

I did a more comprehensive list of the setup[0] if you are interested.

[0] https://www.craft.do/s/W8r9KufHct0zG7


As someone who gets caught in the occasional "could you just fix x – what do you mean y is not working? It always worked before you got here" trap, essentially signing up to run your parents' entire smart home setup, indefinitely, feels like some special kind of hell.


My choice with their new house was pretty simple. They wanted a smart home, and being the tech son, I would inevitably have to help them at some point. So either I debug a system that is open and familiar, with remote access and reliable software/hardware, or I deal with some closed proprietary trash which will end up with me on the phone with some incompetent cash-grab company. As mentioned in my post, the latter would have cost 3 to 4 times the price of all the hardware we bought (see the link), and we would just get some SBC or cheap tower instead.

With the current setup, I can easily connect via VPN to configure network devices. Ubiquiti also makes it very simple to apply network changes or upgrade things remotely (as long as it's all from the same brand). I set up extensive monitoring and alerting to proactively resolve issues. I also gave my parents some training and docs on how to plug things into the Ethernet sockets (if they want to set up a new printer, for example) and what to do to bypass the router (rewiring) in case the main router fails.

As a side note on helping my parents: they are both 70, not the most tech-literate people, but they manage to do all the basic things without much trouble. I put them both on Ubuntu 8 years ago and have never had any issues whatsoever. I get maybe one message or call every 6 months with a question, but it's usually web-related. Once in a while when I get home I update the distro, but that's pretty much all the maintenance I do. I have TeamViewer on all their devices, just in case, but I've never really had to use it so far.


> They are both 70, not the most tech literate people but they manage to do all basic things without much trouble. I put them both on Ubuntu 8 years ago, never had any issues whatsoever. I get maybe one message or call in 6 months for a question

Okay, so can I hire your parents?


I had a few questions if you don't mind answering as I'm doing something similar for a vacation home (and my own, but this is more relevant to the remote one).

    How many IoT devices are you working with?  
I don't have the best router up north and some of my lousier routers can't handle more than about 25 devices before they start crapping the bed.

    Do they disconnect often from your router (and do they successfully reconnect)?
Similar router problem, though I've found some of the finickier devices I own can have problems in my home, where I have a number of options for connections.

    If you haven't had these problems, what router are you using?

    What about "the internet is down"?
This was less of an issue when everything was Z-Wave/ZigBee, but all of the cheap stuff is WiFi. This mostly only concerns "the lights" and "the plugs". I don't want a switch to stop being able to control devices it's not directly attached to if home assistant or the target device can't reach the internet. All of the ones that I own broadcast state and can accept commands via UDP over the local network so I was thinking of writing something that HA could call, locally, which would issue those commands and receive the statuses (so they'd always be local-only).


I only have a few (fewer than 10) IoT devices using WiFi, and they are spread over 2 fairly good access points. The rest are either wired or using ZigBee. My router doesn't have WiFi functionality itself; it's a Unifi Dream Machine SE[0] with the APs and cameras connected over PoE. It's plugged into a UPS, so the internet should hopefully still work if the electricity cuts out.

Regarding your issues, I'm not an expert, but I would recommend deploying more access points (pick some good ones) in the areas where your IoT devices are. The best advice I have is to wire your access points; I always had bad results when bridging them wirelessly. Although if you are in a remote area with little interference, you could get better results.

[0] https://store.ui.com/collections/unifi-network-unifi-os-cons...


There’s another aspect though - when you come to sell the home it’s easy to market Crestron, Control4 and others specifically because they are standard (albeit expensive) solutions and have a whole ecosystem of consultants who can be brought in to diagnose and fix issues, upgrade stuff.

With DIY you are usually left with, at best, ripping it all out and selling the house without any "Smart Home" promises. I guess you can still market that it has structured cabling to enable smarts, though?


Yeah, that's a good point. In our case, all smart systems can work independently (via their own app or physical controls) or be integrated with other smart home platforms. It's true that if they ever move out, the HA instance might be gone, making the house lose a bit of its brains when it comes to automation. But the base to build on is there, which still has some value, I guess. And the rack with Ethernet in all rooms has some value too.

Edit: The only "problematic" part would be the cameras. Not sure if Unifi cams can work with other platforms than Ubiquiti's. But in any case, the wiring is there (Ethernet) and the cameras can be replaced or a new owner could bring their own Unifi surveillance console (or we could sell it with it).


RE: UniFi, they need to each be flipped into "Standalone" mode and then you just follow the instructions over here:

https://help.ui.com/hc/en-us/articles/221314008-UniFi-Video-...

I mostly ended up with, and recommend, Axis for IP surveillance; it has nice MQTT integrations and recessed mount options (https://www.axis.com/products/axis-t94s01l-recessed-mount), but almost anything that can do RTSP will work with either a COTS NVR or something like Frigate (https://frigate.video).

Another thing to read up on and use when shopping for IP surveillance is ONVIF and the various profiles offered!


In my experience, signing up to do it or not is irrelevant, because I'll end up doing it anyway since they're my parents. So I'd rather have full, easy remote TeamViewer/SSH access to stuff I know works, and know how it works because I set it up, instead of going there to debug some crap they downloaded or bought in the app store/mall that claimed to do x and now doesn't work and now nothing is working, etc. etc.

Still can't avoid having to go and fix the goddamn printer at times, however. Printer driver programmers/designers really make me question whether politicians should be the most hated profession in the world.


> Printer driver programmers/designers really make me question if politicians should be the most hated profession in the world.

Trying not to be snarky but really printers are very mechanical I/O devices. I don't think the driver programmers have much fun with their jobs, having to deal with seemingly greedy product requirements and lots of variation in real world use (humidity, paper type, ink quality, yada yada). When I start thinking about this and couple it with my own stupidity, I am amazed anything works at all. How do you even do automated integration tests on a printer?

I'd like to think I am about an average programmer and I am reminded by my own actions everyday that I know nothing. I am constantly learning (and forgetting) new ideas every week.


While it is obvious my comment was hyperbolic, the underlying sentiment still stands regardless. I can only be charitable so far: I'm not complaining that the drivers don't make my color image perfect on a vinyl postcard, not even complaining that I need to realign the heads or whatever every other time I use the printer; those can be mechanical issues and are probably not their fault. I just want to print black and white on A4, and a color print every month or so.

But the consistent issues almost every single home printer I've used has had with regard to connecting to computers, both wireless and wired, are not reasonable. There's a plethora of other issues, but those are more on the software-in-general front than the driver front, so I'll leave them out of it.

All these problems were on a Windows machine, by the way; surprisingly enough, I've had fewer headaches with printers on Linux than Windows. Maybe the lack of software layers to muddle shit helps here, who knows.

> I don't think the driver programmers have much fun with their jobs, having to deal with seemingly greedy product requirements, and lots of variation in real world use (humidity, paper type, ink quality, yada yada).

That's a normal job; everyone has greedy product requirements outside of VC money pits. Also, I don't think their job in the capacities I complained about is very mechanical at all; yeah, it's very much I/O, but that doesn't mean much.

>I'd like to think I am about an average programmer and I am reminded by my own actions everyday that I know nothing. I am constantly learning (and forgetting) new ideas every week.

People learn stuff and do their jobs; it's not rocket science. Nobody knows everything or can hold all the information in the world in their head. As far as I'm aware, printer connectivity isn't some open CS question; it's not a new field that needs exploring, it should have been explored by now. Yes, of course drivers are a bit of a moving target regarding support for different architectures, but printers aren't the only things that need drivers, and yet they seem to be the only ones consistently having this issue for as long as I can remember (maybe network cards as well?).


My colleagues and I call this the "Ever since you..." and it's one of our longest-running chuckles.

"Ever since you installed that ad-blocker, the Wi-Fi signal in the den is really weak"— uh, that's not how that works.

It's why I make every effort to never touch or even give advice on technical matters to friends and family anymore. Almost zero upside and unlimited downside—getting angry texts and phone calls at all hours of the day and night, and offering free support for life (and almost always accompanied by zero thank you's).


I thought the same. Having run HA at my own home for a couple of years now, there's no way I would take ownership of someone else's install. I love the setup; it's just not a "non-tech-savvy-friendly" ecosystem. That's what I assume you pay for with the COTS ones.


My in-laws built a new house 3-4 years ago, a fairly nice, modern house. But the whole tech side felt like it was using technology that was already dated in the '90s.

The TV goes into a receiver in the next room, with a DVD player and Apple TV connected to that receiver. 5-channel audio from the receiver. Several zones of in-ceiling speakers, also run by the receiver. Some knock-off Logitech-like smart remote control, because it's easy for the installer to program, but it's fairly hard to use and not something we can customize. CCTV cameras connected by coax to a central controller; I think the resolution is 640x480. One simple WiFi AP trying to cover the whole 4600 sq ft house.

For my own home, I went with a Google TV with a soundbar and HDMI-CEC, and I get full control with a single remote. For whole-house audio I use a combination of Google Home speakers and portable Bluetooth speakers. Much simpler and more flexible.


The setup your in-laws have probably doesn't rely on the cloud or report everything they watch to advertising companies, unlike your setup.


They have cable for their primary viewing, which means that, sure, Google isn't seeing it directly, but their cable provider has all the details of what they are watching then. Honestly, I'd trust Google with that information over Xfinity or whoever they have for cable. Verizon?

I realize that is a concern for some, that is not a concern for me.


I have accepted that most of what I do is recorded and tracked. As such, I have a few options... opt in, opt out, or a hybrid approach. Google provides enough value to justify the cost (IMO). For this reason, most of my products are from Google. I'd rather consolidate my data with a single company rather than spread it out across many.


I've got a slew of different computers doing different things. All of them are networked together via Tailscale.

Ubuntu 22.04 Server for the host; everything else runs in LXC containers. This is all set up on ZFS.

- https://znc.in/ IRC bouncer

- https://caddyserver.com/ Caddy Webserver for a few personal websites

- https://github.com/AndroidKitKat/waifupaste.moe/ My personal pastebin

- https://transmissionbt.com/ Torrent client that I actually use for Linux ISOs. Primarily seed different versions of Ubuntu and the latest Arch. I am looking to seed other, lesser-seeded distros, too.

- It also runs Samba

A second, dedicated computer also running Ubuntu Server 22.04. It only runs https://pleroma.social for me and a few of my friends.

A third computer, this time an M1 Mac Mini that is my Plex box. It's running the latest version of macOS Ventura and runs all the *arrs and qBittorrent. It also runs Plex itself, because it's one of the only computers that I found that was low power enough but still supported hardware transcoding in Plex. I've been meaning to find a replacement for it running Linux + an AMD GPU (I have an rx470 sitting around somewhere), but no real good deals have turned up.


Your hypothetical new Plex box does not need a discrete GPU. Plex makes really good use of Intel Quick Sync, and transcode quality has been indistinguishable from or better than NVIDIA since about 5th gen. A Celeron G4900 (8th-gen dual core, 3300 PassMark) has been benched as capable of 21 simultaneous 1080p transcodes.[0]

TL;DR: pick up a NUC knockoff with an 8th-gen or newer Celeron or i3; it'll handle anything a casual household can throw at it.

[0] https://forums.serverbuilds.net/t/guide-hardware-transcoding... this is far and away the most comprehensive hardware guide for plex servers.


The second you want multiple streams of higher quality than 1080p 10 Mbps, you will want a dedicated card. I got a <$200 old Quadro card (can't remember which), and it can handle pretty much everything I've thrown at it now.

4k TVs are common, planning for 1080p use is a mistake at this point imo.


1) if your device plays 4k, then what are you transcoding from? That's going to be the determining factor, not the 4k output stream. Certainly in a home situation, that's likely to be direct stream. If your source files are 8K or higher, I wouldn't consider you a normal home user.

2) Quick Sync on a 10th-gen or newer CPU is benchmarked at 4-6 simultaneous 4K streams, but a lot depends on the details: container format in particular matters a lot, and so do your transcode options. Color matching, for example, is only ever done on the CPU. And IIRC AV1 and VP9 are not supported by Quick Sync (or older discrete GPUs).

That said, it's completely correct that you should measure all advice against your actual output device capabilities and source quality.


This is very true, QuickSync falls over very easily when trying to transcode even a single 4k stream.


Depends on the details of your transcode:

- what's the source/destination quality?

- what's the container format? Quicksync doesn't support them all; even the newest gen is missing av1 and VP9 IIRC. But then, the same can be said for discrete GPUs.

- what filters are you applying? Many (most?) filters are CPU-bound, which will kill you really quickly.

But for supported formats without filtering, 12th-gen Intel Quick Sync is regularly benchmarked at 4-6 streams.

One should always measure internet advice against one's actual use case. Personally, I would argue that if you have a library of 8K video and more than 4 4K devices, you don't qualify as a "regular" home user. But if that's the case, hardware up, my friend!


10th-gen i5 could not handle a single stream with most of the 4k content I tried, even though it was benchmarked as working. So if you're trying to transcode from varying sources I would imagine most people will end up wanting a discrete GPU.

If you're ripping your own content and can ensure that everything is set up right so that the integrated GPU can handle it, sure, it will probably work.

That being said they could have made giant strides in this area on 11th-12th gen, I have no idea.


As I mentioned elsewhere, check the details of that transcode you tried. Are you doing tone mapping? That happens on the CPU (even with many discrete GPUs). Is it in a supported container format? Did you validate that the GPU was being used for the transcode?

Other 10th gen I5 users get multiple simultaneous 4k streams going at > 1x speed. Even with tone mapping it should only use like 45% CPU for a single stream.

Unfortunately the difference between software and hardware development cycles means that when you're on data formats that came to prominence in the last 2 years, even the newest hardware may not support it yet.


Or I could just solve the problem by putting an old 1070 in the box and not have to babysit something I have literally zero interest in.


Sure, it's always cheaper to use the hardware you've got than to buy something new! And you don't lose much: AV1 and VP9 compatibility is the same, and your home probably doesn't use more than NVIDIA's maximum of 3 simultaneous transcodes anyway. And IIRC that's about the maximum 4K transcode throughput of a 1070 regardless.

Personally I moved from NVIDIA to quicksync exactly because I felt NVIDIA required too much babysitting. Driver compatibility, interactive mode updates, patches and patch compatibility... In comparison the intel driver felt like a cake walk. But you should definitely go with your own comfort level, especially for the price point!


> NVIDIA required too much babysitting

This is on a headless box so thankfully I've never run into any of those issues, I just install their driver and pass the GPU through to my Docker container.


The place your specification comes from is a thread posted in 2019 talking about a Dell prebuilt. Do you mind posting the link for the NUCs? Or is it in that thread somewhere? Or is that Dell actually a NUC?


The thread has the benchmarks, links to eBay searches, and lists of prebuilt machines (like the Dells) that meet the requirements, since they're often the cheapest way to meet the spec (according to TFA). You may have to scroll up.

I just searched eBay for used Celeron NUCs and got lots of options like this one, with a 9th-gen Celeron, for a hundred bucks. NB the generation matters much more than the CPU model, but some operations are still done on the CPU, so if you can get an i3 or i5 it will make a marginal difference.

https://www.ebay.com/itm/115640978606?hash=item1aecbd4cae%3A...


Do you automate your Linux ISO seeding? Like getting updated torrents when a new version is released. I have been thinking about this from time to time, and haven't come up with a solution other than scraping.


I just do this. Update the ubuntu line every 2 years or so. This runs as a cronjob on my synology with the working directory set to a directory that the Synology "Download Station" is watching. It picks up the torrents and does its thing. I come in every now and then and clean out the dot releases. It's not the best but it's not the worst either.

  #!/bin/bash
  # Grab the torrent files for the current Debian DVD and CD images
  wget -nH --cut-dirs=4 -r -l1 --no-parent -R "*.tmp" -A "*.torrent" https://cdimage.debian.org/debian-cd/current/amd64/bt-dvd/
  wget -nH --cut-dirs=4 -r -l1 --no-parent -R "*.tmp" -A "*.torrent" https://cdimage.debian.org/debian-cd/current/amd64/bt-cd/
  # Same for the current Ubuntu LTS release
  wget -nH --cut-dirs=4 -r -l1 --no-parent -R "*.tmp" -A "*.torrent" https://releases.ubuntu.com/22.04/
  # Drop the torrents I don't want to seed, plus wget leftovers
  rm *-DVD-{2,3}.iso.torrent debian-mac-*.torrent *.tmp *.loaded


Wow, thanks!

Much simpler than my glue using changedetection.io and huginn.

I also learned some new tricks with wget.

For others interested: https://explainshell.com/explain?cmd=wget+-nH+--cut-dirs%3D4...


I pretty much update things manually when I think of it (usually every ~2 months or so) or when I am downloading a new version for whatever reason (like to flash yet another computer I picked up). I've been looking for something to do in my downtime, so I might see if I can whip something up to automate updates; could be a fun project.


Right now I have changedetection.io call a Huginn webhook to alert me when a new release is up.


> Torrent client that I actually use for Linux ISOs

It's okay boss, plenty of us pirate stuff too. You can just admit it.


Maybe you do, but some of us are actually, unironically, torrenting Linux distros.


I've run a Raspberry Pi (model 1, then 2, 3, 4) since forever, which has been fun. Switched to an Intel NUC recently as I had a spare and needed the compute power. Being able to run on 32 GB RAM with an NVMe disk feels good, but the Pi has served my needs pretty well...

- plex for streaming media

- external hdd that a friend uses as offsite backup (he has mine)

- home assistant, mostly fed by data from the...

- mqtt broker, that ties the sensors around my house together

- postgres, for long term reporting and predictions, mostly with data from...

- some cron jobs that scrape weather data and energy prices (they change hourly, sometimes going negative)

- security camera (a shell script saving an RTSP stream; see the sketch at the end of this comment)

- a Docker container that I can SSH into from anywhere, which allows backing up the iPhone photo roll into my photo backup folder using the "PhotoSync" app

Soon (I tell myself) I will analyze the security camera stream with YOLO or something to detect the cats that piss against my bikes... hehehe
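
For reference, a minimal sketch of such an RTSP-saving script (the camera URL, credentials and paths are placeholders): copy the stream without re-encoding, split into hourly files.

  #!/bin/bash
  ffmpeg -rtsp_transport tcp \
         -i 'rtsp://user:pass@192.168.1.50:554/stream1' \
         -c copy -f segment -segment_time 3600 -reset_timestamps 1 -strftime 1 \
         '/srv/cctv/cam1-%Y%m%d-%H%M%S.mp4'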


Any particular security camera that you use?

I have been considering building a bespoke home system with Elixir and am slowly building some APIs and ideas.


A very shitty Tapo camera made by tplink. It was cheap and exposes its RTSP stream on the network with some URL.


Do the cheap NUCs let you install more than 8GB RAM?


Define cheap. I'm running an Intel NUC d54250wyk (launched 2013) which takes up to 16GB DDR3L. You can't definitely find that class of hardware got cheap on eBay these days. Update SSD and memory and it's still great.


Stupid auto-correct. Should be: "You can definitely find that class of hardware for cheap on eBay these days."


Not sure which those are, but all the J/N 4000 series CPUs are only rated for 8 GB, yet I never heard of anyone having issues with 16, though not always at the maximum supported frequencies.


I have a 6th generation with 32GB:

    hw.model=Intel(R) Core(TM) i3-6100U CPU @ 2.30GHz
    hw.physmem=34227793920


Yeah it's a i10 NUC (not exactly super cheap), which takes 2x16 gb without problems. I'm running debian...


Running k3s on a small cluster of mini pcs and RPis.

Use Tailscale for MagicDNS and access from any network.

Have a custom wildcard domain pointing to my Tailscale k3s node IPs, and a Traefik ingress controller. This means exposing a service from my cluster on a subdomain just requires creating an ingress object in k3s, and it's only accessible via Tailscale. cert-manager and Let's Encrypt handle TLS.
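
For illustration, a hedged sketch of such an ingress object (the service name, hostname and issuer name are placeholders; in a gitops setup like this one the generated manifest would be committed to the repo for ArgoCD rather than applied by hand):

  # Generate an Ingress manifest for a hypothetical "whoami" service, routed
  # through the traefik ingress class with a cert-manager-issued certificate.
  kubectl create ingress whoami \
    --class=traefik \
    --annotation cert-manager.io/cluster-issuer=letsencrypt \
    --rule="whoami.home.example.com/*=whoami:80,tls=whoami-tls" \
    --dry-run=client -o yaml > apps/whoami/ingress.yaml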

All services are deployed via gitops using ArgoCD, so changes are auditable and can be easily rolled back. Replacing hardware is just a matter of installing k3s and joining the cluster, then everything automatically comes up.

Restic for backups to s3.

For home automation I use a USB zigbee controller, mosquitto, zigbee2mqtt, room assistant, and home assistant, all deployed on k3s. These control my lights, HVAC, and various garage doors and gates. Also have mains-powered zigbee switches bound directly to devices so everything still works even if network or home assistant goes down.

The RPis are used for Room Assistant, which can automatically control lights/HVAC based on presence detection via a smartwatch. More intrusive actions (e.g. making lights brighter when already turned on, opening blinds) are pushed to the smartwatch for confirmation.

Grafana/prometheus to monitor sensors.

For media, jellyfin and sonarr/ radarr. The native Jellyfin app works very well on modern LG TVs.

Pihole to block ads on any device connected to Tailscale. Works globally.

Right now it's zero maintenance, and changes are automatically synced after a git push, so I almost never SSH into the servers directly.


Always love seeing someone else create a solution similar to your own (albeit likely better!).

I have the same setup with k3s running on a couple of Pis. You have a nice CI, but I decided to use cdk8s[1], which lets you compile TypeScript into K8s manifests. For access I did almost exactly the same but with Cloudflare Tunnels (might look into Tailscale). Stealing the zigbee2mqtt and Room Assistant ideas.

Where do you store volumes? I eventually just bought a NAS and mount persistent NFS volumes off it.

1. https://cdk8s.io/


cdk8s works really nicely with gitops.

> Where do you store volumes?

Back when it was a single-node cluster, I just used hostPath mounts with restic backups. I added Longhorn once the cluster grew, but there are still some local hostPath mounts left around. For example, zigbee2mqtt needs to be on the node that has the zigbee controller plugged into it, so the node is tagged and zigbee2mqtt has a nodeSelector. This means the hostPath still works and I haven't needed to migrate it to Longhorn.

Longhorn initially scared me off with its relatively high listed resource requirements, but after configuring it to not run on the RPis it turned out to work quite well, most of the time just using a few percent CPU.


Thanks for writing almost exactly the post I was going to write. differences:

I don't use Tailscale; I just port-forward from my router to the k3s ingress IP, since that's fixed anyway. Accordingly, k3s handles Let's Encrypt certificates. My router has a built-in OpenVPN server.

I haven't moved to jellyfin... yet. Plex is super slick and runs nicely in the cluster. I've learned to keep it version locked though, to avoid regressions and unwanted new "features", which means jellyfin is only a matter of time.

I also run Nextcloud, and photoprism for my photo library.

Storage is on a built-from-scraps 16 TB NAS, which backs up to Azure Blob with Duplicity, plus Longhorn for block-based storage (since lots of services nowadays prefer SQLite, which breaks on NFS). Yes, I do need that space; I run an entertainment company and we store a LOT of video and audio. Not to mention media for Plex!

I have many times considered moving most of this to a cloud system, but the cost is prohibitive. If anyone can find 13+TB of storage and transcode- and ML-capable hardware (for plex and photoprism face recognition) for less than $45/mo (my cost of electricity plus annual amortized hw cost), I'm interested.


This was basically my holiday project - Photo: https://i.imgur.com/2AnP4pu.jpeg.

I'm running Longhorn for storage but haven't figured out backup yet, and haven't got to grafana / prometheus yet.

I put my work up on GitHub as well: https://github.com/inssein/mainframe. I wanted to create a separate ingress controller for internal dashboards, but for now I just set up a separate nginx-ingress for internal and use Traefik for external; it feels wrong.


> The native Jellyfin app works very well on modern LG TVs.

My experience on a CX OLED has been hit-and-miss. Freezes, crashes, and sometimes it just hangs when skipping and I have to force-close it.

I really want to like Jellyfin. I run it beside Plex and use both. I find Plex user-hostile but it still gives me a better video playing experience more consistently.


Interesting, I have a CX and a G1 and it works flawlessly. I didn't do anything special so might have just gotten lucky.


> changes are automatically synced after a git push, so I almost never SSH into the servers directly.

Can you elaborate how you're doing this?


I'll answer in his place: he said he's using ArgoCD and running everything on k3s. ArgoCD watches the files in a repo (Kubernetes YAML manifests, for example) and applies them to the cluster, so that the state of the running cluster (applications) stays synchronized with the git repo.


Reading this thread, someone needs to put the home server in a tiny box, preinstalled with all the apps and put a slick user interface on it. I think https://umbrel.com is a thing but it's not packaged with hardware. Something that packages the hardware and software together would be pretty killer. Plug and play. Instant email. All the apps. Easy migration from existing services.


Synology NAS products (using their quickconnect service) get pretty close to this. You have to install their apps from an App Store but that feels reasonable for even the average consumer.


Seconded. Just get a Synology NAS. I'm incredibly happy with mine, they really seem to make a great product on both the hardware and software side.


Part of the fun for me is the tinkering, and I'm not sure if non-techies "get" the benefit of running a home server. Maybe the tide is turning for the younger generation? I can only speak for most people my age (30-40), they are capable of interacting with technology but they have absolutely 0 understanding of how anything actually works.

"Why when I can just use Spotify/Netflix/iCloud/etc"? Is the answer I got when telling them about my own home server shiz


I think it's like most home devices: it comes down to ownership. We've gotten into the habit of streaming everything from someone else, i.e. we're in a cloud rental economy, but there's an opportunity to restore ownership. Maybe there's a generation that just wants to rent, but it wasn't really by choice; it's because ownership cost too much: ownership of homes, servers, storage, etc. We can reduce these costs, and we can make ownership of services a thing once more. Sure, you might need someone else to fix things when they break, but it's not like I'm doing my own plumbing or electrical work in my house either.

So it's really this idea of pulling all your services back into your home server. Something you own, something that then becomes private by proxy of that. Something primarily for you and your family, no one else.


On the other hand, one of the benefits of renting is that you don't have to deal with all the maintenance. It's fine to do a little maintenance, but if you do that for everything it just absorbs all your time.


I know of an avid ham, and one of his favorite jokes is "Why use the radio when I can just call them on my cellphone?".

At some point, stuff like home servers really are more about the tinkering rather than any practical desire or need.

If someone's only interested in getting from Point A to Point B, he's just not going to care how fancy or interesting the mode of transport is.


I love tinkering but not with my data. For my data storage I want a stable and tested solution. I have done enough "shit I destroyed my RAID array" for my taste.


For important documents (insurance docs, mortgage, etc) I use SyncThing, for media I use my RAID1 array. I'd certainly hope that RAID1 works as I expect it to if one of my HDDs shit the bed! It's a good point though, backups aren't really backups unless they're tested... brb...


And RAID 1 isn't a backup. If you delete a file accidentally, it doesn't help you.


I have snapshots on my NAS, so nothing is really deleted. I suppose in another 10 years I'll have to start pruning old files, but disks are cheap and I don't really have that much data.

For that matter I generally don't delete anything. If I save it to my computer odds are it is something I want to keep for the rest of my life (that is family pictures). There are a few distro images that I download once in a while and then delete, but for the most part disk is cheap, and I don't have time to delete old stuff.


RAID and snapshots aren't a backup either: lightning strikes/house fires/earthquakes/floods kill all the disks together.

A backup is a restorable copy in some location not sharing much fate with the primary location. Right now you might consider "all copies in one city" to have too much fate in common, but "all copies in one country" to be acceptable, or "all copies on Earth".


There are many levels. Raid with snapshots is a useful level, though not perfect.


YunoHost? Then put a sticker on a random box and you're done.

Seriously, it does most of what you describe: good UI, excellent user management between apps, and a great catalogue (500+ apps, all tested, with different levels of integration).


That is the easy part. What I really want is someone to administer the upgrades for me: make sure everything just works and all security holes are closed. Too often I get things almost working perfectly just as some software goes out of support, and the next thing I know everything is broken again after I make one upgrade.



unRaid is basically this.

Happy user of unRaid for years, just chugs away, easy updates, lots of apps available.


dietpi.com will get you halfway there


I would love for something like this to become mainstream, mostly from a privacy PoV. But, I think there are just too many different use cases, pitfalls, user problems and limitations. Something like a plug and play box could work for the very small intersection of "technically inclined and understands how to port forward and how to fix minor problems" and "not interested in tinkering with self hosting". As someone else already mentioned, a big part for people self hosting seems to be the tinkering aspect. For me tinkering and privacy are the main motivators.

On the potential problem side, you have far too much friction compared to cloud services. It starts with getting a public domain and IP for sharing data with other people. Then you need to set up port forwarding, need a somewhat stable network and internet connection, and run into problems with upload bandwidth usually being far less than download. Once you have the connection side taken care of, what about user onboarding, password recovery ("just mount the partition and overwrite the password" will not work), and backups? How do you make clear the distinction between my self-hosted Spotify and someone else's? Why can they not see my music? Why is there a difference in the first place?

I know all/most of this can be solved in some way, but it is hard. There is also never going to be a big (think billion-dollar) company involved in creating such a solution, as there is no monetization model. You cannot go for a subscription (IMO), because then you just pay someone to host something at your home using your power and bandwidth. You could say you only pay for support and maybe a relay to work around NAT, but what would the support look like? Full access via an admin account? Privacy just got downgraded. What happens if the company hosting the relay goes under? Enjoy your brick (local would still work, but the user experience would be far worse).

Apart from all these software and user experience problems, we still need the hardware. It needs to be reasonably cheap (US and EU market maybe 200-300USD), but it needs to be reliable (it will live in the worst possible place, like a hot, dusty cupboard) and support some more advanced use cases. Of course >90% will only use it for some light file storage, music streaming and voice control, but 10% will make heavy use of transcoding, invite others to their services, store TBs of media, run their entire home automation on it. How do these 10% know they are in the 10% and need to buy different hardware? Why can the box not do automatic quality adjustment like Netflix does? Do you have a hardware migration path?

And what about cross compatibility? In this thread alone you got multiple different systems. This just multiplies user support issues or will lead to vendor lock in, just a different kind.

Lastly, you need marketing to bring this to the mainstream. And you need a big selling point to make people migrate from "free" cloud services to something they need to pay upfront.

I hate to be this negative about this topic, but I just do not see this being viable outside a relatively small, interested community. Please, correct me here, I would love to see this become real and mainstream!


I wrote up a larger reply to this, but simplified it a bunch:

1. Hardware needs to be good enough. Raspberry Pi proved that. Ubiquity of common parts solves a bunch of issues. The main limiting factors here are likely transcode speed, ethernet speed, encryption speed.

2. Software needs a good modularization story. Linux in general suffers from the idea that everything is configurable and to configure everything you have to learn about how it interacts with everything. There's a lot of focus on how to do things rather than matching common use cases to profiles that just work. As a concrete example rather than selecting the "I'm at a coffee shop or airport" profile, I need to select that I'm on an insecure network that I don't trust with a specific password and a configuration for my VPN that forces traffic ... Addressing componentization via use case design seems lacking (and a good place to introduce standardization). A lot of the "tinkering" level software seems to start with "How" rather than "Why" as the impetus.

3. Security. We have security principles that are evolving to solve these sorts of problems (OAuth2 RAR, Passkeys). Invest in them.

4. Support. Common solutions for common problems breed easier conversations. There are consultants that do NAS support for small businesses because those NAS companies have gained enough market share. Secondly we need to spend more time to start with the why rather than the what. Software developers (myself included) obsess over the latter and build systems at that level that can answer (how fast is my connection) rather than systems that inherently answer the real need (why is my connection slow or intermittent).

5. Perhaps the answer is just build a better NAS company where the focus is on home server software rather than purely around adding server software to a bunch of disks in a box. Perhaps the answer is really embracing the idea of U in NUC?

Perhaps this has been done elsewhere? Perhaps it's been done too many times to really work?


Thanks for taking the time to provide such a detailed response. All your points make sense and it's those things that have to be resolved one by one to make anything like this not just a viable product but an actual sustainable business. I do keep coming back to this idea mostly because unlike a lot of people here, I don't want to tinker anymore, I don't want to hand build something from scratch, those days are long gone. I'm happy and willing to pay for something that works and maybe even a subscription for a period of time (think mobile phone contract).

Ultimately as you say privacy is the clear selling point. So much of what's in the cloud is now being exploited or hacked. Obviously the cloud is here to stay and we'll continue to rely on it for a ton of high compute, high bandwidth and high storage needs, but there's so much of what we do on a day to day basis that just doesn't require it, e.g. let's just say I need to talk to my family, leave notes between us, share sensitive documents, etc. All of that can very much be local in my house. All the things that we do in our physical houses we deem private; the digital should be the same. But we've shifted from the personal computer to public cloud services. There were big benefits, but equally there will be huge benefits to going private once more.

I can't say this problem will be solved anytime soon. Someone has to really want to solve the problem for themselves first. I'm hacking on a little toy that might manage DNS, email, web serving, file storage, etc as one binary but I'm not sure it will go anywhere, it's just an experiment.


There's a NUC in my basement connected to my router over gigabit ethernet:

- Caddy acting as a reverse proxy in front of the other apps

- Wallabag to capture articles I want to read later on my e-reader

- Calibre Web to manage my ebooks & PDFs

- Two Minecraft Bedrock Edition servers for my kid & their friends

- Yopass for secure password & secret sharing

Prior to this, I had a Raspberry Pi in the closet for hosting and it was frustrating. Not only did I have a hard time finding Docker containers for some apps that were actively maintained for ARM, but one time my SD card died and took everything with it. Since then, I've started mounting directories on my Synology NAS and using that as RAID-enabled storage that gets backed up to the cloud every night.
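If anyone wants to do something similar, the mount itself is the easy part; a minimal sketch, assuming NFS is enabled on the NAS (the IP and share path here are made up):

  # one-off mount of a Synology NFS share for container data
  sudo mount -t nfs 192.168.1.10:/volume1/docker /srv/docker
  # or make it permanent via /etc/fstab:
  # 192.168.1.10:/volume1/docker  /srv/docker  nfs  defaults  0  0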


Be careful with calibre-web; it's had a ton of vulnerabilities. To the author's credit, they are typically addressed quickly, but there have been enough related to auth to make me wary.

https://huntr.dev/repos/janeczku/calibre-web/


Maybe hosting this service behind a firewall / on a private network would make the most sense. I notice many people here are running Tailscale at home — perhaps this is why?


- Router in VM

- Main workstation in VM with GPU passthrough (saves power by not having an additional machine running)

- Nextcloud for files, calendar, contact sync

- Joplin server for notes sync

- Samba NAS for me and everyone else who lives in the house

- Occasionally one or more Minecraft servers for friends

- Jenkins CI for my open source projects

- Mail server (using the ISP's mail proxy for outgoing)

- qbittorrent for 24/7 seeding of Linux ISOs

- Storj storage nodes for some passive income using spare disk space

- Borg backup target for friends

- Home Assistant (very basic user, only use it to control some MQTT tasmota flashed relays with my phone)

- Matrix server

- InfluxDB+Grafana for collecting various metrics (server usage, temperature sensor, hooked up to serial port of smart electricity meter for power and gas usage graphs)

- WireGuard for remote access, obviously

- Lots of other random stuff and my own projects


Do you use your main workstation with directly connected display or via some remote technology? My home server is in the basement and I have several low end devices around but haven’t found a way to use GUI applications remotely. Currently I use only TUI ones.


About 15 meters each of USB 2.0, DisplayPort and HDMI cable. Plus a 5V power supply for USB at the client end, soldered to the cheapest USB hub I could find.

I never need more than the speed USB 2.0 provides anyway, and those USB 3.0 optical extensions are way too expensive.


Joplin server, after my own heart. It's great, lets me set up accounts for friends/family. I store all my recipes as markdown and anyone on the server can get my shared notebook full of them.


Thanks for the Joplin mention. I’ve been looking for a secure, gdpr compliant and preferably selfhosted note app for some time! Bingo, I guess


Joplin is old but gold in terms of open source note taking. However, I always found the experience to be a bit too clunky even after writing hundreds of notes.

Obsidian is much nicer but you gotta set up your own sync or pay. Same basic premise of markdown files on your local file system. I've been enjoying it a lot and written more since switching.


Thanks, will look into it


You do not even need to run a dedicated server if for example you already have an owncloud/nextcloud setup, since all clients (desktop and mobile) can sync from a WebDAV folder (among other backends).


How is Storj going?


Have been getting $20-50/mo for sharing 15TB. Probably won't be worth it for the average developer with insane salary here, but I'm not complaining!


Nothing to sneeze at, thanks for the feedback. That project and Helium are the two crypto projects I don't find completely idiotic. (Or at least the non-finance ones; finance crypto "works" too, but that's idiotic as well.)


I've bought a used Dell Poweredge T320 with Xeon E5-2428L (low consumption), 24GB RAM and an SSD, so it's quiet and cheap to run, and I've got Debian with dockerized services.

  jwilder/nginx-proxy to act as a reverse proxy that dynamically routes traffic to containers without manually editing the config, using subdomains, with SSL wildcard cert
  DokuWiki that I don't use much anymore
  Nginx to serve static files used by other servers
  My web resume
  Piwigo for pictures
  IoT: Custom Python Flask server that is used to control Philips Hue lights from ESP8266 wifi modules (cheaper than buying 20€ Philips switches)
  Vaultwarden (Bitwarden) my password manager, shared with family
  OpenVPN server
  Wekan (self-hosted open source Trello-like)
  Gitlab and Gitlab CI (created when Github didn't have free private repos, might delete at some point because it uses some CPU even when idle, but I have over 50 personal repos, also share with close family)
  Nextcloud, but I don't use it for important/sensitive stuff yet, I'd have to set up robust backup procedures first
  Other experiments, like openvscode-server, web interface with password to trigger wake-on-lan for my PC, etc.
Email seems like a pain because small servers are always seen as spam by big services and you need to manage reputation; too complicated, so I use my domain name provider's (Gandi) SMTP relay to send email, and I could set up a free inbox too, but I don't need it.


I run a Poweredge T20 with a Xeon E3-1275Lv3. I snagged a 32GB kit of ECC ram from somebody's trashcan Mac Pro on eBay for a song and have a couple SSDs for OS and VMs. I use USB WD MyBook drives for storage and backups.

Right now the server, switch, drives etc. are pulling 31 watts from the wall and it's so quiet you can't tell it's on. I'm sure it would keep running for hours off the cheap UPS I have it all on.

I had a second one as a desktop but the motherboard died last year. I'm not sure what I'll replace it with when the time comes. Probably one of those one liter PCs since I don't need internal 3.5" bays.


If I replace it one day, it might be for a mini-ITX build (whatever motherboard size corresponds to ~1-liter PCs) but I fear the cost of the case + specific motherboard + low-power processor with many threads + good NVMe SSD will be through the roof, compared to this cheap used T320, and they're hard to find used, at least for now.


Navidrome is my favorite, by far. It's like a personal Spotify. It also has built-in subsonic support.

Syncthing is another I could not live without. All devices upload to my server and my server gets backed up to S3/backblaze type service.

Plex for movies. But I may look at Jellyfin soon.

I use Calibre-web, but it's really not my favorite. Okular is so much better. And Calibre is a chore to use and overkill for my needs. Books organized into folders is about as far as I care to organize. Thumbnails are nice though.


I've managed to build a huge calibre library of garbage. I've been going through and pruning... I use the desktop interface to manage the libraries, and then rsync it to my cloud host.


I do use Plex for music as well, and I am quite happy with it. I don't know Navidrome, but I was wondering what made you choose it over Plex for music?


Me too, it's been a lot of years since I bothered with a home lab, but just recently I put NixOS on a triad of Intel NUCs work was throwing away. They're fanless, quiet and decent enough for Linux with their 16GB memory and 8 cores. Way better than a Raspberry Pi.

Now I can do some experiments I wanted to do without using VMs on my laptop. Feels more real when I can see a little stack of servers I can pull the power on. All are running tailscale so I can get to them from anywhere and run some simple tests. Example: I wanted to play with a quorum of FoundationDB nodes and see how things can fail. Also I'll run k3s and do some experimenting with that. Could I use minikube on my laptop? Sure, but this is more fun.


For people that aren't getting free boxes from work, you can usually find used micro/tiny form factor Intel machines for around $120. That's more than a Raspberry Pi, but you can get a Core i5-6500T with 8GB of RAM, 256GB SSD, and it'll use around 10 watts at idle (and top out around 50 watts). You'll get an M.2 PCIe 3.0 4x slot and a bay for a 2.5" SATA drive and support for up to 32GB of memory.

You can put Proxmox on them if you want to run a bunch of VMs or use Windows or any of the many Linux distros that support x64 rather than dealing with less known hardware.

While 10 watts is more than the 1-2 watts of a Raspberry Pi, it comes out to around $13/year for me leaving it on 24/7. A normal desktop idling at 50-100 watts would cost me $65-130/year so it is a big difference. In fact, if you're leaving a normal desktop on 24/7 as a home server, it might make financial sense to grab a micro/tiny machine. If it costs you $120 and saves you $50-120/year in electricity, that seems like it would be worthwhile.
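If you want to redo the math for your own rates, it's just watts times hours; a quick sketch assuming roughly $0.15/kWh (which is about the rate implied by the numbers above, adjust for yours):

  # annual cost of an always-on device: watts * 24 * 365 / 1000 kWh * $/kWh
  echo "10 50 100" | awk '{ for (i = 1; i <= NF; i++)
    printf "%3d W -> ~$%.0f/yr\n", $i, $i * 24 * 365 / 1000 * 0.15 }'
  #  10 W -> ~$13/yr   (micro/tiny PC at idle)
  #  50 W -> ~$66/yr
  # 100 W -> ~$131/yr  (typical desktop left on)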

They're not fanless, but they are very quiet compared to most desktops.

I ended up going this route because Raspberry Pis are so hard to come by these days. A Raspberry Pi 4B with 8GB of RAM will cost $75 and then you'll need to supply your own storage (MicroSD), buy a case if you want to protect it, and a USB cable and power adapter (though you probably have that sitting around). Between the Raspberry Pi and MicroSD card, it's basically $100 and way less powerful than a Core i5-6500T (which should be 3x faster) and you're using a MicroSD card rather than a nice PCIe flash drive. Plus, if you want, you can load the micro/tiny machine up with 32GB of RAM for not that much ($40 to get it to 24GB, $70 to get it to 32GB) and you can make that decision in the future. Plus, for a lot of purposes, standard x64 hardware can be nice. Plus, you can actually get your hands on a micro/tiny form factor PC while I haven't seen Raspberry Pis in stock in a long time.

One could easily disagree, though. The operating costs are likely to be an additional $10-11/year and one could get a 1GB RAM Raspberry Pi for $35 (if they had them in stock) and get a small MicroSD and maybe only spend $40-45 instead of $120, but it feels like the utility of a $45 machine with 1GB RAM is a lot more limited and I'd (personally) rather spend the money on something I know I can use.


> A Raspberry Pi 4B with 8GB of RAM will cost $75

Do you have a link to where I can buy a Raspberry Pi 4B (8GB) for under $150? The cheapest I can find through Amazon was $202.

I bought one before the pandemic (~2019) at Microcenter for, if I recall correctly, $90?

Seriously, if you know where to find one, please share!


Keep watching https://rpilocator.com/ or follow them on twitter


IIRC the approved resellers do not sell them over RRP, which you can find via the official website - when they have them in stock.

Opinions my own.


I use an Intel NUC as a desktop. It's powerful enough for 6 virtual Ubuntu machines (probably more) and running Ceph and Postgres nodes with Vagrant (not that I use them all the time, more for development)

Processor: Intel(R) Core(TM) i7-10710U CPU @ 1.10GHz (6 cores, 12 logical processors), 32GB RAM, 1TB SSD

https://ark.intel.com/content/www/us/en/ark/products/188811/...

IntelliJ and Windows 11 run fine.


I had this setup and then wanted more so got a Ghost Canyon one. It's not really a NUC as it's about 8x the size, but carries the badge. It'll fit a full PCI card and then you can go wild. It really is great.


I have a small K3S cluster of RPIs and a Synology NAS. I don't trust the RPIs to store any meaningful data so I use the NAS to store and backup everything. The cluster can request persistent volumes and they are automatically created and exposed via NFS by the NAS.

On the NAS, it's mostly storage services:

  - MinIO (S3 compatible)
  - Postgres
  - InfluxDB
  - Syncthing
  - Plex (NAS has hardware encoding)
  - Print/scan server
  - Cloud Sync to Backblaze
On the cluster, all the "compute" services:

  - Traefik and cert manager to expose the services with HTTPS, both on the NAS and the cluster
  - Accounting service to invoice clients
  - Mayan EDMS to store and process the documents of my LLC. It was a pain to setup but it offers OCR and an API to search document content.
  - Home Assistant and mosquitto that I just started using. I'm playing with ESPHome to integrate some CO2 sensors.
On the unusual things on my network:

  - A LaMetric time that I use to push my own notifications
  - A RPI connected to speakers that I use as a pulseaudio server when I want to put music (`pactl load-module module-tunnel-sink server=tcp:x.x.x.x`)
  - A RPI with a 64x64 led matrix from Adafruit that I want to use like an on-air sign.


What accounting service do you use?


My ideal is that I use it as my "home base" and everything else is a "peripheral" that checks back in one way or another.

The big things are:

* Music Player Daemon - It has access to my whole music and podcast collection, whereas I only "check out" a subset of this onto my phone at any given time.

* Podcast downloader - I used to have this. I wrote it myself. I got tired of the problems that kept coming up so I turned it off. But, I got recommended a replacement. I'll try it out at some point.

* Gitolite - I highly recommend this one for anyone with this use case. It's a git repository host, for when you don't want all of your business on Github. You can create however many repositories, you give it however many ssh public keys, and you configure which has access to which. The main interface is a special admin repository where you add the pubkeys and configure the repos. For my "home base"/"peripheral" model this changed everything for me, especially since I use QubesOS on my laptop where I'm working in multiple VMs with different ssh keys.

I eventually want Sandstorm.io and OpenHab, but I'd probably want a separate machine since I think that stuff wouldn't be in the Debian/Ubuntu repos.


Old Thinkpad T420s turned home server

- PiHole + Cloudflared - ad blocking for the entire network

- Home Assistant - getting data from temperature sensors

- Sonarr - downloading TV shows to Plex

- Radarr - downloading movies to Plex

- Jackett - better torrent trackers support for Sonarr and Radarr

- Transmission - downloading torrents from Sonarr and Radarr

- Plex server - media server for streaming TV shows and movies, mostly to the TV

- Tailscale - access to everything from outside my home

- NFS / Samba - mostly for backups

- Heimdall - nice dashboard for everything above

- Maestral - open source Dropbox client


It's absurd but I always thought it'd be so great to have FreeIPA set up. Having my computers actually be part of a real network would be neat.

I do wish it were a little less coupled. I'd rather be using better known modern pieces like cfssl instead of Dogtag for the CA, OpenLDAP instead of 389ds for LDAP. But FreeIPA has one of the hardest, worst, most terrifying jobs on the planet and it's amazing it can interoperate so deeply, and there's next to no hope we ever improve beyond this particular thing, unless we can somehow just ditch AD & SMB. Maybe some day Windows & filesharing will have alternative viable directory systems, but it's hard to imagine.

I also ran into this project on setting up Kubernetes atop FreeIPA though, and wow is it ever terrifying. https://github.com/zultron/freeipa-cloud-prov

Some more basic answers for you: Jellyfin for media sharing. A small GoToSocial server for ActivityPub/Mastodon. Prosody for XMPP. WireGuard for VPN. Frigate for security cams. Rygel for UPnP/DLNA MediaRenderers (there are other good options too). Mpd/mopidy for a music jukebox. Nextcloud for groupware-ish.

If you want a lot of ideas, there's a pretty active k8s-at-home microcosm, and there's a website that indexes the projects they get up to. Even if you don't want to run kubernetes, the projects they have cover the whole gamut of services people might find useful or fun to run at home.

A while back I had a bunch of home sensors reporting to Prometheus. Temperature/humidity gauges, ambient light sensors. My favorite was making my laptop's battery & charge status show up. The 2-in-1 had two batteries & it was extra cool to watch it drain one, then the other, then see the levels charge back up.


Plex, Calibre, Photoprism, some homebrew backup scripts that interact with other computers on the network, some private security/camera stuff, and a slew of StableDiffusion/GPT models for text/image generation (which is what the majority of the server resources are usually maxed out with). Sometimes I'll host Steam/Minecraft servers for friends.


Proxmox (On an Intel J4105 with 8GB RAM) with

* A VM for Home Assistant (Smarthome automation hub)

* A VM for Docker, currently only running Portainer and MyMedia for Alexa (local music streaming to Alexa)

Everything else as LXC container:

* Caddy (Reverse Proxy)

* UnifiController (WiFi AP controller)

* Samba (Network Filesharing)

* Paperless NGX (Document Management System)

* Jellyfin (Audio and Video Streaming, OSS Emby/Plex)

* Homepage (Dashboard)

* InfluxDb (Time-series Database; for SmartHome long-time data and homeserver metrics)

* Grafana (Graphs for InfluxDb)

Then a few more where I tried stuff out, Owncloud Infinite Scale (Go rewrite of OC), meshcentral (IT monitoring and remote control, might use it to support my elderly parents), Navidrome (music server, not really better than Jellyfin outside performance)

Then there’s also a Raspberry Pi 4 running Proxmox Backup Server for deduplicated backups of all those VMs and containers ;)

If you want a recommendation: Paperless NGX. Having all your important documents tagged and scanned is amazing.


- Bind for internal DNS zone

- Pi-hole on 2 Raspberry Pi 4s

- Chrony stratum 1 NTP from GPS on a Pi, my OPNsense router redirects all NTP traffic to it.

- Emby (like Plex)

- Sonarr/Radarr/Prowlarr/Transmission with VPN/SABnzbd for collecting lots of little boxes that fall off trucks.

- Calibre-web

- LibreNMS

- Omada/UniFi controllers

- Home Assistant

- Tailscale

Almost all Dockerized now but it didn’t used to be. One Ubuntu 22.04 server I built with most of it, and another TrueNAS box I also built for file sharing and secondary Bind server.

I have a feeling I’ll be running a lot more after reading the rest of the comments!


I thought I was too hoity-toity for it, but I ended up gradually dockerizing most of the applications on my home server. It’s just so convenient.


Really made doing Ubuntu upgrades or clean installs way less of a chore. Some of that software requires an insane collection of packages, like Mono for Emby. And there started to be conflicting version issues with Python/PHP for a while too.


> Pi-hole on 2 Raspberry Pi 4s

Is this for redundancy? How does this work?


Yes. They both have the same initial configuration, and DHCP hands out both IPs.

They both in turn talk to the two bind servers for recursive resolving.

(I got extremely irritated at my ISP’s DNS years ago which is why I was running bind, but now I keep it running behind Pi-hole for an internal domain name I started using because having bind running made that easy)


Pi-hole is at its heart a DNS server, so you run two of them and advertise both in your DHCP (or put them in manually if you roll statically)
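If your DHCP server happens to be dnsmasq (which is also what sits underneath Pi-hole's own DHCP), advertising both is one line; a sketch with made-up addresses:

  # hand out both Pi-holes as DNS servers via DHCP option 6
  echo 'dhcp-option=option:dns-server,192.168.1.2,192.168.1.3' \
    | sudo tee /etc/dnsmasq.d/02-redundant-dns.conf
  # then restart dnsmasq (or pihole-FTL, if you're using Pi-hole's embedded DHCP)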


On my home server running on a fitlet2 with no public access except through my wireguard vpn:

- jellyfin :: Media server for streaming music, tv, movies to my phones, ipads, and rokus

- nextcloud :: Storage, carddav, caldav

- frigate :: Object detection on my custom home cameras

- homeassistant :: Notifications/Automation (used with frigate)

- freshRSS :: RSS feed reader

- vaultwarden :: Bitwarden server

- bookstack :: Wiki

- yacy :: This is a distributed search engine, but I use it in non-distributed (robinson) mode, and use it instead of bookmarks. Any page I'm interested in, instead of bookmarking it, I add it to my yacy index

- smashing :: Dashboard. I have written some custom addons. This is shown on my front TV to track the real-time location of buses at the stops near me.

- photoprism :: Photo manager

- snapdrop :: Airdrop replacement

- imapfilter :: Advanced mail filtering, like taking newsletters and converting them to RSS feeds

- dolibarr :: ERP for my side business

I run wireguard and pihole on a separate raspberry pi.


Every digital asset I’ve ever created and backups of every machine I’ve had since graduating college. I have a rule of forbidding remote access so it’s not really a server — you can only get onto this host if you are sat in front of it. Everything is backed up offsite by a combination of borg and USB “tapes”.

It was very liberating to put all my data on one filesystem (ZFS zpool) instead of having it littered over many decades of hard and floppy disks. It felt like a great tidy up, even if it was really more like bundling all my junk into a storage unit. Not having to worry about losing it took a weight off my mind.


How do you do backups?

I’ve had a homelab for a decade and I could never figure backups out properly.

I use proxmox with containers, VMs when I have no choice, and a portainer machine that runs docker containers when I have no choice.

I use ansible so in each playbook I have a task to backup periodically to rsync.net

But that requires a lot of work because it must be set up for each service, doesn't work with the docker machine because docker is weird to me and I can't use ansible to set up stuff within the container, and proxmox's backups are too big for my rsync account. Besides, I only need the actual data in the backup, not the whole OS.

Any insights?


As my sibling says, zfs send and receive is excellent. If your datasets are encrypted they remain encrypted at the other end, without any requirement to unlock them on your backup server (should you only partially trust it, for example.) It works for incrementals but you have to manually specify the delta point for each send — it’s not like rsync where it figures out the differences automatically.

I use zfs send/recv to update a pile of USB drives each time I do a physical offsite backup. I also use it to gather data from other hosts onto my main data computer.

Once everything is there I then archive each snapshot of each filesystem as a new borg archive. This is efficient — borg only backs up changes — but it requires a rescan of all the data. This happens at ~50MB/s on my opteron with 4TB WD Red HDDs.
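For anyone unfamiliar with the send/receive flow, a minimal sketch of what it looks like (pool, dataset and host names are made up):

  # initial full replication; -w sends the dataset raw, so it stays encrypted
  zfs snapshot tank/data@2023-01-01
  zfs send -w tank/data@2023-01-01 | ssh backupbox zfs receive -u backup/data

  # later sends are incremental: you name the previous snapshot as the delta point
  zfs snapshot tank/data@2023-01-08
  zfs send -w -i tank/data@2023-01-01 tank/data@2023-01-08 \
    | ssh backupbox zfs receive -u backup/data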


Check out Proxmox Backup Server - you can install it on top of PVE. My home server backs up locally to its own PBS datastores, which then get synced over to a $6 dedicated server running PBS in Europe.


Use ZFS snapshots and send & receive. It's so good and easy! I backup everything with it: VMs, containers, databases, files.


Where do you backup to?


To my parents, who live 30 minutes from me. Alternatives would be some other family member or friend you trust.


Lots of stuff; my 'home server' is probably a little more elaborate than a lot of people would want to deal with. I bought a 42u rack years ago and stuck it in my basement, and it's got a bunch of old enterprise-type hardware in it. It contains:

Hardware:

- ESXi vSphere server

- Old ESXi vSphere server (being decommissioned)

- pfSense firewall

- Big NFS/CIFS storage server (60TB)

- Windows workstation/gaming desktop (lives in a rack-mount case connected to a KVM; cables run upstairs to my 'office')

- Linux workstation (also on KVM)

VMs include:

- Linux dev server for miscellaneous projects

- Pi-hole DNS

- Personal Gitlab

- Personal wiki

- Plex

- k8s dev environment for a defunct project that I haven't deleted in case it becomes un-defunct for some reason.

- Suricata/Zeek IDS

- Windows domain controller, for reasons


> - Windows workstation/gaming desktop (lives in a rack-mount case connected to a KVM; cables run upstairs to my 'office')

I am thinking about doing this. What cables are you running and what display resolution is the system running? I am thinking I could get away with a single DP cable for 4K display and a powered USB cable going to a USB hub.

Do you have a KVM you would recommend?


I am running displayport at 1440p at 60Hz - I used these cables, and they work fine:

https://www.amazon.com/gp/product/B07HN8PR4J

However, based on some of the reviews, they will probably not work for higher resolutions or refresh rates at that length. There are active/amplified cables available, but I don't know how well they work.

KVM is a Black Box KV6202A-R2. Which I think is discontinued now, but the same people make similar items. The important thing is to get a KVM with EDID emulation - a lot of the cheaper KVMs don't have it and will tell the non-active system that the monitor is disconnected every time you switch, which causes all sorts of problems.


I also use my home server for serious stuff in terms of data protection and archiving rules: storing tax documents that need to be archived for at least 10 years, backups of databases from my dedicated server on the internet, and backups of private data like photos and so on.

For this I use an encrypted ZFS mirror running on Ubuntu. The board is a power saving ASRock J4105M with soldered low power CPU.

For each HDD I use a dedicated SATA controller for controller redundancy.

The computer itself is secured with a real steel lock against (too easy) theft!

The database backups from the internet server are coming in incrementally with ZFS snapshots. I love this!


How do you manage the encryption keys?


The keys have a password, so, yeah, if the machine reboots I need to ssh in. :-)
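If it's ZFS native encryption, that's roughly (the pool name here is a placeholder):

  # after a reboot: ssh in, unlock the encrypted datasets, mount everything
  zfs load-key -r tank    # prompts for the passphrase
  zfs mount -a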


I have 3 rpi4s running my entire home network.

One is a vpn router and a wifi AP, it also has Uptime Kuma. I need this to be reliable and rarely touch it except to improve its reliability.

  - OpenVPN
  - HostAPD
  - Uptime Kuma (in docker)
  - A microservice invoked from Uptime Kuma that monitors connectivity to my ISP's router (in docker)
  - nginx, not in docker, reverse proxies to Uptime Kuma

The second acts as a NAS and has a RAID array, consisting of disks plugged into a powered USB hub. It runs OpenMediaVault and as many network sharing services as I can set up. I also want maximum reliability/availability from this pi, so rarely touch it. All the storage for all my services is hosted here, or backed up to here in the case of databases that need to be faster.

The third rpi runs all the rest of my services. All the web apps are dockerized. Those that need a DB also have their DB hosted. Those that need file storage are using kerberized NFS from my NAS. This rpi is also another wifi AP. This rpi keeps running out of RAM and crashing and I plan to scale it when rpis become cheaper or I can repair some old laptops:

  - Postgres
  - HostAPD
  - nginx
  - Nextcloud
  - Keycloak
  - Heimdall
  - Jellyfin
  - N8N
  - Firefly-iii
  - Grist
  - A persistent reverse SSH tunnel to a small VM in the cloud to make some services public
  - A microservice needed for one of my hobbies
  - A monitoring service for my backups

All of these pis are provisioned via Ansible.


Sounds neat. Are you doing anything to mitigate the possibility of SD card corruption with the Raspberry Pis?

I used to use a single RP to run as a media server, and it was great, but stopped using it after suffering from SD card corruption.


You can boot a Pi4 (and some older Pi boards) from more reliable storage attached to a USB port these days (e.g. SSD).

https://www.raspberrypi.com/documentation/computers/raspberr...

You can also network boot as well:

https://www.raspberrypi.com/documentation/computers/raspberr...


TBH I haven't ever had a problem with SD card corruption so far. If I did, it wouldn't really matter, since all the important data is on the RAID array, and the OS can be reprovisioned if needed.

Performance proved to be an issue for SD cards though, when attempting to host nextcloud and postgres. I do what teh_klev is talking about and selected the fastest USB stick I could find, which was a Samsung FIT Plus 128 GB Type-A 300 MB/s USB 3.1 Flash Drive (MUF-128AB), and this gave me a huge speedup.

Unfortunately Jellyfin is not really fast enough on an rpi and I have no solution.


I have been thinking of setting up a pi4 as a wifi AP. Can you comment on the hardware performance? I am worried that the range or throughput might be poor, and thinking I might need to use an Intel ax200 or similar.


I've done this before and it works in a pinch, but I didn't think it was reliable enough to use on a permanent basis. I added a USB WiFi interface, and that helped with the signal quite a bit. Setting up the AP and networking isn't trivial (but is certainly do-able if you're familiar with linux networking).

My use case was using it to connect my family's devices to an AirBNB network. I used the Rpi as a bridge to the host WiFi. This way I could keep a common SSID/password and didn't have to reconfigure all of my kid's devices. It kinda worked.

However, it wasn't very reliable and had poor range and performance. The Rpi was meh with one client attached, but it was bleh with more than one. I ended up replacing it quickly with a cheap dedicated AP that I flashed with openwrt. Much easier, and device-wise was cheaper too.


The range is indeed poor, and it depends a lot on your house/flat. Would definitely recommend using another machine with something like this.

In terms of throughput, right next to an AP, I just got 65Mb/s.


Love reading threads like these. Have already discovered a couple of cool things in here to explore.

Here's what I'm either running now or in the process of standing up. It's very WIP, and nothing special, but maybe someone has some feedback/ideas. I'm aiming to have all the things I use contained in my rack, relying on cloud stuff as little as possible, mainly as a fun exercise/project.

https://i.imgur.com/PDO71bx.png

Devices:

    * 6 x Raspberry Pis (1 x v1, 2 x v2, 1 x v3, 2 x v4)    
    * 1 x HP Microserver (ProLiant Gen8 Intel Celeron G1610T, 2x3TB in RAID, 1TB regular)   
    * 1 x Old gaming PC (i7 3770K, 24GB mem)    
    * 1 x Current PC (i5 12600K, 64GB mem; this is in the rack so that I can have a small, cleared desk, but is not hosting anything)
Uses:

    * Dev stuff:
        * Forge - WIP: Forgejo
        * CI/CD - WIP/TBC: Woodpecker CI vs Concourse CI vs Laminar CI
        * Container registries - 1 x my images, 1 x Docker Hub mirror
        * Deployments - WIP: Would love a FOSS Octopus Deploy clone. Working on my own primitive Ansible clone for fun, which might be good enough.
    * Home stuff:
        * Reverse proxy - nginx
        * Backups - TBC: Not sure yet. Lots of good options around. Probably should be doing this first..
        * Adblock - Pihole
        * Content aggregation - Basic Bash YT downloader (using yt-dlp), Deluge
        * NAS - Samba
        * VPN - WIP: Wireguard
        * Network control - WIP: some basic custom stuff to wake/sleep the microserver + gaming PC when not in use for power/heat reasons, initiate backups etc (the wake/sleep part is sketched after this list)
        * Telegram bots - TG is my chat client of choice and I have a small .NET 'gateway' API I use to send messages from various bots (each representing different parts of my 'home lab') to my personal account
        * Monitoring/dashboards - TBC: haven't explored software for this yet
        * Log aggregation - TBC: haven't explored software for this yet
        * Network boot - TBC: Yesterday I started thinking of trying to PXE boot the RPis cause they're in one of those stacking towers inside my half-height rack, so getting to them is a PITA.
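The wake/sleep part mentioned above is honestly just two commands wrapped in scripts; a sketch with a made-up MAC and hostname, assuming Wake-on-LAN is enabled in the target's BIOS/NIC settings:

  # wake the microserver via Wake-on-LAN
  wakeonlan aa:bb:cc:dd:ee:ff

  # ...and send it back to sleep over ssh when nothing needs it
  ssh microserver 'sudo systemctl suspend'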


Love this thread, here is mine:

    Hardware: Synology NAS (DS420+)
    Software:
    - Synology DSM – Was actually quite impressed by Synology's software. It is a tad quirky, but that can usually be worked around; all in all pretty good. (https://www.synology.com/en-us/dsm)
    - Plex: not in love, but it gets the job done, definitely gonna use this thread to look into replacements (https://www.plex.tv/)
    - Syncthing: i love my magic folders that just sync stuff, amazing software (https://syncthing.net/)
    - Docker: most things are hosted out of Docker (https://www.docker.com/)
    - cloudflared: effortless external access for all my stuff + all other Cloudflare goodies (WAF/Zero-Trust). Good peace of mind to not have to ever open ports on my local network and let CF just take the brunt of the Internet. Also used as a local PlaintextDNS=>DoH proxy that is hooked up to NextDNS (https://www.cloudflare.com/products/tunnel/) [disclaimer: I work for the big orange cloud company, so sorry if the previous sounds like an ad, I do really like the software we make]. A rough sketch of the tunnel setup is at the end of this comment.
    - FreshRSS: quite silly to do a cloud version of this when it is quite easy to host on your own. It works well, but I feel a strong itch to write my own version. (https://www.freshrss.org/)
    - Metrics: prometheus node exporter. I do have grafana hosted in the cloud, but I've been meaning to move that to my local server.
The list will definitely grow after reading this thread :D
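For anyone curious about the cloudflared item, a rough sketch of what a named tunnel looks like (the tunnel name, hostname and paths are made up, and the tunnel ID placeholder needs filling in with your own):

  cloudflared tunnel login              # authorize cloudflared against your Cloudflare account
  cloudflared tunnel create homelab     # creates the tunnel and a credentials JSON file
  cloudflared tunnel route dns homelab jellyfin.example.com

  # then point it at your local service via ~/.cloudflared/config.yml:
  #   tunnel: homelab
  #   credentials-file: /home/me/.cloudflared/<tunnel-id>.json
  #   ingress:
  #     - hostname: jellyfin.example.com
  #       service: http://localhost:8096
  #     - service: http_status:404

  cloudflared tunnel run homelab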


   > cloudflared: effortless external access for all my stuff + all other Cloudflare goodies (WAF/Zero-Trust). Good piece of mind to not have to ever open ports on my local network and let CF just take the brunt of the Internet. Also used as a local PlaintextDNS=>DoH proxy that is hooked up to NextDNS (https://www.cloudflare.com/products/tunnel/).
I was a beta tester for Warp pre-launch and set my home server up to use it. I'll agree with you -- it was awesome. I stopped using it after the beta since it became metered (which was expected and I'm not sore about[0])

   > [disclaimer: I work for the big orange cloud company, so sorry if the previous sounds like an ad, I do really like the software we make]
I don't work for the big orange cloud company and I really like the software you make, I really enjoy reading your technical blogs and have routinely shared them with co-workers at past jobs[1]. I had a coworker solve a long-running issue on a pool of Linux front-end hosts by tweaking some kernel/network configuration settings -- he mentioned he got the idea from a blog post from the big orange cloud company that "he read over the weekend". I use your services for my home server and end up recommending (and ultimately using) it with every one of my customers. I'm sure you guys have pissed someone off by now, but everyone I encounter heaps praise on your products and company.

[0] Okay, maybe a little. It was a really neat service that I got to use for free for almost a year. A gentleman from Cloudflare (I think he said he worked in Romania) even sent me a gray T-Shirt with the big orange cloud company's logo on it in "Apple-style" minimalism.

[1] Programmers have gaps -- I'm surprised how many web developers' gaps are "everything between the various hosted files and The Browser".


> Plex: not in love, but it gets the job done, definitely gonna use this thread to look in to replacements (https://www.plex.tv/)

I'm sure you've seen it already but Jellyfin is fantastic.


emby + Kodi is also good.


I use Cloudflare to proxy a number of self-hosted HTTPS services, which is great.

I still haven't found a solution to obscure, geo-filter and WAF arbitrary TCP/UDP.

Spectrum Enterprise seems like it is capable but not really the answer for a personal server that handles megabytes of traffic a day.


Docker containers for:

Minio : object store

sdftosvg : Molecular renderer

Observability stack : (Grafana, Prometheus, exporters)

Postgres: molecular metadata (9 billion molecules)

Molecular relaxation workers

Quantum Monte Carlo simulators

Dask workers

Websocket message pump

Insilico virtual lab server + 3 clients

Vscode server

Deep Learning model server (inference)

Deep learning model training server

Most of these are applications I have written myself, and powering my hobby project https://atomictessellator.com

Specs: 4 machines, 2TB RAM total, 4 GPUs (Tesla A100s), 384 CPU cores total

These are in my lounge, yes, it is noisy, yes, it is hot in here, yes, I love it


What is a molecular relaxation worker? Is that a scientific or a computing term?


It’s a colloquial term for structure optimisation

https://wiki.fysik.dtu.dk/ase/ase/optimize.html


For me: Jellyfin with all my movies, shows, home videos, and a film footage archive of content ripped from YouTube. I set this server up when a movie I wanted to watch was no longer on Netflix and I realized that in a few decades there's a decent likelihood that many of my favorites will be hard to find on streaming services, or I'll have to subscribe to several services to get the collection I want. I've had it running for years now and it's one of my favorite things. I share access to it with my friends and family, so they benefit as well.

The only issue I had to get around was with streaming 4k video. For some reason, the official Jellyfin apps try to transcode 4k video live rather than just streaming it directly, and my low-CPU NAS really struggles to keep up with that, so I've resorted to paying for Infuse Pro as the front end for my server, which does stream 4k without transcoding.

In all it's just a NAS running ngrok, Jellyfin and a Caddy reverse proxy in Docker containers. Highly recommend this setup to anyone interested in building a media server! It feels good to actually own some digital content in the age of Spotify and Netflix.
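If the Caddy part sounds intimidating, it's really only a couple of lines; a minimal sketch (the hostname is made up; 8096 is Jellyfin's default HTTP port):

  # Caddyfile: Caddy handles TLS and proxies requests through to Jellyfin
  #   jellyfin.example.com {
  #       reverse_proxy localhost:8096
  #   }
  caddy run --config Caddyfile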


On my Synology NAS, I run the following using Docker:

- Jellyfin: Streams my movies and shows

- Paperless-NG: Where I keep OCR'd scans of my paper docs

- Airsonic: So I can stream my music using the Subsonic protocol

- Photoprism: Where I store my photos, auto-labelled with AI and geotagged

- The Synology Surveillance Suite: NVR for my home security cameras.

- Wireguard: So I can access all this on the go

It feels pretty cool to stream music and videos from my personal cloud using Wireguard while I'm e.g. travelling or in the car.


Yup, I have a DS920+ running docker instances of:

Plex

TiddlyWiki

trilium (x2 instances)

taiga

Homepage

Pingvin

freshRSS

pihole

minecraft server

valheim server

My Fritzbox router handles wireguard for me


Debian as the OS, Portainer to have a WebUI so that I can manage containers a bit more easily. Jellyfin to stream movies, Transmission-OpenVPN to download movies, Blog container and caddy to manage the traffic. Wireguard so that I can connect to my home network remotely.

The server has a GTX 1050Ti so that it can transcode the movies, was a bit of a challenge to be able to successfully use the GPU in the Jellyfin container, but works flawlessly now.
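In case it helps anyone fighting the same thing, the shape of it is roughly this (paths are made up, and it assumes the NVIDIA container toolkit is installed on the host):

  docker run -d --name jellyfin \
    --gpus all \
    -e NVIDIA_DRIVER_CAPABILITIES=compute,video,utility \
    -v /srv/jellyfin/config:/config \
    -v /srv/media:/media \
    -p 8096:8096 \
    jellyfin/jellyfin
  # then enable NVENC hardware transcoding in Jellyfin's playback settings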

I plan to setup a Nextcloud or maybe Syncthing to backup my own files and photos, but I'm not sure how I want to handle backups, maybe just an external SSD. Cloud backup would be cool.

A local Bitwarden server would be nice as well, but maybe I'll just switch to good old Keepass.

Has a Ryzen 3, 16GB RAM, 3TB storage (2x 3TB in a mirrored RAID).

What stresses me out a bit is that I don't really monitor the state of my RAID, so theoretically it could currently be broken without me knowing. I'm not doing anything against this because I currently only store movies and shows which are also copied to an external SSD for travel, so I don't really care if it goes tits up.

But if I want to start hosting my files and passwords, I gotta make that more stable.


On my NAS (qnap ts453d) I run a few webapps using Docker:

    homeassistant (Home Automation)
    jellyfin (Media Server)
    joplin (Notes Syncing)
    transmission (Bittorrent Client)
The apps are behind nginx proxy manager (which does https termination) and the containers are managed using Portainer. They are automatically updated using watchtower.
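Watchtower itself is just one more container pointed at the Docker socket; a minimal sketch:

  docker run -d --name watchtower \
    -v /var/run/docker.sock:/var/run/docker.sock \
    containrrr/watchtower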

And then on a raspberry pi, there's more home automation stuff:

    openhab (my old home automation system, which still interfaces with old components like my heating and the photovoltaic system) 
    influxdb (storage for openhab)
    grafana (visualisation and alerting)
    node red (automation stuff like doorbell push notifications)
    mosquitto (mqtt server)
    lots of custom scripts that do stuff and publish the results via http
I probably should move some of this stuff to the NAS, but I have a working and tested backup flow and the NAS takes forever to boot, so I would lose more sensor data through reboots of the NAS than through the SD card of the pi breaking (happened twice in 7 years).


My FreeBSD server is a bit of a Ship of Theseus at this point (been around in some form or other for ~20 years). Some items:

  - project management software (todo lists, etc)
  - web server (personal photos, wikis, etc)
  - a script that sends me email and text reminders (birthdays, appointments, etc)
  - a script that listens to my network, and notices when devices connect;
    I use this to play "intro music" on our sound system when friends'
    phones connect to the WiFi.
  - a script to monitor some email accounts. One fun one is an "exquisite corpse" [1]
    manager, to make surrealistic email chains.
  - central hub for my development repos, so I can push/pull to share with other devices.
  - a script to download certain podcasts regularly
and many more (:
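The network-listener one is less clever than it sounds; stripped down it's basically a loop over the ARP table (the MAC address, sound file and player are stand-ins for whatever you use):

  #!/bin/sh
  # play a jingle once each time a known phone (re)joins the network
  KNOWN="aa:bb:cc:dd:ee:ff"
  seen=0
  while sleep 30; do
    if arp -an | grep -qi "$KNOWN"; then
      [ "$seen" -eq 0 ] && mpg123 /home/me/sounds/intro.mp3
      seen=1
    else
      seen=0
    fi
  done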

[1] https://en.wikipedia.org/wiki/Exquisite_corpse


Decommissioned due to energy costs increase in Poland.

I've moved my "home lab" to a dedicated server in a country with green energy.

At home, I'm currently using NanoPi R5s as "router" (NAT, no routing).


Define "home server." Consumer-grade NASes are so powerful they can pretty much do anything you'd do with a "home server."

On my smart TVs, I installed Kodi. I have an off-the-shelf NAS that serves the files via NFS. It can do Plex, if that's your cup of tea. (But I prefer Kodi, because most of the things I do are audiophile grade and I don't want the kind of transcoding that Plex does.)

The NAS is basically a super-powerful server in a tiny footprint. I just use it for files, because I just don't care to take the time to learn how to do anything else. One critical feature is that it's point-and-click RAID. I've hot-swapped drives after a failure with no downtime; all point-and-click over the web interface.

But, one thing I want to do is self-host my own website, and self-host a mastodon node. Maybe if I was independently wealthy I'd take the time to figure it out, like when I ran a dial-up BBS when I was younger.


What NAS do you have?


Synology


Thanks.


I have two Raspberry Pis (8GB) with SSDs, running Portainer for Docker management. I have everything containerized with docker-compose.

- Most importantly, Cloudflare tunnel for exposing my services without opening ports on my router

- Tor relay (non-exit)

- Pi-hole for DNS and ad blocking

- Homepage dashboard

- Audiobookshelf

- Jackett

- Sonarr

- Radarr

- Jellyfin

- qBittorrent

- Backup Pi-hole

- Uptime Kuma

Monitoring everything with Zabbix which runs on a VPS.


You use Portainer but "have everything containerized with docker-compose". Am I missing something regarding Portainer that supports compose files, or do you run containers via compose files and use Portainer for monitoring/logging and such?


You can create stacks in Portainer with a compose file, that's what I meant...


> - Sonarr

> - Radarr

What indexers are you on?


1337x, TPB and a few private indexers


I am running at home:

- authoritative DNS for my zones (nsd)

- recursive DNS for my home network (unbound)

- MTA (exim), IMAP (dovecot) and spam filter (spamassassin)

- IPsec VPN for my phone and laptop (strongswan)

- HTTP serving mostly a static filedump (apache2)

- a persistent IRC session (irssi)

- Debian repo cache (apt-cacher-ng)

- full AAA setup for IPsec & wifi login (openldap, mit-krb5, freeradius)

- mailing lists (mailman) - this one is soon to go away

- netboot for my desktop (dhcpd, atftpd+thttpd)

- netroot/NFS for my desktop (nfsd, rpc.*)

- a fricken samba server because that's the only thing my HP printer wants to upload scans to (nmbd, smbd)

- oh and 12 TB of bulk storage (ZFS, 4×4TB HGST raidz1)

- coming soon™, paperless-ngx

Might still have forgotten something…

Hardware is a X9DR7-LN4F with 2× E5-2630v2, 128GB RAM. 160~200W, getting replaced soon (electricity is fricken expensive…) The thing is also rather noisy sitting in a glorified broom closet.

There's a Dell PV-124T tape library with an LTO-6 drive attached for backups.


At the moment I just have the UniFi controller running on a Raspberry Pi. But I really do want to set up a good NAS for storage, and do some kind of encrypted snapshot backup to cloud for important things I currently just have strewn across some external hard drives and SSDs (generally at least two copies, but all the drives are at my house which isn’t great if there’s a fire or something).

I might get a small server (maybe the HPE Microserver) instead of something like a Synology, set Proxmox up on it, and then if I ever have the time I'd quite like to get a Samba 4 Active Directory controller going so I can control my few Windows machines better (and so they stop nagging me to link a Microsoft account and all that).


HP Elitedesk 800 g6 mini running Proxmox (compute and SSD storage)

  - Home Assistant
  - Docker
    - cloudflared
    - traefik
    - authelia
    - unifi controller
    - jellyfin
    - arr stack
    - influx
    - grafana
  - OSX (runs only when needed)
  - Win11 (runs only when needed)
QNAP TS251+ (RAID1 storage)

RaspBerry Pi (print server)

  printers:
    - HP Laserjet
    - Dymo Labelprinter
    - 3D Printer
  software:
    - Cupsd
    - Klipper
    - klipperscreen
    - Moonraker
    - Mainsail
I really like my setup, although in the future I would like to investigate some form of redundancy, since my home starts to rely more and more on Home Assistant.


Hardware: old laptop from ~2012 and some external hard drives (~25W total).

Occasionally used for things like yahoo answers archiving, fishnet (distributed lichess game analysis), random big computations or downloads, and other things I want to run overnight but don't need a big desktop PC for.

Services it more permanently hosts:

    - email for me and friends
    - websites of friends
    - game server (Factorio, OpenArena, custom games I made, etc.)
    - SSH -D functioning as a VPN when paypal or some such is being a bitch again
    - [web] blog
    - [web] grocery list (custom. It syncs between devices, stores recipes, frequently bought items, etc.)
    - [web] link shortener / pastebin / file sharing in one
    - [web] a data explorer for some game (with associated scraper running every few hours)
    - [web] a series tracker to note which episode file we're at because apple tv is garbage (for many reasons) and doesn't remember this
    - [web] browser-based latency test thingy, both as plain HTTPS and websocket (they're remarkably similar with HTTPS connection reuse, the main difference being that websocket breaks when your connection is down for 0.1s or your IP changes, so I use websocket more as canary and the HTTPS version for actual latency testing)
    - [web] proxies for certain sites, like to speed up the openstreetmap wiki by caching or to remove the 5MB unnecessary javascript from a news website
    - data scraper that emails me when the local river does something that interests me (like get close to flooding or drying up or a sudden large change)
    - other data scrapers
    - Telegram bot
    - IRC bouncer
    - Gitea
    - Restic backup server
And a ton of web things I'm forgetting, including little .php scripts not worth mentioning that I occasionally use to do this or that.

I'm really amazed what I can do with some hardware that would otherwise be thrown away or disappear into an old drawer never to be used again. I could probably host twice as many things and it would still work (CPU and network are basically idle the whole time, only RAM would be getting tight then... I could upgrade to 16 GB DDR3 which is cheap nowadays).


I also recommend using an old laptop as a server. Benefits include the built-in battery backup, built-in KVM, and relatively low power consumption.

The other major benefit is that if you're the kind of person who's apt to set up a home server, you probably already have at least one of these lying around.


There's just one slightly opaque pitfall: not all laptops have a BIOS setting where you can specify what to do on power loss and power resume. So the laptop "server" stays off and you need to intervene manually.

I've had better luck with thin clients ...


Never had power out for long enough that the laptop didn't survive. Automatic shutdown of optional services at 80% battery level might also mean that it survives 2-3x as long, if this sort of thing is likely to hit where you live. Or buy a tiny UPS.
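That 80% idea is a few lines of cron-able shell, on Linux at least (the service names are made up, and the sysfs path assumes a single battery called BAT0):

  #!/bin/sh
  # stop non-essential services once we're on battery and below 80%
  cap=$(cat /sys/class/power_supply/BAT0/capacity)
  status=$(cat /sys/class/power_supply/BAT0/status)
  if [ "$status" = "Discharging" ] && [ "$cap" -lt 80 ]; then
    systemctl stop jellyfin transmission-daemon
  fi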

(One of the major advantages of laptops not manufactured in the last 3 years: replacing the battery to make it good as new takes 15 seconds. Click out, click in. They don't make 'em like that anymore :( )

If the battery were to run out: on 355 of the 365 days per year, there will be someone home who can push the button within a few hours.

And if it's one of the 10 other days: email will retry for 24 hours and everything else is somewhat optional for me. I can call someone to get there within a day, or change the DNS records and turn on a VPS to temporarily buffer the incoming SMTP. I considered my contingency options but never got close to needing it.


Jellyfin (video), Navidrome (audio), Deluge, Paperless (document management), tt-rss, Wallabag (web clipping/archiving), Huginn, Syncthing+restic+rclone for backups, and I host a static blog generated with Jekyll.


I see Navidrome mentioned a lot. Why use that instead of just Jellyfin?


Honestly, I had Navidrome installed for music before I had Jellyfin set up for video, and now I'm used to it. :)

When I originally picked it, I really liked Navidrome's UI and the simplicity of installation, setup, upgrades, speed/performance, etc.

I have heard that Navidrome handles large libraries better (e.g. faster scanning), and I like that it's purely tag based and doesn't rely on any (slow) scrapers (I just used Musicbrainz Picard to clean things up before adding music to my collection).


An old 2011 Mac mini with 80GB HD and 16GB RAM. Everything via Docker:

- Jellyfin: As a media server for movies and tv shows.

- Navidrome: My own music streaming service so I can listen to my FLAC collection anywhere (you can use any subsonic compatible app or its own web UI)

- Radarr: Automated movie torrent download

- Sonarr: Automated TV shows torrent download

- Nextcloud: For document and file sharing. It does allow calendar, contacts, etc.. but so far I am doing that just via my own regular Fastmail account

- Photoprism: I upload all my photos to it, via WebDAV.

- FreshRSS: RSS reader

- PiHole: No ads anywhere in my network

- OpenVPN: So I can access all the above from outside my network


Recently went from an RPi to a MinisForum HX90 this past year.

RPi was running PiHole, Wireguard, a dyndns updater, and the control interface for my old AP via docker(compose) on Ubuntu Server.

HX90 is running ProxMox, with VMs for WireGuard and PiHole, and a large Ubuntu VM to run dockerised (docker-compose) applications. I know ProxMox can do it directly, I just feel more comfortable in control of directories.

Thinking about running a web-office server (OnlyOffice, NextCloud or OwnCloud) and connecting that with my NAS... possibly using a Cloudflare Tunnel/argo or ngrok.


The real need for a home server has really disappeared now that there are cheap services for everything, and all of them do it better than any setup I am willing to dedicate my time to. If you really want something, just buy a cheap NAS and it will run everything you want with minimal setup.

Unless you want to learn how to set up a server, i.e. a "home lab", but then it's cheaper and more efficient to rent a cheap VM on Linode.

The era where you had an old computer from your office in a closet to host your website, music, email, and such is long gone.


Really depends on what you're doing re: homelab. Linode's pricing is still way more than what you're gonna pay for your own hardware at just about any tier. My home server is 16 core/64 GB and I use most of that to host stuff. It cost about $1250 in parts and about $10/mo in electricity. Linode's monthly pricing for a 16 core/32 GB instance is $160/mo, so the break-even point is around a year for slightly worse hardware. I expect to get at least 5 years of use out of this hardware so it's not even close.
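Back-of-the-envelope, taking those numbers at face value:

  # rough break-even, ignoring resale value and time spent
  parts = 1250            # one-time cost of the home server
  home_monthly = 10       # electricity
  cloud_monthly = 160     # comparable Linode instance
  months = parts / (cloud_monthly - home_monthly)   # ~8.3 months, i.e. under a year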

Even on the lower end, a recycled mini-PC will run you like 100 bucks and get you 8 cores/8 GB of memory with negligible power draw. Versus the cheapest Linode plan at $5/mo for 1 core/1 GB. That's more competitive and might work for some people, but even then self-hosting is still viable. Your own time is valuable of course, but most of the time in my experience is not dealing with the hardware itself, it's setting up the software and maintaining it. Which you still have to do with Linode.


Do you really use all of that computing power for the stuff that is running on the server? All of the home lab setups I have seen have been way overpowered and underused. Lots of "wasted" computing power just to be powered on and never really utilized.


Off the top of my head:

Proxmox and Kubernetes, Rook on top of that for storage. GitLab for manifest storage, ArgoCD for deployment, terraform for node management in proxmox. It's not fully redundant (two separate nodes) but storage is replicated (ZFS send/receive on proxmox, Ceph replication for Rook).

Primary stuff are things like AP management, file shares, prometheus, grafana, various interfaces with 'dumb' devices, just to collect data from them without having them connected to the internet, VPN for remote access, separate VPN for internet tunnelling when on foreign networks, media service for movies and series, downloaders, Cloudflare tunnels to expose services to the big bad internet without opening ports, TFTP services for network booting, pull through caches for OCI resources and some other package indices. Also an OpenFlow controller but that was more for fun than because I needed it, no longer used for actual switches.

Main benefit from all the IaC is easy creation/destruction of entire environments when testing out things without having to pay money to a public cloud.

Also have BSD based firewall(s) to do NAT, DNS, DHCP etc. but those aren't on a shared home server but rather on separate hardware.

Similar setup in leased hardware in a DC just to keep the knowledge for legacy setups in on-prem style cases fresh, and a public cloud version as well (but that costs money so it's scaled down significantly).


Here's what I have:

- A dedicated (hardware) server in a European datacenter running Proxmox:

  - opnsense (firewall VM that I connect to using wireguard for administration, and home networks connect to for voip, etc)
  - mailcow
  - nextcloud
  - collabora office
  - asterisk (for VoIP, since I'm living abroad but still need my old local phone# sometimes)
  - own Mastodon instance
  - a Windows VM for those times when you really need to test something against Windows

- Back at home: an old gaming PC converted into a NAS, on a speedy 300/300Mbps GPON

  - borgmatic backups for everything, incl. family
  - jellyfin
  - samba for most of my video/music/legacy stuff
  - home assistant
  - transmission

- At parents' place: an ancient HP Microserver with 10-year-old hard drives

  - borgmatic
  - samba
  - transmission
  - a cellular to VoIP gateway connected to my European Asterisk via VPN

- Another "home" network location interconnected with these two -- mainly for convenience of monitoring home automation and CCTV with Zabbix;

- A separate cloud VPS running just Zabbix for monitoring all this and some other stuff I do as a freelancer.

On my phone, VoIP VM and some other chosen devices I also run Tailscale -- with phone it's mainly because some European ISPs like to block VoIP :). There's a massive inconvenience about it in that Android only allows one VPN tunnel per profile, and I need two...


I've slowly upgraded my home server over the years, going from a Raspberry Pi 3 in grad school, to an old Mac Mini my parents had, to finally building a dedicated machine about a year ago using desktop parts.

- A few Discord bots I've written

- Jellyfin

- Nextcloud

- Home Assistant + NodeRed

- Libreddit and Invidious alternative Reddit/Youtube frontends

- Calibre-web

- A Terraria server that no one has used in a few months

- A smattering of other utilities such as Samba, MQTT, ddclient for DDNS

I also have separate Raspberry Pis running Pi-Hole and Octoprint, and a Raspberry Pi Pico running software I wrote to control my Xmas lights.


Beelink with AMD Ryzen 7 3750H and 64GB running Home Assistant (with Caddy for external proxy), Sonarr[1], nzbget, Minecraft[2], Tailscale, Photoprism, and it also acts as a NAS[3] via a Yottamaster enclosure with LVM'd disks.

[1] Sucks up RAM, which meant a switch to Medusa, but the Medusa web interface chokes badly on Safari and it proved to be much less reliable at fetching episodes, hence the return to Sonarr

[2] Only running when people want to play, otherwise it also sucks RAM.

[3] Used to be a Zyxel but that got retired.


Very similar to my setup! I really like my Beelink, I was initially sceptical because it was so cheap.

I run Pi-Hole, Tailscale, Photoprism, Syncthing, and Terraria, amongst other things.


Oh, I forgot to mention Syncthing. Probably because it's one of those services that Just Works Quietly and almost never gives any trouble.


I got a single quiet box running Proxmox and it hosts all my private workloads. The workloads change from time to time, but the following have been persistent:

- router (pfSense) with wireguard VPN

- piHole for ad blocking

- Kubernetes cluster for running my own projects

- Cloudflare tunnel (on k8s) for exposing services to the internet

- Jenkins for CI and misc automation

- Samba shares for family members

- NFS share for K8s volumes

- Asterisk for telephony

It's great to have everything virtualized. Enables you to try new things easily. Also saves you electricity and space, and does not sound like having jet engine(s) in your home.


I have

- A Jenkins instance, for various purposes (cf. https://news.ycombinator.com/item?id=25391401)

- A webservice to download music, videos and store it locally for future use.

- Another webservice to store all my photos and access them.

- A TOR hidden service to access all the above when I am overseas.

It's worth noting that they are all Raspberry Pis to which 3.5" hard drives are hooked.


Great thread. My setup...

Hardware:

  Intel NUC, running Proxmox
  Synology DS920, 16 TB - used for storage and file services only. I avoid the Synology installed apps to keep this decoupled.
  A pile of custom Wemos devices - sprinkler controller, digital radio to listen to my weatherstation, landscape lighting control, sensors, etc.
  Zwave devices - light switches, mail delivery sensor, garage door status
  Security Cameras - been happy with the Amcrest ones
  Tablet - fixed display of Home Assistant status, using fully kiosk
  UPS

I gave up on Pi's after suffering too many SD card corruptions, nice for playing around, but not good for stable deployments in my experience. Didn't want to bother with the usb stick approach as by then I was on a NUC and it's been great.

Apps & Services:

  Home Assistant
  Mosquitto
  Plex
  Pi-hole
  Unifi
  Wiki.js
  sftp
  VPN Server
  Syncthing: primary use case is for syncing Obsidian vaults
  Veracrypt: encrypted store of info, keys, credentials, financials, etc.

Hosted services:

  $50/yr VPS in Netherlands - 1 TB storage, absurd amount of transfer
  Uptime monitoring - healthchecks.io

I follow the reddit selfhosted sub to learn about developments in this area.


Outbound bandwidth has always been limited out here in the sticks and our inbound has only been "high bandwidth" for the last few years. For a while I was employed as a contractor and had to run all my own development environments locally, and I still do, so I have more reason not to have an office at HQ.

- 2 x Xeon X5650s running Ubuntu as hypervisors and one acting as dnat router for our Starlink connection. Hypervisors include Asterisk, Invidious, staging mysql/dreamfactory, CIFS server. One trying to run yolov7 on docker with cuda on an external GPU. [edit] Forgot the Pihole kvm instance for all local DNS.

- 1 x GPD mini pc (4 core 8G Celeron n4100) with Homeassistant supervisor mode on Debian for Temp/humidity/pressure monitoring, AC compressor control, 110V AC Aqara Switches, remote Aqara buttons (outside lights, garbage disposal)

- 1 x Raspberry PI 4 iredmail running three SMTP/IMAP domains over Wireguard DNAT forwarding from Vultr public IP

- 1 x Raspberry PI 4 with 3x7TB USB drives for DLNA & CIFS

- 1 x Raspberry PI 4 with Octoprint on 3d printer

- 200AH lead acid battery bank with 12V/100A charger and 3kW inverter


My RPi is just a torrent box and media server. I use SMB to drop .torrent files downloaded from my Windows desktop directly to a directory on the Pi, and Transmission immediately starts downloading it.

My router forwards port 22 to it as well, so I can SSH to it remotely and then tunnel VNC connections to my desktop through it. It's exceptionally rare for me to do this, but it HAS come in handy before.


Locally:

Synology NAS - dockers:

  - monitoring: grafana, prometheus, alertmanager, bunch of exporters

  - consul + consul registrator (as service discovery for docker)

  - gitea (personal git repo, with some mirroring github repos)

  - media stuff (jackett, radarr, sonarr, plex)

  - ci (jenkins)

  - datastores (postgres, mongo, redis)

  - rutorrent

  - saltstack (config management for all home systems)

  - syncthing (currently testing to see if it's useful to me)

  - hashicorp vault (secrets for things)

  - pihole
Intel nuc:

vms running on vmware:

  - active directory (auth across all devices are managed with this)

  - kubernetes (microk8s)

  - pfsense (running a couple of vpn and private vlans for iot devices)

  - license server for some applications
kubernetes

  - istio for managing routing and creating a service mesh with the cloud

  - argocd (deployment management for kubernetes resources)

  - kafka (Strimzi operator)
unifi + dnsmasq

  - internal dns
Cloud:

  - DNS, public IPs for things that are exposed publicly


How easy is it to create a kubernetes cluster between local and cloud? Was thinking about doing this but was puzzled by how to bridge the network.


I have several computers serving useful stuff.

I just started playing with Umbrel on an old desktop machine I had laying around. Looks promising. I mainly want to run Bitcoin and Lightning nodes with it.

Pihole on a gen 1 raspberry pi

Syncthing on my desktop, laptop, and phone for sharing files between them all

Zerotier on all the above (I gather that wireguard essentially does the same thing, but I’ve been using zerotier since before wireguard was a thing, it’s really simple to set up)

Openssh server

Minidlna on my desktop for serving up music, photos, and video. Roku Media Player can play the media on my TV, and bubbleupnp plays it on my phone

Borg backup on an old Ubiquiti NVR, just because that box is small, quiet, and has a lot of storage.

Apache web server on my desktop that I’ve been running for roughly 20 years, but I don’t use it for much anymore. You can get to my music, ebooks, and other random not sensitive files if you know the secret URL. Oh, it also serves my arch Linux package mirror that all my other machines use, so I guess I do use it a lot!

EDIT: just remembered I have samba running on my desktop, mainly so my HP officejet can scan files and save them there.


Well, there's three, including the router. Altogether, they host Nextcloud for sharing files between devices and have a place for (one set of) backup files, minidlna for movies and music, an OpenVPN instance that has largely been replaced with Wireguard, several local git repositories, and a build environment for FreeBSD (including a cross-compile one for the Orange Pi also running FreeBSD).

The pi also hosts a Kerberos KDC and OpenLDAP, which I prefer for authentication and account management, not that I ever change my password, but just in case. I also use that for email, which is hosted on a cheap, but reliable VPS, connected to the home LAN via Wireguard.

Eventually I plan on adding Kerberized NFSv4 for a shared home directory across servers, and I might play around with a local ipfs cluster. Sadly, OpenAFS doesn't seem to build on FreeBSD these days, or I would be all about that.

At present, I have a Mastodon instance with its own database server on a free Oracle instance, but I will probably migrate the database to my in-house server, just to tidy things up.


I recently got a ~3-year-old Thinkstation with 72 ht cores and 64 GB of RAM for relatively cheap and it has become my home server. It's overkill but I could not resist it. I do not run much on it yet but I have it with FreeBSD 13 and:

* ZFS for secure and reliable document and photo storage. Exposed via Samba. Backed up via ZFS snapshots to a drive I keep in a safe and another drive I keep in the office.

* PostgreSQL for development of personal projects.

* minidlna to serve some videos to the living room Xbox.

* Local development. I connect to it via SSH to work on my Rust projects, all through tmux and Emacs. But...

* bhyve for VMs. VSCode doesn't like FreeBSD as a target, so when I want to use VSCode on the client machines, I connect to a very large Debian VM. I'm planning to run other VMs as well.

I do not serve anything publicly though, other than SSH. My personal projects are hosted on Azure.

The previous machine I used as a server (a 10+-year-old PC) is currently running pfSense and acting as my router + local DNS + DHCP server. I want to replace this with a small box instead but haven't had the chance yet.


Right now I'm using a Synology NAS and two Raspberry Pis.

The Pis host ddclient, Pi-hole + Gravity Sync, cloudflared and PiVPN (Wireguard). I'm also working on setting up Logstash + Filebeat but one of them needs to be formatted for it to work. As you might guess, its purpose is redundant DNS.

The NAS is running a bunch of different things. Portainer, Wireguard, Invidious, a DNS-over-HTTPs endpoint, ddclient, Kibana, and various instances of nginx hosting some of my sites.

I host ddclient and Wireguard in all my machines for redundancy - the bare minimum I need to maintain this setup remotely is being able to connect to my VPN, so keeping my dynamic DNS record up-to-date and having at least one VPN endpoint available is vital.

This year, I want to add more hosts (probably more Pis, and a few nVidia Jetson Nanos I have lying around) and some sort of service mesh to switch to Docker Swarm. Eventually, I might replace the NAS with a custom-built server - I want to do a backup of my Steam library, and I've found the NAS a bit lacking when it comes to virtual machines.


I have a box sitting under my router with my old gaming config inside: Ryzen 5 1600X, 32 GB RAM. So I have 12 cores to do basically whatever I want, connected to what is supposed to be 5 Gb/s fiber with a static IPv4.

I virtualize with Proxmox (very happy with it):

- my personal matrix instance (@Themoonisacheese:poggers.website)

- my static homepage (https://poggers.website)

- a minecraft server that rotates between modded and not whenever the crew feels like playing minecraft again

- a discord bot that plays music, since the big ones get taken down

- recently added a Magnetico host. Magnetico is a BitTorrent DHT explorer that finds publicly available torrents and indexes them. This enables me to stop relying on public torrenting sites.

My router (ISP provided, the name of the ISP is "Free") also does:

- router and firewall jobs

- SMB server, sharing the contents of a 1TB external disk

- BitTorrent client, downloading to said disk, which I can control from my phone or my browser. Both are registered as magnet link handlers so it's as seamless as a native client but works from anywhere using my phone.

- various file servers that I currently have off, but could turn on as needed

- DNS-level adblock (not pihole, just a built-in thing)

Over the years I have added and taken down services, such as a ShareX image host because I didn't use it, and various game servers that we no longer play. Notice also the complete lack of monitoring. I'm a sysadmin by day, so I'm familiar with monitoring tools, but I haven't felt the need. Some day I might deploy nagios+thruk for the fun of it. Notice also the lack of DNS. I have found that at this size I am able to recall IPs for everything that matters. Logins are also set in my VM templates and are standardized, so I don't use a central login solution.


At home, on my HP DL20 I have:

  Restic (backup)
  Samba + Jellyfin (movies + music)
  Nextcloud
  my Telegram photo bot (https://github.com/nmasse-itix/Telegram-Photo-Album-Bot)
  Aeneria (energy monitoring)
  Home Assistant
  Unifi Controller
  Gitea
  Tekton (my CI pipelines)
  Keycloak
  Minio
  Miniflux (RSS Reader)
  Mosquitto (MQTT broker)


HomeBridge really is a game changer for making smart home devices work with HomeKit while supporting user privacy.

Also: PiHole for ad and tracker blocking.


Plex, Photoprism, Calibre, ddclient (keeps my domains pointed at my non-static IP), Syncthing, 1-3 project services, and a Caddy reverse proxy container for most of the above. Also at times Minecraft/other game servers. I would figure out WireGuard for access but my router can run a VPN. About half run in Docker with Compose, on a cheap Windows HP business refurb.


The standard DNS and local network services, plus security cameras.

Being in California, I have tried to push as much as I can out of my home and into datacenters to save on power. That said storage is one of the most expensive things to do "in the cloud," so everything I have is backed up to my house where I can shove a ton of drives into a NAS and spin them down when idle.


I have a standard 42U server rack in my home with 7 2U servers in it now; all of them are retired E5 v3 generation machines from local datacenters, which are so cheap.

All my homelab services run on a single 2U Huawei rack server with dual E5-2680 v3, 128GB RAM, an Intel P3600 for the OS, and 120 TB of HDD for storage, with 10Gbps networking; the other servers are used for testing and messing around with.

The OS is Debian 10, and I deployed a single node k3s to deploy my services as the management is easier and I may scale them to more nodes in the future.

Major services:

- Ceph (rook-ceph) for managing the 120TB storage, and the CephFS is shared through Samba as the family NAS. It stores family videos and photos (through Nextcloud PV), Blu-ray movies, old games from the '90s to the pre-Steam era, emulator ROMs including MAME, Wii, GBA, PS, Xbox, etc. 2 replicas for not-so-important data like games and movies, 3 replicas for personal and family data, 5 replicas for very important data. And I'm so happy with Ceph, it's so stable and easy to extend compared to other solutions like RAID, Longhorn and GlusterFS.

- Nextcloud, I have 3 Nextcloud deployments in the same k8s cluster: one for my personal projects like design documents that can be shared with others, one for the family to view and upload family videos and photos, and one for my personal private data that isn't supposed to be shared with anyone.

- Home Assistant to monitor and control my home.

- Gitlab, stores all my personal project code.

- qbittorrent, for downloading and seeding torrents.

- Gitpod, as my main remote development environment, I can develop wherever I like!

The k3s uses CertManager to issue and renew Let's Encrypt certificates for my services, so I can access my homelab with HTTPS securely from outside.


What are you going to move to now that Gitpod has retired self hosting?


Haven't updated the installation for months so I just learned the unfortunate news from your comment. Guess I'll continue using it until it breaks, then maybe switch to vanilla VSCode Remote SSH, or try some alternatives like [coder](https://github.com/coder/coder).


Local DNS/DHCP (with a raspberry pi for failover), Zabbix for monitoring devices and troubleshooting Internet outages, Zoneminder as a front-end for the security cameras. But mainly, it's about my day-to-day, real-life, human-readable data and content, which I've tried to digitize as much as possible...

Email: Most of my email going back to the mid 90's. Available locally via IMAP or remotely via webmail.

Photos: new photos I take, plus scanned versions of old photos, organized by photographer and year. Shared via HTTP and DLNA

Audio: My entire music collection, in FLAC or MP3 format. Also a bunch of free music downloaded from Internet Archive and Musopen. Shared via HTTP and DLNA.

Video: All my home videos, including digitized Super8 films from my childhood. My entire collection of DVDs and VHS tapes (including shows taped off TV). Also, public domain movies downloaded from Internet Archive. Shared via HTTP and DLNA

Text: Scans of important documents, bills, bank statements, PDFs of the instruction manuals to most appliances and gadgets in the house. Scans of letters I received in college. eBooks. Interesting articles from the web or from periodicals. Public domain texts (Project Gutenberg, government publications, sheet music, textbooks). Shared via HTTP.

Software: ISOs of various old operating systems and applications. Netboot images for whatever OS I'm using on the desktop at the moment. Archives of old software. Drivers, ROMs, firmware, fonts. Shared via HTTP or TFTP where needed for OS installation over PXE.

While I've organized things in a pretty logical file hierarchy, I also use Yacy as a private search engine. It periodically crawls and indexes the file structure, categorizing results into collections such as 'books', 'manuals', 'music', etc. to make searching easier.


I would highly recommend paperless-ngx (organising scanned documents) and photoprism (organising photos). Both are excellent.

I have an FTP server running (open only to the local network) which allows me to have my scanner and camera upload directly to my server for ingestion into them. Similarly on my Android phone I have foldersync set up to send my photos to my server via sftp.


Which camera allows you to do that? I wish my Sony Alpha had that ability instead of using the horrendous app they require.


I have a Canon R5 but I think all the Canon DSLR/mirrorless models with WiFi have this ability too. It's a bit fiddly and only supports FTP/FTPS but it works.


Home Server: Raspberry Pi 4 with an HDD (I've had bad luck with trim support on USB adapters for SSDs)

Open source software: Home Assistant, Home Intent, pihole, miniflix, pinry, and bookstack.

Small self developed software: Chore tracker, Package tracker, Chromecast radio streamer, and a simplistic Trello board

I'm just happy that all that works as well as it does on a single pi with a few users.


> I've had bad luck with trim support on USB adapters for SSDs

Can you expand on that? I have also had trouble getting USB3 to SATA to work reliably.


VPN client (a VM whose only access to the WAN is through a pfsense VM)

A database of all the files on my NAS so I can query them quickly in SQL (it uses ffprobe to get video info - codecs, audio and sub tracks, etc. - plus an md5 of each file).
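Not the actual implementation, just a rough sketch of that kind of indexer, assuming ffprobe is on the PATH and a throwaway SQLite file:

  # index_nas.py - walk the NAS, store an md5 and ffprobe's stream info per file (sketch)
  import hashlib, os, sqlite3, subprocess

  ROOT = "/mnt/nas"   # hypothetical mount point
  db = sqlite3.connect("files.db")
  db.execute("CREATE TABLE IF NOT EXISTS files (path TEXT PRIMARY KEY, md5 TEXT, probe TEXT)")

  def md5sum(path):
      h = hashlib.md5()
      with open(path, "rb") as f:
          for chunk in iter(lambda: f.read(1 << 20), b""):
              h.update(chunk)
      return h.hexdigest()

  def probe(path):
      # ffprobe reports codecs, audio and subtitle tracks, etc. as JSON
      out = subprocess.run(["ffprobe", "-v", "quiet", "-print_format", "json",
                            "-show_streams", path], capture_output=True, text=True)
      return out.stdout or "{}"

  for dirpath, _, names in os.walk(ROOT):
      for name in names:
          p = os.path.join(dirpath, name)
          db.execute("INSERT OR REPLACE INTO files VALUES (?, ?, ?)", (p, md5sum(p), probe(p)))
  db.commit()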

A copy of the imdb database (it is still downloadable), so I can index my films, get suggestions for high rating movies I don't own, or by director, or get notifications when new episodes of my favorite tv shows are released

Incremental daily snapshot of my important files (with hardlinks when the file hasn't changed).

Failover mail server to receive emails when your vps is down or you need to migrate it

I took the habit of scanning every important document and physical mail I receive, they then get OCR-ed and sorted automatically.

Keep an eye on all sorts of things you can scrape from the internet. Prices for real estate transactions around you, stock prices, prices for certain computer hardware, be notified when a product comes back in stock, etc.

Podcast downloader


I have an unRAID tower, which is basically an all-in-one storage + application server with integrated Docker. I have a lot of file storage with writing/programming projects, important docs, and a whole lot of video/ebook/manga data hoarding. I don't watch much of it but my wife and roommate do.

For Linux ISOs I use Radarr + Sonarr with Jackett for torrent aggregation, all public sites. Mullvad with WireGuard gives great throughput over the VPN. I also use Requesterr, which allows people to add new items to the download queue via a Discord bot, which is great because trying to get my wife to use Sonarr was a lost cause.

I run a variety of game servers for friends off the machine, and local applications for myself. Ubiquiti's network manager for WiFi endpoints, redis/sql databases, and a backup manager.

I really want to get some home automation setup, so Home Assistant is probably next on my list.


I just upgraded my desktop computer for the first time since about 2010. I've been working on repurposing the old i5-750 based system as a home server. I picked up an old LSI raid card off eBay, and half a dozen budget 1TB SSD's which are setup in a RAID 6. So far, I have nginx with nextcloud, postfix for mail, and Jellyfin. And of course Samba for local file sharing. I also still have a pi 3b with a 1TB SSD running 64 bit ARM Ubuntu, which was my previous home server. It's still running DNS, which is actually setup split zone, so it serves not only my home network, but also public DNS for my domain. Nothing super fancy, but it works for me and has been fun to tinker with. I was able to recover a whole bunch of old media from a pile of drives in a storage tote that had been a RAID in a previous home server about ten years ago.


Mostly media server and home security related stuff.

Box 1: Truly ancient HP ProLiant N54L

  - Bastion/DMZ machine, router forwards all incoming traffic to it
  - NAS - JBOD that I cobbled together over the years, storage is only ~12TB, and mostly used for media
  - NGINX - proxies traffic to the rest of my network

Box 2: Some SFF Lenovo desktop

  - NVR, running zonemonitor
  - Dedicated storage for my home security cameras

Box 3: Some HP ProDesk Mini SFF

  - Homeassistant
  - Various docker containers running homeassistant-adjacent stuff

Box 4: Another HP ProDesk SFF, but with a 10th gen i3: media server

  - Everything in docker - Emby, Radarr, Sonarr, Transmission, etc.
  - Latest addition to the family, I deployed this a week ago. I have to say I'm mighty surprised by the performance of the i3 for media transcoding. It can do at least one 4k->720p transcode without even breaking a sweat.


I have a pretty elaborate media downloading setup. Though this is a cheap 16TB server that I got in the cloud for myself.

I use Radarr and Sonarr to automatically download movies and TV shows. I use mdblist.com to autogenerate movie lists based on criteria like IMDb rating, number of votes, etc., and use these lists directly in Radarr; one plus point is that these lists are auto-updated with new movies and Radarr automatically downloads them. I've configured Radarr/Sonarr to download items automatically with usenet and debrid (no direct torrents).

I have set up Plex, WebDAV (for Kodi), FTP and a generic HTTP server with basic authentication on the media folder. I have a 1Gbps unmetered connection and have added a bunch of my friends to Plex. I can easily stream 4K remux Blu-ray rips to my smart TV from the Plex server.


I'm confused about Radarr / Sonarr, is that what's directly sourcing the torrents you're downloading or are they being sourced from separate tracker(s)?


- Self hosted LanguageTool instance. Spell/grammar checker like Grammarly, but the self hosted version avoids the privacy problems.

- Write-only file upload under an obscure domain. A bit dangerous, but a life saver when someone wants to send you a 200MB file. "Just go to frequentrain.com and upload there". With desktop notifications on my side, text upload for quick note taking from any computer, etc.

- Custom RSS feed reader, and downloader for YouTube channels or websites that I like but fear may go offline.

- Cronjob for checking internet connectivity (to router, to DNS server, to nearby servers, to far away servers). Very useful for both troubleshooting and forcing the ISP to admit there's a problem. (Rough sketch after this list.)

- Fire-and-forget jobs. Long video transcoding, downloading large files from flaky servers, running long tests, etc.
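For the connectivity cronjob above, a minimal sketch (targets are placeholders; logging to CSV so outages are easy to show the ISP):

  # net_check.py - run from cron; one row per target per run
  import csv, subprocess, time

  TARGETS = ["192.168.1.1", "8.8.8.8", "1.1.1.1", "example.org"]  # router, DNS, near, far

  with open("/var/log/net_check.csv", "a", newline="") as f:
      w = csv.writer(f)
      for host in TARGETS:
          ok = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                              stdout=subprocess.DEVNULL).returncode == 0
          w.writerow([int(time.time()), host, "up" if ok else "DOWN"])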


> - Write-only file upload under an obscure domain. A bit dangerous, but a life saver when someone wants to send you a 200MB file. "Just go to frequentrain.com and upload there".

Keep in mind that DNS records, including subdomains, are public. Consider putting the service on a random page instead (e.g. "frequentrain.com/sunnysometimes") and/or using something simple like http basic auth if you're not already.

That said, this is a cool idea and I'll be stealing it :)


The code is all custom and there are no instructions on the page itself, so it's very unlikely a stranger would ever find it and figure it out. But if it gets popular, that's a good idea!

A few more details if you want to make your own:

- The public page accepts drag and drop anywhere for file upload, and has a text field that uploads its contents immediately upon Ctrl+v. Coupled with a short domain name, it's extremely low friction to upload files or notes from any computer. You can't see what you upload (write-only), so it's not useful to abuse.

- It also opens a Server-Sent Events channel and executes all incoming messages as Javascript code. I use it for redirecting users, sending files, showing text, and pranks. It's only a couple lines of code, massively flexible, and surprisingly reliable.

- The authenticated page can see a list of all uploaded files, along with links to download and delete them. A service worker shows desktop notifications when somebody uploads something.

- Finally, the authenticated page also has a live list of all visitors (SSE channels), with buttons to send them code to execute, or to make them authenticated sessions.

It's the kind of stuff that would horrify me if it was a commercial product, but works marvelously between friends and family.
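The real thing is custom, but a bare-bones sketch of the write-only upload part might look like this (Flask assumed; the SSE and notification bits are left out, and you'd want basic auth on the listing route):

  # drop.py - anyone can upload, nobody can browse except via the obscure listing path
  import os, secrets
  from flask import Flask, request, abort

  UPLOAD_DIR = "/srv/drop"          # hypothetical
  SECRET_PATH = "sunnysometimes"    # obscure listing path, per the suggestion above
  app = Flask(__name__)

  @app.post("/")
  def upload():
      f = request.files.get("file")
      if not f or not f.filename:
          abort(400)
      # random prefix so uploads can't overwrite each other
      f.save(os.path.join(UPLOAD_DIR, secrets.token_hex(8) + "_" + os.path.basename(f.filename)))
      return "ok\n"

  @app.get("/" + SECRET_PATH)
  def listing():
      return "\n".join(sorted(os.listdir(UPLOAD_DIR))) + "\n"

  if __name__ == "__main__":
      app.run(port=8080)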


On premise I have:

    - Fritz!Box 7530 AX (purchased router, connected to fiber endpoint hardware from ISP)
        - OpenVPN
        - Dynamic DNS update of domain name
    - Synology DS220+ (bought new with 2x4TB, running striped/RAID0, uses ~30W under load)
        - Download center with various RSS feeds
        - Plex
        - Primary media collection
        - Adguard in a Docker container
        - Google Cloud/... drive sync
        - Backup target for all non scheduled backups.
    - Synology DS1817+ (second hand, together with extension bay hold ~35TB worth of drives)
        - On a power cycle schedule + Wake-On-LAN (it uses 200W idle and power is expensive here)
        - Backup target of my entire DS220+ and all other devices where I can schedule backups.
        - Media archive
    - Raspberry Pi 4B (no SD card, boots via SSD over USB3)
        - Home Assistant OS
        - Used to run Adguard, but moved to NAS for stability (I play too much with the HA install)
        - Various hardware attached to read out smart meter, control lights. Hardwired as much as possible, Zigbee for wireless IOT stuff.
    - (Linux) Workstation
    - PiKVM attached to my workstation.
        - Not required because SSH, but useful since I use PiKVMs at work, so I have a test one at home.
    - Chromecast on TV
Off premise:

    - Hetzner AX41-NVME
        - Various websites for hobby projects (nginx+...)
        - Pterodactyl (Game server panel)
        - Backups to included 100GB Storage box and DS1817 via Borg
    - Backblaze
        - Important stuff gets an immutable backup here. Costs are currently <1$/month.
Future plans:

    - Replace 1G network with 2.5G
        - I can have symmetric gigabit fiber here, so I'd like to (ab)use it.
        - Requires a new router, I'm thinking of getting one of those NUCs with 6x2.5G.
        - May require a few new cables to be pulled, but the runs are so short I think the existing wiring will manage.
    - Setup cross backups between my and my parent's NASs to have a free off-site backup.


If you don't mind me asking, what did you use for the cross backups? I'm looking to set up some home servers at my house and my parents' house to get us all off Google, but want our data to be geographically replicated (terrified of a house fire destroying all our photos).



I've pared things down quite a bit lately; the setup here used to be:

- A Mac Mini, a Pi 4 with SSD and 8GB RAM and a Pi 3B+ forming a "multi-room" Kubernetes cluster with the idea that sensors and automation for each room (that had one) could be run locally in that room, including wired sensors, all connected by Kafka running on one of the Pis; that way I could just plug in another sensor and the local pod would know what to do with it. I never found a reliable way to allow pods to use serial devices and such, so while it's been fun, shifting priorities mean I had to scale it down to just the Pi 4 running HASS, Zigbee2MQTT, Mosquitto, NodeRed, uStreamer and a simple homebrew TTS API based on espeak. It's all docker-compose now, since docker works a lot more reliable with devices. It's a real shame, kustomize/flux/k8s is so much nicer for robustness and easy configuring (a second, different instance of my stack is running elsewhere, so I need that) and monitoring, debugging, ... is way more convenient, but I need those sensors to work and I can't spend as much time fiddling with it as I previously did, so it's docker-compose and Ansible for now.

- A Pi 3 running OctoPrint; since made redundant as my 3D printer (Prusa Mini) got the ability to print over LAN with the last update. The webcam I use to monitor it is now running via uStreamer on the Pi 4. It's going to become the brains for the Freenove Big Hexapod Robot kit I was gifted last Christmas.

- A Pi Zero W running a large-ish e-ink display with weather data and things like that. Still around, terribly useful, I'd love to have a second (but possibly one that doesn't take seconds to refresh...)

- A Synology running a Prometheus/Grafana stack, a MySQL, a docker registry for the Kubernetes cluster, Gitea with all the manifests and flux config so I could rebuild the Kubernetes cluster without internet access. Also still around, but I'll get rid of some of the stuff running on there. Reducing its attack surface and such.

And I'm using Tailscale like everyone else these days.


What eInk display are you using and in which case?


Waveshare epd7in5_V2 and no case, there's a smallish wood block on the left and right lower corners with a shallow cutout that the display slides into, so it's kinda wedged between the wood blocks. Looks kind of like a post card given its thinness.


- Home Assistant

I "smart home"-d my apartment a little and prioritized aesthetics/tactile feel over technical simplicity. So I have some Lutron light switches (proprietary protocol), a couple of Zigbee ceiling fans and an Ecobee thermostat. I'm not crazy about Home Assistant itself but it lets me bring all those things together and export it all to HomeKit, which the means by which everyone actually uses the stuff.

- Plex

Most of our media consumption these days is via streaming services. I use Plex as a DVR for over-the-air TV and it's mostly competent at it. I'd much rather be using Channels but it's a $60/year subscription (vs a $120 lifetime pass for Plex) and we just don't watch enough OTA TV to justify it.


I have an Intel Nuc running Ubuntu, and an ASUS router that I installed the WRT Merlin firmware on. The NUC is running K3S, which I installed to learn Kubernetes, mostly. I don't really need it, tbh. NGINX + apps as systemd services would work just fine.

The most useful thing I use it for is exposing the test environment on my laptop to a local DNS so I can test on mobile and other computers. It also serves as a location to point backups to. Additionally, I point a sub domain off my website to a small service I expose for file sharing with friends. It's not accepting requests all the time though. Just when I have something to put up there.

Would like to put movies and music on it eventually to stream to the rest of my network.


NixOS got me into homelabbing. Before nix, the maintenance burden of running a home server felt too high. Now I run lots of things, and the incremental cost of running one more service or app is low enough that I explore lots more.

Here’s what all I’m running:

- NixOS based router. NAT, DNS, DHCP.

- NixOS NAS + app server. ZFS for storage, and I run radarr, sonarr, NZBGet, nginx hosting various websites and reverse proxy to most of the services listed here, homebridge, zigbee2mqtt, Octoprint, grafana, and more.

Let’s encrypt/ACME certs for each virtual host, and it Just Works thanks to the NixOS nginx modules that make it super easy.

I highly recommend NixOS if you want to have a single service running a ton of different apps. It will keep you sane.


A fairly modest setup compared to some of the others. I used to run more but have reduced it a little.

  Hardware
    An HP Miniserver Gen7 with just 4GB of RAM
    120GB SSD for OS
    4 Disks (3,4,4 and 6 TB) No RAID
    4 External Disks to match the above for backup

  Software
    NFS for File Sharing
    sftp server for restic backups (also backing up to B2)
    KVM with 1 VM with prometheus and Grafana

  VM In Cloud
    postfix, rspamd, dovecot, mailman for email
    apache/mysql with various websites
Planning to add a recycled desktop as an extra server soon

    Wireguard
    Some more VMs for developing a website project
    Moved Prometheus, grafana


In service terms

- p.v. and home automation, suffering with Home Assistant and its design, but so far I've found nothing better (and a few even worse)... For now it serves as a mere web dashboard with a few controls and automations, limited to regulating hot water heating (for sanitary and home heating use) and limited car charging; in more practical terms, just switch-toggling via Shelly Pro 2/RasPi Zero GPIO etc;

- my mail system, a local mirror of a hosted one (classic IMAPs/SMTPs on my domain) with mere storage in maildirs, indexed with notmuch, synced to a few Emacs/EXWM desktops via muchsync over ssh; I plan to add a web frontend (MailPie or Modoboa or Roundcube, undecided so far) but have done nothing more than a few experiments so far;

- a limited prototype of home surveillance and communication: a few cams and a Netatmo doorbell. I plan to have a real SIP/RTP entryphone, but all I've found on sale are simply absurdly priced (700+€), plus more cams and sensors; so far I can just take a look remotely without cloud crap in between;

- raw file sharing (webdav); Nextcloud and the like are simply monsters in my sysadmin view, too much to be useful for my needs... I drop files individually into a cache-like tree and share links as needed;

- tmate terminal servers to sporadically support some family/friends who are behind NAT, when a full remote desktop is too much or not available;

- a small PBX wrapping a "commercial/physical" one, simply because finding a PCIe Asterisk card is equally expensive and also hard, while my ISP does not offer VoIP settings, forcing me to pick up its router's FXO...

- a TT-RSS instance for my feeds; I use them normally on the desktop but decided to move the instance to the server for casual on-the-go usage;

- a mere ssh + fwknop shell to access other stuff if needed from remote;

- home backup.

In hw terms, a single personally assembled small 4U Celeron-based machine with 8GB RAM and 4 SATA SSDs for storage. I have a spare mobo+CPU and I plan to buy a new machine, leaving the other as a real ready spare. No real redundancy so far, but also no real downtime risks...


I have a little home server sitting right behind me as I type this, ticking along.

It hosts:

* My personal site (http://samhuk.com). I use this mostly for just myself as a way to remember tasty recipes and such. It also has a tonne of functionality behind a login that I use.

* Samba server for a little NAS setup. I have to daily drive Win, Linux, and MacOS, so Samba was my go-to.

* CI/CD for some of my projects.

* A minecraft server that me and my friends sometimes play around on when we have a few drinks.

It has an old 8th gen intel i3 that I got for peanuts, 8TB WD datacenter HDD, etc., way more than enough for what I use it for. Could have done with a rpi honestly, but...rpi 4...


Pretty simple setup here at home.

One RPI4 running 24/7, hosting my files with encrypted ZFS on two external drives. I access the files through SFTP, Syncthing and Samba. It also runs various nightly backups to/from the cloud. Oh, and it runs Pi-Hole.

It runs OK, but is a little slow when transferring files because of the encryption. The max throughput is 20MB/s which is not awesome, but not terrible at the same time. I have spare machines around that I should use instead, but it works well enough that I can't be bothered to do it. I'm also a little worried about data corruption, because apparently that happens with ZFS without ECC ram, so...


Mine is a local one, running on an OrangePi Zero. Contains three things.

- DNSMasq for handling local hostnames, and caching external queries for reducing round trips and latency.

- SyncThing for aggregating a couple of computers, act as an always on file syncing endpoint. It also handles transferring/syncing files between home and work.

- qBittorrent client for downloading Linux ISOs and keeping them alive. I use a lot of VMs and replicate work environment at home for testing stuff, so having the ISOs and keeping them seedable is nice.

Since it's not that powerful, SFTP file transfers tax the processor a lot, so it also has a local, unencrypted FTP server for transferring files in and out.


A few things that are well-covered in this thread, but my personal favorite is PicoShare. It is a nice, simple self-hosted file sharing app that fixes a lot of issues with other solutions.

Email has size limitations. Many services don't like to share .exe files. Some will apply compression (unlisted youtube videos). I like this because it's a simple app that allows you to share files with others, and give them a link to share files to you.

https://github.com/mtlynch/picoshare

http://pico.rocks/


An Odroid HC4 with

Nginx+Lua (Openresty) to display the status of my smart plugs that let me turn on and off several devices. One is for the Odroid itself, to really turn it off remotely.

A docker container to let a friend connect over ssh and rsync and perform a disaster recovery incremental backup to a disk connected to the Odroid.

Samba to access movies, music and pictures with VLC from my Android devices.

A second Odroid has a disk that I use to make my backups and send the incremental disaster recovery copy to my friend. By the way, encryption is managed by rsyncing a reverse mounted gocryptfs file system. That is, I keep the plain text and the key, I send the encrypted view.
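For anyone curious, the reverse-mode trick boils down to this (paths and remote are hypothetical; a Python wrapper only to match the other sketches in this thread):

  # backup_encrypted.py - rsync an encrypted *view* of plaintext data (gocryptfs reverse mode)
  import subprocess

  PLAIN = "/data/important"        # plaintext stays on my side
  VIEW = "/mnt/encview"            # read-only encrypted view
  REMOTE = "friend:backups/me/"    # friend's box only ever sees ciphertext

  # gocryptfs -reverse exposes an encrypted view of an existing plaintext directory
  subprocess.run(["gocryptfs", "-reverse", PLAIN, VIEW], check=True)
  try:
      subprocess.run(["rsync", "-a", "--delete", VIEW + "/", REMOTE], check=True)
  finally:
      subprocess.run(["fusermount", "-u", VIEW], check=False)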


I currently run the following on my home server:

- AdGuard home - my DNS server as well as DNS wide ad blocker

- Nginx Proxy Manager - a reverse proxy for all my services

- n8n - for some automated / scheduled tasks (mainly to trigger a daily rebuild of my static blog)

- Portainer - to manage all the dockers

I also run a dedicated Omada controller on OC200 hardware.

I'm waiting for a dedicated hardware device that will act as DNS server + Wireguard server. Once it arrives, and I find some free time to convert my existing HTPC into a TrueNAS server, I plan to add the following services:

- paperless-ngx - For document management

- Photoprism / Immich - for photos management

- Plex/Jellyfin - Media streaming

- *arr - media downloading

- Gitea - local git server

- Calibre Web - Book management


MicroOS by Opensuse on a single machine with a few drives. Everything is executed in a docker container and managed optionally by portainer, which is also a container. NoIP Dynamic DNS client for domain name. Caddy for web proxy. Syncthing for backup of critical files. Code-server for coding on the go. File browser for sharing files. Gitea for git. Satisfactory server for gaming.

I run tailscale as not a container, but have had essentially no issues with anything. Spinning up new game servers is no problem. MicroOS updates daily and the immutability and container abstraction meet my needs perfectly.


Raspberry Pi 4B and RPi 3B - PiHole synced with Gravity Sync

- Wireguard

- Openspeedtest https://openspeedtest.com/ as a container to test random wifi speed issues

- CUPS

Experiments: - qbt for seeding ISOs


Tailscale (it's amazing for management when you're away, and avoids some of the pitfalls of exposing port 22 to the internet)

A goofy reddit bot

NAS (using SnapRAID). Mostly backing up YT videos that I like, home media, every SD card from a phone that I wanted to save but will probably never look at again, etc.

My personal website, which includes a basic hugo blog and some services I care about like Grafana (which tracks my HOA payments and the moisture levels of my plant. Sends alert when payment is due or my plant is too dry)

Home assistant (which I'm not really using, but would like to be more proactive about at some point)


A single PVE box runs on an old thin client with some extra ram and hosts:

  - A Unifi Controller for the single AP I own
  - A Portainer instance hosting
    - An HTML5 speed test utility for internal testing
    - A Murmur instance
  - A uTorrent-nox frontend
The Synology DS1520+ acts as storage for the above and all sorts of other things, but also runs a Syncthing instance that, in addition to a $2/mo RackNerd VPS, acts as a backup for select folders on various devices.

And though it isn't exactly a server, my router is a Protectli Vault FW2B running pfSense.


Technically I have 2 servers as we stand:

1. An old Raspberry Pi, which serves only one purpose: an ssh access server - a way for me to access my home network from the outside world.

2. Asus PN-40 mini PC with a custom 3D-printed stand with a fan for active cooling: I am a moderator of one of the biggest subreddits (in terms of submissions and comments it has been in the top 5 for quite some time now) and I run a couple of bots for automated moderation and general automation (checking for duplicates, keeping an eye on new users, and an NLP model that hunts for trolls and some other fancy stuff).


Emby, NextDNS, Mastodon, Navidrome, Gitea, RSS Bridge, Miniflux, ZNC, Klaxon, Wiki.js, Radicale, Jupyter.

It has a 1050 Ti, so I was using it for some gaming earlier, but I'm moving that to the Steam Deck now (got the dock for it).

I have some more details at https://captnemo.in/setup/homeserver/, and the source code at https://git.captnemo.in/nemo/nebula/


I have an Oracle Cloud Free VPS running Wireguard and Caddy that proxies all traffic to the server in my home: a basic, cheap Intel mini PC from Minisforum with 4GB RAM that runs a bunch of docker containers with caddy-docker-proxy (using docker labels for reverse proxying, like traefik, but simpler).

In that server I have the media server section set up with jellyfin, sonarr, radarr, prowlarr and qbittorrent, and the sysadmin section set up with portainer, glances, dozzle and filebrowser, using dashy for the homepage.

Also I host some apps that I developed myself there


Home Assistant is the most important service that I host myself. With this comes a slew of other services that make HASS as cloud-free as possible (Zigbee2MQTT, ZWaveJS2MQTT etc). After that it's a mix of testing containers and VMs on Proxmox, some of which help me learn new things or procure select media formats. I avoid hosting things that are unnecessary to host just for the sake of hosting, like the popular Vaultwarden or Nextcloud. I also have a personal Plex server that's just for the local network.


My home server is connected to my TV, so I can have a fully functional desktop environment when watching media. Probably the most useful thing on it is a little Arduino that's hooked up to a Django REST API that I can use to control my TV (lost the remote years ago).

Also the server itself is my ancient T530, which is still quite snappy on Arch!

Here's the remote code, for anyone interested: https://github.com/ijustlovemath/arduino-remote


I run two main devices:

- QNAP TS-253A-4G NAS. What has been a real game changer on this has been container station, which allows you to run any dockerised app. It is however a little underspecced for any real heavy lifting. It's basically the workhorse for file storage, media hosting and downloading.

- MeLE PCG35 Fanless Mini PC. This is a fairly new addition. It's a home web server (nginx) and runs my GitLab instance. Saves me about $20 a month in hosting costs by self-hosting instead (nothing important). It also runs Pi-hole. The OS is Debian.


I have some services:

- my portfolio website https://blmayer.dev with my mail server on the m. subdomain, a web interface on mail.

- one git host on https://derelict.garden with a git. subdomain.

I plan to add more services and a database to it. All hosted on a Pi Zero W. In the past I got 77 days of uptime. I think the mail part is the nicest: I can have as many emails as I want, so I create one for each website I sign up for.


Main server is a pretty wimpy i3 with some removable drive cages running FreeNAS Core. That runs various things like Plex, FoundryVTT and NextCloud in FreeBSD jails, as well as SMB services to the rest of the house. I used to run my own dnsmasq server with an adblock list, but gave in and set up a small Ubuntu box on a SFF server to run pihole in Docker. Turns out pihole makes some very Linux specific assumptions -- I couldn't really get it working in FreeBSD, and rpi hardware has been hard to come by.


- logitech media server (streaming for squeezebox devices, the .ogg files are also shared via samba)

- cgit (git webinterface pointing to git-bare repos that are accessible via ssh)

- some static webpages via nginx+zola

In the past:

- some prototype webapps hosting

- dnsmasq + tftpboot

- nagios


- adguard home (ad blocking)

- plausible analytics (open-source analytics)

- assistant relay (deprecated)

- duplicati (backups)

- changedetection.io (change monitoring for websites)

- ESPHome (control microcontrollers for smart-home)

- Homeassistant (home automation)

- PiAware SkyAware (flight tracking w/ an additional stick on my roof)

- Grafana (monitor the server)

- Shlink (url shortener)

- assorted other projects (like the backend for a modified version of the late Weather Timeline)


Apart from all the usual stuff (DNS, email, NFS, SMB, iSCSI, DHCP, HTTP, Wikis) the biggest thing is acting as a diskless server. There are many, many boxes here without disks in them that boot off it, from old 386/486s (NE2000s with boot ROMs) through far too many Pis to modern Intel and AMD boxes running various Linux and Windows versions (a nice thing about diskless: you can have literally as many different distros as you like on the same PC).

Longer term ambition is to get the PDP11's booting off it as well.


Installed on my NUC (all dockerized)

* adguard (I used to love pi-hole but adguard home is better) - ad-free LAN

* transmission that runs over vpn - for seeding linux iso

* homeassistant - automate my switches, monitoring

* zigbee2mqtt - zigbee to mqtt bridge

* eclipse mosquitto- mqtt broker for my zigbee devices

* node-red - integrated with home assistant to write some of my automations

* duplicati - backup all the data from all these service and upload to remote storage

* grafana - visualize my network/devices/services stats

* prometheus and several exporters

* nexus repository - to proxy external libraries

* wyze-bridge - so i can view my Wyze cameras in home assistant via rtsp

* tailscale


I have a 3 node (Raspberry Pis) k3s cluster with the following :

- Pi-Hole : https://github.com/manibatra/kube-pihole (might be a little rough around the edges, I have to push a few updates)

- Homebridge

- A custom Golang app to poll metrics from my Solar setup and export them as Prometheus metrics (rough sketch of the pattern after this list)

- Prometheus

- Grafana Mimir for long term metrics storage to S3

- Tailscale on all the nodes and all my machines so that I can access my homelab and Pi-Hole when I am on the move.

Bunch more things to add but it's a start!
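Theirs is Go, but the poll-and-export pattern is small enough to sketch in Python with prometheus_client; the inverter-reading function is a stand-in:

  # solar_exporter.py - poll an inverter and expose the reading as a Prometheus gauge (sketch)
  import time
  from prometheus_client import Gauge, start_http_server

  solar_watts = Gauge("solar_output_watts", "Current solar output in watts")

  def read_inverter():
      return 0.0   # stand-in for whatever API/serial protocol the real setup uses

  if __name__ == "__main__":
      start_http_server(9101)      # Prometheus scrapes this port
      while True:
          solar_watts.set(read_inverter())
          time.sleep(30)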


- pihole

- several DIY projects fetching data from the web in Docker containers (like scraping the servers of my car manufacturer for the trip/gas milage data)

- good amount of fast storage with 100gbit to main machine (don't use lots of local storage), using ksmbd for windows or NFS for Linux to access

- vm's for stuff which you don't want to run on your main machine (like needing windows xp)

- domotics with zigbee/zwave2mqtt in the back

- probably lots of things I forgot about until they break.

- remote backup location for the rest of the family to have off-site storage


I host a Mastodon instance, a Matrix server (using synapse and coturn), a Minecraft server for the kids, a server for an esoteric mod of Ultima Online called Ruins & Riches, and a Web server. I use Nginx for reverse proxying and Wireguard for VPN.

I also run a bunch of Raspberry Pis around the house connected to stereos to stream audio using AirPlay (with https://github.com/mikebrady/shairport-sync).


I'm mostly finished rebuilding mine and very happy with it.

Right now, it's just Plex & Nextcloud available within my network, but Home Assistant is next on the agenda to tie into my devices. I also have Dropbox keeping a couple of the folders in sync so I can drop files on any computer and they will come back to there. I'm looking into some Ubiquiti cameras & security too.

All of it is fronted by ngrok - with Google OAuth for auth - to make it available to the select people in the outside world.


Recently the water pipes in our barn exploded because of a hard freeze. None of us thought to leave the water trickling overnight as we should have. In response, I wrote a script that sends me an alert via ntfy that explicitly says "Leave water trickling, it will be freezing tonight", as well as "Roll up windows, rain expected". I use sqlite to track notifications already sent and not spam myself, and for historic forecasts in case I ever want to play with that data.
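A stripped-down sketch of that kind of script (the forecast lookup is a stand-in for whatever weather API is used; ntfy itself is just an HTTP POST):

  # freeze_alert.py - nightly cron job; topic and forecast source are placeholders
  import datetime, sqlite3, requests

  TOPIC = "https://ntfy.sh/my-weather-alerts"

  def tonights_low_f():
      return 28.0   # stand-in for a real forecast API call

  db = sqlite3.connect("alerts.db")
  db.execute("CREATE TABLE IF NOT EXISTS sent (day TEXT, kind TEXT, PRIMARY KEY (day, kind))")

  today = datetime.date.today().isoformat()
  if tonights_low_f() <= 32:
      if not db.execute("SELECT 1 FROM sent WHERE day=? AND kind='freeze'", (today,)).fetchone():
          requests.post(TOPIC, data="Leave water trickling, it will be freezing tonight",
                        headers={"Title": "Freeze warning"})
          db.execute("INSERT INTO sent VALUES (?, 'freeze')", (today,))
          db.commit()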


- Infra: Portainer, Caddy, Wireguard / Netmaker

- RSS: TT-RSS

- Files and Backup: Seafile, Duplicacy

- Search: Searx

- Automation: automation scripts (should probably replace by Home-Assistant), Telegraf, Influx, Grafana, Mosquitto

- Task and personal information management: Monica, Vikunja

- Development: Git / Fossil (trying to move to Fossil), VSCode server, Adminer, Nocodb

Actually not in a home server, since it is in the cloud, but still...


QuiteRSS is a great desktop program for reading (or more like aggregating) RSS, but I found myself needing an RSS reader available on more than one device. Thus I'm right now on a quest to find a self-hosted alternative. So far CommaFeed [0] wins. If you haven't already, start using RSS before this great method of content delivery gets crushed too.

[0]: https://github.com/Athou/commafeed


Backups. All my Linux computers back themselves up every night to the server. The server itself mounts the Windows PCs disks and backs those up. I use dirvish to do the backups.


That's my use too, except I use macOS, and I save downloaded assets (sample libraries, mainly). I sometimes use my iTunes backup as a "media streamer", but that's often too much of a hassle.


Running OpnSense at the edge (DNS, VPN), raspi as a print/backup server, Synology NAS for Plex/file sharing, a Philips Hue bridge (lighting) and a Ring bridge for lights/cameras. The raspi periodically backs up certain files from the NAS to an S3 bucket for long term retention. I'd like to go with HomeAssistant at some point with some custom Ring-MQTT/lighting apps for capturing video and "dissuading" some of the solicitors around here.


An old laptop (i3-380M, 4gb RAM, 150gb SSD + 1tb HDD), currently running:

    * syncthing, as a backup host;
    * photoprism, had to migrate from google photos;
    * docker registry for pet projects;
    * qbittorrent, which is mainly unused nowadays;
    * pihole;
    * homeassistant;
    * mosquitto;
    * vscode code-server.
On top of that, there's tailscale, nginx proxy manager and dirt-cheap VPS for exposing some services to the internet.


I only self-host a few things these days, so this is slimmed down from the dozens of services I ran a few years ago:

* Firewall/IDS (Debian on a bootleg 4-port mini-pc)

* NextCloud

* Public websites (hugo, jekyll, wordpress, mastodon, etc)

* Zoneminder

Basically, anything that holds long-lifetime data that I don't want to trust somebody else with I run at home. I've thought about moving my Masto & sites to a VPS or PaaS but it's just easier (imo) to just rsync shit onto a box. Plus, the NUC is already paid for :)


It's right behind me; unfortunately, due to a lacking ISP, I'm currently jumping through some hoops[1] to have it online.

I have OpenVPN and Docker; the rest runs as containers.

Mail (Postfix, Dovecot, Spamassassin)

Web: Nginx (serving sites and doing reverse-proxy for other containers)

Various containers running nodejs sites (served through nginx reverse proxy)

MySQL, Mongo, Apache/PHP,

Apparently a Minecraft server too.

[1] http://dusted.dk/pages/aWayOut/


Got my old PC up as a home server; probs overkill, but it was sitting collecting dust. Ryzen 2700X, Crosshair VII, 970 Evo Plus 500GB, 64GB RAM, an Nvidia Quadro P4000, and 32TB of WD Red as NAS storage.

Currently running all as VMs/LXC via Proxmox: AdGuard, Home Assistant, Plex, Sonarr/Radarr/Bazarr/Deluge, TrueNAS using HBA passthrough for my drives, Bitwarden, Prometheus and Grafana for monitoring, Traefik reverse proxy, Ubiquiti controller.


    - nginx (personal web pages)
    - weechat (irc bouncer)
    - syncthing instance (so there's always one online for my laptop or phone to access)
    - gonic (simple navidrome/subsonic alternative without the web UI)
    - backups (my own rsync+snapshot scripts since I found borg etc. too involved)
    - some uptime checkers (simple cron jobs + msmtp for sending email), and various one-off scrapers and scripts
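
The uptime checkers are only a few lines each; roughly this shape (URLs and recipient are placeholders, msmtp already configured), run from cron:

  import subprocess, urllib.request

  CHECKS = ["https://example.org", "https://example.net/health"]   # placeholders
  TO = "me@example.org"

  for url in CHECKS:
      try:
          urllib.request.urlopen(url, timeout=10)
      except Exception as e:
          # any timeout, connection error or HTTP error status counts as down
          mail = f"Subject: DOWN: {url}\n\n{e}\n"
          subprocess.run(["msmtp", TO], input=mail.encode(), check=True)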


Topton Intel N6005 fanless PC as router/server - only 10W, but quite capable

- Arch Linux

- just systemd/networkd for DHCP server/client, wireguard setup etc.

- router: own setup, just nftables, wireguard

- unbound DNS server (with blocklist for spam/ad domains) with encrypted DNS on uplink

- chronyd as timeserver

- mosquitto mqtt server

- custom jobs to post system stats to influx (rough sketch at the end of this list)

- few VMs for testing things

- syncthing for keeping keepass database and documents

- git server (gitolite, gitc web)

- backup to encrypted S3 storage (duplicacy, encfs)
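
The influx jobs are tiny; roughly this shape (endpoint and database name are placeholders, assuming the InfluxDB 1.x write API):

  import os, socket, requests

  INFLUX_WRITE = "http://influx.example:8086/write?db=stats"   # placeholder endpoint
  host = socket.gethostname()
  load1, load5, load15 = os.getloadavg()

  # InfluxDB line protocol: measurement,tag field=value,...
  line = f"system_load,host={host} load1={load1},load5={load5},load15={load15}"
  requests.post(INFLUX_WRITE, data=line).raise_for_status()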

Hetzner VM

- influxdb (host stats, plus temp/humidity sensors around the house)

- grafana


I have an old gaming laptop lying around, and have been using it as a server for the last 6 months, with the lid closed, lying upside-down, working well and happy.

Lenovo Ideapad Y50-70 (i7-4720HQ, 16 GB RAM, 500GB SSD, 4 TB Ext hdd via USB) mostly running docker services:

- Navidrome: serving flac files, replacing the need for spotify

- vaultwarden

- ghost blog

Because my ISP gave me a NAT'd IP, I had to set up a VPS and use it for reverse proxying, also docker-based:

- nginxproxymanager

and Tailscale to connect them all.


I've got a bunch of stuff, all running in Docker:

- Nextcloud for file syncing and sharing, syncing photos/contacts/calendar from my phone, and Joplin notes

- Pi-hole for ad-blocking

- Jellyfin for media

- Miniflux for RSS reader

- Tandoor Recipes for recipes and grocery list

- Archivebox for saving local copies of sites I want to keep in case they disappear someday

- Navidrome for music (this is still something I'm testing)

- Send (fork of the deceased Firefox Send)

- Grafana/InfluxDB plus some custom scripts for monitoring


In my home I use a Synology NAS for local home files, organizing pdf libraries, backing up GOG and steam games, mirroring google drive. I also backup my home folders with Vorta and Borg. I used to backup Fedora RPMs based on what I had installed, but since switching to openSUSE I haven't mirrored any of their repos. Thanks to capableweb's comment I'm going to go do that now!


Asterisk PBX (phone system) on a Raspberry Pi. OpenMediaVault and ownCloud (on Docker) on a Raspberry Pi.

Hopefully in the near future HomeAssistant on a Raspberry Pi


I don't have a home server, largely because I like preventing the devices on my home network from being able to talk to each other at all.


Yeah, I don't have a home server either. All of my devices are used independent of each other and just connect to the Internet.


VLANs


I have a server rack in my basement, with a few different machines. One is a storage server with a zfs array. Another is a 1u with lots of cores.

> What are you doing with your home server?

So much stuff. But the reason I have servers at home is not because it's practical, it's because I love computers and making them go fast. The ones in my basement are often used for performance experiments.


Ubuntu, postgres, gitea, dotnet core, pytorch + OpenAI Whisper, Obsidian vault, some code generator experiments & custom build tools.


Frigate for object detection on RTSP streams. It integrates with Home Assistant so you can include detections in automations.

Also Proxmox, Prosody (XMPP), AdGuard Home, Jellyfin, Paperless and Photoprism.

Also see this list for more ideas: https://github.com/awesome-selfhosted/awesome-selfhosted


Intel i3-8100 with a 6-disk RAIDZ2 SSD array, running …

A git repo which acts as a document backup/upstream for my company records and allows me to keep history/records across multiple other devices.

Git repos for various personal projects like my e-ink/rp2040 thing.

Time Machine backup from my MacBook (samba + some config).

DLNA media server

VPN server (strongswan) and associated certificate authority.

Errrr… that’s about it for now. Pretty boring!


Blocky DNS (pi-hole-esque) on a dedicated raspberry (model 2b)

Caddy web server, Git, Cgit, Miniflux, Radicale (calendar and contacts), Syncthing, Maddy mail server, Postgres, Restic backups

It also has a 16 square led matrix on top, visible through my window, in case I want to blast messages out there that way.

I change bits and pieces regularly, and have an overdue reminder to re-test restore from backup.


Technically I have 3 that do the following

  * Local arch Linux mirrors
  * Iptables/routes/rules/vlan tagging as a router
  * Vpn client
  * Ldap for authn
  * DNS
  * Kerberos for authz
  * Postfix/dovecot for email
  * Nfs for home directories
  * Git repos via vanilla ssh/bash scripts
  * Other temporary odds and ends.


Nothing fancy:

- local file (media) sharing
- VPN client
- torrent client
- slow file download

I plan to make it an automated build server.

In the past, it was also an access point.


I have a RaspberryPi Model 3 B+ with 2x 2TB HDDs running in RAID1.

It currently runs PiHole and Jellyfin (a media server).

Next on the list is to get a docker container to run qbittorrent + vpn, so I don't have to transfer the files across to my server from my main PC.

I'd also like to open up my file server with friends and family but not sure on the approach just yet...


mine's really vanilla.

a half width 1U supermicro, sitting on a filing cabinet in the office's closet, with 128 gig of ram. I put the free esxi 7 hypervisor on it. This has let me consolidate almost every weekend experiment down to a single host.

On it, I have an RKE2 cluster on ubuntu 22.04 VMs. Mostly this is so I can learn some kubernetes because the environment at work is so locked down, it actually helped me to just have my own.

I host a few other VMs, mostly just as scratch pads for shell/ruby and an MPD server for streaming a private web radio off my NAS music share to a raspberry pi that is always streaming random music into my office. I have a single volume knob amp that lets me just turn it up, or turn it down, and that's all I can do to pick what I am listening to. It's cured my decision fatigue for music, and I really recommend it.

It's way better than 20 years ago, when it was a museum of clutter.


Jellyfin and Emby: media servers

nginx-rtmp: personal streaming server

Navidrome and MPD: music streaming

GMediaRender: UPnP/DLNA renderer


RPi4 8GB with 9TB of SSD attached running Yunohost, sitting behind a pfSense firewall. About a dozen people have accounts, and we have these apps installed:

  - Nextcloud with the full suite of apps inside to cover most internet service needs
  - Mastodon
  - Synapse (Matrix)
  - Metronome (XMPP)
  - Borg backup


Lots of virtualization, for what varies by the day.

Usually testing something -- automating building the latest trend, or getting an understanding for why a particular upgrade went poorly.

My longstanding VMs are... a general purpose 'game server', a couple DNS resolvers (spread among two nodes), and a password manager


Dell 720R running Proxmox.

various VMs and containers:

  - pfSense
  - pihole
  - Plex
  - Homeassistant
  - Subsonic (music server)
  - virtual desktops for various purposes (keeps things isolated)
  - syncthing (to move backups offsite)
  - guacamole
  - ephemeral VMs for testing/learning
  - ~25TB of storage in various RAID configurations for media and backups


  OPNSense
  Pihole
  Wireguard (server)
  Wireguard (client)
  Jellyfin
  LMS
  Unifi Controller
  Syncthing
  MiniDLNA
  CouchDB (for noteself back-end)
  Mailu
  Zoneminder
  Nginx in front of anything https
  Debian virtual desktop
Most are in docker containers in Proxmox VMs


  - Samba server
  - Nginx reverse proxy
  - Wordpress
  - Nextcloud
  - Mediawiki
  - Jellyfin media server
  - FreshRSS client
  - RustDesk server (like Teamviewer/AnyDesk)
  - restic + Backblaze for cheap daily backups
  - Octoprint server for 3D printing
  - Deluge + WebUI torrent daemon
  - Syncthing


On my homeserver (a Raspberry Pi), really just OpenMediaVault to manage data backups, Jellyfin for local media streaming and a Kiwix server which serves local copies of a number of services like Wikipedia.

I have a VPS where I "self" host more stuff, like Nextcloud, Mealie, Paperless, etc.


I have always wanted to do the home lab server thingy. I started long back with a few computers[1] in the attic of my first Office. I graduated to a simple wooden rack[2] with routers, storage, and cables. Recently, at the onset of the COVID-19 Pandemic, I got a durable 24U server rack[3] for a throw-away price from a friend as he had to shutter his Startup. I even asked myself, "Does it come in Black?" I spent about 30-min or so a day for a few weeks just spray-painting it black.

I had all the plans and started tinkering - a few old laptops as servers, Pi-Hole, and a 2012 MacMini to serve media to the family members, etc. Unfortunately, this is a very addictive hobby and may rival or even surpass photography. I have stopped playing with any additional (wrong focus for now) experiments. A few of my friends have seen and know about my hobby, so I have been on the receiving end (free) of older laptops and quite a bunch of Raspberry Pis[4]. Btw, many companies seem to be gifting Raspberry Pis to their team members to encourage them to tinker, or something along those lines!

So far, only some basic operations work: internet load balancer + bonding, backup of photos (since 2001), replication of a local copy of Dropbox, simple media storage, and syncthing replicating copies of my development environment and files.

Yes, one-day, that one-day will come when I can just keep playing with these. I even had the led strip (now removed and back to my defaults) light up when I was in the zone. :-)

1. https://www.dropbox.com/s/2onquaoc4ob7mpm/F360559208.jpeg?dl...

2. https://www.dropbox.com/s/w0gf6mq8s4dze7g/IMG_1825.jpeg?dl=0

3. https://www.dropbox.com/s/21hsrj7k5e5t687/IMG_0003.jpeg?dl=0

4. https://www.dropbox.com/s/uhnknwutlgvre9s/IMG_1112.jpeg?dl=0

All: https://www.dropbox.com/sh/kumyb9accyae1g9/AACAOt9a8VEnUHLpC...


TrueNAS.

Has a jail running InfluxDB, Grafana, and Mosquitto to monitor the TrueNAS metrics and my IoT sensors.

Has an Ubuntu VM for the TP-Link Omada management software. (It's not terrible but I wouldn't necessarily recommend it)

I used to have more ambitions but installing and maintaining software has become a chore.


Great list. I just got myself a RPI and am planning to set it up over the weekend mainly for pihole in my internal network. I've never done this before and wanted to know how one would expose the machine over the internet (and whether that's a wise idea at all).


My home server is currently running:

- Nextcloud (for normal collection of files etc)

- Home Assistant

- Zwave JS (for Home Assistant)

- Couple of Minecraft servers

- Pihole for DNS based tracking and ad blocking

- Not continuously running but in use: Unison backups for backing up my photo archive disks hooked to my laptop. Feels like the best way to sync changes on big disks.


I will probably get downvoted for this (unless, as I expect, the vast majority are in the same boat), but my answer is: nothing. I don't mean this to discredit people who actually run home servers, just sharing my own experience.

I have a "prosumer" grade £120 ASUS gigabit router, which has a perfectly adequate settings UI, firewall, port forwarding, and supports NAS via a USB HDD, or 4G failover if I ever happen to need it. No need to overpay for fancy business/enterprise grade networking equipment - plug it in, and it works.

Photos/videos are stored in Google Photos for £1.99/mo - I'd happily pay more if I exceed the 200gb limit I currently have iirc, but most of this is junk I won't ever access again that got backed up from my phone, eg screenshots that I took to send in Messenger/WhatsApp and forgot to delete afterwards.

Pretty much anything else that I do with a computer only has to be done when the computer is turned on. Having a separate server running, drawing 100-200W at all times (even when we're in the office or sleeping, which is >2/3rds of the day), seems like a massive waste of money considering current energy prices. 100W 24/7 for a year costs £300 at the current energy price cap (which most tariffs are currently at or around). I can't remember the last time my partner and I needed simultaneous access to the same files via the network, or I needed instantaneous access to a file on my MacBook or Android phone that is on my Windows/Fedora desktop, so that's another reason I don't see any need for a NAS or server.
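
For the curious, that figure is straightforward arithmetic at the ~34p/kWh unit rate most tariffs sit at under the current cap:

  100 W x 24 h x 365 days = 876 kWh/year
  876 kWh x ~£0.34/kWh   ≈ £298/year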

Wrt movies (Plex/DLNA/Kodi) there's very few films I like enough to want to watch again, that aren't included in Netflix, Amazon, or Disney's library. Hence there are very few that I own physical disc copies of, and I can just play the disc on the PS5, or my PC's bluray writer, or the upstairs TV, so no need to back these up onto a HDD or stream around the house. Any content that is already on my PC just requires me to switch it on (via either Wake on LAN, walking 10m upstairs and pressing a button, or bluetooth switchbot) temporarily to stream to my TV via a DLNA server. No need to leave anything running permanently.

Really the only way I can see home servers making sense, is if running the server itself is your hobby. The added convenience is almost zero, and for me wouldn't be worth it considering the £300/yr electricity cost and £300-£1000 initial outlay I can imagine for decent hardware and a few TB of drives if you don't have an old PC lying around. Plus all the time spent configuring, monitoring, and maintaining - most on this site do enough of that at work already, and I have enough hobbies as it is ;D


It isn't just for cost savings, but it is also for privacy, convenience, and sanity. For example:

Do I really trust Google/Apple with all of my photos?

Do I really trust lastpass/onepass/... with all my passwords?

Do I really trust Amazon/Google to monitor my home cameras to give me notifications?

Do I lose my mind every time feedly changes their interface or nags at me for upsells?

Do I get frustrated when my favorite shows (Star Trek/Psych) suddenly leave Hulu or Prime?

Also, I don't know how anyone survives using the internet without pihole (or similar dns blocking), especially on an iphone!

For me, knowing my data is in my control, not having software constantly change or nag me, and always being able to access what I want is incredibly important to me (and my SO).

I spend minimal time administering my server. Usually I spend about 2-3 hours a quarter to update all my docker images, but other than that, it all runs quite flawlessly.


> The added convenience is almost zero, and for me wouldn't be worth it considering the £300/yr electricity cost and £300-£1000 initial outlay I can imagine for decent hardware and a few TB of drives if you don't have an old PC lying around. Plus all the time spent configuring

My two Raspberry Pi Zero Ws cost $10 in total - which is less than what you pay Google annually. Adding 500GB SSDs & electricity costs (cents per kWh) still has my amortized costs below your annual 200GB Google bill. Your mental image of a "home server" is rather pricey


Why are you assuming 100-200W power draw? That seems way too high when an old laptop would only use like 15W. My ARM server sips power and runs everything I throw at it.


Simple little thing I have started doing. Writing my home server's ip address to a text file on my Dropbox account. I also have a text file of whitelisted ips on Dropbox I check and add to the server's firewall if necessary. All made simple by rclone.
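
It boils down to a couple of lines run from cron (this sketch grabs the WAN address; remote name and paths are placeholders, and an rclone remote for Dropbox has to be set up beforehand):

  import subprocess, urllib.request

  # current public IP from a plain-text "what's my IP" service
  ip = urllib.request.urlopen("https://icanhazip.com", timeout=10).read().decode().strip()

  with open("/tmp/server-ip.txt", "w") as f:
      f.write(ip + "\n")

  # push it to Dropbox through the preconfigured rclone remote
  subprocess.run(["rclone", "copy", "/tmp/server-ip.txt", "dropbox:server"], check=True)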


Some websites! I still find it very cool to serve web traffic to people around the world (for low traffic sites) from the closet.

Also, I think it will be easier to teach my daughters (in a few years) how the web works by showing them the physical thing in operation.


I've got a 20 TB self-built RAIDZ2 (ZFS RAID with two-disk redundancy) plex media server that hosts my media collection.

Also serves to store backup snapshots/images for my family's devices, as well as a family SMB share to move files between devices.


Can I ask a couple questions about this server? Hardware config? Software? (OS + any monitoring) I am curious about ZFS but have never implemented it, are there good administration resources for whatever your setup is? my account at gmail if you prefer to reply directly. TIA


Sure!

It's a repurposed gaming PC, so a 1st Gen Ryzen 1800X, 32 GB RAM. Took out the good GPU and bought the cheapest card I could find to support a tiny monitor for maintenance, but generally I manage it from my primary Windows machine with X2Go (just a remote X server/client over SSH for Windows), and ssh directly if I'm on one of my Linux boxes.

I use this SATA expander card to fit 8 4TB HDDs (https://www.amazon.com/gp/product/B008J49G9A/ref=ppx_yo_dt_b...) for the main array, and have a 64 GB SATA SSD for the OS.

OS is Arch Linux, which I'm partial to for the AUR and documentation. For the root drive I just use ext4. I initially got started with Arch's ZFS wiki pages: https://wiki.archlinux.org/title/ZFS https://wiki.archlinux.org/title/ZFS/Virtual_disks

Note if you go this route, I've had bad luck with the DKMS builds of ZFS where they sometimes fail to install during kernel upgrades, so I use the binaries from the archzfs repository. Sometimes I have to wait for the repo maintainer to build new packages to keep up with Arch kernel releases, which is annoying, but it's preferable to having to get a broken system back up and running. I've also had bad luck with ZFS-on-Linux native encryption, with my system intermittently seg-faulting when reading from an encrypted dataset under heavy load. So I run ZFS on top of LUKS/dm-crypt (the LUKS volumes are unlocked at boot, then the ZFS pools are imported). However I suspect this is a hardware peculiarity with my system, as I can't find any mention of the same issue online. So YMMV

Some resources for dm-crypt/LUKS: https://wiki.archlinux.org/title/Dm-crypt https://wiki.archlinux.org/title/Dm-crypt/Device_encryption https://wiki.archlinux.org/title/Dm-crypt/System_configurati... https://wiki.archlinux.org/title/Dm-crypt/Encrypting_an_enti...

Hope it helps, with the exception of the native encryption issue I found ZFS to be remarkable easy to use and bombproof so far. Have fun!


Sorry for the delay - thanks!


I run TeslaMate to see what my car is doing, Homebridge to bring extra devices into Homekit and Uptime Kuma to monitor various things and alert me if they go down. Also a bunch of custom scripts to archive things and the usual NVR and Plex apps.


On this topic, what is the best out-of-the-box solution for home server-lite setups? I used Synology a bit at work back in the day. Anybody else better for any reason? (Such as storage speed, application support, user interface, security)


I just got a generous amount of credit with one big cloud provider from my job, so I decommissioned almost everything inside my house. The only thing still on-prem is an RPi1 running ADS-B feeder software, connected to a home-made cantenna.


I run Nolific on my Raspberry Pi, giving me a place to store all my random writing. It’s my super simple second brain tool that runs in a browser tab.

https://www.nolific.com


Caddy to reverse proxy everything. Vaultwarden. Jellyfin. Nextcloud. Miniflux. Samba.


On a RaspberryPi 4b I have installed in my living room:

- Wireguard

- A Python script I created myself for providing dynamic DNS à la DynDNS (a rough sketch of the idea follows this list).

- Bind9 for the split horizon DNS.
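
The dynamic DNS part is essentially "notice the public IP changed, push a record update"; a stripped-down sketch (hostname, name server and key path are placeholders; this variant shells out to nsupdate with a TSIG key):

  import pathlib, subprocess, urllib.request

  HOSTNAME = "home.example.org"                    # placeholder record
  STATE = pathlib.Path("/var/lib/ddns/last-ip")

  ip = urllib.request.urlopen("https://icanhazip.com", timeout=10).read().decode().strip()
  if STATE.exists() and STATE.read_text().strip() == ip:
      raise SystemExit(0)                          # unchanged, nothing to do

  commands = "\n".join([
      "server ns.example.org",                     # whichever server is authoritative for the name
      f"update delete {HOSTNAME} A",
      f"update add {HOSTNAME} 300 A {ip}",
      "send",
  ]) + "\n"
  subprocess.run(["nsupdate", "-k", "/etc/ddns.key"], input=commands.encode(), check=True)
  STATE.write_text(ip + "\n")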

To be added at some point later this year: Self hosted bitwarden (vaultwarden or the official one, I have yet to decide)

Edit: formatting


Plex, torrents, adguard, irssi

I used to host a Valheim server as well. I was experimenting with Miniflux rss reader yesterday so I'll probably add that in the near future. I have a Synology NAS that strictly handles files and backups.


HPE Microserver Gen8

- Jellyfin: Netflix alternative for local files

- Nextcloud: Mainly used as Dropbox alternative

- Borgbackup: Deduplicating backups to my other Microserver at my parents' home

TP Link AC1200 router

- Running OpenWRT with a dedicated, isolated WiFi for "smart" devices

Raspberry Pi - Pi hole


Great question, as I've started running a home server (and gaming computer) recently. A corollary: what anti-malware software do peeps use now on Windows? I'm interested in both prevention and cleanup.


I would strongly recommend /r/selfhosted [1] for this topic.

[1] https://old.reddit.com/r/selfhosted/


  - vaultwarden as password manager
  - tt-rss
  - rainloop
 for webcal & caldav :
  - baikal
  - caldavzap & carddavzap (https://inf-it.com/open-source/clients/)


DNSSEC-secured DNS with PowerDNS. Federated XMPP chat with Prosody. Email with Postfix and Dovecot. Prometheus keeping track of it all, and Grafana to draw some pretty graphs for how it's all going.


For my router PC, I run vyos [1]. Dinky little 9th gen Core i3 machine, but consumes <9W and easily handles gigabit throughput, even with Wireguard. For those in the US with an AT&T fiber GPON ONT, vyos' 802.1x EAP-TLS implementation is complete enough that the certs can just be imported and things will work without obscure hacks (for those not familiar with AT&T fiber, this is to connect your own router directly to the fiber ONT, bypassing the bundled router).

Everything else is on a separate server running Fedora. I used to run applications inside k3s, but got tired of the constant 10% CPU usage from the control plane trying to keep track of its state. Also didn't like that the containers I used weren't being rebuilt for security updates in packages (eg. openssl). Now, I just use plain old RPMs inside VMs.

The stuff I run:

* [Host] ssh: For CLI access and git hosting (just `git init --bare`; no fancy UIs like Gitea or Gitlab).

* [Host] samba: For all file access.

* [Host] zrepl: For zfs snapshot replication.

* [Host] syncthing: For syncing KeePass databases. Also for syncing pictures from my phone.

* [VM] jellyfin: For easy access to rips of my physical media collection. VFIO is very unstable with the 13th gen iGPU on my hardware, so I use a custom ffmpeg wrapper that SSHes back into the host for remote transcoding in a bwrap sandbox.

* [VM] pdns-auth + dnsdist: The powerdns authoritative server handles the internal DNS. I have DNSSEC set up and SSHFP records for everything, so I don't need to worry about ~/.ssh/known_hosts. All of my computers that rely on this run systemd-resolved and do the DNSSEC validation locally. I run a daemon [2] on every host, which handles pushing new A/AAAA records via TSIG signed updates to the DNS server.

dnsdist acts as a proxy which only allows access to the specific TXT record needed for ACME DNS challenges originating from Let's Encrypt's IP addresses. I may switch to a custom CA in the future when the name constraints x509 extension is better supported (so that a custom CA wouldn't be able to issue trusted certificates for domains that aren't mine).
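
(Aside, for anyone who hasn't seen one: a TSIG-signed dynamic update like the ones that daemon pushes looks roughly like this with dnspython; the zone, key and addresses below are made up.)

  import dns.query
  import dns.tsigkeyring
  import dns.update

  keyring = dns.tsigkeyring.from_text({
      "ddns-key.": "MTIzNDU2Nzg5MGFiY2RlZg=="       # placeholder base64 TSIG secret
  })

  update = dns.update.Update("home.example.", keyring=keyring, keyname="ddns-key.")
  update.replace("myhost", 300, "A", "192.0.2.10")  # replace any existing A record for myhost.home.example.

  response = dns.query.tcp(update, "192.0.2.1")     # the authoritative server
  print(response.rcode())                           # 0 == NOERROR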

* [VM] miniflux: For RSS.

* [VM] unifi-controller: For managing Ubiquiti wireless APs.

[1] https://vyos.org/

[2] https://github.com/chenxiaolong/ddns-updater


Do you have more information about GPON from AT&T? Here in Germany I have seen only authentication based on the GPON SFP serial number and not with EAP-TLS.


Yeah, AT&T unfortunately tries really hard to make their customers use the provided routers. For customers on the older GPON network (like myself), they provide an ONT and a router. The ONT will not communicate until the router authenticates with EAP-TLS. The ONT has some additional funkiness, like requiring the EAP-TLS ethernet frames to be tagged with VLAN ID 0.

Some folks work around this by using a ONT <-> dumb/unmanaged switch <-> router setup. They'll plug in the provided router, wait for it to authenticate, disconnect it, and then plug in their own router. The dumb switch will keep the link alive from the ONT's point of view. Works well, though it is annoying to have to redo the procedure whenever the power goes out. For the lucky folks whose provided routers are easily jailbreakable, we can extract the EAP-TLS certs and configure our own routers to authenticate directly.

On their newer XGS-PON network, some folks found out that the EAP-TLS isn't even enforced by the ISP's network--it's enforced locally by the ONT. So you can buy your own SFP ONT module that doesn't support the OMCI commands for enabling EAP-TLS, spoof the SFP serial number, and get connected.


There is a Yunohost install with Nextcloud, Monitorix, a Yourls install, Monica and a LimeSurvey, plus various tools I'm not using that much (Searx and various RSS readers).


Proxmox, Nextcloud (I fully integrate it with my phone), rtorrent + rutorrent, SMB server, ZFS, a home cooked gandi dyndns workaround, Sandstorm for hosting Wekan, Rocket.Chat


Two PCs:

- First one

In the basement, no monitor, connected over WiFi, running docker-compose: websites, WireGuard, SOCKS proxy, etc.

- Second one

In my room, with a monitor, connected over LAN + multiple external hard drives: Plex, qbittorrent.


2x 14TB drives in a ZFS mirror for Linux ISOs. 128 GB RAM for the filesystem and the VMs I use for some work stuff.

All hosted with proxmox. A great homelab distro for VMs and persistent containers.


My Home server (Thinkpad x260 with 1TB storage and 16GB mem) serves nextcloud, wireguard, minecraft, gitlab and gitlab runners. Oh, and Grafana and Prometheus. And pihole.


I run a bunch of VMs on ESXi:

Vaultwarden, DNS, node-red, mqtt, zigbee2mqtt, nginx reverse proxy, unifi controller, truenas passing a SAS controller, plex passing an nvidia quadro + more.


These days it's mostly my syncthing central repo, and it runs a weekly restic backup to Backblaze of mine and my wife's files (which in turn mostly backs up photos).


Most recently, hosting Mastodon has been great (single user).


I used to love having a server at home but I've been running less and less on it; now I'm just worried about potential security holes and will probably stop it.


I bought a Synology and left it at that. I used to DIY everything but at some point the time spent wasn't worth it, and Synology is just so damn good.


Proxmox with a Windows x64 workstation VM, some Linux test VMs, and some other various test images. Also have Blue Iris and Homeassistant on there.


No one has mentioned flight tracking. Add an ADS-B dongle and antenna to track flights around you and optionally feed them to flight tracking sites.


Raspberry Pi 4 running Home Assistant with AdGuard as an add-on. I do have a handful of other add-ons installed, but nothing else I actively use.


Fairly sparse at the moment

- pihole

- Ubiquiti Connect

- home assistant

- kodi

- openvpn (via router soon to be replaced by wireguard)

My plan for this year is to setup a Proxmox server and grow the number of self hosted services considerably.


Automatic compiling for my code and book hobby projects. So that there's always a latest version that I can open via file sharing.


I'm curious if there are folks designing ~offgrid, ultra-low-power homelabs: the most efficient redundant storage with a bit of compute.


Email (mutt), webpage (nginx), newsbeuter (news), Home Automation (Home Assistant) , webfiltering (pi-hole) and Gnosis Chain


* Internet search engine including custom crawler

* Wikipedia mirror (down for maintenance)

* Blog served over https and gemini

* Various other bits and bobs I've built


Media backups and security camera footage. And the first tier of computer backups which get sync’d to the cloud.


Security cameras are currently the only thing I actually have running at home.

I'd like to do SFTP backups eventually.


Old laptop server: Nextcloud, Gitea, hledger, calibre, some custom apps

EDIT: oh and pi-hole on a separate Odroid-C2


Debian running Caddy web server, proxying some homemade Python apps for music files management.


- Sonarqube

- Github Actions agent for Linux

- Tons of Elasticsearch (6x)

- NVR software

- Haproxy (2x)

- Scvmm

- Unifi controller

- Sonarr, Radarr, Jackett, Transmission, Ombi

- Nextcloud

- Nginx for work stuff (2x)

- Plex

- Various Windows dev boxes

- Mikrotik CHR

- Github Actions agent for Windows

- Librenms

- Docker registry

- Openhab

- Zeek and Suricata

- Syslog-ng

- Windows domain controller (2x)

I'm about to run out of RAM :(


My websites, PiHole, media server, print server, sometimes a Minecraft server


I rely on docker-compose, Traefik, and a few simple shell scripts for most of my setup. Each service lives in its own directory with its own data/ and config/ directories. Overall, self-hosting has been far less work than I expected. Generally speaking, things don't break and are quite easy to get up and running. I will put * beside strongly recommended services.

- changedetection: monitors webpages for changes and sends you notifications on multiple streams when they change. This is the service I have found the most awkward and least useful! (https://github.com/dgtlmoon/changedetection.io)

* ghost for blogging (https://ghost.org/)

- gotify for notifications to my phone (https://gotify.net/)

- grafana for streaming logs and metrics from these services (https://grafana.com/)

- heimdall for a vanity dashboard (https://heimdall.site/)

- homeassistant for all home automation needs (https://www.home-assistant.io/)

- matrix synapse for communication (https://matrix.org/)

* mealie for recipes and meal planning (https://mealie.io/)

- photoprism for photo storage (https://photoprism.app/)

- plausible for privacy respecting analytics (https://plausible.io/)

* portainer to do light admin on these containers (https://www.portainer.io/)

* send to replace wetransfer (https://gitlab.com/timvisee/send)

- splunk for logs/visualizations (https://www.splunk.com/)

* traefik to handle all the routing to the containers. When it doesn't work, it's very awkward to fix, but it almost always works just as you expect it to (https://traefik.io/)

* vaultwarden as a password manager (https://github.com/dani-garcia/vaultwarden)

* vikunja an incredible todo list service. I cannot recommend this highly enough! (https://vikunja.io/)

* wallabag a straightforward article saver/reader (https://www.wallabag.it/)

I have also hosted Mastodon in the past, and while it was easy to host, it would eat storage space too quickly for my small setup. It also doesn't lend itself well to being a single-user instance in my experience as it makes finding organic content difficult.


main server at home:

- VDR - For receiving and recording TV shows

- samba - filesharing

- mosquitto - mqtt broker

- wireguard - vpn

- cups - printserver

- influxdb - time series database

- grafana - showing stuff from influxdb

- apache - webserver. Also for some self-built python stuff

additional RPI:

- rtl433 - receiving cheap wireless temperature sensors

another additional RPI:

- homeassistant

- ESPhome


A Raspberry PI 3 Model B running Raspbian with Homebridge


Just pfsense, transmission, samba and a valheim server.


transmission-daemon

kodi

cronjobs for encrypted restic backups to google cloud cold storage

photo and video archive

all on an old celeron nuc with akasa fanless case

tried pi-hole, but ended up signing up to Google Family account.


TensorFlow and an Nvidia graphics card.


Currently I have a hybrid setup, where stuff that needs more compute or storage (which are cheaper if you buy hardware) runs locally in my homelab, whereas the things that need better uptime or more bandwidth are available in rented VPSes. I've actually played around with what is hosted where, depending on what I need: for example, uptime monitoring currently runs locally, checking whether I can reach the remote sites from my residential connection.

LOCALLY (in my homelab)

- Startpage, with Heimdall: https://heimdall.site/

- Backups, with BackupPC: https://backuppc.github.io/backuppc/

- Twitch stream and YouTube video backups, with PeerTube: https://joinpeertube.org/

- File sharing for larger files, with Nextcloud: https://nextcloud.com/

- Chat solution, with Mattermost: https://mattermost.com/

- Project management, with OpenProject: https://www.openproject.org/

- CI runners, with Drone CI: https://www.drone.io/

- Minecraft servers, with docker-minecraft-server: https://github.com/itzg/docker-minecraft-server

- Static code analysis, with SonarQube: https://docs.sonarqube.org/latest/

- Uptime monitoring, with Uptime Kuma: https://github.com/louislam/uptime-kuma

PUBLICLY (in rented VPSes)

- Code repositories, with Gitea: https://gitea.io/en-us/

- CI, with Drone CI: https://www.drone.io/

- Package/container management, with Nexus: https://www.sonatype.com/products/nexus-repository

- File sharing, with Nextcloud: https://nextcloud.com/

- Link shortener, with Yourls: https://yourls.org/

- Mail server, with docker-mailserver: https://github.com/docker-mailserver/docker-mailserver

- Analytics, with Matomo: https://matomo.org/

- Container management, with Portainer: https://www.portainer.io/

- Server monitoring, with Zabbix: https://www.zabbix.com/

- Blog, with Grav: https://getgrav.org/

My homepage and some other projects as well. I manage most of these as Docker containers (with Swarm, though K3s is great too), so thankfully handling data backups and resource limits is pretty easy. Currently, I use Apache as the reverse proxy in front of all of these (lots of modules for a variety of features, as well as Let's Encrypt integration with mod_md). So far it seems to work decently, isn't too expensive, helps me avoid e-waste (homelab nodes run 200 GE CPUs with 35 W TDP), although updates are always a pain.

The blog has more information about some of these pieces of software, in case anyone is interested: https://blog.kronis.dev/

For example, previously I ran GitLab, but Gitea + Drone + Nexus proved to be a better solution for my needs and workloads.


The whole comment section is so depressing.


Why? What do you mean?


For being "hacker" news you would expect creative or even unconventional uses of home servers.

Instead many posts are a collection of corporate buzzwords as people are bringing home their jobs, from k8s to vmware to Jenkins.

Admittedly the upvoted posts improved a lot after the first hour tho.


Ohhh, glad you asked this. Can't wait to read what others have running.

Here's my setup.

I have a mini-pc with 32gigs of ram running as my combo "compute" and "storage" server.

It has an attached 5 bay enclosure with 8TB of storage on it.

On it I run

- Caddy to host static sites (my blog) and to terminate HTTPS for other services

- audiobookshelf (mostly unused)

- calibre-web (organizes ebooks)

- diun (notifications about docker image updates)

- gitea (personal git repos, mostly useless honestly but I do put some things in here before they go to GitHub)

- home-assistant (heavily used)

- mealie (heavily used)

- mpd (music player daemon, moderately used throughout my house)

- a VPN container for things that I want to only run inside a VPN and never when the VPN is down

- plex (heavily used to organize and play media elsewhere in the house)

- postgres

- scrutiny (for consolidated reporting of disk issues across my machines)

- shiori (read-it-later style bookmark manager)

- snapcast (coordinates multi-room audio throughout my house on a bunch of raspberry pis attached to speakers, heavily used)

- syncthing (heavily used)

- vaultwarden (heavily used)

- woodpecker (self-hosted CI, moderately used)

- zwave-js-ui (manages the zwave based smart home devices I have...about 20 or so)

My router/firewall is a separate device running OPNsense and I use Wireguard to remote in - also works wonderfully.

I run all the services with docker-compose. The server itself is a bit of a snowflake but all the critical parts of the services are in their respective docker directories so backup is a snap (aside from postgres which has a separate backup process).

Currently I'm working on documenting a recovery procedure for Vaultwarden from our Backblaze backups so that in the event something happens to me my wife will be able to recover the Vaultwarden instance and our passwords. That's a fun exercise in documentation and simplifying the process.

Snapcast has really been a dream for multi-room audio setup. It presents a Spotify Connect device to anyone on my wifi. It has a separate stream which comes from whatever is being played on MPD and it is easily configured to play audio from whichever of those two streams is actively playing music...so I don't have to manually switch between them.

Caddy has been great for organizing everything and ensuring each service has HTTPS. I understand Traefik is somewhat more purpose built for doing this with a bunch of containers but I haven't had a need to switch.

I do use https://github.com/lucaslorentz/caddy-docker-proxy for letting the containers themselves describe their respective domains and mapping.

I do have a VPS and use it for the occasional site that needs to be more reliable than my home internet (which itself is quite reliable but I'm not counting 9s there). More and more I find I'm comfortable putting random static sites on my machine at home, though.

Parting thought: Exposing all of these details is a bit of a security concern, for sure. But ultimately I think it's (a) Not a huge security concern -- I need to assume any attacker knows what I'm running anyway, and (b) part of the fun is talking about it so I lose some value if I don't.


My home server is based on Unraid, it does double duty as file server and application server. I use a desktop case and MB with an AMD 5 3600X, 32GB ECC RAM and 5x10TB HDD (30TB usable) storage + 1TB SSD cache.

Current active services, all docker containers:

- Unbound DNS server - resolves names from the root servers and resolves my local domain and external home domain to local IPs for compat reasons - will be moved to RPi or Rock Pi due to routing issues with Docker containers

- Syncthing - backup pictures from my phone to the server

- Gitea - hosts a mix of private and public repos, automatically mirrors some Github repos (came in handy when Automatic's SD webui got removed)

- Plex - serves the TBs of media I store on the server, used by me and very occasionally by my parents

- SWAG (nginx with letsencrypt integration) - serves as the reverse proxy for all services, small detail: I use a wildcard subdomain and certificate to prevent service names from being visible via certificate transparency

- Home Assistant (as VM) - home automation etc., ZigBee gateway is a rooted Silvercrest (Lidl) gateway, see [1] (not my page, very useful), also serves as the MQTT server for anything automation related

- Rhasspy voice assistant - main node runs as Docker container on the home server, 2 satellites based on RPis (one in bedroom, one on my desk for tinkering), some more details on the setup at [2]

- PhotoPrism - new service, hosts some pictures, not sure whether I will keep it, automatic content recognition is nice, worked decently with my cat pics

- mStream - lightweight web interface to stream audio, also have the app on my phone

- MPD - media daemon, can be used via home assistant as media source and from my RPi connected to my HiFi (HA can start/stop/volume control, RPi has a web interface for music selection, can also play internet radio via HA)

- OpenVPN (legacy) and Wireguard VPN endpoints

Disabled services:

- Gitlab - playground so I can test things without screwing up the production environment at my company

- 7DaysToDie - game server for some friends

- AMP - multiple game servers, usually hosts Minecraft server(s) for some friends

- Empyrion - game server for some friends

- OpenStreamingPlatform - think Twitch, but self hosted, I messed up some config and have not recovered it

- SFTP server - unused offsite backup

As you can see, I am very much in the self hosting camp. The server is my home lab for testing things and gets new services as I "need" more functionality.

[1] https://paulbanks.org/projects/lidl-zigbee/ha/ [2] https://news.ycombinator.com/item?id=33708421


On a VM host (using libvirt), I run:

- FreeIPA (LDAP + DNS)

- Keycloak (SAML/OpenIDC provider for FreeIPA)

- Gatekeeper (or the new replacement) (SAML authentication in front of applications that do not support SAML/OpenIDC)

- several OpenVPN servers (Personal access to internal networks + connection to remote VMs)

- Tinc (Site-to-site VPNs)

- Jenkins x 2 (CI) with linux/windows/macos build agents for various projects :)

- Rancher clusters (k8s "wrapper")

- Gitlab (source code/ticket tracking/CI)

- phabricator (SCM/ticket tracking) - though it's now deprecated ;(

- cachet (status page)

- mattermost (IM messaging)

- sentry (exception tracking)

- gitea (SCM)

- Drone (CI for gitea)

- matomo (site analytics)

- SonarQube (code scanning)

- tracwiki (internal wiki)

- onlyoffice (web-based office suite)

- nextcloud (for onlyoffice storage)

- nexus (package artifact repository)

- squid proxies and apt-cacher for package caching

- pfsense (firewall for internal networks)

- perforce (SCM for larger projects (games etc.))

- recipesage (recipe hosting tool)

- icinga2 (with icinga director) (nagios-based monitoring)

- Custom backup solution, using duplicity for internet tier backups

- phpipam (recording networks and assigned IPs)

- remote docker swarm cluster using glusterfs, MySQL cluster, haproxy, mysqlproxy, bind, consul

- portainer (docker management for remote docker swarm cluster)

- docker registry for local builds and proxy instance for caching remote images.

- syncthing for personal file synchronisation

- wazuh - host security scanning

- snipe-it for asset tracking

- seeddms - document storage (bills, receipts, letters etc.)

- Calibre-web (PDF book viewer)

- various self-built web applications, databases to support applications, reverse proxies for internal hosting (mainly haproxy)

Media PC: - syncthing

- plex

- homeassistant

- samba (for other family members that don't like SSH/rsync ;) )

- couple of game servers

Thanks for this post - there's a bunch of great applications posted that I hadn't heard of that look really interesting to try out! :D


Thanks for sharing, never heard of OnlyOffice...I will be investigating as part of a goal to manage shared access to ~30K MS Office files.


Good luck :D Honestly, it's one of my least favourite applications that I stuck with.. I find it horrendously slow (tried on linux+docker and even windows :P ).

But it looks good, does the job and stability hasn't been _bad_ :)


Syncthing

Paperless-ngx

Photoprism

TT-RSS

Wireguard

Caddy as the reverse proxy


Truenas core:

- smb shares

- nfs shares

- cloud backup tasks

vm with docker:

- pihole

- traefik reverse proxy

- vaultwarden

- openproject

- fireflyiii

- plex

- *rr stack

- youtube-dl

- internal smtp

- HomeAssistant

- NodeRed

- MQTT

- Influx

- Grafana

- os-nvr

- unifi

- healthchecks

- dnsrobocert

- gogs

- heimdall

- portainer

- mysql

planned:

- imap server (accounts at shared hoster getting too large)

- photo solution (not completely happy with plex)

- a ton of other ideas ;-)


NetBSD


# Work

- https://gitea.io (repos)

- https://discourse.org (forums)

- https://github.com/nektos/act (CI)

- https://www.goatcounter.com (analytics)

- https://bestpractical.com/request-tracker (support)

- https://couchdb.apache.org (a slave db to backup https://rxdb.info [client db])

- deps: nginx, redis, postgres, mqtt

# Life

- https://matrix.org (comms)

- https://www.teamspeak.com (p2p voip for gaming)

- https://nextcloud.com (files, dav, etc.)

- https://jellyfin.org (+ the sync & swarm shit, radarr, etc.)

- https://mopidy.com (audio)

- https://photoprism.app (photos)

- https://actualbudget.com (finance)

- http://tileserver.org (map tiles)

- https://github.com/FreeTAKTeam/FreeTakServer (hiking nav)

...and more (reply to initiate detail sequence)


Initiate detail sequence please


# Automation

- https://n8n.io/ (script i/o for services)

- https://www.home-assistant.io/ (script i/o for physical world)

- https://homebridge.io/ (bridge homekit for above)

# Federation

- https://misskey-hub.net/ (social network [used for game community])

- https://glitch-soc.github.io/docs/ (social network [used for biz])

- https://lemmy.ml/ (custom news aggregator for fan site)

@%4$! ADHD segfault, IOU sequence activated, post may be updated...

edit: update complete. anyone like how those categories worked out in each post? ocdgasm.


When you say you're hosting Lemmy [0] for a fan site does that mean you're hosting it for others to use? How/why did you pick Lemmy? What's your experience with it so far?

[0] https://en.wikipedia.org/wiki/Lemmy_(software)


Yes. It's the only federated news aggregator, which works well for a community that wants to share links related to their common interest, and discuss them.

Being federated allows them to use their existing identity or create one for the fediverse, and general or like minded communities to see the discussions as well. The same reason people use subreddits for fanbases, but not being beholden to reddit.


How does Misskey compare to Mastodon?


Misskey has a lot more features, but can be too much for certain scenarios.

Misskey is more geared towards "fun" communities, because of its design and its extra features.

Mastodon is more sterile/corporate and geared towards serious things.


I always CTRL+F these threads for Jellyfin, because most people I know use Plex and don't even know Jellyfin. Are you happy with it? I've been using it for like half a year now and am pretty satisfied.


Been using Jellyfin almost since it forked from Emby. Was a bit rough early on, but works a treat for me these days - I think there's a balance point where I've adapted myself to its idiosyncrasies as much as it has improved stability and compatibility throughout version upgrades.

I primarily use it via the Android TV app, and it's been a pretty smooth experience for at least the last couple of years.


I had to use my TV's internal (amazingly slow) web browser to access Jellyfin, until a couple of months ago, when I was surprised by the announcement that they would release a Jellyfin app for LG webOS. Works great; the only slightly annoying thing is that they don't seem to cache the posters. Every time I scroll through my library, all the posters load again.

Other than that, the things which don't quite work for me:

- Jellyfin is useless on my Android phone. Media with 5.1 sound is severely broken: I can only hear the background noise, the actual dialog is just barely audible. Could be a config issue on my end, but I haven't found out what the issue is. Now I just download stuff and watch it in VLC. Works, but it's not the best UX.

- Sometimes when I add a lot of content, the metadata gathering seems to be stuck. I can refresh the libraries, refresh a single item, but nothing seems to work. Then, after a couple of hours, it magically starts to pull all the metadata and is done in a couple of minutes.

Else - I enjoy it very much. Even donated some money, which I do (too) rarely for OSS.


How about adding another audio stream to your video files with ffmpeg? You could even automate it with a script on a cron job or make a Jellyfin add-on (a rough sketch follows). You would first have to detect if there is already a mono or stereo stream, then invoke something like ffmpeg -i input.mp4 -map 0:v -map 0:a -map 0:a:0 -c copy -c:a:1 aac -ac:a:1 2 output.mp4 (haven't tested this and it would drop subtitles)
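
A rough cron-able sketch of that idea (untested; the library path is a placeholder; it keeps video, subtitles and the first audio track, and adds a stereo AAC downmix):

  import json, pathlib, subprocess

  LIBRARY = pathlib.Path("/srv/media/movies")        # placeholder library path

  def audio_channels(path):
      out = subprocess.run(
          ["ffprobe", "-v", "quiet", "-print_format", "json",
           "-show_streams", "-select_streams", "a", str(path)],
          capture_output=True, check=True)
      return [s.get("channels", 0) for s in json.loads(out.stdout)["streams"]]

  for f in LIBRARY.rglob("*.mkv"):
      if any(ch <= 2 for ch in audio_channels(f)):
          continue                                   # already has a mono/stereo track
      tmp = f.with_name(f.stem + ".stereo.mkv")
      subprocess.run(
          ["ffmpeg", "-i", str(f),
           "-map", "0:v", "-map", "0:a:0", "-map", "0:a:0", "-map", "0:s?",
           "-c", "copy",
           "-c:a:1", "aac", "-ac:a:1", "2",          # second audio stream: stereo AAC downmix
           str(tmp)],
          check=True)
      tmp.replace(f)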


In the Android app, switch from one player to another, from internal to the web player in the settings. It solves the 5.1 playback issue. (Of course, it introduces others, eg. manual screen brightness but it's not as a big issue as no sound.)


Thanks for the tip, but I already tried that :(

- Web player: Video works, audio is broken.

- Integrated player or External player: Unable to load media info from server

no idea why it can't load the media info from the server, might have to dig into the logs...streaming via PC or TV works so it's not a network issue.


I'm Jellyfin-curious, and hear a lot about it, but I'm not really unhappy with Plex, and it passes the family-user test with ease. How easy is it for the techno-novice to use Jellyfin? I think Plex's biggest advantage is that it's already everywhere, from browser to smart TV. How does Jellyfin compare there?


Hm. As a disclaimer, I've never setup a Plex instance or used one, but from what I've heard, it seems to be very stable.

I think the Jellyfin UI is easily understandable for users, but what you might have to be prepared for is supporting them with technical issues. Jellyfin has apps for Android[-TV], iOS, webOS, Windows and probably more. I gave my dad access to Jellyfin and he had some issues with it related to Chromecast and his TV. On my end I can't use the Android app because I run into audio issues (could be my fault).

Jellyfin is awesome when it works and I really love it, but I've had to dive into the logs quite a few times. Sometimes it was my fault, but other times it's noticeable that Jellyfin is a smaller OSS. Stuff breaks, especially edge cases.

AFAIK you can just install Jellyfin and point it to the same libraries which your Plex uses. That way you can test it for yourself without bothering the people who use Plex. It might add some pictures, but the rest will be fine.

I think if you already have a Plex setup which works and you're happy with, I'd stick with that. I chose Jellyfin because Plex seemed too commercialized to me and I heard some things about it I didn't really like. But most people I know use Plex over Jellyfin, the only people who use Jellyfin are the ones who I managed to convince to use it :)


I'm a big Jellyfin user. I run it on an opensuse server that acts as my homeserver. It's just an old PC with some extra drives. I stream mostly to Roku devices and it works awesome.


Nice list, thanks!


Glad it could be of use!


36 docker containers spread between 3 computers. 250 TB raw with ~70TB free.



