I now understand why I'm not – and will never be – rich. My first (and last) employer complained about my preference for stable and reliable solutions, equating it with lesser profits. According to him, unstable solutions requiring frequent maintenance meant more revenue. To me, a job is well-done when it works consistently, not when it demands constant fixes.
This is a great point. Haven't you noticed that the teams with constant downtime are hailed like heroes for saving the day countless times? Yet the guys that have services running 24/7 for years without a hiccup will be the first ones on the chopping block when finances get tight.
I am also in that boat. I too like systems that never fail, defensive programming, proactive attention. I'll never be rich and most of my company has no idea what I do. It's still the right thing to do.
I've been a project manager for nearly 20 years after "hanging up my technical boots" and trust me it's the same shit for us as well - quietly get on with it, manage the challenges and the team, anticipate and deal with the problems upfront? You are an unremarkable "safe pair of hands".
Run around panicking with your arse on fire constantly firefighting issue after issue (almost all of which you should have anticipated and already dealt with) making your constantly "red" project go "green"? You're a fucking hero that wins any internal little award/prize for rescuing the project.
Now apply that principle to politics: representatives who actually work to, uh, represent their districts, tend to quietly go about their business and are rarely promoted. To get promoted you need headlines, which you won't get if you're actually good at your job.
In defense of your job, make sure you are constantly communicating how much uptime / how stable / reliable your services are. One reason why the people with lots of downtime are hailed as "heroes" is because they get more face-to-face time with upper management.
Of course, some managers will be dumb and equate instability with revenue.
Think bigger. Start a quality IT consulting business and possibly be rich. There’s a small fraction of industry who will pay for it. Likely plenty for many ‘lifestyle businesses’ per region.
Actually, I have been considering investing in and going BSD-only to escape the insanity that has now also contaminated the "Linux world".
Now from my initial observations the opportunities in the saner world are quite scarce and hard to find.
But I am pretty sure those companies often face a skill shortage too.
The whole Docker & container craze vs. how things are done in FreeBSD worries me much more.
But this is not so much about the technology as about the fact that, as Linux became mainstream & the default choice, it no longer filters out the companies who have no idea what they are doing.
Probably complaining about the rat race / grind mindset, which leads to a lot of people without strong fundamentals gaming the system and getting high-end jobs.
Are you implying every system is stable and reliable now, because it is in the cloud? My guess is that it helps on the hardware front, but there is more to a stable system.
Yup, definitely resonates here. We design replacements and upgrades to keep ancient stuff in factories running. After the initial fixup, we often never hear from a customer again until a different machine breaks.
> Yet the guys that have services running 24/7 for years without a hiccup will be the first ones on the chopping block when finances get tight.
Not really my experience, as long as the stable services' operating costs are good. I have witnessed more chopping on the R&D side, where things are never "stable and reliable", but innovative and expensive.
No need to change or outsource something that works technically and financially. Until it becomes irrelevant.
I don't disagree, but I think the opposing take would be the cliche "if you're not breaking some things then you're not moving fast enough" (by whatever measure of "fast" or progress management would like to use).
What angers me the most about this mindset is that unless your dev teams are full DevOps (this does not exist IMO), and truly can understand and create scalable and reliable infrastructure, all this means is “we are externalizing tech debt onto other teams.”
You built everything in AWS via ClickOps? Sounds like an SRE problem.
Your DB is just a giant JSONB object in Postgres? Sounds like a DBRE problem.
It’s extremely frustrating, as someone who has been / is both of those other groups, to see projects being hastily thrown together, knowing that one day it’ll be your problem to solve.
If you have a stable setup that does not need changes as you grow, chances are you're not growing fast enough, or at all.
New and thus inherently unstable things may allow order of magnitude jumps in efficiency, and, again, getting onto a growth trajectory.
Dropping VMS and Solaris in 1997 and switching to Linux was switching from stable software and hardware to unstable, but such that had a growth potential. Switching from bare metal and colocation to AWS in 2009, the same. Switching from highly advanced ICE to electric motors and huge lithium batteries in 2007, the same.
If all that you value is stability [ed: of your setup], the amount of wealth you own will also stay the same, at best. Stable things need to be stepping stones into the unstable, uncertain future that may bring something bigger.
From direct experience, instability and uncertainty lead to your customers getting angry on twitter and fucking off rather quickly. All they want is to do what they did yesterday without being poked in the eye.
The whole "move fast, break things and grow quickly" mindset is a cancer that compromises everything, not a commercial win. When you put yourself before your customers, you fucked up.
At a practical level, this NetBSD story is the story of a machine that just does its job and doesn't change. For example, there is no active experimentation, no machine learning added on, etc. It's like a wood mill; good that it makes 2x4s, but also not a highly dynamic business. Probably not a business that's seen a lot of competition or had to adapt to changing markets and products.
I'm not talking about breaking things and having downtime.
I'm saying that if you have the same infrastructure over 13 years, and you did not have to upgrade it, or replace its parts, or even migrate off it, you're likely not growing.
All the above things can be done with minimal disruption for the customers, ideally in a completely transparent way.
Perhaps you don't need to grow? Growth is not the only business model. I mean, failing to do maintenance is a sin of course, but beyond that there is no one true course.
I originally answered to a comment about never becoming rich. No growth means not getting rich, this is what I'm trying to point at.
Of course growth and getting rich is not the only worthy target. Say, OSS is not about getting rich, but it's wonderful and worthy without doubt, in my eyes.
I know of at least one semiconductor company pushing the state of the art on 1990s litho processes, running 1990s control software on their stepper that runs on Solaris on SPARC32 (a SPARCstation 5). Anecdotal, but their production engineers have stated that not relearning how to use critical parts of their process every few years has helped them stay focused on actual R&D.
This is of course an extreme, probably unusual example.
Somewhat related story time! In the early 2000's my main business was web hosting. It paid the bills but never made me enough to really invest in it. So it kept running, sitting in a colocation space. In 2005, having the mail server hosted on my web server was becoming a problem, so I decided to put it on a new server.
I chose a 1.42GHz PowerPC Mac Mini. I installed Linux, and was very happy with how well it worked, how tiny it was, and how it took a fraction of the rack space that my web server and other servers took. I thought I might even just use those in the future.
Fast forward a couple years, and the load started increasing. I had used XFS for the mail partition; it ran Qmail and used Maildirs, which tended to accumulate thousands of files per mail directory, and the server was starting to choke. I also avoided rebooting it for years. If I remember correctly, by the end the server had a 6-year uptime because I was so scared that rebooting it might brick it. But I had a major problem: this Qmail+Vpopmail+SpamAssassin+[dozens of custom tweaks] install had accumulated so many custom hacks, tweaks and patches that I never had confidence that I could do a real downtime-free cut over to a new system without a barrage of complaints.
So I put it off. And I put it off. Fast forward to about 2013 and I decided enough was enough, so instead of doing a fraught cut-over, I just ended email service. Problem solved. Best choice I ever made.
Needless to say, I avoid overly complex, patched configs now.
That ideally shouldn't happen if your dkim, dmarc and spf check out, though. I hosted my own email for a couple of years and I can't remember a single time when my emails to my friends ended up in spam.
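(For anyone wondering what "check out" looks like concretely: the DNS side boils down to roughly three TXT records - a sketch, where example.com, the selector and the addresses are placeholders and the DKIM key is whatever your signer generated:)

    example.com.                  TXT  "v=spf1 mx -all"
    sel1._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=<base64 public key>"
    _dmarc.example.com.           TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"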
"ideally shouldn't happen" doesn't mean the deliverability cartel doesn't block you anyway.
Gmail et al have been spam filtering messages from correctly configured mail servers for a decade+ now. All the dkim, dmarc, and spf in the world won't help you if you aren't known to them.
Sure, that's almost certainly true, but the arbitrariness of changes and blacklisting leads one to consider perhaps they don't care at all about small hosting operations.
How are these HAM signals? Any spammer can set these things up. dkim/spf are just mildly useful anti-spoofing technologies.
Google and others will happily block your mail or send it to spam folder even if you never sent one SPAM email ever, and have all those technologies you mentioned set up.
I realized I wouldn't use it for anything serious, and I didn't renew my domain name. Maybe someday I'll get a domain for ten years and then get google to host the actual email. That way it doesn't matter too much if google decides to nuke my account.
I was also sixteen when I did that, so I mean, of course I wasn't going to do anything serious with it.
Unfortunate reality. After many years of self-hosting, I stopped three or four years ago. Keeping up with the requirements to stay out of the spam bin was too much work, and the risks of non-delivery were too high.
Remember, even if you have everything 100% nailed down with SPF, DKIM, etc. you can still end up on a random blacklist, some of which are basically extortion shakedowns. Now, you can say, "well ignore those losers, who cares?" but sometimes you have a customer who directly or indirectly relies on those blacklists. I certainly do, that's how I found out!
About a decade or so ago I did IT for a company that had a large copier that could also scan to email. The email delivery stopped working even though this was Gmail and nothing SHOULD'VE changed (Google has never cared much about standards, so even now it's never surprising when using Gmail with normal tools just stops working).
The person who had the company Gmail administration login had tried to open a ticket, had played with settings, had deleted and re-added the account on the copier many times, but nothing worked.
Never being someone who is content to just wait for someone else to, you know, do their job, and not wanting to deal with the proprietary bullshit that is Gmail, I decided to do something far simpler: set up an SMTP server.
A $12 PogoPlug, an SD card, and a couple of hours later (I had to compile Sendmail from pkgsrc), the printer could scan to email and deliver by smarthosting through a public SMTP server.
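The smarthost bit is only a couple of lines of sendmail.mc, something roughly like this (a sketch; the relay hostname is a placeholder and the auth details depend on the relay):

    define(`SMART_HOST', `[smtp.example.net]')dnl
    FEATURE(`authinfo', `hash -o /etc/mail/authinfo.db')dnl

Credentials for the relay go in /etc/mail/authinfo as an AuthInfo: line, then you rebuild sendmail.cf with m4 and restart.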
People will often tell you that something is a waste of time because they don't know how to do it or because "everyone does something else". Some people tell me that running email servers is a waste of time, but standing up a whole server is infinitely easier than trying to deal with Google.
Someone from the company called me many years later and asked me if I wanted my PogoPlug back. It had been in use for at least half a decade before they replaced that copier.
At a startup I worked for, we had a cheap wireless microphone and speaker from Craigslist that we used for all-hands meetings. The UX for anybody remote (pre-pandemic, but board members did like to dial in) was terrible - IT would just place a laptop pointed at the speaker. The audio quality left a lot to be desired.
At some point, I pitched the silly idea to IT that I'd figure out what frequency the mic ran at, tune it with gqrx and an SDR, then feed it back to my own microphone using a loopback device in PulseAudio. That day we had our best-ever voice quality on the remote call, and I ended up becoming the critical path for all-hands meetings.
Fast forward a few months, the only thing that saved us from building a Raspberry Pi + SDR + Zoom box was another round of funding, with which we bought a proper conferencing system.
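(For the curious, the PulseAudio half of the trick is roughly this, from memory - the names are made up:)

    # virtual sink that receives gqrx's demodulated audio
    pactl load-module module-null-sink sink_name=sdr_mic
    # expose that sink's monitor as a selectable "microphone"
    pactl load-module module-remap-source master=sdr_mic.monitor source_name=sdr_mic_in

Point gqrx's audio output at sdr_mic and pick the remapped source as the mic in the conferencing app.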
Oh man, I forgot about PogoPlugs - I had about 3 of them around the house before Raspberry Pis took over - they were running silly things like DLNA and a VERY slow ownCloud installation - that's really when I got into self-hosting.
I use them for all sorts of things, even now. They make excellent VPN devices, they're great as local recursive DNS resolvers, and I have several that're backup NAT routers / firewalls for various networks. They also are perfect for ssh-based port forwarding / jump hosting for networks that have crappy NAT firewalls.
Turn off atime and devmtime, turn off daily and weekly cron jobs, and log to tmpfs, and SD cards will last for many, many years. I have one that's been running continuously since 2015.
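Concretely, on NetBSD that's a couple of fstab lines plus commenting the daily/weekly entries out of root's crontab - a sketch, where the device name and tmpfs size are just examples:

    /dev/ld0a  /         ffs    rw,noatime,nodevmtime  1 1
    tmpfs      /var/log  tmpfs  rw,-s=32M              0 0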
This idea of the "desktop" still persists. I have often read descriptions of the BSD projects as "server"-oriented. Thus some may conclude that NetBSD is for servers.
However when I look at the personal computers I run today, there is no "desktop" because I prefer textmode and I am running many tiny servers. These serve only me not the internet at-large.
The dream I had at the time of NetBSD 5.1 was a framebuffer where I could launch GUI applications from the command line in VESA textmode, without any context switch (Ctrl-Alt-F1, etc.) back to X11.
If I have the choice between "desktop" and "server" in 2023, I choose server. The "server" is a more useful metaphor for me than the "desktop" ever was. Eye candy GUIs can be seductive, but IME servers operated with text commands and text configuration files are more powerful and ultimately more useful. As it happens, the server metaphor is quite common. For example, it's used to run the web.
NetBSD 5.1 was great. Arguably one of the best releases in the project's history.
>However when I look at the personal computers I run today, there is no "desktop" because I prefer textmode and I am running many tiny servers.
That puts you in the vanishingly-small minority. To borrow a phrase from Tom Ptacek: to a decent first approximation, zero people including you are daily-driving text mode for their personal machines.
People describe "desktop-oriented" distros as such because they're filled with niceties such as support for multiple physical displays, sub-pixel LCD hinting, sane defaults for mouse/touchpad input, graphical wifi managers, and usually desktop environments. They ship with these things out of the box. Imagine recommending with a straight face that someone go to starbucks and manually edit wpa_supplicant.conf to get onto the wifi.
People describe distros as "server-oriented" because while you could achieve the aforementioned niceties with them, it's usually a hassle. In my experience BSD (or Gentoo) aficionados tend to eschew the term "hassle" in favor of sayings like "minimal" or "exactly what I want and no more" or the ever-popular but always meaningless "cohesive" when describing those operating systems.
In any case, the "desktop" metaphor might not be useful to you but consider that your use case is far from common.
> Imagine recommending with a straight face that someone go to starbucks and manually edit wpa_supplicant.conf to get onto the wifi.
Editing config files is not specific to TUIs; you can do it even with a GUI editor, and you have to if you want to do anything more complicated with your computer, like route some traffic over a VPN and some not, or have a dynamic VPN setup based on what wifi network you're connected to.
Also in TUI land, there's iwd these days, which has a nicer end-user text-mode UI than wpa_cli, with one command to start a scan and another to connect. There's also wifi-menu, or nmtui. All pretty much equivalent to the GUI wifi selection menus people use on Windows.
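For instance, with iwd the whole starbucks scenario is roughly this (device name and SSID are examples):

    iwctl station wlan0 scan
    iwctl station wlan0 get-networks
    iwctl station wlan0 connect "CoffeeShopWiFi"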
1. Make HTTP requests with a variety of software, but not a graphical web browser: shell scripts, TCP clients and custom UNIX-like programs, i.e., small and easy to use in pipes.

2. Use a text-only browser to read HTML.

I make heavy use of HTTP/1.1 pipelining, so I'm not "browsing" as much as I am doing bulk HTML or JSON retrieval from a single host. This cannot be done with a graphical web browser, nor with curl, wget or even aria2.^1 Often I get downvoted by HTTP/2 or HTTP/3 fans on HN when I discuss this because it is not contemplated in their vision for the web, i.e., interactive websites, graphical advertising and imperceptible data collection and tracking. (For example, they will say something like, "No one else does that". Who cares.^2 I'm doing it. The way the majority of people use the web has nothing to do with how they want to use it. They made no choice between one option or another that is significantly different. They are only using the web in the way dictated to them by so-called "tech" companies and legions of hivemind web developers, all praying to the same God of Advertising. The web was not created nor designed for mandatory delivery and receipt of advertising; it was created for whatever people want to use it for; it's quite versatile.)
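To make the pipelining point concrete, the kind of thing I mean looks roughly like this (a sketch; the host and paths are placeholders):

    printf 'GET /a.json HTTP/1.1\r\nHost: example.com\r\n\r\nGET /b.json HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n' \
      | openssl s_client -quiet -connect example.com:443

Both requests go out on one TLS connection before the first response arrives, and the responses come back in order on stdout.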
Sometimes I play around with nghttp2 and it just feels totally inflexible. Bad ergonomics for how I use the web.
Using a browser from an advertising company shapes the way people think of the web. It limits thinking, and especially thinking outside the box. Look for the replies that seek to discredit those who deviate from using the web as a mechanism for delivery and receipt of advertising.
If I preferred what was "common", then I would not be using NetBSD. And if it were as popular, I suspect it would be less good. Quite confident other NetBSD users have had that thought before. Some software developers struggle to understand why others might not care about popularity, because they live their lives desperately hoping/trying to get others to use the software they write. But I'm not a software developer and I like stuff that is not popular, especially non-popular software.
1. To be truthful, with aria2 it's technically possible but not designed for use in UNIX pipes.
It's also a tribute to the hosted VMs and the framework in general. Virtualisation has been an amazing boost to reliability: get onto a platform with varied HW and then expose it as an idealised machine to clients, which can then behave predictably.
I ran NetBSD systems for years, but I don't think any of the BSD I have operated managed 13 years. Even 3-4 years felt like a big win!
I think it's also interesting how manufacturers of embedded systems looked at the field and went VxWorks, Linux or BSD. By no means is it automatic that you go to a Linux kernel for a small device.
>It's also a tribute to the hosted VMs and the framework in general. Virtualisation has been an amazing boost to reliability: get onto a platform with varied HW and then expose it as an idealised machine to clients, which can then behave predictably.
It's ironic that Java was supposed to be that idealized machine, but native virtualization caught up. The JVM has its benefits, but x86 itself ended up eating Java bytecode's lunch, which isn't something anyone predicted. Indeed, even JavaScript of all things is becoming a strong contender for this role, something no one predicted either.
JVM tuning, and especially GC tuning, like Oracle DBA tuning, seems to be a black-magic art few learn. I disliked XML configuration immensely, and have had some instances where the (earlier, pre-JDK 11) JVM could not introspect how much memory it actually had and needed to be told explicitly (arguably the fault of the container/virtualisation it in turn sat inside).
I still run all my PROD JVM applications with `-Xms` and `-Xmx` set to the same value, to just remove all questions about heap memory. I already allocated that slice of the machine's memory for the application; I don't need the JVM trying to handle shrinking and growing it. This also removes pressure from the GC if the amount allocated is some multiple of typical operations. A lot more operations complete without objects ever leaving the young generation.
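In other words, something like this (sizes and jar name are placeholders):

    java -Xms4g -Xmx4g -jar app.jar

Some people also add -XX:+AlwaysPreTouch so the whole heap is faulted in at startup, but that's a separate choice.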
JVM solves a very different problem. It makes some very strong assumptions: everything is an object (no functions!), garbage collection, no easy structs, etc. All this prevents easy porting of lots of existing software written in C, C++, Fortran, etc. In exchange you can be oblivious of the underlying OS and even CPU architecture.
Instead, x64 virtualization easily allows you to run not just your existing software, but your existing OS, at practically native speed.
Java the language has a lambda / function-literal syntax, not first-class functions. (You know this, obviously.)
The `struct` bit can refer to a few things: value typing, stack allocation, and explicit memory layout. I think records give you value typing, but not stack allocation or explicit memory layout. They are just syntactic sugar for immutable POJOs.
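For illustration, a record declaration is just this much, and instances are still ordinary heap objects:

    // value-based equals/hashCode/toString and final fields,
    // but no control over memory layout and no stack-allocation guarantee
    record Point(int x, int y) {}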
I posted this on here a long time ago, but NetBSD was always a winner for me. I set up a dialup gateway for a small company in the late 90s on it, on a skip-dived Compaq desktop. Forgot about it and moved to somewhere completely different in the industry. Out of the blue, about 2010 I think it was, I got a call saying that it had stopped working. Figured I'd better go and look at it for them. Got there and found the disk completely full of logs but still spinning, which was a surprise.
The failure was the dialup ISP had cancelled their dialup service. I found another one, signed them up for it temporarily and ordered an ADSL line and router for them and swapped that in a couple of weeks later. The compaq was retired.
Some trite syslog analysis suggested it managed 7 years of uptime in one stretch killed only by what looked like a power outage. There was no UPS on it.
I remember that! Here is your original comment from 2013: https://news.ycombinator.com/item?id=6503464. I enjoyed that subthread. (I assume you don't mind it being linked to this identity, since the details are identifiable. Sorry if you do.)
I like follow-ups on predictions, so please allow me to request one. When asked in that thread, you said you would "probably controversially" choose Windows Server 2012 on a mid-range HP DL or ML server for a system meant to last over a decade. Almost a decade has passed. Do you think today that this would have been the right choice at the time? I am not questioning your choice as an anti-Windows thing or anything like that. I am genuinely curious.
Oh wow. Details show how bad my memory is as well - stories fade and change over time :)
Well over a decade has passed now and I have nothing to do with Windows whatsoever any more and am running fully Apple on the desktop and Linux on the server side of things. I wouldn't have anything to do with it any more. That was a completely wrong prediction. I inherited a lot of Hyper-V infrastructure with SCVMM which really finished it off.
What did work was CentOS on AWS EC2 though although I'm not sure that's good any more what with the whole RHEL controversy recently.
So for another future prediction which will be equally wrong: I don't have a clue any more and am trying desperately to find a way out of the industry.
I have had more systems go down from a full disk than for any other reason (besides my own bugs, of course). Even when I try to use logrotate, there always seems to be some kind of trick or mistake. /var/log should be a ring-buffer filesystem that discards the oldest records whenever it's full and new records are being written. I wish I knew how to do that reliably.
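The closest approximations I know of are size-capped rotation and journald's disk cap - a sketch, with paths and sizes made up:

    # /etc/logrotate.d/myapp  (run logrotate hourly if it should react quickly)
    /var/log/myapp/*.log {
        maxsize 50M
        rotate 5
        compress
        copytruncate
        missingok
        notifempty
    }

    # /etc/systemd/journald.conf -- the journal itself behaves like a ring buffer
    SystemMaxUse=500M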
I have three servers still running since 2014, I know this because they run Ubuntu 14.
They've never been upgraded, they've only been rebooted a few times by linode during maintenance. I have no way of building them again, I have no backups, and yet... They are the production servers for a web app with 250k monthly users (not a commercial venture).
Seemingly I'm fine with this... They've just been that reliable. A python so old I've no migration path, a django so old I've no migration path. But yet they keep working
The only thing updated in a decade is an API built in Go, which still compiles on the latest Go despite being written for pre-1.0 Go. And I did replace the Postgres server, as that had outgrown its instance. But everything else, a decade old and still fine.
I logged in the other day to discover that SSH didn't initially work as ssh+RSA has been superseded and needed new ssh config just to keep connecting.
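(For anyone hitting the same wall: on newer OpenSSH clients the usual fix is a couple of lines in ~/.ssh/config, something like the following - the host alias is made up, and older clients spell the second option PubkeyAcceptedKeyTypes.)

    Host oldbox
        HostKeyAlgorithms +ssh-rsa
        PubkeyAcceptedAlgorithms +ssh-rsa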
There are so many things on these servers that no longer make sense, graphite monitoring that goes nowhere, new relic integration I disabled years ago, linode Longview long deprecated. And yet still the servers work as load balancers, and app servers for ancient python programs.
Very bold… How do you keep it secure with such ancient software? What happens when it goes down and you don’t have backups? I can’t imagine a service that’s both unimportant and has so many users. So many questions.
> How do you keep it secure with such ancient software? What happens when it goes down and you don’t have backups?
Only 3 ports are open: 80, 443, and <another for ssh that isn't 22>.
The nginx is just a proxy to the frontend and a file cache, I'm confident I could rebuild it in under an hour even without access to what is there today. Also confident that if it's compromised it can't do much real harm, I will nuke it.
The django is just a thin front end that calls an API, I'm confident that when it gets compromised it can't do further damage as it has no direct access to the DB. And TBH, if/when it dies it will be a kick up the butt to reimplementing this in Go (I've about 60% of that already done but I only look at it once a year for a few evenings)
> I can’t imagine a service that’s both unimportant and has so many users. So many questions.
It's a platform with 300 forums on it.
And I guess if it goes down I discover either the users didn't care (no big loss) or the users really care (and perhaps now the donations would be sufficient to cover the eng time for me to get someone to complete the frontend rewrite in Go).
The whole thing is provided for free, and I figure people get what they pay for. I used to feel very emotionally attached, but now I'm quite YOLO about it. I believe these things should be ephemeral; if it turns out it has run its time, then so be it.
Hackers love servers like yours. It's old and they know nobody is monitoring it. If they can get in and use it to launch attacks as a proxy or part of a botnet without touching your running services, they have a nice home for a long time and you'll never notice.
Maybe you don't care about the data running on that server. It's still irresponsible and a disservice to the rest of the Internet to run an unpatched Internet-facing server.
There's a million routers that are unpatched and easier to get into than this.
It's not entirely unmonitored, Linode send bandwidth usage warnings and iops warnings, and my users have been the best downtime signal.
I'm very fine with these servers. They belong to the hobby web: a low threshold for making something for yourself and others should be encouraged, but the burden should also be carried lightly. This burden is carried very lightly, and if it falls I'm fine with it.
There is nothing that stops you from at least dd'ing it to some other server, even another one at Linode.
> I logged in the other day to discover that SSH didn't initially work as ssh+RSA has been superseded and needed new ssh config just to keep connecting.
Yeah, I have a 2T PKI, originally from 2012 R2. I needed to re-issue one certificate, and I needed an older OpenSSL binary to split it into a pem+key pair.
I wonder if maybe reliability is inversely proportional not just to profit (as the author cynically says) but also to virality and so, community. If something just works, then there won't be stack overflow posts, or discord communities, or forums or anything. It's a strange sort of error mode!
I thought this was going to be the story about the server that was still running, but nobody knew where it was. It turned out that it had been walled off during some renovation works years before, and was still running sweetly behind that wall.
What I've seen is workers opening a wall with a hammer in a hospital, disregarding the fact that there was a small shelf with 19" switches on the other side, which they left hanging by the ethernet cables, without even asking someone to call IT. They went on to take their lunch break like nothing happened, while computers on that floor were without network.
I once saw my father furiously smacking the screen of the new laptop I brought him. It had stopped working. I quietly explained that the power light is off and he's not hooked up to the wall... yup, came back on as soon as he plugged into power.
I guess I see a commonality about people who don't know any better..
It was reportedly NetWare which was a pretty stable and reliable OS, so I could imagine a server sitting in the corner and being forgotten about just chugging away quite happily for years serving files and print jobs.
But the hardware of that day was maybe not that great, e.g. Compaq or HP servers, so from that standpoint I'd be a bit more sceptical of such a long uptime.
I've heard multiple versions of this story over the past 20 years.
When NetWare admins tell it, it's a NetWare server.
In Mac fans' version, it's a Mac server.
BSD admins recall it being a BSD server.
etc., etc...
That sort of uptime always scares the sh!t out of me when I see it.
Reboot at least once a month folks, even for those single node critical systems. Better it doesn't boot on a Friday night than mid morning on a Tuesday.
I manage a small cluster (< 500 nodes), and do staggered reboots every 3-6 months, mostly for security/firmware updates.
It's amazing how many servers that were seemingly running "fine" for months don't boot back up. Memory failures, disks that disappear, random power issues, motherboard/controller failures. As high as 1%.
There are a lot of "strains" (current inrush, mechanical shock to drives, heat shock to parts) that happen on a system, particularly during a full power-off and power-up cycle.
It’s interesting, we want reliability, but too much reliability makes something invisible, and then failures become catastrophic because there is no expertise in the organization.
Death and birth are evolved. Micro-organisms don't do it. A few multicellular organisms don't either (e.g. there's a kind of immortal jellyfish, but what it does is periodically revert to an amorphous mass and then grow back into a jellyfish.)
The thing about reliability is that not every single thing needs to be ‘reliable’. The reliability comes from knowing the failure points. Some issues you may not be able to fix right away but simply knowing why something fails is more reliable than not knowing. And that’s the scariest problem I’ve seen. Not knowing failures.
Love NetBSD, especially for embedded work. Cross-compile support is core to the project, and that makes life much nicer.
On the personal project side, I'm currently running NetBSD/cobalt 9.3 on a Cobalt Qube2 microserver, which is an old MIPS server appliance. It lives here:
Mostly it hosts my persistent IRC sessions, and provides a few network resources to some of the old computers in the shop. It requires very little maintenance, pretty much just does what it's supposed to.
Wow, I had a couple of those Cobalt Qube servers running in my room about a decade ago, when I got them really cheap from eBay. I just loved the form factor (being a cube), as it was so unique for a server. It's awesome to hear that these machines are still being actively used.
Yup! I picked up my first one probably 16-17 years ago, also when they were still cheapish on eBay. I don't think the one I'm currently running is my first one, I have two and a parts unit now.
I also ran their RaQ series as custom Linux router/firewall boxes for a long time. Still have two CacheRaQ 1s, which were designed to be caching web proxies and have dual Ethernet ports. I think they came out of production 5 or 6 years ago, ran fine with Debian mipsel until the customer upgraded their Internet connection and exceeded the little 150 MHz CPU's capacity!
If anyone wants to play with NetBSD I highly recommend a shell account / membership at SDF.org. It’s also nice to support an organization that’s been rock solid for decades.
Thinking back on all the breaking changes I've dealt with since 2010: 32-bit Linux distros discontinued, SMB incompatible between Windows versions (and samba versions) without tweaks, root encryption certificates added/removed, OpenSSL requiring all SSH keys manually updated: This NetBSD machine may have been running for all that time, but no way was it unmanaged. Some Morlock was turning the cranks and oiling the gears.
>This NetBSD machine may have been running for all that time, but no way was it unmanaged. Some Morlock was turning the cranks and oiling the gears.
When I read the article, it said it went down once due to an earthquake 13 years ago. Depending on what it is used for, tweaks may not have been needed, also it may not have been connected directly to the internet, but behind a firewall in another router.
So I say this is true: NetBSD is very stable and it does not need all the sub-systems Linux needs just to be useful. Some of those things are in pkgsrc (like dbus), but if they're not needed, they're not used.
Whatever it did, knowing what I know about NetBSD, I would not be surprised if other NetBSD systems are still active in some hidden, forgotten place, doing their job without any "thanks" :)
I have the feeling that in the past this wasn't so unusual, particularly as there were fewer security patches and more airgapped systems that didn't need rebooting. I remember being in airports, government offices, large corporate field offices and the like, and the machines with huge uptimes would have BSD, SCO Unix, HP-UX, DEC Alpha and similar installed, and almost never Windows or Linux.
I still use a BSD variant or a Solaris clone when I want something super reliable, but now with Linux I am doing an experiment for a system I want around for 10+ years (I do have experience with Linux almost since its release; I just think it's something I have to handhold more).
I also like in the article the engineer talking with the customer about the reliability of the hardware - I guess the customer was proven right or just lucky for once!
I love SSHing into an old server with an uptime in the thousands of days; I can't help but be amazed in those moments. Quiet giants working behind the scenes to make a few people's days work better, all the while most never even knew they were there to begin with.
NetBSD is fantastic. I also have a Manjaro box that has been handling some print stuff for me; it's been running for 3 years (with no intervention) and I often forget it's there until I'm reminded while moving boxes, just quietly working and waiting for the next request.
We had multiple Linux instances running for that long (live-patched via ksplice, back before Oracle cut off support for other OSes).
Leaving some box unpatched in a datacenter for a decade isn't impressive.
Also there is an actual risk that a server that hasn't been restarted for that long just keels over after a power failure, for hardware reasons. More frequent reboots (say, every 2 years, if you have kernel patching) at least reduce the number of machines that can die at once, as rare as that would be.
I guess I am getting old, too. I have a server that has been running Arch Linux (the same installation, but updated and rebooted) since early 2011, and I would not have called it old by intuition. I just realized it's over 10 years now, not something like 4 like I was still "feeling". :(
That is pretty cool! I have a similar setup started around 2011 as well. It's an Atom netbook and has run OpenBSD almost continuously since then, only going down for upgrades and patches. The SSD shows something like 8 years of (non-continuous) uptime, before that it had a spinning disk.
I just de-provisioned a Debian Squeeze (6.0) instance that had been more-or-less online and untouched from like 2012 - running a Django application alongside MySQL. I regret not taking a screenshot of the uptime but it was north of 8 years.
> As 2010 was drawing to a close, I found myself on more flights than coffee breaks, constantly testing technical solutions, in search of stability and reliability
Boring that never fails = Quality.