Red Hat couldn't engineer an operating system to save their life: systemd is a Windows-like system; they constantly break backwards compatibility; their compiler is purposely castrated to only produce 64-bit code; memory overcommit is still a thing and ships turned on out of the box; they can't get NFS to work correctly and haven't been able to for decades; they can't get Fibre Channel to work correctly; they resisted and sabotaged XFS for decades, only to now make it a default after making their customers suffer with ext2, 3, and 4 for twenty years; they resist ZFS; they've allowed netstat to be deprecated in favor of ss; both GFS and GFS2 are disasters which they could never get to work correctly; Puppet is a disaster, Saltstack is a disaster, Cobbler is a disaster, Satellite / Spacewalk is a disaster... seriously, what can Red Hat do correctly?
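If anyone wants to verify the overcommit claim on their own box, the knobs are standard Linux sysctls; a quick sketch (values illustrative, tune to taste):

    # 0 = heuristic overcommit, the usual out-of-the-box setting
    cat /proc/sys/vm/overcommit_memory

    # Disallow overcommit: commit limit = swap + overcommit_ratio% of RAM
    sysctl -w vm.overcommit_memory=2
    sysctl -w vm.overcommit_ratio=80

    # Persist the setting across reboots
    printf 'vm.overcommit_memory = 2\nvm.overcommit_ratio = 80\n' >> /etc/sysctl.conf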
They spend more time arguing with paying customers on bugzilla.redhat.com than fixing their code because it's easier to argue than to find out where the problem is, or to solve the problem properly.
Red Hat breaks backwards compatibility every 10 years. The LTS approach has been copied by every other Linux distro.
systemd is Windows-like in that it replaces separate, duplicated code in init, cron, atd, forever, and a billion init scripts with something actually designed.
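To make that concrete, here's roughly what replacing an atd or cron job looks like with systemd's transient units (paths and schedules made up for illustration):

    # One-shot job 30 minutes from now, instead of at(1)
    systemd-run --on-active=30min /usr/local/bin/cleanup.sh

    # Recurring job at 03:00 daily, instead of a crontab entry
    systemd-run --on-calendar='*-*-* 03:00:00' /usr/local/bin/backup.sh

    # The resulting timers are inspectable like any other unit
    systemctl list-timers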
Red Hat has supported XFS and paid its maintainers for something like 15 years now.
I could discuss the rest but it honestly seems you're arguing from an emotional viewpoint rather than a technical one.
I'm merely answering the "what's wrong with Redhat Linux?" question: it's enough to go to bugzilla.redhat.com, pull up the publicly visible priority 1 critical bugs, and read the exchanges between users and Red Hat; a pattern emerges. Never mind what I wrote and what I suffer through on RHEL every day; let's ignore and discount that. It's the exchanges in the priority 1 bugs that tell the tale far better than I ever could.
Try building RPMs across .el5, .el6, and .el7: the macro definitions are ripped out and (partially) put back in or modified far more often than every ten years. And that's just one example of many. Starting with .el6, they made their RPM backend scripts bust if -Wl,--build-id isn't used in CXXFLAGS, CFLAGS, and FFLAGS, because they modified their castrated compiler to emit that automatically and rely on it, so if you roll your own fixed version of the compiler but don't implement that, all your RPM builds suddenly break. Now I have to put that work-around in my compilers. Should I go on? I've got plenty of technical details...
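For anyone hitting the same wall: the failure comes out of find-debuginfo.sh during %install, and if patching the compiler isn't an option, the usual escape hatches look something like this (a sketch; macro names as shipped in el6/el7 rpm):

    # In the spec or ~/.rpmmacros: pass the flag explicitly so the
    # debuginfo scripts find the build-id notes they expect
    %global optflags %{optflags} -Wl,--build-id

    # Or stop a missing build-id from being treated as a fatal error
    %undefine _missing_build_ids_terminate_build

    # Or give up on debuginfo subpackages entirely
    %global debug_package %{nil}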
So yeah, they should go into web development and never touch OS and kernel engineering again.
Looking at prio 1 bugs for a huge distro like RHEL is not going to paint an accurate picture.
I've had production experience with all the major distros except SUSE, and Red Hat's QA is miles ahead of anyone else's. Canonical/Ubuntu do a particularly bad job of it, for instance.
That's not to say there aren't any issues with it, and I've personally experienced a few (the devicemapper Docker driver, for example, caused a lot of trouble for me and I'm glad they did the right thing and built overlayfs), but overall, their engineering is very solid.
> There exist no words in any of the languages I speak which can express my hate of GNU and GNU/Linux.
We might not find much common ground there :-)
I'll bite: I worked with a bunch of HP-UX systems before they got decommissioned a few years ago, so I can attest both to how well-built the operating system is and to how painful it was to get anything modern running on it. I also keep an eye on SmartOS/SmartDataCenter.
However, how does this make Red Hat's engineering bad?
HP-UX was hard to get started with when building software, but once one builds up a base of common libraries, it gets easier and easier, just as it did on Solaris.
HP's engineers never broke backward compatibility. The OS is lightning fast and rock solid. That takes a lot of insight and knowledge.
Red Hat constantly breaks things; I find things that worked yesterday broken today, even within the same mainline release. They couldn't even get shutdown to work correctly: a couple of years back, when we were working on integrating XFS (and they were still resisting it), the kernel was panicking because it was trying to write to an already-unmounted filesystem; that was 18 years into Linux's development.
SmartOS engineers would never do such a thing on purpose, as they are guided by the principle that 'empathy is still a core engineering value', as Keith Wesolowski once put it in an answer, and in those very rare cases when they do, they fix it immediately.
With Red Hat we get constant finger-pointing between them and the hardware vendor; they never take responsibility for anything, although we pay them lots of money. What the hell are we paying them for, then? They can't even engineer proper code and drivers for their own OS on the hardware they officially support. That's not engineering, that's hacking!
The worst by far is their lack of architecture. Take Satellite, for example, with its concept of channels: unbelievably confusing and complicated. Have you tried integrating your own RPMs into it? The needless complexity!
We had Satellite filling up an Oracle tablespace with irrelevant garbage log information even though we had just installed it; I called Red Hat and asked how to lower the amount of information Satellite generates so that it wouldn't constantly fill up the tablespace. Their support told me I had to go talk to Oracle, because it's an Oracle database problem! Yeah, they are that kind of experts!
Only SMF uses XML and SQLite behind the scenes. While XML is a poor choice of configuration format, SMF is considered the gold standard which all the others try to re-invent and re-implement (not-invented-here syndrome), because SMF has been working reliably for more than a decade, and added capabilities don't break backwards compatibility, so an SMF manifest you wrote ten years ago works without modification on the latest and greatest nightly build of illumos. That's systems engineering, as opposed to haphazard hacking. I use drivers from 1995 in the latest illumos and they run without recompilation or modification. I'd like to see GNU/Linux pull that one off.
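For those who've never touched it, the day-to-day SMF workflow is a handful of commands that have stayed stable the whole time (service name hypothetical):

    # Validate and import a manifest into the repository
    svccfg validate /path/to/myapp.xml
    svccfg import /path/to/myapp.xml

    # Enable the service and let the restarter supervise it
    svcadm enable site/myapp

    # Explain anything that isn't running
    svcs -xv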
As for \t not working in vfstab(4): it's simply not true, as all my vfstab(4) files use [TAB] characters to line up the fields.
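For illustration, a typical tab-delimited vfstab(4) entry (device names hypothetical):

    # device to mount    device to fsck      mount point   FS type  fsck pass  boot  options
    /dev/dsk/c0t0d0s7    /dev/rdsk/c0t0d0s7  /export/home  ufs      2          yes   -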
> Now I have to put that work-around in my compilers.
That seems really minor to me... Just because it takes a long time to find the root cause of a bug doesn't mean that the root cause was a stupid decision.
Disclaimer: I partially work as a developer/support engineer for a RHEL-derived OS, but use Debian at home.
It's a lot of engineering to roll one's own compilers, especially if one wants to do it correctly.
They are relying on specific hacks in the GCC backend for their rpmbuild back-end tools not to bust. That's lack of insight and lack of architecture: what if I were using Sun Studio compilers for Linux, or Intel compilers, or PGI compilers? rpmbuild would bust. They didn't think it through, and they never do. It's so stereotypical of them: for the past 20 years they've been haphazardly hacking, and they never learned from their mistakes how to actually engineer systems and architect solutions. Just take a look at the hacks they perform in their .src.rpm's, and it's crystal clear.
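The dependency is easy to demonstrate: a GCC patched the way Red Hat's is emits an NT_GNU_BUILD_ID note by default, while a stock or third-party toolchain generally won't unless you pass the linker flag yourself:

    # Compile a trivial program and inspect its ELF notes
    echo 'int main(void){return 0;}' > t.c
    gcc t.c -o t        # add -Wl,--build-id on a stock toolchain
    readelf -n t | grep -A1 NT_GNU_BUILD_ID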
And "-Wl,--build-id" work-around I had to pull was just one example. I have many.
IMO Red Hat has been on top of recent developments.
OpenShift is (arguably) the best PaaS Kubernetes distribution, and certainly the most enterprise-ready one. I use it in production and it's a great piece of engineering. They heavily contribute back to Kubernetes upstream instead of forking it. Red Hat is responsible for many important Kubernetes features like RBAC.
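As a small example of the RBAC model they pushed upstream, granting and auditing access in OpenShift is a one-liner each (user and project names made up):

    # Grant 'alice' edit rights in the 'myproject' namespace
    oc adm policy add-role-to-user edit alice -n myproject

    # Audit who can perform a given action
    oc adm policy who-can update deployments -n myproject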
They acquired Ansible and, recently, CoreOS.
Software Collections make it really easy to run an up-to-date software stack on RHEL by decoupling it from the base OS (which is the way to go, IMO).
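For the unfamiliar, a collection installs under /opt and only enters your environment when you ask; for example (collection name from memory, availability varies by release):

    # Install a newer Python alongside the system one
    yum install -y rh-python36

    # Run a single command against the collection...
    scl enable rh-python36 -- python --version

    # ...or drop into a shell with it on PATH
    scl enable rh-python36 bash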
And of all the IdPs I recently evaluated, Keycloak sucks the least.
Legitimately curious: the main issue with running containers on Linux is the bad state of the Linux kernel as far as security is concerned. Even with SELinux, it's risky to run multi-tenant containers on Linux due to the massive attack surface, necessitating things like [1] or lightweight VMs.
How does SmartOS solve this?
Also, does the Joyent stack have an OpenShift equivalent? Triton wants my credit card details to sign up with their public cloud, but I might grab a spare box and give it a try.
...if after reading that you still have specific questions, ask.
You don't need a credit card; if you don't want to run it on Joyent's servers, you can run Triton for free on your own infrastructure at home, at work (or on someone else's). All of that technology is freely available on GitHub at no cost other than reading a little documentation and investing some time to set it up following the instructions.
Out of curiosity, do Digital Ocean, Hetzner, Azure or AWS not ask for a credit card?
Well, one is forced to work with that inferior tooling and is not allowed to change it. This is excruciatingly painful and generates a lot of resentment when one knows that far superior tooling is available, gratis, with much higher quality support and far more competent engineers working on it.