Maybe it is, maybe it isn’t. The only thing I know is, none of the arrogant fuckers on hacker news know anything about it. But that won’t stop them from posting.
There's an upside! If they're wrong, and they manage to convince more people—it basically gives you more of an advantage. I don't get into arguments about the utility of LLM technology anymore because why bother?
I have not tried it, but I used to be a .NET developer and worked a lot with LINQ (and contributed a bit to NHibernate and its LINQ provider), and I am a big fan of the approach.
Kusto does seem interesting too, and I think some of the stuff I want to build will find a use for it!
What’s more “legitimately scary” is that people wanna run it back to monoliths like it’s the 90s and think building a modern scalable system is too hard
As a software engineer, DevOps engineer, platform engineer and SRE rolled into one, I would say don't build monoliths -- instead build microservices that are slightly larger but still easily cloneable, scalable and fault tolerant. A mix of monolith and microservice, you might say, and I would like to call that a "siloservice".
Silo: A silo is a cylindrical tower used for bulk storage, like grain silos that stand tall near farms. Another kind of silo is harder to see — military silos are underground.
Obviously, you don't need 10 fragmented microservices all interdepending on each other -- that's one of the biggest overengineering traps with microservices in real-world practice -- but you can build multiple "siloservices" that do the same stuff more effectively while staying easy to maintain. I got this inspiration from working with monorepos in the past.
While I agree that there are certainly cases of microservices being used in places they shouldn’t be, I have trouble imagining that monoliths are strictly better in every case. Do you have suggestions for running monoliths at scale?
I think the big problem is it tries to do too much. We used to have many tools as SREs, but now teams are really limited. We handed the keys to the engineers, which I think was well intentioned overall. But we didn't set them up with sensible defaults, which left them open to making really bad decisions. We made it easy to increase the diversity in the fleet and we removed observability. I think things are more opaque, more complicated, and I have fewer tools to deal with it.
I miss having lots of tools to reach for. Lots of different solutions, depending on where my company was and what they were trying to do.
I don’t think one T-shirt size fits all. But here are some specific things that annoy me.
Puppet had a richer change management language than Docker. When I lost Puppet, we had to revert to shitty bash scripts and nondeterminism from the CI/CD builds. The worst software in your org is always the build scripts. But now that is the whole host state! So SREs are held captive by nonsense in the CI/CD box. If you were using Jenkins 1.x, the job config wasn't even checked in! With Puppet I could use git to tell me what config changed, for tracked state anyway. Docker is nice in that the images are consistent, which is a huge pain point with bad Puppet code. So it's a mixed bag.
The clouds and network infrastructure have a lot of old assumptions about hosts/IPs/ports. This comes up a lot in network security, service discovery, and cache infrastructure. Dealing with this in the k8s world is so much harder, and the cost and performance are so much worse. It's really shocking to me how much people pay because they are using these software-based networks.
The hypervisors and native cloud solutions were much better at noisy-neighbor protection, and a better abstraction for carving up workloads. When I worked at AWS I got to see the huge lengths the EBS and EC2 teams went to in order to provide consistent performance. VMware has also done a ton of work on QoS. The OS kernels are just a lot less mature on this. Running everything inside a single VM in the cloud removed most of the value of this work.
In the early 2010s, lots of teams were provisioning EC2 instances, and their costs were easy to see on the bill as dollars and cents. At my last company, we were describing workloads as replicas/GBs/CPUs/clusters on a huge shared cluster. Thousands of hosts, a dozen data centers.
This added layer of obfuscation hides the true cost of a workload. I watched a presentation where a large, well-known software service company said that their k8s migration increased their cloud spend because teams were no longer accountable for spend. At my company, I saw the same thing. Engineers were given the keys on provisioning but were not in the loop for cost cutting. That fell to the SREs, who were blamed for exploding costs. The engineers are really just not prepared to handle this kind of work. They have no understanding of the implications in terms of cost and performance. We didn't train them on these things. But we took the keys away from the SREs and handed them to the engineers.
The debugging story is particularly weak. Once we shipped on Docker and k8s, we lost ssh access to production. Ten years into the Docker experiment, we now have a generation of senior engineers who don't know how to debug. I've spent dozens of hours on conference calls while the engineers fumbled around. Most of these issues could have been diagnosed with netstat/lsof/perl -pe/ping/traceroute. If the issue didn't appear in New Relic, they were totally helpless. The loss of the bash one-liner is really detrimental to engineers' progress.
There is too much diversity in the Docker base images, and too many of them stick around. The tool encourages every engineer to pick a different one. To solve this my org promised to converge on Alpine. But if you use a full distribution as your base image, now you are shipping all of user mode with every process. I was on the hook for fixing a libc exploit for our fleet. I had everyone on a common base image, so fixing all 80 of my host classes took me a few days. But my coworkers in other orgs who had hundreds of different Docker images were still working on it a year later. Answering the question "which libc am I on?" became very difficult.
Terraform has a better provisioning/migration story. Use that to design your network and perform migrations. Use the cloud-native networking constructs. Use them for security boundaries. Having workloads move seamlessly between these "anything can run on me" hosts makes security a real nightmare.
I left being an SRE behind when I saw management get convinced Docker/k8s was a cancer treatment, a dessert topping, and a floor wax. It's been five years and I think I made the right call.
Unions are about digging in and fixing terms between management and labor.
Our industry is different from any other. The factory floor we work on is in our editors. The man on the assembly line building cars is a part of the machine. We are the people who build the machine.
We don’t want to lock terms between management and labor, because as we’ve built up our tooling, we’ve changed the game repeatedly.
When I got started, I was writing C/C++/assembly. I had to write my own standard library for every project. I was allocating and freeing all my buffers. I had a QA guy, an ops guy, and a DBA supporting me.
Then I was a Java guy. We realized the DBA wasn’t needed anymore. I didn’t have to allocate and free. I could now use other people’s software through packages. I got way more productive. Made more money for the company, and I got paid more for it.
Then I was a Python guy. We realized all this OO crap was a waste of time. We realized the QA guy could be replaced with better tooling and monitoring. The DevOps and cloud revolutions replaced my ops guy with APIs I could manage. I got way more productive. Made the company more money, and I got well paid for it.
Now we stand in front of the AI revolution. I don't know what my job will look like. But it won't look like what I'm doing now. I'm using Copilot a lot, and I'm way more productive. I can turn around UI for the first time! I'm hoping these new AI features will make my company a lot of money. I should get some of it.
I’ve seen a lot of people age out when the technology changed. I’ve seen a lot of people make good money for their work. What scares me more than the next technology pushing me out, is an industry that stagnates around “the way we do it”. Today we are on the road to infinity. But if we stop moving forward we are on the road to stagnation.
Well they're wrong and so is their language of choice. Any language that pollutes the global namespace when you import a lib did it wrong. For all of JS's faults, they nearly got this one right.
Also, even in JS this is still a problem when you really do want all packages to share the same version of the lib, regardless of whether you're able to import multiple. Sometimes you just don't want multiple instances.
npm and yarn offer a solution for that too, by forcing a certain version resolution, but then you're on your own if it isn't compatible.
I would contend that this is more an issue with namespace collision / poor namespace management than an issue with semver.
For example, when I have a REST API endpoint I always prefix it with /<module>/<major-version>/ - I would never expect to host both old and new versions on the same base URL.
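A minimal sketch of that convention, with made-up module and handler names and no particular web framework (just a toy dispatcher), assuming the only point is that two majors live under separate prefixes:

```rust
// Toy routing sketch: each major version of a hypothetical "orders" module gets
// its own URL prefix, so a breaking change ships under /orders/v2/ while
// /orders/v1/ keeps serving existing clients unchanged.
fn route(path: &str) -> &'static str {
    match path {
        p if p.starts_with("/orders/v1/") => "orders handler, major version 1",
        p if p.starts_with("/orders/v2/") => "orders handler, major version 2 (breaking contract)",
        _ => "404",
    }
}

fn main() {
    println!("{}", route("/orders/v1/items")); // old clients keep working
    println!("{}", route("/orders/v2/items")); // new clients opt in to the new contract
}
```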
The thing is, just updating the name/artifact is a half-assed solution required by existing packaging solutions with a flat and/or non-versioned namespace.
If more packaging solutions allowed for graceful coexistence of multiple library versions, this would be a non-issue. The problem isn't really semver vs naming.
Here, after the "libc" module released a major version, the definitions of the `void` C type in the two versions of the lib were considered by the compiler to be two different types, resulting in breakage everywhere around the library ecosystem.
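To make that concrete, here is a small self-contained sketch (the module names `lib_v1`/`lib_v2` and the `Handle` type are made up; the two modules stand in for two major versions of the same library that ended up in one dependency tree). Even though the definitions are textually identical, the compiler treats them as unrelated types:

```rust
#![allow(dead_code)]

// Sketch only: `lib_v1` and `lib_v2` stand in for two major versions of the same
// library. Their `Handle` definitions are identical, but to the compiler they are
// two different types, so values can't cross between the "old" and "new" halves
// of the dependency tree.
mod lib_v1 {
    pub struct Handle(pub u64);
    pub fn open() -> Handle { Handle(1) }
}

mod lib_v2 {
    pub struct Handle(pub u64);
    pub fn close(_h: Handle) {} // expects lib_v2::Handle, not lib_v1::Handle
}

fn main() {
    let h = lib_v1::open();
    // lib_v2::close(h);
    // ^ error[E0308]: mismatched types -- expected `lib_v2::Handle`, found `lib_v1::Handle`
    let _ = h;
}
```

For what it's worth, Cargo will happily link two semver-incompatible versions of the same crate into one build, so each half keeps compiling on its own; it's any type that has to cross between the two halves of the tree that hits this error, which is why a major-version split in something as foundational as `libc` ripples everywhere.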
There are also scenarios for dynamic languages / runtime errors.
My main problem with the current SemVer spec is that it does not mention the multiple-lib-versions problem, and it promises that dependency hell can be avoided simply by updating the major version number, thus encouraging authors to break backward compatibility freely.
Also note, it's not the case that SemVer is intended only for languages that support multiple library versions in one app. SemVer is a product of the Ruby community, and Ruby has a global namespace for classes and is unable to load several versions of a lib simultaneously.
In the 2000s, Ruby library authors were breaking compatibility left and right, neglecting elementary compatibility practices. If you were working on an application, practically every time you updated dependencies, the application would break.
So (in 2011?) they came out with this "manifesto" (why such a big name? This scheme of versioning was well established in the linkers and sonames of Unix-like systems for decades - it goes back at least to the 1987 paper "Shared Libraries in SunOS").
It's a good thing SemVer acknowledged finally that compatibility is a serious matter. Only that it's better to discourage compatibility breakages. An in cases when it's really needed (I agree such cases exists), there are things to take care of in addition to simply increasing major version num.
The API hurt Twitter. My company extracted lots of value from it, which Twitter didn't monetize at all. But they poured huge infrastructure and engineering effort into supporting the API. They never should have built it.
After seeing the dumpster fire that was the Warcraft 3 remaster, and the awfully executed heel-face turn in World of Warcraft's most recent writing, do you really want to see Blizzard make a Warcraft 4?
Does anyone remember when HAProxy's tagline was "Security - Not even one vulnerability in 10 years"? I loved that product.
Before they took all that money. Before they added all the crap people want but don’t need. It’s hard to do the fundamentals, especially when you take your eye off the ball.
I see what you are saying, but it could also be, for example:
1) over time more inherent complexity has accrued, making vulnerabilities more likely to occur
2) vulnerability analysis has improved, meaning that vulnerabilities are being found today where in the past certain vulnerabilities (either same or different ones) were present without being found
and we don't know that the things being added are "not needed", or that they are "taking their eyes off the ball".
Do we even know that they would be able to stay relevant without taking money? Taking money seems, in isolation, a good thing to me.
I didn't know that used to be their tagline though. That this changed is an interesting perspective.
And as for point #2 specifically, detection has improved as well, so it's not as if attackers are clearly winning with better attacks on larger attack surfaces. There was a talk recently (can't remember which conference) optimistically wondering if we might be succeeding more than we're failing, since the impactful bugs are getting much harder to find. We're very clearly not there yet, but try to get an exploit on a phone now and compare that to 2011 or 2001. There is a trend there, and it's not the same one GP claims HAProxy is going in. Though your point #1 might adequately explain that; in my head they're still just a proxy (I don't work with the product from a sysadmin perspective), so I wouldn't have known of it if it hadn't come up.