> most of these things can happen for poorly documented large codebase.
Documentation does not help beyond a point. Nobody reads the documentation repeatedly, which would be needed.
When you keep working on a project, and you need a new function, you would need to check or remember every single time that such a function already exists or might exist somewhere. You may have found it when you read the docs months ago, but since you had no need for that function at the time your brain just dismissed it and tossed that knowledge out.
For example, I had a well-documented utils/ folder with just a few useful modules, but they kept getting reimplemented by various programmers. I did not fault them, they would have had to remember every single time they needed some utility to first check that folder. All while keeping up that diligence forever, and while working on a number of projects. It is just too hard. Most of the time you would not find what you need, so most of the time that extra check would be a waste. Even the most diligent person would at some point reimplement something that already exists, no matter how well-documented it is. It's about that extra search step itself.
The closer you try to get to 100% perfection, the more the required effort grows, exponentially. So we have some duplication; not a big deal. Overall architectural quality is more important than squeezing out those last, not really important, few percent of perfection.
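For what it's worth, one cheap partial mitigation is a CI script that flags newly defined functions whose names shadow something already in utils/. A rough Python sketch, assuming a flat utils/ package at the repository root (the paths are hypothetical, adjust to taste):

```python
#!/usr/bin/env python3
"""Warn when a function defined outside utils/ shadows one already in utils/.

Rough sketch for a CI step; assumes a utils/ package at the repo root
(hypothetical layout).
"""
import ast
from pathlib import Path

REPO_ROOT = Path(".")            # assumption: run from the repository root
UTILS_DIR = REPO_ROOT / "utils"

def function_names(py_file: Path) -> set[str]:
    """Return the names of all top-level functions defined in a file."""
    tree = ast.parse(py_file.read_text(encoding="utf-8"))
    return {n.name for n in tree.body
            if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))}

# Collect everything that already lives in utils/.
util_names: dict[str, Path] = {}
for f in UTILS_DIR.rglob("*.py"):
    for name in function_names(f):
        util_names[name] = f

# Flag same-named definitions elsewhere in the repo.
for f in REPO_ROOT.rglob("*.py"):
    if UTILS_DIR in f.parents:
        continue  # skip the utils package itself
    for name in function_names(f) & util_names.keys():
        print(f"{f}: '{name}' already exists in {util_names[name]}")
```

Of course this only catches same-name duplicates, not a reimplementation under a different name, which is the harder part of the problem described above.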
In my experience, the usefulness of documentation in code declines as familiarity with a codebase increases. The result: people ignore it; it becomes outdated; now it's debt. Similarly, documentation that lives outside the code tends to grow with a codebase. Meanwhile, the codebase changes, personnel change, and more and more of the documentation becomes noise, a historical artifact of solving problems that either no longer exist or can no longer be solved the same way.
That being said, good documentation is worth its weight in gold and supports the overall health and quality of a codebase/project. Open-source projects that succeed often seem to have unusually strong, disciplined documentation practices. Maybe that's just a by-product of engineering discipline, but I don't think it is -- at least not entirely.
I'm sorry, but this is selling good engineers very short. If you didn't nest your utils folder 8 folders deep, it seems pretty obvious that one should check the utils folder before writing another utility function. This stuff should also be caught in code reviews. Maybe the new guy didn't know that util function existed, but surely you did when you reviewed their MR? Obviously mistakes like that can happen, but I've found that to be the exception rather than the rule, even in some of the gnarlier codebases I've worked in.
Assuming they even have code reviews: in your experience, when the person writing the code didn't check whether something already exists, will the reviewer catch that and tell them to delete their already finished implementation and use the existing thing?
I wouldn't say you should explicitly check, necessarily. More like, you go to implement the widget and when you open the appropriate file to get started, it's already there.
I for one think that this discipline is what separates a good developer from a good engineer. This kind of rigorous process is the kind of thing that I'd expect from most devs but is sadly missing most of the time.
I agree with you completely, but also posit that this is exactly what agentic LLMs should solve?
Claude Code's Plan mode kind of does this research before coding - but tbf the Search tool seemingly fails half the time with 0 results, and it gets confused and then reimplements too…
> But relying on other people's kingdom isn't free either
I know it sounds trite, but looking at what I'm responding to, it is not possible to not rely on others for anything but the most basic things where input(s), tool(s), action(s), and output(s) don't require anyone else. You need infrastructure, services, tools, materials provided by others for almost everything.
We are a highly networked species, and building a business is finding a spot in that network. You are always highly connected. Therefore I find that "kingdom" metaphor does not fit how our lives work. It's a network, not kingdoms, not even the richest country could afford to cut the connections.
Services like those from the big companies at least have the advantage that the companies cannot make them too bad, because that would backfire. Copying what most do is pretty safe compared to doing your own thing.
That's also because you want to concentrate on your core business idea. Sure, if you spend some effort doing your basic infrastructure differently from others you may save a little, but overall for many companies it will be much better to copy what most others in your space do, so that you have the same basis, and not risk being different in an area that is far from your core competence.
For many areas in business you just hope for the best. You hope(d) Russia-Ukraine would not escalate, or later that it ends quickly. You hope that the latest US tariffs won't be too bad. The world is full of surprises; for businesses, at least, big-company account access or cloud issues do not seem to be among the big ones, in comparison. I'm not saying this as a headline reader, but as someone dealing with a sanctioned country and other politics-related issues that have been impacting us for a while, despite doing business very far from anything critical (lifestyle consumer products).
We, for example, use one big company to host our DNS (due to history; they also used to host our emails when we began), but we have Microsoft host files (OneDrive) and email (Office 365) for our entire business domain. I would like to not have to rely on US companies, now more than ever (we are German), but that stuff just works. And not just as individual pieces, but also together. For example, when I open an Excel file stored in a shared OneDrive folder, and a colleague does the same, we get automatic shared editing and see each other's cursor position. Many small conveniences like that. AND, very important, emails just work - with rarely ever any issues because of rejected emails.
Whenever I see a discussion on reddit or here about "why Microsoft (Office)", soooo many people only know the most superficial of arguments. They talk about "but LibreOffice", "but [insert other mail or cloud-file service]", but the apps are not even all that important. It is everything, the huge amount of infrastructure and methods they provide, automatically or manually usable, that ties everything together. The "glue" vastly outshines just Excel or just Word itself, either one of which could indeed be replaced.
When you do more than just simple email, when you have to administer a few dozen employees and their devices, you will find that the big US companies are very hard to beat.
What would be the alternative anyway? Having your own server is a nightmare in comparison! Even just making sure my emails won't be spam-rejected by at least some providers (where my customers and business partners sit) is too much. Making sure the dedicated server is always up to date with patches - that's a lot of work that I'd rather not do. In any case I still have to rely on the hosting provider, who may cut off my wonderful 100% self-owned and administered server at any time because I ended up on some anti-spam list because I did not react to some new threat that I did not even know about in time.
Overall, relying on the big mainstream providers is a prudent choice. You can't avoid trouble, and ceteris paribus choosing them IMHO makes sense.
> You can absolutely build your application without relying on other companies too much.
Obviously, since they do it! I could also bake my own bread. The point is I prefer to specialize and spend as little effort as possible on things that don't differentiate me from others in my line of business.
I started with computers at a time of 8-bit CPUs, when I knew every relevant memory address by heart. The many many layers these days are not something I enjoy, but I will still use the mainstream stuff nevertheless! Because I am not in the business of basic infrastructure IT, and every minute I spend on it is a minute not spent where it actually matters in my business. Everybody - the few dozen employees, the partners, the customers (resellers) - they all know the big companies and their products. The only thing we do ourselves is EDI messages on top - on cloud servers. But we sure don't want to come up with OneDrive and MS Office alternatives, even if politically I'd like that.
Right, but there are levels of vendor lock-in. Going up to more lock-in doesn't necessarily mean an easier experience, that's what I'm trying to point out. So you have to do the risk analysis.
The fallacy I'm trying to point out is the idea that if you outsource some functionality to a locked-in vendor implementation, then your life is easier. It can be, but it's often not.
Works fine for me. It has a lot fewer options, truly "Lite", but most people will be fine. Whatever Google might do that will make this extension worthless, we will see; for now, it seems to be working. (It's funny that the Chrome Web Store lists this extension as "Featured".)
By the way, on Android, I replaced Firefox with Microsoft's Edge. It supports uBlock Origin (no "Lite" in the name, not sure what that means, I did not check the details of how much it supports since it just works as it is). It is significantly faster than Firefox (again, Android). It plays all videos, while Firefox just showed an "unsupported" placeholder for videos on some niche sex video site I happened to accidentally visit.
Supposedly, filter lists only get updated when the extension is updated with uBO-lite. Google could just start delaying approval for these adblockers and their filter lists would become out of date fairly quick.
It comes down to money and private interests dominating Western politics. That has bad sides but also good ones. Human conflict used to be about extermination a few thousand years ago; that is what I think of when I read such headlines. Such things can actually also be seen as progress. We now have priorities other than our tribe or nation, and that has good sides too. I like the Napoleonic war story where Napoleon gave a medal to a British scientist, Humphry Davy.
While bad for the nation, I try to see it as a mixed bag that people from the top down are... flexible.
The price we pay is, possibly, worse outcomes in conflicts. What we gain is that after conflicts end we get back to normal life - with one another - very quickly and relatively easily.
"Over the 6-year study period between 2009 and 2014, 322 engine failures or malfunctions involving light aircraft were reported to the Australian Transport Safety Bureau (ATSB) and/or Recreational Aviation Australia (RA-Aus). These reports involved single-engine piston aeroplanes up to 800 kg maximum take-off weight.
Aircraft powered by Jabiru engines were involved in the most engine failures or malfunctions with 130 reported over the 6 years. This represents about one in ten aircraft powered by Jabiru engines in the study set having reported an engine failure or malfunction.
Reports from Rotax powered aircraft were the next most common with 87 (one in 36), followed by aircraft with Lycoming (58 – one in 35) and Continental (28 – one in 35) engines.
When factoring in the hours flown for each of these engine manufacturers, aircraft with Jabiru engines had more than double the rate of engine failure or malfunction than any other of the manufacturers in the study set with 3.21 failures per 10,000 hours flown."
(When you read on, it appears the Jabiru engine was improved and now has fewer failures.)
I do not know how widespread Rotax engines are in Australia and how large the GA scene is there. Also, I do not track standardized failure rates of engine models around the world - only shared anecdotal evidence. Except for two instances, ALL reports or stories from friends and acquaintances about engine failures involved Rotax engines (probably a 5:1 ratio). Limiting the study to planes of up to 800 kg eliminates all Pipers and Cessnas - which I admit I used as a baseline comparison for my statements. I guess the only plane with a Lycoming/Continental engine below 800 kg that comes to mind is the PA-18 from the 1940s/1950s.
Now, I definitely do not say that these are bad engines, but there is a lot of chatter in Europe about how these engines are plugged into a wide range of airframes, and there are more complex system interactions than meets the eye, which can cause some problems. Or put differently: the C172 and PA-28 are probably among the most common airframes to stuff the Lycomings and Continentals into. I suspect we kind of figured out how to make these work reliably.
Rotax works in MANY many different combinations and many different airframes - so there is that.
> there are more complex system interactions than meets the eye which can cause some problems
I will grant this for sure. Kind of like modern cars though, it's a double-edged sword. On the UAS programme I'm working on it has been absolutely invaluable to be able to just plug into the 912 ECU's CAN bus and gather a ton of engine telemetry (and send it down to the ground for monitoring at the GCS).
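To make the "plug into the CAN bus and gather telemetry" part concrete, here is a minimal sketch using the python-can library. The SocketCAN channel name, the arbitration ID, and the byte layout are made up for illustration; the real message definitions come from the ECU's documentation and aren't reproduced here.

```python
import can  # python-can: pip install python-can

# Assumption: the ECU's CAN bus is exposed to the flight computer as a
# SocketCAN interface named "can0". The real adapter/interface will differ.
bus = can.interface.Bus(channel="can0", interface="socketcan")

# Hypothetical arbitration ID for an engine-status frame; the actual IDs and
# byte layout must come from the engine/ECU CAN documentation.
ENGINE_STATUS_ID = 0x100

try:
    while True:
        msg = bus.recv(timeout=1.0)  # block up to 1 s for the next frame
        if msg is None:
            continue  # no traffic in this interval
        if msg.arbitration_id == ENGINE_STATUS_ID:
            # Purely illustrative decoding: first two bytes as big-endian RPM.
            rpm = int.from_bytes(msg.data[0:2], "big")
            print(f"engine rpm: {rpm}")
            # In the real system this would be packed into the downlink to the GCS.
finally:
    bus.shutdown()
```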
Thank you for posting the links and starting the discussion about 912 reliability. I'm going to have to dig into it a little and see if there's any takeaways I need to bring back to my team.
With zero evidence to support this other than my own experience with N=4 of these, I have a suspicion that part of the problem could be that they're not getting sufficient maintenance and inspection because of how simple they are from an O&M perspective and how robust they are in nominal and off-nominal conditions. When I was first working with it and flipping through the operators manual I was kind of shocked to discover that the only real pre-flight actions are: check coolant level, rotate the prop and make sure the oil reservoir burps. There's a startup and warm-up procedure that we follow to the letter but short-term you almost certainly won't notice if you skip it. Before we had our robust telemetry system and checklists in place, we accidentally flew it with only one ECU lane turned on once and didn't notice until we were on the ground. Engine was already off after landing when someone came on the radio and asked "hey guys... in-flight we're supposed to have both lanes A and B on right?" "Yeah..." "The Lane B switch was off when I approached the aircraft...".
To summarize what I'm getting at: this engine, in my experience so far, has a ton of really robust redundancy features and those redundancy features work so well that you may not notice that you've got issues until you've run out of redundancy. I can only think of two situations where we've had issues bad enough that it caused it to "run rough" and trigger a deeper investigation:
- Because our aircraft is unmanned we have electromechanical relays in series with the Lane A/B switches that we can control from the ground both for engine-start safety (the engine can't be started unless both the crew chief and remote pilot have turned on Lanes A and B) and to be able to kill the engine remotely after landing or in an emergency. We had an electrical issue that was causing the relays to chatter, resulting in Lanes A and B getting sporadic power.
- Somehow in one revision of the ever-evolving full-system checklist the "check water separator" item got dropped and no one noticed. It flew probably 10+ flights on that checklist before we had a really rough start, in an environment that was highly conducive to water accumulation in the fuel (large daily OAT and RH swings). We were horrified at how much water came out when I realized that no one had been checking... and yet there had been zero negative effects until there was a big negative effect.
Gini does not give a full picture, it is just one measure.
Here is a German podcast from the high-quality broadcaster "Deutschlandfunk".
Headline: "Only the top four percent make it to the top in Germany."
> Despite political upheavals over the past 150 years, Germany's elites have remained the same. Sociologist Michael Hartmann criticizes the fact that only four percent of the population shapes the country. He calls for a quota of working-class children on executive boards.
The same goes for Germany's schools; my country has one of the worst records when it comes to mixing children from different backgrounds. Those who come from well-educated parents will become well-educated. Society is quite static.
Next, Germany puts the majority of the financial burden of financing the country on income from work. Income from capital, or much worse, inheritances, is not even considered; whenever the government needs to plug holes, it comes out of income from work.
Also, the number of bad jobs - especially those where even many engineers don't work for the actual employer, but for companies that lend them out - has only risen decade by decade, to absurd heights. Employers may claim that is to work around the strict labor laws, that they cannot just fire somebody they don't want, but that is an incomplete statement at best. The entire economy has moved away from stable, long-term, even lifelong jobs to ever more insecure employment. That is part of why our birth rate has just dropped to new record lows too: there is just too little security and too much uncertainty in one's life these days.
We are also terrible at providing housing, which further depresses the labor market because moving has become risky and costly: there just is no housing no matter where you go, and if you find something, it's likely to be much more expensive than what you had.
> Here is a German podcast on the high quality "Deutschlandfunk".
I forgot the actual URL! It's in German though, and a podcast (19 July 2025, 29 minutes). But it's good and quite thorough and has a lot of detail, so if you understand German, it is worth a listen.
They say the "elite" is about 4,000 people in Germany, defined as those having significant and real influence in politics, law, media (highly concentrated ownership), business.
The majority of the linked articles are waaayyyyy too long for what they have to say, and they reveal the subject only many paragraphs in.
From reading one or a few short comments I at least know what the linked article is about, which the original headline often does not reveal (no fault of those authors, their blogs are often specialized and anyone finding the article there has much more context compared to finding the same headline here on a general aggregation site).
Strongly agree with this. Many authors and video creators have interesting, valuable things to say, but they don't exercise restraint or respect for their audience's time.
If something is overwhelmingly long, especially considering the subject matter, I just skip to the comments or throw it in an LLM to summarize.
I don't get such incomplete, selective, comparisons.
The country can't go bankrupt and you can't just found another one.
Yes, when a country messes up they have to actually fix things, there is no way around it. Except getting merged into another country - like my birth country, the GDR, which ended up as West Germany's problem (but its people still had to do the work).
Also, if big enough companies (and banks) fail, it is the same. Not having a strong government would not help either; in such cases the companies would be the government, as we saw in even wilder times of huge companies and much less state in the US a century or two ago.
At some point in the hierarchy you have to live with not having omniscience and accept that sometimes things don't work out, and that you can't just walk away from the consequences of those failures.
Oh boy. Haven't watched much US news since, like, Reagan, have we? Dumping the debt of your failures on future generations has become somewhat of a competitive sport in politics. Can't really do that in the private sector.
Many years ago, in another millennium, before I even went to university and while I was still an apprentice (the German system, in a large factory), I wrote my first professional software, in assembler. I got stuck on a hard part. Fortunately there was another quite intelligent apprentice colleague with me (now a hard-science Ph.D.), and I delegated that task to him.
He still needed an explanation since he didn't have any of my context, so I bit the bullet and explained the task to him as well as I could. When I was done I noticed that I had just created exactly the algorithm that I needed. I just wrote it down easily myself in less than half an hour after that.
In general I agree with you, but I see the point of requiring proof for statements made by them, instead of accepting them at face value. In those cases, given previous experiences and considering that they benefit from such statements being believed, the burden of proof should be on those making the statements, not on those questioning them, no?
Those models seem to be special and not part of their normal product line, as is pointed out in the comments here. I would assume that in that case they indeed had the purpose of passing these tests in mind when creating them. Or were they created for something different, and the makers discovered completely by chance that they could be used for the challenge, unintentionally?