I think that's the biggest issue with code signing: Xcode should come with a "Sign and Notarize.app" that lets you sign anything you point it at with a single click.
Let's say I almost got into an accident and I did two things: use my brakes and close my eyes. Does that mean that closing your eyes is good for curing a rash?
If you used your brakes and closed your eyes at the instruction of a researcher specialising in such accidents? And barely anyone had ever avoided such an accident before, making the researcher one of the world's leading experts?
What's "critical software"? Software controlling flight systems in planes is already held to very high standards, but is enormously expensive to write and modify.
In this case it seems most of the software which is failing is dull back office stuff running on Windows - billing systems, train signage, baggage handling - which no one thought was critical, and there's no way on earth we could afford to rewrite it in the same way as we do aircraft systems.
Something that has managed to ground a lot of planes and disable emergency calls today is in fact critical. The outcome of it failing proves it is critical. Whatever it is.
Now, it may be that it wasn't previously known to be critical. Whether we should have realised its criticality is debatable. But going forward we should learn something from this: think more about cascading failures and classify more things as critical.
I have to wonder how the failure of billing and baggage handling has resulted in 911 being inoperative. I think maybe there's more to it than you mention here.
Agreed, there is no such thing as perfect software.
In the physical world, you can specify a tolerance of 0.0005 in, but the part is going to cost $25k apiece. It is trivially easy to specify a tolerance, very hard to engineer a whole system that doesn't blow the cost, and impossible to fund.
Given how widespread the issue is, it seems that proper testing on Crowdstrike's part could have revealed this issue before rolling out the change globally.
It's also common to roll out changes regionally to prevent global impact.
To me it seems Crowdstrike does not have a very good release process.
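The staged-rollout idea above can be sketched in a few lines. This is a hypothetical illustration, not Crowdstrike's actual process: ship to a small canary cohort first, watch an error-rate signal, and halt before going global if the canary looks unhealthy. The stage names and threshold are made up.

```python
STAGES = ["canary", "region-1", "region-2", "global"]
ERROR_THRESHOLD = 0.01  # halt if more than 1% of hosts report failures


def stage_is_healthy(stage, update, error_rate_fn):
    """Deploy to one stage and check the observed error rate."""
    observed = error_rate_fn(stage, update)
    return observed <= ERROR_THRESHOLD


def staged_rollout(update, error_rate_fn):
    """Roll out stage by stage, stopping at the first unhealthy one."""
    completed = []
    for stage in STAGES:
        if not stage_is_healthy(stage, update, error_rate_fn):
            return completed, f"halted at {stage}"
        completed.append(stage)
    return completed, "fully rolled out"
```

With this shape, an update that bricks machines gets stopped at the canary stage: `staged_rollout("bad-update", lambda s, u: 0.5)` returns `([], "halted at canary")` instead of ever reaching `global`.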
There's only one piece of software which (with adaptations) runs every Airbus plane. The cost of developing and modifying that -- which is enormous -- is amortized over all the Airbus planes sold. (I can't speak about Boeing)
What failed today is a bunch of Windows stuff, of which there is a vast amount of software produced by huge numbers of companies, all of very variable quality and age.
I meant critical software as short-hand for something like: the quality of software should be proportional to the amount of disruption caused by its downtime.
Point of sale in a records store, less important. Point of sale in a pharmacy, could be problematic. Web shop customer call center, less important. Emergency services call center, could be problematic.
I, as a producer of software, have effectively no control over where it gets used. That's the point.
Outside of regulated industries it's the context in which software is used which determines how critical it is. (As you say.)
So what you seem to be suggesting (effectively) is that use of software be regulated to a greater/lesser extent for all industries... and that just seems completely unworkable.
What you're describing is a system where the degree of acceptable failure is determined after the software becomes a product, because it depends on how important the buyer is. That is backwards and unworkable.
It isn't, though. "You may not sell into a situation that creates an unacceptable hazard" is essentially how hazardous chemical sale is regulated, and that's just the first example that I could find. It's not uncommon for a seller to have to qualify a buyer.
I think the system is rather one where, if you offer critical services, you're not allowed to use software that hasn't been developed to a particular high standard.
So if you develop your compression library it can't be used by anyone running critical infra unless you stamp it "critical certified", which in turn will make you liable for some quality issues with your software.
I assume you mean "if the buyer will use the software in critical systems."
That's very realistic and already happens by requiring certain standards from the resulting product. For example, there are security standards and auditing requirements for medical systems, payment systems, cars, planes, etc.
> Software controlling flight systems in planes is already held to very high standards, but is enormously expensive to write and modify.
Here's something I don't understand: those jobs pay chump change compared to places like FB, and (afaik) social networks don't have the same life-or-death context.
Would not shock me for AV companies to immediately work around that if it were to be implemented. “You want our protection all of the time, even if the attacker is corrupting your drivers!”
All these things matter when you try to build an enduring company instead of pursuing the embrace, squeeze, and run strategy. I don't think that happens a lot any more; if we are lucky, one company might keep its course for 20 years, and then it gets sold or the founder moves on.
Not necessarily. JSON is used in a lot of places, also for large documents in data lakes and archives. It's useful to be able to query them with tools.
One solution not mentioned in the article is writing entries to something like Logstash or similar services built specifically for handling event logs.
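For Logstash specifically, the common approach is to emit one JSON object per line, which its `json_lines` codec can consume over a TCP input. A hedged sketch of shaping such an event (the `@timestamp` field follows Logstash convention; `service` and the other fields are assumptions, not a fixed schema):

```python
import json
from datetime import datetime, timezone


def make_event(service, message, **fields):
    """Format one log entry as a single JSON line for a json_lines consumer."""
    event = {
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "service": service,
        "message": message,
        **fields,  # arbitrary extra structured context
    }
    return json.dumps(event) + "\n"  # one event per line


payload = make_event("billing", "invoice generated", invoice_id=42)
```

The resulting line can be written to a socket pointed at a Logstash TCP input, or appended to a file that a shipper tails; either way the extra fields stay queryable downstream.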
With AWS you also buy a scapegoat because it's much easier to explain to superiors or investors when a large cloud service has downtime than it is to explain the same cumulative downtime caused by human error in your team.