I'm surprised at how dismissive the comments are. We need many angles of defense against these criminals. Dismissing this because companies should do better security is like dismissing doctors because people should get more exercise. That's silly. We need preventative care and treatment.
I'm not surprised by this announcement because the way that the pipeline-company ransomware hackers beat a hasty retreat was noticeably unusual, and already seemed to telegraph that the state was getting involved more...actively. Good.
Agreed. I'm a bit tired of the victim blaming with security. It's physically impossible to build a house that can't be broken into, and even harder for computer systems. Crime is a social problem; we can't rely on a dream world of mathematically perfect zero-trust security.
It's impossible to build a safe airliner, but we can get pretty damn close. Airline engineers know one cannot create a component or system that cannot fail. So the question then becomes, assume a system fails. Now how does the airplane survive?
With software systems, instead of demanding a perfect defense against the root password being compromised, think "if the root password is compromised, how do we prevent that from bringing it all down?"
In other words, think in terms of redundancy and isolation between systems.
And the largest piece of hubris and madness in critical systems is allowing over-the-internet updates.
But there is a big difference between airline safety and software safety. An airliner survives against the environment, it's PvE, a software system has to survive against hackers, it's PvP. If you shoot a rocket at an airliner, the airliner will fail, in that case we blame the person who shot the rocket.
> But there is a big difference between airline safety and software safety
I've worked professionally in both industries; they are not fundamentally different. Software practices can learn a lot from aviation practice, but they seem determined to spend decades rediscovering the methods the bitter, expensive way.
For example, software is still stuck in the dark ages where the idea is better training / better programmers / more punishment will prevent these sorts of failures.
> For example, software is still stuck in the dark ages where the idea is better training / better programmers / more punishment will prevent these sorts of failures.
What is your source on this? This goes against what anyone at any company where I have worked ever believed.
No-fault root cause analysis, process improvements, inherently safer practices, languages, libraries is what every place aimed for. I don’t even know what you might mean by punishment?
There are many, many programmers (you can see their comments right here) who fit what you could call the brogrammer/cowboy coder/lone star/rockstar developer type (many of them, probably, despite their age), and who will try to shame developers for making mistakes or present certain types of failures as inevitable: "you just need better developers".
You can frequently see them come out in Rust threads; they're generally against it, coming from C/C++. It seems a common attitude among low-level devs in my experience (there's a thing with "hardware" sounding "hard", which I guess makes them feel more "hardcore").
It's obviously not universal, but it's super easy to find if you search for some programming language discussions.
Okay, that criticism is legit, but they also have a point, and more importantly, they have docs, tooling, and cross-compiling experience. When you're implementing the basic C machine, or kernel, or drivers on which every other toolchain ever conceived at some level relies for new architectures and hardware, you are operating in the most constrained setting of just about any programmer today. It is different, and you have to think differently, because you're trying to make sure you're getting the foundation right.
When the docs exist, and are accurate, they can somewhat hide behind "get better programmers"; when they aren't, some can be even more so, because there is nothing worse than trying to drive poorly documented hardware. It either works or it doesn't.
Signed: the QA guy among a bunch of dev types who regularly points out how they do a great job implementing the wrong thing, and who helps shape process to make that harder.
The fact they come out in Rust threads has more to do with Rust's evangelist types running afoul of the long-standing love of "things that work". Somewhat in the cowboy camp's defense, none of the "no guardrails" types ever turns down a good static analyzer or test suite once you figure out how to get it smoothly integrated into their process. That's where I think Rust gets its outreach wrong.
Don't try to sell developers on a brand new lang to learn and replace what they are using. Use the lessons you learn making that lang, and improve the tooling they are familiar with. We don't have an infinite capacity to learn a new lang and library ecosystem every 6 months just to keep doing what we do. Once you get savvy enough with C and where the spec holes are, you've gained insight into how things actually work many levels more accurately than just about any other programming toolchain, and you also enjoy one of the only languages on the planet completely divested of licensing lock-in.
There is also the point that you can't really argue against C's effectiveness. It's always the first code to be made functional on any new silicon. I'm interested to see if Rust supplants it, but I'm wary of any language that's heavily reliant on LLVM, as I'm getting more savvy about how licensing risk tends to play out in the long run.
You can't beat the immortality and ubiquity of GPL. It is as close to the irrevocable toolbox from the public domain as you'll ever get.
Also, there is a general belief among C++ programmers that better training is the answer to programming bugs. This belief is slowly fading, but it's got a long way to go. Scott Meyers' books on Effective C++ represent a lot of effort to educate programmers out of making mistakes. For example, from the table of contents: "Prefer consts, enums, and inlines to #defines". If C++ was an airplane, #define would simply be removed.
> I don’t even know what you might mean by punishment?
There are several calls for punishment in the comments on the article.
I think the work of the people operating a system is just as important as that of the programmer. You can build a very solid plane or software and then have it fail due to being operated in the wrong fashion.
The question is whether both sides are doing their best, within reason, to mitigate issues. The programmer doing everything right while the admins forget to patch for years won't change a thing. The opposite is true, patching or configuring correctly won't do a thing if the system is full of "built-in" holes.
It's not a stretch to think of a setup where specific conditions that define this "within reason" are established for software developers and administrators. It's what an audit should normally uncover: weaknesses in the process, points for improvement, etc. Only this time it would be in the form of general and specific guidelines that get progressively stronger as time passes. It's not a sure thing but it raises the bar enough for most ransomware attacks to become cost prohibitive for the attacker.
BTW, I practice dual path in my personal life. If I'm doing something risky, I have a backup. For example, when I work under my car, I put the car on two sets of jackstands, even though I use stands that are rated for trucks. I'd never rely on a single rope/piton if rock climbing. I cringe when I see climbers doing that. I carry an extra coat in the car in winter, and water when driving in the desert.
Thanks for sharing. I knew some of D's history, but there was stuff in there I hadn't read before.
I like much of the way D's designed. It doesn't try to be flashy, gimmicky, or different for the sake of being different. It gives you a set of practical tools and doesn't try to be too opinionated about the way they should be used. It mostly makes it hard to shoot yourself in the foot. But if you really want to, you can. You gotta really try, though.
That's defense in depth as applied to system design. Think of it like cleaning out a cat box, and only having bags with holes. You only need a couple bags whose holes don't line up, and you're good to go.
The simpler they are, the easier they are to learn. The easier they are to learn, and less "opinionated", the less resistance they tend to build up against adoption.
D is interesting because, in my experience, D, like Ada, has been a hypeless language. Though I haven't checked for licensing encumbrances that might be behind that.
In about 2008 I started working for SAIC, on a contract to NASA's "Enterprise Applications Competency Center". While I was waiting for my computer and all the accounts and permissions to get set up, I was sent to do a code review for a minor application written in Flash/Flex/ActionScript + Java as was popular at the time, written by one guy. Everything looked pretty decent to me, except that he'd done all of the authentication/authorization in the Flash frontend. I pointed out that anyone who could connect to the app and fake the protocol could do anything the app could do, at a minimum. He said yeah, he'd have to do something about that. It went into production the next week. He's now part of the architecture/"engineering" group.
All of the things you mention are great, but they don't really address the problem. You need developers who know what the issues are and are willing to do the work to fix them even though they don't add anything to the feature list. In my experience, I don't have much reason to believe that today's developers are any better about that than yesterday's. There is a lot of security cargo-culting going on, which probably does improve the situation, but there's also a lot of "bootcamp" developers without the background to know that there are issues.
The first thing I have a group of developers do in a new context is learn the existing business process without automating anything or writing a line of code. They can dissect, name, and research any code they want from the people they're building for, but no writing until they understand the business context.
You'll never build a better tool than the one that eases your own pain. Make the user's pain your own, and beautiful things happen.
Not only that, but we spend billions of dollars on defense to protect those airlines from bad actors. I mean when a person blows up a bomb in an airplane, our response isn't "build bomb-proof airplanes".
Historically the choices were made to spend billions (and trillions) of dollars to invade countries harboring terrorists and use the situation to project power against other adversaries, advantageously control the price of oil, work trade deals, etc.
I predict the same path will be taken with cybercrime. The U.S. defense apparatus won't be giving subsidies to non-tech companies to boost security. Rather, they'll be waging war and using overlapping objectives and narratives to further other goals.
Cyberwarfare will be used to further terrible agendas (and already is) - that must be fought politically, but I am plenty jaded enough to see where that is likely to go. Unfortunately not participating in Cyberwarfare is not an option.
I disagree. Russia seems to be a large source of these crimes, and they are a bit too big to invade (without nuclear weapons it might be possible, but only a fool would invade a country that has them).
We might see some special forces go into action under cover. However, it would be assassinations done in such a way that Russia either won't know who did them or is willing to look the other way (the latter implies something diplomatic).
It turns out that airplanes are fairly resistant to bombs aboard. Several attempts with smaller bombs have failed, despite causing significant damage. The cockpit door has been hardened, too.
Airliners are now pretty resistant to engine explosions, once thought to be impossible to do.
Keep in mind that a bunker will never fly.
Nobody is suggesting not going after criminals who attack software.
The cockpit door being hardened introduces its own set of problems as well, and the engine failure containment is a good point.
Though the prevailing logic on a bunker taking flight is that the engine size will be too large to be economical, which you probably factor in, Walter, but the uninitiated in the aerospace industry tend to simplify away.
We design military aircraft to fly into warzones. They are very much PvP. We design them with various countermeasures to deal with rockets and ejector seats if those fail. Yeah, planes will be shot down and pilots may die despite these precautions, but skimping on the ejector seat because "hey they might die anyways" is totally unacceptable.
Yes, building a safe airplane is doable. But this is not a good comparison.
Securing a company is like saying that you have to change all of the wiring in a country without impacting the power supply. ALL of it: the house wiring, the cables transporting power, everything. At once.
Security in a company is not a single system, it is a messy interaction of unknown dependencies nobody understands. And this mess runs a business.
Of course, there are plenty of things one can do, but even simple tasks such as "let's reset all 100,000 accounts to make sure they are long/complex/whatever" are asking for an apocalypse.
How difficult this is becomes visible when you work in information security and have to balance "we MUST NOT be hacked" against "we MUST NOT impact the business".
It didn't start out that way. It took a long time to figure out how.
> But this is not a good comparison.
I can't agree with that. I don't see any rationale for either airplanes or software systems being special.
> Security in a company is not a single system,
An airplane isn't, either. For example, part of airplane safety is the air traffic control system. Part is the weather forecasting system. And on and on.
> Yes, building a safe airplane is doable.
It didn't start out that way.
And now only FAA/EASA etc. certified companies and individuals can build a commercial aircraft.
And they can only build the aircraft they are certified to, using the same certified components, and the same certified tools. They cannot change any aspect of the construction without another round with the authorities.
Let me know when the CIOs of listed companies are up for that kind of lifestyle for their email and word processors.
> Let me know when the CIOs of listed companies are up for that kind of lifestyle for their email and word processors.
I think you're absolutely right that this kind of rigidity is not part of our tech culture, but maybe it should be if that tech is running power grids, [oil] pipelines, and other critical infrastructure.
In summary - maybe we should spend more money so that we get systems which are reliable and resistant to this kind of attack. (_I_ think that's probably a good investment for power/transit/core network/safety systems)
"this kind of rigidity is not part of our tech culture"
Yes and no. "No" because there are best practices and bits of middleware that, although they may still get improvements over time, nevertheless receive fewer and fewer changes (and have a logarithmic-looking dynamic of development). They mature. Strongly advising things that have passed the test of time and the scrutiny of broad use just makes sense, regardless of whether that looks "rigid". (Not that many implement their own doubly linked lists nowadays.) Then "yes" because our "tech culture" pool is big enough to also accommodate fashion, hype, and a whole psychosocial can of worms...
"And they can only build the aircraft they are certified to, using the same certified components, and the same certified tools."
And that level of rigor is appropriate for the stakes that selling mass produced commercial aircraft implies. The discussion context was critical systems. But then you threw "word processors" in there. Why?
Because word processor documents have often been the vectors for attacks. And once an attack is inside your systems, there is nothing preventing the attack attached to a document from infecting and encrypting your machine or infecting your PLC and destroying your industrial equipment.
I'd say that the surface of attack here is the industrial equipment's link to general computing equipment (which it's expected to be less secure). The solution just can't be to secure the whole world of software that may somehow end up on general use computers. The point is, my remark is still valid, as a discussion on critical systems got mixed with clearly non-critical ones.
I don't think you confused the issue there at all; you forced a clarification of boundaries. The safety-critical PLC industrial controller network should be isolated from the Net. However, even in the pipeline hack, the shutdown of the PLC network was due to compromise of billing systems, which are non-safety-critical to the immediate user population (administration) but mission-critical to the architecture of Western, market-mediated economic activity. You can't secure those systems perfectly, though we can definitely do better. The correct response in this case, however, is effective deterrence of those looking to engage in cyber offensives. Like it or not, when you can sit back outside the reach of effective enforcement, cause mayhem and havoc, and make a buck doing it, financial incentives pretty much ensure it will happen.
I just hope we don't take it too far. Many young and talented people in the CS and IT space cut their teeth testing the limits of legitimate access without pushing into the full on destructive regime these attackers have.
I'd hate to see things cracked down on so hard that we lose a good signal for talent because we decide the integrity of cyber systems must be defended at all costs. However, there needs to be a much more pronounced reaction to the type of blatantly malicious activity that has been escalating for the past decade or so.
Dual path systems came first. Regulation came much later, it wasn't a prerequisite. Regulation didn't design airplanes, it standardized existing practice.
Imagine airplanes were built however the builders wanted: crashing from time to time, failing to start, and having people out on the wings fixing things in flight.
If this was something done for fun and without impact on people then nobody would care.
Suddenly, a Monday morning, someone says "woah, this cannot be - you have to fix this". But this is not fixable, you have to build a new plane from scratch, or completely review the existing ones. Planes would be grounded.
Now take a software company: typically your old plane is flying by more or less a miracle (when it flies at all). You cannot fix it; you have to rebuild it. Either you ground the company and force them to build something new, or you will always have legacy.
The legacy is not fixable - it simply is not. You need money to redo everything and if you do not have the proper pressure then it will not happen.
Then, building a new company/software can be done the right way. This is not even difficult, I would even say that having these constraints will help in the overall quality. But this is a new software, not a "fix" of the old one.
Um, airplanes are constantly undergoing revision and improvements and bug fixes. Only very serious ones result in grounding. Eventually, they become too expensive to upgrade and Boeing/Airbus designs a ground up replacement.
> And the largest piece of hubris and madness in critical systems is allowing over-the-internet updates.
What would you suggest in its place?
You'd need to replace the internet with something - postal mail, Fedex, courier deliveries, etc, or just have things that never get upgraded. Every one of those options has significant limitations, and in many countries, I'd trust SSL over postal mail every single day.
I think if you alter the wording to be "more-secure internet deliveries" then you'll have me agreeing with you, but unless I've missed something, your comment seems poorly aimed (which is odd, as your previous example of the root password is spot-on).
Note, the American postal service got its reputation for reliability among the citizenry (a reputation soiled by hostile management and politics in recent years) in part because the U.S. government was willing to back it with men with guns. The Marines were tasked with seeing that the mail was delivered, or dying in the process. This was so effective that even today no one considers attacking the post a realistic option, despite the withdrawal of armed forces from active involvement.
The Internet has a two-fold issue.
A) Its fundamental design was an interconnected network of trusted nodes, with a self-healing capability to facilitate command-and-control continuity in case of nuclear attack. All protocols carry that starting assumption underneath them.
There is an entirely unexplored depth of "authorization/security first" computer networking practice out there waiting to be enumerated, instead of trying to bolt security mechanisms onto what is already built without distrust designed in from the get-go.
It's just so wildly impractical to implement, and so undesirable to the Western philosophical commitment to free-by-default expression, that it's not a natural thing to wrap one's head around.
B) What are you gonna do to me? I'm behind 7000 proxies in different jurisdictions that work fundamentally differently from yours and are unlikely to cooperate with your projection of power!
In short, it's a people problem, not a technical one. To the degree it is a technical one, the middle-boxes hold everything back <shakes fist>.
For critical systems, I suggest using a usb drive.
Do you really want a missile guidance system update-able over the internet? How about the auto drive system on your car? What about the code that keeps track of accounts in your bank? Don't forget the code that keeps the pipeline running!
I don't think this is a valid comparison. If you are comparing software on a plane to an ordinary application, airplane software isn't a hacking target because it is generally well isolated from outside networks.
If you are comparing the physical build of systems in a plane to software, then hacking of software is equivalent to a bird or drone flying into an engine, a laser attack, or a hijack attempt... which we know do not happen often, but if a lot of money were to be made by doing so, I'm sure the frequency would increase.
At the risk of putting words in their mouth, they are comparing the method of airplane safety, where they look at redundancy (assume X will fail and the plane needs to survive this), looking at system solutions over individual fault (redesigning a warning indicator so pilots cannot miss it rather than blaming individual pilots that do miss it), and a regulatory body of investigators that enforce standards and investigate failures with the aim of learning from them and improving practices. You are thinking about the specific resulting design choices rather than the system that led to them.
Oddly enough though, the analogy tends to diverge when scaled: the more material you put into your house, the less vulnerable it is; the more lines of code you put into your software, the more vulnerable it is.
Taken to an extreme, anyone can take down a house made of straw with their fist, but nobody can exploit hello world.
I despise seeing simple apps with ridiculous dependency trees (package.json with line counts in the 5-6 figures, for example) and other complexity that can't possibly be fully understood by whoever's responsible for operating it. But I suppose things would be in even worse shape if we reinvented the wheel instead of using well-known libraries and so forth.
> the more material you put into your house, the less vulnerable it is
I think you have cause and effect backwards.
Yes, if you want to build a more secure house, you will need more material than another house with equivalent functionality. However, a bigger building isn't magically more secure than a smaller building.
If you walk into my house, I will detect and kick you out almost immediately. If you walk into our office building all you need is a hardhat and a confident stride and you can get anywhere you like. Hell, people will probably even help you get there.
Which makes it similar to code. The smallest app in terms of total 'material' builds up its queries with string concatenation. It takes a lot of 'material' to prevent those kind of injection vulnerabilities. And yes, a data access library that helps you with that is also 'material'.
If you do not audit and are not willing, in case of emergency, to take over support of your dependencies, you are creating a mine field. Most people do not understand what this even means though.
Solid libraries are boring and done: old, stable and active bug fix support. Not new features weekly.
Java or .NET vs the JS ecosystem. In our client contracts we have responsibility for our deliveries; in .NET and Java we use well-supported libs over a decade old which we can support ourselves if the maintainers quit. With JS this is an issue. Things are generally just not set up for decades of runtime, and yet there we are: we now have Node.js projects almost 10 years old with many libs we have to audit and support ourselves, and they are not very good quality. I think the modern web is only just seeing the tip of the iceberg security-wise. It will get much worse.
If I may quibble over a technicality, hello world is just one layer of an already complex technology stack. Suppose someone was able to slip code somewhere deeper in the stack such as your printf implementation (which generally a programmer will, and should, trust just works like it's supposed to) that opened a C2 channel. Then when you run your innocent hello world program, you're pwned through no fault of the program at the top of the stack.
Exactly this. And, of course in practice, it's also the OS, firmware and so much more.
Your comment speaks to exactly how people underestimate the true attack surface. It's far more vast than most anticipate, and their conception of it tends heavily towards the literal surface.
> the more material you put into your house, the less vulnerable it is;
After following the lock picking lawyer on youtube for a while, it seems to me there is a fallacy somewhere in here.
The weak points of a structure are often underestimated to begin with and adding complexity to the building (eg. door badge system vs. a good padlock) doesn't necessarily add security.
> the more material you put into your house, the less vulnerable it is
I have a counter example about scale. The more material you put into a city (the more houses you build), the more vulnerable it is (more potential problems, more opportunities for crime, etc).
While the house is less vulnerable than a tent, it can be secured for only as long as the flow of people and material through this house is very well controlled. The bigger squat house is not necessarily more secure. On a city scale free movement of people and goods is essential, and thus any place can be potentially visited (used) by anyone. We want the same urban infrastructure to be re-used by as many people as possible. There is a huge attack surface.
Code is more like a city. We want the same code to be re-used in as many different contexts as possible.
That completely depends on the specifics. If you expand your house with extra spaces or buy gadgets, you typically also lower security. Security always requires a principled approach.
Well the package.json only specifies the top level dependencies not all of the dependencies in the tree. So the .lock is probably a better representation of the actual dependency tree. It's not uncommon to include one library and get an extra 10, 20+ dependencies you don't even know about unless you care enough to check.
Nope, but we also demand some due diligence from private entities. When you leave the garage, the windows, and the front door open with a "here's the money" sign pointing at your safe, you might have a problem if someone steals your customers' stuff.
Company private security and protection against these attacks is more than abysmal. Just take the pipeline hack as an example. There should be no way at all that infrastructure critical to the nation's security gets shut down because of a corporate hack.
If the private sector wants to earn profit from these things they need to show they're competent enough to handle it.
The market doesn’t incentivize security until it is too late. A pipeline operator that passes security costs onto consumers will lose to one with lower security and lower costs. Serious, significant attacks might occur at year 5, when the company becomes a big enough target to make it worthwhile to attack. By this time, the company who did not invest as heavily in security has captured the market while the one that invested in security does not have strong market share or may have gone out of business.
Or they invested, but not in the right areas, or there was a new attack through a zero day.
They can be extremely competent in managing oil and gas (and the physical and operational safety that comes with it), but not be competent in Cybersecurity.
It’s real, hard costs today for something that may or may not happen, paying for controls that might prevent an attack. In the best case, as a customer, nothing happens. Whether improving the likelihood that nothing happens is worth a $1 or $3 per barrel premium (or if that $2 is justified) is a hazy mess and hard to make a decision around.
>The market doesn’t incentivize security until it is too late.
That's why you have government and law to require it. The free market solving everything is a myth, and the USA is lucky that all the pipeline hackers wanted was money. Imagine if that was a nation state trying to immobilize the military in preparation for an invasion. No ransoms, instead bombs start falling while you are paralyzed.
Pulling this thread: say the government regulates it - what do they require? Regulations that say you need to be secure enough to not be hacked? That requirement changes daily.
Baseline security standards? Sure. But what is the baseline? And how influenced by lobbyists is that baseline? You know the big security companies would love to have their product be a government requirement. Attackers do not have regulations. They know the regulations and work around them. Makes it more difficult, but eventually they develop new attacks.
So now you’ve got this government body making regulations that needs people who understand security to make the regulations, who then need some way to audit the companies to ensure compliance. The companies then have to focus on the audits and not on emerging threats, or do both, and it increases overhead.
I’m not against government regulations when it is a good fit, but there are a lot of unintended consequences. The government is made up of people, too, and they may not be as close to the work to understand the optimal allocation of resources to minimize security risk.
Money laundering is a similarly difficult problem. Most of the rules are written in a "spirit" manner, instead of a "prescriptive" manner. With AML (anti-money laundering), you often hear the term "red flags". They are signs, but not a source of absolute truth. I could foresee something like AML in the form of corporate computer security coming soon. In the same way that Sarbanes-Oxley forever changed corporate accounting after the WorldCom and Enron accounting scandals of the early 2000s (top execs now need to sign off on yearly accounts), imagine if top execs needed to sign off on corporate computer security. As I see it, the CTO-cum-head-of-security will soon be signing yearly audit documents in blood.
And the trick to making AML regulations effective is massive fines -- fines so large that they genuinely affect quarterly earnings and stock prices. The same could be done with corporate computer security regulations.
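The spirit-vs-prescriptive distinction matters because prescriptive rules are trivially gamed. As a rough illustrative sketch (the $10,000 figure mirrors the real US currency-reporting threshold, but the 72-hour aggregation window and all the code are invented for illustration), compare a prescriptive rule against a "red flag" heuristic for the classic structuring pattern:

```python
from datetime import datetime, timedelta

THRESHOLD = 10_000            # mirrors the real US currency-reporting threshold
WINDOW = timedelta(hours=72)  # invented aggregation window, for illustration

def prescriptive_flags(deposits):
    """Flag only individual deposits at or above the threshold."""
    return [d for d in deposits if d[1] >= THRESHOLD]

def red_flags(deposits):
    """Flag sub-threshold deposits whose rolling-window total crosses
    the threshold anyway (the classic "structuring" red flag)."""
    deposits = sorted(deposits)
    flagged = []
    for t, amt in deposits:
        in_window = [a for (u, a) in deposits if t <= u <= t + WINDOW]
        if amt < THRESHOLD and sum(in_window) >= THRESHOLD:
            flagged.append((t, amt))
    return flagged

# Three $9,900 deposits in one day: invisible to the prescriptive rule,
# picked up by the aggregate red flag.
day = datetime(2021, 6, 1)
deposits = [(day + timedelta(hours=h), 9_900) for h in (0, 8, 16)]
```

A security analogue would look similar: a checklist rule ("patch within 30 days") is easy to satisfy on paper, while the spirit-style question ("does this pattern of access look like exfiltration?") is what actually catches attackers.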
The government already has baseline security standards that are created and published by the government, not private entities. NIST 800-53[0] is a good example of this, and it generally applies to critical infrastructure providers:
Regulations should not adopt a private company's baseline security standard; we should lean on the work NIST has already done and standards that already apply to (mostly defense) critical infrastructure.
IT can't be looked at as a cost center anymore. The constant pressure of cost reduction is what causes these failures to happen in the first place, because nobody running the infrastructure really cares. If something goes wrong, they're out of a job anyway.
This is an interesting point. If the relentless drive to reduce cost in IT is the root cause of so many corporate computer security issues, why isn't corporate accounting (which is covered in the US by near-draconian Sarbanes-Oxley rules) similarly affected? I point to regulations.
Further, would the same be true of giant pharma companies that create drugs that we ingest? ("Oh, skip those tests. If a few people get injured, we'll pay hush money.") Why don't we see it? Simple: Incredibly strong regulations in US/EU/Japan (the "big three" for global drug regulation & approval).
I'm not against government regulations when it is a good fit, either, but as long as it's cheaper to pay the ransom, the companies have incentives not to do anything about the threats.
Industrial civilization has been dealing with these kinds of tradeoffs for what, a couple hundred years now? Regulations seem to be the best option out there.
If the goal was to disable the pipeline the attacker could just apply thermite somewhere along one of its many unguarded miles. The reason that doesn't happen is because retribution for such an act would be striking to say the least.
It's impossible to prevent all attacks, physical or cyber, so at some point one needs to either submit to an order where attackers act with impunity, or otherwise invest in retribution.
> If the goal was to disable the pipeline the attacker could just apply thermite somewhere along one of its many unguarded miles. The reason that doesn't happen is because retribution for such an act would be striking to say the least.
For what it's worth, a couple weeks ago someone (according to a manifesto, an anarchist group) did exactly that in Munich - they set ablaze about 50 10 kV electricity cables that had been laid bare due to construction works, striking against a military supplier and cutting off about 20,000 households for over 36 hours until the utility managed to restore service: https://www.br.de/nachrichten/bayern/stromausfall-in-muenche...
Sabotage or plain old theft against utilities is pretty common, but it's hard to do something physical that truly disrupts service for longer than a day or two - the networks are designed with reliability against all kinds of issues in mind. An IT-based attack leaves no traces if done well and can have a week- to month-long impact, simply because back when these networks were designed, IT threats did not exist.
I'd actually be less worried about that to be honest. That type of mobilization puts out signals we're already well equipped to catch.
The entire reason the pipeline hack is such a noteworthy event is that (and I truly believe this), someone was dumb enough to take a job bigger than their head. I don't think a nation state would have burned that opportunity by signaling their technical capability that way.
Funnily enough, the pipeline hack and ransom wouldn't even be feasible without the advent of cryptocurrencies making the AML bypass feasible. As much as I despise those regulations in particular, I cannot argue with their efficacy.
You're imagining what is essentially an impossible scenario, because your setup is factually badly wrong.
The ransomware attack in question wasn't capable of shutting down the pipeline itself, whether the hackers wanted money or not. That was a voluntary action by the company, a questionable precaution they chose to take.
The US military isn't directly restrained by that pipeline. They have their own fuel supply lines that do not particularly care about that specific pipeline. And even if they did, they can go to the source, they don't require that pipeline for fueling purposes. They have other means of mobilizing refueling, up to and including anything that is necessary from a transport, manpower and logistics standpoint (including commandeering approximately four zillion private fuel trucks and tankers to get fuel moving for defense purposes).
There is no scenario where that pipeline existing or not existing tomorrow would shut down the US military or prevent its ability to defend against an impossible and amusingly implausible attempted land invasion of the US domestic territory.
So, imagine if that was a nation state (uh, which one?) mobilizing for an invasion that can never happen, an invasion that could never get across the Atlantic or Pacific. No.
Bombs start falling? From where? China? Russia? Russia is doing what, invading the east coast? With what ships? With what air cover? With what magical clandestine capability to hide a massive military as it sneaks across the Atlantic on non-existent ships? With what aircraft carriers? And with China, the US sends bombs back the other direction. The ability to throw a thousand nukes at China isn't restrained by the East Coast pipeline, and they know that, as does every other nation on the planet. Again, your setup is so far outside of reality that it's absurd.
Or a foreign government paying more to not give back the data. Just for the economic impact. One that might be under sanctions from the target country, say.
I don’t think that’s a fair comparison. I think a fair comparison would be 80,000 companies buy the same vault door from supplier X. But suddenly one criminal group has found a universal key to the vault that no one else knows about, and can now access all 80,000 vaults nearly simultaneously and clandestinely even though they still look closed and secure from outside observers.
... but this was 5 years ago and everyone and their dog knows it by now, the company just didn't bother to change that door. Also, the criminal group doesn't hit doors with cameras, but nobody bothered to install one.
---
What you described is a zero day, which is very rarely used - most ransomware simply uses the absolutely low hanging fruit of companies lagging behind years in security updates combined with highly insufficient backups.
Companies generally do not have a sufficiently tested backup strategy and plan (though backups are only one part of the picture). Those same companies have probably never suffered any major consequences as a result, which might explain why this gap is so common. Sloppy/incomplete backups can probably wing/firefight the more common 'single server dies', 'single directory needs recovering from accidental deletion on the file server', 'need to restore database following botched upgrade' type scenarios, creating a false sense of security.
In the same way companies do not have sufficient HA for their critical systems or processes. HA means being actively resistant to events impacting availability by having (typically automated) redundancy to remove SPoFs. It doesn’t mean HA owing to luck the server hasn’t died in 10 years owing to lack of/poor maintenance. But both with HA and backups companies can dodge bullets (until a real emergency) and maybe never even have a major incident.
Ransomware exploits exactly this organisation-level lack of sufficient backups of all critical information assets.
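An untested backup is indistinguishable from no backup. A minimal sketch of an automated restore check (the file layout and function names are invented for illustration; real setups would restore a whole system, not one file):

```python
# Minimal restore-verification sketch: back a file up, restore it to a
# scratch location, and compare checksums. A backup that is never
# restored and verified is only assumed to exist.

import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def backup(src: Path, backup_dir: Path) -> Path:
    backup_dir.mkdir(parents=True, exist_ok=True)
    dest = backup_dir / src.name
    shutil.copy2(src, dest)
    return dest

def verify_restore(original: Path, backup_copy: Path) -> bool:
    """Restore into a scratch dir, then verify the checksum matches."""
    with tempfile.TemporaryDirectory() as scratch:
        restored = Path(scratch) / original.name
        shutil.copy2(backup_copy, restored)
        return sha256(restored) == sha256(original)
```

Running something like this on a schedule, and alarming on failure, is the difference between "we have backups" and "we can restore".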
The backup thing is really weird. I think only very small non-startup companies care about them a lot, because they just cannot afford not having them. In all other places, when I bring up the topic, everybody seems to be lightly surprised and the sense of urgency just isn't there.
Personally I find this just puzzling: in my home folder I lose files at least once a year, and I have also lost whole partitions. So having no backup at all is not an option for me at home. I cannot understand how no backups, or unmaintained backups, can be an option at companies.
80,000 planes use the same bolts to secure the engine pylons to the wing. It is found that the bolt can shear in cold weather due to a casting defect. This does not cause a catastrophic failure because the aircraft are designed in such a way that an individual compromise will not bring down the entire system. Bank vault doors don’t open to the street for a reason. If your entire system is relying on the security of that one vault door, you have already failed. If you are storing sensitive data on millions of users in plaintext, you have already failed.
Even plaintext can be secure with the right other measures. Air gapped in a physically secured, access controlled location, with all egress/exit points covered by sufficiently rigorous security measures? You're fine.
Encryption introduces its own SPOFs. If you've never had a power failure in the middle of a key rotation, you've never been bitten by your attempt to secure everything with cryptography. Or worse, had keys corrupted by sketchy hardware.
Defense in depth, and proper threat modeling is key. There is also the very real question of "Do you really need that Internet connected anyway?"
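One way to take the sting out of a crash mid-key-rotation is the classic write-then-atomic-rename dance: write the new key to a temporary file, fsync it, then rename over the old key, so a power failure at any point leaves either the complete old key or the complete new key, never a torn one. A minimal sketch (the file layout is invented for illustration, and real key management also needs to handle re-encrypting data under the new key):

```python
# Crash-safe key rotation sketch. os.replace is atomic on POSIX
# filesystems, so after a crash the key file holds either the old key
# or the new key in full, never a partial write.

import os
import secrets
from pathlib import Path

def rotate_key(key_path: Path) -> bytes:
    new_key = secrets.token_bytes(32)
    tmp = key_path.with_suffix(".tmp")
    with open(tmp, "wb") as f:
        f.write(new_key)
        f.flush()
        os.fsync(f.fileno())   # make sure the bytes are on disk first
    os.replace(tmp, key_path)  # atomic swap: old key xor new key
    return new_key
```

The same pattern (and its limits on networked or exotic filesystems) is why databases journal before they commit.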
The infrastructure wasn’t impacted. It was shut down because the billing system was.
It’s also easy to assume that everyone is incompetent. That doesn’t make it true. Like any hostile situation, a defensive position can always be overcome, you have to have an active offense as well.
You have a point, but I can also expect a minimum level of competence and care about the data you have stewardship over.
Sure, given enough time and motivation anyone can probably break into anything, but that isn't an excuse to let the password for the FTP server that pushes out updates be 'password123'.
Here's the problem: companies have absolutely no incentive to care about security. Several years back some hackers stole a bunch of my information from Experian. You know what I got out of it? A free $10 subscription to their identity service, along with the lifelong worry of wondering if someone has opened an account in my name and racked up a ton of debt that I'll be held responsible for.
You know what Experian got, nothing, a slap on the wrist and now their stock is in the same place it was before.
I am doing my Masters in Information Assurance and Cybersecurity right now, and the whole mindset of all of my classes, "you're going to be pwned eventually, so figure out how to move the risk to some other poor sucker to take the blame when it happens", pisses me off so much. The entire industry basically uses this as an excuse to avoid responsibility and just make sure they aren't the ones held responsible when the manure hits the rotary oscillator, and that bugs me to no end.
At the end of the day there are real people who are getting screwed and hurt by this, while the execs and security "consultants" spend their time trying to figure out how to make sure that when sh* hits the fan they can't be sued, and d** the customer and their well-being we've got to figure out how to make sure we keep the law away.
The concept of Moral Hazard is important here, which basically states that there is a lack of incentive to guard against risk where one is protected from its consequences.
In a lot of cases the execs of these companies will continue to collect a large bonus, despite the fact that they utterly failed their customers.
You have a lot of replies but as far as I can see no one made this point, so let me add yet another reply.
Victim blaming is a framing that makes it sound like it's about moral and ethics. But it's about practicality. There is a causal chain leading to a bad outcome and we simply break the weakest link. Sometimes it's easier to lock up the treasure and sometimes it's easier to lock up all the thieves.
Consider the case of computer security. Locking up all the thieves is super duper hard, because they are located in places like Russia and China that won't cooperate with law enforcement.
> I'm a bit tired of the victim blaming with security.
The victims of these breaches are the end users. Companies are the beneficiaries of not having to pay for and especially not having to inconvenience themselves with much more secure systems.
That said, it's true you can't ask for 100% security. You can instead set standards. You can especially set standards of security for any enterprise that the public depends on. Since there are no coherent standards now, liability alone isn't useful. And the standards should involve actual topology: what kinds of information are allowed in and out at all.
Many of the most serious recent incidents don't involve theft of end user data or impacting end users in any real way, unless you consider the "end users" of gas stations and ferry boats to be the victims of these attacks. That's not incorrect in a way, but also seems like a pointlessly wide net.
The thing I'm a bit tired of is IT people in these threads taking every incident that comes along as an opportunity to elevate their pet cause. These more serious incidents have more in common with mafia extortion rackets than computer security.
The debate is pretty much divided between people who say "improved security is the solution" and people who say "treating it as crime/terrorism/the-mafia is the solution".
I'm in the improve the security camp. I think security can be improved if we impose good standards (meaning enforce inconvenient things like no backdoor updating apps, no critical infrastructure connected to the web).
The reason "treating this like terrorism" is useless is that there's always another hacker. It's hard but not that hard and anything doable today will be automatable tomorrow.
Your viewpoint is extreme. Saying the increased effort by law enforcement is “useless” is unfounded. Sure, it won’t solve the problem by itself, but it’s entirely possible it will help.
What is the percentage of hacks that are ultimately traced to an individual and result in their imprisonment? 0.1%? Are you ever going to get that even over 10%?
If the hack comes from a jurisdiction without extradition, how will you solve that? How does a country know its citizens are not being harassed with trumped-up charges? What if the definition of hacking differs between two countries?
It is not just Russia and China; Denmark and the UK have refused to extradite to the US because of concerns over inhumane treatment.
> Saying the increased effort by law enforcement is “useless” is unfounded.
Note you are misquoting me. My comment was: "'treating this like terrorism' is useless".
The right sort of the law enforcement action might be useful. But "treating this like terrorism" is just escalating penalties, threats and so-forth, which doesn't make sense for a very conventional property crime.
Several major countries have a history of not prosecuting computer crimes as long as those crimes don't affect local targets. In fact, the crime is the "day job" and training of people who participate in the whole "cyber war" thing during international shenanigans. (See South Korea, Georgia, Estonia, and Ukraine.)
There are many standards out there such as SOC-2. But that’s not particularly meaningful against dedicated professional hackers. It’s a totally asymmetric game.
These standards are more or less useless. It seems that people who swear by them expect the hackers to attack some documents or a powerpoint presentation.
These standards (and PCI-DSS, and ISO, and NIST (and this one is by far the best)) have plenty of blah blah that never gets implemented. They rely on some magical risk assessment exercices with a nice risk grid that gives you answers.
The reality is that the top 5-10-whatever risks are very simple to assess and very difficult to address. Unfortunately, such concerns do not exist for the writers of standards.
I have been doing information security for 25 years in huge companies. The more relevant the risk is, the more painful it is to implement.
Even the ones such as "awareness" that theoretically should be useful assume that people care or think. I get emails from people who went through 10 awareness sessions and who wonder why someone wants to enlarge their penis. And yes, the awareness sessions were like in the ads: short, to the point, entertaining, relevant, magical.
So now imagine raising a risk that endangers the key legacy system that cannot be isolated.
You have to enforce standards. Good security is expensive. If companies in competition don't have to pay for good security those that do have it will have higher costs and have trouble competing.
Yep, wasn't disagreeing, just wanted to highlight the importance of enforcement / regulations. This is much like workplace safety. You have to force it or the unsafe businesses get too much of an advantage.
> "running power plants is expensive, if companies in competition don't have to run their own power plants then the ones that do will have higher costs and will have trouble competing"
> running power plants is expensive, if companies in competition don't have to run their own power plants then the ones that do will have higher costs and will have trouble competing
Texas? Winterization that wasn’t done and wasn’t required, so the generators that skipped it had a competitive edge? The similarities write themselves.
> It's physically impossible to build a house that can't be broken in to
Obviously you're correct because impossible is a tall order, but it's possible to get close assuming the aspiring intruders don't have dynamite or artillery[1].
I have seen personally, heard first hand accounts, and read many a post-mortem for situations where the primary blame really should be on the "victim". There's another word for this:
Negligence.
Of course there are always 0days. There are always sophisticated attacks. There is always human error.
Then there are people in leadership positions being given accurate information about basic security problems and possible outcomes over long periods of time flatly refusing to make security a priority or spend any time fixing dangerous situations.
Many of these ransom situations aren't the result of targeted attacks, but "hey we have this exploit and ransom kit, let's scan the entire Internet and see if we get anything".
Or the ever popular (ok maybe not so much any more) unsecured elasticsearch server on the public internet. I'm sorry but if you put your production data on a public IP on a standard port with zero security, it is positively your fault when your data gets stolen. (not much to do with ransom, but an example).
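This kind of negligence is also cheap to self-audit. A hedged sketch of an exposure check against one's own hosts (the port list is illustrative, not exhaustive, and a closed port today says nothing about tomorrow):

```python
# Rough self-audit sketch: try to connect to ports that should never be
# reachable on a public interface (9200 = Elasticsearch, 3306 = MySQL,
# 3389 = RDP). Only a starting point; a real audit would scan all
# ports and, more importantly, all hosts you forgot you had.

import socket

RISKY_PORTS = {9200: "elasticsearch", 3306: "mysql", 3389: "rdp"}

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def exposed_services(host: str) -> list[str]:
    return [name for port, name in RISKY_PORTS.items()
            if port_open(host, port)]
```

Attackers run exactly this kind of scan across the whole IPv4 space; running it against your own address range first is the cheapest security work there is.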
There is a difference between being the victim of a sophisticated attack and being the victim of your own negligence (and a lot of grey area in between).
And a lot of cases where around the table you have those who say "here is the risk that must be addressed" and then the others who say "if we do that we break production".
This is why I wrote about the imaginary plane that flies over oceans carrying people nobody cares about. In such a case the priorities are not obvious at all.
If this is a real plane then there are consequences for the company and people (jail). Suddenly it makes sense to fix things.
The software industry is in the former case - new code being diarrhea-ed down without any consequences if it is hacked.
Crime needs to be dealt with but a lot of companies I worked with in the past 30 years are simply negligent. If you do not blame the victim, what is the incentive for spending money on even basic security? Not morality surely, so...
Crime is a social problem but the US government is especially bad at fixing crime that originates from other countries. See war on drugs, terrorism, call center scams etc. On the other hand, a good team of security experts is very good at preventing computer systems from getting hacked into.
The problem with drug cartels does not originate from other countries. It is domestic demand, paying in dollars. Those dollars overwhelm local economies and police forces abroad.
On the other hand, we have known since the 1970s roughly how to make computer systems significantly more secure and resilient. However, we, as a society, have decided that it's costly and difficult and that we would prefer not to do anything about it. It's "move fast and break things" at the social level.
I really, really hate reasoning by metaphor, but you do lock your doors, right?
> It's physically impossible to build a house that can't be broken in to, and even harder for computer systems.
Which is exactly why you don’t store your savings in your sock drawer, you put it in the bank.
Companies not taking appropriate backups is akin to keeping all of your money in a dish by the front door. Sure your house may never get broken into but nobody is going to have sympathy for you if it does.
Which only works because they’re paying the ransom. The second the government introduces criminal penalties against the executives and boards for paying ransom, it will stop.
> Which only works because they’re paying the ransom. The second the government introduces criminal penalties against the executives and boards for paying ransom, it will stop.
Making it a criminal offence to pay a ransom would eventually stop criminals ransoming the data they take back to the company they took it from, but it wouldn't stop attacks and data breaches if there's some other way to profit. For example, attackers could sell the databases they steal. Or they could ransom individuals' data directly to the individuals. Or they short the stock of the company and then release the stolen database publicly to make the share price fall.
It's important not to over-simplify the problem. There is no single, simple solution as long as there are many ways to profit from data thefts.
>Or they could ransom individuals' data directly to the individuals. Or they short the stock of the company and then release the stolen database publicly to make the share price fall.
You're listing a bunch of things that are almost assuredly already happening. If a company was dumb enough to keep social security numbers unencrypted in a database or spreadsheet, that data is going onto the dark web whether they paid a ransom or not.
It's a LOT harder to find a buyer of proprietary data that will likely put the buyer in prison for a long time, than it is to get a ransom from one individual trying to keep the whole thing quiet. Once you advertise "I have Apple's top secret next gen laptop details!!" - when someone releases a strikingly similar laptop, or "leaks" the details on an Apple focused fansite, the feds will be all over them.
It's surprising and disappointing to see this point of view here, particularly with so little evident dissent.
Of course you can "break into" a typical computer system by gaining physical access to it, for example by breaking into the house that it's in and unscrewing the computer's case; but that's only metaphorically connected to what's going on here, which is that criminals are sending data over the internet to the computer systems in question. The software the owners previously installed on those systems then responds to that data by giving the criminals complete control over the system, as long as they care to maintain it, or unless the system is destroyed. This is dumb.
Writing software that does not behave in this fashion is not only physically possible; it's actually the majority of software. Even in typical software, there is only one deployed exploitable security hole per thousand lines of code or so, and, until only about 25 years ago, it was reliably possible to recover from such an invasion by reinstalling the OS.† The best software, like seL4 or qmail, has orders of magnitude less, though we can quibble about whether the actual number is 0 bugs or 1 bug.‡
The problem is that our systems are architected so that even one exploitable bug anywhere in hundreds of millions of lines of code enables total and irreversible subversion of the system; our system complexity is growing much faster than existing code is getting audited and fixed; much of the code is not even open to auditing; and the people with the power to fix it have no incentive to do so, instead spreading pernicious misinformation claiming that usability and security are unavoidably in conflict (a concept obviously absurd to anyone who has had to use an OS without memory protection) and bulletproof security is impossible anyway. So, at any given time, there are somewhere between thousands and hundreds of thousands of exploitable vulnerabilities in our systems, any one of which is sufficient to enable the implantation of a persistent backdoor that cannot be reliably detected or removed.
The solution to this has been known since the 01970s. At the systems design level, minimize the complexity of the trusted computing base (the hardware and software whose integrity every program in the system relies on), audit it rigorously, and freeze it. At the hardware level, provide an easy, incorruptible way to restore a known safe state. At the social level, ensure that the people who rely on the integrity of the computer system have the authority to audit it and fix any problems they find, and the technical competence either to do this themselves or to delegate these tasks to people who are competent to do it, rather than to charlatans. At the user-interface design level, ensure that users can understand the information they need to assess the risks they are taking in relying on any given piece of information, and decouple the system to eliminate their incentives to take risks, for example with memory protection and petnames. We know a lot more about how to achieve these things than we did 45 years ago, and in some ways we have enormously more resources. We have seL4, Bitcoin, ssh, Monte, elliptic-curve cryptography, BLAKE3, NaCl, LUKS, decades of SOUPS proceedings, RISC-V, yosys, and 16-MIPS microcontrollers§ that cost 3¢.
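"An easy, incorruptible way to restore a known safe state" implies, at minimum, that the restore path verifies what it loads. A minimal sketch of the verification half (file names and the pinning scheme are invented for illustration; real systems use signatures and hardware roots of trust, not a bare hash compare):

```python
# Minimal integrity-check sketch for a recovery image: the restore
# mechanism refuses anything whose digest doesn't match a pinned value
# held somewhere the running (possibly compromised) system cannot
# write -- e.g. mask ROM or a write-protected medium.

import hashlib
from pathlib import Path

def digest(path: Path) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def safe_to_restore(image: Path, pinned_digest: str) -> bool:
    """Only restore from an image matching the pinned known-good digest."""
    return digest(image) == pinned_digest
```

The hard part is not the hash but the pinning: if the attacker can rewrite the pinned value along with the image, the check verifies nothing, which is why the old write-protect tab on a boot floppy was such an underrated security primitive.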
But that future is not merely "not widely distributed"—it has become inaccessible except in isolated cases like Trezor, as economic incentives have driven our hardware and software down a path of boundlessly ballooning complexity and diminishing alternatives, while proprietary software licensing eliminates any possibility of assessing and controlling the risks. Meanwhile, the shallow pop culture of computing reduced users from creators to mere customers, and then "eyeballs", while conflating hacking—the only way out of this mess—with computer invasion.
So, I fully expect that if I live long enough to need a pacemaker, I won't be allowed to secure it against ransomware, which will be rampant at that point.
It doesn't have to be this way. This can all be made better.
Ready? Begin.
______
† In fact, shortly before that, on most PCs you could recover from any kind of system corruption just by taking the floppy disk out, resetting the system, and inserting a new, uncorrupted floppy disk. Better hope that one's not Stoned too...
‡ You might argue that the possibility that there's an undetected security bug in seL4 means that complete computer security, even against carefully crafted data sent over the internet rather than some dude running off with your cellphone while it's unlocked, is still impossible. But in fact I think there's a very real difference in kind between the possibility that I might currently have presymptomatic covid, and the certainty that I have a small amount of covid. Systems like qmail are analogous to the first case, because they might be secure or might contain an undiagnosed flaw; systems like Linux and Chrome are analogous to the second case, because they are certain to contain a small but fatal fraction of flaws, which are inexorably multiplying.
§ Unfortunately the whole line of Padauk microcontrollers is out of stock this week at LCSC.
> The problem is that our systems are architected so that even one exploitable bug anywhere in hundreds of millions of lines of code enables total and irreversible subversion of the system
A modern jet airliner uses about 1,500,000 bolts and screws. Imagine if they were designed so that a failure of any one of them could cause a catastrophic failure of the entire aircraft. Then imagine if people were defending it by saying “This is a metallurgy problem. There will always be the occasional improperly cast bolt or over-tightened screw. To expect that to never happen is victim blaming”.
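The arithmetic behind the analogy is stark. A back-of-envelope sketch (the per-bolt failure probability and the 4-bolt grouping are invented purely for illustration, not real aerospace data):

```python
# Why "any one bolt kills the plane" is an untenable design, in numbers.
# p is an invented per-bolt, per-flight failure probability.

import math

n = 1_500_000   # bolts per airliner (figure from the comment above)
p = 1e-9        # invented per-bolt failure probability, for illustration

def p_any(q: float, k: int) -> float:
    """P(at least one of k independent events of probability q),
    computed via log1p/expm1 to avoid catastrophic cancellation."""
    return -math.expm1(k * math.log1p(-q))

# Fragile design: any single bolt failure is catastrophic.
p_fragile = p_any(p, n)

# Redundant design: catastrophe needs >= 2 failures within one 4-bolt
# group (a crude independence assumption, again for illustration only).
group, groups = 4, n // 4
p_two_plus = sum(math.comb(group, k) * p**k * (1 - p)**(group - k)
                 for k in range(2, group + 1))
p_redundant = p_any(p_two_plus, groups)
```

Under these toy numbers, the single-point-of-failure design is many orders of magnitude more likely to fail than the redundant one, which is the whole argument for isolation and redundancy in software systems too.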
I am definitely going to use this wonderful analogy. Thank you.
(Yes, fuzzy metaphorical reasoning is what misled us in the first place, but the problem isn't that the "this is victim blaming, you can always break into a house" people are using fuzzy metaphorical reasoning; it's that, like physics crackpots trying to build the Grand Unified Theory out of styrofoam models, they are only using fuzzy metaphorical reasoning.)
It's not any different than the war on drugs. The gov't can't really think in a different manner than just black and white. The world is made up of shades of gray, and it's just too difficult to create legislation to handle shades of gray.
We have to think of this similar to how any other human or company behavior is monitored. Hold companies responsible and have a market sell ransomware insurance.
(One exception to my solution is poor government and public institutions who run awful software. Not sure what we can do)
If I have a habit of burning down my house by being sloppy with safety, my rates will go up. There should be something similar
Let us take the case of Equifax mismanaging their servers, running obsolete Java packages and causing identity theft for millions of Americans.
Credit scores are controlled by 3 companies. A credit score determines your mortgage rate, so it literally controls whether a US resident can afford to buy a house or get a job (in some states). Don’t the companies need to take some responsibility?
Similarly, dozens of companies leave Elasticsearch and MySQL installs open to the internet. I mean... how sloppy can one get?
Fine, be sloppy, just pay $$$$$ to your insurance company. That $$ amount will indirectly decide whether we go to war with Russia or pay software engineers.
> That $$ amount will indirectly decide whether we go to war with Russia or pay software engineers.
Not related to the subject, but wow, I wonder how alarming it is that the "war with Russia" meme is having a strong comeback, being casually brought up in online discussions about software.
I was being sarcastic in this instance. Meant to say there is more to do within software before blaming “user”/“hacker”. Russia reference is only based on recent news/claims
Things are actually getting better in some ways. Modern OSs with automatic updates are more secure than OSs have ever been. The days where clicking a link on an email or plugging in a USB could infect your computer are almost gone outside of rare zerodays which get patched for everyone pretty quick.
Things are getting even better with hypervisors, SELinux and secure languages rolling in. Significant portions of Android are being rewritten in Rust, and in the future Linux will be too, which wipes out entire classes of the worst bugs we are faced with.
The problem is that the attackers are also getting more sophisticated and the targets are becoming more valuable with more and more getting put online.
From what I see on that page, it costs $500,000 to exploit Chrome, and your attack only lasts for a few days before Google pushes out a fix to all Chrome users. This is so much better than the IE era, where a teenager with some free time and skill could cook something up and there would still be exploitable machines years later for the same bug.
> I'm surprised at how dismissive the comments are.
I've gotta ask: has the US's stance on terrorism been effective? Or did they merely use it as an excuse to militarize the police and erode human rights? Because I want the government to take effective action around ransomware, but "similar priority to terrorism" just doesn't fill me with hope.
The headline might be vague (though I'd personally love to see drone strikes on the scumbags that scam elderly people out of their meager savings) but the article itself talks in terms of priority and effort for investigations into malware attacks. E.g., they won't just shrug and do nothing because they care about other crimes more, like with my stolen GPS case.
I'm astonished you would support drone strikes on civilians. Scammers are working a bad job out of necessity. They are not villains who deserve to be extrajudicially murdered.
The professional nature of these scams now (a call from 'Microsoft', steal a little, then send properly dressed and documented people to the house to 'investigate the fraud' and steal more) is not desperate people scamming. Sure, they don't have to be murdered, but removing them from society is not a bad plan. These are professional organizations, not a handful of poor people anymore.
The HN crowd can sometimes have an issue with pragmatism. Sure, I'd love to live in a world where everyone follows best security practices 100% of the time, but this ain't it. Arguing how your imaginary perfect world should be gets us nowhere.
How much of these hacks would be prevented by adoption of simple preventions like Yubikeys for login, backing up data and images regularly, and encrypting data by default?
Typically, these attacks start by compromising a regular workstation of some office drone via an Office macro. Then they start escalating privileges by Kerberoasting, exploiting RCEs (think BlueKeep, EternalBlue, Tomcat servers with the default password, etc.) and other quick wins. When they get a cleartext password, password hash or Kerberos ticket of a privileged account, there is nothing to stop them. Windows doesn't care if you have MFA at the workstations or at your VPN interface. With the hash or a ticket you can perform network logons to any system where you have local admin permissions. Otherwise, this would destroy the entire single sign-on feature that Windows and its users love: log on once, access everything. Kerberos is deeply built into Active Directory.
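To make the pass-the-hash point concrete, here's a toy challenge-response sketch (an assumed model for illustration only, nothing like the real NTLM/Kerberos wire protocols): once authentication responses are derived from the stored hash rather than the password, the hash itself *is* the credential, so MFA at the login screen doesn't help once the hash is dumped.

```python
# Toy model only: illustrates why a stolen hash is as good as the password
# in hash-based challenge-response schemes. Not real NTLM or Kerberos.
import hashlib
import hmac
import os

def stored_verifier(password: str) -> bytes:
    # The server stores a hash of the password, never the password itself.
    return hashlib.sha256(password.encode()).digest()

def client_response(credential: bytes, challenge: bytes) -> bytes:
    # The response is computed from the *hash*, not the cleartext password.
    return hmac.new(credential, challenge, hashlib.sha256).digest()

def server_accepts(verifier: bytes, challenge: bytes, response: bytes) -> bool:
    expected = client_response(verifier, challenge)
    return hmac.compare_digest(expected, response)

verifier = stored_verifier("hunter2")
challenge = os.urandom(16)

# Legitimate client: derives the hash from the password it knows.
assert server_accepts(verifier, challenge,
                      client_response(stored_verifier("hunter2"), challenge))

# Attacker who dumped the hash never needs the password at all.
stolen_hash = verifier
assert server_accepts(verifier, challenge,
                      client_response(stolen_hash, challenge))
```

The same structural point applies to a stolen Kerberos ticket: possession of the derived credential bypasses whatever factors guarded the original logon.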
Backups are fine, sure, but typically you want to figure out what exactly the attackers did and at what point they started doing it, so you know how far back you have to go with your backups, because you want a backup without back doors. So you need to hire special consultants and they take at least a few days, maybe a few weeks to figure this out.
If you don't have offline backups or took some other special precautions, the attackers might have deleted your backups.
After all that, you still need to restore from the backups, a process whose length probably varies widely depending on the quality of the admin team and the size of the organisation.
Sometimes simple preventative measures aren't as simple as they might sound. How would you go about integrating yubikeys for login into multi-decade-old SCADA hardware systems?
I'm a security specialist and I honestly wouldn't know where to begin.
Is that the case with all of these hacks? How many would be prevented, is what I'm wondering? My mother's hospital was hacked this week and now they can't even clock in but they're not running SCADA
SCADA's a good example of systems that are difficult to secure for complex reasons. There are many others.
You ask a very wise question. Unfortunately, I think it's unknowable. The best we know is that the answer is more than none and less than all. The closer you get to "all", the more the prevention measures cost to implement. For instance, managing a mature backup and imaging operation at scale may be conceptually simple, but it is both complex in practice and far from free.
Hospitals in particular are the scene of some interesting conflicts between security and usability. There are a lot of stories about health staff doing things like jamming open medication dispensing machines so that they could get on with the job instead of dealing with security measures they experienced largely as obstacles.
Can you imagine throwing yubikeys into a scenario like that, where people already have an adversarial relationship with IT and security measures? What do you think is going to happen when someone forgets their key and can't send an x-ray to the remote radiology center? I have my guesses.
An adversarial relationship with security is very often created by very annoying security requirements that do very little to improve security, like requiring users to change all passwords every 2 or 3 months and requiring a new password to have characters from every class (see also [1]). In most cases all you need is a minimum length requirement and some guidance on how to choose a good password.
If users have to enter a 16-character password every time a hardware key (like a Yubikey) is connected to a computer to unlock it, they will leave the key always inserted, or the password will end up saved in a text file. If the hardware key just works once inserted (or requires a 4-digit PIN), most users will comply. It is already a second factor in addition to some other password; it doesn't necessarily need a strong password alongside it.
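A back-of-the-envelope sketch of why a minimum length requirement does most of the work (assuming, hypothetically, that passwords are chosen uniformly at random from the allowed alphabet):

```python
# Rough guessing-entropy estimate: length * log2(alphabet size).
# Assumes uniformly random choice, which real users don't achieve,
# but the relative comparison still holds.
import math

def entropy_bits(length: int, alphabet_size: int) -> float:
    return length * math.log2(alphabet_size)

# A 16-char lowercase-only passphrase (26 symbols) beats an 8-char
# "complex" password drawn from all 94 printable ASCII characters.
long_simple = entropy_bits(16, 26)    # ~75 bits
short_complex = entropy_bits(8, 94)   # ~52 bits
assert long_simple > short_complex
```

This is why length requirements tend to buy more than composition rules: each extra character multiplies the search space, while forcing symbol classes only nudges the per-character base.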
Leaving a key inserted is still a vast improvement over the current situation. Yubikeys have to be pressed to generate a new code each time (as they expire after each use) and the situation you avoid is remote hacking especially via social engineering.
Not all u2f keys require being touched. That's an optional hardware implementation detail, rather than a mandatory trait across all u2f devices. Yubikey sells keys that are commonly used by plugging them in, leaving them, and never again touching them. This effectively turns the computer itself into the second factor.
Depending on the precise scenario, that may or may not represent an improvement. If the key is used as a second factor to authenticate to the network, then an infected Excel document will trivially ignore the involvement of a Yubikey as it uses the logged-in user's Kerberos ticket to spread.
You're completely right, though. Even this would definitely cut down on phishing attacks that send users to fake websites pretending to be internal systems.
We first start by moving all the non-legacy stuff to MFA. There are so many easy targets in security that we can look into first before declaring it impossible because of a handful of legacy apps.
You're absolutely right. There's often no shortage of low hanging fruit.
I'm not suggesting we should declare anything impossible. Far from it! I'm merely trying to suggest that we should appreciate that not all things are as easily fixed as they may seem at first blush.
As all of us in software know, complexity can lurk in unexpected places.
>> Dismissing this because companies should do better security is like dismissing doctors because people should get more exercise. That's silly. We need preventative care and treatment and everything in between.
Not exactly.
Executives are choosing to hire el-cheapo offshore middlemen to manage security and software development in order to save money (the latter is more important: more money in their own pockets), and we are all on the hook for this behavior.
Criminals and hackers are like viruses - they are always there.
But we need to maintain the health of the whole body (the country and its entities) to make sure we're resilient.
Execs and politicians selling our security and freedoms for profits and bribes need to be dealt with appropriately.
I think this is an unfair false choice. I blame security "experts" who store private keys on public facing web servers or allow for SQL injection in the same way as I blame doctors who over-prescribed Oxycontin for way longer than was safe to avoid dependence. Sure, there will always be procedural mistakes and zero days, but gross dereliction of even basic level expertise deserves scorn.
My anecdata is that the majority of times someone has gained access to a user's system somewhere, it has not been anything technical but purely social engineering. Whether that was a secretary at a medium-sized company or someone in medical records at a hospital. The latter case was more extreme: the user received an email from the hospital's lawyer, replied saying she wasn't sure, and the "lawyer" emailed back to go ahead and access the website. The "lawyer" was the hacker, who had already broken into and taken over the real lawyer's email. Luckily, it was a phishing attack and not ransomware.
IT security is hard, very hard, and a lot of the commercial products used by most companies are terribly insecure. The fact is our IT infrastructure has grown much faster than the global pool of talent who know how to effectively secure it.
So faced with a deficit of expertise, and a constantly changing IT security landscape, it makes perfect sense for governments to support and co-ordinate cyber security efforts. We need to get maximum benefit from the resources we do have, and that means pooled effort, clear best practices, strong security standards, etc.
Personally I see an additional significant benefit coming out of all of this. If governments and politicians skill up in understanding the seriousness of cyber security at a national level, hopefully they will come to understand the deep folly of insisting on backdoors and secret keys to everyone's systems for security and law enforcement agencies. Politicians keep talking some really dumb crap on this topic, but if we can get them to take the security of businesses and citizens seriously, I'm hopeful this will change.
The problem is our govt money is subsidizing private company security policies instead of more directly helping people. This money should go to healthcare, infrastructure, or even be redistributed before it's used here.
Providing security and protecting property is one of the core functions of the state. Police protect businesses. The US Navy protects shipping lanes. I do not understand why this particular case could be so objectionable.
In my opinion, it's an absurd argument that it's the responsibility of the company to defend against 100% of theoretical security vulnerabilities; "reasonableness" has to come in somewhere.
There will always be a vulnerability, unless some truly secure-by-design technology exists .. airgap, not vulnerable to social engineering .. ?
Most of these hacks were done using leaked tools from the very alphabet agencies' toolkits built to fight terrorism in the first place. Horrible idea. We need companies to get their act together. Personal responsibility.
> I'm surprised at how dismissive the comments are. We need many angles of defense against these criminals. Dismissing this because companies should do better security is like dismissing doctors because people should get more exercise.
No it is like increasing price of cigarettes and giving those money to the health system.
>That's silly. We need preventative care and treatment and everything in between.
No, we need secure by default. These things are already criminalised; that does not seem to stop anybody.
Why is a child on a default Windows 10 account able to install a program by clicking a link? Why is this program able to install itself as a service? This is not security.
> do better security is like dismissing doctors because people should get more exercise
Not dismissing doctors but making fat people (or drug users etc) pay more is not that silly and happens.
There need to be standards (ISO, PCI) for all companies. And if you get hacked, you get fined if you did not adhere to the standards.
And yes, go after the criminals as well, but it's a bit too easy to just ignore ancient Windows installs, users with the password 1234 who have admin access, etc. All these issues are covered by both ISO and PCI compliance; we just need to have all companies comply, not just banks etc.
> the way that the pipeline-company ransomware hackers beat a hasty retreat was noticeably unusual, and already seemed to telegraph that the state was getting involved more...actively.
Uh, there was no retreat - the company paid the ransom the day after the hack.
I think they meant the folks providing the ransomware as-a-service, who basically said "yeah we provide criminal services but we don't endorse their use for crimes that big, so we'll be more careful who we sell to."
IMO the first step to fixing is to add liability. If a breach happens through a piece of software, then the vendor is liable. Same way cars get recalls. (sometimes)
Really? I don’t think this is at all similar to a car safety recall. That’s more like trying to issue a recall for a car because people can smash its windows and break in.
yeah of course it's an analogy. But by adding liability we'll get more recalls (patches) done. Vendors will stop playing FUD and will focus on the real cost of their security flaws.
And yes some will still not do patches, just like some car vendors are considered less trustworthy.
But at least the risk of suit will loom over their heads.
But the parent's point is that's still putting the liability on the vendor rather than the actual criminal. Perhaps it's more like holding the manufacturer liable for theft if a car is sold without an immobilizer or an alarm. But even that kind of fails, because it's pretty simple to mandate a handful of security additions to cars, whereas software is orders of magnitude more varied and complex. It would be hard for any vendor, let alone small companies, to prove they'd followed every conceivable best practice. It might even be impossible, as some practices likely conflict. And if you try to codify exactly what security practices should be followed, what do you do when those practices become obsolete?
Yeah, I think it's just a fault in the analogy and in part demonstrating why reason from analogy is faulty.
My point is this: if vendors were liable (at least in part) for security faults in their products, then they would be more diligent about closing those gaps.
Yeah, and in principle I agree. It's just tough to imagine how exactly you'd regulate that in practice without doing a lot of unintended harm along the way, especially to (potential) small vendors.
Liable in what way? Wouldn't that just kill OSS? Or do you not count programmers who upload swiss-cheese scripts to Github as vendors? What about Linux, openSSH, etc?
OSS licenses include a very broad waiver, after all it is a gift provided as-is.
Software that runs critical infrastructure (or could cause injury or death if it malfunctioned) should be required to use formal methods, and that would certainly require that everything needed to run it (from the OS to shared libraries and even the compilers) also used such formal methods.
A lot of commercial software has similar waivers, too. See Windows 10.
"Microsoft and the device manufacturer and installer exclude all implied warranties and conditions, including those of merchantability, fitness for a particular purpose, and non-infringement."
You'd have to outlaw that or breed a more discerning consumer. One way to do that would be to blame the company using it, which would make them take more care in what they choose to use.
It just gets broad and vague after a point. Can the software that schedules trains use Linux or MySQL? People could die if it puts two trains on the same track. Note that GP never mentioned safety either. Just being hacked.
But yes I'd hope that anything bespoke should be covered under a contractual agreement with SLAs and penalties.
The software that schedules trains can do what it likes, because there are several independent safety layers below it: the signaling system itself, the software and hardware locks within the signaling system, and the formal methods used to prove their integrity.
Any signaling failure will fail safe (all trains stop).
Any trusted actor (controller, train driver, sometimes passengers etc) can also stop part or all of the system. (On many European railways, if the driver sees a problem, like a car crashed into the railway, they press a red button and all trains in that region are halted.)
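The fail-safe principle above can be sketched as a toy interlocking rule (purely hypothetical, nothing like a real signaling system): only an affirmative "track clear" proof permits movement, so a failed or missing sensor reading stops trains just as an occupied track would.

```python
# Toy fail-safe sketch: any fault or unknown state resolves to STOP.
from enum import Enum
from typing import Optional

class Aspect(Enum):
    STOP = "red"
    PROCEED = "green"

def signal_aspect(track_clear: Optional[bool]) -> Aspect:
    # None models a failed or missing sensor reading. It is treated the
    # same as "occupied": only a positive clear-proof allows PROCEED.
    return Aspect.PROCEED if track_clear is True else Aspect.STOP

assert signal_aspect(True) is Aspect.PROCEED
assert signal_aspect(False) is Aspect.STOP
assert signal_aspect(None) is Aspect.STOP  # sensor failure fails safe
```

The design choice is that the dangerous outcome requires positive evidence, while the safe outcome is the default for every other state, including ones the designer never anticipated.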
> We need preventative care and treatment and everything in between.
Not to mention the fact that the cybercriminals who do these attacks also get involved with state-sponsored offensives. The ransomware stuff might just be "training wheels" or resume bullet-points for something far worse in the future.
If we're going to get serious about stopping the state-sponsored stuff and even bother to have the "US Cyber Command", it makes sense to go after the relatively petty criminal elements as well. If they can't make a dent with these, why should we think they can go up against the FSB?
Corporations can only ever view cybersecurity as yet another compliance exercise (and all the incurious checkbox tickers that entails). The smart ones will play "cops and robbers" (red-team/blue-team games) but they can't offensively go after cyber criminals. Unfortunately, that's what needs to be done to get ahead of this stuff.
Usually people here want government to stay out of the way of business, or especially not to compete with business. I agree with you, but it isn't necessarily entirely a pure-business perspective.
the zeitgeist the past 10 to 20 years heavily biases people to think government cant work and if it can, its too expensive, and if it does, it infringes someones right to profit at others expense.
Ship owners invested in arming their ships to the point where the pirates would hopefully pick a softer target, which is exactly what they did. Incrementally over the 16th-18th centuries the profitability of piracy was greatly reduced: the goods had fairly fixed relative value while the risk kept going up, and piracy more or less went away on its own in the Western hemisphere over the course of the 18th century. Crime rarely pays at scale when every instance carries a high risk of a firefight. A few may make a good living in such an environment, but it caps the maximum industry size at a very low level.
Piracy persisted in the Mediterranean where it was more or less a state sponsored activity. They mostly avoided harassing the commerce of major powers (Britain, France, etc). Which worked well enough until the 2nd tier powers got pissed off enough to stomp them a few times (with the blessing of the first tier powers, think of it like a reverse Falklands). They still didn't tone it down sufficiently and they wound up speaking French for that mistake.
If anyone has any good resources on the history of Indian ocean or east Asian piracy I'd be interested in reading them.
As an aside, old-school high-seas piracy is a surprisingly good parallel to the variations of criminals in the current cyber-crime environment. You've got state-sponsored theft of money and goods (privateers). You've got under-the-table cyber criminals who would be prosecuted by their home jurisdiction if found (traditional Western pirates, the kind you typically see portrayed in pop culture). And you've got professional cyber-criminals who are locally approved as long as they pay their dues (North African pirates). The former groups mostly steal things of value they can use or fence. The latter mostly takes stuff hostage for ransom.
I think the statement I was hinting at was that essentially all these companies are on their own (vs pirates and hackers) until the problem is too big and dangerous and then the state steps in.
Not exactly: their behaviour is more akin to a storage facility with faulty security cameras and sleepy guards (and cheap locks).
They have a duty towards their customers (contracts to be honored) and this requires security.
Sure, but terrorism has always been the blanket "fuck it, max charge it, we can't be bothered to ACTUALLY come up with legislation" so maybe don't fuck it and work out a proper set of laws.
Any time you let a group of criminals act with impunity, it's going to run out of control until we get... the current situation.
It's getting close to having China and Russia either start cooperating with us to flush these guys out, or we start having "fleet exercises" in their seas again. I think it would be prudent of said nation states to wash their hands of these folks.
Would it be reasonable to demand every company hire a team of armed guards? No? So why is it reasonable to demand they each hire a cybersecurity team?
It’s reasonable to tell companies to lock the doors. It’s reasonable to tell them to follow accepted best practices in tech too, but not that they be experts prepared for everything.
What about the other side of this? Instead of seeking backdoors and using them to spy on Americans, the NSA should be stepping up their game and securing vital infrastructure and domestic businesses against these attacks.
I'd rather not see taxpayers have to foot the bill for the profit of megacorps neglecting proper cybersecurity while sitting on mountains of tax-evaded offshore cash, thank you. The industry should be magnitudes larger than it is currently, and we shouldn't encourage corporate recklessness by socializing the costs.
If other states sent proper armies over to attack critical infrastructure, the US government would surely foot the bill to aid in security. Why should cyber armies be treated more leniently?
The incentives are all misaligned and the solutions aren't obvious. How is the USG going to secure some random admin access password? Are they going to update the code in the repo?
I agree with hack-back. I agree with a number of proposed solutions, but at the very end of the day the problem with cybersecurity is that most orgs don't have the fiscal allocation they would need to have any hope of stopping foreign states.
Rather than compare it to armies, I think we should compare it to spies. If this is truly at the army level we could send a couple dozen missiles and the attackers would get the message. But there are reasons we don't do that though. First, we're not always sure who did what. Second, it's a political quagmire. Armies don't come to your house and help secure it from air strikes. Armies understand attack asymmetry and they hit back.
But when it comes to dealing with foreign spies there is a different playbook. The government helps organizations that are critical to national security secure their entry points and resources. They help, but they don't do everything.
This only works if the parties involved are interested in working with the government. Long after Nortel was first told of the Chinese hacking / stealing of their IP they were still woefully insecure. They went from being a third of the Canadian stock index to bankruptcy in a couple of years.
I don't actually think cybersecurity is possible. I've tried very hard to get governments to change, and there is some progress on the most flagrant violations, but the space is growing too fast and the domain is too maneuverable. I don't think it is possible. All we can hope for is some combination of more defence and realignment of incentives of the actors involved, limiting the eventual damage.
I think if you had good attribution it's more like armies. We have been focused on locking our doors, on building better walls, etc. But there is a non-defensive side.
In meatspace we expect the government to use kinetic force to stop people from attacking us. Like if I leave my door unlocked and some person comes in to start stealing my stuff, the cops really will respond and come stop that person (I have had a home breakin they responded quickly to). They didn't blame me for having bad locks. I pay a lot of taxes so my walls and locks don't have to be perfect.
In cyber land, it's an anarchy. The government offers no defense. But there's no reason someone can't offer a deterrent. Like if you knew who broke into your servers, and there was a goon squad that went and broke down their door either kinetically or electronically I think a deterrent strategy could eventually work. Like it literally does for meat-space security.
(Not totally sure I want that, but I'm just saying it would probably work and we haven't really tried it yet.)
So you have a group of 20-somethings in Russia that you suspect are behind the hack.
What do you do? Sending a single missile/drone won't work because Russia has air defenses (probably - with them you never know how on top of things they are, but they will be after the first one). Sending multiple might work, but Russia might fire back and start a war.
Sending special forces or whatever would probably work better the first few times, until Russia deliberately sets a trap for them.
And what if they are from China, or maybe France or India, and you don't really have proof that would stand up in court?
And then what? It's not like the USA doesn't have its own hackers that do shady stuff internationally. Other countries have special forces as well.
I am not sure we want to go this way.
In practice that means US doing whatever they want in poor countries (where they already do whatever they want), and not doing much in powerful enough countries where most of those criminals actually are.
Most of the time we don't even know definitively who is behind the hacks, so it's kind of a moot point.
Yea, I'm not saying I want a kinetic or offensive solution either. More like, it might be possible. Like if someone kills a bunch of people in France and flees to Germany, then Germany is going to hand them back over to France. Clearly not everywhere has extraditions, but you can imagine a world where hacking isn't so much tolerated and defended against as punished and thus uncommon (like breaking into a house; you can but you're not supposed to and there's consequences).
Oh, this happens regularly inside the EU, and even in cooperation with the US. A few years ago, someone from the town I live in was involved in making/selling one of those exploit toolkits on the dark web. The FBI contacted our police; he got arrested and convicted. He didn't get extradited to the US, but is in prison here.
When you can drop a bomb into a pickle barrel from 30000 ft, the question is not “how do I make my pickle barrel stronger?” it is “how do I decrease my reliance on this single pickle barrel?”
Spy-craft is notoriously laughable in its effectiveness. InfoOps, on the other hand...
I guess I’m saying comparisons to both Air based warfare and to the propaganda machine are both the most useful analogs, imho.
> The incentives are all misaligned and the solutions aren't obvious. How is the USG going to secure some random admin access password? Are they going to update the code in the repo?
They can publish best practices, research vulnerabilities, provide educational support, and generally do all the kinds of things governments do to encourage the right behaviors. We have some of this, but at some point, switched to the sexier "the best defense is a good offense". Likely because defense is hard.
Because proper cybersecurity should be treated as a cost of business, unlike the use of force which is an exclusive prerogative of the state. If large companies want the state to step in to absorb some of their costs, they should stop trying to avoid contributing to said state at every step of the way. If said public involvement came at the cost of partial ownership of companies requiring it, with complete disclosure of their financials including offshore, I would not mind at all. I am simply extremely tired of corporations running to daddy at every inconvenience - sometimes of their own doing - while actively trying to crash the whole system into the ground by starving it. You can't have your cake and eat it too.
That assumes all cyber threats can be averted by private corporations. It's difficult for a company to play effective defense against nation-state levels of cyber attack R&D. Yes, companies need better security than they have now, but they can't do it without help.
This is where the threat of retaliation comes in as a deterrent, and the country should be equipped to do so. But publicly subsidizing private cybersecurity is both impractical (how would that work exactly?) and would encourage underspending even further.
Why do you think China or Russia prefer to hack foreign private competitors rather than sending a bunch of missiles on their infrastructure?
We publicly subsidize every other kind of security to some degree already. A company might have security guards, but police are certainly going to be there to provide a baseline policing the neighborhood, respond to calls, etc.
And security via threat of retaliation does not sound like a practical or effective solution either: we already have plenty of capabilities in that area, and it didn't stop east coast oil & gas infrastructure from going down, or a sizeable portion of the nation's meat processing from going the same way. These attacks are escalating rapidly, and relying on the free market to find a solution doesn't look like it's going to happen fast enough.
This needs to be a national, not (just) a private corporate issue because of the enormous national security implications involved in cyber attacks against infrastructure. When a single company's security failure can cause national chaos, there needs to be a nation-level approach to this.
Then how about nationalizing that infrastructure, if it is so crucial for national security and the private sector is unwilling to spend enough to protect itself against threats? Let's not kid ourselves: this is first and foremost a matter of incentives and consequences rather than a lack of capabilities.
I don't see what the public could do better than private entities, besides absorbing their costs.
The only way I can see it practically working is if the private sector would allow government entities full access to their IT infrastructure, submit themselves to random controls, audits and checks, and bear sizeable fines if they're found to be negligent.
The government could set & enforce standards for levels of security and disaster recovery, especially if critical systems. It could not just research but also pass on knowledge of vulnerabilities. I don't expect the government to actually run the security. I expect the government to provide the framework and tools so that everyone doesn't have to figure it out on their own.
Yes, thank you for your responses. People whine, but we're clearly pathetic as an industry, limping along with Unix etc. We know so much about how to build better systems, and yet it's more bandages on the status quo, all the while increasing complexity, which makes things largely futile.
Unlike "real war", cyber defense also gets to design the battlefield, every time. There will always be social attacks, but the stupid C and Unix stuff that is the bread and butter today is completely preventable.
The feds can’t even secure all their own systems. We had the OPM hack, which resulted in the personal information of federal employees being exfiltrated to who knows where. Also, the federal government was still using passwords that were exposed in the breach 3 years later: https://www.forbes.com/sites/leemathews/2018/11/15/office-of....
Tbh I trust the FAANG companies to run better security. Government is incompetent in this area.
Many times it's not the government securing their systems, they've outsourced it to places like SolarWinds. Maybe they would do a better job if political pressure didn't push for more and more privatization of critical operations.
Except you can't do that, which is why the army metaphor doesn't work.
(If you want to argue that this is a realistic response, please explain how doing so would not be acts of war, inviting both retaliation and much worse acts then justified by ours.)
I mean, it can be argued that trying to damage our infrastructure by hacking our computers is just as much of an act of war as firing a missile at our infrastructure. In some cases, the effect of the damage is the same. (I admit the 'cleanup' of the Colonial Pipeline problem is much less than it would be if someone blew up the pipeline, but the impact it had on our country was similar.)
I don't expect the US to start handling this that way any time soon, but I'm not sure it'd be irrational for a nation to decide a cyberattack is, in fact, an act of war.
It really depends how that attack is being organized and backed though - in most cases we'll be left with only a strong suspicion of who actually launched the attack and, due to the nature of technology, it's much more likely with a cyber attack for the real perpetrators to frame someone else.
Even once that's all decided, we'd need to figure out if war would be a reasonable response. I'd propose that one of the main reasons the US hasn't ever escalated the situation with North Korea, even if we ignore China's likely response, is that actually subduing the populace and occupying the country would likely be extremely difficult. It's unlikely that a thoroughly bombed North Korea would be any more stable and friendly than the current North Korea.
War is extremely inefficient at bettering the lives in any of the countries involved - there are times when it is necessary, but it should be avoided whenever possible.
China is literally the only reason the US tolerates North Korea. And China solely tolerates North Korea because it causes all sorts of irritation for the US. Arguably, it would be better off for everyone living in North Korea if one of those two powers annexed it outright, but geopolitics loves backwater proxy wars.
> China is literally the only reason the US tolerates North Korea.
Closer to the active phase of the Korean War, the USSR was also a factor. Today, the US distaste for instability and nation-building, plus North Korea not having a hoard of oil or something similar to overcome that distaste, are also reasons.
> Today, the US distaste for instability, and nation-building
This is an unpopular opinion, but I feel like we should generally accept nation-building doesn't work well, countries we leave tend to go back to being horrible in a number of years after we set up a new nation there. And accepting that, and accepting sometimes that countries are completely failed, harmful to world security, and larger countries need to intervene: Annexation isn't actually a bad concept. It's absolutely frowned upon today, but I'm not sure is worse than what we've done to half a dozen countries in the past couple decades alone.
The barrier to war should be high, but at the point you obliterate a nation's governing structure, defenses, and likely civic infrastructure, you should accept you have a permanent responsibility for the civilians there. And maybe the best way to be democratic about it is to establish a process that states one annexes can petition and vote for secession after they've reached a more stable position.
> North Korea not having a hoard of oil or something
There's that. North Korea is a property that literally only Kim Jong Un wants. And major powers seem perfectly fine to let him have it as long as he mostlyish behaves.
This is strained reasoning. The threat of war with China and the literal guns pointed at millions of heads in South Korea are what prevents the US from picking off DPRK infrastructure and personnel. Compare to Iran if you doubt.
If the US government authorized the NSA/CIA to infiltrate/attack all bitcoin exchanges that accept payments from wallet IDs associated with ransomware, the problem would likely be solved very quickly.
And the threat of a Topol-M nuclear missile with a yield of 800 KT detonating over New York is a pretty good incentive not to launch tomahawk missiles at office buildings located in nuclear-armed countries. If you ever wonder why unfriendly countries have nuclear ambitions, rhetoric like this is part of it.
How many people are you ready to kill over ransomware?
And weren't we just splitting hairs the other day over whether or not Belarus forcing an airplane flying over Belarus to land is excessive use of force? Apparently, ballistic missiles targeted at office buildings aren't?
At this point, with repeated attacks against our infrastructure, we need to get said countries to help us root out said cyber attackers (state sponsored or not).
If this continues to happen we are looking at a really bleak future. There is an -insane- amount of money at stake here. How many meat/farm futures got affected by just taking out the meat industry this time? How much money can these people get not just by the ransomware attack, but by also knowing how fucked an industry is about to be and cashing out.
When they can do this shit with impunity it's a problem. And there's potentially a lot of money available.
This is all just ignoring the fact that some of this might be state sponsored.
I think it's time to start getting some sort of cooperation from said nation states and allowing us to help take out some of their trash.
Because the other option is to treat this like state sponsored attacks on our infrastructure and no one is going to like that.
How do you get countries to cooperate that have no incentive to cooperate?
Cyber warfare, whether ransomware or espionage, is largely asymmetric. Why would these other countries want to play ball when they have everything to gain?
The answer tends to be that you make them cooperate by attaching additional costs to the actions, in order to make them less attractive. These costs come in two major forms, which we might want to categorize as passive and aggressive.
Passive costs might include:

- Economic sanctions

- Diplomatic pressure

Aggressive costs might include:
- Offensive hacks
- Military response
The issue here seems to be that the passive responses aren't likely to be strong enough to dissuade the other actors, while the aggressive responses are too costly. Aggressive counter hacks might just normalize cyber hacking and espionage, and the US is on the wrong side of that asymmetric gamble. Normalizing the behavior would be likely to make it worse than it already is!
Military responses go too far. You can't reaaalllly militarily respond to another nuclear power. Not directly. The potential outcomes there are almost uniformly bad. If you want to play the longer game maybe you do some poking and prodding by supporting third party combatants (IE: Soviet support of Vietnam against the Americans) or political opponents. But there aren't really that many great options on that front today for Russia or China.
So that leaves trying to increase the cost of the passive responses. This is kind of troublesome with China, since they'll just throw identical costs right back at you. It's a bit more possible with Russia, but Europe's entanglement with their power sector screws everything up. And it's not like we're lacking on Russian sanctions as it is.
You can try to play a strong defense, but that's kind of like putting a bandaid on a gunshot wound at this point.
Yadda yadda yadda, I don't know what to do, but I think it's an interesting problem!
Edit: Maybe I shouldn't say European entanglement with Russian power sector. I suppose it's more appropriate to say gas sector?
Physical security is a public good, while computer security is a private good. (Websearch the definitions if you don’t know them already.) The economics just don’t match up.
Because that analogy doesn’t hold. These cyber attacks are all but literally one bored kid and a computer. If the Russians sent one bored kid over here to blow up Hoover Dam, and that actually worked, we’d blame the people who put up the dam.
The fact is that the correct and secure working of computer systems and networks has been severely neglected by companies in favor of their profit. If we are to have state response to such neglect, it should be funded by a huge tax on every copy of Windows.
> These cyber attacks are all but literally one bored kid and a computer.
Are you sure about that? A lot of this stuff is way more than just some bored kid. For the company I work for, there is almost certainly a group of well paid people who sit around every day trying to figure out new ways run scams using our site.
When there is financial motivation, people go through great efforts to get that $$$.
"Security" isn't some catch-all box you can check. It's a non stop game of whack-a-mole where your adversary spends each day getting around whatever you put into place.
Right, security is definitely not a box you can check, but American businesses have decided that if they run Qualys to get that PCI-DSS checkbox, everything is good. Nobody is out there seriously talking about the fact that the Linux kernel is written in fucking C. Well, it's 2% faster than if we wrote it in an actual language with, I don't know, bounds checking, and we'd rather use the 2% for dividends, thanks very much. We need some economic and regulatory incentives here so the public endpoints of your critical oil pipeline are running applications written in safer languages on seL4 platforms with hardware roots of trust, instead of god-damned DOS.
The software industry should be ten times bigger than it is, but the economic incentive has been to make it cost less, rather than to make it safer.
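To make the bounds-checking point concrete, here's a toy sketch (Python standing in for "a language with bounds checking"; purely illustrative, not anything from a real kernel): the same off-by-one that silently corrupts adjacent memory in C fails loudly in a checked language.

```python
# In C, writing past the end of a buffer silently scribbles over
# neighboring memory -- the classic vector behind countless exploits.
# In a bounds-checked language the same mistake is an immediate error.

def copy_into(buf, data):
    """Copy data into a fixed-size buffer, index by index."""
    for i, b in enumerate(data):
        buf[i] = b  # raises IndexError instead of corrupting memory

buf = bytearray(4)
copy_into(buf, b"1234")       # fits exactly
try:
    copy_into(buf, b"12345")  # one byte too many
except IndexError:
    print("overflow caught at runtime")
```

The 2% the kernel folks are protecting is roughly the cost of that per-index check; the trade is that an attacker's malformed input becomes a crash or exception instead of remote code execution.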
How many exploits are because of the Linux kernel, and not userspace software? No kernel will protect you running public ElasticSearch with "username" and "password01" as the credentials.
These cyberattacks are all but literally boiler rooms full of bored Russian men wearing balaclavas and holding flashlights under their chins while they type.
I think the argument is more that if we taxpayers are footing the bill for the corporations then we should also have some say on how much of the profits the corporations get to run away with. The same ought to apply to traditional war too: the government should pay, but the supplier shouldn’t get to charge literally whatever they want.
This is well within the scope of what the government should be doing--just as a country's navy protects merchant ships from pirates and the police protect shopkeepers from burglary. If a foreign military were launching physical attacks on your business we'd expect any government in the world to intervene.
Realistically even with government support, effective cybersecurity is going to require significant private effort and investment as well.
Should our society collectively pay for walls, doors and locks for every company in the country? How about paying for private security on every site? How about paying for personal bodyguards for every CEO? How about we all chip in to buy a password manager subscription for every private employee in the country?
We should regulate and punish, not subsidize. The same way we have dealt with corporate recklessness for decades.
I'm not sure what specifically is being proposed here. I gave some specific examples of government actions to protect its citizens engaging in commerce going back hundreds if not thousands of years. I'm not aware of any government which has paid for doors, locks, or walls for every company in their country, I suspect any action taken by the NSA would be guided by similar restraint.
As the parent comment said, I'd like to see the NSA working to get zero day vulnerabilities fixed as opposed to hoarding them for future exploitation. At least this is my perception, to be honest aside from a few examples I've heard of I don't actually know whether I've correctly characterized their activities, they may already be doing this.
I agree to a point, but to continue the physical-security analogy: while private businesses should not be negligent in securing their property, a patrolling police force should also exist to discourage theft and vandalism at large.
I think the private and public sector have both been negligent when it comes to cybersecurity. Both need to improve. (Like you, I'm willing to bet the private sector is hoping to sit back and let the taxpayer foot the bill for everything. This is a problem too.)
Or, alternatively, the NSA could be tasked with constantly pen testing US companies' computer security. If they find a problem then they would mandate fixes and assess a hefty fine. The fine would be used to cover the NSA's costs and to pay a bounty to the individual who discovered the weakness.
Sure, but they are up against state-sponsored, highly trained actors, and that's not a fair fight. This requires the resources of the US Government as their bodyguard.
Not really, the US military in particular has a lot of slack that could easily be funded into cyber stuff. I would bet there's plenty of (digital) offensive capability in the US so maybe it should be used?
The costs are already socialized - it's our data that gets stolen in hacks. The problem is, the megacorps who lose it must only pay a negligible reputational penalty.
If you could claim compensation for data lost, if businesses had to foot the bill for everybody whose security and privacy is impacted by data breaches, then it would quickly become something they would have to insure against, and the insurers would demand they take reasonable precautions. A system of fines would work well too: aggressive enforcement of the GDPR or similar could create this kind of virtuous circle.
Police forces are paid for with taxes and respond to private businesses. What if publicly funded cybersecurity ends up costing everyone less money over the long term?
Tax laws are a different issue, even though I agree some megacorps aren't paying their fair share of "private security" right now.
Corporations pay tax too. If I was an American shareholder of a company that went to the wall due to a 0-day vulnerability that was known by the NSA, I would not be happy. Imagine if you found out that the NSA knew about COVID but didn't develop or release a vaccine because they wanted to use it themselves. Why is it really any different, if corporations are people too?
Generally speaking, you'll find the federal government has a litany of agencies, on both the offensive and defensive side of... everything. There are absolutely government resources working on securing American infrastructure.
At this point, though, we need an entirely new branch of the military. Otherwise, I just don't see how we are taking this seriously enough.
The way the Air Force specializes in the air.
It is just incredible how we are always fighting the last war. From 9/11, instead of learning we need to constantly stay on top of a shifting battlefield we learned to fight Islamic terrorism. Not good.
The DHS in conjunction with the FBI is supposed to be protecting our critical systems from foreign attackers -- and they are failing spectacularly. New laws and new approaches will be required to even begin to make headway, especially where private companies' operations intersect with national security issues. When should the feds be allowed to access my network to verify my assets are secure?
The NSA's charter is foreign signals intelligence (including computer networks), not law enforcement -- they can't spy on Americans in America except under extraordinary circumstances (they must have a FISA warrant, and that person must be talking to one of a few thousand foreign bad actors). And even then, the collected data is not court admissible. Only the FBI and other law enforcement agencies can spy on Americans in America in legally admissible ways using court orders.
The real issue here is when exploits should be weaponized or shared with industry. Should we prioritize the protection of our networks or should we penetrate the networks of our adversaries? This is a tricky political question that needs to be seriously addressed, the status quo is broken.
Let's say you're a CEO at Big Pipeline Co. One day your phone rings. It's the NSA.
They say your systems are vulnerable as hell. That you're very likely going to be breached in a quite expensive way very soon. It could shut down all the pipes on which Big Pipeline Co depends!
They offer to patch your systems for you. Do you accept, knowing that your staff will have to hand over hundreds to thousands of credentials? Knowing that the employees of the NSA care more about patching than if your systems work afterwards, and you have no real recourse if they screw up?
If you don't accept, what would you prefer the NSA do to secure your company's systems?
Let's say you're the Chairman of the Board of Directors at Big Pipeline Co. One day your phone rings. It's the NSA.
They say your systems are vulnerable as hell, and they told the CEO about it, but he did nothing. He didn't allow the NSA to come in and fix anything; he also didn't take any action on his own to have people internal to the corporation fix it.
What's your obvious response? Fire the CEO and install a new one who will direct the appropriate resources to fixing the problem.
What CEOs have ever been fired for security breaches? If the "free market" doesn't care, why would any "I told you so" from the gov't make any difference? He'll have already taken his golden parachute, and some poor CSO will take the fall.
> What CEOs have ever been fired for security breaches?
None. That's part of my point: the root problem is not actually security by itself, it's bad corporate governance. CEOs should be fired for such things, but they're not.
> If the "free market" doesn't care
Corporate governance is not a free market nowadays. It was more of one in the past (although an argument can be made that there were important non-free market forces even then), when most stock ownership was in the hands of individuals who at least had some incentive to hold boards of directors accountable for long-term stewardship, since they were investing with a long time horizon for their own retirement.
But now most stock ownership is in the hands of large mutual funds (since that's where most people's retirement funds are now), which don't care about long-term stewardship; they only care about short-term earnings. So corporations have a positive incentive to overlook things that, to be fixed, will require sacrificing short-term earnings for long-term stewardship. Individual investors never even see this; all they see is the overall rate of return of their mutual funds. So they don't realize the long-term consequences of what is going on and aren't able to apply free market incentives to correct things.
To play devil's advocate, how much money did this breach actually cost the pipeline? A few million bucks?
That's probably a rounding error on their quarterly report. Heck, it might have cost them more money to hire more people to provide adequate security to prevent such attacks than to just suck it up and get attacked.
It may actually be economically favorable to stay insecure!
If that were the case, the market would actually encourage CEO's to spend less money on security, not more.
Surely the NSA can tell companies about their vulnerabilities without having to actually log in and fix them? "You have a server on 23.117.25.208:3999 which is vulnerable to CVE-2021-1120, fix it."
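(The CVE number and address above are the commenter's hypotheticals. As a toy illustration of the "visible from the outside" part, nothing NSA-specific: the simplest external evidence that a service is exposed is just that a TCP connection to it succeeds.)

```python
# Hypothetical sketch: confirm from outside that host:port answers.
# A real scanner (nmap, Shodan, etc.) does far more, but this is the core.
import socket

def port_is_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A notifier only needs this plus banner/version fingerprinting to say "you are exposing service X, which has known CVEs"; no login to the target is required.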
Realistically, I find it not credible to believe that nobody in big infrastructure companies with IT departments is aware that they have vulnerable systems. I find it far more likely that people are aware and people in positions of leadership making decisions about risk have decided that these risks are acceptable.
Do you think getting an email from the NSA telling IT what they already know is going to change those calculations? My experience with bug bounty programs is that leaders who make risk decisions are more likely to shrug and say "I know, we're OK with that risk".
I realize that this is a personal judgment, and other people may have had wildly different experiences.
> an email from the NSA telling IT what they already know
No, that's not what the email from NSA would say. It would not say "there is a risk of your systems being compromised by cyberattack" in general terms, which is what IT already knows. It would say "your systems are vulnerable to these specific attacks", which IT does not know. So yes, getting this new information should change the risk-benefit calculation dramatically.
I've been on the receiving end of various emails like that. They have details on specific systems and specific attacks. They're occasionally useful, but often not. Knowing that a particular app is vulnerable to XSS might be useful, if I have staff that can fix it and they have the spare cycles.
For example, a hospital IT department might get an email telling them that their MRI is exposing remote desktop to the internet with default credentials. They know that. They don't change it because if they do, their vendor will drop support. This is a real thing that real medical hardware has to deal with, and it's only slowly getting better.
A big industrial company might easily have it worse than a hospital. Fixing the specific CVE on a specific port on a specific machine might mean having to retire a whole series of obscure, niche bits of SCADA hardware that don't support anything modern. It's like all those IoT gadgets that don't support 5GHz, writ large.
Somewhere between those two, you have your well-run Windows network. It's probably a month to several months of patching behind. IT has a whole process to test any new patches for stability and compatibility with line-of-business software to ensure that nothing breaks. Knowing that their systems are vulnerable to the CVE that's fixed by a patch they're testing - or tested and found broke something important - might not always help them very much.
If the message comes with, "You have X time to fix this or you will have Y penalty" it definitely changes the risk/reward equation. Severe enough penalties moves it from "if we have spare cycles" to "how do we get this done."
The NSA’s mission-statement in domestic civic cybersecurity is to ensure the flow of commerce, i.e. to protect GDP. They aren’t going to patch things in a way that makes them not do their jobs any more. That’d be an “attack on commerce” just as much as exploiting the vuln would be.
That's true in broad strokes, but I'm trying to portray things from the position of an executive. Having a bunch of outsiders that you have no real influence over in charge of your systems is terrifying.
The alternatives are a regulatory system for information security or offering advice and hoping companies implement it. There's a lot of advice on offer.
Let's say that you're a CEO at Big Pipeline Co. One day your phone rings. It's the NSA.
They have a report with a list of vulnerabilities. If you don't fix them to your satisfaction, you will be fined in 2 months, 2 months after that you get fined and publicly reported as negligent, and 2 months after that you get fined again and your outstanding vulnerabilities will be published for everyone to take advantage of.
How much effort are you going to put in to securing your infrastructure?
I’d prefer the NSA put in the hard effort to shed their reputation as spies and start by offering plain security advice in the open that can be verified by independent experts. The best way forward is for the NSA to focus on providing high quality security advice, best practices, and guidance to critical infrastructure. This doesn’t involve handing over the “keys to the kingdom”.
The NSA seems to agree with you. So do the Departments of Energy, Commerce, and Defense, all of which have various efforts to provide independently verifiable high quality security advice, best practices, and guidance. In some cases, they've been doing so for years.
But let's skip the NSA bit. Let's say you, CEO of Big Pipeline Co, have been called up by someone at The Office of Cybersecurity, Energy Security, and Emergency Response within the Department of Energy. They offer you all the advice and guidance you could wish for. Now it's up to you to budget resources. What do you do?
Realistically, you probably hand that advice off to your IT or software staff and hope for the best. Though I realize that reasonable people may differ on this point.
The law should require certain minimums of security for infrastructure deemed vital, like oil pipelines. If entertainment companies and HIPAA-covered healthcare organizations can ensure those they work with practice good cybersecurity, why can't the government do the same?
There's already branches of cabinet-level departments that try to do this. In my opinion they're having about the same level of efficacy as one might expect in any other set of large-scale changes in very large old companies with a wide variety of internal systems and needs. If you look you'll find a plethora of government-led attempts to secure various critical industries.
You'll also note that entertainment companies and hospitals are routinely breached. There's perhaps room to question if they are indeed practicing good cybersecurity.
That is certainly not how it works. See the links others posted for context. NSA is more likely to inform you of the vulnerabilities and associated mitigations.
I understand that's not how it works. I'm constructing a deliberately absurd example to show both how the NSA could help and why companies wouldn't accept it.
What exactly do you expect the NSA to do? This is entirely preventable. Something as simple as an offsite tape backup completely thwarts the attack.
Do you want the NSA to send agents out to every Fortune 500 with a blank check so taxpayers can pay for a sane backup strategy to stop a problem we solved 30 years ago?
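The "sane backup strategy" point boils down to two properties: a copy the attacker can't reach to encrypt (offline/offsite media), and independent verification that the copy actually restores. A minimal sketch of the verification half (illustrative only, not any particular vendor's tooling):

```python
# Hash every file at backup time, store the manifest with the offline
# copy, then re-verify a restore against it. If ransomware encrypted
# anything you restored, the digests won't match.
import hashlib
import os

def sha256_of(path):
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root):
    """Record a checksum for every file under root."""
    manifest = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            p = os.path.join(dirpath, name)
            manifest[os.path.relpath(p, root)] = sha256_of(p)
    return manifest

def verify_restore(root, manifest):
    """Return the files whose restored contents don't match the manifest."""
    return [rel for rel, digest in manifest.items()
            if sha256_of(os.path.join(root, rel)) != digest]
```

The hard part in practice isn't the hashing, it's keeping the manifest and media genuinely offline so the same intrusion can't tamper with both.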
"Something as simple as an offsite tape backup completely thwarts the attack."
Not true when they are also blackmailing companies to not release their internal data.
Even something as simple as a company's customer base and the contracts with them can do a huge amount of damage to the company if it's publicly released. So paying a 2 million dollar ransom is the more profitable choice for the company.
Even if the company isn't doing anything illegal or that it's ashamed of.
Wasn't the NSA involved in finding Osama or Suleimani? Find them, then send in Tom Cruise, drone strike, what have you. Israel isn't targeted because that's what their response would be to this type of stuff.
Are Russia or China going to react any different from Iran or Pakistan? They currently think they are untouchable. That needs to change.
I think this is needed because the security industry seems to be well on the way to adopting paying off these people as a routine cost of business. That is going to lead to an absolute disaster if it is allowed to continue and grow.
It needs to be a double edged sword though where companies are just as afraid of facilitating ransomware attacks as they would be of the consequences of facilitating terrorists. In other words, this will only work if it means company's are taking the threat more seriously, not less.
Is that the best USians have at this point? Hope? After "useless wars, TSA and all the security theater" the best you have is hope it will not repeat itself?
This is just DOJ, so far. If ransomware gets defined as terrorism for the US anti-terrorism community, it could become very dangerous to be in the ransomware business.
The US has a huge anti-terrorism operation in being, and it's not that busy. Islamic terrorism against the US has been confined to minor local nuts since the US wiped out Bin Laden. And, before that, being "#2 in Al Queda" meant having a rather short life expectancy.
Now, all those people in northern Virginia and southern Maryland may be getting new targets.
They fucked up by targeting infrastructure. If they stuck with small companies they could keep doing it till the cows came home. But now they have governments against them so now they will be hunted down.
These groups aren't really "targeting" anyone. These ransomware attacks are as sophisticated as Nigerian prince emails. Send out a lot of spam, wait for someone who clicks on it and is running outdated software, and boom. Sooner or later you will encrypt something important enough to pay for.
> penetrated the pipeline operator on the U.S. East Coast, locking its systems and demanding a ransom. The hack caused a shutdown lasting several days
This rings a little disingenuous, since (IIRC) the shutdown wasn't caused by the hack, the interruption of service was a deliberate choice by Colonial because (in brief) they wouldn't be able to charge their customers until they got their accounting systems working again.
The company, providing arguably an essential service, chose to stop the flow instead of estimating / approximating / using past averages to bill their customers. They likely lost much more revenue this way.
> a cyber criminal group... penetrated a pipeline operator on the U.S. East Coast, locking its systems and demanding a ransom. The hack caused a shutdown lasting several days...
I expect more precise language than this from Reuters. This makes it sound like the ransomware was responsible for shutting down the pipeline. The billing system was compromised. Colonial shut the pipeline down themselves so they wouldn't have billing inaccuracies.
“Colonial Pipeline decided to pay the hackers who invaded their systems nearly $5 million to regain access, the company said.”
That is the problem right there. Someone just made $5MM, tax free. Time to make paying ransomware illegal; that will kill off the criminal market for ransomware attacks, leaving only the politically motivated ones.
If it was illegal to pay the hackers back, and the Colonial Pipeline ransomware attack still happened, what would the options be? We'd have to turn the systems back on some way right?
They'd restore from backups, which is already what they did even after paying the ransom. More importantly, would the hack have happened in first place if they knew there was no chance of being paid?
Every ransom paid just funds and encourages the next hack. The social damage is deserving of a large fine (i.e. 10x the ransom).
Apparently they ended up having to do just that even after paying the ransom:
"The decryption software provided by the hacking group DarkSide, notes Bloomberg, was reportedly 'so slow' that Colonial Pipeline 'continued using its own backups to help restore the system.'"
I mean if you have backups then sure, don't pay. Every case won't be that simple. It also seems a bit odd that they'd pay if they truly had all the backups they needed.
Theoretically, no, the hack wouldn't happen if they knew there was no chance.
Realistically, yes, the hack would still happen. Because there will never be a world where people don't pay ransoms, especially if they have no other options / backups.
If they restored from backup, how do they know the attack wouldn't hit again immediately? The ransom wasn't just to decrypt the data, but to halt the attack.
> More importantly, would the hack have happened in first place if they knew there was no chance of being paid?
Why wouldn't it? They could easily have been paid by another group to perform the hack, used the hack to manipulate stock prices, sold the stolen financial data, or, most likely, the ransom would have been paid indirectly through some other means, like hiring a "cyber security consultant."
It is just pointing out the obvious that there exist ways to transfer money obtained via criminal actions that may not be in compliance with various nations' tax laws. Talking about activities that are illegal in one's own jurisdiction without intent to break the law is not illegal...
Regardless, the groups that pulled this off already know this and have machinery like that which was revealed in the Panama Papers, plus crypto mixers, ready to launder the money.
What is now needed is to give real consequences to US-based companies and institutions that pay any sort of ransom to the state/non-state actors that are perpetrating these hacking events. This will remove the profit motive, and I don't care if it's Colonial Pipeline or UCSF, this behavior needs to stop and the criminals behind it need to know that there is not any money to be made.
I find it wild that the "run government like a business" crowd now wants government to run business. No one in this thread is really discussing what, if anything, the government can really do. Meanwhile, business is more than happy to be a toddler wielding a gun of computer security literacy, or to take the money of such companies while not truly helping.
As others are pointing out in various ways in this thread, to put it bluntly: this viewpoint treats it as a purely theoretical computer-science question. There are many, many angles to making this more painful, even just via signalling, e.g. the pipeline hackers backing off and creating a code of conduct for themselves, then disappearing altogether.
All this talk about software (in)security within companies reminds me of a typical conclusion after a data leak. When it's a large company, the conclusion is they may, and should, have done better, but it's inherently impossible for a large company to secure everything well enough. When it's a small company, the conclusion is they should have done better, but it's inherently impossible for a small company to, well, do better, they are too small.
Now, I'm all for treating ransomware, and generally all the large scale and/or state-sponsored hacks with a much higher priority, send the drones and whatnot. But this MUST be accompanied by more accountability on the commercial entities.
You're too small to secure sensitive data of hundreds of millions of people? Maybe you shouldn't have amassed this data in the first place. You're too big to secure everything? Well, did you secure ANYTHING? Did you follow reasonable procedures, did you, crazy idea, make sure you can't access critical systems from the internet and/or with a default password, etc.?
And if you fail, and fail you will, there's no perfect system, I believe there should be penalties not for failing, but for not doing enough to prevent it. To refer to all the plane analogies, if your wings are made of cardboard and everybody knew but pretended it's OK, because otherwise it would slightly diminish shareholder value, well, there will be consequences.
In aviation, you could go to jail for signing off on something that you know is not secure, if it causes an accident and people die. Specifically not for accidents, but for neglecting your duty to make sure that you've done all you could. For lying, deceiving, ignoring, faking, for being too lazy or too greedy to do things properly. Sounds familiar?
With large scale infrastructure under constant attacks, people dying because someone couldn't be bothered to do things properly is not an "if" any more. And better hope those autonomous trucks are very, very hard to hack.
Would a nationalized bug bounty program help here? Along with some compliance enforcement that the bounty is actually addressed, fulfilled, and paid by the vulnerable entity or the government (funded through some form of corporate tax). I haven't really thought out the details, but likely some practical and effective threshold exists where a business entity in the US enters into mandatory participation.
Genuinely curious, would love to see others' thoughts.
> Would a nationalized bug bounty program help here?
A nationalized ransomware team would.
I'm serious. Just like how NSA said "we can't beat em so we'll join em" and started buying zero-days with both fists. If, back in the 1990s, you tried telling people this would happen you would get shouted down by everyone in the room. But it did happen.
If you get owned by Team Fed you get a phone number. You call the phone number, get informed that you got hacked, and get the decryption key immediately. The ransom is added to your company's next annual tax filing. Ransom levels are slowly jacked up until morale^H^H^H security improves.
Bruce Schneier, our country needs you! If you—or someone with your mindset—isn’t in authority and we get the technical equivalent of the TSA, we’re in for a world of hurt and trouble.
Of course you get the technical equivalent of the TSA. Even if you had Bruce Schneier setting it up, he won't run it in perpetuity; government in the long run descends to maximum power exercised with minimum intelligence unless prevented by the people governed.
If we don't get ahead of this we'll regulatory capture ourselves into oblivion and the enemy will win anyway. As long as state-sponsored-actors are indistinguishable from black-market criminals this will never escalate beyond the perpetual cat and mouse game. We simply have to be better, and we can't have oversight committees and regulatory boards managing it. Infosec is ripe for being revolutionized.
US Constitution empowers Congress to issue "Letters of Marque and Reprisal" - to wit grant permission for private entities (people, companies) to wage war on other private entities. Enacted to help shipping companies deal with pirates, applies today for the likes of ransomware perpetrators.
Finally we're going to get security research paid properly and companies punished for not fixing their zero-day sponges. Oh, it's just another monstrous-deterrence three-letter agency.
But yeah, in a game-theory sense, it's the cheapest option: have a nuclear counter-strike instead of building all cities like underground bunkers. Security by strike team. That would actually work, if all countries agreed on it.
Or the internet is expected to break into allegiance-sized parts: servers only connect to countries that will extradite cyber-criminals and adhere to this connection contract.
Curious if this will result in extraterritorial enforcement. For example, it's clear Moscow is either unwilling or unable to prosecute cyber criminals within its border.
That's one possible reading. Another is that the US will start working on their own Great Firewall, such that your packets need to be cleared by a metaphorical digital TSA to enter the country.
Something like SCION may be in the "Western" Internet's future, is my guess. I don't expect protection-at-edge or pervasive atop-the-current-Internet surveillance to be the solution for the OECD.
1) All security has weaknesses or work-arounds. That doesn't mean that all security is worthless. Forcing adversaries to take more risks and expend more effort is kind of the whole point, and that's exactly what you're talking about.
2) Are you arguing that the actual Great Firewall, a real thing we see actually working on a massive scale, does not make it much harder for foreigners to cyber-attack China?
3) See my other post on this thread—there's work toward re-designing the Internet to make evading state- or bloc-level origin control, including communicating with existing compromised nodes inside a state, remotely, way harder than it is now. I'm talking at the node-to-node routing and backbone level. It's interesting/terrifying stuff.
4) Couple 3 with some other minor and fairly obvious tweaks to how Internet access works, and even getting a foreign device with its own infinite-range radio into the target state would be reduced to step one of several to gain access to a target state's network, and that access would likely not last long if you start doing anything weird with it.
It's just a metaphor. In reality, they're just going to use the old Patriot Act mass-surveillance infrastructure, which sits inside ISPs and processes every packet.
Hackers in Russia extorting Americans is illegal under U.S. law; that's extraterritorial jurisdiction. The U.S. government going into Russia (or Pakistan or Ethiopia) to punish those hackers without the home country's permission is extraterritorial enforcement.
We have a lot of precedent for the former. The latter's use is more limited, for obvious reasons.
Looking back with 20/20: clothing styles back then, even for the rockstars, were damn basic: 99% long-haired, half-naked paler-than-I people in jeans, jean jackets, and wifebeaters. xD
How are we going to have enough turns to intercept all of these flying white TicTacs? No really, if we don't even have anything fast enough to keep up with whatever the heck these are (if they're real).
(Just don't equip your army with only nuke missiles because they destroy all of the good stuff and psy attacks would cross the streams.)
I think that's what they're referring to. Elerium-115 seems to be the current name* for Element-115, which is said to have antigravity properties and so is how UFOs are able to do their impossible maneuvers.
*Back when I was obsessed with this in the early 2000s I'd never heard of Elerium-115, it was always Element-115. Looks like the origin of the name is actually a game in 1994, but may not have become common until around 2013/2014.
I'm sure the Russians are as interested in these crooks as the Americans are, since it would be attractive to seize their assets. They will not extradite them, but the hackers might end up wishing they had been.
Why would they shut them down when they can just recruit them? Think of these ransomware groups as the minor leagues, the best get to move up to Russia's cyberwarfare teams.
LOL. A good portion of cyber security attacks are government military operations. You think China or Russia is sad that hackers closed down our gas pipelines?
This article reminds me of another, published by The Harvard Gazette, arguing that government can't keep up with technology: big tech keeps growing larger and faster than government can match. In the case of ransomware, the government and the Supreme Court are trying to keep up, but in my opinion it will be a long time before government and bureaucracy can address the problem. The same happened with Bitcoin: sure, now everyone wants regulation around cryptocurrency, but governments seem to be investing in the lost cause of catching up with these growing uncertainties.
I don't mean that government shouldn't engage in these talks and try to regulate these markets; my only concern is the relative pace of the two. Instead of the same old regulatory frameworks and the same old mentalities, unorthodox approaches could better address these issues.
The federal government has been pushing to ban different encryption standards for years, or at least require a governmental backdoor. If we get a 9/11 size cyber attack, they will ruthlessly weaponize it, and whatever semblance of internet privacy we had will be gone.
They already failed - let's make a task force for central monitoring of events...
Has anyone heard of evaluating the root of the problem? a) un-Internet that damn infrastructure; b) replace M$ systems with Linux or some other form of *nix (excluding Apple products, which are too brain-replacing and limiting to build on).
Now you say *nixes have bugs too, and hackers already have malware versions running on them? But there is at least one difference: admins have control of those systems' parts and can do what they please, including component upgrades. Not so much with M$'s forty-year-old big ball of mud.
And why am I so anti-M$? Remember the Ukrainian power-plant problems? M$. The hydro dam in the US? M$. Lately something in Brazil? Guess what.
Do the evaluation of the base problem.
And build more fiber for infrastructure!
If the USG treats this even close to the way they treat terrorism in regards to policy and funding, I’m curious what that will look like and how nation-states harboring those people will react.
So, will this be another TSA? Time will tell, but looking at what the USG has decided over the past two decades, the score leans overwhelmingly toward "yes."
Given the US response to threats and disasters including COVID-19, global warming, fascism, white nationalism, gross media manipulation, wildfire, drought, the opioid crisis, the 2007-8 global financial crisis, the housing bubble, Hurricane Katrina, and the 9/11 attacks, just to cover the past two decades, I'd say "asleep at the wheel" is standard operating procedure.
All of those were known threats or repeat instances of similar previous threats.
It's the likely threats for which there've been no earlier parallels that I'm truly terrified of.
They absolutely should. We are in the midst of a cyberwar against criminal gangs sheltered by a kleptocracy that already attempted political sabotage against this country. All options must be on the table including physical retaliation - the threat isn’t going away.
I think a lot of people don't realize this, because I never see it mentioned, but when the Soviet Union dissolved we (the U.S.) convinced the Ukrainians to give up their loose nuclear weapons with the promise that we would protect them going forward. It may be time to make good on that promise and help the Ukrainians drive the Russians back across their border. Crimea will stay gone, because it belonged to Russia to begin with (https://en.wikipedia.org/wiki/1954_transfer_of_Crimea). There are a lot of things we could do with Ukraine to punish Russia.
Everyone points at Moscow as if they are behind the attacks, when, in fact, all we know is that the hackers are probably based in Russia (if treating Cyrillic keyboards specially isn't a silly false flag). They say Russia is unwilling to do anything, etc. But did the FBI actually reach out to their Russian counterparts for assistance? Or are they waiting for Moscow to come forward and fix all their security problems on its own? Ten years ago, when mail-order bride scams targeting the US/Canada/Australia were popular, Russian police actually did catch and imprison a lot of scammers after American/Canadian requests; some of them in my own town.
The free market needs to start punishing these companies, reputation-wise, for not paying for backups. If you can't afford that, why should I have faith that your IT department is even competent? This is data hoarder 101.
The government should also cover any damages, then, for failing to protect its citizens, if it will prevent them from remedying the situation themselves.
Let's look at the chain of events. Computing machinery becomes exponentially cheaper, and it gets pushed into all corners of industry.
Shared computing becomes a thing, and the need to have a better model of security is realized as a lesson from Viet Nam, and the Capability Based Security model is born.
Microprocessors again exponentially decrease the cost of computing, and Capability Based Security isn't required because all of the installations tend to have one or a handful of users.
The internet is born, and the cost of networking becomes exponentially cheaper, now all of those low security end users are connected together.
Systems become more powerful with the continuing drop in the cost of processor, memory and storage, so they become more complex. Nobody writes their own software any more, almost all coding is outsourced in some fashion. Security is only a concern if it trickles back to the original source as a problem.
A culture of "move fast and break things" pervades Silicon Valley, and the internet, and thus newer is always seen as better.
The lack of a security model at the base of all these systems is exploited for financial gain. Band-aid layers are added to try to patch the obviously inferior operating systems that pervade the land.
Because the lessons of capability based security were ignored for decades, and not taught, the common consensus is that computers can never be made secure, and your best hope is to hire the smartest people in the world, at less than the average market rate, to secure your systems.
And we repeatedly blame criminals, corporations, programmers, users, and now other countries, instead of solving the problem by properly implementing security.
I'm not a "security expert", I have no encyclopedic knowledge of the ways of criminals. Let's agree that is well established.
I do know how computers work, down to the transistor level. I've been playing with them since 1978.
Rules I would impose:
Industrial control systems would be isolated from the internet by a unidirectional network. Data could get out, ONLY. You can have helpers on the inside and outside to handle things like buffering logs, etc.
If you need remote control of something industrial, it has to be on a physically separate network, airgapped from the world.
In Government, I would have NEVER connected the Office of Personnel Management system to the internet, except to allow data INBOUND through a data diode. All outbound queries would require passing through a human with the proper security clearance.
All sensitive or classified systems would be similarly isolated, and only allow ingress of data.
Multilevel secure computing would be required for all government systems. Red Teams would be used to test security periodically, run by the Inspector General.
Capability Based Security would be the norm. Most users wouldn't see much of a difference in their day to day interactions.
Bug bounties would be required for any commercial software vendor, with public disclosure after 1 year of all payouts. Bugs submitted that aren't paid would be disclosed in 6 months.
The NSA would shift roles from spying on everything just because they can, to first making sure nobody can spy on us, and only then spying on everyone else.
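The "unidirectional network" rule above can be made concrete with a toy sketch. Assuming a link whose physical return path is severed (a data diode), the "helper on the inside" can only push data out, never receive; since no ACKs are possible, a naive way to compensate for loss is to tag each datagram with a sequence number and send it several times so the outside helper can de-duplicate. The address and port here are placeholders, not part of any real product:

```python
import socket
import time

# Placeholder for the far side of the one-way link. In a real deployment
# this would be the outside collector past the diode, not loopback.
DIODE_ADDR = ("127.0.0.1", 5140)

def send_logs(lines, repeats=3):
    """Push log lines over a send-only UDP socket.

    With the physical return path cut there is no feedback channel, so we
    compensate for possible loss by repeating each datagram, prefixed with
    a sequence number the receiver uses to de-duplicate and reorder.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for seq, line in enumerate(lines):
            datagram = f"{seq}|{line}".encode()
            for _ in range(repeats):  # naive redundancy in place of ACKs
                sock.sendto(datagram, DIODE_ADDR)
                time.sleep(0.01)
    finally:
        sock.close()

send_logs(["pump_1 pressure=42", "pump_2 pressure=40"])
```

Real diode setups also need buffering and integrity checks on both sides, but the asymmetry is the point: even a fully compromised outside host has no packet it can send back in.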
> Because the lessons of capability based security were ignored for decades, and not taught, the common consensus is that computers can never be made secure, and your best hope is to hire the smartest people in the world, at less than the average market rate, to secure your systems.
I presume the OP is a fan of capability security, and while I'm not an expert on capabilities, I agree they can go a _long_ way toward mitigating risk. Unfortunately, none of the mainstream OSes offer even a smidge of a way of actually working with capabilities. Google's recently launched Fuchsia _does_ support capabilities out of the box, but it has a long way to go before it's regarded as mainstream.
Things will continue to get worse. Google's Fuchsia and Genode are two capability based Operating Systems that are likely to be good enough to hack in the next year or so.
I expect 3-5 more years of this before enough experience is gained with Capability Based systems to finally cause mass adoption.
In the meanwhile, it would be nice to have a Raspberry Pi based data diode setup that can buffer all the standard stuff, as well as SCADA.
Also in the meanwhile, there is non-zero danger that Congress will use this as an excuse to purge the nation of general purpose computing available to the masses.
Also, the Military Industrial Complex will push for more funds from this.
Also, many a Startup will sell more security snake-oil.
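For readers unfamiliar with the capability model discussed above, here is a toy object-capability illustration in Python (assumed for illustration only; this is not how Fuchsia or Genode implement it). The idea: authority is conveyed solely by holding an unforgeable reference, so a compromised component cannot even name resources it was never handed, and a holder can derive weaker ("attenuated") capabilities to pass along:

```python
class FileCap:
    """A toy capability: holding the object IS the permission."""

    def __init__(self, store, name, writable):
        self._store = store
        self._name = name
        self._writable = writable

    def read(self):
        return self._store[self._name]

    def write(self, data):
        if not self._writable:
            raise PermissionError("read-only capability")
        self._store[self._name] = data

    def attenuated(self):
        """Derive a weaker, read-only capability to hand to others."""
        return FileCap(self._store, self._name, writable=False)

# A trusted parent creates capabilities and hands them out.
store = {"config": "threshold=5"}
full = FileCap(store, "config", writable=True)
readonly = full.attenuated()

# An untrusted component given only `readonly` can look but not touch,
# and has no way to reach entries it was never given a capability for.
print(readonly.read())  # "threshold=5"
```

Contrast this with ambient-authority systems, where any code running as you can open any file you can: here there is no global namespace to abuse in the first place.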
If by "proper security" the author means something impenetrable, then no such thing exists, nor will it ever, in general. We can approach some reasonable level, but with the current explosion of software, its complexity, its insane degree of dependency, and every button of your shirt becoming a "smart" gizmo connected to Amazon and whatnot, I believe the situation will, for now, only get worse.
The irony of this post on a VC hosted forum. If you believe this, couldn’t/shouldn’t you pitch it get funding and live the Silicon Valley dream and “make the world a better place”?
If I were going to pitch something, it would be a kit consisting of 2 servers and a data diode, useful for getting data to move only in your direction of choice, guaranteed by the laws of physics to be un-hackable. (LED/Photodetector pair)
Uh... next time the US targets Iran, for example with something like Stuxnet, does that mean they'll call themselves terrorists now? Great. I didn't know that.
Before the yacht was launched, before it was first put in the water, there was a big problem with rats entering through the large holes in the bottom of the hull. To remedy the situation, the yacht builders began feeding a large number of cats around the base of the yacht while they finished the furnishings and painted the gold trims. The rat problem was solved and the happy day of launch is near.
They'll just hire a million little Dutch boys with SCUBA to put their fingers where less wholy materials up to ship-building codes belongs. Problem solved!
wholy -> holey? Definitely not wholy. That's partly wholly. Wholly is derived from whole (all/everything/complete) and not relating to a hole.
This is an adjective derived from a noun, so hole -> holey. It could be hole + ly -> holely but it isn't.
Now, we have the word pinned down. How on earth do you pronounce the bloody thing? For me (en_GB): hole-ee. The dash "-" is not a pause, I would run the word hole straight into the ee sound. The ee phoneme is quite short.
Every business owner is either ignorant (the default), has made the wilful calculation that risk < cost, or is so busy barely surviving that things like security are not high-priority enough to get attention. Security is fundamentally a resource-allocation problem. Overspending on security results in high opportunity cost. Underspending on security results in high risk in terms of trust and money, as well as poor national security.
A valley company that takes security seriously will: Hire experts. Scope attack surface/risks. Implement direct mitigations. Implement policy. Implement defense in depth. Develop a system capable of discovering indicators of compromise (IOC's). Verify security via bug bounty and pen testing, both internal and external.
Clearly most of these things are not "features" and therefore are a cost. Furthermore, since every company must implement them, the cost of security for society at large is an O(N) problem.
We must set up a system that mitigates the unpayable O(N) cost of security.
Pen testing/Bug Bounty/verification is probably the most easily scalable problem to solve. Whether you unleash hackers on companies by indemnifying them or specifically pay for Project Zero like entities or turn our own nation-state attackers against US companies with the weight of the US government behind it, it seems quite feasible to create scaled cybersecurity monitoring which can then better inform both technical solutions and policy solutions.
Once companies know they have poor security and once a business can see being breached as a certainty rather than a potential risk, I think the free market can probably solve the problem.
The title is a bit misleading. It is the U.S. Department of Justice that is promising to give the prosecution of these hacks a similar priority to terrorism. Not the entire United States government. Please keep this in mind before speculating about military action, SEC regulation, new laws being passed, or the intelligence community getting involved. This is about DoJ priorities.
Yes, just DOJ. So not drone strikes, but undercover FBI agents spending months trying to cajole and harass coders into writing ransomware so that they can bust them.
Whelp, that's the end of cryptocurrency... probably should sell your HODLings now. If we're going to Patriot Act the crud out of ransomware, Bitcoin is gonna be illegal.
Difference is if corporations and funds can't hold bitcoin/crypto - you're back to $1/BTC. The whole value proposition of BTC hype bubble bursts if it's illegal in a major market like USA. Don't doubt some cyberpunk nuts will keep playing with it.
And presumably if nobody can easily convert large quantities of crypto to and from USD. Sure, you could find an international exchange willing to give you some other fiat, but American KYC laws are still going to chase you all over the globe.
I mean, it could still be used as an IOU of sorts for illegal activities. But if this is the sole remaining use case, I'd imagine there are better means for this than hosting on a public ledger.
We have plenty of historical and current examples of governments imposing capital controls to restrict access to foreign currency. Very rarely does it result in the market price of the foreign currency going down. Usually quite the opposite.
Heck, even in American history we once tried to ban private ownership of gold bullion. The black market price of gold rose substantially.
> We have plenty of historical and current examples of governments imposing capital controls to restrict access to foreign currency. Very rarely does it result in the market price of the foreign currency going down. Usually quite the opposite.
That's a nonsense comparison. When governments impose capital controls it means their currency is already sinking and it's a last ditch attempt to prevent this inevitable scenario.
BTC valuation is entirely based on narratives about how it's going to replace standard currency in whatever story is popular, and from what I can see right now it's being pumped up by funds who can't find other good investments in this market and are willing to play with crypto. If it's illegal for US funds/citizens to hold it or be involved with it, the selloff alone would kill the market instantaneously.
The US only constitutes 15% of global GDP. There's no reason to think that US investors represent an outsized position in crypto holdings. American funds may have large positions, but that's largely because the American asset managers tend to attract substantial overseas positions.
There's no reason to think that, say a Japanese pension fund, that's invested in Grayscale is going to say, "oh shoot, guess there's no possible way to allocate to this asset class now". They'll just reinvest that same allocation in a UK or Caymans domiciled fund.
If anything the 85% of crypto investors who aren't invested, will most likely hoard in anticipation of the policy being reversed. For better or worse the US government has extremely low credibility when it comes to long-term policy consistency. Almost any US policy can simply be waited out until Congress/White House flips parties.
> The US only constitutes 15% of global GDP. There's no reason to think that US investors represent an outsized position in crypto holdings.
Holdings would seem to be more reasonably assumed to be proportional to wealth [0], not GDP. The US has a significantly larger share of global wealth than it does of GDP.
[0] or maybe more-than-linearly related, since less-wealthy people will have more of their wealth in directly-used assets like tools, vehicles, and homes.
I mean, does Bitcoin give people the sort of high that they'd risk going to prison for? I'm not sure nearly-random numbers has the staying power compared to addictive substances.
Bitcoin is just math. The US isn't going to be able to make holding bitcoin illegal, and I very much doubt it will ever be able to make buying and selling it illegal -- there are even free speech issues here. What it can do is tax the hell out of it and regulate the exchanges as investment platforms, but it will have a hard time trying to make it illegal to pay someone to sign a cryptographic hash.
That is a blanket statement that doesn't bear up to scrutiny as you are confusing things like international transactions with national transactions, regulations by states versus by the federal government, and completely ignoring all the constitutional restrictions we have. It's like someone talking about freedom of contract and you counter with trade sanctions against North Korea. How does one respond to such high level dismissals so removed from the specific legal issues at play here?
So if I give someone $10 in cash to buy something, is the US government right over my shoulder approving that transaction? You realize that doesn't make any sense, right?
You won't be seeing 30%+ gains when they're throwing people in prison for it.
Most drug users are never prosecuted. But the threat of prosecution does very little to affect the quality of their purchase, relative to what it would do to BTC market as a whole.
Do you know how absurd that would be? Cryptocurrencies like Bitcoin are, in essence, just databases. Throwing someone in prison for running a database on their computer would probably spell the end of general-purpose computers. You would not be allowed to run databases anymore unless they were approved.
> Do you know how absurd that would be? Bitcoin is just a database in essence
It’s really not unusual for the law to treat things differently based on the purpose for which they are used, even when they are “just a database, in essence.”
I know, in theory you can have a law for everything. For example, in the Soviet Union basic electronics such as radios were restricted and you were not allowed to tune your sets to western stations.
I don't think a database or a computer murdered or kidnapped anyone. I mean, a computer which runs the B-tree algorithm and talks with the rest of the internet according to a bunch of loops/if/then/else statements is not exactly an assault rifle.
But yes, everything can be banned eventually. We'll just go back to living in the cave.
I assume you'd have no issue taking down a database full of child pornography? You know there are some who argue that CP is just "bits on disk".
What if society determines that cryptocurrency also has negative externalities? You're free to disagree but I just stuck my finger in the air and it's pretty clear which way the wind is blowing.
The only societies that determine that cryptocurrencies have negative externalities are the ones controlled by criminals at the highest levels of their government. Just google a map to see where crypto has been banned. (Note that many of these may not even appear to be criminals, and project a legitimate image to their public.)
Here's a thought experiment: If you're a political party that had taken over the government through criminal means such as election fraud or more coercive methods such as a disinformation/propaganda campaign or a coup, the first thing you will do is to make sure that you have control of the money. Since cryptocurrencies are too transparent and undermine the absolute control of state-issued currency, these will be seen as a threat to the criminal government and will be the first thing to get banned.
We can quickly see that the world's worst criminals at the highest levels hate cryptocurrency, and prefer to use the existing paper-based technologies instead, that allow them to be more opaque and retain absolute power.
I mean, look... cryptocurrency is a deeply political disruption to nation states that have always had absolute control over their national currencies. If a totalitarian government is ever to maintain power, banning cryptocurrency would be a necessity, and so far there's direct evidence of this, as I've pointed out. (Another thought experiment: why is crypto banned in North Korea?)
It would be a necessity and extremely easy. It's not preventing authoritarian states from doing anything. Men with guns in riot gear just show up and seize all of your hardware, and whatever other property you own that they want. See China's current mining crackdown if you don't believe me. Many authoritarian states are currently tolerating crypto because it's an excellent way to avoid international sanctions and launder hacking ransoms.
By pointing out that crypto is banned for regular North Korean citizens, did you not just prove my point that it's a worthless tool for countering authoritarianism?
I thought I smelled authoritarianism! Here we see the ultimate purpose of this entire desultory exercise. Having problems online? No backups? Don't fix your pathetic shit; just be the excuse for the USA military-enforcement-imprisonment-industrial complex to oppress everyone on earth. Good grief.
Nations make laws against bad things. People who violate those laws go to jail. A ban on cryptocurrency (or rather, exchanging it for dollars) will be a hell of a lot easier than banning popular intoxicants.
We're done putting up with this particularly pernicious iteration of tulip mania. Time to pull the plug before it does any more damage.
I think this dynamic will play out very differently with something the value of which is mostly determined by current and future-expected transaction velocity & volume (to the extent that it's not sheer speculation). Now, the cost for services involving Bitcoin, like converting it into dollars, would probably shoot way up.
Outlawing Bitcoin (or cryptocurrency generally) would cause a huge demand reduction. Some coins might adjust supply to compensate, but total crypto "market cap" would surely plummet.
If you can't exchange BTC for dollars other than in person, and if you can't use it to purchase goods online other than via TOR, that is not going to increase the price. It's going to crash it.
Drugs are renowned as a special case when it comes to states' enforcement power. Currency control is not.
Outside failed states, capital controls and foreign currency restrictions have been historically well enforced and followed.
The U.S. banning cryptocurrencies, sanctioning connected individuals and firms and committing to leveling repeated 51% attacks would functionally destroy most cryptocurrencies. (There is zero indication this is being contemplated.)
I think if you’re a criminal with a lot of Bitcoins you can do it. One way is through exchange insiders taking a bunch of your balance and giving you a bag of cash (but you sell your coins to them at a discount of course.) See eg https://cybernews.com/security/how-we-applied-to-work-with-r...
LSD was and still is one of the cheapest black market drugs you’ll find. There is no shortage. Imo it’s easier to get and test for safety than ever before.
We're not outlawing math. You can still run your little calculations on your machine. You just can't exchange them for dollars. That's what we're proposing here. Currency control.
Are you confusing this with the debate around encryption? That wouldn't surprise me coming from someone who uses the phrase "nocoiner".
Thanks for finally admitting your intentions. I don't see a similar suggestion anywhere ITT or TFA, but it all did seem a bit too coy. Physically, it would be possible to shut down e.g. Coinbase. Legally, that seems a stretch. Politically, with the particular investors they now have, you're trying to shut the barn door after the horse has joined the circus.
Shutting down Coinbase, however, will have no effect on bitcoin or the people who use it. That's the point of bitcoin, and it has been since the protocol was published.
> Shutting down Coinbase, however, will have no effect on bitcoin or the people who use it.
You all are more than free to continue to associate with each other. As long as you're not breaking any preexisting federal laws (let's be honest: most of you are). It's the normie and Wall Street market we are targeting. Good luck maintaining the bull run, sweetheart.
It would be precious for you to suggest you might have some effect on "Wall Street". It's called "capitalism" because capital is in control; "democracy" would imply something else. Most humans break USA federal laws every day. That's the point of USA federal laws: if they couldn't be used to destroy any person at any time then the billionaires would come up with something else. At the same time, if rope is being sold, why wouldn't they sell it? Cops are sometimes surprised the first time they realize who they work for, but if you stick around you'll learn.
The most precious thought is that one might think to rein in authoritarian capitalists by attacking the one thing created in the last century that has a chance of actually undermining authoritarian capitalism. You don't actually think that, because ITT we see plenty of evidence you're on the other side of this conflict.
You’re well aware I’m not talking about opening your neighbor’s mail, or any other "crimes" that are still on the books but never prosecuted. When I refer to the federal crimes that are committed daily by crypto enthusiasts, I’m talking about blatant tax evasion that makes the Panama Papers look like child’s play. In a sane world, none of it would be possible. Soon enough.
Do you propose that Bezos should pay his taxes at the same rates my neighbors and I pay? That sounds good, but I don't see what it has to do with bitcoin. Do you allege that he is hiding income by not reporting it? It has been my impression that he prefers instead to hire people to write the tax law in his interests.
You keep focusing on the enforcement itself rather than the goals we hope to accomplish via enforcement. I infer that you have a special fascination for enforcement, but please understand that many people do not share this special fascination with you.
Just a special distaste for the crypto crowd. Who somehow manage to moralize all day about "authoritarian capitalism" while skimming off everyone in society with their tax avoidance scam.
I'm inclined to say Jeff Bezos is no better. But then I remember he and his companies have actually produced some innovations that are legitimately beneficial.
AML, SDN lists - yes, all that is in scope. But enforcement has been uneven: it’s so far been about making US exchanges comply with KYC laws. Nobody has really gone further.
What happens when a company is a victim of a ransomware attack and OFAC puts the extortion wallet on an exclusion list?
The risk isn’t just to the person holding the wallet: it’s the risk of OFAC sanctions hitting the exchanges that take dirty BTC and pay out USD.
So now, know your customer turns from “be sure I don’t send USD to a specially designated national” to “be sure I never accept crypto from a burnt wallet.”
I thought "a solution in search of a problem" was perhaps too charitable. Every joule we waste on hashes is another gram of carbon in the air. And it is all waste — the only problem cryptocoins solve better than other solutions is illicit transactions. It’s not even close to as anonymous as cash.
Coinage used to be more portable in the days of precious metal coins. But honestly I’ve run into very few barriers in converting cash. It’s a solved problem.
Bitcoin and related systems are a solution to the double-spending problem. Perhaps a flawed solution based on what we know now, but a solution nonetheless. Some related systems, such as Monero, Zcash, and GNU Taler, make attempts at ensuring spender privacy, like cash.
But the computational power is necessary for the network to function in a manner that is provable to new nodes, because a digital signature can confirm that a transaction happened after some point in time, but not before it.
I don't think cash is a solved problem within the context of computer networks. If I could transfer money using a program by using a digital signature, I would be satisfied, but anyone who can get access to my credit card numbers (and name, billing address and other open source info) can make purchases in my name. And you of course must rely on the fractional reserves of some central entity.
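The ordering claim above can be made concrete with a toy hash chain. This is only a sketch — the block structure, names, and use of bare SHA-256 are illustrative stand-ins (real systems sign with ECDSA and add proof-of-work); the point is just that a block committing to a prior hash must have been created after that hash existed, which any new node can verify without trusting anyone's clock:

```python
import hashlib

GENESIS_PREV = "0" * 64  # sentinel "previous hash" for the first block

def digest(data: str) -> str:
    return hashlib.sha256(data.encode()).hexdigest()

def make_block(prev_hash: str, payload: str) -> dict:
    # Committing to prev_hash proves this block was created *after*
    # prev_hash existed ("happened-after" ordering). Nothing here
    # proves it was created *before* anything — that's what
    # proof-of-work timestamping adds in real systems.
    return {
        "prev": prev_hash,
        "payload": payload,
        "hash": digest(prev_hash + payload),
    }

def verify_chain(chain: list[dict]) -> bool:
    # A new node replays the hashes and confirms the ordering is
    # internally consistent, with no trusted third party.
    for i, blk in enumerate(chain):
        expected_prev = GENESIS_PREV if i == 0 else chain[i - 1]["hash"]
        if blk["prev"] != expected_prev:
            return False
        if blk["hash"] != digest(blk["prev"] + blk["payload"]):
            return False
    return True

genesis = make_block(GENESIS_PREV, "genesis")
b1 = make_block(genesis["hash"], "alice pays bob 1")
b2 = make_block(b1["hash"], "bob pays carol 1")
chain = [genesis, b1, b2]

print(verify_chain(chain))   # True: ordering checks out
b1["payload"] = "alice pays bob 100"
print(verify_chain(chain))   # False: tampering breaks every later link
```

Rewriting any old payload invalidates every subsequent hash, which is why retroactive double-spends require redoing the work for the rest of the chain.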
> Try taking $10,001 in cash into the US and tell me that again
You have to declare it. You’ll probably get follow up questions on how you got it and why a wire doesn’t work. But otherwise, large quantities of cash transit the U.S. border all the time.
The majority of the uses of cash are legal. The majority of uses of Bitcoin are criminal. And bear in mind, Bitcoin hasn't just been a boon to ransomware, it's been a strategy to evade financial sanctions by countries like Iran: https://www.reuters.com/technology/iran-uses-crypto-mining-l...
So there's a lot of reasons the US government just may find themselves happier without it.
Do you have a source for your claim that the majority of Bitcoin uses are criminal? Research by blockchain analysis companies show that only a small percentage of Bitcoin usage is illicit [1][2][3].
Bitcoin is easier than hard cash to track. There's no need to make it illegal. I suppose you could argue that government is heavy-handed enough to simply ban the mechanism by which ransomware payments are so easily conducted. My intuition is that government would prefer to regulate it rather than outright ban it.
How much success, historically, has the US government had at regulating math? I can't think of any, but it's not really my specialty so I'm curious if anyone's encountered a successful example.
They don't have to regulate math, they just have to regulate where cryptocurrency touches the "real" financial system which they're actually really good at.
I would assume most diehard bitcoin holders are just fine with no financial system entity touching them. After all, that's what many of them are trying to route around. They want peer-to-peer transactions independent of state actors, and have very little desire to hold BTC in their Fidelity-managed 401K portfolio.
Rather it's been the banks that have been clamoring to get a piece of the bitcoin action, not the other way around.
Ransomware is actually a legitimate threat to the well-being and health of all people. They lock down government and health records. It's a huge risk to the American people.
That doesn't mean you avoid making laws for the legitimate threats, it means you also keep tabs on how they're used. A system of laws, and a system of oversight for the use of those laws.
Yes, keep tabs on how it's used. But also, when it's being written, try to think about how it's likely to be misused, and write it in a way that it can't be misused like that. (Amusingly, I made a typo, and misused came out mis-sued.) Legislators try to write laws broad enough that they cover everything and can't be weaseled out of, but that leads to them covering more than intended.
Ehh, you can have reasonable security and still be a terrorism victim, you can have reasonable security and still be a crypto-ransomware victim.
This is like tut-tutting arson victims for using wood in the construction of their buildings.
I'm okay with encouraging reasonable levels of security while also making life horrifically miserable for people engaged in criminal enterprises that attack those victims.
So we are going to launch a trillion dollar war on ransomware which inevitably leads to more ransomware before patting ourselves on the back and saying "mission accomplished"? Are we also going to make ordinary citizens take off their shoes and get probed before using their computer?
So many of today’s problems result from a lack of morality. If the solution doesn’t involve Bitcoin then it involves religion. Security is impossible and digital security doubly so.
I wonder how many of these would be stopped by getting rid of SMB file shares. Not that that is really an option, of course, but things like OneDrive and Google Drive scan for malware during upload and often don't sync a file (especially a shared file) to a user's device until they specifically click on it. Seems like it would make it a lot harder to move around if you were malware.
(You can't do this if you want on-prem Active Directory or a good open-source cross-platform file sharing service like Samba, which means 90% of companies can't do this. And there are of course actual security things you can do [like blocking hard mapping of network drives in Windows] instead of the coward's way out I speak of.)
For me, "Terrorism is, in the broadest sense, the use of intentional violence to achieve political aims", and there are no political aims here: their goal is to extort as much money as they can from their victims. This is criminal activity, and any companies, small or big, that pay for it are feeding the monster and should be prosecuted. But the US has been ignoring it for years, and now it comes right back to them.
They haven’t said ransomware is terrorism, only that they’re going to prioritize it like terrorism. As in, it will follow a similar centralized reporting process. I don’t think the goal is to start sending hackers to Guantanamo or to categorize ransomware as a WMD. Not yet, at least.
The willful ignorance and non-action by states that provide a safe haven for launching these attacks seems potentially political to me. If the attackers are state-backed, then it's definitely political. If the attackers are not state-backed, it seems plausible that the state has decided to allow the attacks to take place because sowing chaos and discord in the United States is an aim of their government.
It's the international "finger in front of the face, I'm not touching you" game. But by any reasonable interpretation, yeah, it looks like the prosecution of ransomware groups is lacking for one reason or another.
So let's look at the chain of events: companies start to become monopolies, making billions of dollars that way. They become "too big to fail", important "infrastructure" for the US. Then they start to expose their users' data on public networks and don't follow proper security procedures. Now the public has to pay for the government to secure the megacorp networks! It's a non-stop scam, where they fail their (already small) responsibilities and use public funds to increase their monopolies!
It is good, but it still does not beat JIT. First, the MBAs and various JIT acolytes did everything to make sure there was nothing on hand or manufactured in the US, just in case it ate into the profits, and then when the 'everything shortage' happened, they had the balls to run to the government asking for bailou.. sorry.. incentives to move manufacturing to the US. It is fascinating to watch, because it is done with a very straight face and expensive lawyers.
The Colonial Pipeline is a monopoly? It appears to be a joint venture between at least 5 energy companies. Or to what monopoly are you referring? There is no mention of any other companies in this article.
When hackers start to interfere with American food and energy supply chains, it rises to the level of national security, IMHO.
With all due respect, it seems like you might be jamming this story into a pre-chosen narrative.
I too dislike megacorps, but you could say the same thing about a business being robbed - they most likely could have done something to prevent it but police will still respond and not charge them for it.
Well, it's one thing getting robbed when you took precautions like securing your back entrances, putting security cameras in your store, and putting the cash in some kind of safe, and a different thing if you take no precautions whatsoever and everything is out in the open.
Many of these companies that get hacked haven't even done the bare minimum, so it's not even remotely comparable to a robbery imo.
1. Make ransom illegal to pay.
2. Fine the hell out of any company that has not kept up with best practice in security. Require the board and exec staff to resign without payouts.
3. Make minimum jail time for ransomware hackers 100 years.
4. Make any hack that can be attributed to a loss of life (like shutting down a hospital) a death penalty offense.
5. State actors get economic death penalty - no US company or company that does business with a US company is allowed to do business (banking, etc.) with the state actor for 1 year for each offense.
6. Authorize NSA to retaliate in kind vs state actors.
At the height of the Roman Empire a citizen could walk the length of it without fear, because if they were attacked and killed, the legion would burn the responsible city or village to the ground.
We had the Cold War and not a Hot War because of mutually assured destruction. I fail to see a reason not to bring that balance to hacking by state actors.
Bla, bla, I am a bad person. No, I am suggesting a reasonable measurable set of steps that force the companies to do better while imposing great risk to the criminals and state actors.
Not the minimum jail sentences: The government will then just keep watering down the definition of "ransomware hacker" until all of us are technically eligible for 100 years of prison because of that one time we used an incognito tab to circumvent the NYT subscription nagware.