This article makes so many unfounded assumptions in order to make a point.
> Presumably they are going to immediately make themselves admin, or wire all your bitcoin to their account.
Attackers running scams like a sophisticated BEC will lay dormant for long stretches of time to gather information before acting. Sure, they can export the emails and set up auto-forward rules to maintain visibility when the session expires, but they've now made a lot more noise to detect on. I've seen threat actors view mailboxes once a day for months before they launch this scam.
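That extra noise is exactly what defenders can key on. As a hedged sketch (the event shape here is invented for illustration; a real deployment would parse actual audit-log records, e.g. Exchange mailbox rule-creation events), a simple detector for newly created external forwarding rules might look like:

```python
# Hypothetical sketch: flag mailbox rules that forward mail outside the org.
# The event dicts below are made up for illustration; real audit records
# (such as Exchange "New-InboxRule" operations) have a richer schema.

INTERNAL_DOMAIN = "example.com"  # assumed company domain

def external_forward_rules(events):
    """Return rule-creation events that forward mail to an external address."""
    flagged = []
    for ev in events:
        if ev.get("operation") != "New-InboxRule":
            continue
        target = ev.get("forward_to", "")
        if target and not target.endswith("@" + INTERNAL_DOMAIN):
            flagged.append(ev)
    return flagged

events = [
    {"operation": "New-InboxRule", "user": "agent", "forward_to": "drop@attacker.net"},
    {"operation": "New-InboxRule", "user": "agent", "forward_to": "boss@example.com"},
    {"operation": "MailItemsAccessed", "user": "agent"},
]
print(external_forward_rules(events))  # only the external-forwarding rule
```

The point isn't the code itself, it's that persistence mechanisms like auto-forward rules leave artifacts that a once-a-day mailbox read does not.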
> Also, it would be better to protect against this by securing the logs or using hard drive encryption.
Of course it would, but often it's not. It's that simple. It's crazy to think the person responsible for writing a secure app is also the one making decisions on endpoint encryption.
> some applications are used strictly within an company from company devices
Some are, lots are not. This reads like someone who has worked in enterprise environments with well-funded security teams, not at a small business with one IT guy running the show.
> But even then, the attacker could install a browser extension that sends your credentials to them the next time you log in.
This contradicts the rest of the article. Why is a company securing logs, encrypting disks, locking down where users can access apps, but then allowing anyone to install browser extensions?
I agree that short sessions are not the quick fix that some devs make them out to be, but the author is ruling out a perfectly acceptable control based on an imaginary end user setup.
I recently had a BEC on my desk where they had gained access months earlier to a real estate agent's mailbox. They took the time to create perfect forged documents and understand the agent's workflow. Finally, it was time to tell a buyer where to send their earnest money, and the execution was perfect. They made a mail rule that captured the RE agent's outbound message and then sent their own, an exact replica with just the account number changed. Even if the buyer had called to verify the message it would have been fine, because the agent really did send a message.
Of course finance people are used to stuff taking an arbitrarily long time (partly the users, partly the system) so they were able to do this several times before anyone raised the issue of MIA transfers.
Oh, and we don't know the exact date of the compromise because the customer was not paying for good log retention from Microsoft or exporting the logs to any kind of collector. We were able to uncover a lot, but I wonder how this goes for indie RE agents that do everything out of AOL or whatever.
Can you elaborate more on how the UK's workers rights are "literally worse than the US"? I would say things like statutory sick pay, mandatory holiday allowance, protection from unfair dismissal and the right to uninterrupted breaks are all pretty progressive compared to the States.
There is no worker protection for the first 2 years of your employment with a company, so you're essentially an at-will employee (worse, actually: you have fewer rights and still need to give notice).
You also have only laughable unemployment benefits (£85 per week) that don't depend on your contributions.
Again, completely untrue. Automatic unfair dismissal does not have a minimum tenure to be applicable. Here [1] is a handy list of protections that do not require two years. You've also conveniently ignored all the other benefits in that list that are not available in the US.
You've already been corrected by someone else on the unemployment benefits so I'll not waste time repeating that.
The UK has a lot of problems, but downplaying workers rights in comparison to the states is a strange hill to die on.
This article is from 2019, but it was my recent experiences with autocorrect which led me to finding this. My phone will occasionally have a few days of complete autocorrect meltdown (words completely out of context, random capitalisation, Spanish?) before normal service resumes.
While I agree with the sentiment around not paying, I don't think it's as simple as that. Calling on law enforcement to "track down the adversary" is not easy, and when you track it back to a random Russian cybercrime group what can you do with that information?
A lot of these payers are not Fortune 500 companies with unlimited IT budgets; they're small or medium businesses with a three-person IT team. Should they have proper off-site backups? Yes. Should we just let these companies go out of business until organizations learn their lesson? I would say no.
I really like the idea of making payment more difficult and mandating organizations to report these incidents. You're correct, companies do have the incentive to cover things up. Banning payment won't stop that.
I'm interested to see how people will circumvent this if the bill passes. If you pay a third-party company who "deal with the issue" on your behalf, all under legal privilege of course, would you still need to report?
> Should they have proper off-site backups? Yes. Should we just let these companies go out of business until organizations learn their lesson? I would say no.
Can you expand on that? Bad management leading to criminal interaction seems like something we'd be better off without.
An organization going out of business isn't just a case of bad management being eliminated.
I wrote that line thinking of the clients I've worked with who've been hit by ransomware and didn't realize IT were not doing their job until it was too late. In some cases it's a failure on their part - not investing enough time or resources and seeing IT as "the guy who installs Windows". More often than not, they were assured it was taken care of. I don't expect a manager of a car dealership to know if their Exchange server is running recent patches. If companies like SolarWinds and Kaseya can get popped and compromise their downstream customers, think of the number of small MSPs causing that same issue every day. I don't think a business should go under with people losing their jobs because IT screwed up.
We would be better off without leadership who take no interest in security, and once a company is hit with a 100k ransomware bill you can bet they'll care going forward.
> More often than not, they were assured it was taken care of. I don't expect a manager of a car dealership to know if their Exchange server is running recent patches.
You're not wrong, but would the same dealership be as blasé about the assurances from their accountant that all their taxes are being paid?
Certainly no one can be an expert in everything, but regular audits from third parties of one's business at semi-regular intervals is prudent. We call in an external IT security auditor regularly ourselves to make sure we're not missing things and still following best practices.
>Certainly no one can be an expert in everything, but regular audits from third parties of one's business at semi-regular intervals is prudent. We call in an external IT security auditor regularly ourselves to make sure we're not missing things and still following best practices.
You're absolutely correct, up to a point.
As an InfoSec professional, I've been on both sides of such audits. Sometimes they're quite good. Sometimes they're awful. Usually, they're somewhere in between.
What's more, just because an audit has been performed (even a really thorough one), there's no guarantee that the recommendations will be applied, or even if they are, that they will be applied competently.
Leaving that aside and assuming that everything is done properly and thoroughly, regardless of all that hard work, it just takes one non-technical resource to click one link, and ransomware could be loosed on your network.
There are, of course, mitigations and, hopefully they are all in place and just the one desktop/laptop system is compromised.
All that said, many organizations don't have the time, money or expertise to properly secure their environment, let alone bring in outside auditors.
Medium/large companies with such resources should absolutely do all of those things. But the vast majority of companies in the US are SMBs who likely don't have those resources.
I'm not making a value judgement either way about mandatory reporting, but I don't agree with your assessment.
> when you track it back to a random Russian cybercrime group what can you do with that information?
Having solid evidence of state-sponsored (or egregiously tolerated) criminal attacks on Americans is the first step to building will to launch (cyber) counterattacks, or at least credibly threatening them.
I'm a one-man IT show for a few small- and mid-sized companies, and I had to manage an incident a few years ago where one of the companies was taken down for several days under a massive DDoS attack. This was accompanied by a ransom email to me. I disclosed the email to the company owner and he asked if we should pay it. I told him I wouldn't pay it if he ordered me to. I was sleeping on the floor for an hour at a time - the host handling the dedicated server basically said we had to go and threatened me that I would have to pay for their downtime, and the attack was large enough to shut down their connection to the transatlantic cable, so I had to fight around it for 48 hours while trying to quietly exfiltrate our data off the server through another one I had in Europe at the moments I could connect. I was contacted by the FBI and ultimately they found the assailants and one of the people behind it went to prison for a couple years; I got a judgment against him for my cost mitigating the attack (although it's symbolic, obviously. I'm sure I won't see a dime of it). He was just some schmuck in Florida.
TL;DR - if everyone refused to pay there would be no profit in it. And if everyone had to have IT staff who were competent, or worry about being fined for malfeasance, there would be no question of paying a third party to deal with something quietly. It's right and proper that authorities get involved. Even if they do find it's some troll farm in Russia, they can sanction and block in a way that small companies cannot. It's one of those situations where you stand together or die separately.
>if everyone refused to pay there would be no profit in it
Of course it's very saintly of you to refuse to pay, but often not paying is a very bad business decision. You can be sure that enough people will pay that your refusal won't make any difference.
It wasn't for saintly reasons; I saw clearly that it would do no good. If we paid, there was no guarantee the attack wouldn't come back the next day. It would solve nothing - actually, it would be worse because it would put us at their mercy. If we couldn't overcome it, get back online and block the next attack, then I didn't deserve my job and the company would have been better without me. Paying wouldn't make the problem go away, it would have made it infinitely worse for me.
Still, the GP's point stands. Especially as attackers create a "brand" and establish trust -- if a few quick searches show other people reporting that they paid and things were restored, then you can bet people will feel much more at ease with the idea of paying and actually being left alone after. When a business depends on it, when data loss is at play instead of just downtime, even more so. People will pay.
I think paying is a dumb move, regardless of the "brand" of extortionist you're dealing with. It just signals that you're a mark. It might briefly make an executive's life easier, but it'll come back and bite you.
I didn't refuse to pay because I thought it would be inspirational or set an example or bla bla bla. It was because I'm not a dumb mark, and I'm not going to let my clients be. And personally, I'd rather burn my house to the ground than let someone rob it.
>"And personally, I'd rather burn my house to the ground than let someone rob it."
That is your personal choice and you're more than welcome to go up in flames if that is what you wish. You should have no right, however, to force your personal choice upon others. They might have a different perspective.
Yeah, but this is just completely detached from real life. Companies will always pay; even if you make it illegal, companies will still pay.
Will fewer companies pay? Sure. Does it matter? No. Ransomware gangs wouldn't go anywhere even if their average payments got cut down by 90%, and the stuff they might switch to (BEC) isn't going to go away either.
Maybe, but even if just a few don't pay, they will do other things. The attacks ransomware uses will become harder and harder to exploit. (already it is a lot harder than 20 years ago). More things will be invented to prevent them in the first place. Maybe formal proofs of all code? There are a lot of things that companies who aren't going to pay will start demanding of their vendors who they will pay.
> Maybe, but even if just a few don't pay, they will do other things
You're joking. There are already many who don't pay, payment rates could fall by 90% and it wouldn't slow them down a bit. You clearly have no idea how hugely profitable ransomware is.
If fewer companies pay, the ransomware operations will just scale up their customer support teams and further automate deployment. This really isn't going to be a problem for them.
> Maybe formal proofs of all code? There are a lot of things that companies who aren't going to pay will start demanding of their vendors who they will pay.
Haha. Funny. Have you ever worked with formal proofs in a software context?
> There are already many who don't pay, payment rates could fall by 90% and it wouldn't slow them down a bit. You clearly have no idea how hugely profitable ransomware is.
You misunderstand. Those who don't pay still have the problem. They will invest in solutions. Some (like good, well-tested backups) only affect them, but others, like hardening software, make it harder for ransomware to get to anyone in the first place.
> If fewer companies pay, the ransomware operations will just scale up their customer support teams and further automate deployment. This really isn't going to be a problem for them.
True. Though the fewer companies that pay, the more examples of not paying get out there, and so the more likely it is that other companies will get good protection for themselves.
Probably not enough to really affect profits too much, but still helpful to limit the amount of investment "big evil" can afford to do.
> Have you ever worked with formal proofs in a software context
Just a little bit. I'm looking to work with them more because for my area quality is important and we have reached the limits of what unit and manual testing can do. (but not the limits of other automatic code analysis which I'm also looking into)
>You misunderstand. Those who don't pay still have the problem. They will invest in solutions. Some (like good, well-tested backups) only affect them, but others, like hardening software, make it harder for ransomware to get to anyone in the first place.
Even those who pay are also investing in solutions, but the reality is that throwing money at this isn't going to make ransomware go away. Billions have been poured into this, and billions more will follow. What does that money get us? Mostly snake-oil antivirus products, and big full-page ads in the Economist for said snake-oil products. I can't imagine that we're going to see significant results on a timeframe you'd consider acceptable; maybe in 20 years.
>True. Though the less companies that pay, the more examples of not paying get out there and so the more likely it is other companies will get good protection for themselves.
>Probably not enough to really affect profits too much, but still helpful to limit the amount of investment "big evil" can afford to do.
Honestly, I think stories of companies paying out huge amounts have more of a chilling effect on ransomware than stories of companies refusing to pay. The huge ransom payment gives everyone a concrete number to be afraid of; a refusal to pay is a non-story unless it causes devastating damage to the company in question.
>Just a little bit. I'm looking to work with them more because for my area quality is important and we have reached the limits of what unit and manual testing can do. (but not the limits of other automatic code analysis which I'm also looking into)
The problem here is that you will never be able to produce a useful formally proven general purpose desktop software stack that would present meaningful advantages over current systems. Formal verification really only works for very simple pieces of software, and in any case, formal verification is only as good as the model you are verifying against.
We're not going to see formally verified web browsers, nor are we going to get a formally verified Microsoft Office suite.
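For a sense of the scale where formal proof does work today: machine-checking a property of a tiny, pure function is tractable; an office suite is not. A minimal sketch in Lean 4 (assuming only the core library, where the lemma `List.reverse_reverse` ships by default):

```lean
-- Feasible formal verification: a machine-checked proof that
-- reversing a list twice returns the original list.
theorem double_reverse (l : List α) : l.reverse.reverse = l :=
  List.reverse_reverse l
```

Properties like this, over small self-contained data structures, are where verification pays off; the cost grows explosively with the size and statefulness of the system being modeled.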
Very telling how the "real" people fight over how much they can deny any and all "saintly" motivations. Edit: apologies to noduerme, who appears to have dealt with a serious situation. Respect.
Second thought: here in the USA in the mid-2000s there were waves of identity theft and also mortgage fraud. Massive waves, very large numbers of accounts and even larger dollar amounts. I believe it was bank-related employees and USA-based people who knew the credit system performing quite a bit of all of that, with eyes open! The reputed phrase about the consequences down the road was "you won't be here, I won't be here".
Sure, name-your-enemy Eastern Europeans are caught doing these things, outside the reach of casual US law. But is it ONLY outsiders? Or do you just not know how badly your own money-system employees are stealing from you right now?
There are countless reports, studies, Gov intel briefings, even whole books, all pointing towards Russia and neighboring countries being a huge exporter of these style of attacks. I'm not saying ALL ransomware is from that region, but the industry agrees that a huge percentage is.
Do you have something to the contrary, or is this just a hunch?
my interest in this is more on the "saintly" side, so please take all my comments as naive
It appears from a casual read of spy-versus-spy sort of documents, like tell-all books or dramatic movie scripts, that "false flag" is a standard play, since forever. Second, mysterious enemies that speak in unintelligible tongues, are the perfect cover for any institution failing its own constituents.
My experience in life is that money people are attracted to money jobs, which then handle money with imperfect rules. Experience also says that business is often a dog-eat-dog world where company loyalty is repaid with termination or lower wages. When confronted with the dismissal of "that's just business", alcohol and painkiller abuse are common responses from an injured individual. Multiply that by thousands, add mortgage pressure, kids' college pressure, or just skid-row entrance behavior, and you get "corruption".
Creative examples from the 2000s, when the aforementioned mortgage fraud ran like fire through the USA: a middle-aged immigrant meeting discreetly at a restaurant, discussing a long-term siphoning and fraud scheme with a colleague, gets back into their mid-range luxury car and goes to their suburban home in the USA; an alcoholic single mother forges a few signatures while on ordinary day duty with no one watching; an up-and-coming sales guy with a sports background wants to "level up" on their property purchase next year.
I do understand that non-US people, non-English-speaking people, do run blatant scams. I myself have caught a PHP hook script on a server I ran, which pulled an infection from some Bulgarian hacker board via a single ID number, like a menu. Of course that is true! But I object to fueling prejudices by nationality, of people.. human beings.
Most humans are innocent of all of this activity. Yet almost every single Western adult must have money in an account somewhere. The fundamental divisiveness of the Western monetary system, now under pressure from lockdowns, retirement and health problems, appears to me to be reaching East Germany levels, and I object to wholly and readily attributing this to "outsiders".
It's a "meme". Unless you refer to the really badly google translated screenshot, in which case yes, but I've now learned sufficient Russian that it's easier for me to read without.
>why do you have those screenshots?
I follow a bunch of these forums for intelligence gathering purposes. There are companies paying ridiculous amounts for a few of these screenshots and a little accompanying text, it's apparently called "threat intelligence".
> Of course it's very saintly of you to refuse to pay, but often not paying is a very bad business decision. You can be sure that enough people will pay that your refusal won't make any difference.
And this is something where the law can help. Paying a ransom in these kinds of situations needs to be a felony. The punishment needs to be dealt to everyone who acted or knew and didn't report, and it needs to be harsh enough that even in the worst cases people would rather call the cops and report than risk it. (Or, at least some employee in the organization would report it and save their own hide than risk it to protect their boss.)
The ransomware crisis is really bad now, and it keeps getting worse. It will not stop getting worse until the money dries up. Small businesses cannot be expected to have the level of IT expertise needed to guarantee they can't be hacked. The reason they weren't victimized before at this level was that the money wasn't there. There are enough places in the world where the authorities will look the other way (or actively cheer on the criminals), meaning this will not end until the flow of money is stopped.
If you want to mitigate the losses caused by ransomware gangs, create a subsidized insurance system that helps the victims. The catch: that insurance is not allowed to pay off the criminals, only to help the business get back on its feet.
Companies would still pay even if making ransomware payments was a felony. The ransomware gangs would not go away, companies would just have a bigger incentive to hide ransomware attacks.
These groups aren't going to go away even if 95% of companies suddenly stopped paying, deploying ransomware costs next to nothing. There's also a huge incentive for ransomware actors to punish this sort of regulation.
> The ransomware crisis is really bad now, and it keeps getting worse.
How come everybody is crying about the "ransomware crisis", but you never hear about a "BEC crisis"? BEC losses are bigger than ransomware losses, and they keep getting worse.