A handful of times, I was able to track down the person distributing (not authoring) the malware via social media, YouTube, Discord, GitHub and more (I even opened a GitHub issue respectfully asking them to stop distributing malware), and I was able to find the country they live in as well as their name (even a home address and cell phone in one case). I mention this because even with all that info, there isn't much I can do that would be worth doing to take action against them. I have filed IC3 FBI complaints for far worse and they don't so much as reply. I could get an "industry contact" to relay it to an actual special agent, but it would have to be something highly impactful like ransomware; I can't do that for every small-time crimeware operation I find.
Jurisdictions like Russia have a policy of looking the other way so long as you look the other way, and in some countries, even having actual cybercrime laws plus a diplomatic relationship strong enough to cooperate with their police can be rare.
But focusing on cost alone is a mistake; the threat actor's cost-benefit analysis is key here. In the 80s and early 90s, for example, big cities were a crime horror show because cops couldn't keep up and the reward of crime, relative to the potential reward of a law-abiding life, looked far better than it does today (well, that and lead babies!). I don't believe stop-and-frisk or "broken windows" policing made anywhere near as much of a difference as better opportunities, entertainment, education and economy did, along with the internet and tech making it harder to get away with crime.
> To my mind, the old proverb “opportunity makes the thief” describes the main issue with cybercrime quite well – the internet is a very “target-rich” environment, and it is incredibly easy/cheap to create a simple piece of malicious code or launch a basic attack.
It's also consequence-free. You can do whatever the fuck you want on the Internet, but unless you anger the wrong people (e.g. you hack a mega corporation or a hospital), nothing will be done.
A large part of the "cheapness" of cybercrime is that even though we know where a lot of the bad agents are coming from:
- enemy nation states like Iran, North Korea, Russia and China where the government itself has hacker groups or tolerates their activity
- neutral nations like India or Turkey where local law enforcement is bought off by scammers and other criminals so the masterminds get warned of raids in time
- domestic agents like ISPs who don't give a shit about abuse reports if there is no legal liability attached to them (i.e. everything but CSAM and copyright) because they don't bother to hire enough qualified staff to follow up on reports and get bad actors (e.g. people with compromised IoT or other devices) cleaned up or disconnected
... absolutely nothing is done against them, even if identified.
And on top of that: if you drive an unsafe car on the road, you'll get fined for being a danger to other motorists. If you have an Exchange server not patched in years reachable from the Internet, you're a danger to other systems on the Internet, and yet nothing can be done against you.
Our collective governments need to get their act together: nation states must be told to either clean up their act or get disconnected from the Internet and the global financial system, ISPs must face regulation requiring at most 6h response time for abuse reports and evidence of corrective action taken, and people being grossly negligent in keeping up with patches must feel consequences.
It's time for the laxness towards criminals and bad actors to end once and for all. We don't tolerate gangs of bullies intimidating grandmas on the street into extortion schemes, we shouldn't allow their cyber equivalents to do the same.
> It's also consequence-free. You can do whatever the fuck you want on the Internet, but unless you anger the wrong people (e.g. you hack a mega corporation or a hospital), nothing will be done.
Definitely not consequence-free if you don't know how to cover your tracks, and only consequence-free if you're in a country that doesn't extradite.
Iran / Russia / NK / China / etc. can play dirty because they're moving on a nation-state level and geo-politics, up to and including nuclear weapons, are a discussion topic.
The average kiddie re-using Indrik Spider code is gonna get some easy wins but will eventually get nailed.
One of the more interesting aspects of early sci-fi that hasn’t made it into reality (yet) is ICE, or security systems that “bite back” with physical feedback, pain, and potentially death. If body interfacing tech continues to develop and cybercrime becomes increasingly prevalent, this does seem like a possibility in a few decades.
Realistically it would never go this route. Instead, what we see in China with social credit might become more common, but even that isn't necessary. A simpler approach is something that already exists: denial of access to the online marketplace, or simply removing an individual's ability to pay for the things they need in their daily lives. As we move increasingly towards a digital era, who is able to access these resources can change. Movements like this will lead to increasingly worse problems as individuals trapped in these systems are forced into a life of crime. It's an interesting thing to think about.
It's a terrifying thought that your entire life can come to an end based on something as simple as your name being entered into a registry similar to a no-fly list.
I guess you could argue that in order to get correct/useful haptic feedback the full-dive interface has access to your nerves? And the interface is as counter-hackable as anything else in this world?
Thinking in-universe here, where this sort of system exists, wearing an equivalent of a full-dive-condom which prevents the feedback you speak of, would maybe make an operator too slow in responding to countermeasures or make the whole process the mental equivalent of walking through treacle. A skilled operator is more effective without it, despite the higher risk?
All of the above is pretty moot in the real world, though: computers can and always will operate at a million times the speed of a person. I can't really see the value proposition of being "in" the computer when all you'd ever really be doing is deploying icebreakers out ahead of you and waiting for the response.
You can also have worldbuilding in which there are technical difficulties in isolating or filtering the signals across a neurosynaptic link, so most people (other than the most resourceful hackers) have no choice but to expose themselves over a direct connection.
Galvanic isolation of high-speed digital electronics today is already fairly difficult. Imagine adding optical isolation to all the data ports on desktop computers, including the high-speed ones like 20 Gbps USB or Thunderbolt: it's entirely possible, but difficult and expensive, sometimes with compatibility issues. USB 2.0 High Speed is a notorious example; it's difficult to isolate (until it was recently solved by some new ASICs from TI and Analog Devices), not because of any inherent technical problem with data transmission, but because its signaling and protocol were not designed with transparent repeaters in mind. Thus, galvanic isolation is only used in highly specialized applications. As a result, a USB Killer can easily destroy most PCs, because the signal is often wired directly into the CPU (SoC).
One can only imagine the difficulties of doing the same for a neurosynaptic link in a sci-fi world. For example, in Ghost in the Shell, ICE is in widespread use, while isolating firewalls do exist but are rare, mainly used by intelligence agencies. They are also disposable devices and would be completely destroyed after an electrical overstress (not unlike real-world galvanic isolation...). Further, one could use bandwidth limitation to rationalize the "mental equivalent of walking through treacle" part of your plot.
Of course, as you've pointed out, as computers operate much faster than the human time-scale, the argument of bandwidth is not really that convincing.
We might still have mathematicians, but there was a job that was eliminated by calculators:
Calculators!
We have, over time, abstracted away the hard-for-people-easy-for-computers tasks, to the point where people often just do the hard-for-computers stuff, like interpretation of results and coordination of next steps.
Presumably the Net would somehow require physical feedback in order to browse and participate. Or, the functionality would be available (but not required) and ICE would counter-hack the hacker and overpower his local hardware settings and enable the feedback.
A more contemporary example might be something like headphones: as far as I know, limits on sound output are software-based, not hardware. Theoretically a hacker could modify these software limits and output extremely loud sounds.
> A more contemporary example might be something like headphones: as far as I know, limits on sound output are software-based, not hardware. Theoretically a hacker could modify these software limits and output extremely loud sounds.
Seems pretty much impossible.
True, my headphones could be modified for it. In fact I once looked into the firmware of a previous model, and apparently there was 16MB RAM, 2 cores, and sqlite in there. The darn thing could run DOOM if there was a display attached.
But an application doesn't get to talk Bluetooth to my headphones, it only gets to submit audio.
Besides that, audio is logarithmic. If the internal amplifier can produce 2X the software limit, that only adds 3 dB. Maybe annoying, but nowhere near bad enough to do serious harm.
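For anyone who wants to check the arithmetic: here's a quick sketch of the dB math (my own numbers, not the parent's measurements) showing that doubling power adds about 3 dB while doubling the drive voltage adds about 6 dB, so a 2x overdrive is audible but not dramatic.

```typescript
// Rough sanity check of the decibel arithmetic: a power ratio maps to
// 10*log10(ratio) dB, while an amplitude (voltage) ratio maps to 20*log10(ratio) dB.
function powerRatioToDb(ratio: number): number {
  return 10 * Math.log10(ratio);
}

function amplitudeRatioToDb(ratio: number): number {
  return 20 * Math.log10(ratio);
}

console.log(powerRatioToDb(2).toFixed(1));     // "3.0"  -> doubling power adds ~3 dB
console.log(amplitudeRatioToDb(2).toFixed(1)); // "6.0"  -> doubling drive voltage adds ~6 dB
console.log(powerRatioToDb(100).toFixed(1));   // "20.0" -> it takes 100x the power to add 20 dB
```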
Obviously the System can locate the terminal and direct all neighboring IoT devices to attack. Hack the wrong machine and your toaster reassembles itself into a killer bot that tries to zap you.
Maybe with a direct neurosynaptic link they would have no way to block their hacked hardware's behavior. That assumes the ICE will be more advanced/faster than the hacker themselves.
I am probably going to get flak for this, but this "scam flow" including a forged Microsoft login, is pretty much the problem I had when I first encountered OAuth and the "Login with XY" concept.
Aside from the fact that I can't keep the way the auth flow works in my head for longer than 2 days before my understanding of it becomes fuzzy again, a layman is not supposed to understand what's going on at all. Because if they were, they would ask "does this service get my XY password if I enter it here?" and unfortunately, trying to get this answer will lead them down a rabbit hole of auth flows, OAuth2 vs. OpenIDConnect and whatnot, because it's only ever documented for implementers.
The normal user is just supposed to believe in its trustworthiness, which may or may not be warranted on a technical level; that's not the point. If you keep logging in with XY to different services, you become conditioned to no longer question whether it's correct to enter your password right now, or where the login form, which always looks the same across different services, actually comes from.
But oh well, I don't know what the solution should be here, maybe mandatory 2FA yet again, or passkeys. The market will decide.
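Coming back to the flow itself: to illustrate how opaque the hand-off is to a layman, here's a minimal sketch of the OAuth 2.0 authorization-code redirect as I understand it. The endpoint, client ID and redirect URI are made-up placeholders, not any real provider's values.

```typescript
// Hypothetical "Login with XY" redirect under the OAuth 2.0 authorization-code flow.
// Every identifier below is a placeholder for illustration only.
const authorizeUrl = new URL("https://login.xy-identity.example/oauth2/authorize");
authorizeUrl.searchParams.set("response_type", "code");        // ask for a one-time authorization code
authorizeUrl.searchParams.set("client_id", "some-service-app-id");
authorizeUrl.searchParams.set("redirect_uri", "https://some-service.example/callback");
authorizeUrl.searchParams.set("scope", "openid profile email");
authorizeUrl.searchParams.set("state", crypto.randomUUID());   // anti-CSRF token the service checks later

// The service never handles the XY password: the user types it on the identity
// provider's page, and the service only receives the short-lived code at the
// redirect_uri, which its backend exchanges for tokens.
console.log(authorizeUrl.toString());
```

The point is that none of this is visible to the person staring at the login popup, which is exactly why the conditioning problem above exists.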
I get lots of unsolicited "Microsoft Login" popups and even as a sophisticated user, there's no satisfactory way for me to establish whether this is a trustworthy request or a fake login page.
For example, my company's VPN uses Microsoft SSO and will occasionally pop up a Microsoft Login window without me having requested it.
I establish trust via the autofill of my password manager. If my password manager doesn't offer a list of my Microsoft accounts for autofill, it's probably not a Microsoft website.
Yes, this is not bulletproof, because some companies have login pages on multiple domains. But at least it fails safely and causes me to become cautious when that happens.
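A toy sketch of that heuristic, with made-up vault entries; real password managers do far more (Public Suffix List handling, equivalent-domain lists), so treat this purely as an illustration of the "no autofill means slow down" signal.

```typescript
// Toy illustration of the password-manager heuristic: only offer credentials
// when the page's hostname matches a stored entry's domain.
// Vault contents and domains are made up for the example.
const vault: Record<string, string[]> = {
  "microsoft.com": ["work-account@example.com"],
  "github.com": ["personal@example.com"],
};

function accountsFor(pageHostname: string): string[] {
  // Naive suffix match; real managers are considerably smarter than this.
  const entry = Object.keys(vault).find(
    (domain) => pageHostname === domain || pageHostname.endsWith("." + domain),
  );
  return entry ? vault[entry] : [];
}

console.log(accountsFor("login.microsoftonline.com")); // [] -> no autofill, be cautious
console.log(accountsFor("login.microsoft.com"));        // ["work-account@example.com"]
```

Note how the first lookup comes back empty even though it's a legitimate Microsoft domain, which is the multi-domain caveat above: it fails safe, but it fails.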
Basically playing devil's advocate: I have a person in my family, a personal computer user since the 80s-90s, who still can't fully understand the concept of authentication realms, e.g. news.ycombinator.com.
From my observations, they would try to type in "the" username and password to "unlock" the system everywhere, and in the process realize that special instructions apply, then jump to an ever-growing switch-case statement in the exception handler that routinely skips and punches through the bottom. No matter what or how I explain that an identity is a set of [domain, id, secret], it does not stick. The schema was carved in stone long ago and isn't changing. I've naturally tried converting them to a password manager; it didn't matter. Falling into installing WinZip, twice, and subsequently paying for it was apparently far easier than using it. Whenever a login process cannot be completed, the system is considered to have "become unusable", and the signup flow is repeated to again "unlock" the system.
To include this kind of ordinary user, reducing the situations where realms matter is crucial. Passwordless login such as SMS and e-mail magic links is one way, password sharing among websites (dangerous) is another, and OAuth/OpenID/federated login is among those.
A user that feels uneasy typing XY account passwords into a browser-ish popup that claims to be part of the legitimate "Login with XY" flow is not a normal user. That is a competent, near-developer power user.
"Login with X" is OAuth, not 2FA.
2FA is a great addition on top of a password per site. Neither a FIDO device nor an Authenticator app provides the site with any extra PII.
Edit: SMS would give them your phone number, but SMS is a really bad 2FA and should not be used
I'm not so sure this is the case. Even less safe 2FA like SMS and email seems to prevent a whole host of cheap attacks. TOTP is much better and can be as anonymous as your login is.
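For what it's worth, here's a minimal TOTP sketch along the lines of RFC 6238 (illustrative Node code, not production quality) showing why it stays as anonymous as your login: the only thing the site and your authenticator share is a random secret and the clock.

```typescript
import { createHmac } from "node:crypto";

// Minimal TOTP sketch (RFC 6238 style): the only shared state is a random
// secret, so unlike SMS 2FA the site learns no phone number or other PII.
function totp(secret: Buffer, stepSeconds = 30, digits = 6): string {
  const counter = Math.floor(Date.now() / 1000 / stepSeconds);
  const msg = Buffer.alloc(8);
  msg.writeBigUInt64BE(BigInt(counter));                 // 8-byte big-endian time counter
  const mac = createHmac("sha1", secret).update(msg).digest();
  const offset = mac[mac.length - 1] & 0x0f;             // dynamic truncation (RFC 4226)
  const bin =
    ((mac[offset] & 0x7f) << 24) |
    (mac[offset + 1] << 16) |
    (mac[offset + 2] << 8) |
    mac[offset + 3];
  return String(bin % 10 ** digits).padStart(digits, "0");
}

// Both sides derive the same 6-digit code from the shared secret and the time.
console.log(totp(Buffer.from("12345678901234567890")));
```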
What is sad is that misuse of 2FA data is rampant and causes people to think (rightfully so) as you do.
I don't want MFA everywhere either. I said recently that KYC has gotten out of control. HN doesn't rise to the level of Yubikey. Reddit shouldn't have fingerprints. Musk can kick rocks with collecting PII and employment history.
To your point, the market will decide, but I'm hopeful passkeys will ultimately be one of the key solutions here. Already seeing a lot more app adoption (e.g. Shopify, Google, Docusign) than the original WebAuthn got, given some of the UX problems it brought with it.
BTW, I suspect there is a new (to me) type of scum in the US. No-names are trying to 'collect' debts while pretending to have some right to them. They get fresh info on possible targets on the black market.
Can’t wait for everything to be FIDO2 and security keys (phishing proof) and for these people to go get real jobs flipping burgers or something where their employers withhold their taxes…
Not too au fait with FIDO2 details. How exactly would it help in this instance if the user believes they are entering their details into a valid MS form? Is it that the attacker would only be able to log in once?
AFAIK WebAuthn uses the domain as passed by the browser. So the user might see micorsoft.com and think it's the real thing, but to the authenticator it's a different domain, so it won't hand over the keys.
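A rough browser-side sketch of why that holds (the rpId and function name are illustrative): the credential is scoped to a relying-party ID that must match the page's actual domain, and the browser enforces that itself, so a lookalike domain can't even ask for the real site's credential.

```typescript
// Browser-side sketch of a WebAuthn assertion request. The browser checks that
// rpId matches (or is a registrable suffix of) the page's real domain; a page
// on a lookalike domain asking for "microsoft.com" gets a SecurityError, and a
// credential registered for the real domain is never offered there.
async function signIn(challengeFromServer: ArrayBuffer): Promise<Credential | null> {
  return navigator.credentials.get({
    publicKey: {
      challenge: challengeFromServer,  // random bytes issued by the genuine server
      rpId: "microsoft.com",           // illustrative relying-party ID
      userVerification: "preferred",
      timeout: 60_000,
    },
  });
}
```

Which answers the grandparent question: even a user who fully believes the fake form is real has nothing phishable to type into it.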
Naw, it just looks like the same submitter, freedude, has been submitting the same temporal top-level over the past month rather than linking to each blog post's static address directly.
Off topic, but the stylesheet for this page is just a couple of tweaks away from not being unreadable trash on mobile.
I don't understand how in 2023 some sites still insist on using fixed minimum values instead of just adjusting the layout for narrow viewport widths in a media query. The images just need properly scaling thumbnails, and the gutters around the single column of text need to go away.
> they target pretty much everyone, and nothing demonstrates this better that generic, “un-targeted” phishing e-mails.
Had to stop there. This is written like it's a given that there's no way to avoid phishing emails, when there is. There are plenty of ways, and a lot of the criminality is maintained by those who dominate the sector, via standards or solutions.
>either immediately or very soon after they are delivered to their first recipients – detected and blocked by any security solution worth the name
I've always been taught that ignorance of the law is no defence, so unwarranted data sharing is ignoring various laws.
"Articles 13 and 14 of the GDPR require you to tell data subjects who you share the personal data with (the recipients or categories of recipients of the personal data)."
The email systems and associated security products, namely anti-spam and anti-virus software/services, are knowingly breaking the law, along with any recipient that chooses to employ said service to scan their emails.
Everybody on the planet who has an email address and an AS/AV scanner is breaking the law, but I guess hypocrisy can be overlooked even in the best educational establishments, whilst ignoring the ill-thought-out nature of law in a global world.
>JavaScript loaded from the external domain was not as simple as the rest of the attack (it was heavily obfuscated and "weight in" at 155 kB)
I guess dialup is still an issue for some.
>Proving that the cost of committing cybercrime can be really low.
The title is misleading. It should be called "The low, low effort of ...". There is no dollar value expressed for conducting such a simple attack. One needs to buy emails, then set up DNS and host the files on some servers. How do you pay for those servers? There are a bunch of interesting parts of this that were just not covered in the article, nor was there any attempt to show the actual cost, nor did it prove it is actually cheap. The cost would be dictated by the number of valuable emails you have and the ability to squeeze them into one campaign (minus the effort).
For crimes where the criminal's "happy path" result is to take somewhere from $50 to $50,000, you'd expect that the most important cost element would be risk of getting caught and punished, although I have the impression that enforcement is not actually that strict.
Remember the Ukrainian cybercriminal that was caught when he crossed into Poland? Yeah, you will need to remember never to go to any of those countries for the rest of your life. If a criminal ever wants to live a nice lifestyle (which is probably why they became a criminal, right?), then getting caught is a high cost.
For much of cybercrime, you probably don't pay for these servers. Many are compromised servers that were put up for a legit business and left some config file out in the open, or made some other silly mistake. In the case of phishing, they are hijacked, and by the time anyone notices, enough emails have been sent to create a lot of damage for the good guys and a decent profit for the bad guys.
As always, something will only be done to fix the unbearable lightness of committing a cybercrime when there is a realization that the price being paid by those who have the power to make a difference outweighs their benefits.
Often people only notice their badly maintained website has been hijacked to host phishing pages under some subdirectory when their whole site ends up in Google’s safe browsing shitlist.
I’ve investigated a metric fuckload of these cases at work.
Usually, the emails are from public leaks. So, free email list.
The hosting? More often than not, we find phishing panels on hacked web servers exploited through vulnerable Wordpress plugins or whatnot. So the servers are free.
Sending the emails? Usually a shitty PHP bulk mailer uploaded to the same compromised server as the phishing kit.
The effort on a lot of this is incredibly low, and the cost is mostly "time": public mail lists, infra compromised using public exploits, etc.
A bigger problem for me personally is the high cost of reducing developer productivity and increasing operational risk just for the sake of cyberponies trying to defend their job.
Also, I am not so sure the cost is that low. Well, for phishing attacks maybe, but what is the return here? Many skilled people have been caught doing 'cybercrime'. I just think that if you compare this to e.g. tax fraud, I would expect the risk/reward to be much better there than for phishing attacks.
And a bigger problem for me is the high cost of losing my job when some code cowboy leaks a bunch of people's data and passwords because "md5" was already in the standard library and easy to use.
Or someone replaced all the pictures on the website with hentai because a developer found this "really cool GitHub project" that saved him the hassle of "having to learn regex" or decided to outsource a bunch of customer analytics to "this really cool startup I saw on ycombinator. No I just paid with the company pcard, no I didn't read the privacy and data documents those are boring."
It's a funny world like that.
EDIT
Or the developer who set CORS to '*' because that was the only way to make it "work on my machine".
Or "Why is this random Serbian guy currently admin in our AWS account?" "Oh that's Gavrilo, great guy, he was one of the front end guys we brought in a couple of months ago to finish a project. We couldn't figure out the permissions for the S3 bucket though, so we just gave him admin rights. Should probably get around to removing his access. Cool dude, although he had problems with the Austrians for some reason."
>A bigger problem for me personally is the high cost of reducing developer productivity and increasing operational risk just for the sake of cyberponies trying to defend their job.
This is why programmers are not licensed engineers, and I have my doubts about it being a serious engineering profession.
"Oh, the bridge fell down and killed 15 people, but it was worth it because I built a lot of bridges this week"
> "Oh, the bridge fell down and killed 15 people, but it was worth it because I built a lot of bridges this week"
Not everything non-software engineers do is life or death either, and there are plenty of software people who do work in critical areas that don't have that attitude. This oft-repeated view is a caricature.
Licensed engineers make all sorts of crappy consumer goods that fail all over the place. Conversely, there are software engineers involved in medical devices, defence and various other spheres that follow very risk-averse, formalised coding, verification and release procedures.
The fact that a lot of consumer-oriented software is insecure, flakey crap is a consequence of time pressure and risk tolerance, as well as nebulous or amorphous requirements. Should I release this now, and try to capture the market, move fast and break things, adapting functionality as I go, with quality important but not that important?
Or should I spec out a complete and formal design, maybe formally verify it, implement to that spec and thoroughly validate it all afterwards? There'll be no product for five years but that's OK.
This latter category does exist, but isn't especially visible because it's not very sexy, it doesn't have 'hero' engineers, and there are few techbros getting rich off it.
> I have my doubts about being a serious engineering profession.
It's not. If it were, vaguely talented amateurs wouldn't run circles around people with degrees.
But to get it to that level, we'd have to figure out 1. how to teach it, and 2. how to measure it.
1 is probably more important, but you can't really have a licensed profession when the students being educated don't actually know how to do the job that you're licensing. You would have to set the standards so low that it would be a useless license.
I don't think the standards have to be low, but the software engineering field would have to be narrowed down. Only teach and use a certain few languages, only use certain tools, only use certain patterns, only use a specific few frameworks etc. That's basically what construction engineering is. There are many ways to build a house, but only a few that are approved and accepted.
It could maybe work, although new inventions would be slow to catch on. A bit like in construction engineering today.
Maybe for some security related tasks it should be a requirement in some cases. A bit like software for airplanes and trains etc.
Based on what you've described, I disagree. Most colleges have already winnowed things down to teaching in only one or two languages, usually Java and Python. And from what I've observed, most students still don't learn much.
We, as a field, don't know how to teach people to code. The closest we've gotten is showing people what an if statement and a loop are, show them a couple examples, and then tell them to screw around for a while until it clicks. And for some of them it does.
If you want standards to be higher than a random teenager who screws around for a summer, this has to change. Or you have to accept that ~75% of college graduates are not going to be able to go into their profession.
> Some software issues are caused by engineering, but most of them are just some implementation detail like an "!" in the wrong place.
Yes, most of them (by the numbers), because software is more complicated than a bridge, which has been a solved problem for a long time, and because even the worker bees who write code have a more serious relationship with the software than someone pouring concrete in construction, which is also a solved problem and, for the most part, has one true way to do it, unlike software.
If we had proper quality control and budgets for it, we'd have a lot fewer of the bugs you're talking about. I don't get your point though.
You are saying software is so easy it's not real engineering and software fails because some coder screwed up? I don't buy that.
The success of every project comes down to quality engineering which encompasses a lot of things that are outside of just the part of the job called coding (budget, schedule, requirements gathering, build, quality control, risk mitigation, redundancy, ongoing monitoring, hosting/infrastructure, maintenance, etc).
A civil engineer is more of an engineer than a coder (who isn't an engineer at all), which is the conflation that prompted my prior post.
However, a civil engineer is NOT more of an engineer than a real software engineer. I'd submit the opposite and the market agrees.
And if you don't believe that then you will eventually when there are more catastrophic failures in critical software. Probably AI or self driving cars when the stakes are higher. If it's a dumb bug as you allude to, that won't make it less of an engineering discipline. It would make it more.
I can't even believe there is debate about software engineering being a thing with all the incredible things that have been built.
I think programmers should re-evaluate the stakes that they're playing with. Even seemingly inconsequential things can have pretty large impacts at the scale that programmers operate. For example, if a form that you write doesn't properly support unicode characters then you might have just locked out millions of people from non western countries from using your software.
And even worse, if you're building a "tiny low stakes" piece of a much larger software then you might end up accepting money from those paying customers who can no longer use your software because your form doesn't work for them. (I've had this happen to me personally). And then people don't bother fixing it because it's only 0.01% of the userbase.
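As a concrete illustration of how cheaply this failure mode gets introduced (and fixed), here's a hypothetical name validator; the patterns are my own example, not anyone's production rules.

```typescript
// A hypothetical form validator. The ASCII-only pattern silently rejects
// perfectly normal names; the Unicode-aware one accepts them.
const asciiOnly = /^[A-Za-z' -]+$/;
const unicodeLetters = /^[\p{L}\p{M}' -]+$/u; // any letter plus combining marks

const names = ["Mary Smith", "José García", "Björk", "李明"];

for (const name of names) {
  console.log(name, asciiOnly.test(name), unicodeLetters.test(name));
}
// "José García", "Björk" and "李明" fail the ASCII check but pass the Unicode one,
// which is the difference between a working form and a silently broken one.
```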
But isn't that part of the poor comparison with other sorts of engineering? If I build a bridge or design a machine, I have a pretty good idea of who will use it and for what, and it's engineered for those use cases, and some reasonable edge cases. Plenty of medical and chemical engineering isn't tested on anything close to the breadth of humanity that could be harmed or excluded from using it.
There's plenty of "engineered" products that are designed in ways that don't support unicode or international electrical systems, or other factors that exclude "millions of people from non western countries", but they also just don't sell those products outside of western countries.
Just because I put something on the internet doesn't mean I'm thus required to support literally the entire planet's use of it, and I don't think that's "stakes". Sure, if your company has a Korean division and you forget to have Unicode support, that's a problem, but if you work for a regional company in Virginia, it doesn't seem like some failing of professional responsibility to not support Unicode in my forms.
Did you mean English-speaking countries? Because most European countries (which I would assume are part of the "West") require Unicode support, as first and last names include non-ASCII letters.
Also, if you work for a "regional company" in Virginia and you don't support unicode you're likely excluding 11.5% of the Latino population that do make use of non-ASCII characters in their names and Asians, that are 7% of the population. So, yes, it is a serious professional failure to do that, even in "Virginia".
You're somewhat right, though I was just parroting the parent comment's example. If anything, you're pointing out what a terrible example it is. If you drop unicode support, that's not "not thinking about your immediate users", it's shipping a pretty broken product in general in a way that's easily fixable in most projects, which transcends the "is it engineering" line by a good bit.
That said, I'd be curious what percentage of people with non-English names actually require Unicode for their names. Not every Asian or Latin person I know, or even most of them, uses non-ASCII characters in their name, and very few businesses or applications require you to enter a full legal name that needs to be accurate to the language used to name you. I work at a company with a large number of international employees and I'm not sure I've seen anyone with a non-ASCII character in their name, and I'm pretty sure Active Directory and Slack support Unicode. So while it would be a mistake to not have Unicode support, I am curious how much it would actually cause an issue. It would be inconsiderate to not support it on a form, but there are plenty of businesses that could probably operate just fine with only Latin characters.
You haven't seen them because they have already been burned by stupid systems that will not take non-ASCII characters so they don't even try, the most common Spanish/Portuguese first name is José but I doubt you'll find a José at your job.
Sure, but then that's a very loose meaning of "requires" unicode support if all the people who "require" it probably aren't even going to try to use it even if you support it.
Ironically, isn't part of the issue that most "engineered" keyboards in the US don't have any innate way to type those, and thus require your "programmed" OS or application to have the support needed to even type those characters with some chord of keys?
So wouldn't you agree that in a sense, the stakes for programming is much higher? Because you can have a global impact just by putting something on the internet, I think programmers should take more care into the work that they do and not less.
Not at all. There's only a handful of bridges over the river near me that I could use to get to my friend's house. There's only one AC unit in my house that's keeping my house in a livable temperature. A failure in almost any "engineered" thing in my life would have more impact than the loss of literally any programmed thing in my life, and the only programmed things that come close are treated more like engineering projects in my experience.
Regardless, none of that was the point I was making. You're claiming that because code could run anywhere, that it's therefore every programmer's responsibility to make it work everywhere, because that's "engineering". My point is that Engineering is nothing like that - most actual engineering is of a vastly more defined and constrained scope than most software. My mechanical engineering friends spend years building, say, an AC unit that only is ever sold to something as niche as hotels within a certain latitude range in North America.
Do engineers have to be more robust? Often yes. Should some software also be developed to that level of rigor? Yes. Should all or even most software be required or even expected to have that rigor? No.
It’s high stakes in the sense that leveraged trading is high stakes. One person can provide value to a million customers, but it’s unrealistic to expect that person to be able to cover one million people’s worth of edge cases.
So what’s better, keeping the ability for one person to have an outsized impact in improving others lives along with some caveat emptor, or bolting down the industry to the point that one line of code costs several hundred dollars?
> So wouldn't you agree that in a sense, the stakes for programming is much higher?
That doesn't mean the stakes are higher. It means the potential upsides are higher.
The potential downsides are also much lower (a bad website will rarely kill someone).
The combination of these two incentivize bringing in as many people as possible, even if standards suffer because there's almost no downside (civilizationally speaking, companies do occasionally lose quite a bit of money).
That's probably true although I think eventually, that kind of callous line of thinking might actually get someone indirectly killed because programmers don't worry about the impact of their code.
A less severe (although in my opinion still life-changing) problem is having an unusual name that makes it harder to book flights. Another example is where having a NULL license plate got someone thousands of tickets [1]. Or IP addresses getting mapped to locations they don't actually come from, and now someone's house is getting searched because "their IP had illicit activity".
Coding probably won't kill someone but I still don't think it's low stakes.
The really sad part is that there's nearly no defense against this, other than checking the code points of all the text you see. At least punycode display is available for domain names in the URL bar, if it's turned on.
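To illustrate the punycode point: the WHATWG URL parser (in Node, and inside browsers) converts internationalized hostnames to their xn-- form, which is about the only reliable tell. The lookalike below is the well-known Cyrillic "apple" homograph demo, used here purely as an example.

```typescript
// The second hostname uses Cyrillic letters that render almost exactly like
// Latin ones. Parsing it exposes the punycode (xn--) form; visually the two
// strings are near-identical, so the raw Unicode is useless as a tell.
const real = new URL("https://apple.com/");
const lookalike = new URL("https://аррӏе.com/"); // Cyrillic а, р, р, ӏ, е

console.log(real.hostname);      // "apple.com"
console.log(lookalike.hostname); // "xn--80ak6aa92e.com"
```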
Those stakes are often misjudged or simply ignored. Nobody cares if a thing takes 10000x more time than it could if it works, right? Boom, you just wrote Windows Update that wastes centuries of human time each day.
And we haven't even gotten to all the effort that is wasted due to insecure software, hacked devices and leaked data.
Software can have high stakes very quickly even if you don't expect it. Like that time hundreds of UK post office workers were fired (and worse) because higher-ups decided the faulty IT system was more trustworthy than hundreds of people. And that's still rather harmless compared to what's possible.
It is, but if you get it wrong, you can really screw things up. Not trying to get it right is not engineering, and just hoping you get it right is not trying.
Software engineers shouldn't be, and shouldn't want to be, licensed engineers. Being licensed is a pain in the ass, involves a huge bureaucracy (have fun learning about thousands of ISO standards for trivial stuff for your license), and stifles innovation. We accept certain jobs requiring bureaucratic oversight because the effects of too-low standards are unacceptable to us, but repeat with me: "licensing is a necessary evil, not a good in itself". It's ridiculous to demand that every library author or writer of a corporate website be licensed.
All security is a balance between risks and costs, in this case productivity.
If I believe the Verizon DBIR reports -- and I do -- around 20% of breaches are straight up errors, screw-ups, and accidental disclosures.
After that it's hacking web applications, at around 30% of the breaches.
Keeping these things from happening starts on the developer level, and if I find out a software suite that I'm using in the Enterprise ain't doing their security due-diligence then they're fucking gone, like ASAP; security for highly vetted, well-protected systems is hard enough, and those people are trying.
> Many skilled people had been caught doing 'cybercrime'
There is getting in, getting out, and getting in and out cleanly. Just cuz they didn't get out cleanly and got arrested -- eventually; could be 4 years later -- it doesn't mean they can't do massive damage until then. Shutting down work, destroying data, exposing secrets, whatever.
And these are the ones that you can arrest, cuz plenty of them won't be in countries that will extradite.
I can get on board with education about social engineering hack attempts, but the amount of spyware on my work machine feels like institutionalized cyber crime. I have work that is I/O intensive, and the third party AV makes sure that's as slow as possible.