I think the developer community needs to start ostracising people working for these companies. Don't hire former employees, don't hang out with people who work for these companies at conferences.
Don't supply services to these companies (build their website, network...).
I believe that by letting people off the hook for participating in this (similar things can be said for e.g. the NSA) we are essentially endorsing the behaviour. If you work at e.g. NSO Group, you are personally responsible for governments suppressing and even killing critics (just look at SA).
Ostracising someone from society based solely on where they work, without looking at their actual actions, is implying guilt by association. A tactic often used by authoritarians. Everyone in a civilised society has the right to a fair trial without the presumption of guilt.
Finally, somebody's brave enough to say the truth!
I've been helping with some work for a small local gang- we do the usual (murder-for-hire, "debt collection", extortion, etc). Although I only do administrative work - keeping records and such. Pays great. But you know what? My wife- my wife of five years- left me when she found out.
Can you believe that? What a fucking fascist. I didn't do anything wrong. I never killed anybody. And, sure, I did also help machine firearms for folks, and I did help with some supply chain issues to make sure we have a reliable supply of bullets, but I never shot anyone. Not one person.
Unclear whether this is legit, or if you're being facetious to demonstrate a point. I'm gonna bet on the latter, seeing as this is HN!
You intended to supply a local gang with guns and ammo to earn profit from it, along with the other actions you took. You purposefully set out to profit from their criminal behaviour in full knowledge of what that entailed.
I'm not surprised your wife left you. Good on her.
Here is a circumstance where your point is not valid, where there is no malicious intent:
- Developer A in dept X finds out developer B in dept Y is working on Z. Is uncomfortable with anything to do with Z.
- Dev A raises this with line manager C and gets pushed back.
- Dev A tries to raise this up higher. Gets pushback.
- Dev A decides to leave the company because the workplace has now become increasingly hostile.
Dev A tried to do the right thing and raise the fact that project Z was unethical. By presuming guilt by association, Dev A is treated exactly the same as Dev B.
Consider engineers at Google. Did every Google engineer work on Project Dragonfly? Did every engineer know about it before it was leaked to the press? Does the Project Zero team work on ad tracking?
Bringing it back to your example now. If you were an accountant for a printing shop that just happened to be a front, but you never knew about it or suspected it, that's another story. There's no intent to profit from or knowledge of the criminality. Now you're an innocent bystander who was taken advantage of.
If your wife left you in this situation, I'd feel for you.
This is why we presume innocence until guilt is proven. I, for one, would rather some guilty people slip through the net of justice if it helps us to not habitually punish innocent people for crimes they did not commit.
The world is not perfect, nothing is ever black and white.
Anyway, we can evaluate the permissibility of moral actions using the principle of double effect. As you suggest, we do not always have the luxury of choosing courses of action without some kind of negative side effect. At the same time, it is not morally permissible to engage in intrinsically immoral acts (sorry, utilitarians/consequentialists), nor is it permissible to intend the evil effect. We may also not use the evil effect as a means of attaining the desired good. Finally, there must be a proportionality between the good and bad effects that justifies the toleration of the bad effect.
I am genuinely confused by your comment. Are you, again, being facetious or are your arguments just bad?
> it is not morally permissible to engage in intrinsically immoral acts (sorry, utilitarians/consequentialists)
How would anyone define an intrinsically immoral act? It seems dishonest to discard well-established schools of thought while ignoring the very premise that makes them relevant.
> We may also not use the evil effect as a means of attaining the desired good.
> Finally, there must be a proportionality between the good and bad effects that justifies the toleration of the bad effect.
These two statements directly contradict each other.
Cool, so there are areas with shades of grey.
Claiming that you somehow were not aware of what NSO is doing is just not one of those; at least, I won't give you the benefit of the doubt if you're working there as a dev. Likewise, if you work for Hacking Team, you know what you are doing.
> Claiming that you somehow were not aware of what NSO is doing is just not one of those
The parent comment of the comment I replied to (try saying that twice as fast backwards) was attempting to point out that we should avoid using guilt by association. i.e. We should focus on innocence for the individual until guilt is proven.
How do you know with absolute certainty that no developer (who has ever worked at NSO at any point in time) ever said "this is completely illegal and I'm not comfortable with being near it."?
How do you know with absolute certainty that no developer (who has ever worked at NSO at any point in time) ever said "You know what, I'm really not comfortable doing this work. I thought this was a good gig and I'd be okay with this type of work... but I'm really not. It's killing my soul and I can't stand it."?
> at least I won't give you the benefit of the doubt if you're working there as a dev.
Fair enough. You're entitled to that position.
For me: People can make mistakes. People can get in over their head. People can mistakenly believe the lies other people tell them. I'd rather assume someone is innocent until guilt is proven by evidence.
> Likewise, if you work for Hacking team, you know what you are doing.
What even is a "Hacking" team?
Project Zero could be considered a "Hacking" team. Are they bad people for doing what they do? We know about loads of new zero days thanks to them. Extending this, am I part of a hacking team? I do white hat research. Does that mean I'm bad?
Do you mean "malicious adversary" perhaps? Because that is an entirely different concept. Then we are dealing with malicious intent. That's when someone may indeed be guilty (if backed up by evidence, of course).
> HackingTeam is a Milan-based information technology company that sells offensive intrusion and surveillance capabilities to governments, law enforcement agencies and corporations.
As @svane pointed out, "Hacking Team" are exactly one of the companies that are on my personal list of companies where if you work for them, you lose (my) benefit of the doubt.
From your comment I do take the point that maybe not everyone is aware of all of these actors.
But if you sign a contract with them, you either know what you're doing and are cool with it, or you didn't care enough to google them. (And I have a very hard time believing the latter.)
The former I find morally wrong, the latter I find negligent (note the "I find", indicating personal choice here) and I do think both should be disqualifying if not explained well.
Edit: even though my personal attitude towards this doesn't matter in the grand (or even small) scheme of things, I'd consider this a situation where I'd invert the burden of proof. Yes, being associated with these companies should put a burden on the person working there if they want a different job. They should have to think about that before they sign. High-skilled people who consider their offers should have a strong incentive to decline.
Let's say I am staunchly anti-abortion. I believe any abortion is, unambiguously, murder - the killing of a human being, and, maybe even worse, a child. It's unforgivable- goes against everything I believe in.
I also have a friend named Sammy, who's a doctor. Last night I discovered Sammy got a new job at Planned Parenthood, where a significant part of her time will be spent performing surgical abortions.
How many articles on the web should I read before I'm allowed to stop being friends with Sammy?
If you found out about Sammy's new job from a blog post on the internet, maybe you should give her the benefit of the doubt and ask her about it in person before ending your friendship.
In that example yes, but are you really telling me that I should change my views on whether it's okay for governments to hack journalists and activists to publicly smear them and arrest their sources?
It's an interesting position, but one with unpleasant implications. Do you think it is morally wrong to work for, say, a terrorist organization planning a chemical weapons attack if all you do is manage procurement for them — you don't make chemical weapons or use them, you just make phone calls and manage a few spreadsheets?
Also, do you have the same feeling for the reverse situation? If instead of being an employee, you're the boss. You're aware that your employees are doing something morally wrong and you do nothing to stop them, but you aren't doing it yourself. Do you have any responsibility? In the ethics of war, there is the concept of command responsibility[0], where leaders are responsible for the actions of people under their command. Do you think that is a bad doctrine because the leaders aren't the ones shooting people? No one dies through their individual actions even if their subordinates commit war crimes.
Regardless of how one feels about abortion, or whatever the issue at hand is, "just change your morals so that whatever your friends do is ok" is not a solution at scale.
Or maybe the right thing to do is to try to help Sammy see the evil she is committing. That could involve a Socratic approach that will help uncover the underpinnings of her position. If that fails and Sammy either obstinately refuses to see or for some reason cannot see, then it makes sense to part ways. You gave it a shot, you tried being charitable, but you ultimately cannot force another to accept the truth, nor should you want to. Everyone is morally responsible for his or her own views at the end of the day.
I find your argument to be quite disingenuous considering that the intention of the ostracism is to harm by denying job opportunities and by exclusion from social relations.
Have some decency to call your actions what they are.
If you want to harm somebody in retaliation for doing something you don't approve of, don't call it 'the opposite of harassment'.
I can see your point, but perhaps it's a difference of viewpoint. When I ostracise someone, it's because of something bad they did. I think I have the right to choose with whom I associate, and nobody has the right to force themselves upon me.
I don't understand why you are redefining a call for ostracism as "deciding who your friends are" and then classify ostracism as "the opposite of harassment".
This pretty much looks like harassment to me:
"the developer community needs to start ostracising people working for these companies. Don't hire former employees, don't hang out with people who work for these companies at conferences.
Don't supply services to these companies (build their website, network...)."
If an organisation is criminal, then being a member of that organisation means, yes, you are a criminal, too.
But in this case it is not a criminal organisation by law; it is a company selling "software weapons" to a government that Western governments view as legitimate and therefore OK to sell weapons to, even though it is not at all democratic.
So the company is acting within the law (probably), but we probably agree that it is not moral to do so. If we agree on that, then it is also not OK to support a company who is doing wrong. So I agree with avoiding people who do unethical work. But I don't know enough about the companies in question to make that final judgement, and judge case by case, like always.
This is a great question/thought. I am really surprised at how people forget why laws, rules, policies and governance were put in place in the first place.
Rules/laws/policies exist to prevent abuse/evil things and to serve people! RULES ARE ALWAYS MUTABLE and should punish abusers of laws/rules/policies, NOT the people they are built to serve. Rules must/will change and they should always benefit people who are doing the right thing.
I will give a simple example of why we should go beyond the law and think about a lot of things using basic common sense:
Recently our country introduced cameras on highways to deter speeding. The govt is so rigid on the rule now that they have already fined people who were speeding in an emergency, and even ambulances. And I read complaints from these people on Facebook. Instead of the laws working to protect and serve, we see it happening the other way.
What we as a society lack is process/structure to handle the outstanding/exceptional scenarios. We need to build a system which tolerates outstanding/exceptional situations so that the system doesn't break down. But unfortunately, we don't tolerate any outstanding situations and it breaks down.
This organisation is doing bad things, and this is not the first or second time this has happened. So the whole world, and especially the employees, should definitely know what they are doing. They are engineers, aren't they? So they are bound to be smarter than the average folks. There is no need for the benefit of the doubt with them.
And let's start acknowledging that many countries are still only in the drafting phase when it comes to dealing with most digital abuses. Most democratic systems are super slow when it comes to digital crimes and laws. So current laws are not sufficient and hence should not be used to gauge the severity of incidents the law cannot handle.
Also remember, Facebook started struggling a lot with hiring after the Cambridge Analytica scandal, because people didn't want to affiliate with them. So this works. We should call them and their employees out for a better world. :)
Many people see the world as inherently adversarial. For them, there is no cooperative solution to any society-scale problem. They just see a bunch of people and isolated groups with dollar-amount scorecards, clawing their way to the highest position they can muster. So, they have no problem helping one adversary against another, and do not see it as immoral to get ahead at others' expense. (That's what winners do!) The only problem in a trolley problem, then, is if you're not the one operating the switch.
Being a member of a criminal organisation does not make you a criminal by itself; that is called guilt by association and has been used by tyrants throughout history to justify oppression and killing without cause. You are only a criminal if it is proven through substantial evidence that you have in fact committed a crime.
Perhaps if you don't know it's a criminal organization, then sure. But if you knowingly assist murderers in doing their job, I say you can go to jail with them.
Despite the fact that Germans as a whole held responsibility for what happened, it's a different question.
You didn't have a choice to be born German; you do have a choice to work for NSO.
So the question is more: does the accountant who was keeping books of the belongings taken from the Jews have moral (and legal?) responsibility? And yes, I believe he does (and the courts in Germany agreed; look up Oskar Gröning).
My point of view is: yes! Unless he opposed it in a meaningful way. But most did their duty, and did so proudly. And after the war, everyone had just done their duty because they had to, and no one was a Nazi.
But the question also applies today. The US, for example, uses torture and murder. So for some family members of people murdered because they attended a wedding in Afghanistan, the whole US is guilty and therefore a legitimate target.
I do not think so, but I think people in the US should take this more into consideration when thinking about terrorists. Most terrorists legitimise their actions (and get support) by saying they fight back against the evil empire that brings them only bombs.
Why do people always conflate legality and morality? A wedding guest might not be doing anything illegal by eating/taking so much food/drink that there is nothing left for anyone else, but I'm pretty certain many would decide not to invite them again, even without giving them a trial.
If we talk about whether it's legal, whose laws should we even apply?
Well, here in Germany it definitely is a crime to be a member of a criminal or terrorist organisation (§ 129), and I am pretty sure in the US it's the same.
And it makes sense. The concept is valid: if you knowingly support criminal activity, which you do by associating with them, you are guilty (to a varying degree). Mere family members of the mafia in Italy, for example, will usually not be prosecuted, even though they are part of it.
What you mean by guilt by association is when the Nazis, for example, imprisoned whole families because one member was part of the resistance. And from the point of view of the Nazis this also made sense. Family members support each other and are close (in ideology), so one enemy in a group means there are probably more; and if not, then it serves as an example to others not to help anyone resisting, even if it is your brother, and rather to stop or report them.
So the problem with guilt by association to me is not the concept, but how it is applied.
The cruel, despotic government is the problem in the first place, not the tactics they use.
And this concrete case here is about supporting authoritarian, cruel governments by (indirectly) working for them.
And I am free to despise and avoid people by my own standards, no matter that they are within the borders of the law.
> Everyone in a civilised society has the right to a fair trial without the presumption of guilt.
In the case of criminal and civil proceedings, sure, but a boycott on my part is an application of my own moral compass, not of the law. I don't owe anyone a "fair trial" for the judgement that guides my own free actions.
If, for any reason other than absolute necessity, you work for an organization that serves authoritarian, anti-democratic regimes with tools designed specifically to implement policies to that end, I will think poorly of you for that reason alone. I won't trust that you are able to make decent moral choices. I will base my own conduct on that judgement. My conduct, insofar as it's clearly legal, should not be the subject of a fair trial.
I did not say ostracising from society, just from our community. Regarding a fair trial: this is not a legal matter, it is a moral judgement. And working for a company that is actively helping authoritarian governments to prosecute and kill dissidents is making a choice. That is not guilt by association; you _are_ helping with your actions.
Your argumentation is exactly how totalitarian governments commit atrocities: divide the responsibilities up enough so that every little cog can justify to themselves that what they are doing is not morally wrong. I know I'm coming close to invoking Godwin's law, but Oskar Gröning had a moral (and even legal) responsibility for his actions, even if he did not kill anyone himself.
> Everyone in a civilised society has the right to a fair trial without the presumption of guilt.
This is absurd. A "right to a fair trial" is the standard for criminal trials, which are associated with criminal punishments, particularly (but not always) imprisonment and execution.
The right to a fair trial has never been a standard we as individuals are obliged to follow in other contexts; for example: it would be ridiculous to think "a trial" is needed before we decide whether we should continue doing business with a company that dismisses its employees for being gay or trans.
Similarly, there is a long history of consumer boycott movements to pressure both companies and nations into acting more ethically; from Apartheid South Africa, to confectionery and fruit companies, to oil companies, to which eggs we might choose to buy. In none of those circumstances is a "right to a fair trial" a relevant concern.
A better question might be: How unethical does a company have to be before the act of just working for them should be considered immoral enough to merit public rebuke and repudiation? I don't think there's a lot of companies that reach that threshold, but I'm adamant that some should: Blackwater is the most obvious choice here.
I agree that a trial by jury isn’t always required. But when punishing individuals, there should always be due process. In any case, any action taken should be evidence based and properly investigated.
>any action taken should be evidence based and properly investigated.
My emotionally charged decision not to be your friend or colleague requires no evidence or proper investigation. If you want to be my friend/colleague, stop being an asshole. It's that simple.
This is the epitome of the ignorant, tiresome, bad-faith "innocent before proven guilty" argument. The childish "A tactic often used by authoritarians" is just icing on the cake.
It’s not guilt by association, it’s guilt by action. The action of deciding to work for the company with such a mission / clientele. Especially in a role requiring enough technical knowledge to know what’s going on.
>Everyone in a civilised society has the right to a fair trial without the presumption of guilt.
Fair trial is about the government enforcing laws, not about social groups enforcing morals and ethics. No one has the right to a trial when you act like an asshole and no one wants to be your friend because of it.
I was recently offered a job by NSO, didn't take it due to their terrible reputation. I won't be surprised if some countries start denying entry to NSO employees. Even Facebook suspended accounts of NSO employees after NSO hacked Whatsapp - https://www.vice.com/en_us/article/7x5nnz/nso-employees-take... .
On the other hand, their product is just a tool which can be used for good (stopping terrorists) or evil (spying on human rights activists). Just like a kitchen knife can be used for good (cooking a meal) or evil (stabbing people). So I find it hard to find the moral justification for the actions you suggest. The problem is not the tool or the tool's manufacturer, it's how it gets used.
I'll play the opposite side of this argument, for the sake of discussion. You point to knives having a good use: cooking. It's by far the dominant use of knives, and no doubt it makes cooking substantially easier.
But hacking tools: to what extent are they actually being used for good? Stuxnet is the clearest example I know of these tools almost certainly decreasing a threat to US citizens (at least for the time before it was found out). But beyond that, there’s very little publicly accessible information demonstrating that these tools are actually effective at stopping or decreasing terrorism. Moreover, even if they turn out to be effective at that, their use in this manner comes with other questionable effects on law and personal rights. I don’t think the knife is a good analogy because while everyone agrees that a knife can be put to either good or bad effect, there’s not consensus on whether hacking tools can even be used for any good.
When I was in the Israeli army, I personally saw a phone being hacked, info being pulled and the info being used to stop a terrorist attack targeting civilians. I was not involved in the hack (I served in the navy).
In that particular case (but not the majority of cases) the target of the hack was an Israeli citizen who was practicing terrorism (against the Arab minority). After their info was intercepted they were arrested and the situation was de-escalated.
Tech like this saved lives that day. I don't think it justifies the freedom cost, but let's not forget real lives are saved by tools like Pegasus.
> Tech like this saved lives that day. I don't think it justifies the freedom cost, but let's not forget real lives are saved by tools like Pegasus.
Additionally, even if the tools are developed and used only by governments that are deemed democratic today (e.g. USA, Israel, Germany) and under strict independent and parliamentary oversight, who can guarantee that future governments of these countries will be democratic (obvious recent cases: Brazil, Poland, Hungary, but one might also ask that question about the US)?
These are tools of the Regime, and some regimes will wield them against minorities (like Uyghurs in China), journalists (in Mexico and Jamal Khashoggi in Saudi Arabia) and protesters (in Belarus).
One good use case doesn't justify selling this tool to autocratic and totalitarian countries, or to countries involved in the systematic oppression of minorities.
One’s autocratic country is someone else’s ideal of social organization.
Should we stop selling steel to the US because it could be used to put migrant kids in cages, or weapons because it could be used to invade random countries? I’m not saying the answer is obvious, I’m saying the problem is complex and multifaceted.
Take Morocco: not the best government (somewhat theocratic, absolutist monarchy, big on unaccountable and torture-oriented secret police), but overall more peaceful and stable than its neighbors. Do "we" help continue this state of things, or do "we" let malcontent bubble up and risk turning it into a failed state and civil war? It's shades of grey all around, sadly.
I think the question, although genuine, has a flaw, that is, reasoning in terms of "good or bad".
"Good or bad" for whom? Is something that is "not good" inherently "bad", and vice versa?
Is something "good" only because is "decreasing a threat to US citizens"? What about the consequences of "decreasing a threat"? Like Guantanamo Bay, Patriot Act, this poor guy (https://news.ycombinator.com/item?id=23625215), bombing a country thousands of miles away?
"Good or bad" is relative, just like right or wrong. It's difficult to correctly grasp a concept or convey an idea by just defining it as "good or bad".
I agree with you, and believe it or not I did try to go out of my way to avoid calling Stuxnet itself good or bad: I kept those words out of the sentence which mentions Stuxnet.
> Stuxnet is the clearest example I know of these tools almost certainly decreasing a threat to US citizens...
However, you still have to make value judgements at some point when organizing a society. It's literally impossible to do otherwise. Even if you make a conscious effort not to organize socially — i.e. to embrace anarchy — you've made at least an implicit value judgement that governance isn't worth the limitations it requires of the people (i.e. limitation of individual freedom is "bad").
“good” and “bad” are messy things to deal in, but they still have their place. Any answer to “should we allow NSO group to operate” has to make a value judgement at some point. I think it actually helps to make that explicit — for example my point should still stand in most other value systems precisely because it refers to “good” and “bad” — which vary across value systems — without prescribing what is good or bad.
I could have been clearer about separating an example (Stuxnet, the thing which brings in a value system) out of the argument itself. But I couldn't find a way to do it without sacrificing brevity or readability. Such are the limitations of communication, particularly written :|
"to what extent are they actually being used for good? Stuxnet is the clearest example I know of these tools almost certainly decreasing a threat to US citizens"
By this logic an equally good use would be to sabotage American military-industrial complex thus reducing threat to the citizens of many countries around the world.
> But beyond that, there’s very little publicly accessible information demonstrating that these tools are actually effective at stopping or decreasing terrorism.
Absence of evidence is not evidence of absence, particularly in this context where the actors involved are highly incentivized to keep success stories well-hidden and well-guarded.
You'll never know about all of the terrorist attacks that didn't happen.
> On the other hand, their product is just a tool which can be used for good (stopping terrorists) or evil (spying on human rights activists).
That applies to lots of technology, though. With NSO Group specifically, wouldn't their tech require salespeople who actively court potential customers and sell it to them?
> Just like a kitchen knife can be used for good (cooking a meal) or evil (stabbing people).
NSO knowingly sells tools to repressive regimes that use them to violate human rights. If you sell a knife to someone you know is going to use it for murder, then you're culpable and your behavior is immoral.
Is it bad for their reputation, really? An oil company gets a bad rep for the environment; a mill gets a bad rep for deforestation. This doesn't matter the slightest to their customers; they "understand" what they're buying.
This won't work, as long as there is a market for hacking phones, there will be those willing to sell their expertise.
We should focus on making things more secure. While security is a tough problem, it's also somewhat surprising that properly sandboxing a browser is so difficult.
Usually, high salaries are used to attract star performers. But there are other factors at play too. Sometimes, high salaries are compensation for dangerous or unpleasant work. So, for example, if NSO had to pay higher salaries because all developers who take the job get immediately divorced by their wives and are no longer invited for beer by their friends, then that wouldn't attract better talent. Regular talent is merely being compensated for the negative externalities of working for NSO.
In order for a company to attract superior talent, they need the entire package to be better than the competition (lifestyle, salary, free pizza, prestige, etc).
The mobile OS strategy failed everywhere. Now we have bad security (seriously, this is an OS and browser error) and bad lock-in. I doubt it would have been this easy to do with a decently updated, conventional desktop PC, even if you could redirect its network access like it was done here with his phone.
Even with a mitm attack on your browser, this shouldn't have happened.
I agree that people supporting this are guilty, but I don't agree with blacklists of developers for political reasons. These are established in the industry and speak of incompetence in leadership as it is. That doesn't mean their behavior should be endorsed, but that is a case for legislation.
I really don't like NSO, to the point I never go to their parties and meetups even when invited and they have good parties.
However it's worth mentioning they really don't see it that way. A lot of people working for NSO (or the NSA) see themselves as making a personal sacrifice for public safety.
Also, NSO doesn't operate said technology it just sells it - so it's a bit more like going after people making anti DRM software or p2p sharing software. The only big difference is that NSO is making money.
That ‘only’ difference is a very big one, and they are completely aware that their software will be misused and are happy to make a profit with it.
To say, ‘we will only sell our software to countries who promise not to use it to violate human rights, and if we catch them doing it, we will suspend it’ is just hand waving. The software is designed to be undetected. That’s the whole point.
An actual policy would be: 'we do not sell our software to countries that have a bad human rights track record, as defined by <independent group>' ... but that would cut into sales.
NSO is strictly regulated by both the Israeli and US government - and only sells to bodies those two approve - I guess your beef is with those entities then.
His and my beef is with the employees who think they're not doing anything wrong and partying/making bank while part of their head definitely knows that their work directly funds authoritarianism and evil acts by governments.
Oh and I agree with both of you and would never go work for NSO. I didn't when they offered me double my current pay (I'm Israeli) and I wouldn't for triple either.
I am just saying NSO is an extension and a tool of the US government and its regional (mostly controlled but somewhat autonomous) colonial ally Israel. So arguing about who gets Pegasus when the US government regulates it rather directly (through the "ethics subcommittee" in Israel that is semi-supervised by the US delegate) is ironic and funny.
Is there any country that doesn't have a bad human rights track record? Sure, some are worse than others. But where do you draw the line, and how far back into history are you willing to look?
This is a decent source: https://www.cato.org/human-freedom-index-new
There are 16 countries with personal freedom ranking above 9 (out of 10). US is ranked 26 with a score of 8.72, which intuitively makes sense.
Morocco is 135th with a score of 5.68, which pretty obviously indicates that there is more than one thing wrong with offering hacking tools to that government.
It's more like going after people who make waterboarding kits and run logistics for kidnappings. Anti-DRM and p2p software aren't usually associated with aiding & abetting torture and murder of dissidents and journalists. Framing the two as equivalent elides what NSO group's employees are actually complicit in.
Respectfully, I think knowingly aiding covert surveillance of dissidents is a lot worse than merely helping pursuit of copyright violations, even if I don't like the latter much.
> I really don't like NSO, to the point I never go to their parties and meetups even when invited and they have good parties.
> However it's worth mentioning they really don't see it that way. A lot of people working for NSO (or the NSA) see themselves as making a personal sacrifice for public safety.
Yes, we as humans are very good at justifying our own actions to ourselves. It also doesn't help if it's in your employer's interest to reinforce this perception, creating a culture of "we are what stands against evil". This makes it even more important that outsiders tell them that we hold a different moral judgement.
> Also, NSO doesn't operate said technology it just sells it - so it's a bit more like going after people making anti DRM software or p2p sharing software. The only big difference is that NSO is making money.
Apart from the fact that people don't die or get tortured because of p2p software, the question is also: should someone working on e.g. biological weapons be able to absolve themselves by saying "I did not throw the bomb"? Yes, they did not throw the bomb, but they made a tool designed for one purpose only, to be put into that bomb, and they were fully aware of its purpose. They hold as much responsibility as the person using it.
I do agree that individuals should be held accountable for their work but it's the degree of the work that is problematic. Is it direct contribution or is it indirect contribution?
If I am working on an open source project used by NSA to hack you, am I responsible? No. That type of moral policing would be bad.
If someone is writing software directly for hacking you, then yes, they are responsible, but then you must consider all the actions of the org where they used that tool. People might work on these tools because of terrorism, or because they believe in the security of the state. That's by no means bad, but how the org goes about it can be bad and infringe rights. The workers don't have control over that. Now, if they don't quit over the bad use of their tool and are not constrained by something (a person working for the NSA is likely to get another job without problem), then I think there's something to be said about personal responsibility.
Verifying the degree of contribution from outside is very hard to do as most details of what happens inside the orgs remains a secret. What their employees are told is wildly different than what they end up doing.
That said, I don't believe targeting individuals will have much effect. It's actively bad because there's an easy road here. Hold the org accountable. If we go down the path of wasting energy on ex-communicating individuals, orgs may get a free pass. It's not hard to replace people in a big org especially a monopoly. Go for the low hanging fruits. Boycott the org.
I don't think the inclusion of the word "mob" is very helpful. The connotations are both sinister, and organised.
What we have is the logical extension of the social justice, or SJW, movement. Which even 2 years ago, in my recollections, would have been met with utter disdain. Somehow we've arrived at a time when social justice has a new-found legitimacy and few detractors still speaking out about it.
To me this is scarier than a mob, who usually have a figurehead around whom they rally. The SJWs have been building their seat of power on the shoulders of social media celebrities.
This is Huxleyan populism. People 'follow' others from their sofa, they 'like' things without critical assessment, bolstering support for an ill-defined cause based on memetic catchphrases and sound-bite signals.
They learned from their detractors. Before they started witch hunts on Twitter, there were other groups that did the same in the age of the early net. Difference is that many people now have a public persona on social media.
Twitter mobs help get people fired. NSO helps people get murdered. I can't claim to be a big fan of either, but as long as both exist I know which I'd like to see prevail.
I think the developer community need to start refusing to use the cellphone. It cannot be trusted. It's tainted by non-free software on top of non-free OS on top of non-free firmware with the separate processor whose behaviour we cannot observe from the main processor. It also relies on central wireless network from only a handful of providers. Easy single point of vulnerable target.
I do refuse to own a cellphone. What about you? Since you're suggesting the boycott, can you?
Is it? I'm not overly familiar with any security exploits, but my understanding is that (at least for Android) the phone OS is often woefully out of date simply because the vendor stopped supplying updates. The end user generally can't supply updates themselves because everything is locked down in a decidedly user hostile manner.
For the vendor's part, they often stop supplying updates (as I understand it) because the proprietary hardware doesn't have its drivers upstreamed into the kernel (they're proprietary after all), which leads to a completely unjustifiable maintenance burden. They can't simply open source things because the hardware manufacturers generally require NDAs.
As far as the hardware goes, my (probably woefully incomplete) understanding is that it remains proprietary due to a combination of attempting to maintain a competitive edge through secrecy, licensing complexities due to containing third party IP, and DRM issues (which are again a licensing concern).
No, that's not what happens on iOS. I'm writing this on a 5 year old device still getting updated.
The iOS exploits that have historically allowed the device to be jailbroken have been zero-day vulnerabilities. And I'm assuming the TFA is about a zero-day too.
Also Android is open source (AOSP). How does that help?
I'm well aware that this isn't nearly as much of an issue for Apple devices - that's why I very clearly specified Android in my previous comment.
Yes AOSP is open source but that doesn't help as much as one might hope for the reasons I outlined in my previous comment. Basically most end user devices aren't actually running AOSP at the end of the day, and can't without investing a nontrivial amount of effort. (And that still wouldn't prevent vulnerabilities related to out of date firmware.)
The comment of yours that I originally responded to seemed to me to insinuate that having access to fully open sourced phones wouldn't be able to do anything to improve device security as a foregone conclusion. I was objecting to that, pointing out that there are a number of real world examples where access to a fully open source mobile stack would immediately and drastically improve the current situation. In a hypothetical world full of such stacks perhaps this article would never have been written.
I think the problem can be solved by separating the "phone" experience from the "mobile" experience.
Phones are these devices powered by a philosophy (and to an extent, a technology) from 3-4 decades ago and day after day we see them ruining the experience of having the internet access from your hands. We need to move from a mobile-phone era to a mobile-internet era.
In what way do you think the phone legacy is holding us back? What concrete steps would you suggest?
It seems to me this has already happened. We only call these things phones for legacy reasons, but the iPhone broke the design link with actual phones and turned the phone aspect into just another communications app.
OK, but how does that hold any other aspects of these devices back? Remove the baseband processor and you've got an iPod Touch, or a small iPad. It doesn't fundamentally change the device or really open up any new avenues not possible with a baseband processor.
The Citizen Lab reports (one linked from this article) about the Israeli NSO Group's Pegasus spyware have been really scary for a few years now already.
Here's a category of articles on the citizenlab.ca web site described as "Investigations into the prevalence and impact of digital espionage operations against civil society groups": https://citizenlab.ca/category/research/targeted-threats/
"NSO Group Technologies (NSO standing for Niv, Shalev and Omri, names of company's founders) is an Israeli technology firm whose spyware called Pegasus enables the remote surveillance of smartphones. It was founded in 2010 by Niv Carmi, Omri Lavie, and Shalev Hulio. It employed almost 500 people as of 2017, and is based in Herzliya, near Tel Aviv."
--Wikipedia
I don't think it changes anything, but is additional correct information.
Edit: For reasons that are extremely unclear, someone is downvoting people who are saying this.
To make it clear why this information is important: If you run a startup and a potential investor is this British group I think it's important to know what else they invest in.
I saw this discussed on reddit, and I was surprised that there was so much confusion about how this happened. It wasn't just "network injection" - quite clearly (unfortunately very poorly described in the article) there was a vulnerability in iOS/Safari that allowed remote code execution; network injection alone wouldn't have been enough. Does anyone know what the CVE was that allowed this?
A code execution vulnerability isn't enough. To work on truly any website, they need:
- A remote code execution vulnerability. There are almost certainly multiple vulnerabilities at play here, since long gone are the days where a single vuln gave arbitrary code execution.
- a way to bypass the encryption/https, unless the remote code execution was on a layer before encryption (which seems unlikely). EDIT: Apparently the hack only works on non-encrypted websites.
- Once remote code is achieved, they most certainly need a way to elevate privileges in order to make the hack more persistent and tap into other apps.
There are most likely several CVEs at play here. The amount of effort that went into this hack is, frankly, terrifying.
From my understanding, it is easy enough to bypass HTTPS if you need to intercept traffic for an attack like this. You only need to intercept and modify the traffic for some website the target visits, not one specific website.
There are still websites that don't use HTTPS.
For websites that do use HTTPS, if they haven't configured something like HSTS, HPKP or Expect-CT, typing example.com into a web browser will make it send an unencrypted HTTP request to http://example.com. If the website's content is served only over HTTPS, the server will most likely respond with something that redirects the web browser to the HTTPS version of the website (most likely an HTTP 301 or 302 status code). That initial unencrypted HTTP request can be intercepted and modified.
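A small Python sketch of that insecure first hop, using a localhost stand-in for a real site (example.com and port choices are placeholders): the port-80 responder does exactly what's described above, and everything the client sends and receives before the redirect is plaintext on the wire.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import socket, threading

class RedirectHandler(BaseHTTPRequestHandler):
    """Mimics an HTTPS-only site's port-80 behaviour: redirect to the
    HTTPS version and send HSTS so the browser skips the insecure hop
    on later visits. (HSTS only sticks once seen over HTTPS, which is
    why the very first visit remains interceptable.)"""
    def do_GET(self):
        self.send_response(301)
        self.send_header("Location", "https://example.com" + self.path)
        self.send_header("Strict-Transport-Security", "max-age=31536000")
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), RedirectHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Typing "example.com" in a browser starts with exactly this plaintext
# request -- all of it visible and modifiable by anyone on the path:
with socket.create_connection(("127.0.0.1", port)) as s:
    s.sendall(b"GET /login HTTP/1.1\r\nHost: example.com\r\n"
              b"Connection: close\r\n\r\n")
    raw = b""
    while chunk := s.recv(4096):
        raw += chunk
server.shutdown()

print(raw.splitlines()[0])  # the 301 redirect, sent in the clear
```

The interesting part is that the redirect and the HSTS header themselves travel unencrypted, so a man in the middle can simply rewrite or drop them on the victim's first visit.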
Ability to disable javascript for non-https would be a great security setting. Aligns well with Apple's goal for privacy and security as many wireless hotspots and even ISPs abuse HTTP javascript injection.
Sorry to nitpick, but HPKP and Expect-CT aren't really relevant here (since browsers dropped support for HPKP, and Expect-CT is now basically the default in Chrome). HSTS is what is needed.
However, you only need one HTTP website to pull off the attack, so it's not really a problem that HSTS is great at addressing, since it's opt-in per server. The easy way to do this attack is to control the local wifi and make the login landing page malicious.
Expect-CT does almost nothing. Safari and Chrome both have CT policies that are enforced anyway, Firefox doesn't implement CT yet (patches doubtless welcome). The remaining thing Expect-CT might do (I have not tested) is ensure that bad guys can't present a certificate with a date prior to Safari/ Chrome's enforcement deadline. That's a small and shrinking window.
If bad guys can make a certificate dated February 2018 and valid until May 2021 that certificate will be accepted in Chrome despite not having any SCTs. A real one from that date probably does have SCTs but Chrome only required them in April 2018. Setting Expect-CT now might make Chrome reject that certificate for lacking SCTs if it was shown on a subsequent visit.
A year from now the window will be closed: Chrome will reject a certificate that says it was issued more recently and lacks SCTs, and it will also reject a certificate that says it was issued longer ago, because that violates the Baseline Requirements on validity periods. But for now, a February 2018 certificate could be valid yet not require SCTs.
All Google-owned TLDs are HSTS-preloaded, so if you use a domain in a Google TLD (e.g. example.app) then browsers always use HTTPS anyway. Unfortunately it's unlikely that many older, popular TLDs will preload HSTS, so most users will be unprotected for the foreseeable future.
They make enough money in between selling the tool and Apple patching the exploit to make it worthy, not to mention the amount of people who don’t do updates meaning the tool is still effective some of the time.
I think the assumption is that if they have the capability to create this exploit chain, they also likely have the capability to develop others (and possibly have others waiting to deploy if these are discovered and patched).
It's the same with any of these hacking technology companies, they have to keep moving at the very edge of what is possible.
Apple could indeed fix some CVEs that they know about but every new OS/app/library release has the potential to introduce new CVEs that hackers can exploit.
It's a moving target indeed but due to the increased complexity and questionable quality of modern software there's no way they're going out of business, quite the contrary.
Glad you fixed your comment with your edit, but this really isn't that hard to imagine at all: a privilege escalation in Safari is exactly how one of the original jailbreaks worked: https://en.wikipedia.org/wiki/JailbreakMe.
All you then have to do is network-inject on a user who visits a non-HSTS site by entering it in their address bar.
Sure, but we've come a long way from the original jailbreak, and this hack is essentially another full jailbreak from another Safari vulnerability, which is pretty scary.
But "any website" here seems to be "any non-https website", so this is more likely: a) router or baseband-processor hack plus b) malicious JS injection into unencrypted HTML plus c) browser vulnerability via JS.
> "There are almost certainly multiple vulnerabilities at play here, since long gone are the days where a single vuln gave arbitrary code execution"
Could you go into this in a little more detail?
I'm inferring that chains of vulnerabilities are needed to go from some starting point to arbitrary code execution. Is that correct?
Have efforts to secure computer systems over the past ~2 decades succeeded, at least in that much more effort needs to be invested in order to get to the point of arbitrary code execution?
For the most part, yes, it's much harder to get ACE today than it was 20 years ago, and even then ACE doesn't actually grant you any fancy capabilities on a modern phone.
To get ACE, you will generally need a couple of primitives, such as an ArbR/ArbW coupled with an infoleak to get ROP. This will allow you to execute arbitrary code, but you're still stuck within the confines of the current process' privileges. Phone apps are generally heavily sandboxed, and the web browsers tend to be sandboxed even harder. Having ACE in some arbitrary process won't give you the ability to do anything: filesystem will still be out of reach, most of the time you won't even be able to see other processes or even make network requests. So you'll need to break the sandbox.
Breaking the sandbox tends to involve looking for an RCE in a process outside the sandbox that you can communicate with over an IPC channel. And you'll likely need to do this twice: once to break free of the browser sandbox, and once to break the "App" sandbox. If we take a look at Chrome for instance (which is very well documented[0][1]), it has sandboxing mechanisms built in to disallow access to most resources (like the filesystem) from most of its processes, and to prevent access to most of the kernel API surface. And then Android further sandboxes all apps to disallow them from accessing each other's data. So again you'd have to find another bug somewhere to bypass this.
There are tons of mitigation techniques being developed to make bugs harder to exploit, from Pointer Authentication (making it much harder to exploit ArbR/ArbW bugs) to Control Flow Integrity (making it much harder to create a ROP chain). Of course, not all apps actually have those mitigations in place, but the web browsers tend to enable most; for instance, Chrome has CFI enabled[2].
Would you mind expanding the acronyms? This is super interesting, but hard to follow (and also somewhat hard to google, apparently Arbr is a bike brand)
RCE: Remote Code Execution. It's fairly straightforward, but basically any vulnerability that allows you to run (native) code without physical access to the phone (e.g. when a user visits a website).
ACE: Arbitrary Code Execution. Basically any technique that allows taking control of the execution to execute your own arbitrary code.
ArbR/ArbW/ArbCall: Arbitrary Read, Arbitrary Write, Arbitrary Call primitives. They tend to be the "basic unit" which you can weave together to further poke at things once you've gained ROP.
ROP: Return Oriented Programming, a technique used to take control of execution when you have the ability to overwrite the Return Pointer of the current stack frame (for instance, from a stack buffer overflow). ROP is used because nowadays, most processes adhere to W^X (Write Xor Execute, basically a memory page is never both writable and executable at the same time), meaning we can't just inject shellcode and jump to it anymore. You can find a small tutorial on ROP at [1].
ROP can then be used to generate various primitives (ArbW can be achieved by weaving together a "ROP chain" that calls memcpy with the right registers, for instance).
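For intuition only, here's a toy Python analogy of a ROP chain. The "attacker" supplies no code at all, just a data structure of addresses and operands that drives code already present in the process. (Real ROP chains machine-code gadget addresses on the corrupted call stack; the addresses, gadget names, and state layout below are all made up, and nothing here is a real exploit.)

```python
# Toy analogy only: real gadgets are tiny machine-code sequences
# ending in `ret`; these Python functions stand in for them.
def gadget_load(state, value):   # like: pop rax ; ret
    state["reg"] = value

def gadget_add(state, value):    # like: add rax, imm ; ret
    state["reg"] += value

def gadget_store(state, key):    # like: mov [mem+key], rax ; ret
    state["mem"][key] = state["reg"]

# "Addresses" of code that already exists in the target process.
GADGETS = {0x1000: gadget_load, 0x1008: gadget_add, 0x1010: gadget_store}

def execute(fake_stack):
    """The attacker injects no code, only this data: pairs of
    (gadget address, operand) consumed one by one, the way `ret`
    consumes return addresses off an overwritten stack."""
    state = {"reg": 0, "mem": {}}
    for addr, operand in fake_stack:
        GADGETS[addr](state, operand)
    return state

# A "chain" computing a value and writing it to attacker-chosen memory:
state = execute([(0x1000, 40), (0x1008, 2), (0x1010, "target")])
print(state["mem"]["target"])  # 42
```

This is why W^X alone doesn't stop the attacker: no new executable bytes ever appear, yet arbitrary computation happens, built entirely out of code the process already contained.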
IPC: Inter-Process Communication. Imagine a Unix Pipe, where two processes communicate with each-other over stdin/stdout. This is an example of an IPC. There are other IPC mechanisms (D-Bus, Unix Sockets, localhost...). When a process is sandboxed, it will sometimes need access to things beyond its sandbox (like accessing the filesystem to access a cached image or something). To do so, it will talk to another process over an IPC mechanism, with a well-defined protocol.
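A minimal sketch of that broker pattern in Python, using plain pipes as the IPC channel. The one-line "READ <name>" protocol and the policy here are made up for illustration; real browser sandboxes use richer IPC (Mojo, XPC, Binder), but the shape is the same: the sandboxed side asks, the privileged side enforces policy.

```python
import subprocess, sys

# The privileged "broker" process: it owns filesystem access and
# answers requests arriving on stdin, one per line.
broker_code = r"""
import sys
for line in sys.stdin:                # protocol: "READ <name>" per line
    verb, _, name = line.strip().partition(" ")
    if verb == "READ" and name == "cached.png":
        print("OK 4 bytes")           # pretend we fetched the file
    else:
        print("DENIED")               # broker enforces the policy
    sys.stdout.flush()
"""

broker = subprocess.Popen([sys.executable, "-c", broker_code],
                          stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                          text=True)

# The "sandboxed" side can only talk over the pipe, not open files:
broker.stdin.write("READ cached.png\n")
broker.stdin.flush()
reply1 = broker.stdout.readline().strip()

broker.stdin.write("READ /etc/passwd\n")
broker.stdin.flush()
reply2 = broker.stdout.readline().strip()

broker.stdin.close()
broker.wait()
print(reply1, "/", reply2)  # OK 4 bytes / DENIED
```

The security consequence discussed above follows directly: the IPC endpoint in the privileged process is attack surface, so an attacker with code execution in the sandbox goes hunting for parsing bugs in exactly this kind of request handler.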
ROP: Return-Oriented Programming, which I understand to be using code already on the target and manipulating control flow in order to piece together bits of the program to execute a routine of the attacker's choosing (like cutting letters out of a newspaper to make a ransom note!). https://en.m.wikipedia.org/wiki/Return-oriented_programming
Yes. For example, webpages viewed in Safari are sandboxed. A webpage can't run arbitrary code to affect other webpages. So they had to break out of that.
All apps, including Safari, are sandboxed. Apps can't run arbitrary code to affect other apps. So they had to break out of that.
The system itself is sandboxed. Restarting the phone resets it, in many ways, to a "known" state. So they had to install something that would persist across rebooting the phone.
Too bad I can't run a different browser engine on iOS. In a monoculture everyone is exposed to the same vulnerabilities. If we had 3 or 4 browser engines running on iOS then the odds of a specific vulnerability affecting a single user go down.
Further, there is competition for having the most secure browser. It's not controversial to say that 12 years ago IE, Firefox and Safari were pretty bad at security, and that Chrome in 2008 pushed them all to up their game.
Apple's stance on browser engines is at best claiming security by obscurity. Either apps are sandboxed or they aren't. If they are then it would be safe to run any browser engine. If they aren't then having only one means users have no choice when that one fails.
It doesn’t protect you from vulnerabilities in things like, say, the code in the system API which paints video frames to the screen, which is where a lot of these vulns seem to be. It wouldn’t have helped in 2008 either; didn’t Safari and Chrome mostly share WebKit for quite a long time anyway?
They shared WebKit in 2008, but Chrome ran WebKit in its own process with no permissions and Safari did not, one of the reasons why Chrome was so much more secure. Glad I had the option to run Chrome and secure my browsing. I don't have that option on iOS. No team is allowed to even try to make things more secure.
Also, does iOS have something similar to SELinux? I know it's not perfect, and there have been RCEs in Android as well. But I'm surprised there are still things out there like the original tiff jailbreak exploit that allows full root access to a person's device from just visiting a webpage.
The iOS equivalent would be the app sandboxing mechanism, which very heavily restricts kernel access from most of Safari (most importantly the process that is JITing javascript). It’s structured differently than SELinux and has some complications like entitlements that have led to vulnerabilities in the past, but it largely allows iOS to apply the same sort of app-level access control.
iOS has a number of features that provide similar functionality; getting kernel level privileges on iOS requires multiple vulnerabilities to be chained together. It's not like RCE in the web browser process instantly compromises the entire system.
>Does anyone know what the CVE was that allowed this?
>The malicious code even wipes crash logs, making it impossible to determine exactly what weaknesses were exploited to take over the phone, said Claudio Guarnieri, head of Amnesty International’s Security Lab, in an interview.
Thanks for clarifying. I honestly thought, how is the browser able to install spyware that "allows remote access to everything on the phone" (per the article), as the browser is supposed to be a sandboxed environment. I'm relieved it was "just" a vulnerability in iOS.
You can still be the best and have security vulnerabilities. That proves absolutely nothing. I don’t know what kind of logic you are using. Are you implying that the best at security should never have had security vulnerabilities? If yes, what platform would that be?
On HN I've seen a lot of unencrypted sites lately. I don't personally feel comfortable browsing on them, so I avoid them. Near the end of the article here, it mentions that this is only possible on an unencrypted website. Is there a reason why so many people are not encrypting their websites? Even browsers seem to have picked up on the insecure nature of http. Please correct me if I'm wrong here, I just find it very strange how many links I've inspected only to see a lack of TLS/SSL.
If your browser can be hijacked by visiting a webpage, the threat vector is not substantially different whether the website is HTTPS or not. It changes the attack path from a MITM attack to a watering hole attack but that doesn't overly raise the difficulty level.
The real threat in this case was that sending the right string of data to a browser let a malicious actor execute a RCE and install malware. Either you trust the browser to be secure against such attacks or you can't trust much of anything.
If I'm targeting an individual/organization a watering hole attack requires that I can own a site the target visits regularly.
This seems a lot more complicated than just going to any unencrypted website via network redirection of some sort. Do most people routinely visit encrypted sites that are easily hacked to target an RCE on an individual?
I'm not certain about the terminology and whether this would still be a watering hole attack, but any of the "Show HN" posts here could trivially be hosted by an actor with malicious intentions.
You don't need to compromise a site they visit regularly, there are plenty of other ways to get someone to visit a malicious page, phishing email or even a link on HackerNews for example. People will visit new sites for all kinds of reasons, you'd just need to understand your target.
Also, I at least, have very little confidence that a state level actor couldn't subvert the news.ycombinator.com hosting provider if it wanted to seriously enough.
If someone were to be "secure enough" to be running drugs/guns/children - they'd _have_ to be totally disconnected from the internet and cellular networks, and probably all their first and second level associates as well.
I don't have any sympathy for those people, but sadly a journalist critical of a government has all the same problems there, and that's bad for humanity.
I'll admit up-front that I don't have a solid source that I can cite for this, and there's a good chance that it's outdated by now. That being said:
I've heard several times that this is largely driven by the crappier flavors of "media platform" and bottom-of-the-barrel ad networks breaking in spectacular fashion due to CORS and mixed-content problems when the main site tries to switch to HTTPS.
That was indeed a thing several years ago as ad networks were being forced to support HTTPS, but all of the major ad networks and ad servers have supported HTTPS for years at this point.
I don't doubt there are some bespoke ad servers or other dark corners of ad infrastructure where HTTPS support is still lacking, but that should be rare at this point.
For my simple self-publishing needs .. because I didn't make a self-signed cert for my Debian-based server, have not purchased a commercial cert, and do not want the short-term expire on LetsEncrypt.
edit- I do not have javascript-driven pages, they are PDFs or simple content
You're not worried about a middleman injecting their content into yours? It's such common practice that Comcast even documented how they do it [0].
There are easily found examples of malicious content being injected into HTML -- malvertisements for example. I can only imagine what might get injected into a PDF which can run javascript [1]. PDF readers aren't exactly known for their security.
Frankly, I'd much rather be able to talk to you about something downloaded from your site and get you to fix it instead of allowing a third party to infect me and point fingers at you.
Please, let's not confuse the issue with PDF -- I actually know PDF internals, and do in fact decompile and rebuild PDFs if I think there is anything fishy... zero point zero JavaScript in the PDFs in this setup.
I understand you know PDF internals and you can verify that you're not serving javascript in your PDF content. I'm glad you've gone to that depth of knowledge!
But I'm saying that what you're serving and what the user receives can be different. When you're using plain unencrypted HTTP then anyone between you and the user can inject javascript into the PDF. The user can get a PDF file with javascript in it even though you didn't put any javascript in it.
And while I think that most people reading HN would understand this, it doesn't have to be an ISP, just somebody that controls a network device between you and the destination. eg, a public wifi hotspot.
That's pretty much all it ends up stopping. Some hacker-wannabe that sets up a honey pot is going to be thwarted by HTTPS. But if we don't connect to shady hot spots, HTTPS doesn't seem to provide that much protection. It runs into the problem that anyone who can MITM via your ISP or home router probably also has the resources to find an exploit around HTTPS.
> Some hacker-wannabe that sets up a honey pot is going to be thwarted by HTTPS. But if we don't connect to shady hot spots HTTPS doesn't seem to provide that much protection.
Pinned HTTPS certificates provide a good degree of resistance to MITM attacks, and without installing local certificates on the device under attack HTTPS itself stops MITM attacks.
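To illustrate the pinning idea, here's a minimal Python sketch. The certificate bytes and pin below are placeholders, and real deployments usually pin the SubjectPublicKeyInfo rather than the whole certificate; the comparison logic is the part that matters.

```python
import base64, hashlib

def cert_matches_pin(der_cert: bytes, pinned_b64: str) -> bool:
    """Compare the SHA-256 of the server's DER certificate against a
    pin shipped with the app; a mismatch means a possible MITM, so
    the client refuses the connection."""
    digest = hashlib.sha256(der_cert).digest()
    return base64.b64encode(digest).decode() == pinned_b64

# In practice der_cert comes from
# ssl.SSLSocket.getpeercert(binary_form=True) after the handshake;
# these bytes are placeholders for a real certificate.
cert = b"\x30\x82\x01\x0a placeholder DER bytes"
pin = base64.b64encode(hashlib.sha256(cert).digest()).decode()

print(cert_matches_pin(cert, pin))        # True: proceed
print(cert_matches_pin(b"mitm", pin))     # False: abort the connection
```

The strength of pinning is that even a certificate the CA system considers valid fails the check unless it's the exact one (or key) the app expects, which is what defeats an attacker who has compromised or coerced a certificate authority.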
> It runs into the problem of anyone that can MITM via your ISP or home router probably has the resources to also find an exploit around HTTPS.
This is absolutely not the case. Home/work routers are often monitored by suspicious spouses, annoying housemates or intrusive companies. It's very rare that one of these has the resources to develop their own exploits against HTTPS.
>This is absolutely not the case. Home/work routers are often monitored by suspicious spouses, annoying housemates or intrusive companies. It's very rare that one of these has the resources to develop their own exploits against HTTPS
I didn't say against, I said around. And your hypotheticals are good examples of that. If we are giving our bad actor physical access to devices in the house there are a lot easier/more effective attacks than a MITM on the router. Keyloggers, spy software, webcams, etc. are simpler and more effective tools. If they're sophisticated enough to do a believable MITM attack they can do all these other attacks as well.
> Keyloggers, spy software, webcams, etc. are simpler and more effective tools. If they're sophisticated enough to do a believable MITM attack they can do all these other attacks as well.
This isn't true. All of these attacks are pretty hard to run against mobile devices unless you can install something on them.
> a believable MITM attack
This makes me think you don't actually know what a MITM attack is. MITM attacks are dangerous because they are invisible - ie, "believable" is implied by it being a MITM attack.
>and do not want the short-term expire on LetsEncrypt.
certbot is too hard to set up?
>edit- I do not have javascript-driven pages, they are PDFs or simple content
The issue is that if the http protocol can be tampered with, even if all you serve is plain text, the attacker can change your response to contain javascript. Anyone visiting using a browser (with scripts enabled) will be vulnerable.
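As a concrete sketch of how little effort that injection takes, here's a toy Python function operating on a canned response. The payload and response bytes are made up; a real attacker does this on live traffic from a router or hotspot, but the transformation is this simple.

```python
import re

def inject(http_response: bytes, payload: bytes) -> bytes:
    """Splice attacker-controlled markup into a plaintext HTTP response
    and patch Content-Length so the browser accepts it unchanged."""
    head, sep, body = http_response.partition(b"\r\n\r\n")
    body = body.replace(b"</body>", payload + b"</body>")
    head = re.sub(rb"Content-Length: \d+",
                  b"Content-Length: %d" % len(body), head)
    return head + sep + body

# A perfectly innocent response from the origin server...
orig = (b"HTTP/1.1 200 OK\r\nContent-Length: 28\r\n\r\n"
        b"<html><body>hi</body></html>")

# ...becomes a script-carrying one by the time it reaches the victim:
evil = inject(orig, b"<script>alert(1)</script>")
print(evil.decode())
```

Nothing in plain HTTP lets the browser detect this: the bytes it receives are self-consistent, and the injected script runs with the origin site's privileges.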
I also avoid them most of the time, although I don't think there are necessarily more of them now than previously. Sometimes the reason they are not encrypted is that they are fairly old (hopefully the server software was updated at least, although that should be much less effort than setting up https). Sometimes the sites do support https but don't redirect http to https and the http link was submitted.
I did a quick manual count of yesterday's HN front page articles according to hckrnews.com and found 8 non-https links (vs. 109 total non-dead links). 2 of these have a working https version.
I use the HTTPS Everywhere extension set to the new "Encrypt All Sites Eligible" option. Instead of using a list like the previous HTTPS Everywhere, this tries to access every website via https and pops up a warning if that doesn't work (most of the time; one of the six non-https-supporting sites was misconfigured in a way that didn't trigger the popup). Since I want to know any time I access an http site, I choose the "open insecure page for this session only" option when I do want to look at an http page, so that it tries the https site again in the future. There are simpler extensions that just do that, but unfortunately they are not Firefox Recommended extensions monitored by Mozilla. Hopefully it won't be too long before browsers do this themselves.
Obviously, HTTPS adds complexity many people find unnecessary, and it puts you in the position of depending on a third party: a certificate authority.
In fact you can program almost any device (including very old and simple ones) to be a plain old HTTP client or server, but this is not the case with modern HTTPS.
You don't need to "implement the crypto algorithms" unless you're reinventing the wheel. You just need to get the private key and certificate (run certbot) and configure your server software (change your settings).
It literally does not matter for you if Let’s Encrypt is run by a hostile entity. They never get your private key, they only give you a certificate saying your key is valid.
As part of dealing with this I wrote a simple Firefox add-on to highlight insecure links (https://addons.mozilla.org/en-US/firefox/addon/insecure-link...). Basically it gives you a big red border around any HTTP, FTP or dynamic link (that last one can be turned off as it makes sneaky places like Google light up like a holiday decoration).
According to the article, Amnesty International assumes that the journalist in question was targeted by an http MITM attack. This assumption fits nicely into the popular "http is bad, https is good" narrative, but it is just a guess (and probably far from the truth). Modern browsers support multiple network code paths, several HTTP versions, dozens of TLS versions and a boatload of ciphers. All of that code has RCE bugs.
Besides, delivering a vulnerability payload via an advertising network is far more reliable; with an http-only exploit chain the police would have to wait and hope that Omar would someday visit an http-only site. I would expect a pricey exploit toolkit, used by governments, to be more robust than that.
I've seen this improve a lot in recent years with Let's Encrypt, so that's been a great trend.
LE is still tedious as heck to set up on your own, though, so I guess people who haven't migrated to modern hosting yet are still being left behind. Most hosting-for-devs platforms these days give you HTTPS by default, and I don't think they would even let you host a website without it.
Certbot walks you through the registration process in the terminal window: enter an email address, read over the terms of service, and then it goes and does its thing and finally spits out a success message telling you where the certificate and private key are on the filesystem.
Then just point an nginx configuration file to the two [1] and tell nginx to test and reload its configuration.
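For anyone wondering, "point an nginx configuration file to the two" amounts to something like this (a minimal sketch for a hypothetical example.com; the paths are certbot's defaults):

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    # The two files certbot reported after issuance:
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
}
```

Then `nginx -t` to test and `systemctl reload nginx` to pick it up.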
Then, LetsEncrypt will send an email to me notifying me that one or more certificates are about to expire (20 days, 10 days, 1 day ...). I even decided to test that and make sure that works (on a different site a couple years ago) [2]. The certificate can be updated using the certbot-renew service:
systemctl start certbot-renew.service
Google searches show several examples which put the renewal service on a timer.
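Such a timer is just a small companion unit file. An illustrative sketch (distros ship their own variants of this, so names and values here are assumptions):

```ini
# /etc/systemd/system/certbot-renew.timer
[Unit]
Description=Run certbot-renew.service twice daily

[Timer]
OnCalendar=*-*-* 00,12:00:00
RandomizedDelaySec=1h
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `systemctl enable --now certbot-renew.timer` and renewal happens without any manual systemctl invocations.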
That's it! I'm not sure what you think is tedious about that process. Would you care to elaborate?
The switches would need to be there for a GUI anyway. There would be a drop-down menu to select the authentication type (--webroot), a folder selection field to specify the web root directory (-w /srv/_default), and a list of domain names to certify (-d systemd.software -d www.systemd.software). How else would you configure the options?
systemd.software is just a website and isn't related to running certbot. Indeed, if you'd visit the site you'd see I have a lot of gripes about systemd too!
Systemd here is only used as an example for the renewal service. It actually just calls `certbot renew`:
$ cat /usr/lib/systemd/system/certbot-renew.service
[Unit]
Description=This service automatically renews any certbot certificates found
[Service]
EnvironmentFile=/etc/sysconfig/certbot
Type=oneshot
ExecStart=/usr/bin/certbot renew --noninteractive --no-random-sleep-on-renew $PRE_HOOK $POST_HOOK $RENEW_HOOK $DEPLOY_HOOK $CERTBOT_ARGS
The important bit here is the ExecStart: it's just `certbot renew --noninteractive --no-random-sleep-on-renew` which is indeed a lot of stuff to remember.
the certbot tools were made with the "default" experience in mind and no one tries them with more custom setups. the fact that they don't have easy guides on how to do it without certbot is worrying. you should be able to get a straightforward automated experience without having to use their convoluted tool that will choke and leave you hanging.
google certbot systemd or certbot openrc to see it fail. tack on that they had to invalidate all of their certs one day and gave little warning and you could easily have experienced revoked SSL certs if your system wasn't the default setup.
I'm sorry, what? Are you projecting? By what measure is "custom" and by what measure is "no one"? My old setup was very custom and worked quite fine with certbot. My current setup is less custom and still works quite fine with certbot. I literally haven't had any trouble whatsoever with certbot.
> the fact that they don't have easy guides on how to do it without certbot is worrying
Without Certbot we'd be using openssl's arcane command line. And you're absolutely right, openssl is a !@#$ing dumpster fire. I'd argue that openssl's garbage tools are exactly why certbot was created.
> google certbot systemd or certbot openrc to see it fail.
I did google certbot systemd and certbot openrc. I don't see failures. Please cite some.
> they had to invalidate all of their certs one day and gave little warning and you could easily have experienced revoked SSL certs if your system wasn't the default setup
"They" are LetsEncrypt. And "They" aren't Certbot.
I vouched for your message because I thought it would be a decently constructed argument. Now that I've read it, I recognize you didn't write a reasoned argument. That was my fault for vouching before reading.
If you don't like Certbot or LetsEncrypt then nobody's forcing you to use them. Go pay for an SSL certificate since you don't understand how to use free tools. Or just use unencrypted connections and let your users get hacked.
January 31 of this year I got an email telling me that my LE client used the older ACMEv1 protocol, not the newer ACMEv2 protocol. They gave me 4 months notice to update my LE client to something compliant. I burnt the time and did the work.
On March 3, I and many others [0] got an email demanding that we manually re-issue our certificates because of a vulnerability discovered in the LE service. They gave us one day to comply; after that they would revoke the certificates and our users would receive security errors. I begrudgingly went through all my servers and issued the command to forcibly renew certificates. Not a huge burden for me, but likely a bigger burden for larger operations.
As the feature set grows (new challenge types, wildcard support, etc.) and the service gets even more popular, it's going to be an even bigger target and the effects of a monoculture will really be felt. I'm starting to see the value in paying for certificates, and more specifically, using providers that don't provide a public certificate issuance API (or at least stick it behind a paywall.)
How many times would LE have to accidentally issue gstatic.com or fbcdn.net before they get the Symantec treatment[1]? Too big to fail: It's not just for investment banks. And that should give anyone seeking a decentralized internet pause.
> How many times would LE have to accidentally issue gstatic.com or fbcdn.net before they get the Symantec treatment[1]? Too big to fail: It's not just for investment banks. And that should give anyone seeking a decentralized internet pause.
I agree that some problems are unfortunate. But let's contrast for a moment. LetsEncrypt has a demonstrated track record of quickly fixing issues. Symantec has a demonstrated track record of hiding issues instead of fixing them.
It's wise to consider options carefully. LetsEncrypt isn't the be-all end-all service for TLS and your needs might not be compatible. But I don't think it's fair to shove LetsEncrypt aside just because it's had its share of problems.
It's a good service, but I think the GP's point is that it's not trivial to do. Lots of websites probably went down because they missed that e-mail and we've seen lots of major websites go down due to some issue related to their certificate.
Let's Encrypt has lowered the bar, but it's still a bar that needs to be overcome.
As we'd expect ISRG not only fixed the immediate problem they also accelerated plans to ensure that any similar problem would have less serious ill effects.
In particular a current Certbot (or similar software from other developers) will conclude that it should try to replace a certificate which has been revoked and not only certificates that will shortly expire. So if a similar event happened, and you missed the email, your Certbot will treat the certificates much as if they'd expired and replace them automatically.
Also, if you didn't replace a revoked certificate, the thing is: online revocation is broken. Most of your users will not have noticed your certificate was revoked. Popular browsers do have an out-of-band way to enforce revocation, but they didn't use it for that Let's Encrypt incident because they felt it was low risk. So maybe some people running Internet Explorer (really?) or who have explicitly turned on revocation checking saw something; everybody else doesn't even see a warning page.
> How many times would LE have to accidentally issue gstatic.com or fbcdn.net before they get the Symantec treatment
Our concern with Symantec was inadequate oversight.
This is not some clumsy "Three strikes and you're out" rule. Symantec did not have the culture needed to do the job properly and we had no confidence that their management was capable of instilling such a culture.
If you're American or just follow American events somewhat you may have seen the "one rotten apple" argument being pulled apart with respect to problems with their police. Symantec used this argument, asserting on two occasions that their policies were fine but an employee had fallen short, and that this employee was terminated so now everything is fine. I am not sure I believe them but it doesn't matter because:
That is not good enough. We need public CAs to design procedures so that merely incompetent or lazy employees cannot sabotage things. Because individual humans are by their nature incompetent and lazy, such problems are to be expected and must be allowed for in your processes.
The big incident that blew up for Symantec was Crosscert. Symantec had not explicitly disclosed that the Crosscert relationship existed. In fact even if you read their paperwork closely (as we did after the incident) they actually simply did not disclose key facts about the relationship to anyone, not to their users, not to relying parties (ie you and me), and not to their independent auditor. Perhaps not even to their own board of directors (of course maybe private documents available to the board had such a disclosure).
It is likely that in practice Symantec as a corporation was unaware of what Crosscert were doing. Even if one or two Symantec employees had a good idea, the organisation as a whole was ignorant. As a result there was in practice no oversight over this entirely separate entity in a foreign country issuing certificates!
We concluded that building confidence in a new management of new infrastructure would take several years and that was the minimum we could allow. At first Symantec decided to fight this at the executive level, which of course only made us more confident that we'd been correct not to have confidence in them. When that failed (my impression is that trying to bully Google senior management isn't a good strategy) they settled upon a plan of selling their CA business instead.
So all that's a long way from Let's Encrypt accidentally mis-issuing a certificate from their own systems to bad guys.
It took me maybe 10 minutes to set up for my nginx setup. Debian and OpenSUSE both package letsencrypt's certbot. Tedious to set up and maintain is essentially the opposite of my experience.
Because 3 lines of nodejs can make a cool web demo for HN, but making that same demo https (in a way which isn't going to require manual action every 3 months) involves many more lines of code.
What does https have to do with getting exploited?
If the web server is compromised, then it’ll inject the malicious JavaScript code into the HTML and transmit that to you. SSL is irrelevant in this regard.
Unless you're not using SSL, in which case the HTML is getting intercepted in flight and malicious JavaScript code is injected into it.
Is that more of what we are seeing these days? The routers are compromised, and the HTML is getting compromised too.
This is a legitimate question.
Granted, I’m fully in support of SSL. Nobody should be seeing what you are browsing. That leaves too many digital breadcrumbs lying around.
>What does https have to do with getting exploited?
It significantly helps to prevent MITM [1] (man-in-the-middle) attacks; against HTTPS, a MITM at least can't go unnoticed without scary certificate warnings.
>If the web server is compromised, then it’ll inject the malicious JavaScript code into the HTML and transmit that to you. SSL is irrelevant in this regard.
The web server isn't compromised in this case, presumably the network is compromised.
>Unless you're not using SSL, in which case the HTML is getting intercepted in flight and malicious JavaScript code is injected into it.
Yes.
>Is that more of what we are seeing these days? The routers are compromised, and the HTML is getting compromised too.
"Stingray" devices [2] spoof mobile towers so cellphones are tricked into believing they're connecting to "Just another cell phone tower" and at that point traffic can be captured/modified.
>Granted, I’m fully in support of SSL. Nobody should be seeing what you are browsing.
It's more than people just knowing what you're browsing (or issues such as leaking passwords/private info), it's that if someone can MITM you, they can also transparently modify unencrypted data (including adding exploits).
I suspected MITM was technically feasible, although I wasn’t quite sure how widespread it was.
The stingray is indeed worrying. And it’s been around for nearly 20 years now.
So everyone with a cell phone, or using WiFi over a cellular hotspot, can now get caught up in a MITM exploitation attack, if they browse a non-https website.
When will the network providers now issue a blanket ban on browsing over plain http?
The adversary in this case is the local (Moroccan) government, so using a VPN would have likely saved him, as long as he chose one with an exit in a different country (one unlikely to cooperate with his own, obviously).
If the insecure website itself is in Morocco then he's hosed either way, whether the website is behind SSL or not.
Ok, so I want to make something clear to the (smart but mostly not "in-the-know" about NSO) HN crowd.
Let's say you're a Mexican drug lord or Saudi prince. You know this tech exists and the US/Israeli/European governments use it.
Then, you see this article, and see all the comments in the comment section about how competent, scary and balance-changing the technology is.
Basically: I think these pieces are bought and paid for by NSO through a PR firm, but you are not the target. When we leave comments like "NSO's tech is so good it has to be regulated!" or "NSO's tech is dangerous!" we are playing directly into the PR firm's clever hands.
It's like an article about how good the AR-15 or the F-35 are. Obviously to me (and most of the readers) it's mostly "why are we focusing on technology of death" but we are not the target.
Of course, NSO and other players in that field can do much, much, much more than advertised to the media.
Remember, the vast majority of people working for NSO worked for Israeli and US intelligence bodies. They serve in the 8200 unit doing malware analysis trailed by the NSA and then go work for NSO on the same sort of technology.
(If you want to get an idea of how much, I recommend "Permanent Record" but if you don't like Snowden then check out how far ahead intelligence bodies were _historically_ compared to public knowledge - WW2 crypto being a good analogy)
This lets the US government (and the Israeli government in turn) make money off the technology without going through the same international regulatory systems.
The US government (or Israeli government) could stop companies like NSO with a single decision, but they don't, since it is making them money.
It's up to us (the citizens) to pressure them to do so and to promote security best practices and work on better tools to make it harder to breach peoples' privacy.
Thanks for your thorough reply. My question was more in line with enterprise PR/sales; more precisely, to what extent does such PR activity drive sales for NSO? In my early engineering career I did enterprise pre-sales for two different companies and we always relied on more direct touch points with potential customers (e.g. doing seminars, workshops).
I'm not sure this particular article is paid for by NSO but there is a fierce competition in this space and NSO are just one player. As far as I know (I really don't, just rumors) NSO's tool is the best one but also the priciest. So arguably if I am a target audience (like a national security agency) an article like this outlines the competence of NSO / Pegasus.
Why aren't cell phone tower communications secured? Why aren't cell towers secured with certificates verified by the network? Why aren't stingray devices considered an attack on the cell network?
If stingray devices work by tricking your phone to connect with older protocols like 3G, why aren't those protocols deprecated just like we deprecate older encryption methods that are no longer secure?
Oversimplified answer: because people want their cellphone to work outside of major US cities.
For example, the laws on what is allowed to use encryption and what is not differ significantly from country to country. There are also often older installations that only provide 3G support.
Basically, it's complicated and there are a lot of different reasons but it mostly comes down to the world being a big place with lots of different laws and requirements, yet people want a phone that works everywhere.
I think it has to be 2G GSM specifically; 3G UMTS does ciphering that more or less holds up. Also, a lot of phones aren’t dynamically updatable, or aren't updated smartphones.
GSM downgrade attacks, as well as USB SDR gear, came out in the late 3G era. I kind of trust the 3GPP guys for protection from LTE onwards, but if GSM downgrade attacks are your primary concern in life you can move to Japan and get a contract on au by KDDI, as KDDI flat out ignored CSFB to CDMA2000 and went all VoLTE.
>if GSM downgrade attacks are your primary concern in life you can move to Japan and get a contract on au by KDDI, as KDDI flat out ignored CSFB to CDMA2000 and went all VoLTE
Wouldn't it be easier (at least on Android) to go to mobile network settings and change it to "LTE only" or "3G only"? As for using au by KDDI, I'm not even sure whether using their SIM cards will prevent a downgrade attack. It's possible that they still support 2G for roaming use, for instance.
Okay, I was mistaken; KDDI has had eCSFB to CDMA from the get-go... my brain was stuck in the pre-LTE launch era. Sorry.
I’m not sure how “LTE only” options work on every phone. Moving across countries just to make a phone behave a certain way is beyond absurd, but verifying that the option is actually working might be a bit of a challenge?
Regarding roaming, you mean a fake GSM tower with a backend going over a VPN to a real GSM tower somewhere the phone could roam to? That I didn’t realize. That could indeed happen.
> Why aren't cell phone tower communications secured? Why aren't cell towers secured with certificates verified by the network?
They can be, but that adds cost to running a cell phone network. Since very few people ask for their cell phone communications to be secured, companies just don't do it. It's like how GPS is completely unsecured: anyone with a couple hundred bucks can interfere with GPS signals. Maybe route a cruise ship into an island, or a random driver to a sketchy area of town.
> Why aren't stingray devices considered an attack on the cell network?
LEOs use them very frequently to conduct investigations and gather evidence. Just look at the current debate around full-device encryption: US law enforcement really likes access to information. Telecommunications (and a lot of the internet: TCP/IP, DNS, etc.) were initially set up to be open; security was an afterthought. And they just never bothered to add security later on.
Also deprecating old things is hard. People still expect to be able to pick up a charged Nokia phone from 2001 and call 911 on it.
Both eventually wind up at the same place, but the first redirects to the second.
That can be a link, it can be an old bookmark, etc.
Worse, if it was a targeted ad, the https:// link could be just a redirect back to an http:// link, something the browser probably has no trouble doing.
> the https:// link could be just a redirect back to an http:// link, something the browser probably has no trouble doing.
Doesn't HSTS prevent exactly this? Sure, not every website implements it, but the most visited ones overwhelmingly do - it's certainly misleading to say "any website" in that context.
Doesn't the HSTS header have to be delivered in cleartext the first time you visit the site over http, so that your browser knows from then on it can only ever visit via https? The header can be stripped from that initial cleartext response, so it never gets set in your browser in the first place. I don't think HSTS works against highly targeted MITM attacks like this.
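For reference, the header under discussion looks like this (the values here are just illustrative); the optional `preload` token is a site's request for inclusion in browsers' built-in HSTS lists, which is the mechanism that sidesteps the first-visit problem:

```
Strict-Transport-Security: max-age=63072000; includeSubDomains; preload
```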
bb88 picked a bad example. Not only is Google protected by HSTS, but most browsers also ship with an HSTS preload list that includes Google. It is nearly impossible to perform downgrade attacks on Google domains.
>That can be a link, it can be an old bookmark, etc.
Doesn't even have to be a link. If you type addresses like most people do (i.e. without https://), the browser is going to attempt http first. So any manually typed address will be vulnerable as well.
At least in the current state of browsers, they all try https:// first. That may not have been the case when the attack shown in the article was performed, but every major browser now checks https:// before http://.
>At least in the current state of browsers, they all try https:// first.
That's trivially disproven by typing "example.com" in the address bar and hitting enter. Both Chrome and Firefox go to the http version, not the https version, even though the https version is available.
Even if that were true, though, an attacker in a position to perform an MITM attack can also block the https connection from going through, forcing the browser to fall back to an http connection.
NSO is worth a billion dollars, and they've probably made a lot more by renting out these hacks to governments than they ever could have made through responsible disclosure.
Does Apple allow anonymous reports/crypto payouts? There could be anti-money laundering issues to sort out, but perhaps this could incentivize individual actors to break ranks and leak vulnerabilities upstream.
"Break ranks"? If you work for one of these actors (NSO with intentionally "sassy" PR or "quiet" ones like Verint or ones in the middle like Cellebrite) and leak info - you will get jailed.
Breaking ranks in this case is a 10 year jail sentence if you get caught.
Funnily enough, the ex-head of malware analysis for NSO recently released this https://www.jsof-tech.com/ripple20/ and "switched ranks" to the light side.
I'm probably naive here because I'm not versed in networks -- but couldn't he avoid surveillance by using a VPN? Wasn't one of the design features of VPNs that your connection can't be hijacked?
"So do you trust some shady foreign VPN provider more than your government approved ISP?" might return true from journalists in autocratic states that have a record of censorship, surveillance and imprisoning journalists.
The article mentions NSO's Pegasus, which the journalist victim downloaded and which presumably installed surveillance tools on his iPhone. What is Pegasus? Is it a platform of browser zero-days that then installs surveillance tools? Does it rootkit the phone?
From the looks of the associated CVEs, it is a buffer overflow attack against Safari. The exploit probably uses a MITM attack to inject the payload into an arbitrary website.
It’s easy to look up, it has historically been used to spy on human rights activists and reporters.
It’s sold by the NSO Group, which has repeatedly tried to smear and spy on organisations that report on it knowingly selling to, and supporting, governments that are using these tools to attack HRAs, etc.
I assume that they have updated it since it first came to light, as the vulnerabilities at the time were all fixed.
Note that the vulnerabilities that they use are all the same ones used by jailbreaks, so fixing them necessarily means preventing jailbreaks.
> the Israeli company issued a policy that vowed the company would cut off clients if they were found to misuse the surveillance technology to target journalists and human rights activists
This goes right up there with "the backdoor for which only we'll have the key".
If a system’s purpose is to protect citizens from bad actors, it is only a matter of time before the citizens become suspected bad actors, and the machinery is turned against the citizens.
This is a concept that is sadly missing in well-intentioned people.
Sounds more like "if you get caught using this then you'll need to use a third-party to purchase it and we're going to charge you extra", but perhaps that's too cynical.
Between uBlock Origin and uMatrix I hadn't even realized. Prior to seeing these comments I had actually been impressed by the site's (apparent) responsiveness combined with the (apparently) clean and mostly plain text layout. Whoops.
I'm running uBlock Origin and uMatrix too, and both of them are blocking eyereturn.com. But because I've set Firefox to alert me to fingerprinting (not a default option, I think) I also got a little pop-up informing me that the site was using it, which I appreciated even though I was already protected by those two extensions.
Would using something like Opera Mini have prevented this attack from happening?
I’m imagining a proxy-like tool that lets high-exposure individuals request webpages and have them downloaded/parsed, and possibly rendered, before handing them off to the client device.
Perhaps it would let the client download https normally but switch modes for any http requests (if I understand what happened here correctly.)
I feel like this article makes the technique sound a lot more novel/surprising than it is. It seems like a simple case of "phone had an RCE vulnerability that got exploited by an attacker in control of the network".
The point is how moneyed and powerful interests align, leveraging technical expertise to target specific investigative reporters, members of civil society, etc.; with the goal of killing democracy.
It's probable that they were intercepting the connection and that hitting _any_ http site resulted in redirection to an exploit payload. Many, many sites make http requests, so you can't really blame someone for hitting a non-https site.
You're right, it may not be his fault, but that is why all sites should be https, and sites should have the HSTS header set.
Essentially if you are using http:// anywhere on your site (including transitively loaded resources) you are putting your users at risk - the same thing was exploited with the great cannon or whatever that DoS from the Chinese gov was called.
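In nginx terms, "https everywhere plus HSTS" comes down to two pieces: a blanket redirect and the HSTS header. A hedged sketch for a hypothetical example.com (certificate paths assume certbot defaults; note that a long `max-age` with `includeSubDomains` is hard to roll back, so test with small values first):

```nginx
# Redirect all plaintext traffic to https:
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # HSTS: tell browsers to refuse plain http for this host from now on.
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains" always;
}
```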
Time for everyone to install HTTPS Everywhere and turn on Encrypt All Sites Eligible (EASE) on desktop.
On mobile, I highly recommend using a browser that supports extensions, or pressuring companies to allow third-party browsers. It's not overstating it to say that our legislators should compel that competition; it's a national security issue that journalists, intelligence officials, and the President, on devices from a certain manufacturer, cannot change the browser or use helpful extensions like HTTPS Everywhere.
On iOS there are pretty much none. Any that claim to detect such things are generally selling snake oil. (Rebooting your phone is usually a good way to get rid of an infection, since persistence is generally not a part of these attacks, but you might just be infected again.)
So just by being redirected, you can have root access to your iPhone taken without warning? That sounds like an insane vulnerability. Anyone with a security concern should immediately drop iPhone shouldn't they? I admit the intercept process is clever, but if that's all that is needed for total security failure, the real issue is the browser/OS.
Possibly, but advanced malware like this can add many re-infection paths; opening a Safari tab might be sufficient to trigger a re-infection upon browser launch. Many apps also make periodic background http requests, and it might be enough to hijack one of those to trigger the RCE after reboot.
Journalists need to take a page out of Assange's playbook and use ancient Thinkpads and Powerbooks and use VPNs which would make this technique obsolete.
I suspect that Mr. Radi would have been safe if he had 1) used a PinePhone, with cellular and WiFi radios turned off, and a USB-connected cellular modem; and 2) hit the Internet through a VPN, Orchid, Tor or LokiNet.
If you think not, please share, because I hate spreading BS.
Edit: Well, someone seems to think that I'm wrong, but they aren't saying why. Just sayin'.
It seems this attack exploited the browser on the device via non-HTTPS sites. So the elaborate networking scheme really wouldn't do much. They just had to inject their browser exploits on every non-HTTPS site the user visited. Just because the PinePhone runs non-Android/iOS doesn't mean there aren't 0-days for the browser it runs.
In TFA, it says that they injected the exploit via stingray or cellular network compromise. I'm pretty sure that better isolation of the browser and other apps from the cellular network would have prevented that.
But yes, he would have been vulnerable to malicious sites exploiting browser bugs, as one always is. I mitigate that by compartmentalizing activities in multiple machines and VMs. For example, the host that this VM is running on contains absolutely no information about my meatspace identity.