Hacker News
[dupe] WikiLeaks Releases Trove of Alleged C.I.A. Hacking Documents (nytimes.com)
412 points by t0dd on March 7, 2017 | 235 comments




This headline is extremely dangerous. The phone itself was owned. No encryption was harmed; the keystrokes and audio were captured before they ever reached the application. The NYTimes should be ashamed of themselves for basically lying about the nature of the hacks.


We've updated the headline to what the NYT currently says. Previously it said "WikiLeaks: CIA managed to bypass encryption on popular services Signal, WhatsApp".


No, it's not.

The encryption is not broken; it's bypassed. The data goes to an unintended third party even though the encryption itself is sound, rendering it useless.

So the word "bypass" is correct.
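A toy sketch of what "bypassed, not broken" means (all names here are hypothetical illustrations, not Signal's actual code): the cipher is never attacked at all; the plaintext is simply copied at the OS level before the app ever encrypts it.

```python
# Hypothetical sketch: "bypass" vs. "break". The cipher is untouched;
# a compromised OS just taps input before encryption runs.

captured = []  # what an OS-level keylogger would see

def os_level_keylogger(plaintext: str) -> str:
    """Stand-in for an OS/driver-level hook: records input, passes it on."""
    captured.append(plaintext)
    return plaintext

def encrypt(plaintext: str, key: int) -> str:
    """Toy cipher standing in for the app's real (and unbroken) crypto."""
    return "".join(chr((ord(c) + key) % 0x110000) for c in plaintext)

# The messaging app encrypts correctly -- but the OS saw the input first.
ciphertext = encrypt(os_level_keylogger("meet at noon"), key=7)

assert captured == ["meet at noon"]   # attacker already has the plaintext
assert ciphertext != "meet at noon"   # the encryption itself still works
```

The point of the sketch: nothing about `encrypt` had to be weakened for the attacker to win, which is exactly why "bypass" is the right word and "break" would be wrong.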


This is a dangerous headline because it implies that Signal was broken, which could lead to people moving to LESS SECURE SERVICES because they think the more secure one is broken, when in reality it's the phone and OS that were compromised.

The end result is similar for the phone in question, but headlines like this can lead to people being less secure on the whole.


Most users cannot tell the difference between the phone, the OS, the app, and the signal (let alone an app named Signal). The journalists likely worked with tech-savvy people to make sure they understood this, and it was hard for them to make sense of gigabytes of technical jargon and noise.

Arguing this point at all is silly when many people, even many IT professionals, don't know and don't care about the difference between bypassed and broken. This arguing detracts from the important news...

The CIA sees fit to ignore the security of Americans by not alerting the companies that make the software the CIA exploits. They do this to ensure they can hack whomever they want, and there is no meaningful oversight and no ethical, economic, or constitutional consideration.


That hardly matters if people's response is to use other, less secure things, as was the case with the Guardian and Whatsapp.


This is entirely a non-issue.

If a group with the massive funding and pervasive reach of the CIA can operate with impunity, it does not matter what app or what security you think you have.


Going from easy dragnet surveillance of unencrypted communications to having to use expensive-to-develop, -deploy, and -maintain targeted attacks that get patched (with, on iOS, ridiculously high patch penetration rates) does not seem like a moot issue.


I don't see how this goes from one to the other. It seems that just about every Android and iOS device can be part of an "easy dragnet" without any app installed. If the wikileaks article is correct about the CIA having kept multiple 0-day exploits hidden for each OS, then breaking anything even remotely is a work ticket and not a research project for them.

The fine distinction of one app being singled out sucks, but it really is small potatoes here. The owner of the app should write the NYT and complain that their app was used inappropriately or perhaps write an editorial to get even more free advertising. The real news is that the CIA lied to Americans and the President so they could continue damaging American businesses, in the name of protecting America.

It sounds like we are not too far off from the CIA being able to write self-spreading malware that allows monitoring; they just haven't because... maybe it would be too easy to spot. Oh wait, groups like the CIA did this already with Stuxnet, and rigged it to delete itself when not on one of their intended targets' machines.


You made a specific claim: no app, easy dragnet, work ticket level, because tons of hidden 0days. I'm taking it as read that a publicly patched one doesn't count. Is there evidence for that claim in the actual documents?

Pending that, here is evidence of a counter claim. I'd repeat what tptacek said, but he's whittled it down better than I could: https://news.ycombinator.com/item?id=13811541

To cite Tony Arcieri, the only elite cryptanalysis trick in play here is "Android is a tire fire". Cue surprised gasp from security researchers.

Furthermore, you did not refute my central claim. Popping a Cisco 12k: read a bunch of unencrypted comms until detection. Target a specific person to get bit by a specific iOS exploit: maybe read some of the data until it gets patched. Surely you'll agree that one is drastically more expensive than the other?


I haven't gone through all the documents but the summary does say verbatim:

> dozens of "zero day" weaponized exploits against a wide range of U.S. and European company products, include Apple's iPhone, Google's Android and Microsoft's Windows

The only presumption on my part is that they are remotely exploitable, which is practically a requirement for mobile device exploits to be useful because physical access is hard to obtain. I do plan on going further through these, they look fun.

Of course encrypted communication is better for the user than unencrypted, but this is not the place for that, which is why I ignored it. This was supposed to be a discussion about massive government overreach, not petty squabbles between apps. With unfettered access to these phones there are all manner of hypothetical attacks that could go after any of these app providers and not just snoop on the communications of the users. With root access to a large number of phones and little oversight their capacity for harm is frightening, this seems more worthy of discussion.


The documents do not mention encrypted communications; that same summary editorialized them in.


People who care sufficiently about the security of their crypto don't use NYT or the guardian as an information source to base their opsec decisions on.


And the bonus to the CIA ignoring the deal the Obama administration made with Big Tech to disclose vulnerabilities is that now (apparently) all of the tools the CIA had accumulated are out in the wild, instead of being fixed.


I don't know why Obama allowed this. Couldn't he have had the CIA shut this stuff down, since he was the Chief Executive?

I wonder what this administration will do with this knowledge. It will be interesting to see Trump respond, too, rather than manufacture news.


Frankly, I am not sure if anyone in the White House has been able to truly control the intelligence agencies since the first Bush. And I say that only because he was a former CIA director, so he had a better chance of knowing where the control levers were hidden.


I've had friends and family reaching out to me all morning saying "Signal is broken, see the NYT." This headline is incredibly misinformed and misleading and I hope they issue a correction quickly.


I don't think your argument about less secure services is helpful to laymen. By arguing that WhatsApp is more secure, you are giving people a false sense of security. A better way to phrase it would be: "all messaging services are equally vulnerable to these kinds of attacks, regardless of encryption."


> because it implies that Signal was broken

It does mean Signal is pointless to use however. Why encrypt if your communications are picked up prior to encryption? Akin to putting your seat belt on after the car has crashed.


No, no, no!

Defense in depth! Do you stop using TLS on your banking website every time a Windows 0day comes out?


Because your opponent might not be the CIA and because your phone might not be compromised.

So in that case switching to something less secure will instantly make your problems worse.


Of course, I'm only speaking in the context that you are worried about the CIA or other governments.


Even if you are worried about them it still does not mean that you have been compromised. And if you do worry about them: don't use your phone (or any computer, for that matter) for sensitive stuff.


Of course, my original point was don't count on Signal to protect you. That was my whole point.


"Akin to putting your seatbelt on knowing full well a thermonuclear attack is always possible."

Yes, catastrophic compromise is possible, but that does not render all security measures moot. Precious few attackers have the capability for such attacks; they are very costly to develop and are therefore precious, well-kept secrets, to be used on high-profile targets.

Unless you are a spy, a terrorist, a state official with significant power or a dissident against the likes of Russia or China, end-to-end encryption like Signal will keep your communication private.


> Unless you are a spy, a terrorist, a state official with significant power or a dissident against the likes of Russia or China, end-to-end encryption like Signal will keep your communication private.

Maybe, but if one person can do it, so can others. It would be foolish to assume you are safe just because the US government doesn't deem you a person of interest. It might be far-fetched, but now that the world knows it's possible to bypass encryption, you cannot ignore the fact that Signal may not protect you at all.


I'm sure someone savvy enough to use end-to-end encrypted communication channels will switch to less secure methods based off of a headline /s


It's not really that savvy people would be switching away; it's that non-savvy friends/family of savvy people who read this article now will have a slight negative connotation to those product names, so if their savvy friend/relative tries to convince them to switch to either of them, they might say no for stupid reasons.

This is the point of the majority of propaganda, really: it's not to convince the people who know anything about the issue; it's to prejudice the people who don't, so that it'll be harder for the people in the know to communicate the facts to them.


In particular because the App Store features not only the usual suspects (Skype, Allo, ...), but many other somewhat random apps (Gonzo, BabelNet, Kissapp, 5s, ...) promising encrypted chat, and people might think, "hmm, WhatsApp and Signal are insecure, it says so in the NY Times, so let's try one of these"


It already happened with WhatsApp and the Guardian's irresponsible reporting: organizers and protesters switched to unencrypted messaging or even SMS because of the perception that WhatsApp was hacked. Someone savvy enough to use end-to-end encryption may be someone who values privacy, but there's no reason to assume they are also a security expert. The point of apps like WhatsApp and Signal working to make end-to-end encryption easy for the average person is to increase encrypted messaging use, not to make everyone a security expert.


Well, it would be really dangerous if they had put up a headline that normal people would not read. I do not see this as clickbait; I see this as a useful signal for the mainstream to be aware.

Also, if these people read only the title, then the problem is not any sort of text; you would have to fix those people first. No matter what words were chosen, they would most likely make the wrong judgment.


If people really require security from state level agencies perhaps they should read more than the headline.


The point is that the title mentions explicitly Signal and WhatsApp, generating the false impression that it was a weakness in these applications. However, it was a weakness in the OS, so a proper title would have been:

| WikiLeaks: CIA managed to bypass encryption on popular messaging services on Android phone (nytimes.com)


They pwned iPhones too. And servers. And desktops, tablets, and your TV.

While Signal and WhatsApp have not been broken (apparently), pretty much every platform they are hosted on has been.

The main point is that the CIA can read your encrypted messages before they become encrypted, if they really want to. So while your encryption works, you can still be pwned.


Or even just "CIA managed to hack into Android phones."


I think the part that's misleading is that with a loose/typical/casual reading it sounds like the bypass is at the application level as opposed to the OS/host level. By suggesting specific apps/services may be "bypassed" they fail to make it crystal clear to all readers that any breakage is likely app/service agnostic.

Of course this source is part of the same media that continually calls the election "hacked" despite there being no known technical irregularities with voting machines or vote recording or the actual election itself [^1] (that I'm aware of, at least). (Yes, computer systems were compromised, and data was exfiltrated from the DNC/related parties and released by foreign state actors. Unfortunately that is not "hacking an election." It's just plain and traditional information ops.)

So it's pretty par.

Mainstream news sources seem to get continually worse at reporting tech-related stories, and I think there must be an even greater level of confusion among typical non-technical citizens.

[^1]: Whether anybody is actually interested in actual elections running in an auditable, effective, and functional way is apparently another question entirely, and the answer from most seems to be "nope."


You are 100 percent correct. Though I think the headline is a bit clickbaity, I have to agree it is accurate.


Accurate, but dangerously misleading.


How is it misleading if it is accurate? They bypassed it by compromising the phone. No encryption is going to save you in that situation, and their targets were WhatsApp, Telegram, etc., so that part is accurate as well. It is a headline; I think what you are expecting is that they put all the facts into the headline, and there isn't enough space.


It's misleading by omission. Until I read the article I was under the impression that they had found a flaw or something exploitable in the OWS protocol.

If the problem was with Signal or Whatsapp, as the headline suggested to me, switching to another messaging service is the natural reaction. If people understand that the problem is with the platform, and that all platforms are compromised that solution doesn't work, and using signal is still better than SMS because it still protects against other forms of surveillance.


Well, why single out the two services in the headline when this applies to basically every application ever?

Luckily, they've realized the mistake and apparently changed the headline.


"CIA bypassed secure apps on Android" would've been nice. Sure, there are hacks/implants for other platforms too.


They bypassed it by compromising Android phones. There is a clear action item here if you want to be secure: switch to an iPhone, which is what tptacek has been saying here all along.


Have you read the announcement? iPhones are wide open for the 3-letter-agencies, too.


While the Wikileaks announcement explicitly mentions the iPhone (zero-days to "control, infest, and exfiltrate data"), the NY Times article mentions only Android in the context of bypassing Signal, WhatsApp.


I don't think that's a fair description of the information. Comments GP refers to include e.g. SEP -- iOS' threat model includes kernel-level 0days.


I've been thinking a lot about this as it relates to the "fake news" trend. Journalists have been using real information to lead people to wrong conclusions. Now we are very concerned about political sites using false information to lead people to wrong conclusions. Fake facts are bad but using facts to mislead people does damage to people's trust as well.


If they had emphasized that no app is secure if the phone itself is compromised, I wouldn't have a problem with it. By calling out specific apps, it could cause someone to switch to a less secure alternative not mentioned.


If the app and service were not involved the only reason to mention them is to create doubt they are secure.


Not necessarily. To a lot of people encryption is "using Signal" or "using WhatsApp". They don't necessarily understand that these are distinct things and that their communications could still be captured by virtue of simply using a phone.


"The strongest chain will break at its weakest point."

If I, as a user, believe that the sequence of actions from my keystrokes to voice input, which I perceive to be a direct interaction with a secure app, is in fact insecure, then is the app really secure?

I guess that's the question being posed here


There is a balance -- one is reminded of the constant "data charges may apply" footnote to so many free services. The same goes here: you really shouldn't tout your impenetrable security without also informing users that things external to the service may undermine its utility.


Also make sure no one is looking over your shoulder or listening nearby. "Signal encryption bypassed by new look-over-shoulder attack."


I think it's a little different when the person "looking over your shoulder" is omnipotent.


The OS these services run on isn't secure, so wouldn't these services by definition not be secure?


The article should be emphasizing that they actively attack devices of targeted individuals, not leading with the particular consequence of this that Wikileaks mentioned in a tweet.


But the rest of the headline is misleading. It's Android that was broken into, not Whatsapp or Signal. The headline encourages a false idea that those apps have a systemic flaw that allows the CIA to read any messages sent over them, which is incorrect. "Bypass" is the right word, but the sentence as a whole misses the point and spreads a misleading message.


You're both right: encryption was bypassed, but mentioning specific apps implies those apps were specifically affected and is misleading.


Headline does imply the issue was with the messaging services and not the phones.


What's wrong with it? They were able to bypass the encryption. They got the data without it being encrypted. How is that not bypassing encryption?

Furthermore, from the point of view of the end-user, the important point is that WhatsApp and Signal are not necessarily secure to use. The exact nature of the security hole is not as important for the vast majority of users.


The phone itself may not be secure. Maybe they should include gmail, schwab, camera, microphone, amazon and every other thing in their description. Literally this is FUD.


These are the most relevant apps to mention; it indicates that even when using security apps there is a problem. The message is "WhatsApp is not safe"; why is not relevant to most people.


And it's a rare case where FUD is absolutely applicable:

Fear that using a "secure" messaging app on your rooted phone will expose you to consequences.

Uncertainty that your communications are secure when using your phone with the "secure" messaging app.

Doubt that using the "secure" messaging app is secure.

Yes, it's FUD.


"sidestep" would probably have been a better word choice than "bypass" only because of the connotation of these words... the average person isn't going to parse these words however, sooo... <shrug> ?


Completely agree that "sidestep" would have been a much better choice. I think the title is technically correct, but that doesn't mean much since context matters a lot. "Sidestep" is a lot more intuitive and, I do disagree with you here, I think the average person would get a better idea of what's happening if they read "sidestep" instead of "bypass".


Unfortunately, this is a line that Wikileaks themselves are running with: https://twitter.com/wikileaks/status/839120909625606152


Running misinformation is part of Wikileaks' job. It's not the NYT's job.


They at least re-clarified on twitter. But not in the article. https://twitter.com/nytimes/status/839160771674255360


I believe they've edited the article:

"Among other disclosures that, if confirmed, would rock the technology world, the WikiLeaks release said that the C.I.A. and allied intelligence services had managed to bypass encryption on popular phone and messaging services such as Signal, WhatsApp and Telegram. According to the statement from WikiLeaks, government hackers can penetrate Android phones and collect 'audio and message traffic before encryption is applied.'"


If that's an edit, it's still pretty poor. The 'experts' quoted are Wikileaks themselves. The disclosure 'A spy agency had 0-day exploits for mobile devices' would not rock anything.


Agreed 100% - but methinks NYT (and others) still look to them for technical guidance on some matters - however misguided that might be.


The NYT has a _huge_ list of experts to contact for stories like this. They chose not to, in the interests of getting a salacious lede printed quickly.


Well I think they put out the article first and get experts to correct the finer points later. I don't agree this is the best tactic, but reporting first is important.

They have changed the title. Currently: "WikiLeaks Releases Trove of Alleged C.I.A. Hacking Documents"


See for instance WaPo, where Greg Miller and Ellen Nakashima got Nicholas Weaver from ICIR on the record for analysis for the story. Compare with the NYT's original story, which had no disclosed expert sourcing. Maybe a habit we should all develop is to first scan these things to see who they got on the record to talk about it.

It's not hard for them. I'm not making this up: NYT has a huge list of experts to reach out to for stories. They just chose not to.


> It's not hard for them. I'm not making this up: NYT has a huge list of experts to reach out to for stories. They just chose not to.

I wasn't disagreeing with you. I was just saying that their tactic is to publish first and update. This is pretty common, especially in bigger stories. They get paid by getting eyeballs on their site. If someone publishes 20 mins before them they lose money, even if they are more accurate.

So just always take a breaking story as a draft. The story isn't finished and I bet will get updated several times.


If I'm not mistaken then the NYT has shown in the past that it can get basic tech/security facts like these straight.


Could you explain more about "misinformation is part of wikileaks' job"


And that it's not NYT's job while we're at it.


In contrast, running McCarthy type propaganda and smear campaigns against Julian Assange/Wikileaks is part of NYT's job.

https://www.nytimes.com/2017/01/04/us/politics/julian-assang...

https://www.nytimes.com/2017/01/08/business/media/assange-wi...


The word "effectively" is somewhat clarifying in WL's tweet as opposed to the outright misinformation in NYT's headline, also it's just a tweet and not an article headline from a major newspaper.


edit: apparently NYT had a different headline and changed it... ignore this post

The current title [0] is wrong, but NYTimes is relatively clear:

> Among other disclosures that, if confirmed, would rock the technology world, the WikiLeaks release said that the C.I.A. and allied intelligence services had managed to bypass encryption on popular phone and messaging services such as Signal, WhatsApp and Telegram. According to the statement from WikiLeaks, government hackers can penetrate Android phones and collect “audio and message traffic before encryption is applied.”

It depends on how you define "bypass". In my opinion, accessing data before encryption is a form of bypassing... but it doesn't necessarily mean they can decrypt an already encrypted signal.

[0] "WikiLeaks: CIA managed to bypass encryption on popular services Signal, WhatsApp " as of this writing


They changed the headline: https://twitter.com/nytimes/status/839161021369573378

edit: A new tweet referencing the article: "WikiLeaks release said CIA managed to bypass encryption in mobile apps by compromising the entire phone"


They changed the tweet which I guess is factually correct but still misleading.


When I read "bypass" I kind of read "go the alternate route, as in around the impasse," and in this case the impasse was encryption.

I think a lot of people in this thread are hating on NYTimes today for this headline because of the inaccurate WhatsApp encryption news stories of recent.

I could see myself being bothered if they had written that the encryption was "broken" or "cracked", as if you had destroyed the boulder in your path. Bypass seems fine. Hacker News doesn't normally use "bypass" as a synonym for "break", but for some reason today it is to the commentators.


> I think a lot of people in this thread are hating on NYTimes today for this headline because of the inaccurate WhatsApp encryption news stories of recent.

More because we're all getting blown up with "Signal is broken" messages and have to answer them one by one because of misleading/disingenuous headlines. Yes, 'bypass' is technically correct but the implication of the headline is that the problem lies with the named apps. This is not true and actively problematic.


> No encryption was harmed by capturing the keystrokes and audio before it reaches the application.

Exactly, and some people, including me, thought about this possibility years ago. The most secure system in the universe can still be hacked very easily by a malicious closed driver, because device drivers have the highest access level to the underlying hardware. Every piece of information being produced -- (virtual) keyboard input, data, contacts, sensor data, GPS, audio, files, etc. -- can be accessed long before it reaches the encryption code and relayed to a third party without the user even noticing.

This plague won't go away until enough people with enough influence require hardware manufacturers to document their hardware, so that OSS and trustworthy device drivers can be created.
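The layering the parent describes can be sketched as a pipeline (the layer names below are illustrative, not real Android APIs): the driver layer handles every input event first, so anything it does with the data happens before the app's encryption ever runs.

```python
# Hypothetical sketch of a phone's input path as a pipeline of layers.
# The driver sits closest to the hardware, so it sees raw events before
# any app-level code (including encryption) gets a chance to run.

from typing import Callable, List

exfiltrated: List[str] = []  # what a malicious driver could relay out

def driver_layer(event: str) -> str:
    """Closed-source driver: first access to raw hardware input."""
    exfiltrated.append(event)  # the tap happens here, pre-encryption
    return event

def os_input_layer(event: str) -> str:
    """OS routing, IME, clipboard, etc."""
    return event

def app_encrypt_layer(event: str) -> str:
    """The app encrypts last -- too late to hide anything from the driver."""
    return f"<ciphertext of {len(event)} chars>"

stack: List[Callable[[str], str]] = [
    driver_layer, os_input_layer, app_encrypt_layer,
]

msg = "launch codes"
for layer in stack:
    msg = layer(msg)

assert exfiltrated == ["launch codes"]  # captured before encryption ran
```

This is why open, auditable drivers matter: the only layer positioned to betray the user before encryption is the one users currently cannot inspect.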


Yup, it's a chicken/egg scenario.


Eve operates in meat-space not a mathematical Flatland. Operationally, it does not matter how the message was read. The encryption system is compromised and users do not have practical alternatives.

The reality is that no matter how good the software engineers are, no matter how sound the algorithms, no matter how well funded the startup or open-source project, it's completely outnumbered and completely outgunned. Nation states operate at a different scale, and easily deployable encryption systems for novice users are white-horse-led, brightly dressed musketeers drum-marching to their general's firing line in the midst of a modern free-fire zone.

To me, any secure communications systems that provides the convenience of app store downloads and over the air updates should be considered compromised. On the other hand, if someone thinks that a three letter agency might be interested in their communications and that person does not work for another three letter agency, they should probably assume that their signals are compromised if they are detected.


They've corrected it; HN should do the same.


Am I missing something? The actual headline is "WikiLeaks Releases Trove of Alleged C.I.A. Hacking Documents"


NYT is pivoting to a model that brings it more clicks.


NYT corrected/changed the title. I don't get why they are getting the heat when wikileaks leaked the information.


It is a disturbing trend lately for publishers to seemingly insert deliberate fallacies into their headlines just to get more people engaged, causing them to pop up on more social media timelines.


Well they want to become Drudge before Drudge becomes them.


I agree. They should be called out on it. The headline is basically "fake news."


Seems like the headline is perfectly fine. The software on the device you don't own is bypassed, resulting in encryption being ineffective? That seems like a highly critical issue for whoever owns the software.

Step the fuck up Google. Android security is an embarrassment.


The headline mentions specific apps; readers will remember that those apps are "insecure". Very dangerous.


They are insecure from the perspective of protecting people from targeted government surveillance though. The security model doesn't support it.


And people wonder why I am only lukewarm about encryption and opsec. I use both myself, but I gave up evangelizing to other people years ago because (as I've said here on HN many times):

For regular people, the effort of encrypting things is simply not worth it because they're powerless against a really determined attacker. It's rational to protect against casual attacks from spammers and scammers, but protecting oneself against state-level attackers is futile unless you make a full-time job out of it.

Someone usually pipes up at this point saying 'we need to limit the powers of the state', like some sternly-worded law is going to undo the existence of the technology or take away the vast economic and political incentives to deploy it. Get real, folks: technology doesn't get un-invented, and powerful organizations are just like powerful organisms; they're opportunistic, they maximize their own chances of survival, and when they do collapse the resulting power vacuum is filled as rapidly as any other vacuum would be. One can certainly seek to govern the behavior of a state or state organ, but attempting to limit its technical ability is naive, for the same reason that you'd be naive to try to fix police brutality by legislating about the design parameters of police batons.


> WikiLeaks, which has sometimes been accused of recklessly leaking information that could do harm

Nice passive voice there, NYT.


We really need Qualcomm and others to document their hardware interfaces for modems, baseboards, and SoCs so that open firmware and drivers can be developed for these devices.


While I completely agree, and I think I understand why, could you expand on this? If I explained it myself I would probably not be as accurate as you.


This headline is false and misleading, and does not reflect the headline on the article (WikiLeaks Releases Trove of Alleged C.I.A. Hacking Documents)


The headline here was the headline in the article. They've changed it after the submission and I believe mods here are going to do the same.


Yes. NYT often changes their headlines and we follow suit, with some lag.


You should operate under the assumption that your security IS compromised at any given point in time (bypassed or whatever); then you can foresee and prevent some worst-case scenarios, which usually come from hubris ("hey, our app is 100% secure and tested by the top security experts, not like other apps on the market").


This point can't be emphasized enough. Sophisticated operators always assume they're being listened to, and take precautionary steps.


> According to the statement from WikiLeaks, government hackers can penetrate Android phones and collect “audio and message traffic before encryption is applied.”

This is a perfectly useless bit of information in that it says nothing about how this penetration could occur. Pretty much anything can be cracked with a trojan. Something like a currently valid remote exploit would be a much bigger deal.

I could say that all the secure apps are broken because I can stand behind you and look over your shoulder while listening to anything you might say.


To me this is much more worrying:

> As of October 2014 the CIA was also looking at infecting the vehicle control systems used by modern cars and trucks. The purpose of such control is not specified, but it would permit the CIA to engage in nearly undetectable assassinations.

https://wikileaks.org/ciav7p1/

Given the fact that car makers don't even have "PC age" security in their cars, things are looking pretty bad for self-driving cars in general.


Makes the conspiracy theories regarding journalist Michael Hastings' death in 2013 seem more plausible. [1]

Former U.S. National Coordinator for Security, Infrastructure Protection, and Counter-terrorism Richard A. Clarke said that what is known about the crash is "consistent with a car cyber attack". He was quoted as saying "There is reason to believe that intelligence agencies for major powers — including the United States — know how to remotely seize control of a car. So if there were a cyber attack on [Hastings'] car — and I'm not saying there was, I think whoever did it would probably get away with it."[68]

Cenk Uygur, friend of Hastings' and host of The Young Turks, told KTLA that many of Michael's friends were concerned that he was "in a very agitated state", saying he was "incredibly tense" and worried that his material was being surveilled by the government. Friends believed that Michael's line of work led to a "paranoid state".[80] USA Today reported that in the days before his death, Hastings believed his car was being "tampered with" and that he was scared and wanted to leave town.[81]

[1] https://en.wikipedia.org/wiki/Michael_Hastings_(journalist)


> Makes the conspiracy theories regarding journalist Michael Hastings' death in 2013 seem more plausible.

Not really. The possibility of taking over unmodified cars remotely was not very widely known at the time. An organization that knew about that and had the technology to actually do so would not want to use it except on high value targets that they could not reach by more conventional means, because they would want to keep this capability under the radar of potential targets for as long as possible.

Due to the nature of his work Hastings would have been easy to take out by conventional means. He was an investigative reporter. It would be easy to feed him a lead on some story, like some important political person having a connection to a drug gang, and set up a meeting in a sketchy part of town with someone who says they want to give him confidential information about that. There would be nothing suspicious about that, and it would be easy to arrange for this fake meeting to go bad and end up with Hastings dead.

This would look like a sad but not totally unexpected way for a bold, risk taking, investigative reporter to die, and there would be not even a hint of a connection to any government agency.

If his car did not have remote vulnerabilities, and so any takeover involved modifying the car, then killing him by car takeover is even more absurd. It runs the risk of the modifications being discovered between the time they are installed and the time they are used (what if he takes his car in for service and the mechanic finds them?), and if used in a place where the agency doing the assassination does not have control of the scene afterwards risks the mods being discovered in the wreckage.


Agreed, which is why I think this information is additional evidence and not a smoking gun. I do think given what is public knowledge, the prospect of the CIA using an experimental new technique on a target like Hastings is suspect.

Ultimately though, we don't know what he uncovered and intended to publish, and how long the CIA had to react to it. An extraordinary revelation may have necessitated an extraordinary reaction. The point is that the new information takes the concept of malicious car hacking from speculation to reality.


His brother and family don't believe the conspiracy theories. If there was any evidence, I don't think they'd be scared to say so in such an emotional state.

Also in the police report, I believe his brother said he had been using DMT and he tested positive for what was likely Adderall. He was in a unique state to truly be paranoid and throwing psychedelics in the mix could cause one to try to cope in ways that challenge reality.

Of course, this also would be the perfect time to stage a murder, and it's not improbable that someone did discuss killing him. Also, DMT only lasts 5-10 minutes; he certainly wasn't driving while doing it, and if anything, it can give you a sense of peace and acceptance of the craziness of life.


I think given what we knew until today, it was prudent for his family to deny the theories. Now that we have evidence showing car hacking isn't just some theoretical exploit, but something they were actively looking into around that time, it merits reexamination.


> Makes the conspiracy theories regarding journalist Michael Hastings' death in 2013 seem more plausible.

I never thought they seemed implausible.



These are great arguments against super power private institutions (corporations) that operate in secret and are essentially unaccountable to the public.


One of the many reasons I drive a manual. RIP Michael Hastings.


<deleted>


  According to the statement from WikiLeaks, government 
  hackers can penetrate Android phones and collect
  “audio and message traffic before encryption is applied.”
How is that possible? Isn't the data encrypted before it's sent over the wire?


The kernel is owned (or some part of the phone below the application level). The encryption only gets applied at the application level, before the messages are sent down the wire.

The interception happens prior to the encryption being applied. Think of it as a dongle on the wire between your keyboard and the computer. It doesn't matter if the computer is secure - the message is intercepted before any encryption happens.

This is what I am assuming has happened here.
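The keyboard-dongle analogy can be sketched in a few lines of Python. This is a toy model: `keyboard_hook` stands in for a hypothetical kernel-level implant, and a hash stands in for real end-to-end encryption.

```python
import hashlib

captured = []  # what the implant sees

def keyboard_hook(keystroke: str) -> str:
    """Toy stand-in for a kernel-level keylogger: record each
    keystroke before handing it on to the application."""
    captured.append(keystroke)
    return keystroke

def app_send(message: str) -> bytes:
    """The app 'encrypts' before transmission, but it only ever
    sees input that has already passed through the hook."""
    plaintext = "".join(keyboard_hook(ch) for ch in message)
    return hashlib.sha256(plaintext.encode()).digest()  # stand-in for E2E crypto

ciphertext = app_send("attack at dawn")
# The wire only ever carries ciphertext, yet the implant already
# holds the full plaintext - no cryptography was broken.
assert "".join(captured) == "attack at dawn"
```

The point is that the interception happens at a layer the app cannot see, so the strength of the app's encryption is irrelevant.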

Edit: lots of stuff deleted for very valid criticism, as below.


> Given Google's stance of not encrypting local storage in any way that I am aware of, this is fundamentally unsurprising. I have long been saying that Android is insecure and that storing passwords in Chrome is dangerous.

ChromeOS and Android both implement FDE. There are some legitimate criticisms of (especially) the latter, voiced by e.g. Matthew Green, but you're just speaking nonsense here.

There's very little value in per-app encryption on desktop OSes; it's security theater.

I shudder to think of what your "secure communications" app does. I hope you're a good lawyer. ;)


I am not talking about ChromeOS - I am talking about the Chrome browser. Localstorage, last I checked (and I checked recently), is plaintext.

> ChromeOS and Android both implement FDE

Which is irrelevant if the runtime is compromised, which appears to be the case.


Let's be all Socratic here:

Given a desktop OS like Windows that implements FDE like Bitlocker and runs a browser like Chrome, can you describe a hypothetical threat in which Chrome encrypting localstorage would prevent exploitation?


Yes - worms or browsers that scan local data files without accessing the runtime of the parent application.


So your threat model is "malware which has access to memory containing plaintext but is written by idiots"?

0_o


Dunno if you are still checking this thread, but I had a followup to this question.

It seems to me that certain cryptoviruses function in the following way (e.g. certain variants of ransom_vxlock - I will see if I can find a specific example):

* The virus functions like other cryptoviruses, encrypting local data and holding it for ransom

* However, in addition to holding your local data ransom, it archives certain files that are likely to hold passwords (e.g., the chrome password store), and then emails them to the C&C server

If this is the case, would local encryption of the chrome password store be a protection, or would decrypting the store be trivial for the virus author? Again, assuming that the virus author is a script kiddy.

So, basically, I am asking that if the characterization of the virus described is accurate, doesn't that mean that the threat model I describe also actually occurs in the wild? I'm not trying to be facetious here - I am trying to get to the bottom of this.

I will try to find links to support the above.


And it does not matter - it is in Chrome's homedir, no other app can access it. Wrt. physical store, it is on FDE anyway.


Good sn.


>Which is irrelevant if the runtime is compromised, which appears to be the case.

You're under the false assumption that these exploits are current - they're not. In fact, they're very old.



Why not point to the actual ancient exploits from circa 2011-2013 for Android versions below 5 and Chrome versions below 40?

https://wikileaks.org/ciav7p1/cms/page_11629096.html


Thanks for the link. So then is the assertion that the relevant hacks are all for older versions of Android? How does that comport with the current batch of hacks?


All of the hacks are for older versions of Android and iOS. Specifically Android version 4.x and iOS version 9.x.

"Apple says most vulnerabilities in Wikileaks docs are already patched"

https://techcrunch.com/2017/03/07/apple-says-most-vulnerabil...


You're very much misrepresenting the facts. Android very much encrypts data (or gives users the option to, I'm not certain if it's the default). Chrome, the desktop application, does not. Why? Because that's a false sense of security. Chrome would have to also store the encryption key, and store it in the same place and under the same access controls as the encrypted data. This is not real protection. It is up to the user to not run malicious apps under the same security context as Chrome, and to encrypt their hard drive to protect their data at rest. Nothing that chrome can do in its security context is anything more than placebo - as shown by the fact that malware (and legitimate programs!) can read the Firefox local password database.


> Why? Because that's a false sense of security. Chrome would have to also store the encryption key, and store it in the same place and under the same access controls as the encrypted data.

I hear you, but this is not the case with Safari. It offers secure local storage. It's the securesettings API. It uses the OS level encryption, and, based on the current state of play, this does not appear to be compromised.

> as shown by the fact that malware (and legitimate programs!) can read the Firefox local password database.

Is this also the case for Safari? I have not read anything to this effect.


I'll admit that the OSX Keychain has advantages - Offering OS-managed secure storage, with the possibility of the OS authenticating requests for credentials with the user is pretty cool. But as Chrome is cross-platform, and there's not standardized and similar APIs on Linux and Windows, I don't think it's the right move to have an OSX exclusive implementation that uses that api.


> But as Chrome is cross-platform, and there's not standardized and similar APIs on Linux and Windows, I don't think it's the right move to have an OSX exclusive implementation that uses that api.

That is a very fair point, which I will take into consideration.


> It uses the OS level encryption

So all the NSA/CIA needs is a XNU kernel exploit which they need anyway for iPhone root exploits. Then, intercept the securesettings API or just do a raw memory dump of the browser process.

And the NSA has another card they can play, one that's way easier on Apple than on the fragmented Windows ecosystem: all the tiny chips on your motherboard (the EC, or any chip on the PCI bus that has DMA) can read and parse RAM. Given that there is a highly limited number of different Mac EC chips, and even then Apple likely uses the same firmware across them, it's easier for the CIA/NSA to develop an exploit for these and not care about the kernel at all.


I don't trust Google, Facebook, or anyone else who provides freemium services and has ties to the American Government. I don't understand why anyone does.


It's funny, I trust Apple because they're not "freemium" as in, they make their profit elsewhere. But I'm not sure I should really! After all they did participate in PRISM AFAIK


> Given Google's stance of not encrypting local storage in any way that I am aware of, this is fundamentally unsurprising

How does not encrypting local storage relate to this story? You're just pulling that one out of thin air to somehow prove your point. Besides the fact that there is no correlation between encrypting local storage and intercepting keystrokes or more broadly owning the kernel, it's also false. Though there are concerns with how disk encryption is implemented in Android and there are ways around it, it's come with FDE since version 5.0.

Encrypting local storage wouldn't have saved you one bit from this kind of thing where they just intercept the keystrokes. And your app wouldn't be safe from it either.


I agree. Wholeheartedly. The point, however, is that it reveals Google's larger approach, including that they turned a browser into an operating system.

Encrypting local storage would not have saved you. Absolutely. Maybe I am reading tea leaves, but it seems to me that this is indicative of the sort of security-lax mindset that allowed android to be owned.

> And your app wouldn't be safe from it either.

Yup. I know. It is a concern.


You don't even need to necessarily own the phone at the kernel level. Things like the Android AccessibilityService APIs are kind of a huge gaping issue if the app uses standard text controls without overriding the View.AccessibilityDelegate event handlers.

Of course, this is a bit of a balancing act, because many disabled people legitimately benefit from the accessibility services, but they are a huge vacuum through which displayed and entered textual data can be sucked out of your application.


Malware running on a phone can do anything it wants, take screenshots, record messages/typing, etc. Unfortunately, the article is misleading by claiming encryption was bypassed.


Bypass could be appropriate in the sense that it was "sidestepped", not "broken". I think it's a fine word but I don't think the average reader knows / cares about the difference.


The article is about the phone equivalent of installing a keylogger. Even if all the apps you type into encrypt everything end-to-end, a person capturing keystrokes still knows what you typed.

This is also why every single reputable source on security is condemning the NYT for running such an irresponsible headline, since it was not about flaws in the secure messaging apps or their encryption in any way.


That's the thing: they capture the data before it's sent over the wire, as you type it or speak it. you -> capture -> app -> encrypt


Think of it as a keylogger capturing the keystrokes before the app sees them.


If the device isn't secure, all bets are off.

And in my opinion, if you require security that the CIA can't bypass, you won't find it in any mainstream consumer hardware or software.


Edit: deleted, for very valid criticism. Next time I won't post in a rush during work hours.


> Next time I won't post in a rush during work hours.

I'd suggest taking the same approach with your "secure, end-to-end encrypted communications" app you keep mentioning here[0]

A one-way sha256 hash of a message using a password that has to be 8 characters long[1] and can't accept special characters[2] is not a secure communications app

It is trivial to find the plaintext in these situations.

Your Chrome extension has a very elementary RCI bug in it[3], which, because of your extension's broad permissions profile[4], means anyone with your extension installed can have arbitrary code executed by visiting any page.

To release (excuse me) crap like this on one hand while FUD'ing Google's security practices on HN on the other requires a level of hubris that I don't think i've ever previously encountered.

[0] https://www.gibberit.com/

[1] http://i.imgur.com/CsgOkZ2.png

[2] http://i.imgur.com/uZg0E4l.png

[3] http://i.imgur.com/eq19mET.png

[4] http://i.imgur.com/lOsibBP.png


Could you explain how [3] is an RCI bug? getNum() returns either 'false' or 'n' with the length of gibberText (ie. n20, n35, etc). I can't imagine any content where .length() would return harmful code; though I'm not well versed in JS.


I'm also interested to know.

Believe it or not, I would love to get Nik as a consultant. I fear my 'hubris' (I won't deny it, this idea is extraordinarily ambitious and I have to be arrogant to even conceive of it) will have pissed him off irrevocably.

That aside, I don't really follow his point on the login PW. I understand 8 char alphanum pw is pretty low entropy... but that isn't used for encryption. And the login attempt rate is pretty strictly rate limited.

And yes, I am getting professionals - not me - to do the heavy lifting. I wrote the proof of concept. I am in no way surprised to find it has issues - I am aware of a few others myself.


It isn't the login password but the message password - although using sha256 for a login password isn't great either

if you're doing

aes(plaintext, sha2(password)) = cyphertext

given cyphertext I can get to plaintext with sha2(8-char dictionary)

well designed systems will generate a truly random key there, exchanged using public-key. if you're going to use a password, you need a key-derivation algorithm

this is all bunk tho since the big vulnerability here is that you're delivering the encryption routines via javascript in a global browser space
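To illustrate, here's a minimal Python sketch (standard library only; the dictionary and the password `sunshine` are made up) contrasting a bare sha256 key with a salted, iterated KDF:

```python
import hashlib
import os

# Naive scheme: key = sha256(password). One cheap hash per guess,
# so an 8-char dictionary password falls almost instantly.
password = b"sunshine"  # hypothetical 8-char alphanumeric password
naive_key = hashlib.sha256(password).digest()

dictionary = [b"passw0rd", b"iloveyou", b"sunshine", b"football"]
recovered = next(
    (guess for guess in dictionary
     if hashlib.sha256(guess).digest() == naive_key),
    None,
)
assert recovered == password  # attacker wins after four guesses

# Better: a salted, iterated KDF. Each guess now costs 200,000
# hash iterations, and the random salt defeats precomputed tables.
salt = os.urandom(16)
kdf_key = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)
```

Even PBKDF2 only slows the attacker down by a constant factor, which is why a randomly generated key exchanged via public-key crypto is the right design.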


> this is all bunk tho since the big vulnerability here is that you're delivering the encryption routines via javascript in a global browser space

So what about mailvelope?


Nm, I understand your point, and yes, no contest. The extension is being broken up and will communicate with the environment on the tab with sent messages, rather than just injecting the whole content script. I hear your point loud and clear.


You're not taking your own advice from two comments up :)


Regarding professionals? I hear you - loud and clear.

I want you to know, very sincerely, I appreciate your feedback over the past two days.

Some lessons (re-)learned:

* Security is a conclusion, not an assertion - it is improper to present a system as secure without evidence.

* I am not, nor will I ever be, qualified to provide a conclusion regarding security.

* The language on the homepage needs to be clear in this regard without being 'cute.'

* If I ever post on HN regarding security, either use evidentiary sources to back my points or provide code.

Thanks for the reality check.


That image isn't very useful on its own.

the design of the app is that it injects content scripts with global variable names everywhere.

any site can overwrite the encryption functions, or redefine some of the global vars that are used for images, etc.


Believe it or not, I very much appreciate your feedback.


I'd probably take the site down, or mark it clearly as being a hobby project.

As for Chrome - that team has some of the smartest infosec and cryptography people in the world working on and contributing to the project. If you want some insight into how some of the security design principles and tradeoffs were rationalized, I'd start with the project wiki:

https://www.chromium.org/Home/chromium-security


It's currently marked as beta all over the site, for exactly these reasons.


it is more prominently marked as "secure, end-to-end encrypted communications" - which it certainly is not.

it is also being marketed as such by you here


You can check again - I have changed the language. That was improper and thanks for the feedback.


Don't take this the wrong way, but as a non-lawyer, I try to heavily caveat any statement I make about the law.

Would you consider heavily caveating statements you make about information security? A lot of what you say here is basically wrong.


Indeed, this is technical expertise domain of an information security researcher, not a lawyer.

A national security lawyer could provide interesting insight into how the CIA is allowed to use these tools vs NSA.


> I try to heavily caveat any statement I make about the law.

That is appreciated, and you are in the minority.

I'm taking this advice, btw, and being more circumspect when I post in the future.


:)

Cheers.


For serious, thank you for taking the time to engage. I do take this seriously.


Yep. Same. And I would probably have posted a longer and less confrontational explanation of why you're (mostly) wrong if I weren't tired after a long day of work. ;)

The whole "why not encrypt local resources" thing is an odd red herring that a lot of (even fairly experienced) people trip over. There was a massive public furor over Chrome's chrome://settings/passwords (i.e. lack of a master password) design choice a couple of years ago that was a specific such case in point.


This argument is almost exactly what I was on about. I'd love to see some summary of it and why they came down where they did.


Sorry. I went to bed. I'll frame the basic argument for Chrome and then show how it expands to other systems.

Chrome: =======

Someone who can access chrome://settings/password is presumed to have physical access to your powered-on, unlocked machine. E.g. someone who sits at your keyboard when you get up for coffee.

And that person can just as easily install a Chrome extension that sniffs your passwords or steal your raw auth cookies directly from the developer console. (He could even paste some JS into the developer console to intercept the password as-typed by autocomplete!)

(Note that an attacker with access to a locked/powered-off machine or with no local access is not part of the threat model, since they are presumed to be addressed by FDE, screen locks, remote access controls, etc.)

Now, the major counterargument is essentially that a lot of unsophisticated attackers (like spouses) may not know about cookie jars or JavaScript, but they know about "view saved passwords." I find this argument somewhat reasonable, but from some vantage point it's security theater--not knowing about ctrl+j isn't a strong security guarantee, after all. So I view the Chrome team's stance as being a very principled one, namely: don't invest in "security" features where a bypass would not be a bug.

(In some literature this is referred to as a "security boundary", typically defined as "a control which, if bypassed, has a bug." Note the contrast with, for example, spam filters and antivirus, which may be sometimes bypassed while working as intended.)

More generally: ===============

I think what was lacking in this conversation in general was a firmly defined threat model and a firmly defined security boundary. My contention about per-application encryption is that it doesn't represent a security boundary because any attacker who can execute code that can read application-ACL'ed data on disk is by definition either running code at a higher security level (e.g. has root) or is running code at the same security level (and can thus inject code into the browser process itself).

This conversation gets a little more complex when talking about mobile OSes that have per-application sandboxing, but the same observation effectively holds.

Anyway, I'm tired of typing, but hopefully that makes a bit of sense. Let me know if I'm being confusing.


So, tell me what I am misunderstanding here:

* On OSX, OS passwords are stored in the keychain.

* However, Chrome stores passwords in a local SQLite database https://www.howtogeek.com/70146/how-secure-are-your-saved-ch..., which, on OSX, I believe is in your Application Support folder ("ChromeDB").

* The user, who is not root, has read/write access to the ChromeDB.

* Is it not the case, then, that any script that has user-level permissions can access the Chrome passwords? Because Chrome is not available through the app store, it does not store passwords in the OSX keychain, which, again, correct me if I'm mistaken, requires higher permissions to read? So that, for instance, a malicious script that only had user-level permissions could not access the contents of databases encrypted with credentials stored in the keychain?
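The worry is easy to demonstrate against a mock of Chrome's "Login Data" file. Caveat: the real schema and location vary by platform, and on macOS/Windows the password column is additionally wrapped with Keychain/DPAPI, so this simplified mock only shows that plain user-level file permissions suffice to read the store itself:

```python
import os
import sqlite3
import tempfile

# Build a stand-in for Chrome's "Login Data" file (schema simplified).
db_path = os.path.join(tempfile.mkdtemp(), "Login Data")
con = sqlite3.connect(db_path)
con.execute("CREATE TABLE logins "
            "(origin_url TEXT, username_value TEXT, password_value BLOB)")
con.execute("INSERT INTO logins VALUES (?, ?, ?)",
            ("https://example.com", "alice", b"hunter2"))
con.commit()
con.close()

# Any process running as the same user can open and dump the file:
# no root, no exploit, just ordinary filesystem permissions.
rows = sqlite3.connect(db_path).execute(
    "SELECT origin_url, username_value, password_value FROM logins").fetchall()
print(rows)
```

Whether the `password_value` column is then readable in the clear depends on the platform's OS-level protection, which is the crux of the thread.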


If the malicious script can execute arbitrary code as you, then you're owned; essentially you've allowed unsupervised use of your computer, which, as discussed, is not part of the threat model.


Right, but I believe I have seen the threat model I describe actually functioning in the wild - cryptoviruses that archive files on your local drive that are likely to contain passwords and then mail them to the C&C server.

Would local encryption of these password stores be a (potentially) effective protection against this?


To expand on this (correct answer) very slightly, what you say (libertymcateer) is true but misses the other vector:

A malicious script that runs as user X can typically (on desktop OSes) inject code into any other process running as user X at the same security level. The details vary by OS - in Windows there's a system call called NtCreateThread that lets you inject code from a loaded DLL in your process into any other process at the same or lower security level; in OSX, at a quick Google, it looks to me like http://web.mit.edu/darwin/src/modules/xnu/osfmk/man/thread_c... may do the same.

So the attack that this opens up is to basically wait until Safari is running and loads the credentials into its memory--which it has to do to prefill the password field in a page--and then just read that memory from your code running in the same process in a different thread (which shares the address space). And if you don't want to wait, you can simply request the credentials directly from the Keychain API; Keychain doesn't know you're not "Safari" (since you're running in the same process) and will happily give you the credentials!

Now, there's still a small advantage to the DPAPI/Keychain approach, namely that it allows the OS to show approvals to the user ("Unlock the keychain?" dialogs or whatever), ensuring that the malware can only steal credentials while the user is present. There are some circumstances where a credential API is nonetheless useful. Offhand:

1. Cases where there's a test-of-presence ("Do you want to unlock the Keychain?") conducted by the higher-privilege process, and where approval is not routine (so that the user is not going to just click "OK"). Browser password autofill is not such a case, however.

2. Cases where there's a test-of-presence and the user assent is transactional (i.e. they see what they're approving and the approval is only good for that one action--as with Windows UAC).

3. Cases where the credential granted is a signature and not a bearer-token, and we find some advantage in the token itself being bound to the device. (IOW, the down-level process can steal a signature, but it can only use the signature for some limited use, and cannot steal the signing secrets, which never leave the privileged domain.)

So to get this right requires a lot of thought about things like broker processes, transactional approval, etc. I'm far from an expert in this, but hopefully the above makes some sense.


Right - you are describing a very well written worm up at the top of your comment.

However, in my experience (disclaimer: the plural of anecdote is not data, I am very well aware of this), the frequency of worms and viruses that are released by script kiddies using commercially available malware is on the rise, and these are malicious and effective but not terribly sophisticated. Check my other thread for more on this.

In other words, what I am saying is that you are describing a very nasty theoretical worm - I am, however, describing to you a family of worms that is currently out in the wild and causing a hell of a lot of damage, and, as far as I know, actually does function in the way I describe. Filecryptor viruses can be made / purchased by any script kiddy jerk these days, and it seems to me that they do not function in this very sophisticated way you describe, but instead may actually be stymied by local encryption of files with passwords in them. (Or, rather, the distribution of your passwords to the virus owner would be stymied.)

I would very much like to know if this is accurate or not. I understand that the devil is in the details, but if it is true, then I stand by my point that it seems unwise (borderline indefensible) not to encrypt local password stores - as there is a known, valid threat. If it is not true, then I stand corrected - which happens all the time.

Either way, I am deeply interested to know.


I don't have a huge amount of exposure to current malware trends, to be honest--it's not the area I work in at the moment. So tl;dr I can only guess.

You're right that unsophisticated malware may be thwarted by per-app disk encryption or credential stores like Keychain, but it doesn't represent a security boundary. That's why I would describe the Chrome team's approach as being "principled"--they're refusing to implement an ambiguously useful security feature because its bypass would not represent a bug.

Whether such a feature is nonetheless valuable for the user is unanswered by that discussion, however; as you say, it may have value in some circumstances.

However, remember that by volume most exploitation is (as best as I can tell) economic--people who do it for business. And people doing it for business can buy whatever malware is on the market. If stealing in-memory secrets is reliably accomplished (which it is), malware vendors have a strong incentive to implement this and sell it as well.

So I think you have the right idea, but answering the question is nontrivial. If Chrome implemented file encryption (or, more likely, used the platform APIs where available), would the engineering cost (and complexity--e.g. different behavior on different platforms) be counterbalanced by the increased cost imposed on malware authors? Or would one or two malware authors quickly adapt and malware prices/effectiveness would remain fairly static?

You get the point.


Found it: the worm is called Dynacrypt.

Check it out and let me know what you think.

Edit: from the top google result on Dynacrypt:

>While the ransomware portion of DynA-Crypt, as described in the next section, is a pain, the real problem is the amount of data and information this program steals from a computer. While running, DynA-Crypt will take screenshots of your active desktop, record system sounds from your computer, log commands you type on the keyboard, and steal data from numerous installed programs.

>The programs and data that DynACrypt steals includes:

>Screenshots

>Skype

>Steam

>Chrome

>Thunderbird

>Minecraft

>TeamSpeak

>Firefox

>Recordings of system audio


> Compare the security of Android - which we now know to be 'owned' by the US Government

To what are you referring to here, precisely?

Since AOSP is open source, is there a specific line of code that you can point to that contains (or is emblematic of) this insecurity?

Your article doesn't seem to say.


> Here is my take as an information lawyer and (slightly-higher than script-kiddy-level) web developer

as a "(slightly-higher than script-kiddy-level) web developer" I'm going to guess that he doesn't actually know very much about AOSP, the Linux kernel, or indeed GNU/Linux security in general. So his emphatic statement "Compare the security of Android - which we now know to be 'owned' by the US Government" is pretty much worthless as he's very clearly speculating about things that he doesn't understand.


> I'm going to guess that he doesn't actually know very much about AOSP, the Linux kernel, or indeed GNU/Linux security in general.

I know a fair amount for a 'layperson', which you can (probably rightly) argue means I am unqualified for comment in these circles, and you are right that I am including too much speculation. I absolutely, inarguably overstepped. My bad.

However, with regard to Android being owned - this article is literally about the CIA tools that are used to compromise Android. The trove has been released. It is incontrovertible at this point: https://www.washingtonpost.com/world/national-security/wikil...

The issue is to what extent other fundamental assumptions are now called into question. Is it only android, or is Chrome now suspect as well? What protocols are compromised?

I suspect we will find out more in the coming days, and I should have been more circumspect in my own post. It was unbecoming.

In any event, thanks for your feedback.


You should be careful about making bold proclamations about the supposed insecurity of Linux or AOSP. Especially under the guise of tech-savvy lawyer. And just because you made a browser extension that doesn't trust Chrome's security model doesn't mean you're also qualified to confirm the complete and utter "owning" of the Linux kernel on ~5 year old Android devices.


Your point is well taken. Consider me suitably chastened.

Back to the question at hand - there seems to be extremely strong evidence that Android - and iOS, apparently - are compromised pretty thoroughly. https://betanews.com/2017/03/07/wikileaks-vault-7-cia-year-z...

If this is not the case, then what is your conclusion? Because the claim being made far and wide, right now, is that Android, iOS and Samsung smart TVs appear to be fundamentally compromised.


The CIA tools to own it were just leaked. You are commenting on the thread talking about those tools and the leak announcement.


As far as I can tell, my question ("which line of code is broken?") is also not answered either in the NYT piece or the wikileaks release. If I'm wrong, do you have a link?



careful with claims that android is 'owned'.

You want to distinguish between:

- remote takeover (like stagefright vuln)

- ability to take control of the device from within an app sandbox

- ability to unlock a device in your physical possession (FBI was able to do this on prev V of ios)


I spent a fair amount of time reading up on the wikileaks last night. While the released info does appear to not be beyond KitKat, the array of listed hacks is staggering. Have you had a chance to look? I understand that there are very fine levels of distinction between these types of hacks, but it seems to me that there were multiple root compromises.


Was going to email you but I couldn't find a way of getting your contact information without enabling google JS (something you might want to consider as a privacy advocate)

The background video on gibber is awful, makes it very hard to read the page. I've just opened it in another browser with all the JS on and again your page totally doesn't work with the google ajax switched on. Worth fixing.


Noted, and thanks. Your feedback is much appreciated.


>Compare the security of Android - which we now know to be 'owned' by the US Government - with the security of iOS, which was the subject of a public and gruesome lawsuit about a year ago because the Fed could not hack iOS.

So you're comparing the current security of the iPhone with old CIA Android and Chrome exploits from circa 2011-2013?


> Please be aware that the Chrome browser does not offer a secure local storage protocol for its developers ... Compare this to Safari, which offers secure local storage at OS level security

But this is just as secure as full disk encryption of the device right?


Not for malware running in user space.


Running in user space is not enough. It would need root. It's also hard to keep root when you have dm-verity and SELinux in enforcing mode.

Android applications are also sandboxed from each other. You would have a hard time getting from one app to another's files, unless the original app published them - or you've got root.


The browser runs in user space. Desktop OSes don't offer any sort of partitioning here. So there's very little reason to have any app-level encryption on a desktop OS.
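To illustrate that point, here's a tiny sketch of why app-level encryption buys little on a desktop OS. The file path and contents are made up for the demo (real browsers keep local storage under paths like ~/.config/&lt;browser&gt;), but the mechanism is real: any two processes running as the same user can read each other's files by default.

```python
import os
import tempfile

# Simulate "app A" (say, a browser) writing unencrypted local storage
# somewhere under the user's profile. Path and data are hypothetical.
profile_dir = tempfile.mkdtemp()
storage = os.path.join(profile_dir, "Local Storage")
with open(storage, "w") as f:
    f.write("session_token=hunter2")

# Any other process running as the same user -- i.e. "the malware" --
# can simply open the file: desktop OSes don't partition apps from
# each other the way Android/iOS sandboxes do.
with open(storage) as f:
    leaked = f.read()

print(leaked)
```

The same logic is why OS-level full disk encryption protects against a stolen laptop but not against user-space malware: once the disk is unlocked and you're logged in, every process you run sees the plaintext.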


Eh, they couldn't hack (1) the device in standby/locked with keys flushed and (2) the encrypted physical data on the flash. But that is a much more difficult proposition.

Just hacking the device while it is being used is what every iOS jailbreak does. And there seem to be quite a number of them.

Not sure why you mention "secure local storage"; none of that local storage is secure if the device is compromised! That can then be bypassed in the same way that you bypass WhatsApp or manipulate any other app on the device.


I would like your take on a more specific question: Do you think that Google applications (GMail, Search, Translate, Maps, etc) themselves can get access to the necessary kernel subroutines to catch information which is intended for encryption? Or would running a custom rom (ie. cyanogenmod) while still using Google applications suffice to mitigate these attacks?


Wait, what?

If you are running something based off of AOSP, you're running code that was touched by Google employees. Is your fear that Google is installing backdoors to help the CIA? If so, why are you afraid of that?


> you're running code that was touched by Google employees

Doesn't matter who touches code when that code is publicly visible and available to the scrutiny of everyone. AOSP can be checked out and audited independently, just like any open source project.


Sure, I guess. As I noted elsewhere in this thread, in general your risk is that the cost of exploitation is low (e.g. Android 4.x) or your value of exploitation is high (e.g. San Bernardino shooter).

Open vs closed source is a distinction I don't see a lot of folks in the security community take seriously, and for good reason: it's a response to a very specific threat model, where your concern is not primarily accidental 0days but intentional backdoors.

I would posit that the cost of a backdoor is probably higher than the cost of an 0day: the reputational risk to Google or Apple if they were discovered to have planted one is worth potentially billions of dollars in sales, so they will spend a lot of money fighting any such court order (and, as far as we know, such an order has never been successfully made).

The counterargument here is that if the government did win such an order, the backdoor is the gift that keeps on giving, whereas 0days eventually get patched and fixed.

But that's a long digression. For most users, this is simply the wrong threat model.


Right, so in the scenario I mentioned, an update to a Google application would give this application more access to the kernel (through some backdoor) and enable it to intercept the communication of other apps. I'm asking whether this is possible or not - assuming the kernel itself cannot be modified. If that's the case then parts of the android kernel or the way android handles access to microphones, etc. might need to be hardened in the future.


The Android security model doesn't work that way. Non-system applications can't access the kernel, minus a local EOP or something like that.

Is that your concern? And if so, why are you concerned specifically about Google apps? Any malicious app can exploit a local EOP.


Google Apps are typically installed as system apps. Play Store is obvious, since it needs to be able to install/update applications without prompting. Why other apps (e.g. Gmail) need system-level permissions is less obvious, but most of them fail to run if you just sideload them without the permissions.


Huh? What permissions are you referring to that the Gmail app has?

Also, if I remember right (and I'm not an Android expert, so grain of salt here), Android OS itself enforces sandboxing based on app signing keys; even the Play app can't overwrite the Signal binary without a binary signed by the same key (though conceivably it could install some other fake-Signal app that looks just like Signal and has a similar icon--but that app would not have access to your private key!).
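A toy model of that same-signer rule, for anyone curious. This is a sketch only: real Android compares the APK's full signing certificate chain, not a bare hash, and the package/cert names here are invented. But the core invariant is as described above: an update is accepted only if it's signed with the same key as the installed package.

```python
import hashlib

def cert_id(cert_bytes):
    # Stand-in for Android's certificate comparison.
    return hashlib.sha256(cert_bytes).hexdigest()

# Installed packages, keyed by package name -> signing cert identity.
installed = {"org.thoughtcrime.securesms": cert_id(b"signal-signing-cert")}

def can_update(package, new_apk_cert):
    # Updates signed by a different key are rejected outright,
    # so even the Play app can't silently swap in a modified binary.
    return installed.get(package) == cert_id(new_apk_cert)

print(can_update("org.thoughtcrime.securesms", b"signal-signing-cert"))
print(can_update("org.thoughtcrime.securesms", b"some-other-cert"))
```

And as noted above, a look-alike app installed under a different package name wouldn't trip this check, but it also wouldn't inherit the real app's sandboxed data (e.g. Signal's private key).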


While you are correct about the enforcement applying to Google Apps the Google Play Services has all possible permissions. I don't know if they could do something with the kernel from that alone though.

Personally I trust Android as much as I'd trust iOS... Which is to say I expect the government can get at either with physical access but only at the highest levels of government (CIA/NSA/FBI).


True. But Play Services is effectively a part of the OS (from Google's perspective).

As you say, in both cases (iOS and Play Services) it's a commercial closed-source bundle. shrug

I don't personally spend time worrying about that, given that Google and Apple's code is probably better reviewed than some random open source app, but some people like to nerd out about such things.

As you say, the FBI was eventually able to get access to the San Bernardino shooter's phone. But this isn't exclusive to the highest levels of government; it just depends on your budget: http://www.reuters.com/article/us-apple-encryption-fbi-idUSK.... It's not surprising the CIA would have a stockpile of unpatched 0days, found or bought.

I don't believe I'm worth $1m to anyone, so I feel pretty safe using both iOS and a recent, patched Android.


Totally agree with you. I like my Nexus 5x a lot. I figure that usually it's the highest levels of government who are actually willing to pay a sum like that though. I doubt the local PD is willing to dedicate 1mil to cracking a phone if you get arrested for possession of a controlled substance or something.

And it's kinda an unspoken goal of mine to, ya know, not end up on a CIA watch list. I know some of the concern is about oversight, so the government doesn't get out of control, but I think we should expect that a government's worth of resources can do something as trivial as cracking a commercial phone.


> And it's kinda an unspoken goal of mine to, ya know, not end up on a CIA watch list.

First rule of not being on the watch list is not to admit you don't want to be on the watch list.

What, do you have something to hide? Huh?


I invoke my 5th amendment right.


> Huh? What permissions are you referring to that the Gmail app has?

Maybe he was referring to these privileged permissions: http://android.stackexchange.com/a/17874/104563


I don't think that confers any ability to bypass the sandbox. Again, not an Android expert, so happy to be shown I'm wrong.


Yes, that is my concern. You mention system applications - I believe this excludes any application which can be installed (with an app store or apk)? What is a local EOP? I couldn't find any info on this abbreviation. I used Google as an example.


EOP = Elevation Of Privilege. e.g. a local->root exploit.

To expand, this would be some vulnerability which allows a non-privileged local app (like Gmail) to execute code at a higher security level.

The focus on Google apps specifically here is misleading. In the Android (and iOS) security model, apps are sandboxed, and cannot generally inject code into other apps (in contrast to most desktop OSes, where all processes running as "you" can sort of do what they want to each other).

The threats that apply on Android or iOS are, roughly speaking:

1. You grant an app more permission than it should have (e.g. microphone or keyboard input)

2. Local EOP plus installing a malicious nonprivileged app (or a remote code execution vuln) such that someone can get root on the device and inject code into Signal (or whatever)

3. A backdoor in the OS or app you are using

Android and iOS both have vulnerabilities in the wild. Older Androids are riddled with them, and the Android ecosystem is shit for getting updates out. If you're not using a Nexus or Pixel or a device from a reputable OEM (supposedly Samsung takes patches seriously, but I don't pay attention to this), you're probably easily exploited.

That's all the news that's here, AFAICT. The focus on encrypted messaging apps is on the one hand silly and on the other probably necessary. Everyone in the security world knows that the easiest way to beat end-to-end encryption is to compromise the endpoint. But everyone in the wider world thinks that if they use Telegram they're secure, even if they're using an unpatched Samsung from 2011.


Well, the Play store can seemingly update my system apps with no prompting. So I would guess that it's possible even on Cyanogenmod.


The open source side of that spun off as LineageOS: http://lineageos.org/

There was some huge political problem and the Cyanogen company did something the open source guys didn't like, so they left.


CIA Android Exploits

https://wikileaks.org/ciav7p1/cms/page_11629096.html

As you can see they pretty much all reference very old versions of Android (v4) and Chrome.


I thought they were already compromised, since both these services use SMS authentication; and because the defaults AFAIK aren't particularly concerned about a change in the public key, it's broken for anything secure anyway.

Tox on the other hand seems much more secure... though I guess if your phone is compromised you're pretty much screwed to start with (which is not too hard with all the bloatware one needs these days).


See this: https://github.com/TokTok/c-toxcore/issues/426

Long story short: if someone obtains your Tox private key, they are able to impersonate you in the conversations with other people without you realizing it.

Tox developers admitted this was an issue. Fixing this means changing the protocol itself (which will affect everyone).

Tox is still experimental (which they admit here: https://github.com/TokTok/c-toxcore/issues/426) and it is not advisable to use it.
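A stripped-down sketch of why that issue matters. This is not Tox's actual protocol (the construction and names here are illustrative only); it just shows the general property: when authentication rests entirely on a long-term key, anyone who obtains that key produces byte-identical responses, so the peer has no way to tell owner from impersonator.

```python
import hashlib
import hmac
import os

# Alice's long-term private key -- the single secret that identifies her.
long_term_key = os.urandom(32)

def authenticate(challenge, key):
    # Respond to a peer's challenge by proving knowledge of the key.
    return hmac.new(key, challenge, hashlib.sha256).digest()

challenge = os.urandom(16)
alice_response = authenticate(challenge, long_term_key)

# An attacker who stole the key computes exactly the same response;
# nothing in the transcript distinguishes the two sessions.
attacker_response = authenticate(challenge, long_term_key)
print(alice_response == attacker_response)
```

Protocols that mix in per-session ephemeral keys (and detect when two "copies" of an identity are online) can at least make this kind of impersonation observable, which is roughly what the linked issue asks Tox to do.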


Given the other revelations of the last few weeks, I have to wonder if these exploits are getting installed on every phone that the CBP demands people unlock. Seems like the obvious thing to do. Best not to trust your phone or any software on it at least without a factory reset, and preferably a software update, after it's been in CBP custody for any time.


Besides the initial titlegore, these tools really aren't that surprising. I've always operated under the assumption that if the NSA, CIA, etc are in your threat model you've already lost.


Lol at NYT: it says that when they jack into an Android phone they are able to route the messages to a third party before they get encrypted.


This is why we should not rely on encrypted apps running on top of some other platform.

disclosure: working on an open source alternative for messaging


Wouldn't you also need an encrypted os for your phone?


A fully open source RTOS that is trusted and only running this single application. The only external communication is the encrypted messages.


This just in: man looking over your shoulder bypasses strongest Signal encryption!


I found none of these revelations surprising. In this era, you have to assume that someone is monitoring you. You're naive if you think otherwise.


[flagged]


That's because so far he's been right. Every news outlet seems to want to report on a "signal hack" and will go to great lengths to twist words to make it sound like that happened.


Signal doesn't even need defending here.

The article claims that the CIA has compromised the Android device itself. They are intercepting communications before/after it's decrypted on the device.

Signal can help make sure you're transmitting information encrypted over the wire, but it can't really help you if your device is compromised.


Agreed, signal on iOS->iOS still seems unaffected.


> Mention "Signal" in any article and you'll have @tptacek running here to defend it with any costs.

You made a new account today just for that?

You would enjoy Yasha Levine https://twitter.com/search?q=%40yashalevine%20signal&src=typ...


As the lead developer (I think) I would expect him to counter any points made about the service to the positive or negative.


tptacek != moxie



