I have specific reasons to believe that Apple has been subject to legal pressure. But if you didn’t believe six anonymous sources in a story by a reputable reporter, you’re not going to believe my secondhand reports either. Skepticism is fine; stubborn, unfounded skepticism that will yield to nothing short of a direct confirmatory statement from Apple isn’t possible to argue with.

Apple being legally pressured is not the full story. It is absolutely true that they have been pressured by the FBI and others, and simultaneously that they have real concerns about user experience with lost backups. If you read the Reuters story, it doesn’t draw a straight line from the FBI to the backup situation; it just points out that legal pressure is a factor in Apple’s reasoning. Apple spent a lot of money building an E2EE key vault based on HSMs several years ago, and it’s also fairly obvious that they had bigger plans than securing passwords and browser history. Yet they have not made full E2EE backup available even as an option for advanced users, despite the fact that even Android now supports E2EE backups. And prior to enabling E2EE backups (one assumes that’s coming this year), they paused to build exactly the on-device scanning system that law enforcement has been exhorting cryptographers to build since William Barr’s open letter in 2019. It does not take a great deal of imagination to see the pattern, but obviously only Tim Cook can prove it to your satisfaction.

ETA: Just to take this a step beyond “someone is arguing on HN”: this argument matters because I think we all intuitively understand how dangerous this system would be in a world where Apple’s engineering is responsive to government pressure. Your skepticism makes perfect sense if you want to believe this system is secure. I wish I could live in a world where I could share that skepticism; it would be a more relaxing place.




I’m not sure why it’s obvious to you that nothing short of Tim Cook personally whispering into my ear would convince me. The FBI and every other intelligence agency are probably pressuring Apple all the time. Elsewhere in the thread, I even say that I think law enforcement pressure is one reason Messenger has not turned on E2E by default. I understand how this works.

What you haven’t convinced me of is whether Apple’s priorities are being driven by the pressure. Apple can believe keeping known CSAM off their services is important, and just because someone else agrees doesn’t mean the outside party was critical or the cause of the decision. We live in a society where there are lots of non-government reasons to not be the world’s #1 CSAM host, especially as the famously anti-porn company.

To what extent Apple’s intentions are sincere or coerced is important to suss out because it changes the likelihood that Apple, in the long term, will build different features that endanger its users. I agree that the platform vendor is “intuitively” a source of risk, but I don’t think what they’ve announced is any more (technically) dangerous than anything else my device already did. Even if Apple is outright lying about the contents of the hash database and what their human reviewers will flag, they could’ve been outright lying about whether they slurp my iCloud Photo Library straight out of iCloud with the keys they escrow. Besides that, there is no other possibly untoward behavior that I can’t verify locally. In fact, if Apple had built iCloud-side scanning instead, I’d be at least as concerned about future features, because there I’d have no audit rights.

I don’t “want to believe” the system is secure - I have the tools to confirm that the system exposes me to no risk that I’m not already comfortable with as an (for sake of argument) iCloud Photo Library user, and almost all of the other risks are hypothetical. I’m even open to believing that the other risks are more probable today than a month ago, but the evidence isn’t very strong. Some evidence that would change my mind: any information about NCMEC being compromised by nation states and Apple ignoring that evidence, any evidence from Apple sources stating that they worked with the FBI on this system design, any evidence that Apple is expanding the system beyond CSAM.

Which brings me back to a question you never answered: how confident are you that the system presages generalized full device content scanning, and what evidence would change your mind?


I never said the system presages full-device content scanning. All I’ve said (including in this NYT op-ed [0]) is that it enables full-device scanning. Apple’s decision to condition scanning on a toggle switch is a policy decision and not a technical restriction as it was in the past with server-side scanning. Server-side scanning cannot scan data you don’t upload, nor can it scan E2EE files. Most people agree that Apple will likely enable E2EE for iCloud in the reasonably near future, so this isn’t some radical hypothetical, and the new system is manifestly different from server-side scanning in such a regime.
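
To make that difference concrete, here is a minimal sketch of why a client-side check survives E2EE while a purely server-side one does not. None of these names or types are Apple’s, and the announced design actually uses a perceptual hash (NeuralHash) plus a private set intersection protocol rather than a plain hash lookup; the only point being illustrated is that the match happens on plaintext, before encryption:

    import CryptoKit
    import Foundation

    // Stand-ins only: the announced design uses a perceptual hash and a private
    // set intersection protocol, not an exact SHA-256 lookup against a local set.
    func contentHash(of photo: Data) -> String {
        SHA256.hash(data: photo).map { String(format: "%02x", $0) }.joined()
    }

    let blockedHashes: Set<String> = []          // hash database shipped with the OS
    let userKey = SymmetricKey(size: .bits256)   // held by the user, never uploaded

    // Client-side: the check runs on plaintext, before encryption, so it still
    // works even when the server only ever receives ciphertext.
    func prepareForUpload(_ photo: Data) throws -> (ciphertext: Data, flagged: Bool) {
        let flagged = blockedHashes.contains(contentHash(of: photo))
        let ciphertext = try AES.GCM.seal(photo, using: userKey).combined!
        return (ciphertext, flagged)
    }

    // Server-side: once backups and photos are E2EE there is no plaintext to
    // inspect, and data that is never uploaded is out of reach entirely.
    func serverSideScan(ciphertext: Data) -> Bool {
        return false // nothing useful can be computed from ciphertext alone
    }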

Regarding which content governments want Apple to scan for, we already have some idea of that. The original open letter from US AG William Barr and peers in 2019 [1] that started this debate (and more than arguably led to Apple’s announcement of this system) does not reference only CSAM. It also references terrorist content and “foreign adversaries’ attempts to undermine democratic values and institutions.” A number of providers already scan for “extremist content” [2], so while I can’t prove statements about Apple’s future intentions, I can point you to the working systems operating today as evidence that such applications exist and are being used. Governments have asked for these, will continue to ask for them, and Apple has already partially capitulated by building this client-side CSAM system. That should be an important data point, but you have to be open to considering such evidence as an indication of risk rather than intentionally rejecting it and demanding proof of the worst future outcomes.

Apple has also made an opinionated decision not only to scan shared photos, but also to scan users’ entire photo libraries, including photos they have never shared with anyone. This isn’t entirely without precedent, but it’s a specific deployment decision that is inconsistent with existing deployments at other providers such as Dropbox [3], where scanning is (allegedly, according to scanning advocates) not done on upload, but on sharing. Law enforcement and advocates have consistently asked for broader scanning access, including unshared files. Apple’s deployment responds to that request in a way that their existing detection systems (and many industry standard systems) did not. Apple could easily have restricted their scans to shared albums and photos as a means to block distribution of CSAM: they did not. This is yet another difference.
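
For what it’s worth, that deployment difference is easy to state precisely. The sketch below is purely illustrative (the names and the policy knob are mine, not Apple’s): a scan-on-share policy only ever touches material a user chooses to distribute, while the announced design evaluates every photo bound for iCloud Photos, shared or not.

    // Illustrative only; not Apple's terminology or implementation.
    enum ScanPolicy { case onShareOnly, onEveryUpload }
    enum PhotoEvent { case addedToLibrary, sharedToAlbum }

    func shouldScan(_ event: PhotoEvent, under policy: ScanPolicy) -> Bool {
        switch (policy, event) {
        case (.onShareOnly, .sharedToAlbum): return true   // the scan-on-share model described in [3]
        case (.onShareOnly, _):              return false  // private library untouched
        case (.onEveryUpload, _):            return true   // every photo headed to iCloud Photos
        }
    }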

I’m not sure how to respond to your requests for certainty and proof around future actions that might be taken by a secretive company. This demand for an unobtainable standard of evidence seems like an excellent way to “win” an argument on HN, but it is not an effective or reasonable standard to apply to an unprecedented new system that will instantly affect ~1 billion customers of the most popular device manufacturer in the world. There is context here that you are missing, and I think suggesting more reasonable standards of evidence would be more convincing than your demands for unobtainable proof of Apple’s future intentions.

[0] https://www.google.com/amp/s/www.nytimes.com/2021/08/11/opin...

[1] https://www.justice.gov/opa/press-release/file/1207081/downl...

[2] https://www.google.com/amp/s/amp.theguardian.com/technology/...

[3] see p.8: https://www.europarl.europa.eu/RegData/etudes/BRIE/2020/6593...


> Apple’s decision to condition scanning on a toggle switch is a policy decision and not a technical restriction as it was in the past with server-side scanning.

This is not a meaningful distinction. There are many security and privacy protections of iOS that are equivalently "policy" decisions: letting iCloud Backups be turned off; not sending a copy of your device passcode to Apple servers; not MITMing iMessage, which has no key transparency; not using existing Photos intelligence to detect terrorist content; etc. In technical terms, there are many paths to full device scanning, and some of those paths were well-trodden even a month ago (iCloud Backup and Spotlight, for starters, and Photos intelligence as a direct comparison).

Making this claim also requires showing that the likelihood of Apple making one of many undesirable policy decisions has changed.

> Regarding which content governments want Apple to scan for, we already have some idea of that.

I asked about what Apple will scan for, not what governments want them to scan for. Again, I see a pattern in your argument where you state what governments want and then don't state how that desideratum translates into what Apple builds. The latter is entirely the source of ambiguity for me.

> Apple’s deployment responds to that request in a way that their existing detection systems (and industry standard systems) did not.

I could see that. If that's what Apple had built, would you have a different take on the system? It seems like no -- most of the risks you care about are unchanged since you are operating in a world of "policy decision" equivalence classes.

> I’m not sure how to respond to your requests for certainty and proof around future actions that might be taken by a secretive company.

You seem to misunderstand what I said. I'm asking for an estimate of your certainty, not absolute certainty. Furthermore, I'm asking you to provide examples of what information would change your mind for the same reason you keep calling me stubborn and accusing me of bad faith: without it, I have no idea whether we are engaged in a discussion or a shouting match.



