
That seems like a reach.

>The current technical implementation limits scanning to images that are about to be uploaded to the cloud, which can be opted out of.

That conflates policy with a technical limitation. Their changes make the technical discussion largely moot at this point.

Their POLICY is to scan only images that are about to be uploaded. But they no longer have a *legal* argument for refusing government requests to scan any data on the device, since the framework is now included.

That is a big change in that regard. Whereas in the past there was a layer of trust that Apple would hold governments accountable and push back on behalf of a user's privacy (and there is a very tangible history there), this implementation creates a gaping hole in that argument.




Actually, it is not just POLICY. This scanning is built very deep into the iCloud upload process. They would need a huge revamp of the system to change that, and that design seems intentional precisely because of this kind of speculation. So we would be having the same discussion whether this is implemented or not.


None of the technical documents point to this being the case. In fact, many of the articles I have read say quite the opposite, including the peer-reviewed paper that outlined the dangers of such a program in its conclusions. [1][2]

Do you have any sources to the contrary?

[1] https://www.washingtonpost.com/opinions/2021/08/19/apple-csa...

[2] https://www.schneier.com/blog/archives/2021/08/more-on-apple...


Their threat model[1] states:

> This feature runs exclusively as part of the cloud storage pipeline for images being uploaded to iCloud Photos and cannot act on any other image content on the device. Accordingly, on devices and accounts where iCloud Photos is disabled, absolutely no images are perceptually hashed. There is therefore no comparison against the CSAM perceptual hash database, and no safety vouchers are generated, stored, or sent anywhere.

and

> Apple’s CSAM detection is a hybrid on-device/server pipeline. While the first phase of the NeuralHash matching process runs on device, its output – a set of safety vouchers – can only be interpreted by the second phase running on Apple’s iCloud Photos servers, and only if a given account exceeds the threshold of matches.
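
To make the "threshold of matches" part concrete, the design relies on threshold secret sharing. Here is a toy sketch of the idea in Python (my own illustration with arbitrary parameters and made-up names, not Apple's code): the per-account voucher secret is split into shares, each matching voucher carries one share, and the server can only reconstruct the secret once it holds at least the threshold number of shares.

    import random

    P = 2**127 - 1  # Mersenne prime used as the field modulus for this toy

    def split_secret(secret, threshold, num_shares):
        # Random polynomial of degree threshold-1 with the secret as constant term.
        coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
        def f(x):
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, num_shares + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0; only correct with >= threshold shares.
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret

    account_secret = random.randrange(P)  # hypothetical per-account voucher key
    shares = split_secret(account_secret, threshold=30, num_shares=1000)

    print(reconstruct(shares[:30]) == account_secret)  # True: threshold reached
    print(reconstruct(shares[:29]) == account_secret)  # almost surely False: below threshold

Below the threshold, the shares reveal nothing about the secret, which is why a handful of matches gives the server nothing to decrypt.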

We should also take into account how the blinding of the hashes works, from the CSAM paper[2]:

> However, the blinding step using the server-side secret is not possible on device because it is unknown to the device. The goal is to run the final step on the server and finish the process on server. This ensures the device doesn’t know the result of the match, but it can encode the result of the on-device match process before uploading to the server.

What this means is that the whole process is tied strictly to a specific server endpoint. To match any other files from the device, those files would also have to be uploaded to the server (the PSI implementation forces this), and based on the pipeline description, uploading other files should not be possible. Even if it were, and the policy suddenly expanded to scanning all files on your device, those files would end up in the same iCloud library as everything else: you would notice them, and you could not opt out of that under the current protocol.

So they would have to modify the whole protocol to upload only the images actually meant to be synced while scanning all files, and those extra scans would then be impossible to match on the server side because of how the PSI protocol works. If they created some other endpoint for files that are not supposed to end up in iCloud, they would still need to store those files in the cloud anyway, because of the PSI protocol; otherwise they have no way to detect matches.
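As a rough illustration of why the match is tied to the server, here is a toy sketch in Python (my own, with a toy group and made-up names, not Apple's actual PSI construction): the server blinds each database hash with a secret exponent, the device uses that blinded entry to derive the key that encrypts its voucher, and only the server, which knows the blinding secret, can recompute the key, and only when the hashes actually match. The device itself never learns the result.

    import hashlib, secrets

    P = 2**255 - 19   # prime; toy multiplicative group, not a production choice

    def hash_to_group(data: bytes) -> int:
        # Stand-in for mapping a perceptual hash into a group element.
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % P

    # --- Server: blinds a database hash with its secret exponent beta ---
    beta = secrets.randbelow(P - 2) + 2
    db_hash = b"known-bad-perceptual-hash"                # made-up stand-in
    blinded_entry = pow(hash_to_group(db_hash), beta, P)  # this is what ships to devices

    # --- Device: has an image hash and the blinded entry, but never beta ---
    def make_voucher(image_hash: bytes):
        r = secrets.randbelow(P - 2) + 2
        header = pow(hash_to_group(image_hash), r, P)  # uploaded alongside the voucher
        key = pow(blinded_entry, r, P)                 # encrypts the voucher payload
        return header, key   # the device cannot tell whether `key` is the "right" one

    # --- Server: tries to recover the key from the header using beta ---
    def recover_key(header: int) -> int:
        return pow(header, beta, P)

    h_match, k_match = make_voucher(db_hash)           # image matches the database entry
    h_other, k_other = make_voucher(b"holiday-photo")  # image does not match

    print(recover_key(h_match) == k_match)  # True: server can decrypt this voucher
    print(recover_key(h_other) == k_other)  # False: payload stays opaque to the server

The point being: without the server-side secret and the server-side half of the pipeline, the device-side output is meaningless, so moving the whole check on-device would require a different protocol, not a policy tweak.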

It sounds like this is pretty far from being just a policy change away.

Many people have succumbed to populism because it benefits them, and it takes some knowledge and time to really understand the whole system, so I am not surprised that many keep saying it is just a policy change away. Either way, we must either trust everything they say, or we can't trust a single feature they put on these devices.

[1]: https://www.apple.com/child-safety/pdf/Security_Threat_Model...

[2]: https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...


I just want to say thanks for the links and taking the time to explain it. I think it’s pretty logical. I see your viewpoint and I think I need to take some more time to consider my stance (again…).



