> Can non-CSAM images be “injected” into the system to flag accounts for things other than CSAM?
Our process is designed to prevent that from happening. The set of image hashes used for matching are from known, existing images of CSAM that have been acquired and validated by child safety organizations. Apple does not add to the set of known CSAM image hashes.
The problem is not that Apple can't add images to the database, but that outside organizations can inject arbitrary hashes into the new, constantly scanning system at the heart of iOS, iPadOS, and macOS. Apple has no way to verify those or any other hashes before they end up in the database.
If the system detects any matches, only some overworked and underpaid content reviewer in Bangladesh is there to stop your life from being destroyed by a SWAT team crashing through your front door at 3 am and killing your barking dog. And who knows whether those outsourced review shops are even trustworthy.
> Existing images of CSAM that have been acquired and validated by X organizations
Will Apple be held responsible? Punished?
Will Apple or these X organizations publish the list of images and allow regular third-party validation? (I've seen security-product companies do that.)
In this era it's hard, VERY hard, to entrust our private property to these giant tech companies. There is little to NO negative feedback for their misbehaviour. These companies need more regulation than individual citizens do.
Child protection is an exception to all standards of due process in the US and Europe. There is NO organization that can intervene when people are mistreated under these laws (the vast majority of those who fall victim to "side effects" are children, of course). Only generic "quality assurance" is done. There is no protection at all for individuals, whether children, parents, or third parties.
> Will Apple be considered as responsible? punished?
The EU court ruled just a few months ago that child protection authorities cannot be held responsible for the damage they cause, EVEN if it is shown that their actions were based on incomplete or wrong data.
> In this era it's hard, VERY hard to entrust our private properties to these giant tech companies.
Okay, well, if you think this is a serious problem, then let me tell you what else social services data is being used for. In Belgium, social services, including homeless shelters, enter data into your medical records that you can't see, erase, or ... Emergency departments will read this data and use it to avoid situations where they would have to use the state insurance for uninsured persons, which is a polite way of saying "refusing care to homeless persons".
This is already a thing today. Most major cloud providers perform server-side scanning, so if a nefarious party can smuggle problematic photos into your cloud storage, you have the same problem.
To make it perfectly clear: I am absolutely against this scanning system, but I think we need to stick to high-quality arguments to successfully argue against it.
There is one question missing: if China asks Apple to flag users who have Winnie-the-Pooh images on their devices, or else leave the Chinese market, what will Apple choose?
I mean, that same situation already applies right now, without this system being deployed. China could already mandate that Apple scan all iCloud or on-device photos for certain images if it wanted to.
They answer that question. The answer they provide is that they would leave the Chinese market.
You might not believe them but they are pretty clear on that point: “we will not accede to any government’s request”
This is the standard you have to hold Apple to. As I said, you might not believe them – but if you don’t trust their statements on some level then it’s game over anyway.
There are about 69 countries where it is illegal to be gay. What if those countries want to know whether you have a rainbow flag or a photo of two men kissing on your phone?
There are apps, like WhatsApp, that allow you to save photos you receive to your camera roll instantly.
If somebody, or another compromised device, sends a large collection of CSAM to your device, it will be uploaded to iCloud, probably before you get a chance to remove it -- the equivalent of "swatting".
Besides the apps that you give permission to store photos in your Photos library, what about malware such as Pegasus, which we've seen again and again?
I wonder if we'll start hearing a year from now about journalists, political dissidents, or even candidates running for office going to jail for being in possession of CSAM. It would be much easier to take out your opponents when you know Apple will report it for you.
I guess all this does is disincentivize anyone who cares about their privacy from using iCloud Photos, which is sadly ironic since privacy is what Apple was going for.
> There are apps, like WhatsApp, that allow you to save photos you receive to your camera roll instantly.
Major cloud providers already scan photos for CSAM, so this feature does not change anything in this regard. If you are using cloud photo storage, you can be targeted with this attack no matter whether you use an iPhone, an Android phone, or anything else, really.
If you're a nation-state or a group that wants to sow discord and distrust among another nation's citizens, their neighbors, and their institutions, what better way to do so than by framing people, some significant and some insignificant, with CSAM?
Apple is not planning to scan SMS messages with this (that’s a completely separate feature, not related to CSAM fingerprints, and never involves law enforcement), and SMS photos are not automatically added to your iCloud Photo Library.
Feels like they messed up the comms on this in a quite un-Apple-like way.
My understanding (at a high level) is that their system is designed to improve user privacy: instead of having to decrypt photos on iCloud (which is, if I understand correctly, how other cloud providers do this scanning, which they are required to do by law?), the matching is done on the device. Without going into the upsides and downsides of either approach, I'm surprised they didn't manage to communicate more clearly in the initial messaging that this is a "privacy" feature and why they are taking this approach, and are instead left dealing with some quite negative press.
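To make the two architectures concrete, here is a rough, purely illustrative Swift sketch of the trade-off. None of these names (KnownHashDatabase, SafetyVoucher, makeVoucher) come from Apple, and SHA-256 stands in for the perceptual hash and blinded-set machinery they actually describe; it's a sketch of the idea, not their implementation.

```swift
import Foundation
import CryptoKit

// Toy model only. The real system uses a perceptual hash (Apple calls theirs
// NeuralHash) and a blinded hash set; SHA-256 stands in here purely so the
// example runs.
struct KnownHashDatabase {
    let hashes: Set<Data>
    func contains(_ hash: Data) -> Bool { hashes.contains(hash) }
}

// Server-side model: the provider must be able to read the plaintext photo
// in order to hash and scan it.
func serverSideScan(photo: Data, database: KnownHashDatabase) -> Bool {
    database.contains(Data(SHA256.hash(data: photo)))
}

// Client-side model: the device hashes the photo itself and only attaches a
// voucher to the upload. In this toy the match flag is plainly visible; in
// the real design even that is hidden from the server until a threshold of
// matches is crossed.
struct SafetyVoucher {
    let matched: Bool
    let encryptedPayload: Data
}

func makeVoucher(photo: Data, database: KnownHashDatabase, key: SymmetricKey) throws -> SafetyVoucher {
    let sealed = try AES.GCM.seal(photo, using: key)
    return SafetyVoucher(matched: database.contains(Data(SHA256.hash(data: photo))),
                         encryptedPayload: sealed.combined ?? Data())
}

// Usage sketch
let database = KnownHashDatabase(hashes: [])  // empty toy database
let voucher = try? makeVoucher(photo: Data("example photo bytes".utf8),
                               database: database,
                               key: SymmetricKey(size: .bits256))
print(voucher?.matched == true)  // false: nothing matches an empty database
```

The point of the second path is that the server only ever receives opaque voucher data alongside the upload; whether it can ever open any of it is governed by the threshold mechanism discussed further down the thread.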
For me, my problem with it is nothing more than selfish. There’s literally no upside for me. Best case, my battery dies a little faster or my power bill goes up a small amount that I will never notice. Worst case, I get a SWAT team in my face because someone accidentally clicks “yes” instead of “no” when reviewing a false-positive.
What’s the upside to me, for this tech? There’s less of this content in the world? I don’t see it anyway, so for me, it doesn’t do anything except become a potential (low) risk liability.
I don’t see why I’m paying their power bill to do something they’re legally obligated to do.
Maybe if you somehow gain some power over this trillion-dollar company, it will start to care about your opinion, or the fact that you've been disadvantaged.
> (which is, if I understand correctly, how other cloud providers do this scanning, which they are required to do by law?)
I don't think that is required by law - at least not in the EU or the US.
There was already a lot of online outrage in July when the EU passed a regulation that merely allows such scanning for the next 3 years - I imagine requiring it would go over much worse.
Apple is good at normal PR and terrible at crisis PR. This is crisis PR, but unlike with Bendgate and Antennagate, they should have seen this one coming, since they caused it themselves.
Expectation: political rivals and enemies of powerful people will be taken out because c*ild pornography will be found on their phones. Pegasus can already monitor and exfiltrate every ounce of data right now; it won't be that hard to insert compromising images on an infected device.
This argument does not work, however, since it also applies to "old-school" cloud CSAM detection, which everyone is doing anyway. The big problem with Apple's approach is instead that they essentially include a configurable (and very fancy) spyware engine on your phone that could easily be extended to do more in the future.
If we think very selfishly from the company's perspective - Apple already had one of the most secure, private and trusted platforms. And they must have anticipated the backlash against the new feature. So I still don't get why a company like Apple would consider the marginal benefit from this to be worth the cost.
Well, one very obvious benefit is that they gain leverage in the debate over their alleged position as a monopolist. They could argue that they have implemented a sophisticated and private CSAM detection engine, which is only possible if they continue to control the entire system.
>CSAM detection for iCloud Photos is built so that the system only works with CSAM image hashes provided by NCMEC and other child safety organizations. This set of image hashes is based on images acquired and validated to be CSAM by child safety organizations.
How is Apple validating the datasets for non-US child safety organisations?
Something I'd missed before:
"By design, this feature only applies to photos that the user chooses to upload to iCloud Photos"
This is not about what people have on their own phones. This is about what people are uploading to iCloud, because Apple does not want CSAM on their servers!
They should have thought about that before going into the cloud business. I mean, the amount of nastiness you'll find online is beyond belief. Just ask anyone at FB who is tasked with reviewing posts.
If you go back and read the earlier mega-threads, there were many people pointing this out and downvoted to oblivion. Hysteria is a helluva drug.
For anyone concerned about the hypothetical framing attack via WhatsApp auto-saving, you can selectively control which apps have access to your photo library (and thus iCloud) in settings.
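For reference, this is the permission gate such an app has to clear on iOS 14 and later before it can write anything into your library in the first place. The helper below is hypothetical (saveReceivedImage is not WhatsApp's code), but the Photos framework calls are the real API, and whatever you grant can be changed or revoked at any time in Settings > Privacy > Photos.

```swift
import Photos
import UIKit

// Hypothetical helper (not WhatsApp's actual code) showing the iOS 14+ gate a
// messaging app has to pass before it can write a received photo into your
// library (and, from there, into iCloud Photos). The app also needs an
// NSPhotoLibraryAddUsageDescription entry in its Info.plist.
func saveReceivedImage(_ image: UIImage) {
    PHPhotoLibrary.requestAuthorization(for: .addOnly) { status in
        guard status == .authorized else {
            // Permission denied or not yet granted: nothing reaches the camera
            // roll, so nothing gets uploaded to iCloud Photos either.
            print("No add-to-library permission; photo not saved.")
            return
        }
        PHPhotoLibrary.shared().performChanges({
            _ = PHAssetChangeRequest.creationRequestForAsset(from: image)
        }) { success, error in
            print(success ? "Saved to library." : "Save failed: \(String(describing: error))")
        }
    }
}
```

So the "instant save" behaviour people worry about only exists because the user granted that access at some point, and it stops the moment the permission is withdrawn.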
This is met with disbelief because of one simple fact: There is nothing that limits Apple from scanning other files, other than “we pinkie swear we won’t.”
Corporations have a terrible track record when it comes to keeping non-revenue-impacting pinkie swears. Breaking this one - especially when "what about the children" is involved - would have no meaningful impact on their revenue or share price; it could even go up.
They explicitly don't want to implement it server-side due to the privacy concerns of decrypting every image in iCloud Photos. Doing it on-device limits Apple's possession of decrypted photos to those likely to be CSAM.
Apple obviously does not want CSAM on their servers.
As the document states, they do not want to scan all images server-side for privacy reasons. They just want to flag the positives while keeping privacy standards as high as possible for everyone else.
Implementing it server-side would require the content to be readable on their servers to scan; doing it client-side reduces that risk (or, as they claim, they don't want to do it server-side for privacy reasons - most likely a mix of those two reasons).
Everyone has already said everything that's wrong with it. Nevertheless, Apple can sugarcoat it as much as they like: there is no technical control (neither an actual nor a possible one) making this exclusively about targeting CSAM.
It's frustrating (though not at all surprising) to see Apple continue to be so tone-deaf. They clearly think "If only we could make people understand how it works, they wouldn't be so upset, in fact they'd thank us."
This is not the case - we do understand how it works, and we think it's a bad idea.
An image being erroneously flagged will not have any effect. Until the threshold is reached, Apple is not able to know if or how many images have matched.
And even then, the voucher doesn’t include the key to decrypt the photo itself.
Apple has documentation that explains all of this.
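For anyone wondering what "the voucher doesn't include the key" actually means: as I read Apple's technical summary, the inner voucher payloads are protected with a threshold secret sharing scheme, so each matching image contributes one share of a per-account key and fewer shares than the threshold reveal nothing at all. Below is a generic, textbook Shamir-style sketch in Swift to illustrate the idea; it is not Apple's code, and the numbers (a threshold of 30, the toy prime field) are made up for the example.

```swift
import Foundation

// Generic Shamir-style threshold secret sharing over GF(p).
// Purely an illustration of the idea; not Apple's implementation.
let p: Int64 = 2_147_483_647  // 2^31 - 1, prime

func mulmod(_ a: Int64, _ b: Int64) -> Int64 { (a * b) % p }  // a, b < 2^31, so no overflow

func powmod(_ base: Int64, _ exponent: Int64) -> Int64 {
    var result: Int64 = 1
    var b = base % p
    var e = exponent
    while e > 0 {
        if e & 1 == 1 { result = mulmod(result, b) }
        b = mulmod(b, b)
        e >>= 1
    }
    return result
}

func inverse(_ a: Int64) -> Int64 { powmod(a, p - 2) }  // Fermat's little theorem

/// Split `secret` into `n` shares; any `threshold` of them reconstruct it,
/// while fewer than `threshold` reveal nothing at all about it.
func split(secret: Int64, threshold: Int, n: Int) -> [(x: Int64, y: Int64)] {
    var coefficients = [secret % p]  // constant term is the secret
    for _ in 1..<threshold { coefficients.append(Int64.random(in: 0..<p)) }
    return (1...n).map { i -> (x: Int64, y: Int64) in
        let x = Int64(i)
        var y: Int64 = 0
        for c in coefficients.reversed() { y = (mulmod(y, x) + c) % p }  // Horner's method
        return (x, y)
    }
}

/// Reconstruct the secret (the polynomial's value at x = 0) from any
/// `threshold` distinct shares via Lagrange interpolation.
func reconstruct(from shares: [(x: Int64, y: Int64)]) -> Int64 {
    var secret: Int64 = 0
    for (i, si) in shares.enumerated() {
        var numerator: Int64 = 1
        var denominator: Int64 = 1
        for (j, sj) in shares.enumerated() where j != i {
            numerator = mulmod(numerator, (p - sj.x) % p)            // (0 - x_j)
            denominator = mulmod(denominator, (si.x - sj.x + p) % p) // (x_i - x_j)
        }
        secret = (secret + mulmod(si.y, mulmod(numerator, inverse(denominator)))) % p
    }
    return secret
}

let accountKey: Int64 = 123_456_789                    // stands in for the decryption key
let shares = split(secret: accountKey, threshold: 30, n: 64)
print(reconstruct(from: Array(shares.prefix(30))) == accountKey)             // true
print(reconstruct(from: Array(shares.shuffled().prefix(30))) == accountKey)  // true: any 30 work
// With only 29 shares the polynomial is underdetermined, so `accountKey` stays hidden.
```

The relevant property is that 29 shares are information-theoretically useless, not just "hard to crack", which is why a single false positive, or even a handful of them, gives the server nothing to look at.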
Yes. If you read Apple’s documentation about this, the user is notified immediately when their account is disabled, and they have the opportunity to appeal the decision.
I doubt it, since those photos will likely be close to porn, i.e. of naked people, and Apple doesn't want people to know that the naked pictures they took of their 22-year-old spouse are now floating around some random office in Bangladesh.