What Did Apple Just Do?
There seems to be a lot of confusion out there about the child safety features Apple has announced for iOS 15, coming this fall. Unsurprisingly, the Washington Post gets it almost all wrong, starting with the headline.
Apple is not “prying into iPhones.” Apple has announced three things, which are being conflated to some extent. The first is that if you use search or Siri to look for kiddie porn, you will see the notification on the right:
The notification on the left is for searches looking for how to report child exploitation. I don’t see how anyone could object to this, and no one really has.
The second one has raised some hackles, but not as much as the third one, which we will get to in a moment. This uses on-device intelligence (i.e., the ML cores on the Apple A-series chips) to look for porn in the Messages feed of children. This is opt-in: parents decide whether they want to turn on this feature for children 12 or under. If they do, the child’s iPhone will look for explicit images in Messages, and children will see this series of screens, beginning with the second one from the left, if there is a hit:
The one on the left is the parent opt-in prompt. None of this is sent to Apple; it all happens on device, in direct communication between the child’s and parents’ iPhones. This one has caused some confusion, but the key thing is that nothing ever leaves the family phones, and Apple never sees any of it. It is entirely within the parents’ control, and it ties into the first feature by giving parents a quick way to report attempted child exploitation if it were to happen.
Again, I don’t see how anyone could object, but there have been some objections based on misinformation from places like the Washington Post, and some maximalism from EFF:
When Apple releases these “client-side scanning” functionalities, users of iCloud Photos, child users of iMessage, and anyone who talks to a minor through iMessage will have to carefully consider their privacy and security priorities in light of the changes, and possibly be unable to safely use what until this development is one of the preeminent encrypted messengers.
It is the final feature that has privacy advocates like the EFF more up in arms, and has the Washington Post bending over backwards to mischaracterize:
Apple unveiled a sweeping new set of software tools Thursday that will scan iPhones and other devices for child pornography and text messages with explicit content and report users suspected of storing illegal pictures on their phones to authorities.
Golly!
But that’s not what is happening.
How This Actually Works
I haven’t written or even tweeted about this, because I wanted to understand what is actually happening here. It’s quite a complex system, set up specifically so that Apple does not see your photos. Apple is not “scanning iPhones.”
Apple is employing systems developed by three legends in the cryptography and computer vision worlds, and they are vouching for Apple’s implementation. Apple encrypts everything on their iCloud servers. But only the following get end-to-end encryption, meaning even Apple does not have the key:
Apple Card transactions
Home data
Health data
iCloud Keychain
Maps Favorites, Collections and search history
Memoji
Payment information
QuickType Keyboard learned vocabulary
Safari History and iCloud Tabs
Screen Time
Siri information
Wi-Fi passwords
W1 and H1 Bluetooth keys
You will notice the two crucial ones are not on this list: photos and device backups. Messages is end-to-end encrypted, but the key is stored in the backup, so messages can be accessed by Apple as well. Users can always back up locally, of course, but it is far less convenient and comes with a greater risk of data loss.
But Apple does have access to photos if law enforcement shows up with a court order, and that is what this is about, and what has the EFF worried. So what exactly is happening here?
The National Center for Missing and Exploited Children (NCMEC) is technically a nonprofit, but in practice it is a quasi-government agency, and an arm of US law enforcement. They are the only organization in the US legally entitled to possess kiddie porn, and they have a lot. Probably many millions of images, if not billions. It’s fairly depressing. This grim database is the basis of the system.
Every image is given a “NeuralHash,” a perceptual hash: a very large number derived from the image content by a neural network. The important things about the hash are that it is effectively unique to each photo, cannot be reversed to reconstruct the photo, and does not change when the image is cropped or otherwise lightly altered. This is some complex stuff, and Apple and their vouching cryptographers say the probability of an account being falsely flagged is one in a trillion per year. With probably north of a billion iCloud accounts, that margin is not as big as it sounds.
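To make the hashing idea concrete, here is a minimal sketch of a simple perceptual “average hash.” To be clear, this is not Apple’s NeuralHash, which uses a neural network; it just demonstrates the property the system depends on: a light edit barely moves the hash, while an ordinary cryptographic hash of the file bytes changes completely.

```python
# Toy "average hash", a stand-in for a perceptual hash. This is NOT Apple's
# NeuralHash (which runs the image through a neural network); it only shows
# the property that matters: small edits barely move the hash, while a
# cryptographic hash of the raw bytes changes completely.
import hashlib

def average_hash(pixels, size=8):
    """pixels: 2D list of grayscale values 0-255. Returns a 64-bit int."""
    h, w = len(pixels), len(pixels[0])
    bh, bw = h // size, w // size                    # block dimensions
    small = []
    for r in range(size):
        row = []
        for c in range(size):
            block = [pixels[y][x]
                     for y in range(r * bh, (r + 1) * bh)
                     for x in range(c * bw, (c + 1) * bw)]
            row.append(sum(block) / len(block))      # average brightness of the block
        small.append(row)
    mean = sum(sum(row) for row in small) / (size * size)
    bits = 0
    for row in small:                                # one bit per block:
        for v in row:                                # brighter or darker than average?
            bits = (bits << 1) | (1 if v > mean else 0)
    return bits

def hamming(a, b):
    """How many of the 64 bits differ."""
    return bin(a ^ b).count("1")

# A synthetic 64x64 "photo" and a slightly brightened copy of it.
photo = [[(x * x + 3 * y * y) % 200 for x in range(64)] for y in range(64)]
brighter = [[v + 10 for v in row] for row in photo]

print(hamming(average_hash(photo), average_hash(brighter)))   # 0: the edit is invisible to the hash

# A byte-level cryptographic hash, by contrast, changes completely:
print(hashlib.sha256(bytes(v for row in photo for v in row)).hexdigest()[:16])
print(hashlib.sha256(bytes(v for row in brighter for v in row)).hexdigest()[:16])
```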
All those hashes for all those images in the database are stored locally on every iPhone running iOS 15, and a user’s photos are compared to the hashes to see if there are any exact matches on device. This uses a technique pioneered by Benny Pinkas of Bar-Ilan University in Israel, who is one of the experts vouching for the system. He said this about the probability of a false positive on an image:
The probability that a fingerprint that does not exist in the database is falsely flagged as being in the database is crypto-level negligible. This probability is comparable to the extremely low likelihood of an attacker guessing the keys used by common encryption standards.
If he is talking about AES-128 encryption, that means the chance of a false positive is 1 in 340,282,366,920,938,463,463,374,607,431,768,211,456. But there are likely trillions of photos stored in iCloud Photo Libraries, so more protections are needed.
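For reference, that figure is simply 2^128, the size of the AES-128 key space:

```python
# the quoted figure is 2**128, the number of possible AES-128 keys
n = 2**128
print(n)             # 340282366920938463463374607431768211456
print(f"{n:.2e}")    # about 3.40e+38, i.e. odds of roughly 1 in 3.4 * 10^38 per guess
```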
The first part of this matching technique, called “threshold private set intersection with associated data,” happens on device, and it actually blinds the device to the results: even your phone will not know if there are any matches. Information about those matches, if any, is protected by two layers of encryption in what Apple calls a “safety voucher,” which is uploaded along with the user’s photos if the user has iCloud Photo Library turned on, as most people likely do. The voucher also contains what Apple calls a “visual derivative” of the photo, a very low-resolution and distorted version of the image.
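Apple’s actual construction is considerably more involved, but the “two layers, and the second only opens past a threshold” idea can be sketched with ordinary Shamir secret sharing. Everything below is illustrative rather than Apple’s protocol: the private set intersection that blinds the device is elided, and the prime, threshold, and variable names are invented for the example.

```python
# A toy sketch of the "second layer only opens past a threshold" idea, using
# ordinary Shamir secret sharing. This is NOT Apple's protocol: the real system
# also blinds the match result from the device (the private set intersection
# part, elided here), and the prime, threshold, and names below are made up.
import random

PRIME = 2**127 - 1          # prime field for the share arithmetic
THRESHOLD = 3               # toy value, far smaller than a realistic threshold

def make_shares(secret, threshold, n):
    """Split `secret` into n shares; any `threshold` of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the secret from enough shares."""
    total = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

# The device holds an account-level key protecting the inner layer of every
# safety voucher (the layer with the visual derivative). Each uploaded photo
# carries one share of that key, but the server can only use a share when the
# photo's hash matched the database (the outer layer, not modeled here).
account_key = random.randrange(PRIME)
shares = make_shares(account_key, THRESHOLD, n=1000)

# Two matches: recover(shares[:2]) would yield a wrong value, not account_key,
# so Apple still cannot open the inner layer.
# Three matches: the threshold is met and the inner layer can be decrypted.
assert recover(shares[:3]) == account_key
print("threshold met: visual derivatives can now be decrypted")
```

The design choice this illustrates is that no single match tells Apple anything on its own; only an accumulation of matches past the threshold does.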
Then on the server side there is a two-step process. If a photo matches a known hash of kiddie porn from the database, then one of the two layers of encryption is unlocked for Apple. If the threshold number of matches is met, both layers are unlocked and Apple can see the visual derivatives. Apple says the threshold was chosen so that the probability of an account having that second layer unlocked purely through false positives is one in a trillion per year. This is likely an average; users with more photos will have a higher probability than that.
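To see why the photo count matters, and why the threshold is such a powerful knob, here is a rough back-of-the-envelope model: assume each non-matching photo independently has some tiny probability p of a false NeuralHash match, so the chance an account crosses the threshold is a binomial tail. The rate p and the thresholds below are invented purely for illustration; Apple has not published the per-image figure.

```python
# Rough math only: the per-image false-match rate p below is invented for
# illustration, and the thresholds are toys, not Apple's.
from math import exp, lgamma, log

def log_binom_pmf(n, k, p):
    """log of C(n, k) * p^k * (1-p)^(n-k), kept in log space to avoid overflow."""
    log_comb = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return log_comb + k * log(p) + (n - k) * log(1 - p)

def flag_probability(n_photos, p, threshold, terms=200):
    """Approximate P(at least `threshold` false matches among n_photos photos).
    Summing the first few hundred tail terms is plenty when n*p << threshold."""
    upper = min(n_photos, threshold + terms)
    return sum(exp(log_binom_pmf(n_photos, k, p)) for k in range(threshold, upper + 1))

p = 1e-6   # made-up per-image false-match rate

# Same threshold, different library sizes: heavy users are far more exposed.
for n in (1_000, 20_000, 200_000):
    print(f"{n:>7} photos, threshold 10: {flag_probability(n, p, 10):.1e}")

# Same library, different thresholds: the probability collapses as the
# threshold rises, which is why a small bump in the threshold buys a lot.
for t in (3, 5, 10):
    print(f"200,000 photos, threshold {t:>2}: {flag_probability(200_000, p, t):.1e}")
```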
Once the threshold is met, Apple can see the visual derivatives inside the safety vouchers. Someone at Apple will review them, and if they think they look like kiddie porn, they will disable the account and notify NCMEC. Then, most likely, a warrant gets issued, and Apple turns over the unencrypted photos to law enforcement.
EFF’s Response
The Electronic Frontier Foundation was guaranteed to object to this, and they have, in a blistering response. They generally like Apple, but this looks to them like a betrayal by their one corporate ally. They are privacy maximalists, and you saw their objection earlier to the opt-in Messages feature, an objection that goes too far in my book. That “anyone who talks to a minor through iMessage” will have to think twice about what they say to that minor strikes me as not a bad thing. Also, it is not iMessage the service, it is Messages the app, so it includes SMS text messages as well.
There are other crucial misstatements of fact, for example that Apple reviews the actual photos. They do not: they review only the visual derivatives, and then notify NCMEC. From there a warrant can be obtained for the photos, and then Apple unlocks the iCloud Photo Library and turns it over to law enforcement.
But to sum up their objections:
The chance of false positives is too high. I agree, and this is easily fixable by upping the threshold. Remember, the account-level probability falls off roughly exponentially as the threshold rises, so even small changes have large effects (the probability sketch above makes this concrete).
Any backdoor, even one as well-designed as this one, opens up a Pandora’s box. Basically, it’s a slippery slope argument. I also agree with this.
The weak link in the chain is NCMEC: the whole thing relies on trusting them, because Apple only has access to the hashes in the database, not the actual photos. Apple has no idea what images those hashes correspond to, and cannot vouch for the contents themselves.
But let’s say we trust NCMEC, that no US government agency has inserted other photos without their knowledge, and that 100% of the photos are kiddie porn. Apple is rolling this out first in the US and then to the rest of the world, and they operate in many countries with authoritarian governments. What if China wants a similar system and inserts photos of dissidents, or images like “Tank Man”? Vladimir Putin would certainly like to know who has certain photos on their phones.
There was this interesting exchange in the New York Times article between Matthew Green, a cryptography professor at Johns Hopkins, and Erik Neuenschwander, Apple’s Chief Privacy Engineer:
“What happens when other governments ask Apple to use this for other purposes?” Mr. Green asked. “What’s Apple going to say?”
Mr. Neuenschwander dismissed those concerns, saying that safeguards are in place to prevent abuse of the system and that Apple would reject any such demands from a government.
“We will inform them that we did not build the thing they’re thinking of,” he said.
Even if we take Apple’s intentions as pure, fiscal 2021 will be a record year for Apple everywhere, but especially in China, where they will book nearly $70 billion in net sales and $30 billion in operating profits. Not for nothing, but they also make most of their devices there. The Chinese Communist Party views this as leverage, and it is. Would Apple give up 20% of their top line if push came to shove? Let’s hope we don’t have to answer that question.
This is an extremely well-designed backdoor, but a backdoor is a backdoor, no matter how well designed it is.