Saw a lot on this topic on Friday, and hat tip to Donna Medrek for a LinkedIn post on it that first made me aware of it. As reported by several outlets, Apple plans to scan US iPhones for child sexual abuse material (CSAM).
The project is detailed in a new “Child Safety” page on Apple’s website. This article from NPR reports that Apple’s plan is drawing applause from child protection groups but raising concern among some security researchers that the system could be misused, including by governments looking to surveil their citizens.
The tool, called “neuralMatch,” is designed to detect known CSAM images and will scan images before they are uploaded to iCloud. If it finds a match, the image will be reviewed by a human. If child pornography is confirmed, the user’s account will be disabled and the National Center for Missing and Exploited Children (NCMEC) notified.
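To make that flow concrete, here is a rough sketch in Python of the sequence described in the reporting: fingerprint an image before upload, check it against a database of known CSAM fingerprints, route matches to human review, and only then disable the account and notify NCMEC. Every name in it is hypothetical, and the cryptographic hash is only a stand-in for the perceptual fingerprinting a system like neuralMatch would actually use; this is not Apple’s code.

```python
# Hypothetical sketch of the reported workflow -- not Apple's implementation.
import hashlib

# Fingerprints of known CSAM supplied by NCMEC, per the reporting (illustrative only).
KNOWN_CSAM_FINGERPRINTS: set[str] = set()


def fingerprint(image_bytes: bytes) -> str:
    # Stand-in only: a real system would use a perceptual hash that tolerates
    # resizing and re-encoding, not an exact cryptographic hash like SHA-256.
    return hashlib.sha256(image_bytes).hexdigest()


def human_review_confirms(fingerprint_value: str) -> bool:
    # Placeholder for the human-review step described in the reporting.
    return False


def handle_icloud_upload(image_bytes: bytes) -> str:
    """Scan before upload; on a confirmed match, disable the account and notify NCMEC."""
    if fingerprint(image_bytes) not in KNOWN_CSAM_FINGERPRINTS:
        return "upload to iCloud as normal"
    if human_review_confirms(fingerprint(image_bytes)):
        return "disable account and notify NCMEC"
    return "no action: reviewer rejected the match"
```

The point of the sketch is simply where the comparison happens: on the device, before upload, and against fingerprints rather than the photos themselves.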
The Washington Post reports the type of matching being done is something companies like Facebook already do; so do Microsoft, Google, Twitter and probably others, as Gregory Bufithis pointed out in a newsletter. “But in those systems, photos are scanned only after they are uploaded to servers owned by companies like Facebook.” By looking at what’s on a user’s device, Apple is treading into new “client-side” surveillance territory.
“Apple’s expanded protection for children is a game changer,” John Clark, the president and CEO of NCMEC, said in a statement. “With so many people using Apple products, these new safety measures have lifesaving potential for children.”
But a dejected Electronic Frontier Foundation, the online civil liberties pioneer, called Apple’s compromise on privacy protections “a shocking about-face for users who have relied on the company’s leadership in privacy and security.” And in a blistering critique, the Washington-based nonprofit Center for Democracy and Technology called on Apple to abandon the changes, which it said effectively destroy the company’s guarantee of “end-to-end encryption.” Scanning of messages for sexually explicit content on phones or computers effectively breaks the security, it said.
Separately, Apple plans to scan users’ encrypted messages for sexually explicit content as a child safety measure, which also alarmed privacy advocates.
The detection system will only flag images that are already in NCMEC’s database of known CSAM images. Parents snapping innocent photos of a child in the bath presumably need not worry. But researchers say the matching tool, which doesn’t “see” such images, just mathematical “fingerprints” that represent them, could be put to more nefarious purposes.
ABC reports that Matthew Green, a Johns Hopkins cryptography researcher, claimed that someone could manipulate the system to frame a person by sending them “seemingly innocuous images designed to trigger matches for child pornography. That could fool Apple’s algorithm and alert law enforcement.” He claimed researchers have managed to do this.
This article from The Verge reports that the threshold system ensures that lone errors will not generate alerts, allowing Apple to target an error rate of one false alert per trillion users per year.
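To see why a threshold matters, here is a back-of-the-envelope illustration in Python. The per-image false-match rate and the photo-library size are invented numbers for the sake of the arithmetic, and the Poisson estimate keeps only the leading term; Apple has not published its parameters in this form.

```python
from math import exp, factorial


def approx_false_alert_probability(n_images: int, p_false_match: float, threshold: int) -> float:
    """Rough Poisson estimate of the chance an account racks up `threshold` false matches.

    Illustrative only: assumes independent matches and an invented per-image
    false-match rate; for a small expected count, the tail of the distribution
    is dominated by its first term.
    """
    expected_matches = n_images * p_false_match
    return exp(-expected_matches) * expected_matches ** threshold / factorial(threshold)


# Example: a 10,000-photo library and a (made-up) one-in-a-million false-match rate.
print(approx_false_alert_probability(10_000, 1e-6, 1))   # ~0.01: a single-match rule would misfire often
print(approx_false_alert_probability(10_000, 1e-6, 10))  # ~3e-27: vanishingly rare once a threshold is required
```

The exact figures are fictional; the point is how quickly the probability of a spurious alert collapses once several independent matches are required before anything is flagged.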
Apple also commissioned technical assessments of the system from three independent cryptographers (here, here, and here), who found it to be mathematically robust. “In my judgement this system will likely significantly increase the likelihood that people who own or traffic in such pictures (harmful users) are found; this should help protect children,” said Professor David Forsyth, chair of computer science at the University of Illinois, in one of the assessments. “The accuracy of the matching system, combined with the threshold, makes it very unlikely that pictures that are not known CSAM pictures will be revealed.”
Alongside the new measures in iCloud Photos, Apple added two additional systems to protect young iPhone owners at risk of child abuse. The Messages app will do on-device scanning of image attachments for children’s accounts to detect content that’s potentially sexually explicit. Once detected, the content is blurred and a warning appears. A new setting that parents can enable on their family iCloud accounts will trigger a message telling the child that if they view (incoming) or send (outgoing) the detected image, their parents will get a message about it.
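Again purely as an illustration, the reported Messages behavior boils down to a couple of conditionals: blur and warn whenever the on-device classifier flags an attachment, and notify parents only if they have enabled the family setting and the child goes ahead and views or sends the image anyway. The sketch below invents all of its names and stands in for whatever classifier Apple actually uses.

```python
# Hypothetical sketch of the reported Messages child-safety flow -- not Apple's code.
from dataclasses import dataclass


@dataclass
class AttachmentCheck:
    flagged_as_explicit: bool      # stand-in for the on-device classifier's verdict
    parental_alerts_enabled: bool  # the family iCloud setting described above
    child_proceeds_anyway: bool    # child chooses to view/send despite the warning


def messages_actions(check: AttachmentCheck) -> list[str]:
    """Return the actions the reporting describes for a child account's attachment."""
    actions: list[str] = []
    if not check.flagged_as_explicit:
        return actions
    actions.append("blur the image and show a warning to the child")
    if check.parental_alerts_enabled and check.child_proceeds_anyway:
        actions.append("send a notification to the parents")
    return actions
```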
Online comments that I’ve seen have been pretty evenly divided, with some applauding Apple for the decision and others raising privacy and “slippery slope” concerns. My viewpoint is that Apple has already been using technology to scan for CSAM images sent over email. Also, Apple’s iOS and iCloud terms and conditions (T&Cs) include an expectation that users “not infringe or violate the rights of any other party or violate any laws”. Does the fact that Apple is proposing to extend that scanning to the client side, on devices that are subject to Apple license restrictions, change anything? Not in my opinion.
So, what do you think? Do you applaud the decision by Apple or are you concerned about privacy rights and potential for abuse of the system? Or both? Please share any comments you might have or if you’d like to know more about a particular topic.
Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.