Apple’s plans to scan iPhone users’ photos for child sexual abuse material (CSAM) have been met with concern from privacy advocates and rivals. Now the company is seeking to reassure users in a new Q&A posted to its website.
The tools, which are designed to prevent the spread of CSAM and catch those in possession of it, scan each photo on-device and give it a safety certificate before it is uploaded to iCloud. For now, Apple will only enact the plans in the United States.
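At a high level, the on-device check amounts to hashing each photo and comparing the result against a list of known CSAM hashes before upload. The sketch below illustrates that idea only: Apple’s actual system uses a perceptual “NeuralHash” and cryptographic safety vouchers, neither of which is public, so the hash function, the `KNOWN_HASHES` set, and the `scan_before_upload` helper here are all hypothetical stand-ins.

```python
import hashlib

# Hypothetical stand-in for the known-CSAM hash list supplied by NCMEC.
# Apple's real system uses a perceptual hash ("NeuralHash"), not SHA-256;
# SHA-256 is used here purely for illustration.
KNOWN_HASHES = {
    "d2a84f4b8b650937ec8f73cd8be2c74add5a911ba64df27458ed8229da804a26",
}

def scan_before_upload(image_bytes: bytes) -> dict:
    """Hash an image on-device and attach a match flag before upload."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return {
        "hash": digest,
        "matches_known_csam": digest in KNOWN_HASHES,
    }

# A photo whose hash is not on the list passes through unflagged.
voucher = scan_before_upload(b"example image data")
```

The key design point the Q&A stresses is that the comparison happens against a fixed list of known images, not against any model of what an image “looks like”, so a non-matching photo produces no report.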
Critics, like WhatsApp boss Will Cathcart, say the system is essentially an Apple-built surveillance tool and could be used by governments to spy on citizens if weaknesses are exposed.
However, Apple has today reiterated its previously-held stance that it will “refuse any such demands” for governments to add any non-CSAM images to the hash list.
“Apple’s CSAM detection capability is built solely to detect known CSAM images stored in iCloud Photos that have been identified by experts at NCMEC and other child safety groups,” the company writes. “We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future.
“Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government’s request to expand it. Furthermore, Apple conducts human review before making a report to NCMEC. In a case where the system flags photos that do not match known CSAM images, the account would not be disabled and no report would be filed to NCMEC.”
Apple also addresses the other elements of the new Child Safety tools, including the Messages app, which will soon detect whether children are receiving or sending inappropriate imagery, with safeguards in place to warn parents. Children will have a choice over whether they want to send or de-blur the image in question, but if they proceed, parents will be notified. Apple assures users that this doesn’t affect the end-to-end encryption in Messages.
The company adds: “This doesn’t change the privacy assurances of Messages, and Apple never gains access to communications as a result of this feature. Any user of Messages, including those with communication safety enabled, retains control over what is sent and to whom. If the feature is enabled for the child account, the device will evaluate images in Messages and present an intervention if the image is determined to be sexually explicit. For accounts of children age 12 and under, parents can set up parental notifications which will be sent if the child confirms and sends or views an image that has been determined to be sexually explicit. None of the communications, image evaluation, interventions, or notifications are available to Apple.”
How do you feel about Apple’s new Child Safety feature? A prudent move? Or too much potential for collateral damage to the privacy of innocent users? Let us know @trustedreviews on Twitter.