In a briefing on Thursday, Apple announced plans to deploy new technology within iOS, macOS, watchOS, and iMessage that will detect potential child abuse imagery, and clarified crucial details of the ongoing project.
For devices in the US, new versions of iOS and iPadOS rolling out this fall will include “new applications of cryptography to help limit the spread of child sexual abuse material (CSAM) online, while designing for user privacy.”
Apple stated that the image scanning comes with several restrictions designed to protect privacy:
- Apple does not learn anything about images that do not match the known CSAM database.
- Apple can’t access metadata or visual derivatives for matched CSAM images until a threshold of matches is exceeded for an iCloud Photos account.
- The risk of the system incorrectly flagging an account is extremely low. In addition, Apple manually reviews all reports made to NCMEC to ensure reporting accuracy.
- Users can’t access or view the database of known CSAM images.
- Users can’t identify which images were flagged as CSAM by the system.
The project is also detailed in a new “Child Safety” page on Apple’s website. The most invasive and potentially controversial implementation is the system that performs on-device scanning before an image is backed up to iCloud. From the description, scanning does not occur until a file is being backed up to iCloud, and Apple only receives data about a match once the cryptographic vouchers (uploaded to iCloud along with the image) for a particular account meet a threshold of matches against known CSAM.
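To make the threshold idea concrete, here is a minimal, hypothetical sketch of how per-image vouchers might only become reviewable once an account crosses a match threshold. The names (`SafetyVoucher`, `MatchThresholdGate`) and the plain boolean match flag are illustrative assumptions, not Apple’s actual APIs; the real system hides individual match results from the server using private set intersection and threshold secret sharing.

```swift
import Foundation

// Hypothetical, simplified model of threshold-gated vouchers.
// In Apple's described design, the server cannot see whether any single
// voucher matched; this sketch only illustrates the thresholding behavior.

struct SafetyVoucher {
    let imageID: UUID
    let matchesKnownCSAM: Bool   // hidden from the server in the real system
}

struct MatchThresholdGate {
    let threshold: Int
    private var matchedVouchers: [SafetyVoucher] = []

    init(threshold: Int) {
        self.threshold = threshold
    }

    // Record a voucher; matched vouchers only become reviewable once the
    // number of matches for the account exceeds the threshold.
    mutating func record(_ voucher: SafetyVoucher) -> [SafetyVoucher]? {
        if voucher.matchesKnownCSAM {
            matchedVouchers.append(voucher)
        }
        return matchedVouchers.count > threshold ? matchedVouchers : nil
    }
}

// Usage: individual matches reveal nothing until the threshold is crossed.
var gate = MatchThresholdGate(threshold: 3)
for _ in 0..<6 {
    let voucher = SafetyVoucher(imageID: UUID(), matchesKnownCSAM: Bool.random())
    if let reviewable = gate.record(voucher) {
        print("Threshold exceeded: \(reviewable.count) matches now reviewable")
    }
}
```

The design choice the sketch highlights is that no metadata or visual derivatives are surfaced for isolated matches; only an accumulation of matches beyond the threshold unlocks anything for human review.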

