In iOS 18.2, Apple is adding a new feature that resurrects some of the intentions behind its halted CSAM scanning plan — this time, without breaking end-to-end encryption or providing a government backdoor. Rolling out first in Australia, the expansion of the company's Communication Safety feature uses on-device machine learning to detect and blur nude content, adding warnings and requiring users to confirm before proceeding. If the child is under 13, they can't proceed without entering the device's Screen Time passcode.

If the device’s onboard machine learning detects nude content, the feature automatically blurs the photo or video, displays a warning that the content may be sensitive and provides ways to get help. Options include leaving the conversation or group thread, blocking the person, and accessing online safety resources.

The feature also displays a message reassuring the child that it's okay not to view the content and to leave the chat. There's also an option to send the message to a parent or guardian. If the child is 13 or older, they can still confirm they want to continue after receiving warnings, with reminders reiterating that it's okay to opt out and that further help is available. According to The Guardian, it also includes the option to report images and videos to Apple.

The feature analyzes photos and videos in Messages, AirDrop, Contact Poster (in the Phone or Contacts app) and FaceTime video messages on iPhone and iPad. In addition, it will scan "some third-party apps" if the child chooses a photo or video to share through them.

Supported apps vary slightly on other devices. On Mac, the feature scans Messages and some third-party apps if users choose to share content through them. On Apple Watch, it covers Messages, Contact Poster and FaceTime video messages. Finally, on Vision Pro, it scans Messages, AirDrop, and some third-party apps (under the same conditions mentioned above).
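Apple hasn't spelled out here how the third-party integration works, but it presumably builds on the SensitiveContentAnalysis framework the company shipped with iOS 17, which lets apps run the same on-device nudity check before displaying or sharing media. A minimal sketch, assuming that framework and a hypothetical helper an app might call before a child shares an image (the function name is illustrative, not Apple's):

```swift
import SensitiveContentAnalysis

// Hypothetical helper: decide whether an image should be blurred before
// sharing, using Apple's on-device SensitiveContentAnalysis framework.
func shouldBlurBeforeSharing(imageAt url: URL) async -> Bool {
    let analyzer = SCSensitivityAnalyzer()

    // The policy reflects the user's (or their parent's) Screen Time settings;
    // if Communication Safety and Sensitive Content Warnings are both off,
    // the analyzer won't flag anything.
    guard analyzer.analysisPolicy != .disabled else { return false }

    do {
        // Analysis runs entirely on device; the image is never uploaded.
        let analysis = try await analyzer.analyzeImage(at: url)
        return analysis.isSensitive
    } catch {
        // If analysis fails, don't blur by default.
        return false
    }
}
```

Note that apps using this framework need the corresponding Sensitive Content Analysis entitlement from Apple, and detection only activates when the user (or a parent) has the relevant setting turned on.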

The feature requires iOS 18, iPadOS 18, macOS Sequoia, or visionOS 2.

The Guardian reports that Apple plans to expand the feature globally after the Australian rollout. The company likely chose Australia for a specific reason: the country is set to impose new rules requiring Big Tech to police child exploitation and terror content. As part of those rules, Australia agreed to add a clause making compliance mandatory only "when technically feasible," dropping a requirement that would have forced companies to break end-to-end encryption and compromise security. Companies will have to comply by the end of the year.

User privacy and security were at the center of the controversy over Apple's infamous attempt to monitor CSAM. In 2021, the company announced a system that would scan iCloud-bound photos for known child sexual abuse material, with flagged images sent to human reviewers. (This was somewhat shocking, given Apple's history of standing up to the FBI over its attempts to unlock a terrorist's iPhone.)

Privacy and security experts argued that the feature would open a backdoor that authoritarian regimes could use to spy on their citizens, even in cases involving no exploitative content. The following year, Apple abandoned the plan, which (indirectly) led to the more balanced child-safety feature announced today.

Once the expansion rolls out globally, you can find the feature under Settings > Screen Time > Communication Safety and toggle it on. It has been enabled by default since iOS 17.
