Google Confirms Android SafetyCore Enables AI-Powered On-Device Content Classification

Feb 11, 2025Ravie LakshmananMobile Security / System Learning

Google has clarified that the recently released Android System SafetyCore app does not perform any client-side scanning of content.

Android offers a number of on-device protections, including phone scam protections, messaging spam and abuse protections, and malware protections, while preserving user privacy and keeping users in full control of their data, a company spokesperson said when reached for comment.

SafetyCore is a new Google system service for Android 9+ devices that provides the on-device infrastructure for securely and privately performing classification to help users detect unwanted content. Users have full control over SafetyCore, and it only classifies a specific piece of content when an application requests it through an optionally enabled feature.

SafetyCore (package name "com.google.android.safetycore") was first introduced by Google in October 2024 as part of a set of safety measures designed to combat scams and other content deemed sensitive in the Google Messages app for Android.

The feature, which requires 2GB of RAM, is rolling out to all Android devices running Android 9 and later, including those running Android Go, a lightweight version of the operating system for entry-level phones.

Client-side scanning (CSS), for its part, is viewed as an alternative way to enable on-device analysis of data, as opposed to weakening encryption or adding backdoors to existing systems. However, the approach has raised serious privacy concerns, as it is ripe for abuse by compelling the service provider to search for content beyond the initially agreed-upon scope.

In some ways, Google's Sensitive Content Warnings for the Messages app is analogous to Apple's Communication Safety feature in iMessage, which uses on-device machine learning to determine whether a photo or video appears to contain nudity.

The maintainers of the GrapheneOS operating system, in a post shared on X, reiterated that SafetyCore does not provide client-side scanning, and that it is chiefly designed to offer on-device machine-learning models that other applications can use to classify content as spam, scams, or malware.

According to GrapheneOS, classifying content in this manner is not the same as attempting to detect illegal content and report it to a service. That, it noted, would greatly violate users' privacy in multiple ways, and false positives would still exist. SafetyCore, the project added, is neither intended nor usable for that purpose.
