Google announced that it will begin requiring generative artificial intelligence applications on Android to include a way to report offensive content. The company updated its rules to align with Google Play’s AI-generated content policy; as a result, developers must include a button for reporting offensive material.
In a post published on the Android Developers Blog, Google stated that its new policies seek to ensure content is safe for users. With the explosion in popularity of ChatGPT, many companies have started implementing artificial intelligence in their applications. While some do it just to jump on the bandwagon and gain a foothold in the market, the truth is that others have long-term plans.
The new rule will require apps to let users report or flag offensive AI-generated content. Users will have a button or form at hand to make reports without leaving the app. Google said that developers will have to use this data to inform the content filtering and moderation they perform in their applications.
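In practice, the in-app report control can be as simple as a button wired to the developer’s own moderation backend. A minimal Kotlin sketch; `ContentReport`, `moderationApi`, and the other names here are purely illustrative assumptions, since the policy does not prescribe any particular API:

```kotlin
// Hypothetical in-app reporting hook. ContentReport and moderationApi
// are illustrative names, not part of any Google Play API.
reportButton.setOnClickListener {
    val report = ContentReport(
        content = currentResponse.text,      // the flagged AI output
        reason = selectedReason,             // e.g. "offensive", "sexual", "fraud"
        timestampMs = System.currentTimeMillis()
    )
    moderationApi.submit(report)             // developer's own backend
    Toast.makeText(this, getString(R.string.report_sent), Toast.LENGTH_SHORT).show()
}
```

The key requirement is that the flow is reachable without leaving the app, and that the reports feed back into the app’s own filtering and moderation.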
AI-generated content is content created by generative AI models based on user requests. Here are some examples of AI-generated content:
- Generative AI text-to-text conversational chatbots, where interaction with the chatbot is a primary feature of the application
- Images generated by AI from text, image or voice requests
To ensure user safety and in line with Google Play policy coverage, apps that generate content using AI must comply with Google Play developer policies.
What is considered offensive? According to the Play Console Help site, any app that facilitates child exploitation or abuse, deceives users, or enables fraudulent conduct is prohibited. AI apps that produce non-consensual sexual material (deepfakes), voice or video recordings of real people used to facilitate scams, or official documents used to commit fraud, among others, will not be allowed.
Google anticipates that we will see more generative AI applications
Google mentioned that as generative AI models become more available, more developers will integrate them into their applications. This year has seen considerable advances in artificial intelligence, which powers applications such as ChatGPT, DALL-E, Midjourney, and the Adobe suite. Companies like OpenAI, Microsoft, and Google itself continue to refine their models to make them more advanced.
But as Uncle Ben said, with great power comes great responsibility, so it is necessary to implement mechanisms to prevent misuse of this technology. Google notes that the long-term success of an app or game, measured in installs and user ratings, is tied to security.
“The Android community expects safe, high-quality experiences,” said Karina Newton, director of Global Product Policy and Developer Policy. “An app’s design, privacy protections, data quality and security standards all play an important role.”
Other security settings coming to Android 14
In addition to the new policies for generative AI, Google will set stricter limits on full-screen notifications. Apps targeting Android 14 and above will not be allowed to use this type of notification by default unless their core functionality requires it. Otherwise, they must obtain the user’s consent to use the permission.
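On Android 14 (API level 34), an app can check whether it still holds the full-screen intent permission and, if not, send the user to the corresponding settings screen. A minimal Kotlin sketch using the `NotificationManager.canUseFullScreenIntent()` API added in API 34:

```kotlin
// Android 14+ (API 34): full-screen intents are no longer granted by default.
val nm = getSystemService(NotificationManager::class.java)
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.UPSIDE_DOWN_CAKE &&
    !nm.canUseFullScreenIntent()
) {
    // Fall back to a regular high-priority notification, or let the user
    // grant the permission in system settings.
    startActivity(Intent(Settings.ACTION_MANAGE_APP_USE_FULL_SCREEN_INTENT))
}
```

Calling and alarm apps, whose core functionality depends on full-screen alerts, are the intended exception to this rule.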
Another relevant change concerns how Android apps access your photos and videos. Under the new policy, apps may access them only for purposes directly related to their functionality. That means we should no longer see alarm apps, VPNs, or online loan apps demanding authorization to look at your photos.
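Android 14 also supports this narrower access at the permission level: alongside the existing media permissions, an app can declare the new selected-photos permission so users can share only the items they pick rather than their whole library. A manifest fragment (the permission names below are real Android 14 identifiers):

```xml
<!-- Full media access, as on Android 13 -->
<uses-permission android:name="android.permission.READ_MEDIA_IMAGES" />
<uses-permission android:name="android.permission.READ_MEDIA_VIDEO" />
<!-- Android 14: lets users grant access to only the photos/videos they select -->
<uses-permission android:name="android.permission.READ_MEDIA_VISUAL_USER_SELECTED" />
```

Requesting the narrowest media access an app actually needs is the manifest-level counterpart of the policy change described above.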