Google is implementing a new policy on its Play Store to address potential issues with generative AI apps. Starting early next year, developers of Android applications will be required to include a feature that allows users to report or flag offensive AI-generated content. The change comes in response to the proliferation of generative AI apps, some of which have drawn controversy for producing inappropriate or misleading content.
Key Takeaway
Google Play is enforcing a new policy that requires developers to implement in-app flagging and reporting features for offensive AI-generated content. The move responds to concerns about the spread of inappropriate and misleading AI content. Developers must also comply with Google’s existing policies, which prohibit the dissemination of restricted content and deceptive practices. In addition, app permissions related to photos and videos will undergo further review, and full-screen notifications will be limited to priority use cases. While Apple has historically led on app-policy changes, Google’s initiative is the first to establish explicit guidelines for AI apps and chatbots.
Reporting Offensive Content
The updated policy will mandate that developers build in-app flagging and reporting mechanisms for offensive AI-generated content. By providing this option, users can easily alert developers to problematic or inappropriate output from the app’s AI models. This feedback not only helps developers improve their apps but also informs content filtering and moderation.
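The policy mandates the capability rather than any particular API, so developers are free to design their own flow; one common pattern is a flag button on each generated item that opens a dialog of report categories. The Kotlin sketch below illustrates that pattern under stated assumptions: the ReportReason categories, ContentReport type, and showReportDialog helper are hypothetical names, not part of any Google SDK.

```kotlin
// Hypothetical sketch of an in-app "report" action for AI-generated
// output. All names here are illustrative; Google's policy requires
// the capability, not a specific implementation.
import android.content.Context
import androidx.appcompat.app.AlertDialog

// Categories a user can pick when flagging generated content.
enum class ReportReason { SEXUAL_CONTENT, HATE_SPEECH, MISLEADING, OTHER }

data class ContentReport(
    val contentId: String,       // identifies the generated output
    val reason: ReportReason,
    val details: String? = null  // optional free-text from the user
)

fun showReportDialog(
    context: Context,
    contentId: String,
    onSubmit: (ContentReport) -> Unit
) {
    val reasons = ReportReason.values()
    AlertDialog.Builder(context)
        .setTitle("Report this content")
        .setItems(reasons.map { it.name.replace('_', ' ') }.toTypedArray()) { _, which ->
            // Hand the report to the app's own moderation pipeline.
            onSubmit(ContentReport(contentId, reasons[which]))
        }
        .setNegativeButton("Cancel", null)
        .show()
}
```

What happens with a report once submitted is left to the developer; the policy’s concern is that users have an in-app way to raise the flag at all.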
Examples of Concerning AI-Generated Content
Several incidents have highlighted the need for this policy update. The app Remini, for instance, which gained viral popularity, came under scrutiny after its AI-generated headshots exaggerated certain physical features, such as enlarging women’s breasts. Issues also arose with Microsoft’s and Meta’s AI tools, as users found ways to manipulate them into producing misleading and offensive imagery.
More alarming still are cases where open-source AI tools have been misused to create inappropriate content at scale. Pedophiles, for example, have been found using these tools to generate child sexual abuse material (CSAM). Upcoming elections also raise concerns about AI-generated deepfakes that could mislead and deceive the public.
Compliance with Existing Policies
Google emphasizes that all apps, including AI content generators, must adhere to its existing developer policies. These policies prohibit restricted content, including CSAM, as well as any behavior that promotes deception or harm.
Additional Review for App Permissions
Alongside the crackdown on AI content apps, Google will subject certain app permissions to additional review. Apps that request broad photo and video permissions, for instance, will face heightened scrutiny. Moving forward, apps will be granted broad access to users’ photos and videos only when it directly relates to their core functionality. If access is needed only occasionally or for a one-off purpose, the app must instead use a system picker, such as the new Android photo picker, as sketched below.
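For the occasional-access case, the Android photo picker is already exposed through the androidx.activity APIs (ActivityResultContracts.PickVisualMedia) and hands the app a URI for the selected item without requiring any READ_MEDIA_* permission. A minimal sketch, assuming an AppCompatActivity host (the class and method names are illustrative):

```kotlin
import android.net.Uri
import android.util.Log
import androidx.activity.result.PickVisualMediaRequest
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity

class GalleryActivity : AppCompatActivity() {

    // The system photo picker returns a content URI for the chosen item
    // without the app holding any media permission.
    private val pickMedia =
        registerForActivityResult(ActivityResultContracts.PickVisualMedia()) { uri: Uri? ->
            if (uri != null) {
                Log.d("PhotoPicker", "Selected: $uri")
            } else {
                Log.d("PhotoPicker", "Nothing selected")
            }
        }

    // Call this from a button click when the user wants to attach a photo.
    fun onChoosePhotoClicked() {
        // Restrict the picker to images; ImageAndVideo and VideoOnly also exist.
        pickMedia.launch(
            PickVisualMediaRequest(ActivityResultContracts.PickVisualMedia.ImageOnly)
        )
    }
}
```

Because the picker runs in a separate system process, the app never sees the user’s full media library, which is exactly the access pattern the new review criteria favor.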
Limiting Disruptive Notifications
Furthermore, Google will now restrict full-screen notifications to high-priority use cases. Many apps have abused this feature to push users towards paid subscriptions or other offers. Going forward, displaying full-screen notifications will require a special app access permission known as the Full Screen Intent permission. For apps targeting Android 14 and above, it will be granted only to apps that genuinely require the functionality, such as alarm or incoming-call experiences.
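At the platform level, this maps to the USE_FULL_SCREEN_INTENT manifest permission, which on Android 14 (API 34) becomes a revocable special app access: NotificationManager.canUseFullScreenIntent() reports whether the app currently holds it, and Settings.ACTION_MANAGE_APP_USE_FULL_SCREEN_INTENT opens the screen where the user can grant it. A minimal Kotlin sketch of that check:

```kotlin
import android.app.NotificationManager
import android.content.Context
import android.content.Intent
import android.net.Uri
import android.os.Build
import android.provider.Settings

// Requires <uses-permission android:name="android.permission.USE_FULL_SCREEN_INTENT"/>
// in the manifest. On Android 14+ the access is revocable, so check it
// before relying on NotificationCompat.Builder.setFullScreenIntent().
fun canShowFullScreenNotification(context: Context): Boolean {
    return if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.UPSIDE_DOWN_CAKE) {
        context.getSystemService(NotificationManager::class.java)
            .canUseFullScreenIntent()
    } else {
        true // granted automatically at install time on Android 13 and below
    }
}

// Opens the system screen where the user can grant the special access.
// Call with an Activity context (or add FLAG_ACTIVITY_NEW_TASK).
fun requestFullScreenIntentAccess(context: Context) {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.UPSIDE_DOWN_CAKE) {
        context.startActivity(
            Intent(Settings.ACTION_MANAGE_APP_USE_FULL_SCREEN_INTENT)
                .setData(Uri.fromParts("package", context.packageName, null))
        )
    }
}
```

Apps that fail the check can fall back to a regular high-priority notification, which the system may still display as a heads-up banner.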
Google Takes the Lead in AI Policy
Notably, Google is the first to introduce explicit policies addressing AI apps and chatbots. Historically, Apple has taken the lead in implementing stringent rules to curb unwanted app behavior, with Google following suit. Apple, however, has yet to establish a specific AI or chatbot policy in its App Store Guidelines. While Google’s updated policies roll out today, AI app developers have until early 2024 to implement the required reporting and flagging features.