Under the new policy, starting January 31, apps on the Google Play Store that allow users to create AI content will be required to include features that let users flag violative content to developers.
App developers will use the user reports to filter and moderate content on their apps themselves, the guideline says.
Text-based generative AI chatbots and AI image generators fall within the policy, while apps that merely host AI-generated content and don’t allow users to create new AI content are exempt.
Examples of violative AI-generated content cited by Google include non-consensual sexual deepfakes (highly realistic fake AI images of real people), voice or video recordings of real people used to conduct scams, and false or deceptive election-related content.
Google also announced last month it will soon require election advertisers to make “clear and conspicuous” disclosures for advertisements containing AI-generated content.
Eric Schmidt, the former CEO of Google parent Alphabet, warned in June that the “2024 elections are going to be a mess because social media is not protecting us from false generative AI.” Schmidt noted cuts to content moderation roles at companies like Meta and X, formerly known as Twitter, were a “big issue.”
Google is one of the larger tech companies leading the push for stricter AI rules. Apple has yet to add AI or chatbot policies to its own app store guidelines, despite the increasingly controversial use of generative AI. Non-consensual deepfakes, not-safe-for-work memes and child sexual abuse material have been produced by users on generative AI apps, sometimes by tricking or manipulating the apps to get around bans on certain content.