Google on Thursday is issuing new guidance for developers building AI apps distributed through Google Play, in hopes of cutting down on inappropriate and otherwise prohibited content. The company says apps offering AI features will have to prevent the generation of restricted content — which includes sexual content, violence and more — and will need to offer a way for users to flag offensive content they find. In addition, Google says developers need to “rigorously test” their AI tools and models, to ensure they respect user safety and privacy.

It’s also cracking down on apps where the marketing materials promote inappropriate use cases, like apps that undress people or create nonconsensual nude images. If ad copy says the app is capable of doing this sort of thing, it may be banned from Google Play, whether or not the app is actually capable of doing it.

The guidelines follow a growing scourge of AI undressing apps that have been marketing themselves across social media in recent months. An April report by 404 Media, for example, found that Instagram was hosting ads for apps that claimed to use AI to generate deepfake nudes. One app marketed itself using a picture of Kim Kardashian and the slogan, “Undress any girl for free.” Apple and Google pulled the apps from their respective app stores, but the problem is still widespread.

Schools across the U.S. are reporting problems with students passing around AI deepfake nudes of other students (and sometimes teachers) for bullying and harassment, alongside other sorts of inappropriate AI content. Last month, a racist AI deepfake of a school principal led to an arrest in Baltimore. Worse still, in some cases the problem reaches down into middle schools.

Google says that its policies will help keep apps featuring AI-generated content that could be inappropriate or harmful to users out of Google Play. It points to its existing AI-Generated Content Policy as the place to check its requirements for app approval. The company says that AI apps cannot allow the generation of any restricted content and must give users a way to flag offensive and inappropriate content, as well as monitor and prioritize that feedback. The latter is particularly important in apps where users’ interactions “shape the content and experience,” Google says, such as apps that rank popular models higher or feature them more prominently.

Developers also can’t advertise that their app breaks any of Google Play’s rules, per Google’s App Promotion requirements. If an app advertises an inappropriate use case, it could be booted off the app store.

In addition, developers are responsible for safeguarding their apps against prompts that could manipulate their AI features into creating harmful or offensive content. Google says developers can use its closed testing feature to share early versions of their apps with users to get feedback. The company strongly suggests that developers not only test before launching but document those tests, too, as Google could ask to review that documentation in the future.

The company is also publishing other resources and best practices, like its People + AI Guidebook, which aims to support developers building AI apps.
