OpenAI hopes to win the trust of parents — and policymakers — by partnering with organizations that work to minimize tech and media harms to kids, preteens and teens.
Case in point, OpenAI today announced a partnership with Common Sense Media, the nonprofit organization that reviews and ranks the suitability of various media and tech for kids, to collaborate on AI guidelines and education materials for parents, educators and young adults.
As part of the partnership, OpenAI will work with Common Sense Media to curate “family-friendly” GPTs — chatbot apps powered by OpenAI’s GenAI models — in the GPT Store, OpenAI’s marketplace for GPTs, based on Common Sense’s rating and evaluation standards, OpenAI CEO Sam Altman says.
“AI offers incredible benefits for families and teens, and our partnership with Common Sense will further strengthen our safety work, ensuring that families and teens can use our tools with confidence,” Altman added in a canned statement.
The partnership comes after OpenAI said it would participate in Common Sense’s new framework for AI ratings and reviews, launched in September, which is designed to assess the safety, transparency, ethical use and impact of AI products. Common Sense’s framework aims to produce a “nutrition label” for AI products, according to Common Sense co-founder and CEO James Steyer, shedding light on the contexts in which the products are used and highlighting areas of potential opportunity and harm against a set of “common sense” tenets.
“Together, Common Sense and OpenAI will work to make sure that AI has a positive impact on all teens and families,” Steyer said in an emailed statement. “Our guides and curation will be designed to educate families and educators about safe, responsible use of [OpenAI tools like] ChatGPT, so that we can collectively avoid any unintended consequences of this emerging technology.”
OpenAI is under pressure from regulators to show that its GenAI-powered apps, including ChatGPT, are a boon for society — not a detriment to it. Just last summer, the Federal Trade Commission opened an investigation into OpenAI over whether ChatGPT, its viral AI-powered chatbot, harmed consumers through its collection of data and its publication of false information about individuals.
OpenAI’s tools, like all GenAI tools, tend to confidently make things up and get basic facts wrong. And they’re biased — a reflection of the data that was used to train them.
Kids and teens, whether aware of the tools’ limitations or not, are increasingly turning to them for help not only with schoolwork but also with personal issues. According to a recent survey from the Center for Democracy and Technology, 29% of kids report having used ChatGPT to deal with anxiety or mental health issues, 22% for issues with friends and 16% for family conflicts.