Thorn and All Tech Is Human Forge Generative AI Principles with AI Leaders to Enact Strong Child Safety Commitments
April 23, 2024
Anthropic, Civitai, Google, Meta, Metaphysic, Microsoft, Mistral AI, OpenAI, and Stability AI have all committed to Safety by Design principles across the development, deployment, and maintenance of generative AI
LOS ANGELES — April 23, 2024 — Thorn, a nonprofit that builds technology to defend children from sexual abuse, and All Tech Is Human, an organization focused on growing and strengthening the responsible tech ecosystem, have partnered with some of the world’s most influential technology companies to establish a foundation for child safety in the development of generative AI technologies.
Today, these companies are committing to implement principles that guard against the creation and spread of AI-generated child sexual abuse material (AIG-CSAM) and other sexual harms against children. The companies committing include Anthropic, Civitai, Google, Meta, Metaphysic, Microsoft, Mistral AI, OpenAI, and Stability AI.
This alliance marks a critical moment in the tech industry as these companies commit to adopting comprehensive safety measures designed to prevent the misuse of AI in perpetuating sexual harm against children. By integrating Safety by Design principles into their generative AI technologies and products, these companies are not only protecting children but also leading the charge in ethical AI innovation.
The collective action by these tech giants sets a groundbreaking precedent for the industry, demonstrating a unified commitment to implement rigorous principles against the creation and dissemination of AIG-CSAM and other forms of sexual harm against children on their platforms.
This initiative comes at a pivotal time, as the misuse of generative AI poses significant risks to child safety, with the potential to exacerbate the challenges law enforcement faces in identifying and rescuing existing victims of abuse, and to scale new victimization of more children.
“We’re at a crossroads with generative AI, which holds both promise and risk in our work to defend children from sexual abuse. I’ve seen firsthand how ML/AI accelerates victim identification and CSAM detection. But these same technologies are already, today, being misused to harm children,” said Dr. Rebecca Portnoff, Vice President of Data Science at Thorn. “That this diverse group of leading AI companies has committed to child safety principles should be a rallying cry for the rest of the tech community to prioritize child safety through Safety by Design. This is our opportunity to adopt standards that prevent and mitigate downstream misuse of these technologies to further sexual harms against children. The more companies that join these commitments, the better we can ensure this powerful technology is rooted in safety while the window of opportunity is still open for action.”
Thorn, All Tech Is Human, and select participating companies (Anthropic, AWS Responsible AI, Civitai, Hugging Face, Inflection, Metaphysic, Stability AI, and Teleperformance) have outlined the principles in a comprehensive paper, “Safety by Design for Generative AI: Preventing Child Sexual Abuse.” The paper details recommended mitigations to enact these principles across all stages of AI development, deployment, and maintenance. Each of the opportunities and recommended mitigations for AI developers, providers, and other stakeholders makes it more difficult for bad actors to misuse generative AI for the sexual abuse of children.
Participating companies will develop, deploy, and maintain their generative AI technologies and products following concrete Safety by Design principles, such that these technologies are less capable of producing AIG-CSAM and other abuse material, such content is detected more reliably when it is created, and the distribution of the models used to create it is limited. These principles cover the entire machine learning/AI lifecycle, ensuring that preventative measures are taken at each stage.
As part of their commitment to the principles, each participating company has agreed to release progress updates as it acts on them. This approach ensures that technology companies proactively embed safety measures into their AI products, rather than retrofitting solutions after problems arise.
“It is imperative to prioritize child safety when building new technologies, including generative AI,” says Laurie Richardson, VP of Trust & Safety at Google. “The Safety by Design principles complement Google’s ongoing efforts in this space and will help mitigate the creation, dissemination, and promotion of AI-generated child sexual abuse and exploitation. We commend Thorn and All Tech Is Human for bringing key players in the industry together to standardize how we combat this material across the ecosystem.”
“Stability AI is committed to investing in research and development of reasonable safeguards to prevent bad actors from misusing AI technologies and products. We stand with Thorn, All Tech Is Human, and the broader tech community in this essential mission,” said Ella Irwin, SVP, Integrity, Stability AI. “As we pursue our mission, which is to provide the foundation to activate humanity’s potential, we want to ensure our technologies serve as a force for good, especially when it comes to protecting children. Therefore, we are committed to integrating Safety by Design principles into our AI development processes.”
Now that many of the leading AI companies have made these commitments, Thorn and All Tech Is Human aim to urge all AI companies to publicly adopt the Safety by Design principles and demonstrate their dedication to preventing the creation and spread of AIG-CSAM.
“We care deeply about the safety and responsible use of our tools, which is why we’ve built strong guardrails and safety measures into ChatGPT and DALL-E,” said Chelsea Carlson, Child Safety TPM at OpenAI. “We are committed to working alongside Thorn, All Tech Is Human and the broader tech community to uphold the Safety by Design principles and continue our work in mitigating potential harms to children.”
By working together with the broader child safety ecosystem, AI companies can put in place commonsense mitigations that prevent bad actors from perpetuating child sexual abuse through generative AI.
“As we advance into new territories of AI capabilities, it’s critical that we build these technologies on a foundation of ethical responsibility, particularly when it comes to protecting children,” said David Polgar, Founder and President of All Tech Is Human. “This initiative represents a significant step forward in our collective effort to align AI’s potential with humanity’s highest values.”
Learn more about the recommendations and commitments for Safety by Design in Generative AI.
About Thorn
Thorn is a nonprofit that builds technology to defend children from sexual abuse. Founded in 2012, the organization creates products and programs to empower the platforms and people who have the ability to defend children.
Thorn’s tools have helped the tech industry detect and report millions of child sexual abuse files on the open web, connected investigators and NGOs with critical information to help them solve cases faster and remove children from harm, and provided parents and youth with digital safety resources to prevent abuse.
Thorn’s generative AI initiatives, including its leading role in this Safety by Design working group, have been made possible by support from the Patrick J. McGovern Foundation.