🇪🇺 🌍 Today, we hosted our third multi-stakeholder briefing on generative AI and its impacts on online child sexual exploitation and abuse (OCSEA), this time in Brussels. The event brought together European institutions, regulators, law enforcement, child safety advocates, and Tech Coalition industry members to discuss robust safety-by-design measures, shared challenges, and opportunities for deeper collaboration.

As part of this ongoing dialogue, we announced new research projects selected for award through the Tech Coalition Safe Online Research Fund. These projects, led by the University of Kent, Western Sydney University, and Safernet Brasil, will tackle diverse aspects of this critical issue, from young people’s engagement with AI to the misuse of generative AI for creating and distributing CSAM.

📢 Learn more about the outcomes of today’s briefing and the newly announced research projects in our latest blog: https://lnkd.in/e8AMRRzS

Thank you to all who participated and shared their expertise, and a special thanks to Microsoft for hosting us at their Brussels office. Together, we’re making strides toward a safer digital future for children. 💡

#GenerativeAI #ChildSafety #TechCoalition
More Relevant Posts
-
Exciting news! Y&R has been awarded one of only three prestigious Tech Coalition grants for research on generative AI. The Tech Coalition is an alliance of global tech companies working together to combat child sexual exploitation and abuse online.

The project is called “Youth Voices on AI: Shaping a Safer Digital Future,” and it will focus on directly involving young people in AI safety and OCSEA prevention. It aims to align AI policies and development with the values, expectations, and safety concerns of young users.

This project combines the talent and experience of five top-tier organisations working to protect children from online harm:
· Western Sydney University’s Young and Resilient Research Centre, the leading university centre conducting child-participative research on OCSEA
· Save the Children, the largest independent child rights organisation in the world, active in 118 countries
· The Australian eSafety Commissioner, Australia's independent regulator for online safety and the world's first government agency dedicated to keeping its residents safer online
· Anthropic, a top-performing frontier AI model developer created to maximise AI alignment and safety, and
· Common Good AI, a start-up platform for building consensus among disparate groups

We are also very pleased that our long-term collaborators at SaferNet Brazil have received one of the grants, and we look forward to sharing and coordinating our work.

More information is here: https://lnkd.in/e8AMRRzS

Go team! 👏 Tech Coalition Safernet Brasil Western Sydney University Save the Children International Anthropic Common Good AI eSafety Commissioner

#onlinesafety #GenerativeAI #GlobalTech #ResearchGrant #Innovation #AI #TechAlliance #childparticipativeresearch #childparticipation #childrights #youthparticipation #youthrights #digitalsafety #preventchildexploitation #preventchildabuseonline
-
📢 Exciting Updates Ahead! The Tech Coalition Safe Online Research Fund is stepping into its 4th year with a sharp focus: generative AI and its impact on online child sexual exploitation and abuse (CSEA).

Over the past 3 years, we’ve built strong bridges between researchers and the tech industry. Now, we’re taking collaboration further by bringing together key stakeholders - law enforcement, child safety advocates, academia, and tech companies - to address the challenges posed by generative AI.

This initiative underscores the power of collaboration. Together, we are committed to keeping children safe from digital harm!
-
How can we tackle #disinformation, respond to #AI challenges, and reduce growing #inequality? Finding answers to these questions is at the core of the work of EU-funded researchers! Working across various projects, they're researching the role of independent media, bolstering trust in science, and examining society's political inequalities!

Some examples?
🔹 @Fairville will propose pilot models of urban intervention to increase residents’ participation in democratic processes and improve disadvantaged urban neighbourhoods.
🔹 The @VIGILANT Project and @FERMI are supporting law enforcement agencies in the fight against disinformation campaigns, including those that target free and fair elections.
🔹 The MEDIADELCOM project developed a diagnostic tool to assess the health of Europe’s media landscape and its impact on deliberative communication.

This is #ScienceForDemocracy! Discover more in our article. ➡ https://europa.eu/!3ccggY
-
Ángel Pavon Perez is a Research Associate for the Ethical and Responsible Tech/AI stream at the Centre for Protecting Women Online. He is also completing his doctoral studies in collaboration with VISA Europe. His expertise includes the study and analysis of radicalised online communities, particularly within the manosphere. In his scholarly research, he focuses on identifying and mitigating biases in AI systems within financial services, with a particular emphasis on how these systems may inadvertently disadvantage minority groups.

#CPWO #ProtectingWomenOnline #WomenOnlineSafety #OnlineViolence #EndViolenceAgainstWomen
-
Fantastic and truly insightful presentation by Lorraine Finlay of the Australian Human Rights Commission during the 2024 Tech in Gov Expo in Canberra this morning.

A couple of key themes include:
- Risk to privacy.
- Algorithmic bias.
- Automation bias.
- Quality of information (especially within democratic nations).

Benefits and dangers of AI-informed decision making in:
- Automated government services.
- AI criminal justice systems.
- Immigration settings.

A couple of powerful quotes:
“We can harness the benefits of AI, while protecting ourselves against the harms”.
“Putting humanity and human rights at the heart of the technology”.
“We should never prioritise data over humans”.
“Human decision making is both an art and a science”.
“Certain decisions cannot be morally left to be made by machines”.
“Technology should speak humanity and not the other way around”.

#techhumanrights #AIhumanity #technologyhumanity #algorithmshumanity #humanrightsandtechnology
-
What's strange about online safety? Talking about child safety and enjoying it. Let's be honest, it's a tough subject, but it's a much-needed focus for every platform when it comes to online safety for children. This panel of experts delivered, sharing insights, best practices, and ways anyone can get involved. You will want to check it out! #trustandsafety #responsibletech #ethicalai
How can we reduce risk and exploitation of minors in online spaces? Dr. Rebecca Portnoff (Head of Data Science, Thorn), Afrooz Kaviani Johnson (Child Protection Specialist, UNICEF), Sean Litton (President and Chief Executive Officer, Tech Coalition), and Juliet Shen (Community Advisory Board, Integrity Institute; Research Associate for Columbia University's Trust and Safety Tools Consortium) explored current efforts to reduce risk and exploitation of minors in online spaces at our Future of Trust & Safety gathering on May 14, 2024. Moderated by Matthew Soeth (Head of Trust & Safety and Global Affairs, All Tech Is Human).

📺 Watch now: https://lnkd.in/eUwF-7Gc

🎙️ Do you want to join more than 9,000 people across 90 countries to discuss Trust & Safety? Apply to join our global Slack community now: https://lnkd.in/eA_Q-PHG
Panel: Safety by Design for Generative AI | The Future of Trust & Safety
-
Artificial intelligence is a future development trend that will be used to improve the quality of human life and survival. It can also assist in completing human work, improve efficiency, and save time, especially in combating crime, including border security, smuggling, and human trafficking.
Since its launch just under two years ago, in July 2022, the AI for Safer Children team is proud to have over half the world's countries represented on the Global Hub and welcomes South Korea, Luxembourg and the Democratic Republic of the Congo as some of its newest members 🙌

A joint initiative of the Ministry of Interior of UAE and UNICRI, AI for Safer Children supports 700+ dedicated law enforcement investigators in leveraging AI to save children from sexual predators.

If you are in law enforcement, join them today to learn how to use frontier technology responsibly and safeguard children worldwide! https://lnkd.in/e5mu2XEb

UNICRI - United Nations Interregional Crime and Justice Research Institute, Dana Humaid Al Marzooqi, Guillaume Alvergnat, Irakli Beridze, Maria Eira, Inês Gonçalves Ferreira, Stephen McNamara, David Haddad, Odhran McCarthy, Inna Kotova
-
🚨 The AI Revolution and Child Safety: A Wake-Up Call 🚨

A heartbreaking lawsuit against Character.AI reveals the devastating impact of unregulated AI interactions. As the child safety community, we must acknowledge this as a critical failure. Children need guidance, not just in accessing modern tech tools but in understanding when and how to use them safely. Creating new features won’t replace robust age gating and awareness programs for parents, schools, and society. AI platforms must take responsibility, and laws should be firm, without exceptions. No child should pay the price for corporate negligence. Let’s make a collective call for stronger regulations and accountability.

#DigitalSafety #TrustAndSafety #AIRegulation #ChildSafety #OnlineSafety

A tragic incident has highlighted the need for stricter oversight of AI platforms. A Florida mother is suing Character.AI, alleging that her son’s suicide was linked to his addiction to conversations with a chatbot on its platform. The lawsuit emphasizes the growing risks of unregulated AI interactions and the lack of safeguards for young users. This case, along with a simultaneous lawsuit against Google, underscores the urgent need for clear age restrictions and accountability mechanisms to prevent further harm from unchecked AI use. https://lnkd.in/gaSBHAnc
-
This week's NCSEA On Location has something for everyone in child support. Raghavan Varadachari, Director of the Division of State and Tribal Systems at the federal Office of Child Support Services, shares his insights into legacy child support systems and how states are transforming their systems to meet the needs of the present. https://bit.ly/3TIKlY4

Interested in AI, human-centered design, system security, the challenges of remote work, and the cost of child support systems? You'll find interesting answers in this impressively wide-ranging discussion.
-
How should child safety be prioritised when it comes to AI in social media?

Speaking with Colm Ó Mongáin on The Late Debate, RTE Radio One, Alex Cooney, CEO of CyberSafeKids, said, "I think children need to be treated as children when they're using these services. If you think of children offline, we do a lot to ensure that their best interests are protected, their rights are upheld. We really need to apply that same approach to children online. We need to ensure that these services are protecting children who are on their platforms and using their services. I know there will be a focus on this through the online safety commissioner's work and also as a result of the Digital Services Act at a European level, but we do urgently need to see these changes happening."

Listen back to Alex Cooney, CEO of CyberSafeKids; Malcolm Byrne, Fianna Fáil Senator; Maurice Quinlivan, Sinn Féin TD for Limerick City; Stephen Kinsella, Professor of Economics at University of Limerick; and Elaine Burke, Science and Technology Journalist, discuss how child safety should be prioritised when it comes to AI in social media with Colm Ó Mongáin on The Late Debate, RTE Radio One. https://lnkd.in/dfeBGnDa