Responsible AI for social media governance
This project responds to growing concerns about the misuse of social media platforms to propagate disinformation, extremism, violence, harassment and abuse. Its aim is to identify a set of technical and democratic methods that governments could adopt to safely pose agreed questions, and take agreed measurements, about the effects of social media recommender systems. The project dovetails with ongoing work in the Christchurch Call and the Global Internet Forum to Counter Terrorism, which bring together governments, tech companies and citizens’ groups with the aim of eliminating terrorist and violent extremist content online.
See also the published article: Generative AI models should include detection mechanisms as a condition for public release
Reports
2024
Social Media Governance project: Summary of work in 2024 (November 2024)
Crowdsourcing annotations for harmful content classifiers: An update from GPAI’s pilot project on political hate speech in India (November 2024)
How the DSA can enable a public science of digital platform social impacts (policy brief)
2023
Social Media Governance Project: Summary of Work in 2023 (October 2023)
2022
Transparency mechanisms for social media recommender algorithms: From proposals to action (November 2022)
2021
Social Media Governance project: Summary of work in 2021