

Working Group on Responsible AI

The work of the Working Group on Responsible AI (RAI) is grounded in a vision of AI that is human-centred, fair, equitable, inclusive and respectful of human rights and democracy, and that contributes positively to the public good. RAI's mandate aligns closely with that vision and with GPAI's overall mission: to foster and contribute to the responsible development, use and governance of human-centred AI systems, in congruence with the UN Sustainable Development Goals.

Like all other GPAI Working Groups, RAI does not operate in a silo: it seeks to collaborate with the other Working Groups. For instance, RAI works with the Data Governance Working Group when their respective projects share common dimensions.

The ad hoc AI and Pandemic Response Subgroup, created in July 2020 to support the responsible development and use of AI-enabled solutions to COVID-19 and future pandemics, was merged into RAI in February 2022. The projects the subgroup was working on were also transferred to RAI's stewardship.

Current projects

The Working Group on Responsible AI is pursuing the following projects:

GPAI expert reports

2023

  • RAI Working Group Report (December 2023)

The term “responsible AI” is gaining traction, both in tech circles and beyond, but how can it be achieved? By aligning closely with GPAI’s mission, which promotes fair, equitable and inclusive AI in accordance with the UN Sustainable Development Goals (SDGs), the Responsible AI Expert Working Group (RAI EWG) works to ensure that AI is developed in the interests of the public good. This report outlines the group’s outputs from 2023, as well as its plans for 2024.

  • RAI Strategy for the Environment Workshop Report (RAISE) (November 2023)

From forecasting the impacts of climate change to highlighting the risks facing various ecosystems, AI can help us better understand and prepare for climate action and biodiversity preservation. Since 2020, the Responsible AI Strategy for the Environment (RAISE) project has produced recommendations in these areas and taken steps to ensure their implementation. This year, GPAI Experts organised a workshop to design approaches for increasing practical and international action to this end.

  • Social Media Governance Project - Summary of Work in 2023 (October 2023)

Social media is one of the most influential channels through which AI can affect our daily lives. Throughout 2023, GPAI Experts examined the power of algorithms to shape our interactions with online content and, ultimately, the way we perceive that information. Their work on this topic was influential in the development of the EU AI Act and the G7 Hiroshima AI Process.

  • Crowdsourcing the curation of the training set for harmful content classifiers used in social media (December 2023)

How can we moderate harmful content on social media? Harmful content classifiers are key to identifying and flagging inappropriate or dangerous posts, but their use is neither consistent nor transparent to the public. This makes it increasingly difficult to develop effective policies that mitigate the dangers of online content while respecting the values of freedom and democracy. The report examines political hate speech during two elections in India and proposes new models based on classifiers trained on semi-public datasets, rather than on private ones held within companies.

  • Scaling Responsible AI Solutions - Challenges and Opportunities (December 2023)

Along with its opportunities, AI brings a number of social challenges. In response, responsible AI “solutions” have been developed to ensure that AI systems uphold democratic values. By enabling developers to identify problematic stages in the AI lifecycle and adjust them accordingly, such solutions are paramount to helping AI systems meet responsible best practices. This report presents the Scaling Responsible AI Solutions (SRAIS) project, which aimed to identify challenges to the responsibility and scaling of such solutions and produced recommendations to overcome them.

  • Pandemic Resilience - Developing an AI-calibrated ensemble of models to inform decision making (December 2023)

AI is being applied in the health sector to inform policy in times of medical uncertainty. Using an ensemble model (a group of predictive algorithms), this report explores AI’s potential to forecast the epidemic spread and socio-economic impact of COVID-19 across various locations. Based on its findings, it provides policy recommendations, including strengthening ties between modellers and decision makers, establishing a feedback mechanism to facilitate the adjustment of policies according to model outcomes, and developing a public data pipeline.

  • Towards Real Diversity and Gender Equality in Artificial Intelligence - Advancement Report (November 2023)

How can we avoid bias in AI systems? AI is trained on datasets containing biases that reinforce misconceptions about certain groups and threaten the safety and dignity of their members. Acknowledging the harm this can cause to women and marginalised communities, this report calls for their increased consideration throughout the AI life cycle. Through reviewing the literature, speaking with marginalised individuals, and analysing existing initiatives in the field, it builds a comprehensive understanding of their experiences in order to integrate their perspectives into policy.

2022

  • AI for Net Zero Electricity (December 2022)
  • Responsible AI Working Group Report (November 2022)
  • Transparency mechanisms for social media recommender algorithms: From proposals to action (November 2022)
  • Biodiversity and Artificial Intelligence: Opportunities and recommendations for action (November 2022)
  • AI-powered immediate response to pandemics: Summaries of top initiatives (March 2022)
  • Measuring the environmental impacts of Artificial Intelligence compute and applications: The AI footprint (November 2022)
  • AI for public good drug discovery: Advocacy efforts and a further call to action (October 2022)

2021

  • Responsible AI Working Group Report (November 2021)
  • Responsible AI for social media guidance: A proposed collaborative method for studying the effects of social media recommender systems on users (November 2021)
  • Climate change and AI: Recommendations for government action (November 2021)

2020

  • Responsible AI Working Group Report (November 2020)
  • Areas for future action in the responsible AI ecosystem (supporting report prepared for GPAI by The Future Society, December 2020)

Our experts

Group contact point: GPAI Montreal Centre of Expertise

Group participants

  • Amir Banifatemi, Co-Chair, XPRIZE
  • Francesca Rossi, Co-Chair, IBM Research
  • Aditya Mohan, Expert, National Standards Authority of Ireland
  • Adrian Weller, Expert, Centre for Data Ethics and Innovation (CDEI) advisory board, The Alan Turing Institute
  • Alistair Knott, Co-Lead, University of Otago
  • Anurag Agrawal, Expert, Council of Scientific and Industrial Research
  • Arunima Sarkar, Expert, World Economic Forum (WEF)
  • Clara Neppel, Expert, IEEE European Business Operations
  • Catherine Régis, Expert, University of Montréal
  • Daniele Pucci, Expert, Istituto Italiano di Tecnologia, Genova
  • Dino Pedreschi, Co-Lead, University of Pisa
  • Farah Magrabi, Expert, Macquarie University, Australian Institute of Health Innovation
  • Hiroaki Kitano, Expert, Sony Computer Science Laboratories, Inc.
  • Inese Podgaiska, Expert, Association of Nordic Engineers
  • Ivan Bratko, Expert, University of Ljubljana
  • Juan David Gutierrez, Co-Lead, Universidad del Rosario
  • Juliana Sakai, Expert, Transparência Brasil
  • Kate Hannah, Expert, Director, The Disinformation Project; Principal Investigator, Te Pūnaha Matatini
  • Konstantinos Votis, Expert, Visual Analytics Lab at CERTH/ITI
  • Mehmet Haklidir, Expert, TUBITAK BILGEM
  • Michael Justin O'Sullivan, Co-Lead, University of Auckland
  • Myuhng-Joo Kim, Expert, Seoul Women's University
  • Nicolas Miailhe, Co-Lead, The Future Society
  • Osamu Sudo, Expert, Chuo University
  • Paola Ricaurte Quijano, Co-Lead, Tecnológico de Monterrey
  • Rachel Dunscombe, Expert, Imperial College London
  • Ricardo Baeza-Yates, Expert, Institute for Experiential AI of Northeastern University
  • Rob Heyman, Expert, ICHEC – Brussels
  • Dubravko Ćulibrk, Expert, Institute for Artificial Intelligence Research and Development of Serbia
  • Sandro Radovanović, Expert, University of Belgrade
  • Seydina Moussa Ndiaye, Expert, Director of the FORCE-N Program at the Virtual University of Senegal
  • Stuart Russell, Expert, UC Berkeley
  • Susan Leavy, Expert, School of Information and Communication, University College Dublin, Ireland
  • Tom Lenaerts, Expert, Université Libre de Bruxelles – FARI, the AI for the Common Good Institute
  • Toshiya Jitsuzumi, Expert, Chuo University
  • Venkataraman Sundareswaran (Sundar), Expert, World Economic Forum
  • Vilas Dhar, Expert, Patrick J. McGovern Foundation
  • Virginia Dignum, Expert, Umeå University
  • Sarith Felber, Expert, Ministry of Justice
  • Cyrus Hodes, Co-Lead, AIGC Chain
  • Rachel Adams, Expert, Research ICT Africa
  • Przemyslaw Biecek, Expert, University of Warsaw
  • Sacha Alanoca, Expert, Harvard University
  • Nava Shaked, Expert, Holon Institute of Technology
  • Monica Lopez, Expert, HOLISTIC AI
  • Jeff Ward, Expert, Animikii Indigenous Technology
  • Barry O’Sullivan, Expert, University College Cork 
  • Benjamin Prud'homme, Co-Lead, MILA
  • Celine Caira, Observer, OECD
  • Dafna Feinholz, Observer, UNESCO
  • Rosanna Fanni, Observer, UNESCO