by Borys Stokalski, Bogumił Kamiński, Daniel Kaszyński
5 February 2024
This article discusses the need for policymakers, AI investors, and AI practitioners to see beyond the current generative AI hype and retain a broad field of vision across AI, so as to avoid developing blind spots in areas that may prove even more impactful than generative AI. It explores the key drivers shaping the trajectory of AI growth, including monetization potential, regulation, and industrialization, as well as the challenges of aligning principles with outcomes. The article also introduces the CAST AI design framework, developed by the GPAI Future of Work working group, which supports the conceptualization, evaluation, and design of AI-based products and solutions. The CAST model links capabilities such as generative insights or execution autonomy to design heuristics and patterns, along with responsible AI principles, helping solution architects and product owners shape their design decisions and backlogs in ways that lead to responsible, robust, and well-architected solutions.
How Can Standard Contract Terms Advance Responsible AI Data and Model Sharing for Generative AI and Other Applications?
by Lee Tiedrich and Alban Avdulla
8 December 2023
Stakeholders want to responsibly share more AI data and models but face challenges in doing so, and these challenges have escalated with the meteoric rise of generative AI. Standard contract terms can potentially help overcome them, particularly when combined with business codes of conduct, technical tools, and education. The GPAI IP Advisory Committee started a project in 2021 aimed at fostering the development of standard AI contract terms, which can help translate AI principles into practice. This piece summarizes the Committee's work, including the outcomes of its two recent, well-attended multi-stakeholder workshops.