A systematic review of regulatory strategies and transparency mandates in AI regulation in Europe, the United States, and Canada

Bibliographic Details
Main Authors: Mona Sloane, Elena Wüllhorst
Format: Article
Language: English
Published: Cambridge University Press 2025-01-01
Series: Data & Policy
Subjects:
Online Access: https://www.cambridge.org/core/product/identifier/S2632324924000543/type/journal_article
Description
Summary: In this paper, we provide a systematic review of existing artificial intelligence (AI) regulations in Europe, the United States, and Canada. We build on the qualitative analysis of 129 AI regulations (enacted and not enacted) to identify patterns in regulatory strategies and in AI transparency requirements. Based on the analysis of this sample, we suggest that there are three main regulatory strategies for AI: AI-focused overhauls of existing regulation, the introduction of novel AI regulation, and the omnibus approach. We argue that although these types emerge as distinct strategies, their boundaries are porous as the AI regulation landscape is rapidly evolving. We find that across our sample, AI transparency is effectively treated as a central mechanism for meaningful mitigation of potential AI harms. We therefore focus on AI transparency mandates in our analysis and identify six AI transparency patterns: human in the loop, assessments, audits, disclosures, inventories, and red teaming. We contend that this qualitative analysis of AI regulations and AI transparency patterns provides a much-needed bridge between the policy discourse on AI, which is all too often bound up in very detailed legal discussions, and applied sociotechnical research on AI fairness, accountability, and transparency.
ISSN: 2632-3249