Automated Decision Impact Assessment (ADIA)
This program aimed to develop and test a risk assessment framework, called ADIA (Automated Decision Impact Assessment), for AI applications deployed in Europe. Ten European AI businesses participated, applying the framework to their own AI projects in a four-week design sprint. The program explored the effectiveness of ADIA in identifying and mitigating risks associated with AI and automated decision-making (ADM) systems.
Deployment Period | September - November 2020
Read the report now!
EUROPE | JANUARY 2021
This report presents the findings and recommendations of Open Loop’s policy prototyping program on AI impact assessment, which was rolled out in Europe from September to November 2020.
As the report outlines, the results of Open Loop’s first policy prototyping experiment were very promising. Based on feedback from the participating companies, our prototype of a law requiring AI risk assessments, combined with a playbook for implementing it, proved clearly valuable as a tool for identifying and mitigating risks in their AI applications that they might not otherwise have addressed.
Our partners’ experiences highlighted how this kind of risk assessment can inform a more flexible, practicable, and innovative approach to assessing and managing AI risks than more prescriptive policy approaches.
PROGRAM DETAILS
Main Findings & Recommendations
The Open Loop report "AI Impact Assessment: A Policy Prototyping Experiment" details the positive outcomes of the program. A prototype law requiring AI risk assessments, along with a companion implementation guide, proved valuable for participants in identifying and mitigating risks in their AI applications.
The program demonstrated the effectiveness of ADIA assessments in proactive risk management for AI/ADM systems. The report highlights how the ADIA approach offers a more adaptable, practical, and innovative method for assessing and managing AI risks than rigid policy approaches.
These findings suggest that risk assessments like ADIA can be a valuable tool for responsible AI development. The program's recommendations, detailed in the full report, aim to inform European policymakers as they create regulations for trustworthy AI.
Value of Risk Assessments
Performing ADIA assessments proved valuable for companies in identifying and mitigating risks associated with AI/ADM systems.
Procedural Focus
Regulations should focus on guiding the risk assessment process rather than prescribing specific actions for all AI applications.
Detailed Guidance
Clear and detailed instructions on implementing the ADIA process should accompany any regulations.
Risk Definition
Regulatory frameworks should provide specific definitions of the types of risks considered within their scope.
Documentation and Justification
Regulations should require thorough documentation of risk assessments and the rationale behind chosen mitigation measures.
Actor Taxonomy
Developing a clear classification system for the different parties involved in AI risk assessments is crucial.
Value Impact Assessment
Guidance should be provided on how AI/ADM systems might impact societal values and how potential conflicts between values can be addressed.
Integration with Existing Processes
New risk assessment processes should build upon and improve existing frameworks.
Risk-Based Regulation
A procedural risk assessment approach can inform the development of targeted regulations, applying proportionate regulatory requirements based on the assessed risk level of specific AI applications.
Partners & Observers
Explore other programs
Competition in AI Foundation Models
Meta’s Open Loop launched its first policy prototyping program in the United Kingdom, focused on testing the Competition and Markets Authority’s (CMA) AI Principles to ensure that they are clear, implementable, and effective at guiding the ongoing development and use of AI Foundation Models while protecting competition and consumers.
Generative AI Risk Management
Meta’s Open Loop launched its first policy prototyping research program in the United States in late 2023, focused on testing the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) 1.0. The program gave participating companies the opportunity to learn about NIST’s AI RMF and its subsequent “Generative AI Profile” (NIST AI 600-1), and to understand how this guidance can be applied to developing and deploying generative AI systems. At the same time, the program gathered evidence on current practices and provided valuable insights and feedback to NIST, which can inform future iterations of the RMF and the Generative AI Profile.
Artificial Intelligence Act
The EU AI Act program is the largest policy prototyping initiative to date, engaging over 60 participants from more than 50 companies developing AI and ML products. The program was structured into three pillars, each assessing and scrutinizing key articles of the proposed EU regulation.
Privacy-Enhancing Technologies
The Open Loop Brazil program was launched in tandem with a twin policy prototyping program in Uruguay, with the aim of guiding and enabling companies in Brazil to leverage and apply privacy-enhancing technologies (PETs) to help de-identify data and mitigate privacy-related risks.
Human-centric AI
The Open Loop India program was a collaboration between Meta, ArtEZ University of the Arts, and The Dialogue to develop a stakeholder engagement framework that operationalizes the principle of human-centric AI.
Privacy-Enhancing Technologies
The Open Loop Uruguay program was launched in tandem with a twin policy prototyping program in Brazil, with the aim of guiding and enabling companies in Uruguay to leverage and select privacy-enhancing technologies (PETs) to help de-identify data and mitigate privacy-related risks.
GET INVOLVED
Do you have innovative ideas on how to govern emerging technologies?
Do you want to co-develop and test new policy ideas?
We want to hear from you!