
Automated Decision Impact Assessment (ADIA)

This program aimed to develop and test a risk assessment framework, called ADIA (Automated Decision Impact Assessment), for AI applications deployed in Europe. Ten European AI businesses participated, applying the framework to their own AI projects in a four-week design sprint. The program explored the effectiveness of ADIA in identifying and mitigating risks associated with AI and automated decision-making (ADM) systems.

Deployment Period | September–November 2020

EUROPE | JANUARY 2021

This report presents the findings and recommendations of Open Loop's policy prototyping program on AI impact assessment, which ran in Europe from September to November 2020.

As the report outlines, the results of Open Loop's first policy prototyping experiment were very promising. Based on feedback from the participating companies, our prototype law requiring AI risk assessments, combined with a playbook for implementing it, proved clearly valuable as a tool for identifying and mitigating risks from their AI applications that might otherwise have gone unaddressed.

The experiences of our partners highlighted how this sort of risk assessment can inform a more flexible, practicable, and innovative approach to assessing and managing AI risks than more prescriptive policy measures.

PROGRAM DETAILS

Main Findings & Recommendations

The Open Loop report "AI Impact Assessment: A Policy Prototyping Experiment" details the positive outcomes of the program. A prototype law requiring AI risk assessments, along with a companion implementation guide, proved valuable for participants in identifying and mitigating risks in their AI applications.

The program demonstrated the effectiveness of ADIA assessments in proactively managing risks from AI/ADM systems. The report highlights how the ADIA approach supports a more adaptable, practical, and innovative method for assessing and managing AI risks than rigid, prescriptive policy approaches.

These findings suggest that risk assessments like ADIA can be a valuable tool for responsible AI development. The program's recommendations, detailed in the full report, aim to inform European policymakers as they create regulations for trustworthy AI.

Value of Risk Assessments

Performing ADIA assessments proved valuable for companies in identifying and mitigating risks associated with AI/ADM systems.

Procedural Focus

Regulations should focus on guiding the risk assessment process rather than prescribing specific actions for all AI applications.

Detailed Guidance

Clear and detailed instructions on implementing the ADIA process should accompany any regulations.

Risk Definition

Regulatory frameworks should provide specific definitions of the types of risks considered within their scope.

Documentation and Justification

Regulations should require thorough documentation of risk assessments and the rationale behind chosen mitigation measures.

Actor Taxonomy

Developing a clear classification system for the different parties involved in AI risk assessments is crucial.

Value Impact Assessment

Guidance should be provided on how AI/ADM systems might impact societal values and how potential conflicts between values can be addressed.

Integration with Existing Processes

New risk assessment processes should build upon and improve existing frameworks.

Risk-Based Regulation

A procedural risk assessment approach can inform the development of targeted regulations, applying proportionate regulatory requirements based on the assessed risk level of specific AI applications.

Partners & Observers

GET INVOLVED

Do you have innovative ideas on how to govern emerging technologies?
Do you want to co-develop and test new policy ideas?

We want to hear from you!
