
Data Literacy for Responsible AI: Governance and Accountability

October 5, 2021
by Scott Reed
· 3 min read

At the organizational level, artificial intelligence represents the potential for major, novel gains. But the speed, scale, and novelty of the impact AI can deliver also increase organizational risk. For companies implementing AI systems, that risk extends beyond lost revenue to the reputational damage of deploying an algorithm that is perceived to be discriminatory or harmful to vulnerable groups.

Evaluating and mitigating the risk that comes with any new technology has been standard organizational practice since World War II: introducing innovations like aviation and nuclear power to society required robust risk management frameworks. AI is no different, and by its nature it demands a comprehensive approach to governance grounded in risk management.

Here is a two-step process to maintain proper AI governance and identify, understand, test for, and mitigate potentially harmful behaviors.

Step 1: Classify the AI Decision Type

The first step is to classify AI decision types on a spectrum of risk, from low-risk cases representing small monetary losses to high-risk cases that have a possibility for either significant monetary losses or loss of life. Decision type classification helps prioritize governance efforts and resources on the cases where it is most impactful; low-risk AI decisions require less robust governance than medium- or high-risk types. For high-risk cases, it’s typically more appropriate for humans and AI systems to collaborate toward a final decision than to automate the decision-making completely. A good example of this kind of shared responsibility would be in the medical sphere, where an AI system might recommend a diagnosis but a doctor would evaluate the patient and make the final decision.
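To make the idea concrete, here is a minimal sketch of how a team might encode that classification in code. The risk tiers, monetary thresholds, `DecisionType` fields, and human-review rule are illustrative assumptions for this post, not features of any particular product; each organization would set its own criteria.

```python
from dataclasses import dataclass
from enum import Enum


class RiskLevel(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class DecisionType:
    name: str
    max_monetary_loss: float   # worst-case loss per decision, in dollars
    affects_safety: bool       # could a wrong decision endanger people?


def classify_risk(decision: DecisionType) -> RiskLevel:
    """Place a decision type on a low/medium/high risk spectrum.

    Thresholds here are illustrative; each organization defines its own.
    """
    if decision.affects_safety:
        return RiskLevel.HIGH
    if decision.max_monetary_loss >= 1_000_000:
        return RiskLevel.HIGH
    if decision.max_monetary_loss >= 10_000:
        return RiskLevel.MEDIUM
    return RiskLevel.LOW


def requires_human_review(level: RiskLevel) -> bool:
    """High-risk decisions keep a human in the loop rather than being fully automated."""
    return level is RiskLevel.HIGH


# Example: an AI-assisted diagnosis is high risk, so a doctor makes the final call.
diagnosis = DecisionType("recommend diagnosis", max_monetary_loss=0, affects_safety=True)
level = classify_risk(diagnosis)
print(level, requires_human_review(level))  # RiskLevel.HIGH True
```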

Step 2: Conduct an Impact Assessment

After classifying the decision type, the second step is to conduct a well-defined impact assessment to identify all stakeholders and assess the potential risk of harm to each of them for a given AI use case. Impact assessments are not formally mandated for AI governance, but as a proactive measure they go a long way toward developing fair, bias-free, low-risk AI systems. In the manufacturing space, for example, an environmental assessment helps weigh the benefits of building a new factory against the impact it would have on the surrounding communities. Applied to AI systems, impact assessments play the same role, balancing the benefits of the technology against the potential risks posed to stakeholders at every level.
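A lightweight way to capture such an assessment is a likelihood-by-severity scoring table, the same structure used in conventional risk matrices. The sketch below is illustrative only; the stakeholders, harms, and 1–5 scales are assumptions chosen for the example.

```python
from dataclasses import dataclass


@dataclass
class StakeholderImpact:
    stakeholder: str
    potential_harm: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    severity: int     # 1 (negligible) .. 5 (catastrophic)

    @property
    def risk_score(self) -> int:
        # Classic risk-matrix scoring: likelihood times severity.
        return self.likelihood * self.severity


def impact_assessment(impacts: list[StakeholderImpact]) -> list[StakeholderImpact]:
    """Rank stakeholder harms so the highest-risk items get mitigation plans first."""
    return sorted(impacts, key=lambda i: i.risk_score, reverse=True)


# Illustrative entries for a hypothetical loan-approval model.
assessment = impact_assessment([
    StakeholderImpact("applicants", "unfair denial for a protected group", 3, 4),
    StakeholderImpact("lender", "regulatory fine for biased decisions", 2, 5),
    StakeholderImpact("operations team", "manual review backlog", 4, 2),
])
for item in assessment:
    print(item.stakeholder, item.risk_score)
```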

Those two steps help organizations map out the potential for loss, the affected stakeholders, and the associated risks for a given AI system. Then risk matrices and practical unit tests provide a detailed understanding of each hazard and enable organizations to apply corresponding management and mitigation plans to each level of risk. These key model and organizational behaviors support best practices in AI governance:

  • Comprehensive and transparent checklists: Providing internal AI stakeholders with an open implementation process encourages diverse thinking.
  • Automated and user-guided testing: Defined procedures ensure consistent, repeatable, and auditable data science methods, implementations, and impacts.
  • Business rule testing: AI models should adhere to data science and IT system requirements while also satisfying standard operating procedures (see the sketch after this list).
  • Detailed test reports: Sharing testing results broadly and regularly addresses the interdisciplinary nature of AI, supports system stability, and helps achieve AI’s ethical ends.
  • Direct responsibility: Requiring sign-offs from individuals in data science, legal, model risk management, IT, and business leadership standardizes acceptance of perceived risks.
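As an example of what automated, repeatable business rule testing can look like, the sketch below expresses two credit-policy rules as plain unit tests that could run against every new model version and be archived as part of a test report. The `predict_approval` function and the policy thresholds are hypothetical stand-ins, not a real deployed model or API.

```python
# Hypothetical stand-in for the deployed model; a real test would call its scoring API.
def predict_approval(applicant: dict) -> bool:
    return applicant["credit_score"] >= 620 and applicant["debt_to_income"] <= 0.43


def test_business_rule_minimum_credit_score():
    # Standard operating procedure: never approve below the policy floor of 620.
    applicant = {"credit_score": 580, "debt_to_income": 0.20}
    assert predict_approval(applicant) is False


def test_business_rule_debt_to_income_cap():
    # IT/system requirement mirrored as a repeatable, auditable unit test.
    applicant = {"credit_score": 700, "debt_to_income": 0.55}
    assert predict_approval(applicant) is False
```

Run with a test runner such as pytest, these checks produce the consistent, auditable evidence that the checklist items above call for.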

Accountability for the power and the risks of AI systems rests with the technology’s human creators, facilitators, and operators. That’s why it’s crucial to include stakeholders at every level of an organization in AI governance practices, equipping each responsible party with the understanding and the motivation they need to properly govern an AI system and mitigate any risks that may arise.

White Paper: Data Literacy for Responsible AI
About the author
Scott Reed

Trusted AI Data Scientist

Scott Reed is a Trusted AI Data Scientist at DataRobot. On the Applied AI Ethics team, he helps customers adopt trust features and navigate sensitive use cases, contributes to product enhancements in the platform, and provides thought leadership on AI ethics. Prior to DataRobot, he worked as a data scientist at Fannie Mae. He holds an M.S. in Applied Information Technology from George Mason University and a B.A. in International Relations from Bucknell University.
