Big Ideas: AI Governance, Assurance, Audit

Scaling AI | Jacob Beswick

For a long while, Dataiku has proposed that AI Governance should be a fundamental consideration where AI is used within organizations and, in particular, where organizations are looking to scale their use of AI to new business lines and use cases.

As it turns out, Dataiku is not an island. Indeed, AI Governance has featured in a multitude of international efforts proactively working toward responsible, safe, and trustworthy AI. See NIST’s draft AI Risk Management Framework, Singapore’s A.I. Verify Toolkit, and the U.K.’s AI Assurance work.

For casual onlookers, or individuals and teams looking to learn more about AI Governance, there are developments that might introduce some complexity and, perhaps, uncertainty: in particular, AI Assurance and AI Audits.


In this blog, we’ll parse through some of the big ideas behind important topics like ensuring your organization is using AI in the way it was intended and supporting outcomes like explainability, transparency, and accountability. These are, in our worldview, some of the compelling reasons to build out a strong AI Governance framework and embed it across teams.

With that said, there are, in theory, other ways to get at some of these outcomes. Looking to AI Assurance and AI Audit, this blog will give a brief overview of what they are, what they seek to achieve, and how they relate to our ideas around AI Governance.

A fundamental argument carries throughout this blog: the merits of AI Assurance and AI Audits are only truly realized if organizations have content (e.g., information about key processes, practices, requirements) to feed auditors or third parties providing assurance. Such content, if highly decentralized, stands to create new kinds of burdens on organizations that skip governance practices and aim for Assurance and Audits as a first (and perhaps only) intervention.

What's the Difference? AI Governance, Assurance, and Audit

We’ve spoken before about AI Governance. In short, Dataiku’s view is that:

An AI Governance framework enforces organizational priorities through standardized rules, processes, and requirements that shape how AI is designed, developed, and deployed. 

Organizational priorities are informed by things like internal policy and external regulatory compliance; values-based considerations around implementing AI that meets established bias, fairness, or error-rate thresholds; or simply making sure it is known who is doing what on a given AI-featured business line.

Standardized rules, processes, and requirements mean that meeting priorities isn’t left to chance or creativity. Instead, practices are identified and implemented. Want to deploy a model? You must get it reviewed and signed off first. Need to ensure that a model has been tested for bias? Implement the agreed test pre-deployment and document it. Are you considering implementing a model that pushes the boundaries of the priorities around bias, fairness, or error-rate thresholds because the business value is extremely promising? You need to appeal to your senior manager or perhaps submit it to an internal committee for review.
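To make this concrete, here is a minimal sketch of how such a pre-deployment gate might be expressed in code. It is illustrative only: the names (`release_gate`, `governance_record`) and the rules encoded are assumptions for the sake of example, not a prescribed framework or a Dataiku feature.

```python
# A minimal, hypothetical sketch of a pre-deployment governance gate.
# Names, fields, and rules are illustrative assumptions, not an actual framework.

REQUIRED_SIGN_OFFS = {"model_owner", "risk_reviewer"}

def release_gate(governance_record: dict) -> list[str]:
    """Return a list of blocking issues; an empty list means the model may be deployed."""
    issues = []

    # Rule 1: the model must be reviewed and signed off before deployment.
    missing = REQUIRED_SIGN_OFFS - set(governance_record.get("sign_offs", []))
    if missing:
        issues.append(f"Missing sign-offs: {sorted(missing)}")

    # Rule 2: the agreed bias test must have been run and documented pre-deployment.
    if not governance_record.get("bias_test", {}).get("completed"):
        issues.append("Bias test not completed or not documented")

    # Rule 3: use cases that push agreed thresholds need an escalation decision on file.
    if governance_record.get("exceeds_thresholds") and not governance_record.get("escalation_approved"):
        issues.append("Threshold exception requires committee or senior-manager approval")

    return issues


record = {
    "sign_offs": ["model_owner"],
    "bias_test": {"completed": True, "metric": "demographic_parity", "result": 0.03},
    "exceeds_thresholds": False,
}
print(release_gate(record))  # -> ["Missing sign-offs: ['risk_reviewer']"]
```

The point is less the code itself than the fact that the rules are explicit, repeatable, and leave a documented trail behind them.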

Finally, setting the scope to include design, development, and deployment ensures that priorities, rules, processes, and requirements co-exist with a use case from the concept stage through to post-deployment. This builds internal confidence that deployed models will continue to deliver on their intended objectives and that, should they not, the shortfall will be identified, whether early on or late in the game, and corrected or, if that’s not possible, the model can be retired.

Without prescribing how best to do all of the above, and recognizing that organizations operating in different sectors and markets will have unique, contextually bound priorities, there is nonetheless a universal output from a well-designed and implemented AI Governance framework: systematized, centrally archived, and readily available information.

Why does this output matter? Let’s discuss this in relation to AI Assurance and AI Audit. 

AI Assurance

Assurance is not a new field; however, its application to AI is something of a recent innovation. Championed over the past year by the U.K. government’s Centre for Data Ethics and Innovation (CDEI), AI Assurance has “justified trust” as its focal point. Justified trust lives at the intersection of trustworthiness (an AI use case is worthy of trust) and actual trust (an individual places their trust in the use case). The distinction is important for qualifying whether a use case is trusted but not deserving of that trust, or vice versa. What to do about that remains an open question, and one beyond the scope of this discussion.

The importance of trust with respect to leveraging AI, again, is not a new thing: It’s been the core purpose of trustworthy AI frameworks, and it sits at the foundations of the European AI Act, Singapore’s A.I. Verify, and beyond.

Given that this focal point is shared widely, AI Assurance leans into exploring how justified trust can be provided proactively, as distinct from the constrictive or penalizing means that regulation relies on. There is a long list of mechanisms discussed as “techniques for assuring AI systems,” and third parties (“assurance providers”) are seen as important vehicles for delivering them. According to the CDEI, these can include the following and sit at different parts of the AI lifecycle:

[Figure: assurance mechanisms across the AI system lifecycle. Source: CDEI]

Before homing in on the audits listed, I’d encourage the reader to reflect on the distinction between a model scoped to deliver the same outcomes (perhaps optimizing variable x across time while the data it ingests modifies said model) and an automation function that consistently outputs an agreed deliverable over and over. AI systems, and in particular ML-based models, are subject to change. If we accept this, and we look to the assurance vehicles above, it’s reasonable to think that while they deliver a point-in-time view, they are not guaranteed to deliver on assurance in perpetuity. So, if there’s a world where assurance practices become normalized and regularized, then they would no doubt benefit from information about models and use cases across time. Indeed, the CDEI notes this, emphasizing that “where the accuracy of predictions drifts over time … ongoing testing post-deployment can increase this certainty [about accuracy] by accounting for model drift.” AI Governance, as we see it, facilitates that.
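To illustrate what that ongoing, post-deployment testing might look like, here is a minimal sketch. The tolerance threshold, the field names, and the notion of a central log are assumptions for illustration, not CDEI guidance or a specific product implementation.

```python
# A hypothetical sketch of ongoing post-deployment accuracy monitoring.
# The threshold and the central log are illustrative assumptions only.
from datetime import datetime, timezone

MAX_ACCURACY_DROP = 0.05  # assumed tolerance agreed at sign-off

def check_for_drift(baseline_accuracy: float, live_accuracy: float, central_log: list) -> bool:
    """Record a point-in-time check and flag whether accuracy has drifted beyond tolerance."""
    drifted = (baseline_accuracy - live_accuracy) > MAX_ACCURACY_DROP
    central_log.append({
        "checked_at": datetime.now(timezone.utc).isoformat(),
        "baseline_accuracy": baseline_accuracy,
        "live_accuracy": live_accuracy,
        "drift_flagged": drifted,
    })
    return drifted

log: list = []
if check_for_drift(baseline_accuracy=0.91, live_accuracy=0.84, central_log=log):
    print("Accuracy drift detected: trigger review, retraining, or retirement")
```

The assurance value here comes less from any single check than from the accumulated log, which describes the model across time rather than at a single point.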

Note that audits make two appearances here: bias audits and compliance audits. Picking up on the point above, credible auditing requires information about the thing being audited. Early thinking on this from the CDEI discusses ‘evidence’ in terms of ‘the extent to which subject matter is in compliance with audit criteria’ for compliance audits and ‘satisfying quant benchmark for fairness or appropriate impact mitigation in place’ for bias audits. Of course, these criteria are evolving through new regulation, through advice and guidance from regulators, and through standards organizations. And so exactly which points of information matter most in the world of AI Assurance is TBD.

Crucially, and this is a point this blog seeks to drive home, for any of the assurance activities to be meaningful or impactful, there’s a critical dependency on information held internally by the organization seeking out AI Assurance. Such information might be outputs from business and ops teams, living in spreadsheets, emails, and meeting minutes. Where strong governance practices are embedded, such information can live in a centralized repository, guesswork as to whether something is missing isn’t a preoccupation, and the information deemed relevant or important will be available.
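For illustration only, a single use case’s entry in such a repository might aggregate the kinds of information an assurance provider or auditor would otherwise have to chase across inboxes and spreadsheets. The fields below are hypothetical, not a standard schema.

```python
# Hypothetical example of a centralized governance record for one AI use case.
# Fields are illustrative; real priorities and criteria will be organization-specific.
use_case_record = {
    "use_case": "credit_limit_recommendation",
    "owner": "retail_lending_team",
    "intended_purpose": "Recommend credit limit adjustments for existing customers",
    "risk_tier": "high",
    "sign_offs": [
        {"role": "model_owner", "date": "2023-02-01"},
        {"role": "risk_reviewer", "date": "2023-02-07"},
    ],
    "bias_test": {"metric": "demographic_parity", "result": 0.03, "threshold": 0.05},
    "post_deployment_checks": "monthly accuracy and drift review, logged centrally",
}
```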

Given this, a question for readers: in some future state where you are seeking to realize AI Assurance within your organization, will you have the right information readily available, systematically and consistently collected and aggregated? And, if not, what kinds of burdens would you expect to face if confronted with any of the above assurance vehicles?

AI Audit

One of my favorite quotes about AI Audits (yes, I said that) comes from the U.K.’s Digital Regulation Cooperation Forum (DRCF). In a recent paper on the U.K.’s AI Audit landscape, stakeholders the DRCF consulted referred to the AI auditing industry as a

“largely unregulated market for auditing … [that] risked becoming a ‘wild west’ patchwork where entrants were able to enter the market for algorithm auditing without any assurance of quality.”

In the popular (or at least my) imagination, the Wild West was lawless; governed by fear, strength, and violence; and ‘winners,’ as we know them through the likes of the recent retelling in “Westworld,” depended on scheming, manipulation, and ready access to means of warfare (not to mention tech). These traits are not what one typically associates with the role of audit: a clinical exercise that seeks out, digests, and reproduces facts in order to draw sober conclusions about the state of a thing.

Why start with this opening? Well, in a world where there’s growing pressure for organizations to be ready for AI regulation, to build trust, and to procure, develop, and use AI responsibly, the state of the means for achieving this (especially when relying on third parties) suggests that organizations should be wary.

From this humble author’s perspective, this isn’t reassuring and, when considered in the context of the AI Governance defined above and of AI Assurance, it leaves a lot of questions unanswered. Here are a few:

  • First, whether a third-party provider can actually do their job — in terms of understanding concepts and why they matter; in terms of technical skills; and, finally, in terms of access to information from a customer organization;
  • Second, whether a third-party provider can do their job effectively (a cost proposition that resonates when we think of disordered customer information); and 
  • Third, whether an organization can feel confident that the provision it receives will meaningfully represent the work done in the organization.

The reader may be picking up on a recurring theme by now: that all of these provisions, whether AI Assurance as an umbrella’s canopy or AI Audits as one of the ribs propping it up, can be undertaken with or without solidified AI Governance practices. However, I would argue that doing so could raise as many questions as it answers, some of which are fundamental, concerning efficacy and cost in particular.

In Summary

I must state that the discussion here represents ruminations on the fast-moving, many-directional space in which AI Governance sits. Organizations should be reassured that the risks around AI are being addressed in many ways, and that this will no doubt have positive impacts in several domains, including business health and customer or end-user uptake and use (including trust and confidence) of services featuring AI.

The big ‘however’ here is that, I would argue, the vehicles and means for producing trust in AI should not be seen as band-aids, but more like CRISPR: Information is needed before it can be interrogated, digested, and presented anew to serve a specified function. The starting question, therefore, shouldn’t be: Can I leverage AI Assurance or Audits to build trust or to prove something about how my company develops, procures, and uses AI? Instead, the starting questions should be:

  • What are my organization’s priorities? 
  • What are our risks?
  • What do we think are the right operating procedures to consistently implement?
  • What is the right way to document information about how we meet our priorities and address our risks?
  • And how can we leverage this in future when we have to prove to ourselves, our customers, and potential regulators or certification bodies that we’ve done what we said?
