Q&A: Breaking Down the Pentagon’s 5 Ethical Principles for AI

By Catie Grasso

As businesses across industries continue to invest in AI and adoption of the technology becomes more widespread, stakeholders recognize the importance of an overarching set of rules to shape how that technology is implemented and how it transitions from hypothetical to real-world applications.

In late February 2020, the U.S. Department of Defense released five key principles to guide its development of and adherence to ethical AI. The principles, which are based on recommendations from a 15-month study by the Defense Innovation Board, apply to both combat and non-combat functions and are designed to help the U.S. military uphold legal, ethical, and policy commitments in the field of AI.

We recently sat down with Dr. Triveni Gandhi, Dataiku’s in-house responsible AI evangelist and data scientist, to discuss the importance of each of the principles and how Dataiku’s Enterprise AI platform supports each one.

1. Responsible.

DoD personnel will exercise “appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.”

Triveni: This is a broad principle, but it is where everything begins. First, leaders need to determine what constitutes “responsible” in their own organizations, and specifically in the context of AI. This means outlining a mission statement for responsible AI, one that is ideally human-centric and pragmatic. Then, leaders will have to create a set of guidelines that give practitioners clear direction on how to keep their work aligned with this new mission. By tying these guidelines to a bigger statement, practitioners will understand how their work fits into a larger picture of responsibility.

The guidelines should include details on making AI explainable and trusted, both inside and outside of the organization. They should spell out how to communicate AI tenets internally and externally to ensure alignment among those designing AI systems, the employees who depend on them for their jobs, and the customers and end users of products and services affected by those systems.

While these five principles are thorough, they are not exhaustive. Organizations need to outline specifics so they can make sure their teams, processes, and tools are constantly aiming to become more responsible and transparent and are aligned internally and externally, as mentioned above. Any new AI system needs to be geared toward inclusive design, bringing in people from diverse backgrounds and skill sets to take part in the design and implementation. This, in turn, inherently helps make those systems more responsible.

Dataiku supports this commitment by helping companies build an AI strategy that is responsible through accountability, sustainability, and governability. We aim to ensure that models are designed and behave in ways that align with their purpose, that AI-augmented processes remain reliable in their operation and execution, and that all of this is centrally controlled and managed.

2. Equitable.

The release says that the Department will “take deliberate steps to minimize unintended bias in AI capabilities.”

Triveni: Unintended bias can arise in many different parts of the AI pipeline. One common technique for detecting it is subpopulation analysis, a feature we offer in Dataiku, which assesses whether a model performs equally well for every subpopulation involved, as it fundamentally should. During model creation, users can identify and eliminate unintended model biases, leading to a more transparent and fair deployment of AI.
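
To make the idea concrete, here is a minimal sketch of a subpopulation analysis done by hand, independent of Dataiku's built-in feature. The column names and the acceptable accuracy gap are hypothetical placeholders, not values from the DoD release or Dataiku.

```python
# Minimal sketch of subpopulation analysis: compare a model's
# performance across subgroups of a scored dataset. Column names
# ("gender", "label", "prediction") and the 0.05 gap threshold
# are hypothetical illustrations.
import pandas as pd
from sklearn.metrics import accuracy_score

def subpopulation_report(scored: pd.DataFrame, group_col: str,
                         y_true: str, y_pred: str) -> pd.DataFrame:
    """Per-subgroup sample size, accuracy, and positive-prediction rate."""
    rows = []
    for group, sub in scored.groupby(group_col):
        rows.append({
            group_col: group,
            "n": len(sub),
            "accuracy": accuracy_score(sub[y_true], sub[y_pred]),
            "positive_rate": sub[y_pred].mean(),
        })
    return pd.DataFrame(rows)

# Usage on a scored dataset with binary 0/1 labels and predictions:
# report = subpopulation_report(scored, "gender", "label", "prediction")
# A gap above 0.05 between the best- and worst-served group would be
# a signal to revisit the model before deploying it:
# assert report["accuracy"].max() - report["accuracy"].min() <= 0.05
```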

3. Traceable.

Here, the Department aims to equip relevant personnel with “an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.”

Triveni: The first principle, responsible AI, plays an underlying role throughout this entire set of guidelines, but especially with regard to traceability. At any given point in time, teams should be able to see who is working on specific projects and who has access to what information; data experts should be able to easily build and share projects for others to use; and everyone should have visibility into when a decision is made to use an AI system for a specific end goal.

Teams need to know who is actually responsible or accountable for saying, “We’re putting this model into production” or “We’re using this model to make decisions.” Across the DoD, the data science community is focused on providing helpful decision guidance to decision makers. These stakeholders need to be able to understand and trust why a certain course of action is being recommended.

At Dataiku, we accomplish this through our platform’s collaborative nature. By bringing the right people, processes, data, and technologies together in a transparent way, teams can make better strategic decisions throughout the entire model life cycle. Teams should keep robust documentation so any contributor can effectively explain what has been done in a specific project through wikis, to-do lists, model versioning, activity logs, and so on.
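
As one illustration of what machine-readable documentation could look like, here is a minimal sketch of an audit record written at training time. The schema, field names, and log file are hypothetical, not a Dataiku or DoD format.

```python
# Minimal sketch of a traceability record: who trained which model,
# when, and on exactly which data. The field names and the JSONL log
# file are illustrative, not a prescribed schema.
import getpass
import hashlib
import json
from datetime import datetime, timezone

def model_audit_record(model_name: str, version: str,
                       training_data_path: str) -> dict:
    """Build one audit-log entry for a newly trained model."""
    with open(training_data_path, "rb") as f:
        data_hash = hashlib.sha256(f.read()).hexdigest()
    return {
        "model": model_name,
        "version": version,
        "trained_by": getpass.getuser(),
        "trained_at": datetime.now(timezone.utc).isoformat(),
        "training_data": training_data_path,
        # Hashing the data file makes "which data was used" auditable.
        "training_data_sha256": data_hash,
    }

# Append each entry to a shared, append-only log:
# with open("model_audit_log.jsonl", "a") as log:
#     record = model_audit_record("credit_model", "1.3", "train.csv")
#     log.write(json.dumps(record) + "\n")
```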

It is critical that all of these processes and guidelines are clear and upfront and that teams have a plan in place for comments, feedback, documentation, and reporting. To learn more about how organizations can move toward white-box, explainable AI, check out this webinar by VentureBeat and Dataiku.

4. Reliable.

The Department will maintain “explicit, well-defined uses” for its AI projects, and the safety, security, and effectiveness of its AI capabilities will be subject to testing and quality assurance.

Triveni: Here, we’re really talking about impact monitoring. Once the system is deployed, what steps are you taking to ensure it is not causing unintended harm? What are you doing to make sure your model doesn’t drift too far from the truth?

As data changes or unforeseen circumstances cause the incoming data to differ, the model’s impact and real-world implications may change as well. When a model has drifted or decayed out of touch with reality, it is the organization’s responsibility to monitor it comprehensively: measuring drift on the data to be scored and determining when the model must be retrained. With Dataiku’s model drift monitoring plugin, businesses can effectively detect drift between the original test dataset used by the model and the new data.
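
For intuition, here is one common statistical approach to drift detection: a two-sample Kolmogorov-Smirnov test run per numeric feature, comparing the original test set against newly arriving data. This sketch illustrates the concept only; it is not the internal method of Dataiku's plugin.

```python
# One common way to detect data drift: run a two-sample
# Kolmogorov-Smirnov test on each numeric feature, comparing the
# original test set against new data to be scored. Illustrative
# only; not the internal method of Dataiku's drift plugin.
import pandas as pd
from scipy.stats import ks_2samp

def drifted_features(reference: pd.DataFrame, new: pd.DataFrame,
                     alpha: float = 0.01) -> list:
    """Return numeric columns whose distributions differ (p < alpha)."""
    flagged = []
    for col in reference.select_dtypes("number").columns:
        if col not in new.columns:
            continue
        _, p_value = ks_2samp(reference[col].dropna(), new[col].dropna())
        if p_value < alpha:
            flagged.append(col)
    return flagged

# If any feature has drifted, trigger retraining and re-validation:
# if drifted_features(original_test_df, incoming_df):
#     ...  # retrain, re-run subpopulation analysis, redeploy
```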

5. Governable.

This principle states that the Department will “design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.”

Triveni: Regardless of the industry, it is the responsibility of anyone who works on data science and machine learning projects to ensure that both the data and the models themselves are of sound quality. Companies can put metrics and checks into place so that if they retrain a model, run a subpopulation analysis, and find that the previous model was more equitable than the new one, built-in scenarios stop the deployment of the new model, as the sketch below illustrates.
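
Here is a minimal sketch of such a deployment gate, using the spread in positive-prediction rates across subgroups as the equity metric. The metric, the tolerance, and the column names are illustrative choices, not a Dataiku-prescribed scenario.

```python
# Minimal sketch of a deployment gate: block a retrained model if it
# is measurably less equitable than the one currently in production.
# The parity metric, tolerance, and column names are illustrative.
import pandas as pd

def parity_gap(scored: pd.DataFrame, group_col: str, y_pred: str) -> float:
    """Spread in positive-prediction rate across subgroups."""
    rates = scored.groupby(group_col)[y_pred].mean()
    return float(rates.max() - rates.min())

def approve_deployment(current_scored: pd.DataFrame,
                       candidate_scored: pd.DataFrame,
                       group_col: str, y_pred: str,
                       tolerance: float = 0.01) -> bool:
    """Approve only if the candidate is no less equitable than current."""
    current_gap = parity_gap(current_scored, group_col, y_pred)
    candidate_gap = parity_gap(candidate_scored, group_col, y_pred)
    return candidate_gap <= current_gap + tolerance

# Wiring the check into the retraining pipeline means a regression in
# equity halts deployment instead of shipping silently:
# if not approve_deployment(current_df, candidate_df, "region", "prediction"):
#     raise RuntimeError("Candidate model is less equitable; blocked.")
```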

This principle comes down to the notion that team leaders and others in charge need to fundamentally understand the implications of their projects and how the work is being done in order to be held accountable for it. Not only does the system itself need to be governable; the humans involved need to know what it means to apply governance, and that information needs to be communicated broadly to everyone involved.

At Dataiku, we focus on making AI accessible to a wider population within the enterprise and providing a centralized, controlled, and governable environment. All of this needs to be done with human-in-the-loop intelligence, using our capacity to build AI systems that are responsible and sustainable for the long haul.

Q: Should principles like these be applied to all stages of AI projects, from where your data comes from to deploying full machine learning models?

Triveni: I think it’s pivotal to remember that values and guidelines of this nature aren’t just something we talk about after the fact or that play a minor role in one siloed stage of the machine learning process. They are truly a facet of the entire system, just as important as the general principles we follow in a sound data science pipeline.

Q: What steps can organizations take toward responsible AI?

Triveni: Organizations need to train people on what responsible AI means for their unique organization, clearly outline their limits and what is acceptable versus what is not, and then go a step further by using sophisticated tools and technology to stay in line with those clearly stated missions.
