The Algorithmic Accountability Act of 2019: Taking the Right Steps Toward AI Success

April 12, 2019
by Colin Priest · 4 min read

This week, two U.S. senators introduced a bill that would require large companies and specialist data traders to assess and manage the risks associated with automated decision systems. The Algorithmic Accountability Act of 2019 would apply to any business that:

  • Has sales revenue of more than $50 million over the previous three years, or
  • Holds data on more than one million consumers or one million devices, or
  • Is a data trader that collects, sells, or trades third-party consumer information.

This bill is part of a growing worldwide trend. From the EU’s far-reaching General Data Protection Regulation (GDPR) to the New York City Council’s Algorithmic Transparency Bill, legislators have been introducing new regulations to protect consumers in the rapidly developing global data economy.

The Act defines an automated decision system as “a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making, that impacts consumers.” This definition is quite broad and can be interpreted to include many mainstream computerized business processes, such as customer relationship management systems, marketing campaign management systems, and the fraud detection and loan acceptance rules engines commonly used in the finance industry.

Businesses will be required to conduct an impact assessment that covers the risks associated with accuracy, fairness, bias, discrimination, privacy, and security. The Act provides a list of sensitive personal attributes: race, color, national origin, political opinions, religion, trade union membership, genetic data, biometric data, health, gender, gender identity, sexuality, sexual orientation, and criminal convictions or arrests.

The impact assessment must cover the decision system’s purpose, how it was designed and trained, and the data that it uses. It must also detail the actions taken to minimize the risks and any negative impacts on consumers. Furthermore, the Act discusses disclosure to consumers and implies a right for consumers to access the results of the system and to correct or appeal decisions.

While many media headlines have focused on the issues of unfair bias and discrimination, the Act applies across the broader ethical issues of algorithmic decisions. Its regulatory requirements are also sound business practices for the governance of artificial intelligence (AI) systems. Regardless of whether the Act ever becomes law, if your business strategy includes AI (and it probably should if you wish to remain competitive!), then your strategic plan should include AI ethics. Applying AI ethics can improve your organization’s effectiveness, reduce regulatory and reputation risk, and provide a net benefit to society. Much like the requirements of the proposed Act, AI ethics includes the principles of purpose, fairness, disclosure, and governance.

Here are six steps you can take to ensure algorithmic accountability and AI success in your organization:

  1. Develop an AI Ethics Statement that applies to all AI projects and deployed AIs across your organization. That statement should set out clear, detailed guidelines for how the principles of ethical purpose, fairness, disclosure, and governance are to be applied, and should reflect the values of both your organization and society.
  2. Don’t trust black-box models. Insist that your AI provides human-friendly explanations that all stakeholders can understand, and that it gives reasons for its decisions.
  3. Check whether your model directly discriminates by asking your AI to tell you which data fields it uses, the patterns it applies to the values in those fields, and the important words and phrases in any text it uses (see the first sketch after this list).
  4. Check whether your model indirectly discriminates (using proxies for sensitive attributes) by building models that predict the values of sensitive attributes (e.g., gender) from the other attributes, including textual data such as a resume (second sketch below).
  5. Use training data that is representative of the behavior you want your AI to learn. Actively monitor new data and new outcomes, and raise an alert when the new data drifts from the data on which the system was originally trained (third sketch below).
  6. Create an audit trail by documenting the process used to design, train, and deploy your AI. Since this documentation can be very detailed, often running to hundreds of pages, automate its creation (fourth sketch below).
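
To make step 3 concrete, here is a minimal sketch of one way to ask a model which data fields it actually relies on, using permutation importance from scikit-learn. The file name, column names, and model choice are illustrative assumptions, not features of the Act or of any particular product:

```python
# Sketch: which fields does the model actually use? (illustrative data)
# Assumes a hypothetical loan dataset whose features are already numeric.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

df = pd.read_csv("loan_applications.csv")      # hypothetical file
X = df.drop(columns=["approved"])              # input fields
y = df["approved"]                             # the decision being modeled

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one column at a time and measure how much the score degrades;
# fields with near-zero importance are effectively unused by the model.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.4f}")
```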
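For step 4, one simple probe is to train a second model whose only job is to predict the sensitive attribute from the fields your AI actually sees. If that probe scores well, proxies exist. A minimal sketch under the same hypothetical dataset assumption:

```python
# Sketch: can the other fields predict a sensitive attribute? (illustrative)
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("loan_applications.csv")          # hypothetical file
X = df.drop(columns=["approved", "gender"])        # fields the AI sees
y = (df["gender"] == "F").astype(int)              # hypothetical binary encoding

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
probe = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# AUC near 0.5: the features carry little information about gender.
# AUC well above 0.5: proxies exist and need investigation.
auc = roc_auc_score(y_test, probe.predict_proba(X_test)[:, 1])
print(f"Proxy AUC for gender: {auc:.3f}")
```

The same idea extends to text: swap the tabular features for, say, TF-IDF features extracted from resumes.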
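For step 5, one widely used drift measure is the Population Stability Index (PSI). The sketch below handles a single numeric feature; the 0.2 threshold is a common rule of thumb rather than a regulatory requirement, and the data is simulated purely for illustration:

```python
# Sketch: Population Stability Index (PSI) drift check for one numeric feature.
import numpy as np

def psi(expected, actual, bins=10):
    """PSI between a training-time sample and a newer sample of one feature."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf          # cover out-of-range new values
    e_frac = np.histogram(expected, cuts)[0] / len(expected)
    a_frac = np.histogram(actual, cuts)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)         # avoid log(0) and division by zero
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

# Simulated example: this week's incomes have shifted upward since training.
rng = np.random.default_rng(0)
train_income = rng.normal(60_000, 15_000, 10_000)
new_income = rng.normal(70_000, 15_000, 2_000)
score = psi(train_income, new_income)
if score > 0.2:                                  # common rule-of-thumb threshold
    print(f"Drift alert: PSI = {score:.2f} on 'income'")
```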
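Finally, for step 6, even a lightweight script can capture the essentials of an audit trail automatically every time a model is trained. The record layout below is an illustrative assumption, not a prescribed schema:

```python
# Sketch: automatically capture an audit record at training time (illustrative).
import datetime
import hashlib
import json
import platform

def audit_record(data_path, model_params, metrics):
    """Fingerprint the training data and record settings, metrics, environment."""
    with open(data_path, "rb") as f:
        data_hash = hashlib.sha256(f.read()).hexdigest()
    return {
        "trained_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "training_data": {"path": data_path, "sha256": data_hash},
        "model_params": model_params,
        "validation_metrics": metrics,
        "environment": {"python": platform.python_version()},
    }

# Hypothetical usage with made-up parameters and metrics:
record = audit_record("loan_applications.csv",
                      {"algorithm": "GradientBoostingClassifier", "n_estimators": 100},
                      {"auc": 0.87})
with open("model_audit.json", "w") as f:
    json.dump(record, f, indent=2)
```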

While these steps may at first seem difficult, even overwhelming, there are enablement tools and experienced advisors who can help. DataRobot is the AI Success company, the only company that combines a robust and trustworthy AI Cloud Platform with a tried-and-proven approach to empowering you to manage your own AI strategy.

DataRobot makes algorithmic accountability and AI governance easier by:

  • offering hands-on AI Ethics workshops that teach you how to write an AI Ethics statement and how to apply ethics to your AI projects
  • automatically providing human-friendly explanations for how each model works and the decisions your AI makes
  • automatically generating detailed model documentation suitable for audit reviews and regulatory approval
  • proactively alerting you to inconsistencies between the new data you are using and the data originally used for model training

Are you looking to improve your organization’s effectiveness and reduce regulatory and reputation risk, all while providing a net benefit to society? If your AI is a black box that can’t explain the decisions it makes, then it’s time to upgrade to DataRobot. Click here to arrange a demonstration of DataRobot and see how to have AI you can trust.


About the author
Colin Priest

VP, AI Strategy, DataRobot

Colin Priest is the VP of AI Strategy for DataRobot, where he advises businesses on how to build business cases and successfully manage data science projects. Colin has held a number of CEO and general management roles, where he has championed data science initiatives in financial services, healthcare, security, oil and gas, government and marketing. Colin is a firm believer in data-based decision making and applying automation to improve customer experience. He is passionate about the science of healthcare and does pro-bono work to support cancer research.
