Humans and AI: Business Subject Matter Experts, Forests, and Trees

April 5, 2021
by Colin Priest

Gone are the days when AI projects were cool science experiments, worked on exclusively by data scientists, far away from business world realities.

A few years ago at a trade show, I asked a business manager who came to our booth if her employer hired any data scientists. She said they did have one data scientist, so I asked if their data scientist was any good. “Umm, yes, I think so,” she replied. The uncertainty in her reply piqued my interest. I wanted to know why she was so uncertain. “Well, I don’t understand anything he says or does, but he must be good because he’s got a PhD,” she explained.

Do you need a PhD to work on AI projects?

Include the C-Suite and Business Experts

Recent research shows that AI success requires much more than mathematics and coding. Considering that only one in ten companies reports significant financial benefits from implementing AI, collaboration between business subject matter experts and technical experts is critical.

According to one recent McKinsey research report:

“[S]ome of the biggest gaps between AI high performers and others aren’t only in technical areas…but also in the human aspects of AI. [H]igher performers…now realize that AI solutions typically need to be developed or adapted in close collaboration with business users to address real business needs and enable adoption, scale, and real value creation.”

Related studies have shown the benefits of C-suite involvement. According to Gartner, “companies that allocate funding for AI projects at the C-suite level are twice as likely to achieve high levels of AI maturity.” Other studies show a difference in AI success rates depending on which executive oversees the project. In “Winning With AI” in the MIT Sloan Management Review, the authors report that companies where the CEO leads AI projects have twice the success rate of companies where the CIO is in charge of AI projects. This isn’t a criticism of CIOs, but rather a reminder that AI projects require a broader frame of reference than conventional IT projects.

Business experts help focus AI projects on business value. Their practical experience is vital for:

  • Aligning AI projects with business needs 
  • Improving business processes and decision-making
  • Ensuring that AI behaviors are consistent with business rules
  • Managing organizational change, such as staff acceptance of AI systems

Business and Technical Experts Speak Different Languages

But collaboration between business experts and technical experts isn’t so easy. The C-suite and business subject matter experts don’t usually have specialized education in mathematics and computer science. Similarly, data scientists typically don’t have degrees in marketing or an MBA. Sometimes it’s like business managers and technical experts speak different languages!

Some claim that fully interpretable models are the solution, that we should constrain the complexity of algorithms so that a layperson can understand them. Others have argued that AI doesn’t need to be fully interpretable because we don’t hold human decision-making to the same standards. For example, we don’t subject a job applicant to an MRI scan to make a hiring decision. There’s now objective evidence for which of these opinions is correct.

An Experiment in Information Overload

In a series of experiments, the researchers and authors of “Manipulating and Measuring Model Interpretability” asked participants to predict apartment prices with the assistance of a machine learning model. The researchers varied the model’s complexity, the amount of disclosure about how the model worked, and the visibility of past predictions.

The results of the experiments are surprising. Participants who saw a clear model with a small number of features were better able to predict the model's output, yet increased interpretability did not change how much participants adjusted their own estimates toward the model's predictions. Furthermore, and counterintuitively, higher interpretability actually reduced participants' ability to detect model mistakes.

The researchers hypothesized that participants were overwhelmed by the internal details of the clear models, such as the mathematical formula, so they ran another experiment. Some participants were advised of potential model shortcomings due to specific missing inputs and were told to look for outliers arising from unusual apartment layouts. With this guidance, participants' ability to detect model mistakes no longer differed by the model's degree of interpretability. The remaining participants, who received no guidance, continued to be less likely to detect errors in clear models than in complex black-box models. This result confirmed that exposing the details of a model's internals, even for a clear and simple model, can cause information overload.

Full interpretability does not improve people's acceptance of AI, and it is more likely to lead to undetected errors. Business subject matter experts can experience information overload: they cannot see the forest for the trees. The solution is to show them how the AI behaves, not the model's internals.

Humans and AI Best Practices

AI projects should align with business strategy and deliver a return on investment. Because the focus of AI projects should be on business value, organizations should estimate and track the business value of each use case.

Business subject matter experts are essential for the success of your AI projects. They define the relevant ethics, regulatory rules, business rules, operational processes, and desired behaviors. 

After your new model has been trained and built, business experts perform user acceptance testing and sign off on the model behaviors’ suitability. To maximize the detection of model errors, provide business subject matter experts with intuitive explanations of AI behavior, such as plots displaying the relationship between model inputs and the AI prediction, and prediction explanations applied to selected examples.

About the author
Colin Priest

VP, AI Strategy, DataRobot

Colin Priest is the VP of AI Strategy for DataRobot, where he advises businesses on how to build business cases and successfully manage data science projects. Colin has held a number of CEO and general management roles, where he has championed data science initiatives in financial services, healthcare, security, oil and gas, government and marketing. Colin is a firm believer in data-based decision making and applying automation to improve customer experience. He is passionate about the science of healthcare and does pro-bono work to support cancer research.
