Karin Lindström
Editor

Assessing the business risk of AI bias

Feature
Jun 09, 2023
4 mins
Artificial Intelligence | CIO | Generative AI

The extent to which AI can be biased is still being understood. Assessing the potential damage is therefore a big priority as companies increasingly use AI tools for decision-making.


AI doesn’t get better than the data it’s trained on. This means that biased selection and human preferences can propagate into the AI and skew the results it produces.
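As a rough illustration of the kind of skew the paragraph above describes, the sketch below compares a model’s positive-prediction rate across demographic groups — one common first check for bias. It is a hypothetical, minimal example, not tied to any company or tool mentioned in this article.

```python
# Minimal bias check (illustrative only): does a model's rate of positive
# outcomes differ sharply between groups it was never meant to distinguish?
def positive_rate_by_group(predictions, groups):
    """Return {group: share of positive predictions} for binary predictions."""
    totals, positives = {}, {}
    for pred, group in zip(predictions, groups):
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + (1 if pred else 0)
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical lending model that approves group "A" far more often than "B".
rates = positive_rate_by_group(
    predictions=[1, 1, 1, 0, 1, 0, 0, 0],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
disparity = max(rates.values()) - min(rates.values())
print(rates)      # per-group approval rates
print(disparity)  # gap between best- and worst-treated group
```

A large gap does not prove unlawful discrimination on its own, but it is the sort of signal that regulators and monitoring pipelines look for before digging into the training data itself.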

In the US, authorities are now enforcing existing laws against discrimination caused by biased AI, and the Consumer Financial Protection Bureau is currently investigating housing discrimination stemming from biases in algorithms for lending and housing valuation.

“There is no exception in our nation’s civil rights laws for new technologies and artificial intelligence that engage in unlawful discrimination,” said its director Rohit Chopra recently on CNBC.

And many CIOs and other senior managers are aware of the problem, according to an international survey commissioned by Swedish software supplier Progress. In the survey, 56% of Swedish managers said there is definitely or probably discriminatory data in their operations today, while 62% consider it likely that such data will become a bigger problem for their business as AI and ML become more widely used.

Elisabeth Stjernstoft, CIO at Swedish energy giant Ellevio, agrees that there’s a risk of using biased data that’s not representative of the customer group or population being looked at.

“It can, of course, affect AI’s ability to make accurate predictions,” she says. “We have to look at the data on which the model is trained, but also at how the algorithms are designed and the selection of functions. The bottom line is the risk is there, so we need to monitor the models and correct them if necessary.”

Having said that, however, she’s not concerned about the AI solutions that Ellevio uses today.

“We use AI primarily to write code faster and better, so I’m not worried about that,” she says. “After the code is developed, it’s also reviewed. Machine learning is mainly used for things of a technical nature, to be able to make predictive analyses.”

However, she has encountered problems when it comes to obtaining relevant training data to predict energy load when the cold is at its most severe.

“It’s a challenge because the occasions with such cold are so rare that there is simply not much to train on,” she says.

Consider the consequences

Göran Kördel, CIO at Swedish metals company Boliden, agrees it’s vital to understand the risks that exist with biased AI.

“This is probably something that all CIOs are thinking about now, and I think it’s important we do even if we don’t know what AI will look like in a few years or how it will be used then,” he says. “We have to think about the consequences of that.”

But for Boliden, he sees no major risks in the short term.

“We mostly use image analysis where we’ve done some pilots with cameras that examine things,” he says. “I worry more about biased data when it comes to things related to humans and generative AI; about AI that produces consumer-oriented information and uses consumer data.”

Kördel also sees a risk that AI can dampen creative ability and inhibit thinking outside the box.

A thorough examination

Skandia’s CIO Johan Clausén points to the importance of evaluating needs and risk when new solutions are introduced. But for the time being, like his counterparts at Boliden and Ellevio, he doesn’t see any risks with the use of AI in his company.

“We use AI to a very limited extent, which is why I don’t see biased data as a challenge at the moment,” he says. “The external data sources that we have are reliable.”

But to guard against such problems in the future, Clausén says it’s important to determine where reliable data is essential and where it might be acceptable to use skewed data to begin with, and to set up controls based on that assessment.

These CIOs agree that the current shortage of AI competence adds to the risk. Kördel points out that a skills shortage always accompanies new technology, which is a general obstacle for all companies. At the same time, he believes companies have more time than many might think.

“With all due respect to the developing technology, the question is how quickly will we apply it,” he says. “I think it will take longer than we think to do so on a large scale.”