
4 ways to ask hard questions about emerging tech risks

Opinion
Aug 04, 2023 | 6 mins
Emerging Technology | IT Leadership | Risk Management

For too long we’ve accepted all technology as progress. Today, that comfort zone can be a detriment to the business unless tech leaders start scrutinizing technology beyond its face value.


As CIOs and technology leaders, we’re almost always in the role of the technology evangelist, bringing both mainstream and emerging technology into the organization for business benefit.

In collaboration with our peers, we have a solid business sense that carefully weighs innovation against risk in order to capture ROI while protecting the organization from the risks associated with each project.

This has served us well for many years, but the time has come to step up our risk management mindset, including for R&D and innovation-related projects, given technology’s increasing rate of adoption, scale, and impact. This is not a call to become a Luddite or tech denier, but a wake-up call to bolster your risk management mindset, especially for transformative technology, and to ask harder questions before giving the green light.

What’s new and different today?

Technology has always been used to both positive and negative effect. What’s different today is that we’re witnessing increasingly bold ambitions coupled with rapid adoption and widespread impact. With adoption curves now reaching hundreds of millions of users within a few months or even days (ChatGPT gained over 100 million monthly active users in two months, and Threads then eclipsed it by reaching 100 million users in just five days), new technology reaches global audiences in record time, before it’s fully understood.

AI is just getting started, and now is the time to ask the hard questions. The Thinkers360 AI Trust Index 2023, which measures annual sentiment among both AI end users and AI providers, found that over 75% of respondents were somewhat or very concerned about their level of trust in AI today. Ultimately, it’s a question of which organizations we trust to advance the technology, what we trust them to do on our behalf, and how we regulate its use so we can innovate as fast, and as safely, as possible.

Of course, many of these considerations have national and global implications, but as innovations begin to influence your organization, you’re also accountable to stakeholders and end users. Here are four steps that may help in your planning as you start, or continue, asking the hard questions about technology.

Start with your core values

Your organization’s core values spell out the behaviors the organization expects of itself and of all employees. These can also be a guide as to what not to do. Google’s “Don’t be evil” became Alphabet’s “Do the right thing” and was intended to guide the organization when some other organizations were less scrupulous.

This is a starting point, but we also need to examine each proposed future action and initiative, whether in-house or off-the-shelf, to explore where each good intention may lead. The common advice is to start small, with lower-complexity, lower-risk projects, and build experience before taking on larger, more impactful initiatives. You can also borrow Amazon’s technique of asking whether a decision or action is reversible. If it’s reversible, there’s clearly less risk.

Interrogate transformative technology

This means going beyond the typical business and technical questions related to a project and, where needed, asking legal and ethical questions as well. While innovation often gets non-productive pushback due to internal politics (for instance, not-invented-here syndrome), a productive type of pushback is asking probing questions: What’s the impact of mistakes? Will an AI-informed decision simply be wrong, or could it be catastrophically wrong? What level of careful piloting or real-world testing can help address the unknowns and lower the level of risk? What’s an acceptable level of risk when it comes to cybersecurity, society, and opportunity?

The work of non-profits such as the Future of Life Institute looks at transformative technology such as AI and biotechnology with the goal of steering it toward benefiting life and away from extreme large-scale risks. These organizations and others can be valuable resources to raise awareness of the risks at hand.

Establish guardrails at the organizational level

While guardrails may not be applicable to the global AI military arms race, they can be beneficial at a more granular level within specific use cases and industries. Guardrails in the form of responsible procurement practices, guidelines, targeted recommendations, and regulatory initiatives are widespread, and much is already available. Legislators are also stepping up, with the recent EU AI Act proposing different rules for different risk levels and aiming for agreement by the end of this year.

A simple guardrail at the organizational level is to craft your own corporate use policy, as well as to sign on to industry agreements as appropriate. For AI and other areas, a corporate use policy can help educate users about potential risk areas, and hence manage risk, while still encouraging innovation.

Continuously improve your risk governance

With technology advancing so rapidly, it’s important to continually monitor developments on both the innovation and risk sides of the equation and adjust accordingly. This means pivoting as needed and even pulling the plug on projects that develop major risks or concerns. Google, for instance, pulled Google Glass from the consumer market after just eight months due to mounting privacy concerns. The product later found traction in more targeted enterprise scenarios, though.

As we embrace transformative technologies like AI within our organizations, it’ll be vital to spend more time and effort on the safety and risk governance side of the innovation-risk equation to balance the growing power of the technology.

In the words of the Future of Life Institute, “Civilization only flourishes as long as we can win the race between the growing power of technology and the wisdom with which we design and manage it. With AI, the best way to win that race is not to impede the former, but to accelerate the latter by supporting AI safety research and risk governance.”

Sometimes you have to slow down to go fast.

Contributing writer

Nicholas D. Evans is the Chief Innovation Officer at WGI, a national design and professional services firm. He is the founder of Thinkers360, the world’s premier B2B thought leader and influencer marketplace as well as Innovators360.