by Douglas Merrill and Megha Sinha

Generative AI is a make-or-break moment for CIOs

Opinion
Aug 07, 2023 | 5 mins
Generative AI | IT Leadership

IT leaders must bring a pragmatic technology lens to gen AI’s potential — and its pitfalls — by optimizing their strategies for use cases first.


Hardly a day goes by without some new business-busting development in generative AI surfacing in the media. And, in fact, McKinsey research argues the future could indeed be dazzling, with gen AI improving productivity in customer support by up to 40%, in software engineering by 20% to 30%, and in marketing by 10%.

Still, it’s worth remembering that we’ve seen this movie before, with companies piling into exciting new technologies with a melee of premature experiments and pilots. CIOs and CTOs have a crucial role in avoiding those pitfalls when it comes to gen AI. They can bring a pragmatic technology lens to determine when and where gen AI can generate the greatest value — and where it is not the best option.

Doing so requires developing use cases based on a deep understanding of the unit economics of gen AI, the resources needed to capture those benefits, and the feasibility of executing the work given existing capabilities. With AI increasingly viewed as a business accelerator and disruptor, this complex equation is a challenge CIOs must get right.

Gen AI archetypes: Takers, shapers, and makers

One key question CIOs face in determining the best strategic fit for gen AI in their enterprise is whether to rent, buy, or build gen AI capabilities for their various use cases. The basic rule is to invest in creating a unique gen AI capability only when there is a proprietary advantage. We’ve found it helpful to think in terms of three archetypes:

Takers use a chat interface or an API to quickly access a commodity service via a publicly available model. Examples include GitHub Copilot, an off-the-shelf solution to generate code, or Adobe Firefly, which assists designers with image generation and editing. This archetype is the simplest, both in terms of engineering and infrastructure needs, and is generally the fastest to get up and running. However, it does not allow for integration of proprietary data and offers the fewest privacy and IP protections. And while the changes to the tech stack are minimal when simply accessing gen AI services, CIOs will still need to be ready to manage substantial adjustments to the tech architecture and to upgrade the data architecture.
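
To see how thin a taker's integration can be, consider the sketch below, which builds the request body for a single call to a hosted chat model. The endpoint URL, model name, and payload shape are invented for illustration; each real provider defines its own API and authentication.

```python
import json

# All names below are hypothetical, for illustration only; real providers
# each publish their own endpoints, auth schemes, and payload schemas.
API_URL = "https://api.example.com/v1/chat"

def build_chat_request(prompt: str, model: str = "example-model") -> dict:
    """Build the JSON body a 'taker' would POST to a hosted gen AI API."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,  # lower temperature -> more deterministic output
    }

# The actual call would be a plain HTTPS POST of this serialized body.
body = json.dumps(build_chat_request("Summarize this support ticket."))
```

The point is that no model hosting, training, or data pipeline is involved, which is why this archetype is the fastest to stand up.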


Shapers want to develop proprietary capabilities and have higher security or compliance needs. In shaper use cases, CIOs need to integrate existing gen AI models with internal data and systems so they work together seamlessly and generate customized results. One example is a model that supports sales deals by connecting gen AI tools to CRM and financial systems to incorporate customers’ prior sales and engagement history.

There are two common approaches for shapers. One is to “bring the model to the data,” hosting the model on the organization’s own infrastructure, whether on-premises or in its cloud environment. The other is to “bring the data to the model,” moving the organization’s data into the cloud environment where a hyperscaler hosts the model. In either case, CIOs need to build pipelines that connect gen AI models to internal data sources. Training or grounding a model on internal data makes its outputs that much better and more specific to company needs. Companies will also need to store far more interaction data, such as conversations with customer service agents, and continually feed large amounts of data into gen AI systems to keep them effective.
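The pipeline work described above can be sketched in miniature. The toy example below grounds a prompt in internal CRM records before it would be sent to a model; the keyword matching stands in for a real retrieval layer (for example, vector search over sales and engagement history), and all record contents and account names are invented.

```python
# Invented internal records standing in for a CRM or sales-history system.
CRM_NOTES = {
    "acme": "Acme Corp renewed in 2022; open ticket about invoice errors.",
    "globex": "Globex trialed the premium tier; churn risk flagged in Q2.",
}

def retrieve(query: str, store: dict) -> list:
    """Toy retrieval: return records sharing any word with the query."""
    words = set(query.lower().split())
    return [text for text in store.values()
            if words & set(text.lower().split())]

def build_grounded_prompt(question: str, store: dict) -> str:
    """Prepend matching internal context so the model answers from it."""
    context = "\n".join(retrieve(question, store)) or "No matching records."
    return f"Context from internal systems:\n{context}\n\nQuestion: {question}"

prompt = build_grounded_prompt("What is the churn risk here?", CRM_NOTES)
```

In a production shaper deployment the retrieval step, not the prompt assembly, carries most of the engineering weight, which is why the data pipelines matter so much.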

Makers build a foundation model from scratch. This is expensive and complex, requiring huge volumes of data, internal AI expertise, and computing power. There is a substantial one-off investment to build the model and train employees that starts at about $5 million and can run to hundreds of millions, depending on such factors as training infrastructure, model parameters, and choice of model architecture. Because of the cost and complexity, this will be the least common archetype.
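As a rough, compute-only sanity check on those figures, a common back-of-envelope approximation for transformer training is about 6 x parameters x tokens total FLOPs. The model size, token count, GPU throughput, utilization, and price below are all illustrative assumptions, and compute is only one component of a maker's total investment (data, talent, and employee training add substantially more).

```python
def training_cost_usd(params: float, tokens: float,
                      flops_per_gpu_per_s: float, utilization: float,
                      usd_per_gpu_hour: float) -> float:
    """Compute-only cost estimate: total FLOPs / effective throughput * price."""
    total_flops = 6 * params * tokens  # common transformer approximation
    gpu_seconds = total_flops / (flops_per_gpu_per_s * utilization)
    return gpu_seconds / 3600 * usd_per_gpu_hour

# Illustrative assumptions: a 70B-parameter model trained on 1.4T tokens,
# GPUs at ~3e14 FLOP/s peak, 40% utilization, $2 per GPU-hour.
cost = training_cost_usd(70e9, 1.4e12, 3e14, 0.4, 2.0)
```

Under these assumptions the compute bill alone lands in the low millions of dollars, consistent with a total maker investment that starts around $5 million once data, infrastructure, and people are added.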

Getting gen AI strategy right

Experimenting with gen AI use cases is relatively easy; scaling them up in a way that unlocks value is much more challenging. Without the right internal organization, even the most promising gen AI programs can fall short. Business processes and workflows must be redesigned, and users retrained, to take advantage of gen AI capabilities. Upgrading the enterprise technology architecture to integrate and manage gen AI models is also key, as is orchestrating how those models operate alongside existing AI and machine learning (ML) models, applications, and data sources.

The CIO’s first move should be to centralize gen AI capabilities in order to coordinate activities, build expertise, and allocate capabilities to priority initiatives. The goal of this team, which should include data engineers, MLOps engineers, and risk and legal experts, is to collaborate on building gen AI for the first few use cases. The focus should be on connecting gen AI models to internal systems, enterprise applications, and tools. Only by doing this structural work at the tech-stack level can a business move past a handful of isolated use cases to industrialized deployment that captures substantial value. The guiding principle is to manage and deploy gen AI as a foundational platform service that is ready for use by product and application teams.

In the best-case scenario, all of the above would be in place as an organization begins its gen AI journey. In the absence of such ideal conditions, CIOs should still begin developing a platform for a set of priority use cases, adapting and adding as they learn.

The buzz around gen AI is that it has the potential to transform business as we know it. Potential, though, is not certainty, or even probability. CIOs and CTOs will be on the front lines to ensure that organizations execute with strategic intent and focus, and don’t get trapped in endless, and expensive, pilot purgatory. 


Douglas Merrill is a partner in McKinsey & Company’s Southern California office.


Megha Sinha is a partner in McKinsey & Company’s Bay Area office.