Bob Lewis
Columnist

The last thing most CIOs need is an AI plan

Opinion
Feb 20, 2024 | 7 mins
Artificial Intelligence | Generative AI | IT Strategy

When it comes to your AI strategy, forget the “plans” part. A random walk will serve you better.


Sometimes knowing history (aka “remembering what happened”) can keep you out of trouble. Artificial intelligence presents a case in point. To succeed with AI, your rollout could benefit from a rear-view mirror.

Take, for example, how IT’s strategic planning process paved the way for smoothly deploying personal computers throughout the enterprise.

Oh, that’s right. It didn’t. IT steadfastly ignored the new technology. It was business users, with, surprisingly enough, accounting and financial analysts in the lead, who brought in PCs and electronic spreadsheets while IT looked the other way.

A few years later the internet happened. Many IT shops figured it was out of scope. So nobody was looking when a bunch of independent enthusiasts and marketing-department renegades figured out that HTML was pretty easy to figure out.

The next thing anyone knew, the World Wide Web was overflowing with brochureware and the brochureware was generating actual business. Marketing knocked on IT’s doors, IT figured out how to feed the web-based orders into the company’s order processing systems, and brochureware gave way to ecommerce.

And here we are, eyeball to eyeball with the next wave.

Which is AI and its exploding portfolio of capabilities. The experts are telling us to start with a plan.

Yeah, but judging from the planless successes of the PC and internet, an AI plan is just exactly what we shouldn’t waste our time on.

So I asked Dr. Yeahbut to pick apart some experts’ advice on how to define, structure, and execute an AI plan:

Expert No. 1: “Define the problem you need AI to solve.”

Dr. Yeahbut says: You don’t know enough to do this. What you should do is choose a broad class of problems to point AI at — just broad enough to pick a promising AI tool to get things rolling.

Expert No. 2: “Start small.”

Dr. Yeahbut says: Aw. I wanted to say that! Yes, start small. Then iterate.

Expert No. 3: “Leverage AI to help make data-driven decisions.”

Dr. Yeahbut says: The word “help” salvages this advice, but only barely. Explainable AI is still in its infancy. Until AIs can answer the follow-up question — “Why do you think so?” — and avoid well-known but routinely ignored traps like mistaking correlation for causation or overlooking regression toward the mean, be skeptical of the conclusions your AI reaches.

Expert No. 4: “Begin by augmenting human efforts with AI.”

Dr. Yeahbut says: I can get behind this advice, and augment it, too. When it comes to AI, organizations will face a choice between relegating humans to the care and feeding of the organization’s AIs and its AIs augmenting human capabilities. AI can lead to either of two radically different business cultures — one dehumanizing, the other empowering.

Expert No. 5: “Create plans on a per-department basis.”

Dr. Yeahbut says: No, I don’t think so. Organizations will get a lot more mileage by creating a cross-functional AI brain trust whose members try a bunch of stuff and share it with one another. Repurposing the company org chart to organize the company’s AI efforts would encourage AI-powered silo-based dysfunction and little else.

Expert No. 6: “Consider how AI can enhance productivity.”

Dr. Yeahbut says: No, no, no, no, no! Consider how AI can enhance effectiveness.

Productivity is a subset of effectiveness. It’s for assembly lines. AI-augmented humanity (above) is about applying human knowledge and judgment more effectively to complex challenges we humans can’t fully address on our own. AI-augmented humans could be more effective whether they work on an assembly line or at a desk surrounded by data.

Expert No. 7: “Focus on removing bottlenecks.”

Dr. Yeahbut says: Well, okay, maybe. Rewind to the top and start with the class of problem you want AI to address. If current-state processes suffer from bottlenecks and it’s process optimization you need, then have at it. Chicken, meet egg.

Expert No. 8: “Make sure the plan includes security controls.”

Dr. Yeahbut says: Saying “security controls” is the easy part. Figuring out how and where to deploy AI-based countermeasures to AI-based threats? That will be an order of magnitude harder.

Then our expert added: “The power of leveraging AI is the ability to turn over a level of control, allowing advanced learning techniques to see patterns and make decisions without human oversight.” This goes beyond reckless. We’re nowhere near ready for unsupervised AI, not to mention the “volitional AI” it could easily turn into.

Expert No. 9: “Find an easy problem to solve (and an easy way to solve it).”

Dr. Yeahbut says: See “Start small,” above.

Expert No. 10: “Tap into the ‘wisdom of crowds.’”

Dr. Yeahbut says: The full text of this advice suggests tapping into what your employees, partners, suppliers, and others know. And yes, I agree. Do this, and do it all the time.

It has nothing at all to do with AI, but do it anyway.

Expert No. 11: “First address needed changes to company culture.”

Dr. Yeahbut says: It would be nice if you could do this. But you can’t. Culture is “how we do things around here.” Which means AI-driven culture change can only co-evolve with the AI deployment itself. It can’t precede it because, well, how could it?

Expert No. 12: “Ensure there’s value in each anticipated use case.”

Dr. Yeahbut says: No. Don’t do this. Achieving it would require an oversight bureaucracy that knows less about the subject than the cross-functional AI brain trust we’ve already introduced (see Expert No. 5, above). It’s a cure that would be far worse than the disease.

Expert No. 13: “Outline the project’s value and ROI before implementation.”

Dr. Yeahbut says: Noooooooooooooo! Don’t do this. Or, rather, ignore the part about ROI. Insisting on a financial return ensures tactical-only efforts, an oversight bureaucracy, and more time spent justifying than you’d spend just doing.

Organizations will have to learn their way into AI success. If each project must be financially justified, only a fraction of this learning will ever happen.

Expert No. 14: “Define your business model and work from there.”

Dr. Yeahbut says: Yes, business leaders should know how their business works. If they don’t, AI is the least of the company’s problems.

Expert No. 15: “Develop a strategy that drives incremental change while respecting the human element.”

Dr. Yeahbut says: Incremental change? We’ve already covered this. Respecting the human element? Sounds good. I hope it means focusing on AI-augmented humanity.

Expert No. 16: “Include principles for ethical AI usage.”

Dr. Yeahbut says: I’m in favor of ethics. I’m less in favor of crafting a new and different ethical framework just for AI.

Think of it this way: Before phishing attacks, theft was considered unethical. After someone invented phishing attacks, theft continued to be unethical. Ethics isn’t about the tools. It’s about what you decide to use them for, and how you decided.

Is Dr. Yeahbut being too mean to this panel of experts? Mebbe so. But he’s pretty sure of one thing: It’s early in the AI game — so early that there are no experts yet.


Bob Lewis is a senior management and IT consultant, focusing on IT and business organizational effectiveness, strategy-to-action planning, and business/IT integration. And yes, of course, he is Digital. He can also be found on his blog, Keep the Joint Running.