The cloud is not a slam dunk platform for generative AI

With public cloud providers chasing generative AI, it may be a surprise when dollars flow in other directions. Vendors and customers have a lot to consider.


As I’ve been saying for the past year or so, cloud conferences have become generative AI conferences, as have data center conferences, database conferences, and you name it. It’s clearly more than just a trend—it’s a game-changing push. But we’ve seen this happen enough times in the past 30 years to know nothing is guaranteed to be a true trend. Remember “push technology”? Exactly.

As enterprises rush headlong into generative AI, selecting an appropriate infrastructure is critical for optimal performance and cost-effectiveness. Comparing cloud computing with traditional on-premises solutions reveals some notable weaknesses when cloud platforms host generative AI applications. These weaknesses may mean public cloud platforms are not a slam dunk when it comes to the best place for generative AI systems to live. Let’s explore this.

Convenience versus cost efficiency

The cloud is a better platform for generative AI when it comes to convenience. Public cloud platforms are well entrenched throughout the ecosystem of generative AI tools and development assistance, which makes building and deploying generative AI systems on public cloud platforms the “easy button” of AI.

This fact alone will make the cloud the first platform most enterprises use, considering that they are just getting smart about the use cases for genAI and how these systems should be deployed. I have focused on the cloud for most of my AI projects in the past 10 years for similar reasons. It’s just easier.

But is it more cost-efficient? History tells us that we go to the cloud for ease of deployment and scalability, but we quickly learn that cloud platforms generally cost more than on-premises analogs. Your mileage may vary, and it depends specifically on your use case. But it can be generally stated that the cloud will be a more expensive platform for generative AI, all in. This is coming from a guy who has a cloud blog, cloud podcast, cloud book, and cloud YouTube channel.
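To make that trade-off concrete, here is a minimal back-of-envelope sketch comparing cumulative cloud GPU rental against a colocated hardware purchase. Every figure below is a hypothetical placeholder, not a real quote from any provider; substitute your own rates before drawing conclusions.

```python
# Back-of-envelope break-even sketch: cloud GPU rental vs. colocated hardware.
# All figures are hypothetical placeholders -- plug in your own vendor quotes.

CLOUD_GPU_HOUR = 4.00              # hypothetical on-demand rate per GPU-hour
HOURS_PER_MONTH = 730              # average hours in a month
GPUS = 8                           # GPUs running continuously

COLO_HARDWARE_UPFRONT = 250_000.0  # hypothetical 8-GPU server purchase
COLO_MONTHLY = 4_000.0             # hypothetical rack space, power, support

def cloud_cost(months: int) -> float:
    """Cumulative cloud spend after `months` of steady, always-on usage."""
    return CLOUD_GPU_HOUR * HOURS_PER_MONTH * GPUS * months

def colo_cost(months: int) -> float:
    """Cumulative colo spend: upfront hardware plus recurring monthly fees."""
    return COLO_HARDWARE_UPFRONT + COLO_MONTHLY * months

# Find the first month where the colo option becomes cheaper overall.
break_even = next(m for m in range(1, 121) if colo_cost(m) < cloud_cost(m))
print(f"Break-even at month {break_even}")
for m in (6, 12, 24):
    print(f"month {m:>2}: cloud ${cloud_cost(m):>10,.0f}  colo ${colo_cost(m):>10,.0f}")
```

With these made-up numbers the colo option wins after roughly a year of sustained utilization, which is the usual pattern: steady, heavy workloads favor owned or rented hardware, while bursty or exploratory workloads favor the cloud’s pay-as-you-go model.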

Learning from the recent past

This does not mean that enterprises should buy or build their own data centers. Better options include colocation (colo) providers and managed services providers that rent hardware and data center space and can also operate those systems for you.

Also, you must consider the microclouds that are emerging. These are AI-focused cloud startups (often with .ai domains) that provide GPUs and TPUs as a service. They will have to charge less than the public cloud providers to compete. Thus, they may be more cost-effective for any enterprise willing to take a chance on them. It’s safe to assume that most of them will be swallowed up by the larger providers within a few years.

The lessons we have learned in the recent past are applicable here. Public clouds are good but come at a cost that many enterprises find less than helpful—roughly 2.5 times higher than what they expected. That number is anecdotal, but in my experience it is pretty accurate.

Of course, most of these cost overruns are self-inflicted wounds. Many enterprises moved workloads into the cloud expecting to modernize them at some point so they would burn less money. They never did, and now some of them are being returned to on-premises systems. Generative AI systems will mostly be net-new, so these types of “kicking the can down the road” mistakes should not occur.

What to consider?

Of course, there are other issues besides cost. Security comes to mind. Housing sensitive data in the cloud raises security apprehensions, as cloud providers may not offer the same level of security as on-premises setups. Certain industries have specific regulatory demands concerning data storage and processing.

Some of this is perception versus reality. In many cases, public cloud providers can provide better security than on-premises. However, some use cases involve very sensitive data and knowledge models that are a bet-the-business situation if that data were to be lost. Many enterprises therefore insist on keeping data and AI models in-house.

Moreover, cloud infrastructure can introduce latency due to data transmission to remote processing locations, and the distributed nature of cloud setups may surface data privacy concerns. Also, accessing cloud services necessitates a stable internet connection for seamless operations. Outages can disrupt service availability, impacting operational continuity.

Finally, hybrid cloud scenarios may encounter challenges in properly structuring data for multiple platforms and managing various capabilities across different environments. Managing synchronization processes and ensuring data consistency can prove complex in a distributed data setting, which is what cloud computing is.

All this means that the people who see the cloud as the only platform for generative AI systems have not yet figured out the bill. I suspect that a few years in the cloud, millions paid in cloud infrastructure fees, and the fact that hardware is now cheap will drive many enterprises back to traditional data centers for generative AI.

In response, I think many cloud providers will temporarily lower their prices in hopes of locking in large enterprise players and then raise them later. They have invested billions to get solidly into the generative AI space and eventually will have to recoup their investment.

We all have decisions to make about what will return the most value to our respective businesses. I see the battle lines being drawn now. May the platform that returns the most value win.

Copyright © 2024 IDG Communications, Inc.