
Exafunction aims to reduce AI dev costs by abstracting away hardware

Image Credits: v_alex / Getty Images

The most sophisticated AI systems today are capable of impressive feats, from directing cars through city streets to writing human-like prose. But they share a common bottleneck: hardware. Developing systems on the bleeding edge often requires a huge amount of computing power. For example, creating DeepMind’s protein structure-predicting AlphaFold took a cluster of hundreds of GPUs. Further underlining the challenge, one source estimates that developing AI startup OpenAI’s language-generating GPT-3 system using a single GPU would’ve taken 355 years.

New techniques and chips designed to accelerate certain aspects of AI system development promise to (and, indeed, already have) cut hardware requirements. But developing with these techniques calls for expertise that can be tough for smaller companies to come by. At least, that’s the assertion of Varun Mohan and Douglas Chen, the co-founders of infrastructure startup Exafunction. Emerging from stealth today, Exafunction is developing a platform to abstract away the complexity of using hardware to train AI systems.

“Improvements [in AI] are often underpinned by large increases in … computational complexity. As a consequence, companies are forced to make large investments in hardware to realize the benefits of deep learning. This is very difficult because the technology is improving so rapidly, and the workload size quickly increases as deep learning proves value within a company,” Chen told TechCrunch in an email interview. “The specialized accelerator chips necessary to run deep learning computations at scale are scarce. Efficiently using these chips also requires esoteric knowledge uncommon among deep learning practitioners.”

With $28 million in venture capital, $25 million of which came from a Series A round led by Greenoaks with participation from Founders Fund, Exafunction aims to address what it sees as the symptom of the expertise shortage in AI: idle hardware. GPUs and the aforementioned specialized chips used to “train” AI systems — i.e., fed the data that the systems use to make predictions — are frequently underutilized. Because they complete some AI workloads so quickly, they sit idle while they wait for other components of the hardware stack, like processors and memory, to catch up.
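The idle-hardware problem described above can be made concrete with a back-of-the-envelope model (an illustrative sketch, not Exafunction’s method): if a GPU finishes each training step quickly but then waits on CPU-side data loading and memory transfers, its utilization is simply the ratio of compute time to total step time.

```python
# Back-of-the-envelope model of GPU utilization in a training loop.
# All numbers are illustrative, not measurements of any real system.

def gpu_utilization(compute_ms: float, wait_ms: float) -> float:
    """Fraction of each training step the GPU spends computing
    rather than idling while CPU-side work (data loading,
    preprocessing, memory transfers) catches up."""
    return compute_ms / (compute_ms + wait_ms)

# A GPU that needs 20 ms per batch but waits 120 ms for the
# rest of the pipeline is busy only ~14% of the time -- in line
# with the sub-15% utilization figures cited below.
util = gpu_utilization(compute_ms=20, wait_ms=120)
print(f"utilization: {util:.0%}")  # → utilization: 14%
```

Raising utilization therefore means shrinking the wait term, by overlapping data loading with compute or by sharing the GPU across workloads, rather than buying faster chips.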

Lukas Biewald, the founder of AI development platform Weights & Biases, reports that nearly a third of his company’s customers average less than 15% GPU utilization. Meanwhile, in a 2021 survey commissioned by Run:AI, which competes with Exafunction, just 17% of companies said that they were able to achieve “high utilization” of their AI resources, while 22% said that their infrastructure mostly sits idle.

The costs add up. According to Run:AI, 38% of companies had an annual budget for AI infrastructure — including hardware, software and cloud fees — exceeding $1 million as of October 2021. OpenAI is estimated to have spent $4.6 million training GPT-3.

“Most companies operating in deep learning go into business so they can focus on their core technology, not to spend their time and bandwidth worrying about optimizing resources,” Mohan said via email. “We believe there is no meaningful competitor that addresses the problem that we’re focused on, namely, abstracting away the challenges of managing accelerated hardware like GPUs while delivering superior performance to customers.”

Seed of an idea

Prior to co-founding Exafunction, Chen was a software engineer at Facebook, where he helped build the tooling for devices like the Oculus Quest. Mohan was a tech lead at autonomous delivery startup Nuro, responsible for managing the company’s autonomy infrastructure teams.

“As our deep learning workloads [at Nuro] grew in complexity and demandingness, it became apparent that there was no clear solution to scale our hardware accordingly,” Mohan said. “Simulation is a weird problem. Perhaps paradoxically, as your software improves, you need to simulate even more iterations in order to find corner cases. The better your product, the harder you have to search to find fallibilities. We learned how difficult this was the hard way and spent thousands of engineering hours trying to squeeze more performance out of the resources we had.”

Image Credits: Exafunction

Exafunction customers connect to the company’s managed service or deploy Exafunction’s software in a Kubernetes cluster. The technology dynamically allocates resources, moving computation onto “cost-effective hardware” such as spot instances when available.
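Moving computation onto “cost-effective hardware” when available is, at bottom, a scheduling decision. A minimal sketch of one such policy (hypothetical; Exafunction has not disclosed how its scheduler works) prefers discounted spot capacity whenever the provider has it and falls back to on-demand instances otherwise:

```python
# Hypothetical sketch of a cost-aware instance picker. The offer
# types and prices are made up for illustration; this is not
# Exafunction's actual scheduling logic.
from dataclasses import dataclass

@dataclass
class InstanceOffer:
    kind: str           # "spot" or "on_demand"
    hourly_cost: float  # USD per GPU-hour
    available: bool

def pick_instance(offers: list[InstanceOffer]) -> InstanceOffer:
    """Choose the cheapest available offer, so workloads land on
    discounted spot capacity whenever it exists."""
    candidates = [o for o in offers if o.available]
    if not candidates:
        raise RuntimeError("no GPU capacity available")
    return min(candidates, key=lambda o: o.hourly_cost)

offers = [
    InstanceOffer("spot", 0.90, available=True),
    InstanceOffer("on_demand", 3.06, available=True),
]
print(pick_instance(offers).kind)  # → spot
```

A production scheduler would also have to checkpoint and migrate work when a spot instance is reclaimed, which is where the real engineering difficulty lies.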

Mohan and Chen demurred when asked about the Exafunction platform’s inner workings, preferring to keep those details under wraps for now. But they explained that, at a high level, Exafunction leverages virtualization to run AI workloads even with limited hardware availability, ostensibly leading to better utilization rates while lowering costs.

Exafunction’s reticence to reveal information about its technology — including whether it supports cloud-hosted accelerator chips like Google’s tensor processing units (TPUs) — is cause for some concern. But to allay doubts, Mohan, without naming names, said that Exafunction is already managing GPUs for “some of the most sophisticated autonomous vehicle companies and organizations at the cutting edge of computer vision.”

“Exafunction provides a platform that decouples workloads from acceleration hardware like GPUs, ensuring maximally efficient utilization — lowering costs, accelerating performance, and allowing companies to fully benefit from hardware … [The] platform lets teams consolidate their work on a single platform, without the challenges of stitching together a disparate set of software libraries,” he added. “We expect that [Exafunction’s product] will be profoundly market-enabling, doing for deep learning what AWS did for cloud computing.”

Growing market

Mohan might have grandiose plans for Exafunction, but the startup isn’t the only one applying the concept of “intelligent” infrastructure allocation to AI workloads. Beyond Run:AI — whose product also creates an abstraction layer to optimize AI workloads — Grid.ai offers software that allows data scientists to train AI models across hardware in parallel. For its part, Nvidia sells AI Enterprise, a suite of tools and frameworks that lets companies virtualize AI workloads on Nvidia-certified servers. 

But Mohan and Chen see a massive addressable market despite the crowdedness. In conversation, they positioned Exafunction’s subscription-based platform not only as a way to bring down barriers to AI development but to enable companies facing supply chain constraints to “unlock more value” from hardware on hand. (In recent years, for a range of different reasons, GPUs have become hot commodities.) There’s always the cloud, but, to Mohan’s and Chen’s point, it can drive up costs. One estimate found that training an AI model using on-premises hardware is up to 6.5x cheaper than the least costly cloud-based alternative.
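The cited cost gap translates into concrete numbers easily (the dollar figures below are hypothetical; only the 6.5x ratio comes from the estimate above):

```python
# Illustrative arithmetic for the "up to 6.5x cheaper" claim.
# The cloud bill is a made-up figure; only the ratio is sourced.

cloud_training_cost = 650_000            # hypothetical cloud bill, USD
on_prem_ratio = 6.5                      # on-prem up to 6.5x cheaper
on_prem_training_cost = cloud_training_cost / on_prem_ratio
print(f"on-prem: ${on_prem_training_cost:,.0f}")  # → on-prem: $100,000
```

At that spread, a company training even a handful of large models per year has a clear incentive to squeeze more out of hardware it already owns.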

“While deep learning has virtually endless applications, two of the ones we’re most excited about are autonomous vehicle simulation and video inference at scale,” Mohan said. “Simulation lies at the heart of all software development and validation in the autonomous vehicle industry … Deep learning has also led to exceptional progress in automated video processing, with applications across a diverse range of industries. [But] though GPUs are essential to autonomous vehicle companies, their hardware is frequently underutilized, despite their price and scarcity. [Computer vision applications are] also computationally demanding, [because] each new video stream effectively represents a firehose of data — with each camera outputting millions of frames per day.”
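Mohan’s “millions of frames per day” figure checks out with simple arithmetic: a single camera at a typical capture rate produces well over two million frames every 24 hours (the 30 fps rate is an assumption for illustration, not a number from the interview).

```python
# Why each camera stream is a "firehose": frames produced per day
# at an assumed 30 fps capture rate.

fps = 30
seconds_per_day = 24 * 60 * 60           # 86,400 seconds
frames_per_day = fps * seconds_per_day
print(f"{frames_per_day:,} frames/day")  # → 2,592,000 frames/day
```

Multiply that by the half-dozen or more cameras on a typical autonomous vehicle and the per-vehicle inference load quickly reaches tens of millions of frames per day.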

Mohan and Chen say that the capital from the Series A will be put toward expanding Exafunction’s team and “deepening” the product. The company will also invest in optimizing AI system runtimes “for the most latency-sensitive applications” (e.g., autonomous driving and computer vision).

“While currently we are a strong and nimble team focused primarily on engineering, we expect to rapidly build the size and capabilities of our org in 2022,” Mohan said. “Across virtually every industry, it is clear that as workloads grow more complex (and a growing number of companies wish to leverage deep-learning insights), demand for compute is vastly exceeding [supply]. While the pandemic has highlighted these concerns, this phenomenon, and its related bottlenecks, is poised to grow more acute in the years to come, especially as cutting-edge models become exponentially more demanding.”
