
When it comes to large language models, should you build or buy?


Image: Gift bags for guests at a child's party, decorated to look like robot heads.
Image Credits: Jenny Dettrick / Getty Images

Tanmay Chopra

Contributor

Tanmay Chopra works in machine learning at AI search startup Neeva, where he wrangles language models large and small. Previously, he oversaw the development of ML systems globally to counter violence and extremism on TikTok.

Last summer could only be described as an “AI summer,” especially with large language models making an explosive entrance. We saw huge neural networks, trained on massive corpora of data, accomplish exceedingly impressive tasks, none more famous than OpenAI’s GPT-3 and its newer, hyped offspring, ChatGPT.

Companies of all shapes and sizes across industries are rushing to figure out how to incorporate and extract value from this new technology. But OpenAI’s business model has been no less transformative than its contributions to natural language processing. Unlike almost every previous release of a flagship model, this one does not come with open-source pretrained weights — that is, machine learning teams cannot simply download the models and fine-tune them for their own use cases.

Instead, they must either pay to use the models as-is, or pay to fine-tune them and then pay four times the as-is usage rate to employ them. Of course, companies can still opt for peer open-source models instead.

This has given rise to an age-old corporate — but entirely new to ML — question: Would it be better to buy or build this technology?

It’s important to note that there is no one-size-fits-all answer to this question; I’m not trying to provide a catch-all answer. I mean to highlight pros and cons of both routes and offer a framework that might help companies evaluate what works for them while also providing some middle paths that attempt to include components of both worlds.

Buying: Fast, but with clear pitfalls

Let’s start with buying. There are a whole host of model-as-a-service providers that offer custom models as APIs, charging per request. This approach is fast, reliable and requires little to no upfront capital expenditure. Effectively, this approach de-risks machine learning projects, especially for companies entering the domain, and requires limited in-house expertise beyond software engineers.

Projects can be kicked off without requiring experienced machine learning personnel, and the model outcomes can be reasonably predictable, given that the ML component is being purchased with a set of guarantees around the output.
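To make the “buy” path concrete, here is a minimal sketch of what consuming a model-as-a-service looks like, assuming the pre-1.0 OpenAI Python client that was current when GPT-3 was sold this way. The task, prompt and model name are illustrative; any hosted provider's completion endpoint would follow the same shape.

```python
import os

import openai  # pip install "openai<1.0"; the provider handles training, serving and scaling

openai.api_key = os.environ["OPENAI_API_KEY"]


def summarize(text: str) -> str:
    """Send a prompt to a hosted completion endpoint and return the generated text."""
    response = openai.Completion.create(
        model="text-davinci-003",  # a hosted GPT-3-family model
        prompt=f"Summarize the following text in one sentence:\n\n{text}\n\nSummary:",
        max_tokens=100,
        temperature=0.2,
    )
    return response["choices"][0]["text"].strip()


print(summarize("Large language models are huge neural networks trained on massive corpora of text."))
```

The entire ML lifecycle sits behind that single network call, which is exactly what makes this route fast to ship and, as discussed below, hard to differentiate on.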

Unfortunately, this approach comes with very clear pitfalls, primary among which is limited product defensibility. If you're buying a model anyone can purchase and integrating it into your systems, it's not too far-fetched to assume your competitors can achieve product parity just as quickly and reliably. That will be true unless you can create an upstream moat through non-replicable data-gathering techniques or a downstream moat through integrations.

What's more, for high-throughput solutions, this approach can prove exceedingly expensive at scale. For context, OpenAI's DaVinci costs $0.02 per thousand tokens. Conservatively assuming 250 tokens per request and a similar-sized response, you're paying about $0.01 per request. For a product with 100,000 requests per day, that works out to roughly $365,000 a year. Text-heavy applications (say, generating articles or carrying on a chat) would cost even more.
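The arithmetic is simple enough to sanity-check yourself. The snippet below just restates the figures above as code; treat every constant as an assumption to be swapped for your own traffic and pricing.

```python
# Back-of-the-envelope API cost estimate using the figures cited above.
PRICE_PER_1K_TOKENS = 0.02   # OpenAI DaVinci pricing at the time of writing
TOKENS_PER_REQUEST = 250     # conservative prompt size
TOKENS_PER_RESPONSE = 250    # similar-sized response
REQUESTS_PER_DAY = 100_000

cost_per_request = (TOKENS_PER_REQUEST + TOKENS_PER_RESPONSE) / 1000 * PRICE_PER_1K_TOKENS
annual_cost = cost_per_request * REQUESTS_PER_DAY * 365

print(f"${cost_per_request:.2f} per request")  # $0.01
print(f"${annual_cost:,.0f} per year")         # $365,000
```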

You must also account for the limited flexibility tied to this approach: You either use models as-is or pay significantly more to fine-tune them. It is worth remembering that the latter approach would involve an unspoken “lock-in” period with the provider, as fine-tuned models will be held in their digital custody, not yours.

Building: Flexible and defensible, but expensive and risky

On the other hand, building your own tech allows you to circumvent some of these challenges.

In most cases, “building” refers to leveraging and fine-tuning open-source backbones, not building from scratch (although that also has its place). This approach grants you far greater flexibility in everything from modifying model architectures to reducing serving latency through distillation and quantization.
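As a taste of that flexibility, here is a hedged sketch of shrinking an open-source backbone for cheaper CPU serving with PyTorch's post-training dynamic quantization. The checkpoint (DistilBERT, itself a distilled model) and the classification task are stand-ins for whatever backbone you actually fine-tune in-house.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load an open-source backbone; in practice this would be your fine-tuned checkpoint.
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Post-training dynamic quantization: Linear-layer weights are stored as int8,
# which typically shrinks the model and cuts CPU inference latency for a small accuracy cost.
quantized = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)

inputs = tokenizer("Quantized models are cheaper to serve.", return_tensors="pt")
with torch.no_grad():
    logits = quantized(**inputs).logits
print(logits.shape)
```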

It’s worth remembering that while purchased models might be impressive at many tasks, models trained in-house may well achieve sizable performance improvements on a specific task or domain. At scale, these models are much cheaper to deploy and can lead to the development of significantly defensible products that can take competitors much longer to replicate.

The most prominent example of this is the TikTok recommendation algorithm. Despite many of its details being publicly available in various research papers, even massive ML teams at its competitors have yet to replicate and deploy a similarly effective system.

Of course, there are no free lunches: Developing, deploying and maintaining elaborate machine learning systems in-house requires data engineering, machine learning and DevOps expertise, all of which are scarce and highly sought-after. Obviously, that requires high upfront investment.

The success of machine learning projects is also less predictable when you’re building them in-house, and some estimates put the likelihood of success at around the 20% mark. This may prolong the time-to-market.

All in all, while building looks extremely attractive in the long run, it requires leadership with a strong appetite for risk over an extended time period as well as deep coffers to back said appetite.

The middle road

That said, there are middle-ground approaches that attempt to balance these positives and negatives. The first and most often discussed is prompt engineering.

This approach starts with buying and then building a custom input template that, in some sense, replaces fine-tuning. It aims to guide the off-the-shelf model with clear examples or instructions, creating a middling level of defensibility in the form of custom prompts while retaining the benefits of buying.
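A minimal sketch of that "built" artifact follows. The task and few-shot examples are invented placeholders; the filled-in string is simply sent to the purchased model in place of a bare prompt.

```python
# A custom few-shot prompt template: in this middle path, the prompt is the asset you build.
FEW_SHOT_TEMPLATE = """Classify the sentiment of each customer review as Positive or Negative.

Review: "The checkout flow was seamless and support replied within minutes."
Sentiment: Positive

Review: "The app crashed twice and I lost my draft."
Sentiment: Negative

Review: "{review}"
Sentiment:"""


def build_prompt(review: str) -> str:
    """Fill the template; the result goes to an off-the-shelf model as-is."""
    return FEW_SHOT_TEMPLATE.format(review=review)


print(build_prompt("Setup took five minutes and everything just worked."))
```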

Another way is to seek out open-source alternative backbones of roughly equivalent quality and build atop them. This reduces upfront costs and mitigates the unpredictability of outcomes to some extent, while retaining the flexibility that building offers. For example, GPT-J and GPT-Neo are two open-source alternatives to GPT-3.
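Loading one of those backbones yourself is only a few lines with the Hugging Face transformers library, sketched below. The prompt and generation settings are illustrative, and larger checkpoints such as GPT-J-6B need correspondingly larger hardware.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Swap a hosted API for an open-source backbone you host and control yourself.
model_name = "EleutherAI/gpt-neo-1.3B"  # GPT-J would be "EleutherAI/gpt-j-6B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "In one sentence, explain why companies fine-tune language models:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```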

A slightly more intricate and newer approach is closed-source approximation. This involves training an in-house model that aims to minimize the difference between GPT-3's outputs and its own, either at the final output or at an earlier embedding stage. This reduces time-to-market by leveraging GPT-3 in the short term and then transitioning to the in-house system as its quality improves, unlocking cost optimization and defensibility over the long term.
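A rough sketch of the "terminal" variant, under the assumption that you imitate cached GPT-3 completions with a small open-source student model, might look like the following; the dataset, checkpoint size and single-pass training loop are deliberately toy-scale placeholders.

```python
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

# Cached (prompt, GPT-3 completion) pairs, gathered while you are still "buying".
teacher_data = [
    ("Summarize: The meeting moved to Tuesday.", "The meeting is now on Tuesday."),
]

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
tokenizer.pad_token = tokenizer.eos_token
student = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")


class ImitationDataset(Dataset):
    """Causal-LM fine-tuning on the teacher's text, so the student's output drifts toward GPT-3's."""

    def __init__(self, pairs):
        self.examples = [p + " " + c + tokenizer.eos_token for p, c in pairs]

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        enc = tokenizer(self.examples[idx], truncation=True, max_length=128,
                        padding="max_length", return_tensors="pt")
        input_ids = enc["input_ids"].squeeze(0)
        attention_mask = enc["attention_mask"].squeeze(0)
        labels = input_ids.clone()
        labels[attention_mask == 0] = -100  # ignore padding in the loss
        return {"input_ids": input_ids, "attention_mask": attention_mask, "labels": labels}


loader = DataLoader(ImitationDataset(teacher_data), batch_size=1, shuffle=True)
optimizer = torch.optim.AdamW(student.parameters(), lr=5e-5)

student.train()
for batch in loader:  # one pass over the toy dataset
    loss = student(**batch).loss  # cross-entropy against the teacher's completion
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(f"imitation loss: {loss.item():.3f}")
```

The embedding-stage variant would instead minimize a distance (say, mean squared error) between the student's internal representations and embeddings fetched from the provider; the overall loop looks the same.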

Still confused about which way to go? Here’s a three-question quiz:

Are you currently an AI business?

If yes, you’ll need to build to maintain defensibility.

If you're not, buy for now and use prompt engineering to tailor the model to your use cases.

If you want to be an AI business, work toward that over time: store data cleanly, start building an ML team and identify monetizable use cases.

Is your use case addressed by existing pre-trained models?

Can you buy without putting in much additional work? If so, you should probably shell out the cash, especially if time-to-market is a factor.

Building is not fast, easy or cheap. This is especially true if your use case is non-monetizable or you need a model for internal use.

Do you have unpredictable or extremely high request volume?

If yes, buying might not be economically feasible, especially in a consumer setting. That said, be realistic: quantify your request volume and buying costs to whatever extent possible, as in the rough sketch below. Building can be deceptively expensive, especially because you'll need to hire ML engineers, buy tooling and pay for hosting.
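A rough break-even sketch along those lines might look like the following. Every figure is a placeholder assumption; swap in your own request volume, API pricing and team and hosting costs.

```python
# Crude buy-vs-build comparison; all constants are hypothetical placeholders.
API_COST_PER_REQUEST = 0.01         # from the DaVinci estimate earlier in the piece
REQUESTS_PER_DAY = 100_000

IN_HOUSE_TEAM_PER_YEAR = 600_000    # hypothetical: ML engineers and tooling
IN_HOUSE_HOSTING_PER_YEAR = 80_000  # hypothetical: GPU serving infrastructure

buy_per_year = API_COST_PER_REQUEST * REQUESTS_PER_DAY * 365
build_per_year = IN_HOUSE_TEAM_PER_YEAR + IN_HOUSE_HOSTING_PER_YEAR

print(f"Buy:   ${buy_per_year:,.0f}/year")
print(f"Build: ${build_per_year:,.0f}/year")
print("Buying is cheaper at this volume" if buy_per_year < build_per_year
      else "Building pays off at this volume")
```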

Hopefully, this helps you kick off your journey!
