Startups

Building better startups with responsible AI

Tom Zick

Contributor

Tom Zick is a researcher in AI ethics at the Berkman Klein Center for Internet and Society at Harvard University, where she is also a J.D. candidate. She holds a Ph.D. from UC Berkeley and was previously a fellow at Bloomberg Beta and the City of Boston.

Founders tend to think responsible AI practices are challenging to implement and may slow the progress of their business. They often jump to mature examples like Salesforce’s Office of Ethical and Humane Use and conclude that the only way to avoid creating a harmful product is to build a big team. The truth is much simpler.

I set out to learn how founders were thinking about responsible AI on the ground by speaking with a handful of successful early-stage founders, and I found that many of them were already implementing responsible AI practices.

Only they didn’t call it that. They just called it “good business.”

It turns out, simple practices that make business sense and result in better products will go a long way toward reducing the risk of unforeseen societal harms. These practices rely on the insight that people, not data, are at the heart of deploying an AI solution successfully. If you account for the reality that humans are always in the loop, you can build a better business, more responsibly.

Think of AI as a bureaucracy. Like a bureaucracy, AI relies on having some general policy to follow (“the model”) that makes reasonable decisions in most cases. However, this general policy can never account for all possible scenarios a bureaucracy will need to handle — much like an AI model cannot be trained to anticipate every possible input.

When these general policies (or models) fail, those who are already marginalized are disproportionately impacted (a classic algorithmic example is Somali immigrants being flagged for fraud because of their community’s atypical shopping patterns).

Bureaucracies work to solve this problem with “street-level bureaucrats” like judges, DMV agents and even teachers, who can handle unique cases or decide not to enforce the policy. For example, teachers can waive a course prerequisite given extenuating circumstances, or judges can be more or less lenient in sentencing.

Since any AI will inevitably fail, we must, as with a bureaucracy, keep humans in the loop and design with them in mind. As one founder told me, “If I were a Martian coming to Earth for the first time, I would think: Humans are processing machines — I should use them.”

Whether the humans are operators augmenting the AI system by stepping in when it’s uncertain, or users choosing whether to reject, accept or manipulate a model outcome, these people determine how well any AI-based solution will work in the real world.
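To make the operator case concrete, here is a minimal sketch of that pattern in Python, assuming a model that exposes a confidence score alongside each prediction. The threshold value, the stand-in `predict` function and the `ask_human` review step are all illustrative assumptions, not any particular product’s API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    label: str
    confidence: float
    decided_by: str  # "model" or "human"

def decide(features: dict,
           predict: Callable[[dict], tuple[str, float]],
           ask_human: Callable[[dict, str], str],
           threshold: float = 0.9) -> Decision:
    """Act on the model's output only when it is confident; otherwise escalate."""
    label, confidence = predict(features)
    if confidence >= threshold:
        return Decision(label, confidence, decided_by="model")
    # Below the threshold, a person makes the call; the model's guess is
    # shown as context rather than treated as the answer.
    return Decision(ask_human(features, label), confidence, decided_by="human")

if __name__ == "__main__":
    predict = lambda f: ("approve", 0.62)   # stand-in model
    ask_human = lambda f, guess: "deny"     # stand-in review queue
    print(decide({"amount": 900}, predict, ask_human))
```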

Here are five practical suggestions that founders of AI companies shared for keeping, and even harnessing, humans in the loop to build a more responsible AI that’s also good for business:

Introduce only as much AI as you need

Today, many companies plan to launch some services with an end-to-end AI-driven process. When those processes struggle to function under a wide range of use cases, the people who are most harmed tend to be those already marginalized.

In trying to diagnose failures, founders subtract one component at a time, still hoping to automate as much as possible. They should consider the opposite: introducing one AI component at a time.

Many processes are — even with all the wonders of AI — still just less expensive and more reliable to run with humans in the loop. If you build an end-to-end system with many components coming online at once, you may find it hard to identify which are best suited to AI.

Many founders we spoke with view AI as a way to take the most time-consuming, low-stakes tasks in their system off humans’ plates, and they started with fully human-run systems to identify which tasks were worth automating.

This “AI second” approach also enables founders to enter fields where data is not immediately available. The people who operate parts of a system also create the very data you’ll need to automate those tasks. One founder told us that, without the advice to introduce AI gradually, and only when it was demonstrably more accurate than an operator, they would have never gotten off the ground.
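Here is a hedged sketch of that “AI second” pattern, under assumptions of my own: every human-handled case is logged as a labeled training example, and a task is handed to a model only once it measurably beats the operators on held-out cases. The file format, the 2% margin and both function names are hypothetical.

```python
import csv
import json

def log_human_decision(path: str, case_id: str, features: dict, label: str) -> None:
    # Every case a human operator handles doubles as a training example.
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([case_id, json.dumps(features), label])

def ready_to_automate(model_accuracy: float,
                      operator_accuracy: float,
                      margin: float = 0.02) -> bool:
    # Flip a task from humans to the model only with a clear, measured
    # accuracy edge on held-out cases (the margin is an arbitrary choice).
    return model_accuracy >= operator_accuracy + margin
```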

Create some friction

Many founders believe that to be successful, a product must run out of the box, with as little user input as possible.

Because AI is typically used to automate part of an existing workflow — complete with associated preconceptions on how much to trust that workflow output — a perfectly seamless approach can be catastrophic.

For example, when an ACLU audit showed that Amazon’s facial recognition tool misidentified 28 members of Congress (a disproportionately large fraction of whom were Black) as criminals, lax default settings were at the heart of the problem. The confidence threshold out of the box was set to only 80%, clearly the wrong setting if a user takes a positive result at face value.

Motivating users to engage with a product’s strengths and weaknesses before deploying it can offset the potential for harmful assumption mismatches. It can also make customers happier with eventual product performance.
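One way to build that friction directly into an API, sketched here under assumptions of my own: the function refuses to run with an implicit default, so the user must confront the accuracy trade-off before deploying. The `MatchResult` type and `find_matches` function are hypothetical, not Amazon’s interface.

```python
from dataclasses import dataclass

@dataclass
class MatchResult:
    identity: str
    similarity: float

def find_matches(candidates: list[MatchResult],
                 similarity_threshold: float | None = None) -> list[MatchResult]:
    # No silent default: forcing an explicit, documented choice is the point.
    if similarity_threshold is None:
        raise ValueError(
            "Set similarity_threshold explicitly. A lax setting such as 0.80 "
            "invites false matches in high-stakes uses; choose deliberately."
        )
    return [c for c in candidates if c.similarity >= similarity_threshold]
```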

One founder we spoke with found that customers ultimately used their product more effectively if they had to customize it before use. He views this as a central component of a “design-first” approach and found it helped users play to the strengths of the product on a context-specific basis. While this approach required more upfront time to get going, it ended up translating into revenue gains for customers.

Give context, not answers

Many AI-based solutions focus on providing an output recommendation. Once these recommendations are made, they have to be acted on by humans.

Without context, poor recommendations could be blindly followed, causing downstream harm. Similarly, great recommendations could be rejected if the humans in the loop do not trust the system and lack context.

Rather than delegating decisions away from users, consider giving them the tools to make decisions. This approach harnesses the power of humans in the loop to identify problematic model outputs while securing the user buy-in necessary for a successful product.

One founder shared that when their AI made direct recommendations, users didn’t trust it. Customers were happy with the model’s measured accuracy, but individual users simply ignored the recommendations. So the founder nixed the recommendation feature and instead used the model to augment the resources informing a user’s decision (e.g., this procedure is like these five past procedures, and here is what worked). This led to increased adoption rates and revenue.
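A minimal sketch of that “context, not answers” pattern, with assumptions of my own: instead of emitting a single recommendation, the system surfaces the nearest past cases and their outcomes for the user to weigh. The feature vectors, plain Euclidean distance and the `similar_cases` name are all illustrative.

```python
import math

def similar_cases(query: list[float],
                  history: list[tuple[list[float], str]],
                  k: int = 5) -> list[str]:
    """Return the outcomes of the k most similar past cases."""
    def dist(a: list[float], b: list[float]) -> float:
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    ranked = sorted(history, key=lambda case: dist(query, case[0]))
    return [outcome for _, outcome in ranked[:k]]

# e.g., similar_cases(new_procedure_features, past_procedures) might return
# ["healed in 6 weeks", "needed follow-up", ...] for the user to weigh.
```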

Consider your not-users and not-buyers

It is a known problem in enterprise tech that products can easily serve the CEO and not the end users. This is even more problematic in the AI space, where a solution is often part of a greater system that interfaces with a few direct users and many more indirect ones.

Take, for example, the controversy that arose when Starbucks began using automated scheduling software to assign shifts. The scheduler optimized for efficiency, completely disregarding working conditions. After a successful labor petition and a high-profile New York Times article, the baristas’ input was taken into consideration, improving morale and productivity.

Instead of taking a customer literally on what they ask you to solve, consider mapping out all of the stakeholders involved and understanding their needs before you decide what your AI will help optimize. That way, you will avoid inadvertently making a product that is needlessly harmful and possibly find an even better business opportunity.

One founder we spoke with took this approach to heart, camping out next to their users to understand their needs before deciding what to optimize their product for. They followed this up by meeting with both customers and union representatives to figure out how to make a product that worked for both.

While customers originally wanted a product that would allow each user to take on a greater workload, these conversations revealed an opportunity to unlock savings for their customers by optimizing the existing workload.

This insight allowed the founder to develop a product that empowered the humans in the loop and saved management more money than the solution they thought they wanted would have.

Be clear on what’s AI theater

If you limit the degree to which you hype up what your AI can do, you can both avoid irresponsible consequences and sell your product more effectively.

Yes, the hype around AI helps sell products. However, knowing how to keep those buzzwords from getting in the way of precision is crucial. While talking up the autonomous capabilities of your product might be good for sales, it can backfire if you apply that rhetoric indiscriminately.

For example, one of the founders we spoke to found that playing up the power of their AI also increased their customers’ privacy concerns. This concern persisted even when the founders explained that the portions of the product in question did not rely on data, but rather on human judgment.

Language choice can help align expectations and build trust in a product. Rather than using the language of autonomy with their users, some of the founders we talked to found that words like “augment” and “assist” were more likely to inspire adoption. This “AI as a tool” framing was also less likely to engender the blind trust that can lead to bad outcomes down the line. Being clear can both dissuade overconfidence in AI and help you sell.

These are some practical lessons learned by real founders for mitigating the risk of unforeseen harms from AI and creating more successful products built for the long term. We also believe there’s an opportunity for new startups to build services that help make it easier to create ethical AI that’s also good for business. So here are a couple of requests for startups:

  • Engage humans in the loop: We need startups that solve the “human in the loop” attention problem. Delegating to humans requires making sure those humans notice when an AI is uncertain so that they can meaningfully intervene. If an AI is correct 95% of the time, research shows that people get complacent and are unlikely to catch the 5% of instances the AI gets wrong (one simple countermeasure is sketched after this list). The solution requires more than just technology; much like social media was more of a psychological innovation than a technical one, we think startups in this space can (and should) emerge from social insights.
  • Standards compliance for responsible AI: There’s an opportunity for startups that consolidate existing responsible AI standards and measure compliance against them. Publication of AI standards has risen over the past two years as public pressure for AI regulation has grown. A recent survey showed 84% of Americans think AI should be carefully managed and rate this as a top priority. Companies want to signal that they take this seriously, and demonstrating that they follow standards put forth by IEEE, CSET and others would help them do so. Meanwhile, the current draft of the EU’s expansive AI Act (AIA) strongly emphasizes industry standards. If the AIA passes, compliance will become a necessity. Given the market that formed around GDPR compliance, we think this is a space to watch.
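On the first request, here is a minimal sketch of one countermeasure to reviewer complacency, under assumptions of my own: in addition to escalating uncertain cases, spot-check a random slice of confident ones so reviewers stay engaged and the rare errors have a chance of being caught. The 95% threshold and 10% audit rate are arbitrary stand-ins.

```python
import random

def needs_human_review(confidence: float,
                       threshold: float = 0.95,
                       audit_rate: float = 0.10) -> bool:
    """Always escalate uncertain cases; randomly audit confident ones."""
    if confidence < threshold:
        return True               # uncertain: a human must decide
    return random.random() < audit_rate  # confident: occasional spot-check
```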

Whether you’re trying one of these tips or starting one of these companies, simple, responsible AI practices can let you unlock immense business opportunities. To avoid creating a harmful product, you need to be thoughtful in your deployment of AI.

Luckily, this thoughtfulness will pay dividends when it comes to the long-term success of your business.
