
MVP versus EVP: Is it time to introduce ethics into the agile startup model?


Anand Rao

Contributor

Anand Rao is global head of AI at PwC.

The rocket ship trajectory of a startup is well known: Get an idea, build a team and slap together a minimum viable product (MVP) that you can get in front of users.

However, today’s startups need to reconsider the MVP model as artificial intelligence (AI) and machine learning (ML) become ubiquitous in tech products and the market grows increasingly conscious of the ethical implications of AI augmenting or replacing humans in the decision-making process.

An MVP allows you to collect critical feedback from your target market that then informs the minimum development required to launch a product — creating a powerful feedback loop that drives today’s customer-led business. This lean, agile model has been extremely successful over the past two decades — launching thousands of successful startups, some of which have grown into billion-dollar companies.

However, building high-performing products and solutions that work for the majority isn’t enough anymore. From facial recognition technology that has a bias against people of color to credit-lending algorithms that discriminate against women, the past several years have seen multiple AI- or ML-powered products killed off because of ethical dilemmas that crop up downstream after millions of dollars have been funneled into their development and marketing. In a world where you have one chance to bring an idea to market, this risk can be fatal, even for well-established companies.

Startups do not have to scrap the lean business model in favor of a more risk-averse alternative. There is a middle ground that can introduce ethics into the startup mentality without sacrificing the agility of the lean model, and it starts with the initial goal of a startup — getting an early-stage proof of concept in front of potential customers.

However, instead of developing an MVP, companies should develop and roll out an ethically viable product (EVP) based on responsible artificial intelligence (RAI), an approach that accounts for the ethical, moral, legal, cultural, sustainability and socioeconomic implications of developing, deploying and using AI/ML systems.

And while this is a good practice for startups, it’s also a good standard practice for big technology companies building AI/ML products.

Here are three steps that startups — especially the ones that incorporate significant AI/ML techniques in their products — can use to develop an EVP.

Find an ethics officer to lead the charge

Startups have chief strategy officers, chief investment officers — even chief fun officers. A chief ethics officer is just as important, if not more so. This person can work across different stakeholders to make sure the startup is developing a product that fits within the moral standards set by the company, the market and the public.

They should act as a liaison, connecting the founders, the C-suite, investors and the board of directors with the development team, and making sure everyone is asking the right ethical questions in a thoughtful, risk-averse manner.

Machines are trained based on historical data. If systemic bias exists in a current business process (such as unequal racial or gender lending practices), AI will pick up on that and think that’s how it should continue to behave. If your product is later found to not meet the ethical standards of the market, you can’t simply delete the data and find new data.

These algorithms have already been trained. You can’t erase that influence any more than a 40-year-old man can undo the influence his parents or older siblings had on his upbringing. For better or for worse, you are stuck with the results. Chief ethics officers need to sniff out that inherent bias throughout the organization before it gets ingrained in AI-powered products.
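As a concrete illustration, here is a minimal sketch of the kind of pre-training audit an ethics officer might request: comparing outcome rates across groups in the raw historical data before any model is trained on it. The column names, sample data and the 0.8 threshold (the common "four-fifths" rule of thumb) are hypothetical placeholders, not a prescribed standard.

```python
# Minimal sketch: audit historical lending data for group-level outcome gaps
# before it is ever used to train a model. Column names ("group", "approved"),
# the sample rows and the 0.8 threshold are illustrative assumptions.
import pandas as pd

historical = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0,   1],
})

# Approval rate per group in the raw historical data.
rates = historical.groupby("group")["approved"].mean()
print(rates)

# Disparate-impact ratio: lowest group rate divided by highest group rate.
ratio = rates.min() / rates.max()
if ratio < 0.8:
    print(f"Warning: outcome ratio {ratio:.2f} is below 0.8; historical bias "
          "is likely to be learned by any model trained on this data.")
```

A check like this doesn't fix the underlying business process, but it surfaces the bias before it is baked into a trained model, which is exactly the point at which it is still cheap to address.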

Integrate ethics into the entire development process

Responsible AI is not a point-in-time exercise. It is an end-to-end governance framework focused on the risks and controls of an organization’s AI journey. This means that ethics should be integrated throughout the development process, from strategy and planning through development, deployment and operations.

During scoping, the development team should work with the chief ethics officer to become familiar with general ethical AI principles, behavioral guidelines that remain valid across many cultures and geographies. These principles prescribe, suggest or inspire how AI solutions should behave when faced with moral decisions or dilemmas in a specific field of use.

Above all, a risk and harm assessment should be conducted, identifying any risk to anyone’s physical, emotional or financial well-being. The assessment should look at sustainability as well and evaluate what harm the AI solution might do to the environment.

During the development phase, the team should constantly ask whether their use of AI aligns with the company’s values, whether models treat different people fairly and whether they respect people’s right to privacy. They should also consider whether their AI technology is safe, secure and robust, and how effective the operating model is at ensuring accountability and quality.
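To make fairness a measurable checkpoint rather than an abstract question, the team can track a simple metric on every model iteration, such as the gap in positive-prediction rates between groups. The sketch below assumes binary predictions and a single sensitive attribute; it is an illustration, not a complete fairness evaluation.

```python
# Minimal sketch: demographic-parity gap between groups for a model's
# binary predictions. `y_pred` (0/1 predictions) and `groups` (a label per
# row) are illustrative inputs, not a specific company's data.
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in positive-prediction rate between any two groups."""
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return float(max(rates) - min(rates))

y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

gap = demographic_parity_gap(y_pred, groups)
print(f"Demographic parity gap: {gap:.2f}")  # flag if above an agreed threshold
```

Recording a number like this alongside accuracy at each iteration gives the chief ethics officer and the development team a shared, auditable signal instead of a one-off judgment call.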

A critical component of any machine learning model is the data used to train it. Startups should be concerned not only with the MVP and how the model is initially proved out, but also with the model’s eventual context and geographic reach. This will allow the team to select the right representative dataset and avoid future data bias issues.
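One lightweight way to do this is to compare the composition of the training data against the expected make-up of the market the product will eventually serve. The group labels, target shares and tolerance below are made-up figures for illustration only.

```python
# Minimal sketch: compare training-set composition with the expected
# composition of the eventual target market. The target shares and the
# 5-percentage-point tolerance are hypothetical placeholders a team
# would replace with real figures.
from collections import Counter

train_groups = ["A"] * 700 + ["B"] * 250 + ["C"] * 50
target_share = {"A": 0.55, "B": 0.30, "C": 0.15}   # assumed market make-up

counts = Counter(train_groups)
total = sum(counts.values())

for group, expected in target_share.items():
    actual = counts.get(group, 0) / total
    if abs(actual - expected) > 0.05:               # illustrative tolerance
        print(f"Group {group}: {actual:.0%} of training data "
              f"vs {expected:.0%} of target market; re-sample or collect more data.")
```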

Don’t forget about ongoing AI governance and regulatory compliance

Given the implications on society, it’s just a matter of time before the European Union, the United States or some other legislative body passes consumer protection laws governing the use of AI/ML. Once a law is passed, those protections are likely to spread to other regions and markets around the world.

It’s happened before: The passage of the General Data Protection Regulation (GDPR) in the EU led to a wave of other consumer protections around the world that require companies to prove consent for collecting personal information. Now, people across the political and business spectrum are calling for ethical guidelines around AI. Again, the EU is leading the way after releasing a 2021 proposal for an AI legal framework.

Startups deploying products or services powered by AI/ML should be prepared to demonstrate ongoing governance and regulatory compliance — being careful to build these processes now, before the regulations are imposed on them later. Performing a quick scan of proposed legislation, guidance documents and other relevant guidelines before building the product is a necessary step in developing an EVP.

In addition, it’s advisable to revisit the regulatory and policy landscape prior to launch. Having someone on your board of directors or advisory board who is embedded in the active deliberations happening globally would also help you anticipate what is likely to come. Regulations are coming, and it’s good to be prepared.

There’s no doubt that AI/ML will present enormous benefits to humankind. The gains from automating manual tasks, streamlining business processes and improving customer experiences are too great to dismiss. But startups need to be aware of the impacts AI/ML will have on their customers, the market and society at large.

Startups typically have one shot at success, and it would be a shame if an otherwise high-performing product were killed because ethical concerns weren’t uncovered until after it hit the market. Startups need to integrate ethics into the development process from the very beginning, develop an EVP based on RAI and continue to ensure AI governance post-launch.

AI is the future of business, but we can’t lose sight of the need for compassion and the human element in innovation.
