Four investors explain why AI ethics can’t be an afterthought

Billions of dollars are flooding into AI. Yet AI models are already exhibiting bias, as evidenced by mortgage discrimination against Black prospective homeowners.

It’s reasonable to ask what role ethics plays in the building of this technology and, perhaps more importantly, where investors fit in as they rush to fund it.

A founder recently told TechCrunch+ that it’s hard to think about ethics when innovation is so rapid: People build systems, break them and then edit them. So some of the onus lies on investors to make sure these new technologies are built by founders with ethics in mind.

To see whether that’s happening, TechCrunch+ spoke with four active investors in the space about how they think about ethics in AI and how founders can be encouraged to think more about biases and doing the right thing.


Some investors said they tackle this by doing due diligence on a founder’s ethics to help determine whether they’ll continue to make decisions the firm can support.

“Founder empathy is a huge green flag for us,” said Alexis Alston, principal at Lightship Capital. “Such people understand that while we are looking for market returns, we are also looking for our investments to not cause a negative impact on the globe.”

Other investors think that asking hard questions can help separate the wheat from the chaff. “Any technology brings with it unintended consequences, be it bias, reduced human agency, breaches of privacy or something else,” said Deep Nishar, managing director at General Catalyst. “Our investment process centers around identifying such unintended consequences, discussing them with founding teams and assessing whether safeguards are or will be in place to mitigate them.”

Government policies are also taking aim at AI: The EU has passed machine learning laws, and the U.S. has introduced plans for an AI task force to start looking at the risks associated with AI. That’s in addition to the AI Bill of Rights introduced last year. With many top VC firms injecting money into AI efforts in China, it’s important to ask how global ethics within AI can be enforced across borders as well.

Read on to find out how investors are approaching due diligence, the green flags they look for and their expectations of regulations in AI.

We spoke with:

Alexis Alston, principal, Lightship Capital
Justyn Hornor, angel investor and serial founder
Deep Nishar, managing director, General Catalyst
Henri Pierre-Jacques, managing partner, Harlem Capital


Alexis Alston, principal, Lightship Capital

When investing in an AI company, how much due diligence do you do on how its AI model perpetuates or handles bias?

For us, it’s important to understand exactly what data the model takes in, where the data comes from and how they’re cleaning it. We do quite a bit of technical diligence with our AI-focused GP to make sure that our models can be trained to mitigate or eliminate bias.

We all remember not being able to have faucets turn on automatically to wash our darker hands, and the times when Google image search “accidentally” equated Black skin with primates. I’ll do everything in my power to make sure we don’t end up with models like that in our portfolio.
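
Lightship doesn’t publish its diligence checklist, but a first-pass audit of the kind Alston describes often starts with a simple disparity metric on a model’s outputs. Below is a minimal, hypothetical sketch in Python of a demographic parity check; the data, group labels and metric choice are illustrative assumptions, not the firm’s actual process.

```python
# Hypothetical first-pass bias audit: compare a model's positive-prediction
# rate across demographic groups (demographic parity). Illustrative only.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return (max rate difference across groups, per-group rates)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Invented outputs for a hypothetical loan-approval model.
preds = [1, 1, 0, 1, 0, 1, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

gap, rates = demographic_parity_gap(preds, groups)
print(f"approval rates by group: {rates}, parity gap: {gap:.2f}")
# A large gap is a prompt for deeper diligence, not proof of bias on its own.
```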

How would the U.S. passing machine learning laws similar to the EU’s affect the pace of innovation the country sees in this sector?

Given the lack of technical knowledge and sophistication in our government, I have very little faith in the U.S.’ ability to pass actionable and accurate legislation around machine learning. We have a long lag when it comes to passing timely legislation and getting technical experts onto task forces to inform our legislators.

I actually don’t see legislation making any major changes in the pace of ML development, given how our laws are usually structured. It’s similar to the legislative race around designer drugs in the U.S. a decade ago: The legislation could never keep up.

How could and should ethics be defined and enforced globally and across cultures?

We have a deep responsibility to ensure that our investments have no negative implications for national security or contribute to any sort of hypercontrolled police state. We have turned down plenty of investments that contribute to the prison industrial complex, threaten national security, or otherwise target marginalized groups of people in a way that elicits harm.

Each firm and each nation has to have its own standard for ethics development, and I don’t think there is a blanket framework for AI ethics that would work for all.

Training AI models against discrimination will require expertise beyond engineering. What roles will sociologists, historians, philosophers and other humanities professions play in the future of AI?

I think that sociologists, psychologists and philosophers will play a very large role in these conversations, as they have a deeper understanding of the larger societal implications of legislation and changes in innovation on a global scale than an investor would.

Which sectors of AI seem to be ahead of the rest when it comes to adopting ethical oversight? Which could use more of it?

Facial recognition is likely the furthest along, given its tenure as an established area in ML that has had strong leadership in communities of color and was led by engineers of color for years. Many of these teams were at the forefront of [studying] the implications of AI in immigration, policing and other policy-driven initiatives.

Every other aspect of AI could use deeper ethical oversight, especially the use of AI in predictive policing, drone deployment and most defense uses. All of these are areas I would not be comfortable driving AI innovation in.

What are the red and green flags you look for when investing in an AI product with regard to ethical considerations?

Founder empathy is a huge green flag for us, as such people understand that while we are looking for market returns, we are also looking for our investments to not cause a negative impact on the globe. Diversity of team and thought, particularly in product and engineering, is also crucial, as these teams have to have a keen eye for factors that can negatively impact algorithmic development.

Red flags for us are homogeneous teams or a general lack of forethought or accountability around the larger implications of AI, machine learning and computer vision.

What is investors’ responsibility for ensuring that ethics stays at the forefront of the conversation surrounding innovation within AI?

I think we all have a deep responsibility here — funders, founders, operators, policymakers and thought leaders in sociology — to ensure that AI and ethics go hand in hand. We’ve been having these conversations for years now, and I’m glad that ChatGPT is bringing it back to the forefront of people’s minds for us to collectively work toward a more equitable and safe future.

Justyn Hornor, angel investor and serial founder

When investing in an AI company, how much due diligence do you do on how its AI model perpetuates or handles bias?

I look for two key elements:

  • Are the key risks for bias clearly understood and measured?
  • Does the system include human-in-the-loop capabilities for constant learning?

Bias is going to be very specific to the model and its use cases, and the risks are going to be highly dependent on the industry. For medical AI products, for example, these risks would be examined in great detail, but a manufacturing system where AI is used to judge the quality of a bolt won’t need the same level of scrutiny.
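
Hornor doesn’t specify an implementation, but the human-in-the-loop element he describes is commonly built as confidence-based triage: low-confidence predictions are routed to a human review queue, and reviewer corrections feed the constant learning he mentions. A minimal, hypothetical sketch, with the threshold and data structures as assumptions:

```python
# Hypothetical human-in-the-loop triage: auto-accept confident predictions,
# queue the rest for human review, and keep corrections as training signal.
from dataclasses import dataclass, field

@dataclass
class HumanInTheLoop:
    confidence_threshold: float = 0.9
    review_queue: list = field(default_factory=list)
    corrections: list = field(default_factory=list)

    def triage(self, item_id: str, label: str, confidence: float) -> str:
        if confidence >= self.confidence_threshold:
            return label  # high confidence: accept the model's label
        self.review_queue.append((item_id, label, confidence))
        return "pending_review"

    def record_review(self, item_id: str, human_label: str) -> None:
        # Human decisions become training data for the next model version.
        self.corrections.append((item_id, human_label))

loop = HumanInTheLoop()
print(loop.triage("doc-1", "approve", 0.97))  # approve
print(loop.triage("doc-2", "deny", 0.55))     # pending_review
loop.record_review("doc-2", "approve")
```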

How would the U.S. passing machine learning laws similar to the EU’s affect the pace of innovation the country sees in this sector?

The pace may be slowed in the short term, but markets will adapt quickly with standards and systems that will be commoditized in short order. I believe these laws in the EU are well designed and are being thoughtfully implemented.

We should anticipate similar laws in the near future and begin self-policing our respective industries and systems to get ahead of these regulatory changes.

How could and should ethics be defined and enforced globally and across cultures?

These are two huge questions. Global ethics are typically very, very high level. That doesn’t mean they’re not valuable, but abstracting an ethical framework for AI products at such a high level may lead to an inability to enforce those standards. Many of the bigger challenges with regard to China can be addressed through non-AI ethics frameworks. For example, consumer privacy and theft of intellectual property are clearly defined in most modern markets.

Enforcing standards with regard to China will likely come from major trade pacts. The [Trans-Pacific Partnership] was a particularly powerful approach until it got pulled into culture war nonsense here in the U.S. Outside of major multilateral trade agreements, there are few means of enforcing any kind of international standards with China outside of saber-rattling or war.

Training AI models against discrimination will require expertise beyond engineering. What roles will sociologists, historians, philosophers and other humanities professions play in the future of AI?

We will see multidisciplinary teams become the norm with regard to AI training. There are a couple of facets of interest here depending on the type of AI products being built.

For generative AI, these systems will have to find a balance between responding with “correct” information and being prompted by humans with their own biases. Many historical events can be viewed through a number of different viewpoints, for example. It will be challenging to find a sweet spot, especially when humans can exert a significant influence on the outputs of generative AI: text, video, audio, images, etc.

For classification systems, I believe we’ll see similar teams that will face challenges in how the various labels influence the outputs of their AI. A common use case that I have run into many times is AI vision products that treat nudity in art in the same way as pornography. That’s a form of bias that must be understood within the context of a culture. There are no easy answers.
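
One way to read that example is that the label taxonomy, not the model, encodes the cultural judgment. Here is a hypothetical sketch of a policy layer in which the same model score yields different decisions depending on a context label; the labels, scores and thresholds are invented for illustration:

```python
# Hypothetical moderation policy: the context label, part of the taxonomy,
# decides the outcome as much as the model's nudity score does.
def moderation_decision(nudity_score: float, context: str) -> str:
    # A taxonomy that collapses "fine_art" into a generic "explicit" label
    # would force every image down the stricter branch below.
    if context in ("fine_art", "medical"):
        return "allow" if nudity_score < 0.95 else "escalate_to_human"
    return "allow" if nudity_score < 0.5 else "block"

print(moderation_decision(0.8, "fine_art"))     # allow
print(moderation_decision(0.8, "user_upload"))  # block
```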

What are the red and green flags you look for when investing in an AI product with regard to ethical considerations?

I want to see that the team has been mindful and deliberate about defining, measuring and controlling biases. They may not get it right, but being intentional is very important.

Additionally, I want to understand the source of the underlying data used for training. Beyond the direct sources, feature engineering is a common means of extrapolating data from primary sources, so I want to understand if and how this process has been applied to any training.
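
Hornor doesn’t name a method, but one cheap first check on engineered features is whether they act as proxies for protected attributes. A minimal, hypothetical sketch (the feature and data are invented; `statistics.correlation` requires Python 3.10+):

```python
# Hypothetical proxy check: a feature engineered from ZIP codes may correlate
# strongly with a protected attribute, reintroducing bias the raw inputs hid.
from statistics import correlation  # Python 3.10+

zip_median_income = [42, 38, 95, 101, 44, 99]  # invented, derived from ZIP
protected_attr = [1, 1, 0, 0, 1, 0]            # invented group membership

r = correlation(zip_median_income, protected_attr)
print(f"correlation with protected attribute: {r:.2f}")
# A strong correlation flags the feature as a proxy worth human review.
```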

What is investors’ responsibility for ensuring that ethics stays at the forefront of the conversation surrounding innovation within AI?

Investors should be asking the hard questions early. If you don’t have expertise on the team to understand the systems, hire subject matter experts to help dig into the technology. You may find a lot of red flags — that doesn’t mean you shouldn’t invest; just make sure the use of funds includes elevating the systems being built.

I also believe that any company whose products or services rely significantly on AI should have a trust and safety executive at or near the C-suite. There should be oversight of these systems, with someone on the ground who has access to engineering teams and can raise concerns to the C-suite without risk.

Investors should push for these roles and accept that the use of funds includes onboarding this type of expertise.

Deep Nishar, managing director, General Catalyst

When investing in an AI company, how much due diligence do you do on how its AI model perpetuates or handles bias?

Bias is one of many dimensions within ethical AI (or responsible AI, as we refer to it at GC) that we evaluate in every investment decision we make. The idea of ethical AI cuts across our three primary responsible innovation pillars of inclusive prosperity, sustainable development and good citizenship. We believe this framework is a toolkit for us and our companies: it extends beyond due diligence into scaling companies and scoping second/third acts.

Any technology brings with it unintended consequences, be it bias, reduced human agency, breaches of privacy or something else. Our investment process centers around identifying such unintended consequences, discussing them with founding teams and assessing whether safeguards are or will be in place to mitigate them.

How would the U.S. passing machine learning laws similar to the EU’s affect the pace of innovation the country sees in this sector?

Across history, we see policy impacts on innovation occupy a spectrum. We’ve seen too little thus far to diagnose Capitol Hill’s influence on AI’s direction of travel.

That said, basic measures of fairness, transparency, privacy and reliability should be instituted with standard protocols and methodologies backing them. We believe that if guidelines are meant to be universal, compliance therein must be universally accessible and understandable. We believe the first step is a modicum of standard transparency that, similar to nutrition labels, will afford users knowledge of what they are consuming.
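
Nishar doesn’t prescribe a format, but the nutrition-label analogy maps naturally onto the model-card idea already circulating in the field. A minimal, hypothetical sketch of what such a disclosure could contain; the schema and field values are illustrative assumptions, not an established standard.

```python
# Hypothetical "nutrition label" for a model, in the spirit of model cards.
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelLabel:
    model_name: str
    intended_use: str
    training_data_sources: list
    known_limitations: list
    fairness_evaluations: list

label = ModelLabel(
    model_name="credit-scorer-v2",  # invented example
    intended_use="pre-screening consumer loan applications",
    training_data_sources=["internal applications, 2018-2023"],
    known_limitations=["sparse data for applicants under 21"],
    fairness_evaluations=["demographic parity gap: 0.03 (2024 audit)"],
)

print(json.dumps(asdict(label), indent=2))
```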

Training AI models against discrimination will require expertise beyond engineering. What roles will sociologists, historians, philosophers and other humanities professions play in the future of AI?

Both direct and indirect roles for the arts and humanities exist in this AI era, and they fill critical gaps in purely technical reasoning. It is perhaps easiest to envisage them at the inception and terminus of AI workflows: Does an architecture reflect the intentions of the architect (and society)? Do the inputs fed to the architecture holistically represent intentions and population(s)? Are these results aligned with the intentions outlined at inception? Of what consequence may they be to broader populations?

What are the red and green flags you look for when investing in an AI product with regard to ethical considerations?

Every investment memo we write includes a diagnostic on principles of responsible innovation. To this end, we have conversations with founders about this by the time we get to a term sheet. For us, ethical AI is about fostering the right mindsets and mechanisms such that responsible AI emanates from the core of the company. We probe for extant safeguards at the technological and organizational levels, and have discussions with teams about the potential unintended consequences of their products.

It’s a red flag if our conversations with teams mark the first time these topics have surfaced.

What is investors’ responsibility for ensuring that ethics stays at the forefront of the conversation surrounding innovation within AI?

Our fundamental belief is that every stakeholder — technologist or not, from builders to end users — plays a role in ethical AI.

At the end of the day, we vote with our checkbooks and our governance rights. We believe that ethical AI and financial returns are not in competition with one another — quite the opposite, actually.

Responsible and ethical innovation contributes to stronger and more enduring companies, which in turn leads to better investment outcomes. To that end, the way we see it, ethical AI is a natural extension of the fiduciary duties by which we are already bound.

Henri Pierre-Jacques, managing partner, Harlem Capital

When investing in an AI company, how much due diligence do you do on how its AI model perpetuates or handles bias?

Given we invest at the pre-seed and seed stage, most of the AI companies at that point are pre-product or pre-revenue, so it’s very early in the tech product development. We are [doing due diligence on] the founder’s ethics to determine if they will make decisions we support over many years.

How would the U.S. passing machine learning laws similar to the EU’s affect the pace of innovation the country sees in this sector?

Innovation won’t be stopped, just altered. Whether it’s the EU or China, both have stricter rules, but both are still innovating. The right balance of laws is still unclear.

How could and should ethics be defined and enforced globally and across cultures?

Every country and region will have to make that decision for themselves; no one knows the right solution at this point, as everyone is just figuring it out. In reality, corporations will make decisions ahead of governments in most regions.

Training AI models against discrimination will require expertise beyond engineering. What roles will sociologists, historians, philosophers and other humanities professions play in the future of AI?

In an ideal world, they would work similarly to a marketing and engineering team, but I don’t have a lot of hope that this will be the case, as it wasn’t for web3, either.

Which sectors of AI seem to be ahead of the rest when it comes to adopting ethical oversight? Which could use more of it?

AI for [autonomous vehicles] has had a long and slow rollout. Those companies have spent time thinking about insurance, death, regulation, job loss and more. The rollout of generative AI for consumer-facing products like images or chatbots has felt like it’s gone really fast.

The stakes seem lower at first, because death in a car accident isn’t a risk, but there are still serious implications from the consumer-facing technology that haven’t been fully thought through, in my opinion.

What is investors’ responsibility for ensuring that ethics stays at the forefront of the conversation surrounding innovation within AI?

Given the power of this technology shift, I think it’s critical. We believe that companies should be making governance decisions even if their governments don’t require it.
