
Level AI lands $13M Series A to build conversational intelligence for customer service


Image Credits: lankogal / Getty Images

Level AI, an early-stage startup from a former member of the Alexa product team, wants to help companies process customer service calls faster by understanding the interactions they’re having with customers in real time.

Today the company launched publicly, while announcing a $13 million Series A led by Battery Ventures, with help from seed investors Eniac and Village Global as well as some unnamed angels. Battery’s Neeraj Agrawal will be joining the startup’s board under the terms of the agreement. The company reports it has now raised $15 million, including an earlier $2 million seed.

Company founder Ashish Nagar helped run product for the Amazon Alexa team, working on an experimental project to get Alexa to hold an extended human conversation. While the team didn't achieve that goal, as the technology simply isn't there yet, the work deepened his understanding of conversational AI, and in 2019 he launched Level AI to bring that knowledge to customer service.

“Our product helps agents in real time to perform better, resolve customer queries faster and make them clear faster. Then after the call, it helps the auditor, the folks who are doing quality assurance and training audits for those calls, do their jobs five to 10 times faster,” Nagar explained.


He says the Level AI solution involves several activities. The first is understanding the nature of the conversation in real time by breaking it down into meaningful chunks the technology can parse. Next, the system runs that information against workflows operating in the background to surface helpful resources for the agent. Finally, it uses all the conversational data it collects to help companies learn from this activity.

“We now have all this call data, email data, chat data, and we can look at it through a new lens to train agents better and provide insights to other aspects of the business like product managers and so on,” Nagar said.

He makes clear that the product isn't merely reading sentiment or using keyword analysis to drive actions and understanding. Instead, he says, it is truly trying to understand the language in the interaction and deliver the right kind of information to the agent to help resolve the customer's problem. That involves modeling intent, maintaining memory and tracking multiple threads at the same time, which, as he notes, is how humans interact and what conversational AI is trying to mimic.

While the technology isn't completely there yet, the team is working to solve each of these problems as advances in the field allow.

The company got its start in 2018, and the first idea was to build voice assistants for front-line workers. After talking to customers, however, Nagar learned there wasn't real demand for that; there was demand for using conversational AI to augment human workers, especially in customer service.

He decided to build that instead, and launched the first version of the product in March 2020. Today the company has 27 employees spread across the U.S. and India, and Nagar believes that by being remote and hiring anywhere, he can attract the best people while driving diversity.

Agrawal, who is lead investor for the round, sees a company solving a fundamental problem of delivering the right information to an agent in real time. “What ​​he’s built has real time in mind. And that’s kind of the holy grail of helping the customer service agents. You can provide information after the call ends, and that’s […] helpful, but […] you get the real value [by delivering information] during the call and that’s where real business value is,” he said.

Nagar acknowledges this technology could extend to other parts of the business like sales, but he intends to keep his focus on customer service for the time being.
