
CommonGround raises $25M for immersive video avatar technology that doesn’t rely on VR gear


A man sits in front of a computer screen showing three avatars seated around a conference table.
Image Credits: CommonGround

The trials and tribulations that a giant company like Meta (née Facebook) has been facing in overcoming skepticism, creating user interest (let alone revenue) and building quality experiences for its all-in metaverse vision highlight just how much work lies ahead for any company working in mixed reality. Today, a startup in that bigger ecosystem, which believes it can fix one aspect of how this works — how we ourselves appear — is announcing some funding along with a beta of its live avatar software that has been years in the making.

CommonGround — an Israeli/Silicon Valley startup that has built technology for people to use their smartphones to scan their faces for responsive, real-time three-dimensional avatars that can be used in video applications — has raised $25 million, money that it is using both to continue developing its tech and to launch it into the world.

Marius Nacht, the co-founder and former chairman of Check Point Software, led the round, with VCs Grove, Matrix and StageOne also participating. The latter three are repeat backers: Collectively, they invested $19 million in CommonGround when it was still in stealth mode.

CommonGround actually raised this latest funding a year ago, but it chose to delay the announcement until it had a product ready to show. Now, you can go to the site to scan yourself and create an avatar; in Q1 2023, the company plans to release the first application to use that avatar: meeting software where your likeness, or an idealized version of your likeness, will be able to sit around a virtual table to engage with and respond to others in the conversation — complete with reactions and movements mirroring those you are making IRL. (For now, you can share the avatars with friends and put them into a dancing animation.)

Like “TrueSelf Scan,” the initial application used to scan a person’s image, the meeting software will not require a VR headset: users will be “seated” in a room that is shown on a video screen. Amir Bassan-Eskenazi, the CEO of CommonGround, who co-founded the company with Ran Oz, said the avatar preview link will for now work for the first 500 people, although it’s not clear how many will be able to speak concurrently on the conferencing app.

Why videoconferencing? The medium definitely had a moment in the spotlight with the arrival and peak of the COVID-19 pandemic and the huge shift of people opting to work remotely. Fast-forward to today, with millions of hours in aggregate clocked up on services like Zoom, Microsoft’s Teams, Google’s Meet, WebEx and the many other videoconferencing apps out there, and skeptics might argue that what is on the market today is good enough.

CommonGround’s bet is that the experience could be better, and that when people are presented with an easy way of having it, they will use it.

“There is Zoom and there are phone calls, but we think there is a big aspect of remote meetings [not being addressed by technology today],” said Bassan-Eskenazi. “Our goal is to enable taking experience — closer connections — and making that digital. We think moving video conferencing from 2D to 3D could even make it better than face-to-face.”

The computer vision technology is built from the ground up — a project that seems to have started as early as 2019 and has been complex enough that this launch was postponed from its original target date of 2021. Based around machine learning, CommonGround’s platform is theoretically learning all the time from its users: The more you use it, the more you train it and the more accurate it becomes.

And to be clear, the startup confirms that the tech is not in any way connected to what others are building around the same concept. One would-be competitor that I found comes from Avatar SDK, which is part of itSeez3D, an independent startup (not to be confused with the similarly named Itseez, which Intel acquired in 2016 for its IoT and automotive computer vision applications).

Avatars have had a lot of currency in more fun, consumer-focused applications, and there have been a few examples of how AI and computer vision can spark delight when avatars become more anthropomorphic: Apple’s animated Memoji, based on your facial expressions, can feel familiar and cute, if a little eerie.

But Bassan-Eskenazi believes that avatars also very much have a place in enterprise environments. For one thing, the many calls made today with the camera turned off, because a person does not feel presentable or is not in the right environment for a call, are an obvious use case: Now you can maintain your privacy while still making eye contact and responding to what others are saying, qualities that go a long way toward communication that might otherwise get lost in virtual environments.

And if you think immersive meetings are the future, you may never want to have them in VR. Although some have held up the new wave of headsets as the answer to more immersive virtual meetings, there’s no question that wearing a headset for extended periods (think work meetings that can last for hours) is uncomfortable.

Whether the idea really catches on with businesses, and whether it is as scalable as CommonGround believes, are bets that have yet to come good, but investors have been interested not least because of the founders’ pedigree. Between them, Bassan-Eskenazi and Oz have started seven companies, had three IPOs and two exits, and won two Emmy awards for streaming technology. That track record points to resourcefulness, and to artificial intelligence technology with multipurpose potential.

Update: Corrected to note that Itseez (acquired by Intel) is not related to itSeez3D (an independent startup).
