
Blackshark.ai’s digital twin of Earth attracts $20M in funding


A digital recreation of Seattle with layers of information like rooftop area labeled.
Image Credits: Blackshark.ai

Blackshark.ai, the Austrian startup behind the digital globe you fly over in Microsoft’s Flight Simulator, has raised a $20 million Series A to develop and scale its replica-Earth tech. The potential applications for a planetary “digital twin” are many and varied, and the company has a head start even on mapping giants like Google.

The world got a glimpse of a fully traversable and remarkably (if not 100%) accurate globe in Flight Simulator last year; we called it a “technical marvel” and later went into detail about how it was created and by whom.

Blackshark.ai was spun out of gaming studio Bongfish with the intention, founder and CEO Michael Putz told me, of taking their world-building technology beyond game environments. The basis of their technique is turning widely available 2D imagery into accurate 3D representations with machine learning, a bit of smart guesswork and a lot of computing power.

The details are here, but essentially the Blackshark.ai system has a canny understanding of what different buildings look like from above, even in suboptimal lighting and incomplete imagery. The machine learning system they’ve built can extrapolate from imperfect outlines by considering the neighborhood (residential versus commercial), roof type (slanted versus flat) and other factors like the presence of air conditioning units and so on. Using all this it creates a plausible 3D reconstruction of the building.
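The kind of contextual inference described above can be sketched as a toy rule-based function: given a building footprint and attributes readable from 2D imagery (zoning, roof type, rooftop HVAC units), guess a plausible height to extrude the footprint into 3D. Every rule and number here is an illustrative assumption, not Blackshark.ai’s actual model.

```python
def estimate_height_m(footprint_area_m2: float,
                      zone: str,          # "residential" or "commercial"
                      flat_roof: bool,
                      has_hvac_units: bool) -> float:
    """Return a plausible building height in meters (illustrative rules only)."""
    if zone == "residential":
        # Small slanted-roof houses are usually one or two storeys.
        storeys = 1 if footprint_area_m2 < 120 else 2
    else:
        # Commercial blocks scale loosely with footprint size.
        storeys = max(2, int(footprint_area_m2 // 400) + 2)
    if flat_roof and has_hvac_units:
        # Clusters of rooftop AC units tend to indicate a larger building.
        storeys += 1
    return storeys * 3.0  # assume roughly 3 m per storey

print(estimate_height_m(90.0, "residential", False, False))   # 3.0  (small house)
print(estimate_height_m(1200.0, "commercial", True, True))    # 18.0 (office block)
```

In the real pipeline a learned model presumably replaces these hand-written rules, but the input signals (neighborhood, roof type, rooftop equipment) are the ones Putz describes.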

The hard part, of course, isn’t how to do that once but how to do it a billion times on a regular basis, in order to create an up-to-date 3D representation of every building on the planet. As Putz explained: “Even if you could afford to buy all the computing power for this, building the back end to serve it is hard! This was a real-world issue we had to deal with.”

Their solution, as is often necessary for AI-powered services, was to optimize. Putz said that calculating the 3D model for every building on the planet originally took about a month of computation; it can now be done in about three days, a roughly tenfold speedup.

Having the ability to update regularly based on new satellite imagery is crucial to their business proposition, Putz explained. A lot of 3D map data, like what you see in Google’s and Apple’s maps, is based on photogrammetry: combining multiple aerial images and comparing their parallax (much as our eyes do) to determine size and depth. This produces great data … for the moment the photos were taken.
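The parallax principle behind photogrammetry reduces to one formula: a point imaged from two camera positions shifts by a disparity inversely proportional to its depth, so depth = baseline × focal length ÷ disparity. A quick numeric illustration, with made-up numbers:

```python
def depth_from_disparity(baseline_m: float, focal_px: float,
                         disparity_px: float) -> float:
    """Depth of a point from its pixel disparity between two views.

    baseline_m:   distance between the two camera positions (meters)
    focal_px:     camera focal length expressed in pixels
    disparity_px: how far the point shifts between the two images (pixels)
    """
    return baseline_m * focal_px / disparity_px

# Two aerial photos taken 100 m apart with an 8000 px focal length;
# a rooftop point that shifts 400 px between them is 2000 m from the camera.
print(depth_from_disparity(100.0, 8000.0, 400.0))  # 2000.0
```

Satellite imagery from a single pass offers no such second viewpoint, which is why Blackshark.ai has to infer 3D structure from 2D appearance instead.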

If you want your 3D map to represent what a block in Chicago looked like last week, not two years ago, and you want to provide that level of recency to as much of the globe as possible, the only option these days is satellite imagery. But that also necessitates the aforementioned 2D-to-3D method.


Putz noted that although the Blackshark.ai 3D map and those from Google and Apple have superficial similarities, they’re not really competitors. All provide a realistic “canvas,” but they differ greatly in intention.

“Google Maps is the canvas for local businesses,” he said, and what’s important to both the company and its users is locations, reviews, directions, things like that. “For us, say for flooding, a climate change use case, we provide the 3D data for say, Seattle, and others who specialize in water physics and fluid simulation can use the real world as a canvas to draw on. Our goal is to become a searchable surface of the planet.”

A digital recreation of a hillside with simulated windmills and data on their operations.
Image Credits: Blackshark.ai

What’s the total flat rooftop area available in this neighborhood of San Diego? What regional airports have an open 4,000-square-meter space? How do wildfire risk areas overlap with updated wind models? It’s not hard to come up with ways this could be helpful.
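Once every building is a 3D record with attributes, questions like the rooftop one become simple queries over the data. A toy illustration (not Blackshark.ai’s actual API; all records are invented):

```python
# Hypothetical per-building records of the kind a "searchable surface" would hold.
buildings = [
    {"neighborhood": "North Park", "roof": "flat",    "roof_area_m2": 540.0},
    {"neighborhood": "North Park", "roof": "slanted", "roof_area_m2": 210.0},
    {"neighborhood": "North Park", "roof": "flat",    "roof_area_m2": 380.0},
    {"neighborhood": "Hillcrest",  "roof": "flat",    "roof_area_m2": 600.0},
]

def flat_roof_area(records: list[dict], neighborhood: str) -> float:
    """Total flat rooftop area (m^2) available in one neighborhood."""
    return sum(b["roof_area_m2"] for b in records
               if b["neighborhood"] == neighborhood and b["roof"] == "flat")

print(flat_roof_area(buildings, "North Park"))  # 920.0
```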

“This is one of those ideas where the more you think about it, the more use cases come up,” Putz said. “There’s obviously government applications, disaster relief, smart cities, autonomous industries — driving and flying. All these industries need synthetic environments. This wasn’t just like, ‘Hey we want to do this,’ it was needed. And this 2D-3D thing is the only way to solve this massive problem.”

The $20 million round was led by M12 (Microsoft’s venture fund) and Point72 Ventures. Putz was excited to have a few familiar advising faces aboard: Google Earth co-founder Brian McClendon, former Airbus Defence and Space CEO Dirk Hoke and Qasar Younis, former Y Combinator COO and now CEO of Applied Intuition.

Scaling is more a matter of going to market than of building out the product; while more engineers and researchers will of course be hired, the company needs to go from “clever startup” to “global provider of 3D synthetic Earths” in a hurry, or it may find some other clever startup eating its lunch. So a sales and support team will be built out, along with “the remaining pieces of a hyperscaling company,” Putz said.

Beyond the more obvious use cases he listed, there’s a possibility of — you knew it was coming — metaverse applications. In this case, however, it’s less hot air and more the idea that if any interesting AR/VR/etc. applications, from games to travel guides, want to base their virtual experience in a recently rendered version of Earth, they can. Not only that, but worlds beyond our own can be generated by the same method, so if you wanted to scramble the layout of the planet and make a new one (and who could blame you?), you could do so by the end of the week. Doesn’t that sound nice?

Once the new funding gets put to use, expect to see “powered by Blackshark.ai” or the like on a new generation of ever more detailed simulations of the complex markets and processes taking place on the surface of our planet.
