SLAIT’s real-time sign language translation promises more accessible online communication

Two phones showing each side of a video call, with a text box describing what one is signing and the other is saying.
Image Credits: Slait.ai

Sign language is used by millions of people around the world, but unlike Spanish, Mandarin or even Latin, there's no automatic translation available for those who don't know it. SLAIT claims to offer the first such tool for general use, one that can translate around 200 words and simple sentences to start — using nothing but an ordinary computer and webcam.

People with hearing impairments, or other conditions that make vocal speech difficult, number in the hundreds of millions and rely on the same common tech tools as the hearing population. But while emails and text chat are useful and of course very common now, they aren't a replacement for face-to-face communication, and unfortunately there's no easy way for signing to be turned into written or spoken words, so this remains a significant barrier.

We’ve seen attempts at automatic sign language (usually American/ASL) translation for years and years. In 2012 Microsoft awarded its Imagine Cup to a student team that tracked hand movements with gloves; in 2018 I wrote about SignAll, which has been working on a sign language translation booth using multiple cameras to give 3D positioning; and in 2019 I noted that a new hand-tracking algorithm called MediaPipe, from Google’s AI labs, could lead to advances in sign detection. Turns out that’s more or less exactly what happened.

SLAIT is a startup built out of research done at the Aachen University of Applied Sciences in Germany, where co-founder Antonio Domènech built a small ASL recognition engine using MediaPipe and custom neural networks. Having proved the basic concept, Domènech was joined by co-founders Evgeny Fomin and William Vicars to start the company; they then moved on to building a system that could recognize first 100, and now 200, individual ASL gestures and some simple sentences. The translation occurs offline and in near real time on any relatively recent phone or computer.
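SLAIT hasn't published its architecture, but the general recipe described above, MediaPipe hand landmarks feeding a custom classifier, can be sketched at a high level. In this illustrative Python sketch the landmark extraction is mocked (a real pipeline would use MediaPipe's hands solution, which returns 21 (x, y, z) keypoints per detected hand per frame), and a nearest-centroid lookup stands in for SLAIT's actual neural networks; every name and value below is hypothetical.

```python
import math
import random

NUM_LANDMARKS = 21                     # MediaPipe Hands emits 21 keypoints per hand
FEATURES = NUM_LANDMARKS * 3           # (x, y, z) per keypoint

def extract_landmarks(frame_id):
    """Stand-in for MediaPipe hand tracking. A real pipeline would call
    mediapipe.solutions.hands.Hands().process(frame) and read
    results.multi_hand_landmarks; here we derive fake coordinates from
    the frame id so the sketch runs without a camera."""
    rng = random.Random(frame_id)
    return [rng.random() for _ in range(FEATURES)]

def video_embedding(frame_ids):
    """Average the per-frame landmark vectors into one fixed-size feature,
    the simplest way to summarize a gesture clip for a classifier."""
    frames = [extract_landmarks(f) for f in frame_ids]
    return [sum(col) / len(frames) for col in zip(*frames)]

def classify(embedding, templates):
    """Nearest-centroid lookup over known signs; a trained neural network
    over landmark sequences would replace this in a production engine."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(templates, key=lambda sign: dist(embedding, templates[sign]))

# Build toy "templates" for three signs from synthetic ten-frame clips.
templates = {
    sign: video_embedding(range(i * 10, i * 10 + 10))
    for i, sign in enumerate(["hello", "thank-you", "deaf"])
}

print(classify(video_embedding(range(10, 20)), templates))  # "thank-you"
```

The key design point this illustrates is why landmark tracking made sign recognition tractable: instead of learning from raw pixels, the model only has to classify a compact stream of 63 coordinates per hand per frame, which is why it can run offline on an ordinary phone or laptop.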

Animation showing ASL signs being translated to text, and spoken words being transcribed to text back.
Image Credits: SLAIT

They plan to make it available for educational and development work, expanding their dataset so they can improve the model before attempting any more significant consumer applications.

Of course, the development of the current model was not at all simple, though it was achieved in remarkably little time by a small team. MediaPipe offered an effective, open-source method for tracking hand and finger positions, sure, but the crucial component for any strong machine learning model is data, in this case video data (since it would be interpreting video) of ASL in use — and there simply isn’t a lot of that available.

As they recently explained in a presentation for the DeafIT conference, the team first evaluated an older Microsoft database, but found that a newer Australian academic database had more and better-quality data, allowing for the creation of a model that is 92% accurate at identifying any of 200 signs in real time. They have augmented this with sign language videos from social media (with permission, of course) and government speeches that have sign language interpreters — but they still need more.
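For context, a figure like that 92% reads as standard top-1 accuracy over the 200-sign vocabulary. As a quick illustration (with made-up labels, not SLAIT's data), the conventional computation is simply:

```python
def top1_accuracy(predictions, labels):
    """Fraction of clips whose predicted sign matches the ground truth."""
    assert len(predictions) == len(labels) and labels
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Toy example: 4 of 5 clips classified correctly.
preds = ["hello", "deaf", "thank-you", "help", "learn"]
truth = ["hello", "deaf", "thank-you", "help", "teach"]
print(top1_accuracy(preds, truth))  # 0.8
```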

Animated image of a woman saying "deaf understand hearing" in ASL.
A GIF showing one of the prototypes in action — the consumer product won’t have a wireframe, obviously. Image Credits: SLAIT

Their intention is to make the platform available to the deaf and ASL learner communities, who hopefully won't mind that their use of the system will also help improve it.

And it could prove an invaluable tool even in its present state, since the company's translation model, though a work in progress, is still potentially transformative for many people. With the number of video calls going on these days, and likely for the rest of eternity, accessibility is being left behind: only some platforms offer automatic captioning, transcription or summaries, and none recognize sign language. But with SLAIT's tool, people could sign normally and participate in a video call naturally, rather than relying on the oft-neglected chat function.

“In the short term, we’ve proven that 200 word models are accessible and our results are getting better every day,” said SLAIT’s Evgeny Fomin. “In the medium term, we plan to release a consumer facing app to track sign language. However, there is a lot of work to do to reach a comprehensive library of all sign language gestures. We are committed to making this future state a reality. Our mission is to radically improve accessibility for the Deaf and hard-of-hearing communities.”

From left, Evgeny Fomin, Antonio Domènech and Bill Vicars. Image Credits: SLAIT

He cautioned that it will not be totally complete — just as translation and transcription in or to any language is only an approximation, the point is to provide practical results for millions of people, and a few hundred words goes a long way toward doing so. As data pours in, new words can be added to the vocabulary, and new multigesture phrases as well, and performance for the core set will improve.

Right now the company is seeking initial funding to get its prototype out and grow the team beyond the founding crew. Fomin said they have received some interest but want to make sure they connect with an investor who really understands the plan and vision.

When the engine itself has been built up to be more reliable by the addition of more data and the refining of the machine learning models, the team will look into further development and integration of the app with other products and services. For now the product is more of a proof of concept, but what a proof it is — with a bit more work SLAIT will have leapfrogged the industry and provided something that deaf and hearing people both have been wanting for decades.
