Sber presented AI models capable of recognizing Russian sign language

by alex

The current version of the model can recognize more than 2,500 gestures

At AI Journey, an international conference on artificial intelligence, the Sberbank team presented neural network models that recognize Russian sign language.

The Vision R&D team at SberDevices, which is developing one of these solutions, was the first in the world to publicly present a prototype of communicating with a generative language model in sign language. This was made possible by the GigaChat API, a software interface for accessing the GigaChat service.

The GigaChat generative model understands the context of the recognized gestures on its own, without additional transformations. For example, the service converts a sequence of individually recognized words such as “I go street walk” into the correct phrase “I’m going for a walk outside,” preserving the meaning of what was communicated.
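
To illustrate how a pipeline like this could be wired together, here is a minimal sketch that sends a raw gesture-word sequence to the GigaChat API and asks it to produce a fluent sentence. The endpoint URL, payload shape, and model name follow the API's publicly documented OpenAI-style chat interface but should be treated as assumptions, and the access token is obtained separately through Sber's OAuth flow (not shown here).

```python
# Sketch: turning recognized gesture glosses into a fluent sentence via the
# GigaChat API. Endpoint, payload shape, and model name are assumptions based
# on the API's OpenAI-style chat interface.
import requests

GIGACHAT_URL = "https://gigachat.devices.sberbank.ru/api/v1/chat/completions"  # assumed endpoint
ACCESS_TOKEN = "..."  # obtained beforehand via the GigaChat OAuth flow

def glosses_to_sentence(glosses: list[str]) -> str:
    """Ask the generative model to rewrite a word-by-word gesture sequence
    ("I go street walk") as a grammatical sentence, preserving meaning."""
    payload = {
        "model": "GigaChat",  # assumed model identifier
        "messages": [
            {
                "role": "system",
                "content": (
                    "You receive words recognized one by one from Russian sign "
                    "language. Rewrite them as a single natural sentence "
                    "without adding new information."
                ),
            },
            {"role": "user", "content": " ".join(glosses)},
        ],
    }
    resp = requests.post(
        GIGACHAT_URL,
        json=payload,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# Example: ["I", "go", "street", "walk"] -> "I'm going for a walk outside."
```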

The current version of the model can recognize more than 2,500 gestures, including fingerspelling (dactyl) and compound gestures. The model also understands terminology related to banking, transport, and animals, as well as some vocabulary from medicine and education. This coverage spans a significant part of the Russian sign language vocabulary, making it possible to build services for a wide range of applications.

Another team of researchers has developed and publicly released a lightweight, computationally inexpensive model for sign language recognition. The model runs on a CPU, which lowers the cost of solutions built on top of it and enables a wide range of developers to create inclusive software, such as communication products and services or tools for learning sign language. The algorithm currently recognizes 1,600 gestures and converts up to three gestures per second into words on an ordinary personal computer. In 2024, the Russian sign language recognition model and solutions based on it are planned to be piloted and rolled out in a number of Russian regions.
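
A CPU-only recognition loop of this kind could look roughly like the sketch below, which reads webcam frames, buffers them into short clips, and classifies each clip. ONNX Runtime and OpenCV are chosen here purely for illustration; the model file name, its input layout, and the label list are hypothetical placeholders, since the article does not specify how the published model is packaged.

```python
# Sketch of a lightweight, CPU-only gesture recognition loop.
# The model file, input layout (a short clip of RGB frames), and label list
# are hypothetical; the real published model may expect a different format.
import collections

import cv2
import numpy as np
import onnxruntime as ort

WINDOW = 16   # assumed clip length in frames
SIZE = 224    # assumed input resolution
LABELS = ["hello", "thanks", "bank"]  # hypothetical subset of the gesture vocabulary

session = ort.InferenceSession("sign_model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

frames = collections.deque(maxlen=WINDOW)
cap = cv2.VideoCapture(0)  # standard webcam on an ordinary PC

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (SIZE, SIZE))
    frames.append(frame.astype(np.float32) / 255.0)

    if len(frames) == WINDOW:
        # Shape (1, T, H, W, C); a real model may expect channels-first input.
        clip = np.expand_dims(np.stack(frames), axis=0)
        logits = session.run(None, {input_name: clip})[0]
        idx = int(np.argmax(logits))
        name = LABELS[idx] if idx < len(LABELS) else str(idx)
        print("recognized gesture:", name)
        frames.clear()  # step the window; at ~30 fps this yields a few predictions per second

cap.release()
```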
