Priyanjali Gupta, a third-year engineering student at India’s Vellore Institute of Technology (VIT), has created an impressive artificial intelligence model capable of translating American Sign Language (ASL) into English in real time.
Priyanjali credits data scientist Nicholas Renotte’s video on Real-Time Sign Language Detection as the inspiration for her model. She built it using the TensorFlow Object Detection API, translating hand gestures via transfer learning from a pre-trained model called SSD MobileNet.
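The full TensorFlow Object Detection API workflow involves pipeline configuration files and checkpoint downloads, but the core idea of transfer learning from a MobileNet-style backbone can be sketched in plain Keras. Everything below is illustrative, not her actual code: `NUM_SIGNS` is a hypothetical class count, and `weights=None` stands in for the pre-trained ImageNet weights so the sketch stays offline.

```python
import tensorflow as tf

NUM_SIGNS = 26  # hypothetical: one class per ASL letter

# Pre-trained backbone; in real transfer learning you would pass
# weights="imagenet" (None here only keeps the sketch offline).
backbone = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights=None)
backbone.trainable = False  # freeze the backbone: only the new head is trained

# New classification head trained on the custom sign-language dataset
model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_SIGNS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

Freezing the backbone is what makes a small, hand-collected dataset workable: only the final dense layer's weights need to be learned from scratch.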
“The dataset is manually made with a computer webcam and given annotations. The model, for now, is trained on single frames. To detect videos, the model has to be trained on multiple frames, for which I’m likely to use Long short-term memory (LSTM) networks,” said Priyanjali in an interview with Analytics Drift.
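The LSTM extension she describes would treat a sign not as a single snapshot but as a sequence of frames. A minimal Keras sketch of that idea follows; the sequence length, feature size, and class count are assumed values, not details from the interview, and the per-frame feature vectors would in practice come from a backbone like the one used for single-frame detection.

```python
import tensorflow as tf

FRAMES, FEATURES, NUM_SIGNS = 30, 1280, 26  # hypothetical sizes

# Each sample is a sequence of per-frame feature vectors; the LSTM
# models hand motion across frames rather than one static pose.
seq_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(FRAMES, FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(NUM_SIGNS, activation="softmax"),
])
seq_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

The difference from the single-frame setup is only the input shape: a `(frames, features)` sequence instead of one image, which lets the network distinguish signs that share a hand shape but differ in movement.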
She also noted that building a deep learning model dedicated to sign language recognition is difficult, but said she is confident the open-source community will find a solution soon, making models built specifically for sign languages possible in the future.
In 2016, two University of Washington students, Thomas Pryor and Navid Azodi, developed a pair of gloves called ‘SignAloud’ that could convert sign language into speech or text. Their SignAloud entry won the Lemelson-MIT Student Prize.
Student in India develops AI model that turns sign language to English | Inquirer