Abstract:
Sign language (SL) is a visual language that people with speech and hearing disabilities
use to communicate in their everyday conversations. It is an entirely visual mode of
communication with its own native grammar. Unfortunately, learning and practicing sign
language is not widespread in our society; this research therefore presents a prototype for
sign language recognition. Hand detection was used to build a system that can serve as a
learning tool for sign language beginners. In this work, we have created an improved Deep
CNN model that can recognize which letter, word, or digit of American Sign Language (ASL)
is being signed from an image of a signing hand [1]. We extracted features from the images
using Transfer Learning and built a model using a Deep Convolutional Neural Network
(Deep CNN), with TensorFlow and Keras as the framework. We evaluated the proposed
model on both a custom dataset and an existing dataset. Our improved Deep CNN model
yields an error rate of only 4.95%. In addition, we compared the improved Deep CNN model
with other traditional methods; it achieved an accuracy of 95% and outperformed the
other models.
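The pipeline summarized above (a transfer-learned feature extractor feeding a classifier head, built with TensorFlow and Keras) could be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the backbone choice (MobileNetV2), the input size, and the 36-class output (26 letters plus 10 digits) are assumptions, and `weights=None` is used only so the sketch runs offline, whereas transfer learning would load pretrained weights such as `weights="imagenet"`.

```python
import tensorflow as tf
from tensorflow import keras

NUM_CLASSES = 36  # assumption: 26 ASL letters + 10 digits


def build_transfer_model(num_classes=NUM_CLASSES, input_shape=(224, 224, 3)):
    # Pretrained backbone used as a frozen feature extractor
    # (transfer learning). weights=None keeps this sketch offline;
    # in practice weights="imagenet" would be loaded.
    base = keras.applications.MobileNetV2(
        input_shape=input_shape, include_top=False, weights=None)
    base.trainable = False  # freeze the backbone's weights

    # Classifier head trained on the sign-language dataset
    model = keras.Sequential([
        base,
        keras.layers.GlobalAveragePooling2D(),
        keras.layers.Dropout(0.3),
        keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


model = build_transfer_model()
```

Freezing the backbone means only the small dense head is trained, which is what makes transfer learning practical on a modest custom dataset like the one described.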