Abstract:
This project presents a sign language conversation interpreter system for people with speech disabilities. With it, a person who does not know sign language can hold a conversation with a speech-disabled person. The system uses efficient methods to convert sign language into text. We collected and used our own dataset of Bangla Sign Language hand gestures. Input is captured by a webcam, recognized by a neural network, and converted into text output based on the gestures. We built a convolutional neural network (CNN) similar to the MNIST classification model; it can run on both TensorFlow and Keras, and was trained with Keras on video-stream data. In a live video stream, the interpreter can quickly convert signs into text.
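As an illustration of the kind of model described above, the following is a minimal sketch of an MNIST-style CNN in Keras; the input size (64x64 grayscale hand crops) and the number of sign classes (NUM_CLASSES) are assumptions for illustration, not the project's exact configuration.

    # Minimal sketch of an MNIST-style CNN in Keras.
    # Assumptions: 64x64 grayscale hand crops, NUM_CLASSES sign classes;
    # the project's exact architecture and hyperparameters may differ.
    from tensorflow import keras
    from tensorflow.keras import layers

    NUM_CLASSES = 36  # hypothetical number of Bangla sign classes

    model = keras.Sequential([
        layers.Input(shape=(64, 64, 1)),              # grayscale hand crop
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])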
To detect the hand as an object, we used the histogram back projection method; for better results, we also computed the gray-level representation of the hand.
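The hand segmentation step can be sketched with OpenCV as follows; the idea of sampling a small hand region to build the colour histogram, the threshold value, and the function names build_hand_histogram and segment_hand are illustrative assumptions rather than the project's exact pipeline.

    # Sketch of histogram back projection for hand segmentation with OpenCV.
    # Assumption: a small region of the user's hand is sampled first to build
    # the skin-colour histogram; the project's exact steps may differ.
    import cv2
    import numpy as np

    def build_hand_histogram(roi_bgr):
        """Build a hue-saturation histogram from a sampled hand region."""
        hsv = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [180, 256], [0, 180, 0, 256])
        cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
        return hist

    def segment_hand(frame_bgr, hand_hist):
        """Back-project the histogram onto a frame and threshold the result."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        back_proj = cv2.calcBackProject([hsv], [0, 1], hand_hist,
                                        [0, 180, 0, 256], 1)
        # Smooth and threshold the back-projection to get a binary hand mask.
        disc = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 11))
        cv2.filter2D(back_proj, -1, disc, back_proj)
        _, mask = cv2.threshold(back_proj, 50, 255, cv2.THRESH_BINARY)
        # Gray-level version of the masked hand, used as the classifier input.
        gray = cv2.cvtColor(cv2.bitwise_and(frame_bgr, frame_bgr, mask=mask),
                            cv2.COLOR_BGR2GRAY)
        return mask, gray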