Abstract:
Rather than spoken words, sign languages use the visual-manual modality to communicate:
meaning is conveyed through manual articulation together with non-manual markers. Sign
languages are used by deaf and hard-of-hearing people all over the world. Having their
own grammar and lexicon, they are fully natural languages, so a machine translator is
needed to enable signers to communicate with the broader public. Computer vision
techniques are now widely used to translate sign languages into forms that everyone can
understand. We have used
deep learning methods to detect Bangla Sign Language, working with a custom-made dataset
of 2,100 images covering 42 Bangla words. We employed Convolutional Neural Network (CNN)
models to classify the words, since CNNs deliver highly accurate results on image
classification tasks.
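As an illustration only (the abstract does not specify the architecture), a small CNN for this 42-class task could be sketched as follows; the framework (Keras), layer sizes, and hyperparameters are all assumptions, not the authors' actual model.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 42  # one class per Bangla word in the dataset
IMG_SIZE = 64     # assumed input resolution; not stated in the abstract

# A generic small CNN: two conv/pool stages followed by a dense classifier.
model = models.Sequential([
    layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # class probabilities
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",  # expects one-hot labels
              metrics=["accuracy"])
```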
Before these models can be used, the image data must be preprocessed. Data preparation
relies on a few specific techniques: RGB conversion, filtering, resizing and scaling, and
categorization. After these steps, the image data is ready for the classifier algorithms,
as sketched below.
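A minimal sketch of such a preparation pipeline, assuming OpenCV and NumPy (the abstract does not name the libraries, and the target resolution is illustrative):

```python
import cv2
import numpy as np

IMG_SIZE = 64     # assumed target resolution; not stated in the abstract
NUM_CLASSES = 42  # one class per Bangla word

def preprocess(path: str) -> np.ndarray:
    """Apply the preparation steps named above to a single image file."""
    img = cv2.imread(path)                       # OpenCV loads images as BGR
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)   # RGB conversion
    img = cv2.GaussianBlur(img, (3, 3), 0)       # filtering (noise reduction)
    img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))  # resizing
    return img.astype(np.float32) / 255.0        # scaling pixels to [0, 1]

def one_hot(label_index: int) -> np.ndarray:
    """Categorization: encode a word label as a one-hot vector."""
    vec = np.zeros(NUM_CLASSES, dtype=np.float32)
    vec[label_index] = 1.0
    return vec
```

Images preprocessed this way can be stacked into an array and fed, together with their one-hot labels, to a CNN classifier such as the one sketched earlier.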