Abstract:
The number of deaf and mute people worldwide is increasing at an alarming rate. Approximately 2.6 million people in Bangladesh are unable to communicate with others through spoken language. This research investigates the development of a system for translating Bangla Sign Language (BdSL) into textual Bangla characters. This paper
aims to bridge the communication gap between the deaf and hearing communities in Bangladesh. The system seeks to recognise sign language more accurately and translate it into Bangla characters. We compiled a dataset of 11,422 Bangla sign language images, encompassing 36 Bangla alphabets and 10 numerals. Of these, 1,906 are original images that we personally collected from a variety of places and people. This diverse dataset is intended to ensure robust and accurate sign language recognition. This research employs five pre-trained deep learning models: ResNet50, InceptionV3, Xception, VGG16, and a hybrid model that combines
ResNet50 and InceptionV3. The implementation and evaluation of these models are discussed in detail. Among the tested models, the hybrid model (ResNet50 + InceptionV3) achieved the highest accuracy at 96%. The Xception model achieved 95%, while VGG16 reached 94%. The InceptionV3 model followed at 93%, and the ResNet50 model recorded the lowest accuracy at 91%. Due to its superior
performance, the hybrid model is deployed in the web application developed as part of this work. The web application demonstrates the conversion of Bangla Sign Language images to Bangla characters, making it easier for the hearing community to communicate with the deaf and mute community.
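To make the fusion idea concrete, the following is a minimal sketch of how such a hybrid could be assembled in Keras by concatenating pooled features from frozen ResNet50 and InceptionV3 backbones. It is an illustration under stated assumptions rather than the authors' exact architecture: the 46-class output (36 alphabets plus 10 numerals) comes from the abstract, while the 224x224 input size, the frozen backbones, and the dense classification head are assumptions made here for illustration.

```python
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import ResNet50, InceptionV3
from tensorflow.keras.applications.resnet50 import preprocess_input as resnet_pre
from tensorflow.keras.applications.inception_v3 import preprocess_input as incep_pre

NUM_CLASSES = 46  # 36 Bangla alphabets + 10 numerals (from the abstract)

inputs = layers.Input(shape=(224, 224, 3))  # input size is an assumption

# Two ImageNet-pretrained backbones, frozen for transfer learning.
resnet = ResNet50(include_top=False, weights="imagenet", pooling="avg")
inception = InceptionV3(include_top=False, weights="imagenet", pooling="avg")
resnet.trainable = False
inception.trainable = False

# Each backbone applies its own preprocessing to the shared input image.
r = resnet(resnet_pre(inputs))
i = inception(incep_pre(inputs))

# Fuse the two pooled feature vectors and classify.
x = layers.Concatenate()([r, i])
x = layers.Dense(256, activation="relu")(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

With the backbones frozen, only the small fused head is trained at first; the same concatenation pattern also allows later fine-tuning of the upper backbone layers if the dataset supports it.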