Abstract:
This paper describes the development of a Bangla Sign Language (BSL) recognition system using advanced deep learning methods. The project aims to improve communication for the Bangla-speaking deaf community through a system that recognises sign language accurately in real time. First, a dataset of 1,745 images of BSL hand signs was collected and preprocessed with normalisation and segmentation to improve data quality.
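As a minimal sketch of this preprocessing stage (assuming OpenCV and NumPy, a 224x224 target size, and Otsu thresholding as a stand-in for hand segmentation; the paper does not specify these details), the pipeline might look like:

import cv2
import numpy as np

def preprocess(path: str, size=(224, 224)) -> np.ndarray:
    # Hedged sketch: the exact normalisation and segmentation steps
    # used in the project are not specified here.
    img = cv2.imread(path)                      # load BGR image
    img = cv2.resize(img, size)                 # unify input dimensions
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Otsu thresholding as an illustrative segmentation step
    _, mask = cv2.threshold(gray, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    img = cv2.bitwise_and(img, img, mask=mask)  # keep the hand region
    return img.astype(np.float32) / 255.0       # normalise to [0, 1]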
Three models, ResNet50, VGG19, and DenseNet201, were implemented and evaluated. DenseNet201 performed best, achieving the highest accuracy of 89.03% and handling complex hand signs more reliably; VGG19 reached 73% and ResNet50 57%.
The approach fine-tuned DenseNet201, whose densely connected layers encourage feature reuse and improve gradient flow, which strengthened recognition performance.
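A minimal fine-tuning sketch, assuming a Keras/TensorFlow pipeline, ImageNet weights, a 224x224 input size, and a placeholder class count (none of which are stated in this abstract), could be:

import tensorflow as tf
from tensorflow.keras.applications import DenseNet201
from tensorflow.keras import layers, models

NUM_CLASSES = 38  # hypothetical number of BSL sign classes

base = DenseNet201(weights="imagenet", include_top=False,
                   input_shape=(224, 224, 3))
base.trainable = False  # freeze pretrained features first

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])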
To verify reliability, the system was evaluated extensively using accuracy, recall, and F1-score measures. The model was also optimised for real-time applications and integrated with an easy-to-use web interface to broaden accessibility.
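As an illustration of how such metrics are typically computed (assuming scikit-learn and placeholder label arrays; these are not the project's data), one might write:

from sklearn.metrics import accuracy_score, recall_score, f1_score

y_true = [0, 1, 2, 2, 1, 0]  # ground-truth sign classes (illustrative)
y_pred = [0, 1, 2, 1, 1, 0]  # model predictions (illustrative)

print("Accuracy:", accuracy_score(y_true, y_pred))
print("Recall:  ", recall_score(y_true, y_pred, average="macro"))
print("F1-score:", f1_score(y_true, y_pred, average="macro"))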
This report covers the in-depth literature review, methodology, implementation, system testing, and evaluation, and discusses the project's social and environmental impacts. Ethical considerations and plans for long-term use are also addressed to ensure the system is deployed responsibly. The project demonstrates that deep learning can produce robust BSL recognition systems that help the deaf community communicate more effectively and feel more included. The results highlight the importance of advanced neural network architectures and careful data preparation for strong performance in sign language recognition tasks.