
Exploring Deep Learning Architecture For Complex Hand Sign Language


dc.contributor.author Islam, Gazi Rashidul
dc.date.accessioned 2025-09-14T06:01:50Z
dc.date.available 2025-09-14T06:01:50Z
dc.date.issued 2024-07-13
dc.identifier.uri http://dspace.daffodilvarsity.edu.bd:8080/handle/123456789/14452
dc.description Project report en_US
dc.description.abstract This report describes a Bangla Sign Language (BSL) recognition system built with advanced deep learning methods. The project aims to improve communication for the deaf Bangla-speaking community through accurate, real-time sign recognition. A dataset of 1,745 images of BSL hand signs was collected and preprocessed with normalization and segmentation. Three models (ResNet50, VGG19, and DenseNet201) were implemented and evaluated. DenseNet201 performed best, achieving the highest accuracy of 89.03% and recognising complex hand signs most reliably, compared with 73% for VGG19 and 57% for ResNet50. The approach fine-tuned DenseNet201, whose densely connected layers improve feature reuse and gradient flow and thereby recognition quality. To verify reliability, the system was evaluated extensively using accuracy, recall, and F1-score. The model was also optimized for real-time operation and integrated with a user-friendly web interface to improve accessibility. The report covers the literature review, methodology, implementation, system testing, and evaluation, and discusses the project's social and environmental impact, together with ethical considerations and plans for sustainable, responsible use. The project demonstrates that deep learning can power robust BSL recognition systems that help the deaf community communicate and feel more included, and the results underline the importance of advanced neural network architectures and careful data preparation for sign-language recognition tasks. en_US
dc.description.sponsorship DIU en_US
dc.language.iso en_US en_US
dc.publisher Daffodil International University en_US
dc.subject Human–computer interaction (HCI) en_US
dc.subject Deep learning architecture en_US
dc.title Exploring Deep Learning Architecture For Complex Hand Sign Language en_US
dc.type Other en_US
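The abstract reports per-model evaluation with accuracy, recall, and F1-score. As a minimal, framework-free illustration of how those metrics are computed from a model's predictions, the sketch below derives them per class; the toy labels are illustrative assumptions, not the project's BSL data:

```python
# Hedged sketch: accuracy, per-class precision/recall/F1 from predicted
# vs. true labels. Pure Python; class names "A"/"B"/"C" are toy stand-ins.
from collections import Counter

def per_class_metrics(y_true, y_pred):
    """Return ({label: (precision, recall, f1)}, overall_accuracy)."""
    labels = sorted(set(y_true) | set(y_pred))
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1          # correct prediction for class t
        else:
            fp[p] += 1          # predicted p, but true class was t
            fn[t] += 1          # missed an instance of class t
    metrics = {}
    for lab in labels:
        prec = tp[lab] / (tp[lab] + fp[lab]) if (tp[lab] + fp[lab]) else 0.0
        rec = tp[lab] / (tp[lab] + fn[lab]) if (tp[lab] + fn[lab]) else 0.0
        f1 = 2 * prec * rec / (prec + rec) if (prec + rec) else 0.0
        metrics[lab] = (prec, rec, f1)
    accuracy = sum(tp.values()) / len(y_true)
    return metrics, accuracy

# Toy example with three sign classes:
y_true = ["A", "A", "B", "B", "C", "C"]
y_pred = ["A", "B", "B", "B", "C", "A"]
m, acc = per_class_metrics(y_true, y_pred)  # acc = 4/6
```

In practice a framework routine (e.g. scikit-learn's classification report) would be used over the 1,745-image test split, but the arithmetic is the same.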

