Sign Language Detection Using Deep Learning

dc.contributor.author Rahman, Oliur
dc.date.accessioned 2026-04-12T04:07:54Z
dc.date.available 2026-04-12T04:07:54Z
dc.date.issued 2025-01-11
dc.identifier.citation CSE en_US
dc.identifier.uri http://dspace.daffodilvarsity.edu.bd:8080/handle/123456789/16673
dc.description Thesis en_US
dc.description.abstract Sign language detection using deep learning is an advanced approach to recognizing sign languages with artificial intelligence (AI) methods such as CNNs and RNNs, which identify hand and arm gestures and postures in images. This work is therefore concerned with the design of sign language recognition technology aimed at improving communication for the hearing- and speech-impaired population. It applies deep learning to still images as well as video sequences of American Sign Language (ASL) gestures. To classify the gestures, architectures including DenseNet, MobileNet, InceptionV3, VGG16, VGG19, and CNNs were used, and the system translates sign images into human-readable text. A dataset of approximately 87,000 images of ASL symbols is used in this work, with early-stage preprocessing and random augmentation applied to all images. The system improves communication adaptability and accessibility for the deaf and hard of hearing: it lets the user upload images or record a video feed in real time, providing prompt recognition feedback and a choice of models. Experimental outcomes show that DenseNet yielded the highest gesture-classification accuracy among the models used in this work, while InceptionV3 can be effective for multi-scale feature extraction. The performance of the resulting system was assessed using metrics including accuracy, precision, recall, and F1-score, with cross-validation to ensure reliability. The outcomes of this study support the applicability of deep learning to assistive devices for sign language interpretation. Future enhancements include incorporating more gestures, integrating the sign language system into a mobile application, and adapting the API interface so that it is suitable for a wide range of Android devices. This project supports efforts toward the development of accessible communication tools for those with hearing and speech impediments. en_US
dc.description.sponsorship DIU en_US
dc.language.iso en_US en_US
dc.publisher Daffodil International University en_US
dc.subject Human-Computer Interaction en_US
dc.subject Gesture Recognition en_US
dc.subject Computer Vision en_US
dc.subject Sign Language Recognition en_US
dc.title Sign Language Detection Using Deep Learning en_US
dc.type Thesis en_US
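
The thesis text is not included in this record, so the following is only a minimal sketch of the transfer-learning setup the abstract describes: a frozen, ImageNet-pretrained DenseNet121 backbone with random augmentation and a new softmax head. NUM_CLASSES = 29 is an assumption (the ~87k-image Kaggle ASL Alphabet dataset has 29 classes: A-Z, space, delete, nothing), and all other identifiers are illustrative, not taken from the thesis.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 29        # assumption: Kaggle ASL Alphabet dataset (29 classes)
IMG_SIZE = (224, 224)   # DenseNet121's standard input resolution

# Frozen DenseNet121 backbone pretrained on ImageNet; only the new
# classification head is trained, as in a typical transfer-learning setup.
base = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False

model = models.Sequential([
    layers.Input(shape=IMG_SIZE + (3,)),
    # Random augmentation, echoing the "random augmentation" step in the abstract.
    layers.RandomRotation(0.05),
    layers.RandomZoom(0.1),
    # DenseNet's own preprocessing (scale to [0, 1], ImageNet normalization).
    layers.Lambda(tf.keras.applications.densenet.preprocess_input),
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Swapping DenseNet121 for MobileNet, InceptionV3, VGG16, or VGG19 in the sketch above only requires changing the backbone constructor and its matching preprocess_input, which is one plausible way the abstract's model comparison could be run.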
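The abstract also reports accuracy, precision, recall, and F1-score. A minimal sketch of how such metrics are commonly computed from held-out labels and predictions, assuming scikit-learn (not confirmed by the thesis); y_true and y_pred are placeholder arrays:

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

# Placeholder labels and predictions; macro averaging weights
# every gesture class equally regardless of class frequency.
y_true = np.array([0, 1, 2, 2, 1])
y_pred = np.array([0, 1, 2, 1, 1])

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred, average="macro"))
print("recall   :", recall_score(y_true, y_pred, average="macro"))
print("f1-score :", f1_score(y_true, y_pred, average="macro"))
```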

