Abstract:
Hearing loss is a major barrier to everyday life for deaf people. Approximately 2.6 million people in
Bangladesh suffer from hearing loss, and for them sign language plays a vital
role in communicating with others. However, traditional methods of sign language interpretation are limited
by availability, cost, and accessibility. This paper proposes an approach to sign language
detection using MediaPipe, a cross-platform framework for building pipelines of machine
learning and computer vision algorithms. The proposed model applies machine
learning algorithms to detect Bengali and English letters, words, and numbers from sign
language gestures captured by a webcam with high efficiency. The model is trained on a dataset
of over 135,000 hand gesture images and achieves more than 98% recognition accuracy. It
also remains robust under variations in lighting, background, and hand posture, making it suitable
for real-world applications. The proposed system provides a low-cost, accessible, and real-time
sign language interpretation tool with great potential to bridge the communication
gap between hearing and deaf people in our country.
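To make the pipeline described above concrete, the sketch below shows one plausible preprocessing step: converting the 21 (x, y) hand landmarks that MediaPipe Hands reports per detected hand into a translation- and scale-invariant feature vector that a downstream classifier could consume. The function name, the normalization scheme, and the dummy data are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: turning 21 hand landmarks (as MediaPipe Hands reports them,
# one (x, y) pair per landmark in normalized image coordinates) into a
# translation- and scale-invariant feature vector for a gesture classifier.
# The normalization scheme here is an illustrative assumption, not the
# paper's exact method.

def landmarks_to_features(landmarks):
    """Normalize 21 (x, y) landmarks relative to the wrist (landmark 0)."""
    if len(landmarks) != 21:
        raise ValueError("expected 21 hand landmarks")
    wrist_x, wrist_y = landmarks[0]
    # Translate so the wrist is at the origin (removes hand position).
    rel = [(x - wrist_x, y - wrist_y) for x, y in landmarks]
    # Divide by the largest absolute coordinate so features lie in [-1, 1]
    # (removes hand size / distance from the camera).
    scale = max(max(abs(x), abs(y)) for x, y in rel) or 1.0
    return [v / scale for xy in rel for v in xy]

# Usage with dummy landmarks; a real pipeline would obtain these from
# mediapipe.solutions.hands on each webcam frame.
dummy = [(0.5 + 0.01 * i, 0.5 - 0.01 * i) for i in range(21)]
features = landmarks_to_features(dummy)
print(len(features))  # 42 values: (x, y) for each of the 21 landmarks
```

Feeding such normalized vectors, rather than raw pixels, to the classifier is one common way to gain the robustness to lighting, background, and hand position that the abstract claims, since the landmark coordinates abstract those factors away.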