Abstract:
My project aims to reduce the communication barrier between deaf and hearing people. The system is a CNN-based deep learning model that recognizes hand gestures: it captures images in real time and translates sign gestures into text, so that hearing people can understand sign language and communication becomes simpler through a computing system. A CNN is well suited to gesture recognition because it works effectively in image and pattern analysis. Real-time input is captured through a camera and processed frame by frame using OpenCV, while TensorFlow and Keras handle feature extraction and classification. A Flask backend ensures smooth integration with a web-based frontend that provides real-time text display and optional audio output through a TTS engine. During testing, my system maintained more than 90% recognition accuracy at low latency and is therefore fit for practical use. It contributes to a relatively underexplored field and shows that AI-driven systems can support communication between deaf and hearing people.
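The per-frame pipeline summarized above (capture a frame, preprocess it, classify it with a CNN) could be sketched as follows. This is a minimal illustration, not the project's actual code: the 64×64 input resolution, the label set, and the `predict_gesture` helper are assumptions, and a NumPy stand-in replaces both the OpenCV capture loop and the trained Keras model so the sketch is self-contained.

```python
import numpy as np

LABELS = ["hello", "thanks", "yes", "no"]  # hypothetical gesture label set
INPUT_SIZE = 64                            # assumed CNN input resolution

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Turn an HxWx3 camera frame into a normalized 1x64x64x1 model input."""
    gray = frame.mean(axis=2)                           # crude grayscale conversion
    h, w = gray.shape
    ys = np.linspace(0, h - 1, INPUT_SIZE).astype(int)  # nearest-neighbour resize
    xs = np.linspace(0, w - 1, INPUT_SIZE).astype(int)
    small = gray[np.ix_(ys, xs)]
    return (small / 255.0).astype(np.float32)[None, :, :, None]  # batch + channel dims

def predict_gesture(model_predict, frame: np.ndarray) -> str:
    """Run one frame through the classifier and return the top-scoring label."""
    probs = model_predict(preprocess(frame))
    return LABELS[int(np.argmax(probs))]

# Stand-in for a trained Keras model's predict(): fixed random projection + softmax.
rng = np.random.default_rng(0)
W = rng.normal(size=(INPUT_SIZE * INPUT_SIZE, len(LABELS)))
def fake_model(x):
    logits = x.reshape(x.shape[0], -1) @ W
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Synthetic 640x480 "camera frame"; in the real system this would come from
# OpenCV's capture loop, and fake_model would be the trained CNN.
frame = rng.integers(0, 256, size=(480, 640, 3)).astype(np.uint8)
print(predict_gesture(fake_model, frame))
```

In the deployed system, this per-frame prediction would run inside the Flask backend, with the returned label sent to the web frontend for text display and optional TTS playback.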