Abstract:
Human senses are our only medium for communicating with the environment. Our brain combines
the different senses and converts them into a meaningful whole. Humans have five senses: two
eyes to see, one nose to smell, two ears to hear, one tongue to taste, and skin to feel touch.
All of these senses are deeply connected with each other; one sense always serves as a backup
for another. Many thousands of people around the world are blind. They face many kinds of
hindrances in their everyday life, and their impaired sight makes travelling to places such as
shops, workplaces, and educational institutions a daily struggle. Hence, many papers have been
published on making navigation easier for blind or visually impaired people, each proposing
different methods for finding their way.
My paper presents a proposed system that helps the visually
impaired by detecting various objects and finding the right path to their destination. My
proposed model is mainly based on a well-known object detection and identification algorithm
that must be trained on a dataset. Step-by-step simulations using a dynamic object detection
algorithm yield better accuracy in detecting those objects in my proposed model. Hereby, my
paper presents the idea of an object detection system based on object extraction: segmentation
networks match detections against the recognized dataset and locate those objects in live
video-frame extractions, and the system reports them through an audio output.
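The final stage described above, turning located objects into an audio cue, can be sketched as a small function. This is a minimal illustration only, not the paper's implementation: the function name, the `(label, x_center)` detection format, and the left/centre/right split are all assumptions made for the example.

```python
def guidance_phrase(detections, frame_width):
    """Convert detector output into a short phrase for text-to-speech.

    detections: list of (label, x_center) tuples, where x_center is the
    horizontal pixel coordinate of the object's bounding-box centre.
    frame_width: width of the video frame in pixels.
    """
    if not detections:
        return "Path clear"
    phrases = []
    for label, x_center in detections:
        # Split the frame into left / centre / right thirds to tell the
        # user roughly where each detected object lies.
        if x_center < frame_width / 3:
            side = "on your left"
        elif x_center > 2 * frame_width / 3:
            side = "on your right"
        else:
            side = "ahead"
        phrases.append(f"{label} {side}")
    return ", ".join(phrases)
```

In a full system, the returned string would be passed to a text-to-speech engine so the user hears, for example, "door ahead, person on your right".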