Guiding and Navigation for the Blind using Deep Convolutional Neural Network Based Predictive Object Tracking
Sai Nikhil Alisetti1, Swarnalatha Purushotham2, Lav Mahtani3
1Sai Nikhil Alisetti, B.Tech, Department of Computer Science and Engineering, Vellore Institute of Technology VIT, Vellore (Tamil Nadu), India.
2Swarnalatha Purushotham, Associate Professor, Department of Computer Science and Engineering, Vellore Institute of Technology VIT, Vellore (Tamil Nadu), India.
3Lav Mahtani, B.Tech, Department of Computer Science and Engineering, Vellore Institute of Technology VIT, Vellore (Tamil Nadu), India.
Manuscript received on 16 December 2019 | Revised Manuscript received on 23 December 2019 | Manuscript Published on 31 December 2019 | PP: 306-313 | Volume-9 Issue-1S3 December 2019 | Retrieval Number: A10581291S319/19©BEIESP | DOI: 10.35940/ijeat.A1058.1291S319
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: Indoor and outdoor navigation is a difficult task for a visually impaired person, who would most of the time require human assistance. Existing solutions to this problem come in the form of smart canes and wearables. Both use on-board sensors for proximity and obstacle detection, together with a haptic or auditory feedback system that warns the user of stationary or incoming obstacles so that they do not collide with them as they move. This approach has many drawbacks: it is not yet a stand-alone device reliable enough for the user to trust while navigating, and when triggered frequently in crowded areas, the feedback system overwhelms the user with too many alerts, causing actual information to be lost. Our goal is to create a personalized assistant that the user can interact with naturally by voice, mimicking the aid of a human assistant while on the move. The system's object detection module, trained to a high accuracy, detects the boundaries of objects in motion in each frame; once a bounding box crosses the confidence threshold, the object in the box is recognized and the information is passed to the system core, which decides whether it needs to be relayed to the user. If so, the information is converted to speech and handed to the voice interaction module. The voice interaction module is consent based: it accepts and responds to navigation queries from the user and intelligently informs them about obstacles that need to be avoided. This ensures that only essential information is delivered as voice prompts, which the user can act on to navigate and can also query the assistant about for further detail.
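As a rough illustration of the per-frame detect, threshold, filter, and notify pipeline the abstract describes (not the authors' actual code), the sketch below shows such a loop using OpenCV's DNN module with a YOLO-family model. All file paths, thresholds, and the relevance rule in is_relevant are illustrative assumptions.

# Minimal sketch of the per-frame detect -> filter -> notify loop described
# above. All names, paths, and thresholds (CONF_THRESHOLD, is_relevant,
# labels.txt) are illustrative assumptions, not the authors' implementation.
import cv2

CONF_THRESHOLD = 0.6  # assumed minimum confidence for a "reliable" box
NMS_THRESHOLD = 0.4   # assumed non-maximum-suppression overlap threshold

# Load a YOLO-family network via OpenCV's DNN module (paths are placeholders).
net = cv2.dnn.readNetFromDarknet("yolo.cfg", "yolo.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

with open("labels.txt") as f:
    labels = [line.strip() for line in f]

def is_relevant(box, frame_shape):
    # Stand-in for the "system core" filter: forward only obstacles in the
    # user's path. Toy rule: the box bottom lies in the lower half of the
    # frame and its centre falls in the middle 60% horizontally.
    x, y, w, h = box
    frame_h, frame_w = frame_shape[:2]
    return (y + h) > frame_h * 0.5 and frame_w * 0.2 < (x + w / 2) < frame_w * 0.8

cap = cv2.VideoCapture(0)  # wearable camera stream
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    class_ids, confidences, boxes = model.detect(frame, CONF_THRESHOLD, NMS_THRESHOLD)
    for cid, conf, box in zip(class_ids, confidences, boxes):
        if is_relevant(box, frame.shape):
            # Hand the label to the voice-interaction module; a real system
            # would convert this to speech, rate-limit repeated alerts, and
            # wait for user consent before elaborating.
            print(f"obstacle ahead: {labels[int(cid)]} ({float(conf):.2f})")
cap.release()

The consent-based design in the abstract corresponds to the final step: rather than speaking every detection, the assistant forwards only filtered alerts and answers follow-up queries on request, which keeps crowded scenes from flooding the user with prompts.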
Keywords: Vision Processing, Medical Aid, Voice Assistant, Real Time Object Detection, YOLO9000 Model.
Scope of the Article: Deep Learning