SIAS (Sensory Impairment Assistive Software) presents an application of YOLO (You Only Look Once) object detection and Convolutional Neural Networks (CNNs). Built with Python-based deep learning techniques, the system leverages YOLO's speed and accuracy to identify objects in a live camera feed and announce them vocally to the user. It also offers additional features, including gesture-to-speech conversion to ease communication for speech- and hearing-impaired users, text-to-speech conversion, and image processing. Together, these form an innovative communication framework for deaf, mute, and blind users within a single compact device.
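The chapter does not include source code; the following is a minimal sketch of how the detect-and-announce loop described above could look in Python. It assumes the ultralytics YOLOv8 package, OpenCV for the camera feed, and pyttsx3 for offline text-to-speech; these specific libraries and parameters are illustrative choices, not details taken from the chapter.

```python
# Minimal sketch: YOLO object detection on a webcam feed with spoken announcements.
# Library choices (ultralytics, OpenCV, pyttsx3) are assumptions for illustration.
import cv2
import pyttsx3
from ultralytics import YOLO

model = YOLO("yolov8n.pt")       # pretrained COCO model (hypothetical choice)
engine = pyttsx3.init()          # offline text-to-speech engine
cap = cv2.VideoCapture(0)        # default camera

announced = set()                # avoid repeating the same object every frame
try:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = model(frame, verbose=False)[0]
        labels = {model.names[int(c)] for c in results.boxes.cls}
        for label in labels - announced:
            engine.say(f"{label} detected")
            engine.runAndWait()
        announced = labels       # announce again only when the scene changes
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
finally:
    cap.release()
    cv2.destroyAllWindows()
```

In a deployed assistive device, the announcement step would typically be rate-limited or prioritized so the user is not overwhelmed by continuous speech.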
Author(s) Details
Supriya C
Department of Information Science and Engineering, Acharya Institute of Technology, Bengaluru, Karnataka - 560107, India.

Janavi Mahesh
Department of Information Science and Engineering, Acharya Institute of Technology, Bengaluru, Karnataka - 560107, India.

Karan
Department of Information Science and Engineering, Acharya Institute of Technology, Bengaluru, Karnataka - 560107, India.

Keerthana K S
Department of Information Science and Engineering, Acharya Institute of Technology, Bengaluru, Karnataka - 560107, India.

Navyashree R
Department of Information Science and Engineering, Acharya Institute of Technology, Bengaluru, Karnataka - 560107, India.
Please see the book chapter here: https://doi.org/10.9734/bpi/mono/978-93-48859-98-3/CH16