A Google Glass Based Real-Time Scene Analysis For The Visually Impaired

1) The document describes an internship project that uses Google Glass and computer vision APIs to provide real-time scene analysis and descriptions for visually impaired users. It analyzes scenes through the Google Glass camera and converts the output of vision APIs into speech descriptions.
2) The methodology captures images with the Google Glass camera, sends them to a smartphone for processing with vision APIs, and converts the captions into speech for the user. User testing found the system helped identify locations and understand surroundings.
3) Advantages included minimal training, no visual cues needed, comfort, and real-time results under 1 second. The system aimed to provide independent scene recognition assistance through an easy-to-use wearable device.


Department of Computer Science & Engineering
Internship
“A Google Glass Based Real-Time Scene Analysis for the Visually Impaired”

Presented by:
Yathish Raj N.
(1AH19CS089)

Internal Guides:
Dr. Sivakumar D, Professor, Dept. of CSE, ACSCE
Mrs. Shruthi A, Professor, Dept. of CSE, ACSCE
Contents

• Introduction
• Objectives
• Literature Survey
• Problem Statement
• Methodology
• Snapshots
• Results & Discussion
• Conclusion
• References

Introduction

Google Glass:
• Google Glass is a brand of smart glasses with a prism projector for
display, a bone conduction transducer, a microphone, accelerometer,
gyroscope, magnetometer, ambient light sensor, proximity sensor, a
touchpad, and a camera. It can connect to other devices using a
Bluetooth connection, a micro USB, or a Wi-Fi connection.
• Application development for the device can be done using the Android
development platform and toolkit available for mobile devices running
Android OS.
• Blind and Visually Impaired People (BVIP) are likely to experience
difficulties with tasks that involve scene recognition. Wearable
technology has played a significant role in researching and evaluating
systems developed for and with the BVIP community.

Introduction

• The output of the Vision API is converted to speech, which is heard by
  the BVIP user wearing the Google Glass.
• This application helps the BVIP better recognize their surrounding
  environment in real-time, proving the device effective as a potential
  assistant for the BVIP.
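As an illustration of the caption-to-speech step (not the project's actual code), the JSON shape below is an assumption modeled on a typical "describe image" Vision API response; the sketch picks the highest-confidence caption and phrases it as a sentence a text-to-speech engine could read aloud:

```python
import json

def caption_to_speech_text(api_response: str) -> str:
    """Extract the best caption from a Vision-API-style JSON response
    and phrase it as a sentence suitable for a text-to-speech engine.
    The response shape here is an assumption for illustration."""
    data = json.loads(api_response)
    captions = data.get("description", {}).get("captions", [])
    if not captions:
        return "Sorry, I could not describe the scene."
    # Pick the caption the API is most confident about.
    best = max(captions, key=lambda c: c.get("confidence", 0.0))
    return f"I see {best['text']}."

sample = ('{"description": {"captions": '
          '[{"text": "a person walking down a hallway", "confidence": 0.92}]}}')
print(caption_to_speech_text(sample))
```

On the device itself, the resulting string would be handed to the platform's text-to-speech service rather than printed.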
Objectives
Google Glass:
• Google Glass is a wearable Android gadget that can be voice- and
  motion-controlled and projects information directly into the user's
  field of view.
• Through visual, audio, and location-based inputs, Google Glass
  delivers an augmented reality experience. For instance, a user may
  instantly get flight status information upon entering an airport.
• When the initial version was introduced in 2013, customers immediately
  worried that the glasses would be an invasion of privacy; Google Glass
  symbolised the unavoidable recording of daily life. Google first tried
  to reposition the glasses as a tool for professionals such as doctors
  or manufacturing employees. However, worries persisted, and in 2015
  Google stopped all development on the Glass project.
Literature Survey

“Google Glass Used as Assistive Technology: Its Utilization for Blind and
Visually Impaired People” by Ales Berger, Andrea Vokalova, Filip Maly, and
Petra Poulova, 2020 [1]
• The primary purpose of the paper is to present and test a developed
  application for basic navigation tasks, usable for the daily support
  of blind or visually impaired people.
• Google Glass represents a great opportunity not only for business
  enterprises, but also for medical organizations, educational
  institutions, and social services.
• Further research aims to improve the recognition of obstacles with a
  larger participant sample, focusing on the mentioned obstacles and
  their various forms.
Literature Survey

R. Schaer, D. Markonis, and H. Müller, “Facilitating medical information
search using Google Glass connected to a content-based medical image
retrieval system”, 2019 [2]
• Widmer et al. developed a medical information search system on Google
  Glass by connecting it to a content-based medical image retrieval
  system. The device takes a photo and sends it, along with keywords
  associated with the image, to a medical image retrieval system to
  retrieve similar cases, thus helping the user make an informed
  decision.
• As a preliminary assessment of usability, the authors tested the
  application under three conditions (images of the skin; printed CT
  scans and MRI images; and CT and MRI images acquired directly from an
  LCD screen) to explore whether using Google Glass affects the accuracy
  of the results returned by the medical image retrieval system.
Literature Survey
Smart Glasses for the Visually Impaired People by Esra Ali
Hassan and Tong Boon Tang -2017,[3]
• This paper presents a new design of assistive smart glasses for
  visually impaired students. The objective is to assist in multiple
  daily tasks by taking advantage of the wearable design format.
• As a proof of concept, the paper presents one example application,
  i.e. text recognition technology that can help with reading from
  hardcopy materials.
• The system design, working mechanism, and principles are discussed
  along with some experimental results. This new concept is expected to
  improve visually impaired students' lives regardless of their economic
  situation. Immediate future work includes assessing user-friendliness
  and optimizing the power management of the computing unit.
Methodology

Fig 1.0 Proposed Application Development

Methodology

• The system design diagram is shown in Fig. 2.0. The system can be
  divided into three major sections: the app on the Google Glass device,
  the smartphone, and the Vision API.
• The BVIP user interacts directly with the app on Google Glass. On
  receiving a user voice command, the camera image handler built into
  the app uses the camera on the smart glasses to capture an image of
  the user's surroundings. This image is compressed and then sent to the
  smartphone over a socket connection.
• Upon receiving the image from Google Glass, the server-side
  application on the smartphone decompresses the image. Captions for the
  decompressed image are then generated using the Vision API, whose
  response is received in JSON format by the server-side application.
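The slides do not specify the wire format used on the socket between the Glass and the smartphone. A minimal sketch of the compress-and-send step, assuming a length-prefixed zlib stream (an illustrative choice, not the project's actual protocol), could look like:

```python
import struct
import zlib

def pack_image(image_bytes: bytes) -> bytes:
    """Compress a captured image and prefix it with a 4-byte length,
    so the receiver knows how many bytes to read from the socket."""
    compressed = zlib.compress(image_bytes)
    return struct.pack("!I", len(compressed)) + compressed

def unpack_image(payload: bytes) -> bytes:
    """Reverse of pack_image: strip the length prefix and decompress
    the image before handing it to the Vision API call."""
    (length,) = struct.unpack("!I", payload[:4])
    return zlib.decompress(payload[4:4 + length])

# Round-trip check with stand-in image data.
fake_image = b"\xff\xd8" + b"pixel-data" * 100 + b"\xff\xd9"
assert unpack_image(pack_image(fake_image)) == fake_image
```

The length prefix matters because TCP is a byte stream: without it, the smartphone side cannot tell where one compressed image ends and the next begins.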
Methodology

Fig 2.0 System Design



Snapshots

• Home screen
• Main activity screen
• Describe scene: invocation screen
• Image captured using the camera on Glass
• Caption response
• Objects detected response
Results & Discussion

The following analyses were formulated by the study:

[Analysis charts 1)–4) were presented here.]
Results & Discussion

LIKERT SCALE ANALYSIS


The application was tested with the help of 50 students and 5 teachers
at the Roman and Catherine Lobo School for the Visually Impaired in
Mangalore, Karnataka, India, and its usefulness to the BVIP community
was determined.
Using the descriptions given by the device, the students could
accurately identify their current location within the school.
Advantages

• The user training period is minimal.
• The most significant advantage of the proposed system is that the
  user does not require any visual cues to use the application.
• No extra effort is required to use this device daily: the device is
  fairly simple to use.
• The application helps the user understand the scene: since the
  application generates captions of whatever scene the person is looking
  at, it was hypothesized that the application would help the user
  better understand their surroundings.
• Google Glass provides more comfort and usability when compared with
  smartphone apps for the visually impaired.
Conclusion

• The system is designed to be highly portable, easy to wear, and to
  work in real-time.
• The overall response time of the proposed application was measured at
  less than 1 second, thereby providing accurate results in real-time.
• The proposed application describes the scene and identifies the
  various objects present in front of the user.
• Further, there exists a possibility of moving the application entirely
  to Google Glass by removing the dependency on the smartphone.
  Currently, the smartphone is used to process the captured image before
  making the API calls to the Custom Vision API; this could be avoided
  by using the Android SDK for the Vision API directly on Google Glass.
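The sub-second response-time claim implies timing the pipeline end to end. A minimal sketch of per-stage timing (illustrative only; the stage names and stand-in functions here are assumptions, not the project's instrumentation):

```python
import time

def timed(stage_times: dict, name: str, fn, *args):
    """Run one pipeline stage, recording its wall-clock duration."""
    start = time.perf_counter()
    result = fn(*args)
    stage_times[name] = time.perf_counter() - start
    return result

# Stand-in stages; the real app would capture, transmit, caption, and speak.
stages = {}
image = timed(stages, "capture", lambda: b"fake-image-bytes")
caption = timed(stages, "caption", lambda img: "a person in a hallway", image)
total = sum(stages.values())
print(f"end-to-end: {total * 1000:.1f} ms, stages: {stages}")
```

Recording each stage separately (capture, network transfer, API call, speech) shows which step dominates the budget and whether moving processing on-device would actually help.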
References

[1] A. Berger, A. Vokalova, F. Maly, and P. Poulova, ‘‘Google glass used as assistive
technology its utilization for blind and visually impaired people,’’ in Proc. Int. Conf.
Mobile Web Inf. Syst. Cham, Switzerland: Springer, Aug. 2017, pp. 70–82.
[2] C. Van Lansingh and K. A. Eckert, ‘‘VISION 2020: The right to sight in 7 years?’’
Med. Hypothesis, Discovery Innov. Ophthalmol., vol. 2, no. 2, p. 26, 2013.
[3] N. A. Bradley and M. D. Dunlop, ‘‘An experimental investigation into wayfinding
directions for visually impaired people,’’ Pers. Ubiquitous Comput., vol. 9, no. 6, pp. 395–
403, Nov. 2005.
[4] F. Battaglia and G. Iannizzotto, ‘‘An open architecture to develop a handheld device
for helping visually impaired people,’’ IEEE Trans. Consum. Electron., vol. 58, no. 3, pp.
1086–1093, Aug. 2012.
[5] M. E. Meza-de-Luna, J. R. Terven, B. Raducanu, and J. Salas, ‘‘A social-aware
assistant to support individuals with visual impairments during social interaction: A
systematic requirements analysis,’’ Int. J. Hum.-Comput. Stud., vol. 122, pp. 50–60, Feb.
2019.

Thank You
