Android Based Application For Visually Impaired Using Deep Learning Approach
Haslinah Mohd Nasir1, Noor Mohd Ariff Brahin2, Mai Mariam Mohamed Aminuddin3,
Mohd Syafiq Mispan4, Mohd Faizal Zulkifli5
1,2,4,5Fakulti Teknologi Kejuruteraan Elektrik dan Elektronik, Universiti Teknikal Malaysia Melaka, Malaysia
3Fakulti Kejuruteraan Elektronik dan Kejuruteraan Komputer, Universiti Teknikal Malaysia Melaka. Malaysia
Corresponding Author:
Haslinah Mohd Nasir
Fakulti Teknologi Kejuruteraan Elektrik dan Elektronik
Universiti Teknikal Malaysia Melaka
Hang Tuah Jaya, 76100 Durian Tunggal, Melaka, Malaysia
Email: haslinah@utem.edu.my
1. INTRODUCTION
According to 2018 statistics from the World Health Organization (WHO), at least 2.2 billion people worldwide have a vision impairment [1]. Visually impaired people face many problems in their daily lives. They have difficulty recognizing and differentiating the objects around them, and therefore rely on guidance, especially for daily tasks. Among the most challenging tasks for the visually impaired are recognizing colour and shape and differentiating currency notes. Nowadays, many assistive technologies can act as a sighted guide and improve the quality of life of the visually impaired [2], [3]. Computer-based assistive technology is expected to help with these daily tasks. It includes screen-reading software, magnification software, dictation software, refreshable Braille displays, optical character recognition (OCR) systems, and many more [4]. Assistive technology has grown from simple devices into sophisticated high-technology solutions [5]-[7].
Most visually impaired people now own a smartphone, as it has become a basic necessity for every individual. Smartphones therefore provide a great platform for developing applications specifically designed to assist the visually impaired. A survey by Griffin-Shirley and her team found that people with visual impairments frequently use mobile applications for their daily activities [8]. Furthermore, they are looking for improvements and new applications that would make them less dependent on others. Manduchi conducted an experimental study on a specific mobile task, landmark detection using a mobile phone; his findings became a platform for designing technology that facilitates the visually impaired [9].
Many studies have developed assistive technology as Android applications, which are useful today and which visually impaired people can use to accomplish their daily routines. Tharkude et al. [10] and Parkhi et al. [11] proposed smart Android applications for blind people based on object detection. The application in [10] uses the mobile video camera to determine the direction of an object, gives voice instructions for the current location and direction, and warns of obstacles in front of the user. In [11], the authors designed object detection on images captured by the smartphone's camera; the performance of the application is quite good, but it depends on the quality of the smartphone's built-in camera. Kadam et al. [12] designed an application that provides speech output for objects detected using an artificial neural network (ANN) classification approach; however, the application gives variable accuracy and still needs improvement to be more efficient. In addition, a mobile application called Intelligent Eye, with light detection, colour detection, object detection, and banknote recognition features, was developed using a deep learning CNN architecture [13]. A user acceptance survey showed that the application is good in general and well accepted.
Deep learning has outstanding performance and provides high-quality intelligent services in mobile device applications. It is mainly applied to image and voice processing and can be leveraged to make people's daily lives more convenient [14]. Deep learning approaches such as the CNN model are known to provide high accuracy in image classification [15]. The model outputs numerical values between 0 and 1, which allows fast and highly accurate classification.
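To illustrate why these outputs between 0 and 1 are convenient for classification, the sketch below builds a toy forward pass: one convolution (the core CNN operation) followed by a softmax layer that maps raw scores to class probabilities. This is a minimal NumPy illustration under assumed shapes and random weights, not the paper's actual model.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution over a single-channel image."""
    h, w = kernel.shape
    H, W = image.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + h, j:j + w] * kernel)
    return out

def softmax(logits):
    """Map raw scores to probabilities in [0, 1] that sum to 1."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

rng = np.random.default_rng(0)
image = rng.random((8, 8))          # stand-in for a captured photo
kernel = rng.random((3, 3))         # one learned filter
features = conv2d(image, kernel).flatten()

# Hypothetical dense-layer weights for 3 classes (e.g. three banknotes).
W = rng.random((3, features.size))
probs = softmax(W @ features)
print(probs)  # each value lies in [0, 1]; the class with the largest value wins
```

In the real application, the class with the highest probability is taken as the prediction and its score reported as the confidence level.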
This paper proposes a mobile application based on a deep learning approach, specifically a convolutional neural network (CNN), to help the visually impaired in their daily lives. The training set for the CNN was developed on Google Dataset, which was developed by Google for big data analytics. The CNN needs cloud storage capable of analysing big data for decision making, classification, and prediction with high accuracy [16]. The proposed application does not require an internet connection to operate, and it consists of three types of detection, as described in the methodology section. The developed application allows the visually impaired to capture things around them with their own smartphone and helps them recognize the captured object. This will make their lives easier, without depending on the people around them, who are sometimes insincere and take advantage of them.
This paper is organized as follows: section 2 describes the method used to develop the application, section 3 covers the results and discussion, and section 4 concludes and outlines future recommendations.
2. RESEARCH METHOD
2.1. Overview of the application
In the Android mobile application development phase, several useful assistive functions are combined into a single application:
a. Object detection: works on the image captured by the mobile phone's camera. The model is trained on a database of objects to identify the image, helping a person with visual impairment find their items.
b. Colour detection: works on the captured image; the colour name is determined from the RGB values of the detected image. This feature may help in daily routines such as selecting clothes and shoes by colour.
c. Currency note detection: the image taken from the camera is compared with the trained dataset for recognition. This helps visually impaired people avoid being cheated by others.
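The colour-detection mode described above can be sketched as a nearest-neighbour lookup of the captured RGB value against a palette of named colours. The palette below is hypothetical; the paper does not specify which colour names the application supports.

```python
# Hypothetical reference palette; the application's actual colour set is not specified.
PALETTE = {
    "red": (255, 0, 0),
    "green": (0, 128, 0),
    "blue": (0, 0, 255),
    "yellow": (255, 255, 0),
    "black": (0, 0, 0),
    "white": (255, 255, 255),
}

def nearest_colour(rgb):
    """Return the palette name with the smallest squared RGB distance."""
    return min(
        PALETTE,
        key=lambda name: sum((a - b) ** 2 for a, b in zip(PALETTE[name], rgb)),
    )

print(nearest_colour((240, 20, 25)))  # a reddish pixel -> prints "red"
```

The returned name would then be passed to the audio output so the user hears, for example, "red".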
Each feature of the application is paired with a verbal audio message for user notification, as this is essential for a visually impaired person to identify the object [17]-[19]. Overall, the application's features include:
a. An Android-based mobile application that can be accessed anytime and anywhere without an internet connection. This helps secure the user's personal data from third parties [20], [21].
b. User input by swipe gesture and speech, specially designed for easy use by the visually impaired [22].
c. A deep learning CNN for fast and accurate image processing for detection and prediction.
d. Three different modes: object detection, colour detection, and currency detection. The user can switch modes at any time by swiping or by voice input.
e. Verbal audio output to notify the user of the identified objects.
Int J Artif Intell, Vol. 10, No. 4, December 2021: 879 - 888
Figure 1 shows the flowchart of how the application operates, from startup until it is ready for use. It starts at the main menu, which offers the three modes; the user can choose one by tapping the menu or by speech input. The camera then turns on automatically, and the user can capture an image for detection and prediction using the CNN model. The application notifies the user of the result through the audio output.
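The flow in Figure 1 can be sketched as a small dispatcher that routes the captured image to the detector for the selected mode and voices the result. The function and detector names below are hypothetical; in the real application they would be wired to the camera, the CNN model, and the Android text-to-speech engine.

```python
def handle_capture(mode, image, detectors, speak):
    """Route a captured image to the detector for the selected mode
    and announce the result through the audio output."""
    if mode not in detectors:
        speak("Unknown mode")
        return None
    label = detectors[mode](image)
    speak(f"{mode}: {label}")
    return label

# Stub detectors standing in for the CNN-based classifiers.
detectors = {
    "object": lambda img: "cup",
    "colour": lambda img: "red",
    "currency": lambda img: "RM10",
}

spoken = []  # stands in for the text-to-speech engine
handle_capture("currency", image=None, detectors=detectors, speak=spoken.append)
print(spoken)  # ['currency: RM10']
```

Keeping the mode switch, detection, and speech output decoupled like this mirrors the flowchart: only the selected branch runs for each capture, and every branch ends at the audio notification.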
… the image classification will produce high accuracy. In the second stage, the CNN model algorithm was designed using TensorFlow [23]; the parameters were processed over 10,000 iterations with different numbers of epochs, and the training accuracy results for 10 and 100 epochs are shown in section 3. In the last stage, image classification performance was evaluated: different images were tested, and the accuracy percentage of the CNN image classification was calculated.
The application was tested in real time with currency note and colour detection. The results, with accuracy percentages, are presented in Table 1 and Table 2. Based on both tables, the detection accuracy is relatively good, with high confidence and accuracy levels; the application is able to detect the captured image with high accuracy.
4. CONCLUSION
This paper has presented the development of an Android application for people with visual impairment, with some novelty compared to existing applications. Based on the results, the application is able to predict the image captured by the user with high accuracy, up to 99.95%. In addition, site testing with visually impaired people has been carried out, and positive feedback was received. As future work, additional features can be added, along with the inclusion of internet of things (IoT) capability for a more advanced application.
ACKNOWLEDGEMENTS
The authors would like to thank Universiti Teknikal Malaysia Melaka for the support received in the accomplishment of this work. A special thanks also goes to the visually impaired people at the reflexology centre in Melaka Mall, Melaka, Malaysia.
REFERENCES
[1] World Health Organization, "Blindness and visual impairment." [Online]. Available: https://www.who.int/en/news-room/fact-sheets/detail/blindness-and-visual-impairment. [Accessed: 5 October 2020].
[2] American Foundation for the Blind, "Blindness and low vision." [Online]. Available: https://www.afb.org/blindness-and-low-vision/. [Accessed: 3 March 2021].
[3] University of Illinois Library, "Visual impairment." [Online]. Available: https://guides.library.illinois.edu/blind/visualimpairment. [Accessed: 3 March 2021].
[4] Mobility International USA, "Assistive technology for the blind." [Online]. Available: https://www.miusa.org/resource/tipsheet/assistivetechnologyforblind. [Accessed: 3 March 2021].
[5] A. Bhowmick and S. M. Hazarika, "An insight into assistive technology for the visually impaired and blind people: state of the art and future trends," Journal on Multimodal User Interfaces, Vol.11, No.2, pp.1-24, 2017, DOI:10.1007/s12193-016-0235-6.
[6] Y. Rosner and A. Perlman, "The effect of the usage of computer-based assistive devices on the functioning and quality of life of individuals who are blind or have low vision," Journal of Visual Impairment & Blindness, Vol.112, No.1, pp.87-99, 2018, DOI:10.1177/0145482X1811200108.
[7] J. Hwang, K. H. Kim, J. G. Hwang, S. Jun, J. Yu and C. Lee, "Technological opportunity analysis: assistive technology for blind and visually impaired people," Sustainability, Vol.12, No.20, pp.8689, 2020, DOI:10.3390/su12208689.
[8] N. Griffin-Shirley, D. R. Banda, P. M. Ajuwon, J. Cheon, J. Lee, H. Ran Park, and S. N. Lyngdoh, "A survey on the use of mobile applications for people who are visually impaired," Journal of Visual Impairment & Blindness, CE Article, pp.308-323, 2017, DOI:10.1177/0145482X1711100402.
[9] R. Manduchi, "Mobile vision as assistive technology for the blind: an experimental study," Lecture Notes in Computer Science, Vol.7383, pp.9-16, 2012.
[10] K. B. Tharkude, A. K. Wayase, P. S. More, and S. S. Kothey, "Smart android application for blind people based on object detection," International Journal of Innovative Research in Computer and Communication Engineering, Vol.4, No.4, pp.5149-5155, 2016, https://doi.org/10.17762/ijritcc.v3i10.4979.
[11] S. Parkhi, S. S. Lokhande, and N. D. Thombare, “Vocal vision android application for visually impaired person,”
International Journal of Science, Engineering and Technology Research, Vol.5, No.6, pp.2233-2239, 2016.
[12] A. J. Kadam, S. Awate, S. Desai, R. Khese, and G. Patange, “Android application for visually impaired users,”
Imperial Journal of Interdisciplinary Research, Vol.3, No.2, pp.1674-1677, 2017.
[13] M. Awad, J. El Haddad, and E. Khneisser, "Intelligent Eye: a mobile application for assisting blind people," in IEEE Middle East and North Africa Conference, pp.1-6, 2018, DOI:10.1109/MENACOMM.2018.8371005.
[14] J. Wang, B. Cao, P. S. Yu, L. Sun, W. Bao and X. Zhu, “Deep Learning Towards Mobile applications,” 2018 IEEE
38th International Conference on Distributed Computing Systems, pp.1385-1393, 2018,
DOI:10.1109/ICDCS.2018.00139.
[15] D. Choe, E. Choi and D. K. Kim, “The Real-Time Mobile Application for Classifying of Endangered Parrot
Species using the CNN Models Based on Transfer Learning,” Deep Learning in Mobile Information Systems,
Vol.2020, pp.1-13, 2020, DOI:10.1155/2020/1475164.
[16] M. M. Najafabadi et al., “Deep learning applications and challenges in big data analytics,” Journal of Big Data,
Vol.2, No.1, pp.1-21, 2015, https://doi.org/10.1186/s40537-014-0007-7.
[17] A. Csapo, G. Wersényi, H. Nagy, and T. Stockman, "A survey of assistive technologies and applications for blind users on mobile platforms: a review and foundation for research," Journal on Multimodal User Interfaces, Vol.9, pp.275-286, 2015, https://doi.org/10.1007/s12193-015-0182-7.
[18] C. Willings, "Auditory access devices." [Online]. Available: teachingvisuallyimpaired.com. [Accessed: 5 October 2020].
[19] P. Lavanya, S. Nandhini, V. Hemalatha, and P. Gomathi, "Currency pinpointing through mobile application for visually impaired people," International Research Journal of Engineering and Technology, Vol.6, No.3, pp.1796-1798, 2019.
[20] R. Balebako, A. Marsh, J. Lin, J. Hong and L. F. Cranor. “The Privacy and Security Behaviors of Smartphone App
Developers,” Conference: Workshop on Usable Security, pp.1-10, 2014, DOI:10.14722/usec.2014.23006.
[21] P. Weichbroth and L. Lysik, “Mobile Security: Threats and Best Practices,” Mobile Information System, Vol.2020,
pp.1-15, 2020, DOI:10.1155/2020/8828078.
[22] R. Sayal, C. Subbalakhmi and H. S. Saini, “Mobile App Accessibility for Visually Impaired,” International Journal
of Advanced Trends in Computer Science and Engineering, Vol.9, No.1, pp.182-185, 2020,
DOI:10.30534/ijatcse/2020/27912020.
[23] M. Abadi et al., "TensorFlow: A system for large-scale machine learning," Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI'16), pp.265-283, 2016.
[24] B. S. Lin, C. C. Lee and P. Y. Ching, “Simple Smartphone-Based Guiding System for Visually Impaired People,”
Sensors, Vol. 17, pp. 1-22, 2017, DOI:10.3390/s17061371.
[25] J. Anitha, A. Subalaxmi and G. Vijayalakshmi, “Real Time Object Detection for Visually Challenged Persons,”
International Journal of Innovative Technology and Exploring Engineering, Vol.8, No.8, pp. 312-314, 2019.
BIOGRAPHIES OF AUTHORS
Haslinah Mohd Nasir received her Bachelor's degree in Electrical - Electronic Engineering (2008) from Universiti Teknologi Malaysia (UTM), and her MSc (2016) and PhD (2019) in Electronic Engineering from Universiti Teknikal Malaysia Melaka (UTeM). She has five years (2008-2013) of industry experience and is currently a lecturer at UTeM. Her research interests include microelectronics, artificial intelligence, and biomedical engineering.
Noor Mohd Ariff Brahin received his B. Eng in Electrical - Electronic (Hons) from Universiti Teknologi Malaysia (UTM) in 2008. He worked in the semiconductor industry as a structural design engineer from 2008 until 2013. He is currently a teaching engineer at Universiti Teknikal Malaysia Melaka. His research interests include artificial intelligence, microelectronics, and IC design.
Mohd Syafiq Mispan received his B. Eng in Electrical (Electronics) and M. Eng in Electrical (Computer and Microelectronic System) from Universiti Teknologi Malaysia, Malaysia in 2007 and 2010, respectively. He worked in the semiconductor industry from 2007 until 2014 before pursuing his Ph.D. degree, which he obtained in Electronics and Electrical Engineering from the University of Southampton, United Kingdom in 2018. He is currently a senior lecturer in the Fakulti Teknologi Kejuruteraan Elektrik dan Elektronik, Universiti Teknikal Malaysia Melaka. His current research interests include hardware security, CMOS reliability, VLSI design, and electronic systems design.
Mohd Faizal Zulkifli received his B. Eng in Electrical (Hons) from Universiti Teknologi MARA in 2008. He worked as a component design engineer for five years before joining Universiti Teknikal Malaysia Melaka as a teaching engineer. His research interests include ASIC physical design, programmable devices, and artificial intelligence.