A TensorFlow Lite-Powered Wearable Navigation Assistant Using Raspberry Pi for Real-Time Obstacle Detection and Autonomous Mobility in the Visually Impaired

Authors

  • Leong Kah Meng, Faculty of Engineering & Information Technology, Southern University College, 81300 Johor, Malaysia; Faculty of Electrical Engineering, Universiti Teknologi Malaysia, 81310 Johor, Malaysia
  • Ng Khai Le, Faculty of Engineering & Information Technology, Southern University College, 81300 Johor, Malaysia
  • Jahanzeb Sheikh, Department of Biomedical Engineering, Sir Syed University of Engineering and Technology, Karachi, Pakistan; Faculty of Electrical Engineering, Universiti Teknologi Malaysia, 81310 Johor, Malaysia
  • Ngeu Chee Hau @ Yeo Chee Hau, Faculty of Engineering & Information Technology, Southern University College, 81300 Johor, Malaysia
  • Tan Tian Swee, Faculty of Electrical Engineering, Universiti Teknologi Malaysia, 81310 Johor, Malaysia
  • Kang Eng Siew, Faculty of Engineering & Information Technology, Southern University College, 81300 Johor, Malaysia
  • Chan Bun Seng, Faculty of Engineering & Information Technology, Southern University College, 81300 Johor, Malaysia
  • Chng Chern Wei, Faculty of Computer Science and Technology, First City University College, Petaling Jaya, Selangor, Malaysia
  • Jose-Javier Serrano Olmedo, Centre for Biomedical Technology, Universidad Politécnica de Madrid, Madrid, Spain
  • Vasanthan A/L Maruthapillai, Faculty of Engineering & Information Technology, Southern University College, 81300 Johor, Malaysia

DOI:

https://doi.org/10.11113/mjfas.v22n1.4862

Keywords:

Camera Serial Interface; Electronic Travel Aid (ETA); Electronic Design Automation; Tensor Processing Unit; Text-to-Speech

Abstract

As of 2023, around 2.2 billion people worldwide have a near or distance visual impairment. Previous assistive systems lacked comprehensive functionality, focusing only on basic obstacle detection or standalone features such as fall detection, and many suffered from bulky designs, poor low-light performance, and limited environmental awareness. To address these challenges, this study develops ObstaSense, a wearable Electronic Travel Aid (ETA) for obstacle detection and navigation assistance. The system employs TensorFlow Lite, a Raspberry Pi 5, and a Raspberry Pi Camera Module 3 to detect objects (e.g., people, potholes, vehicles) and relay avoidance instructions via Bluetooth earbuds. Its Real-Time Navigation (RTN) feature combines the Global Positioning System (GPS), a compass sensor, and Plus Codes for precise guidance, enhanced by Google’s Speech-to-Text (STT) and Text-to-Speech (TTS) services. Operating at 4–10 frames per second (FPS), ObstaSense further integrates the Gemini Application Programming Interface (API) for image-to-text conversion in 50 languages. The system achieved consistent results by leveraging its RTN functionality, which uses compass sensor data and vibration feedback to guide users accurately. Offline dataset training and evaluation were conducted solely to support deployment of the real-time embedded assistive system on the Raspberry Pi 5. Obstacle-avoidance performance varied across test rows, with the highest accuracy (100%) in the first row, followed by 66.7% in the third row and 50% in the second row. ObstaSense aids visually impaired, elderly, and cognitively impaired users, aligning with Sustainable Development Goals (SDGs) 3 and 10 for inclusive well-being.
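The paper does not publish its implementation here, but the core step of the RTN feature described above, comparing a compass heading against the bearing from the current GPS fix to the destination to produce a turn instruction, can be sketched generically. The function names and the formulas below are illustrative assumptions (standard great-circle bearing geometry), not code from the study:

```python
import math

def initial_bearing(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing in degrees [0, 360) from point 1 to point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0

def heading_correction(compass_deg, bearing_deg):
    """Signed turn in degrees [-180, 180): positive means turn right."""
    return (bearing_deg - compass_deg + 180.0) % 360.0 - 180.0
```

In a deployed ETA, the sign and magnitude of `heading_correction` would drive the spoken prompt or vibration pattern (e.g., a small positive value mapped to "bear slightly right").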

References

Patel, I., Kulkarni, M., & Mehendale, N. (2024). Review of sensor-driven assistive device technologies for enhancing navigation for the visually impaired. Multimedia Tools and Applications, 83, 52171–52195.

Ghosh, A., Mahmud, S. A., Uday, T. I. R., & Farid, D. M. (2020). Assistive technology for visually impaired using TensorFlow object detection in Raspberry Pi and Coral USB accelerator. In IEEE Region 10 Symposium (TENSYMP).

Cardona, A. A., & Vasquez, S. (2021). Mobility aids for visually impaired persons: Journals reviewed. Wearable Technology, 1, 73–81.

Nazri, N. M., Fauzi, S., Gining, R., Razak, T. R., & Jamaluddin, M. (2021). Smart cane for visually impaired with obstacle, water detection and GPS. International Journal of Computing and Digital Systems, 10.

Ashiq, F., Asif, M., Ahmad, M. B., Zafar, S., Masood, K., & Mahmood, T. (2022). CNN-based object recognition and tracking system to assist visually impaired people. IEEE Access, 10, 14819–14834.

Padilla, R., Netto, S. L., & Da Silva, E. A. (2020). A survey on performance metrics for object-detection algorithms. In 2020 International Conference on Systems, Signals and Image Processing (IWSSIP).

Konaite, M., Owolawi, P. A., Mapayi, T., Malele, V., Odeyemi, K., & Aiyetoro, G. (2021). Smart hat for the blind with real-time object detection using Raspberry Pi and TensorFlow Lite. In Proceedings of the International Conference on Artificial Intelligence and its Applications.

Andrea, R., Ikhsan, N., & Sudirman, Z. (2022). Face recognition using histogram of oriented gradients with TensorFlow in surveillance camera on Raspberry Pi. International Journal of Information Engineering & Electronic Business, 14(1). https://doi.org/10.5815/ijieeb.2022.01.01.

Rahman, M., Islam, M., Ahmed, S., & Khan, S. A. (2020). Obstacle and fall detection to guide the visually impaired people with real time monitoring. SN Computer Science, 4, 219. https://doi.org/10.1007/s42979-020-00219-1.

Okolo, G. I., Althobaiti, T., & Ramzan, N. (2024). Assistive systems for visually impaired persons: Challenges and opportunities for navigation assistance. Sensors, 24, 3572. https://doi.org/10.3390/s24093572.

Bujacz, M., & Strumiłło, P. (2016). Sonification: Review of auditory display solutions in electronic travel aids for the blind. Archives of Acoustics, 41, 401–414. https://doi.org/10.1515/aoa-2016-0042.

Patil, L. H., Dahale, N. P., Thakare, A. L., Fursule, S. S., & Vaidya, A. V. (2024). Visionary assistance: Integrating ultrasonic and GPS technologies for the visually impaired. International Journal of Innovative Research in Technology and Science, 122, 410–415. https://doi.org/10.1234/ijirts.2024.122.410.

Bismark, A. A. (2024). Development of a wearable assistive device for navigation for the visually impaired with command and request support (PhD Thesis). Soka University, California.

Bhatlawande, S., Borse, R., Solanki, A., & Shilaskar, S. (2024). A smart clothing approach for augmenting mobility of visually impaired people. IEEE Access, 99, 1–11.

Masadeh, M., Alkhdour, E., AbuDiak, H., & Obaidat, R. (2024). Approximate computing-based assistive shopping trolley for visually challenged people. International Journal of Computing and Digital Systems, 15, 1405–1416. https://doi.org/10.12785/ijcds.2024.15.1405.

Hoogsteen, K. M., Szpiro, S., Kreiman, G., & Peli, E. (2022). Beyond the cane: Describing urban scenes to blind people for mobility tasks. ACM Transactions on Accessible Computing (TACCESS), 15, 1–29.

Kostyuchenko, N., & Smolennikov, D. (2022). Social responsibility for sustainable development goals. In Reducing Inequalities Towards Sustainable Development Goals. River Publishers.

Oerther, S. E., & Rosa, W. E. (2020). Advocating for equality: The backbone of the sustainable development goals. AJN The American Journal of Nursing, 120, 60–62.

United Nations. (2024, August 20). The 17 goals. https://sdgs.un.org/goals.

Mai, C., Xie, D., Zeng, L., Li, Z., Li, Z., Qiao, Z., & Li, L. (2022). Laser sensing and vision sensing smart blind cane: A review. Sensors, 23(2), 869. https://doi.org/10.3390/s23020869.

Published

27-02-2026