Visually Impaired People Learning Virtual Textures Through Multimodal Feedback Combining Vibrotactile and Voice

In recent years, various haptic rendering methods have been proposed to help people obtain interactive experiences with virtual textures through vibration feedback. However, due to impaired vision, blind or visually impaired (BVI) people are still unable to effectively perceive and learn virtual texture...

Bibliographic Details
Main Authors: Dapeng Chen, Yi Ding, Hao Wu, Qi Jia, Hong Zeng, Lina Wei, Chengcheng Hua, Jia Liu, Aiguo Song
Format: Article
Language:English
Published: IEEE 2025-01-01
Series:IEEE Transactions on Neural Systems and Rehabilitation Engineering
Subjects: Haptic texture rendering; multi-source data fusion; tactile texture classification; multimodal feedback; BVI
Online Access:https://ieeexplore.ieee.org/document/10836946/
author Dapeng Chen
Yi Ding
Hao Wu
Qi Jia
Hong Zeng
Lina Wei
Chengcheng Hua
Jia Liu
Aiguo Song
collection DOAJ
description In recent years, various haptic rendering methods have been proposed to help people obtain interactive experiences with virtual textures through vibration feedback. However, due to impaired vision, blind or visually impaired (BVI) people are still unable to effectively perceive and learn virtual textures through these methods. To give BVI people the opportunity to improve their object cognition by learning virtual textures, we built a virtual texture learning system based on multimodal feedback. We first propose an Informer-based haptic texture rendering model that fuses texture images with real-time action information to generate vibration acceleration (VA) signals. We further propose a texture classification method that uses the generated VA signals and broadcasts the classified texture description to BVI users through a speaker. We describe the construction of the rendering model and the classification method in detail, and through user experiments we compare subjects' perception of textures under four rendering models, as well as the accuracy of texture matching under two learning modes. The experimental results show that the proposed rendering model can accurately and efficiently generate VA signals, providing subjects with realistic vibration feedback. The constructed learning system enables BVI users to learn the type, material, and other attributes of a virtual texture while receiving vibrotactile sensation. By establishing a correspondence between haptic stimuli and texture attributes, the system enables BVI users to enhance their ability to recognize objects by learning a large number of virtual textures.
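The abstract describes a two-stage pipeline: generate a VA signal from a texture image plus live action data, then classify that signal to choose a spoken description. The following is a minimal illustrative sketch in PyTorch, not the authors' implementation: a standard Transformer encoder stands in for the paper's Informer model, and all module names, dimensions, and texture classes are hypothetical assumptions.

import torch
import torch.nn as nn

class TextureToVibration(nn.Module):
    # Hypothetical stand-in for the Informer-based rendering model:
    # fuses texture-image features with a window of action data
    # (e.g. sliding velocity and normal force) and regresses a short
    # vibration-acceleration (VA) segment.
    def __init__(self, img_feat_dim=64, action_dim=2, d_model=128, va_len=100):
        super().__init__()
        self.img_proj = nn.Linear(img_feat_dim, d_model)
        self.action_proj = nn.Linear(action_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, va_len)

    def forward(self, img_feat, actions):
        # img_feat: (B, img_feat_dim); actions: (B, T, action_dim)
        tokens = torch.cat([self.img_proj(img_feat).unsqueeze(1),
                            self.action_proj(actions)], dim=1)   # (B, 1+T, d_model)
        fused = self.encoder(tokens)
        return self.head(fused[:, 0])                            # (B, va_len) VA segment

class VAClassifier(nn.Module):
    # Hypothetical classifier over generated VA segments; the predicted
    # class index selects a texture description to be spoken.
    def __init__(self, va_len=100, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(va_len, 64), nn.ReLU(), nn.Linear(64, n_classes))

    def forward(self, va):
        return self.net(va)

# Illustrative texture descriptions (not taken from the paper).
DESCRIPTIONS = ["coarse fabric", "smooth metal", "wood grain", "rough stone"]

renderer, classifier = TextureToVibration(), VAClassifier()
img_feat = torch.randn(1, 64)       # precomputed texture-image features
actions = torch.randn(1, 16, 2)     # 16 timesteps of (velocity, force)
va = renderer(img_feat, actions)    # would drive the vibrotactile actuator
label = classifier(va).argmax(dim=-1).item()
print("Voice feedback:", DESCRIPTIONS[label])   # spoken via a speaker/TTS in the real system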
format Article
id doaj-art-aafd1add9dc84a29bd130ba89c6180a6
institution Kabale University
issn 1534-4320
1558-0210
language English
publishDate 2025-01-01
publisher IEEE
record_format Article
series IEEE Transactions on Neural Systems and Rehabilitation Engineering
spelling doaj-art-aafd1add9dc84a29bd130ba89c6180a6 (2025-01-24T00:00:09Z): IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 33, pp. 453-465, 2025-01-01; ISSN 1534-4320, 1558-0210; DOI 10.1109/TNSRE.2025.3528048; IEEE document 10836946.
Author affiliations: Dapeng Chen (https://orcid.org/0000-0002-1930-419X), Yi Ding, Hao Wu, Qi Jia, Chengcheng Hua, and Jia Liu (https://orcid.org/0000-0002-8383-4048): Tianchang Research Institute, School of Automation, C-IMER, CICAEET, B-DAT, Nanjing University of Information Science and Technology, Nanjing, China. Hong Zeng (https://orcid.org/0000-0002-4587-6263) and Aiguo Song (https://orcid.org/0000-0002-1982-6780): School of Instrument Science and Engineering, Southeast University, Nanjing, China. Lina Wei: School of Computer and Computing Science, Hangzhou City University, Hangzhou, China.
title Visually Impaired People Learning Virtual Textures Through Multimodal Feedback Combining Vibrotactile and Voice
topic Haptic texture rendering
multi-source data fusion
tactile texture classification
multimodal feedback
BVI
url https://ieeexplore.ieee.org/document/10836946/