A Cross-Modal Tactile Reproduction Utilizing Tactile and Visual Information Generated by Conditional Generative Adversarial Networks
Tactile reproduction technology is a promising advancement within the rapidly expanding field of virtual/augmented reality, and it requires new display methods tailored to tactile sensory labels. Since human tactile perception is known to be influenced by visual information, this study developed a cross-modal tactile sensory display that uses conditional generative adversarial networks (CGANs) to generate both mechanical and visual information.
Main Authors: Koki Hatori, Takashi Morikura, Akira Funahashi, Kenjiro Takemura
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access
Subjects: Tactile reproduction; cross-modal recognition; conditional generative adversarial networks
Online Access: https://ieeexplore.ieee.org/document/10835063/
_version_ | 1832592890229948416 |
author | Koki Hatori; Takashi Morikura; Akira Funahashi; Kenjiro Takemura
author_sort | Koki Hatori |
collection | DOAJ |
description | Tactile reproduction technology is a promising advancement within the rapidly expanding field of virtual/augmented reality, and it requires new display methods tailored to tactile sensory labels. Since human tactile perception is known to be influenced by visual information, this study developed a cross-modal tactile sensory display that uses conditional generative adversarial networks (CGANs) to generate both mechanical and visual information. First, sensory evaluation experiments were conducted with 32 participants on twelve metal plate samples to collect tactile information. Next, we prepared 320 images of a variety of materials and conducted sensory evaluation experiments with 30 participants per image to gather the tactile information evoked by viewing each image. Using the collected tactile information as labels and the images as a dataset, we trained four visual information generation models with CGANs, each on weighted concatenations of images and labels in which the image elements are amplified by factors of 1, 1,000, 5,000, and 10,000, respectively. Each of the four models was then used to generate twelve images corresponding to the sensory evaluation results of the twelve metal plate samples. We then performed a cross-modal tactile reproduction experiment in which the previously developed tactile information generation model supplied input signals to a tactile display while the images produced by the visual information generation model were shown alongside. In this experiment, 20 subjects conducted sensory evaluations in which tactile sensations were displayed concurrently with the generated images. The results confirmed that the concurrent display of mechanical and visual information significantly reduced the mean absolute error between the displayed tactile information and that of the metal plate samples from 2.2 to 1.6 on a 7-point sensory evaluation scale. These findings underscore the effectiveness of the visual information generation model and highlight the potential of integrating tactile and visual information for enhanced tactile reproduction systems. |
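The conditioning scheme described in the abstract (tactile sensory labels concatenated with image data whose elements are amplified by a fixed weight) can be sketched as follows. This is a minimal illustration assuming a PyTorch-style CGAN; the layer sizes, image resolution, label dimensionality, and all names are hypothetical assumptions, not the authors' actual architecture.

```python
# Minimal sketch of CGAN conditioning with weighted image/label concatenation.
# All dimensions below are illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn

W = 1_000          # image amplification factor; the paper compares 1, 1,000, 5,000, 10,000
IMG_DIM = 64 * 64  # flattened grayscale image (assumed resolution)
LABEL_DIM = 7      # tactile sensory scores used as the condition (assumed length)
NOISE_DIM = 100

class Generator(nn.Module):
    """Maps (noise, tactile label) to a flattened image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + LABEL_DIM, 256), nn.ReLU(),
            nn.Linear(256, IMG_DIM), nn.Tanh(),
        )
    def forward(self, z, label):
        return self.net(torch.cat([z, label], dim=1))

class Discriminator(nn.Module):
    """Scores the weighted concatenation of an image and its tactile label."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG_DIM + LABEL_DIM, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )
    def forward(self, img_flat, label):
        # Amplify the image elements by W before concatenating with the label,
        # mirroring the "weighted concatenated data" in the abstract.
        return self.net(torch.cat([W * img_flat, label], dim=1))

if __name__ == "__main__":
    z = torch.randn(8, NOISE_DIM)
    labels = torch.rand(8, LABEL_DIM) * 6 + 1  # 7-point sensory scores in [1, 7]
    fake_imgs = Generator()(z, labels)
    scores = Discriminator()(fake_imgs, labels)
    print(fake_imgs.shape, scores.shape)  # torch.Size([8, 4096]) torch.Size([8, 1])
```

Amplifying the image elements before concatenation changes how strongly the pixel content is weighted against the tactile label, which is presumably why the study compares models trained with factors of 1, 1,000, 5,000, and 10,000.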
format | Article |
id | doaj-art-c71284a30d154a748d270815b22abab6 |
institution | Kabale University |
issn | 2169-3536 |
language | English |
publishDate | 2025-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | Record doaj-art-c71284a30d154a748d270815b22abab6 (indexed 2025-01-21T00:02:04Z). IEEE Access, vol. 13, pp. 9223-9229, 2025-01-01; DOI: 10.1109/ACCESS.2025.3527946; IEEE document 10835063. Koki Hatori (https://orcid.org/0009-0005-4440-9902), School of Science for Open and Environmental Science, Keio University, Yokohama, Japan; Takashi Morikura (https://orcid.org/0000-0003-1287-8809), Department of Biosciences and Informatics, Keio University, Yokohama, Japan; Akira Funahashi (https://orcid.org/0000-0003-0605-239X), Department of Biosciences and Informatics, Keio University, Yokohama, Japan; Kenjiro Takemura (https://orcid.org/0000-0002-0298-5558), Department of Mechanical Engineering, Keio University, Yokohama, Japan. Title, abstract, access URL, and subject terms as given above. |
title | A Cross-Modal Tactile Reproduction Utilizing Tactile and Visual Information Generated by Conditional Generative Adversarial Networks |
topic | Tactile reproduction cross-modal recognition conditional generative adversarial networks |
url | https://ieeexplore.ieee.org/document/10835063/ |