Resting-state functional connectivity changes following audio-tactile speech training

Bibliographic Details
Main Authors: Katarzyna Cieśla, Tomasz Wolak, Amir Amedi
Format: Article
Language: English
Published: Frontiers Media S.A. 2025-04-01
Series: Frontiers in Neuroscience
Online Access: https://www.frontiersin.org/articles/10.3389/fnins.2025.1482828/full
Description
Summary: Understanding speech in background noise is a challenging task, especially when the signal is also distorted. In a series of previous studies, we have shown that comprehension can improve if, simultaneously with auditory speech, the person receives speech-extracted low-frequency signals on their fingertips. The effect increases after short audio-tactile speech training. In this study, we used resting-state functional magnetic resonance imaging (rsfMRI) to measure spontaneous low-frequency oscillations in the brain at rest and thereby assess training-induced changes in functional connectivity (FC). We observed enhanced FC within a right-hemisphere cluster corresponding to the middle temporal motion area (MT), the extrastriate body area (EBA), and the lateral occipital cortex (LOC); before the training, this cluster had been more strongly connected to the bilateral dorsal anterior insula. Furthermore, early visual areas switched from increased connectivity with the auditory cortex before training to increased connectivity with a sensory/multisensory association parietal hub, contralateral to the palm receiving vibrotactile inputs, after training. In addition, the right sensorimotor cortex, including finger representations, showed increased internal connectivity after the training. Altogether, the results can be interpreted within two complementary frameworks. The first, speech-specific framework relates to the pre-existing brain connectivity for audio-visual speech processing, including early visual, motion, and body regions involved in lip-reading and gesture analysis under difficult acoustic conditions, upon which the new audio-tactile speech network might be built. The second framework refers to spatial/body awareness and audio-tactile integration, both of which are necessary for performing the task and both of which implicate the parietal and insular regions revealed here. An extended training period may be necessary to directly strengthen functional connections between the auditory and sensorimotor brain regions for this entirely novel multisensory task. The results contribute to a better understanding of the largely unknown neuronal mechanisms underlying the benefits of tactile input for speech comprehension and may be relevant for rehabilitation in the hearing-impaired population.
ISSN: 1662-453X