Waveforms of unimodal stimuli.
Unimodal visual (A) and auditory stimuli (B). The labels 0.5, 1, 2.5, and 5 indicate auditory stimulus frequencies of 0.5, 1, 2.5, and 5 kHz, respectively.

Source publication
Article
Full-text available
A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, which is one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to resp...

Similar publications

Article
Full-text available
Decoding human brain activities, especially reconstructing human visual stimuli via functional magnetic resonance imaging (fMRI), has gained increasing attention in recent years. However, the high dimensionality and small quantity of fMRI data impose restrictions on satisfactory reconstruction, especially for the reconstruction method with deep lea...
Conference Paper
Full-text available
Gamma-aminobutyric acid (GABA) is the chief inhibitory neurotransmitter of the central nervous system, and it has been extensively studied in recent magnetic resonance spectroscopy (MRS) experiments. It has been shown that resting GABA concentration in the visual cortex is inversely correlated with the BOLD magnitude response to a visual stimulus....
Article
Full-text available
The encoding, maintenance, and subsequent retrieval of memories over short time intervals is an essential cognitive function. Load effects on the neural dynamics supporting the maintenance of short-term memories have been well studied, but experimental design limitations have hindered the study of similar effects during the encoding of information...
Article
Full-text available
Context: Individuals following anterior cruciate ligament reconstruction (ACLR) demonstrate altered postural stability and functional movement patterns. It is hypothesized that individuals following ACLR may compensate with sensory adaptations with greater reliance on visual mechanisms during activities. It is unknown if visual compensatory strate...
Article
Full-text available
Previous research has demonstrated that negative affect influences attentional processes. Here, we investigate whether pre-experimental negative affect predicts a hypervigilant neural response as indicated by increased event-related potential amplitudes in response to neutral and positive visual stimuli. In our study, seventeen male participants fi...

Citations

... This integration process is known as audio-visual integration [1], [2]. Audio-visual stimuli elicit faster and more accurate responses compared to unimodal stimuli, highlighting the effectiveness of audiovisual integration [3], [4]. ...
Preprint
The seamless integration of visual and auditory information is a fundamental aspect of human cognition. Although age-related functional changes in AudioVisual Integration (AVI) have been extensively explored in the past, thorough studies across various age groups remain insufficient. Previous studies have provided valuable insights into age-related AVI using EEG-based sensor data. However, these studies have been limited in their ability to capture spatial information related to brain source activation and their connectivity. To address these gaps, our study conducted a comprehensive audiovisual integration task with a specific focus on assessing the aging effects in various age groups, particularly middle-aged individuals. We presented visual, auditory, and audiovisual stimuli and recorded EEG data from Young (18-25 years), Transition (26-33 years), and Middle (34-42 years) age cohort healthy participants. We aimed to understand how aging affects brain activation and functional connectivity among hubs during audiovisual tasks. Our findings revealed delayed brain activation in middle-aged individuals, especially for bimodal stimuli. The superior temporal cortex and superior frontal gyrus showed significant changes in neuronal activation with aging. Lower frequency bands (theta and alpha) showed substantial changes with increasing age during AVI. Our findings also revealed that the AVI-associated brain regions can be clustered into five different brain networks using the k-means algorithm. Additionally, we observed increased functional connectivity in middle age, particularly in the frontal, temporal, and occipital regions. These results highlight the compensatory neural mechanisms involved in aging during cognitive tasks.
... This process is called audiovisual integration (Laurienti et al., 2006;Stein, 2012). The benefit of simultaneous (bimodal) presentation of visual and auditory stimuli over unimodal presentation has been repeatedly validated; for instance, the response time to audiovisual stimuli is faster than that to visual or auditory stimuli alone (Teder-Sälejärvi et al., 2005;Yang et al., 2015). ...
Article
Full-text available
Audiovisual integration is an essential process that influences speech perception in conversation. However, it is still debated whether older individuals benefit more from audiovisual integration than younger individuals. This ambiguity is likely due to stimulus features, such as stimulus intensity. The purpose of the current study was to explore the effect of aging on audiovisual integration, using event-related potentials (ERPs) at different stimulus intensities. The results showed greater audiovisual integration in older adults at 320–360 ms. Conversely, at 460–500 ms, older adults displayed attenuated audiovisual integration in the frontal, fronto-central, central, and centro-parietal regions compared to younger adults. In addition, we found older adults had greater audiovisual integration at 200–230 ms under the low-intensity condition compared to the high-intensity condition, suggesting inverse effectiveness occurred. However, inverse effectiveness was not found in younger adults. Taken together, the results suggested that there was age-related dissociation in audiovisual integration and inverse effectiveness, indicating that the neural mechanisms underlying audiovisual integration differed between older adults and younger adults.
... For the AV discrimination task (Fig. 1C), the visual nontarget stimulus was a black and white checkerboard image (B/W checkerboard, 52 × 52 mm, with a visual angle of 5°), and the visual target stimulus was a B/W checkerboard image with two black dots contained within each white checkerboard. The auditory nontarget stimulus was a 1000-Hz sinusoidal tone, and the auditory target stimulus was white noise (Ren et al., 2020; Ren et al., 2018; Ren et al., 2016; Yang et al., 2015). The AV target stimulus was the combination of the visual target and auditory target stimuli, and the AV nontarget stimulus was the combination of the visual nontarget and auditory nontarget stimuli. ...
Article
Studies have revealed that visual attentional load modulated audiovisual integration (AVI) greatly; however, auditory and visual attentional resources are separate to some degree, and task-irrelevant auditory information could arouse much faster and larger attentional alerting effects than visible information. Here, we aimed to explore how auditory attentional load influences AVI and how aging could have an effect. Thirty older and 30 younger adults participated in an AV discrimination task with an additional auditory distractor competing for attentional resources. The race model analysis revealed highest AVI in the low auditory attentional load condition (low > no > medium > high, pairwise comparison, all p ≤ 0.047) for younger adults and a higher AVI under the no auditory attentional-load condition (p = 0.008), but there was a lower AVI under the low (p = 0.019), medium (p < 0.001), and high (p = 0.021) auditory attentional-load conditions for older adults than for younger adults. The time-frequency analysis revealed higher theta- and alpha-band AVI oscillation under no and low auditory attentional-load conditions than under medium and high auditory attentional-load conditions for both older (all p ≤ 0.011) and younger (all p ≤ 0.024) adults. Additionally, Weighted Phase lag index (WPLI) analysis revealed higher theta-band and lower alpha-band global functional connectivity for older adults during AV stimuli processing (all p ≤ 0.031). These results suggested that the AVI was higher in the low attentional-load condition than in the no attentional-load condition but decreased inversely with increasing of attentional load and that there was a significant aging effect in older adults. In addition, the strengthened theta-band global functional connectivity in older adults during AV stimuli processing might be an adaptive phenomenon for age-related perceptual decline.
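The race model analysis referred to in the abstracts above compares the reaction-time (RT) distribution for audiovisual stimuli against the bound predicted by two independent unimodal channels (Miller's inequality): integration is inferred where the AV cumulative probability exceeds min(1, F_A(t) + F_V(t)). A minimal pure-Python sketch of that test, with invented RT data and a hypothetical `race_model_violation` helper (not the authors' actual pipeline):

```python
# Hedged illustration of the race model (Miller) inequality test used
# to quantify audiovisual integration; all RT values below are made up.

def ecdf(rts, t):
    """Empirical cumulative probability of a response by time t (ms)."""
    return sum(rt <= t for rt in rts) / len(rts)

def race_model_violation(rt_a, rt_v, rt_av, t):
    """Positive values mean AV responses are faster at time t than the
    race model bound min(1, F_A(t) + F_V(t)) predicts, i.e. integration."""
    bound = min(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))
    return ecdf(rt_av, t) - bound

# Made-up reaction times (ms) for auditory, visual, and audiovisual trials.
rt_a = [320, 340, 360, 390, 410]
rt_v = [350, 370, 400, 420, 450]
rt_av = [280, 300, 310, 330, 350]

print(race_model_violation(rt_a, rt_v, rt_av, 330))  # -> 0.6 (violation)
```

In practice the violation is evaluated across a grid of time points (or RT percentiles), and the positive area under the violation curve serves as the integration measure, as in the peak-benefit analyses described above.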
... Analysis of sentiments has seen considerable progress during last two decades. A number of research studies were conducted for detection of human sentiments using peripheral physiological signals [7,8], facial expressions [2,[9][10][11] and brain signals [12][13][14][15][16]. Data available in blogs, social media and other online media was processed. ...
Article
Full-text available
Identification of sentiments using EEG is a challenging and active research area in human–computer interaction. In recent years, considerable work has been done on the detection and classification of emotions in affective computing. This review paper aims to comprehensively summarize the most recent techniques and methods in this field. The results of various techniques are mapped quantitatively by evaluating well-known publications on sentiment analysis using EEG signals, detection of emotions using bio-potential signals, linear discriminant analysis (LDA) classifiers, and identification of emotions from multichannel EEG through deep forest. The review delineates an integrated, informative approach in which various statistics from a continuum of structured data sources are placed together. The most recent publications are inspected to identify reliable approaches for the detection of sentiments. Moreover, there are specific inputs for each study that can help improve the performance of existing approaches in practical applications. The analysis and comparison of all the methods show that identification of emotions from multi-channel EEG through deep forest, facial expression recognition, and recognition of sentiments using classifiers such as LDA and SVM achieve better accuracies than state-of-the-art methods. We analyzed results on standard signals to measure the rare artifact-eliminated EEG signals. DEAP and DREAMER are challenging datasets used in most of the techniques for the detection and analysis of sentiments. Other datasets, such as GAPED, MAHNOB-HCI, ASCERTAIN, MULSEMEDIA, and DECAF, are also analyzed in various methods of sentiment analysis. The main target of this survey is to provide a nearly complete picture of techniques for the analysis of sentiments (detection of emotions and building resources). It is observed that the traditional procedure of feature extraction is followed, along with the addition of some new features in recent publications. Among all methods, it is observed that the deep forest model for the analysis of sentiments is insensitive to hyper-parameter settings, which reduces the complexity of sentiment recognition.
... For the auditory/visual discrimination task, the visual nontarget stimulus was a black and white checkerboard image (B/W checkerboard, 52 × 52 mm, with a visual angle of 5°), and the visual target stimulus was a black-and-white checkerboard image with two black dots contained within each white checkerboard (He et al., 1996; Laura et al., 2005; Ren et al., 2016; see Figure 1, Panel A). The auditory nontarget stimulus was a 1000-Hz sinusoidal tone, and the auditory target stimulus was white noise (Ren et al., 2018; Ren et al., 2016; Yang et al., 2015). In line with previous studies about the effect of visual perceptual load on audiovisual integration (Ren, Zhou, et al., 2020), the stimuli in the RSAP task consisted of 10 distractor characters taken from six letters (B, C, P, R, T, and V) and four digits (6, 7, 8, and 9) presented through the speakers located on the right/left sides of the computer monitor (see Figure 1, Panel B). ...
Article
Full-text available
Attention modulates numerous stages of audiovisual integration, and studies have shown that audiovisual integration is higher in attended conditions than in unattended conditions. However, attentional resources are limited for each person, and it is not yet clear how audiovisual integration changes under different attentional loads. Here, we explored how auditory attentional load affects audiovisual integration by applying an auditory/visual discrimination task to evaluate audiovisual integration and a rapid serial auditory presentation (RSAP) task to manipulate auditory attentional resources. The results for peak benefit and positive area under the curve of different probability showed that audiovisual integration was highest in the low attentional load condition and lowest in the high attentional load condition (low > no = medium > high). The peak latency and time window revealed that audiovisual integration was delayed as the attentional load increased (no < low < medium < high). Additionally, audiovisual depression was found in the no, medium, and high attentional load conditions but not in the low attentional load condition. These results suggest that mild auditory attentional load increases audiovisual integration, and high auditory attentional load decreases audiovisual integration.
... The difference wave [AV-(A+V)] was calculated as the effect of audiovisual integration (39, 40). In other words, audiovisual integration was the difference between the ERPs to bimodal (AV) stimuli and the ERPs to the sum of the unimodal stimuli (A+V) (41–45). The ERP components analyzed in the present study included N1, P2, N2, and LPP. ...
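The difference-wave computation described in this snippet is a sample-by-sample subtraction of the summed unimodal ERPs from the bimodal ERP. A minimal sketch, with invented amplitude values (the helper name `difference_wave` is hypothetical, not from the cited studies):

```python
# Minimal sketch of the [AV - (A + V)] difference wave; the ERP
# amplitude values (in microvolts) are invented for illustration.

def difference_wave(erp_av, erp_a, erp_v):
    """Subtract the sum of the unimodal ERPs from the bimodal ERP,
    sample by sample; nonzero values index audiovisual integration."""
    return [av - (a + v) for av, a, v in zip(erp_av, erp_a, erp_v)]

erp_a = [0.0, 1.0, 2.0, 1.0]   # auditory-alone ERP samples
erp_v = [0.0, 0.5, 1.5, 0.5]   # visual-alone ERP samples
erp_av = [0.0, 2.0, 4.5, 1.0]  # audiovisual ERP samples

print(difference_wave(erp_av, erp_a, erp_v))  # -> [0.0, 0.5, 1.0, -0.5]
```

A positive deflection in the difference wave (as at the third sample here) marks a superadditive response, the usual ERP criterion for integration in the studies cited above.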
Article
Full-text available
Depression is related to the defect of emotion processing, and people's emotional processing is crossmodal. This article aims to investigate whether there is a difference in audiovisual emotional integration between the depression group and the normal group using a high-resolution event-related potential (ERP) technique. We designed a visual and/or auditory detection task. The behavioral results showed that the responses to bimodal audiovisual stimuli were faster than those to unimodal auditory or visual stimuli, indicating that crossmodal integration of emotional information occurred in both the depression and normal groups. The ERP results showed that the N2 amplitude induced by sadness was significantly higher than that induced by happiness. The participants in the depression group showed larger amplitudes of N1 and P2, and the average amplitude of LPP evoked in the frontocentral lobe in the depression group was significantly lower than that in the normal group. The results indicated that there are different audiovisual emotional processing mechanisms between depressed and non-depressed college students.
... The auditory nontarget stimulus was a 1000-Hz sinusoidal tone, and the auditory target stimulus was white noise (Ren et al., 2016, 2018; Yang et al., 2015). The visual nontarget stimulus was a black and white checkerboard image (B/W checkerboard, 52 × 52 mm, with a visual angle of 5°), and the visual target stimulus was a B/W checkerboard image with two black dots located within each white checkerboard (He et al., 1996; Laura et al., 2005; Ren et al., 2016). ...
Article
Full-text available
Previous studies have demonstrated that exogenous attention decreases audiovisual integration (AVI); however, whether the AVI differs when exogenous attention is elicited by bimodal versus unimodal cues, and its aging effect, remain unclear. To clarify this matter, 20 older adults and 20 younger adults were recruited to conduct an auditory/visual discrimination task following bimodal audiovisual cues or unimodal auditory/visual cues. The results showed that the response to all stimulus types was faster in younger adults compared with older adults, and the response was faster when responding to audiovisual stimuli compared with auditory or visual stimuli. Analysis using the race model revealed that the AVI was lower in the exogenous-cue conditions compared with the no-cue condition for both older and younger adults. The AVI was observed in all exogenous-cue conditions for the younger adults (visual cue > auditory cue > audiovisual cue); however, for older adults, the AVI was only found in the visual-cue condition. In addition, the AVI was lower in older adults compared to younger adults under the no- and visual-cue conditions. These results suggested that exogenous attention decreased the AVI, and the AVI was lower in exogenous attention elicited by bimodal-cue than by unimodal-cue conditions. In addition, the AVI was reduced for older adults compared with younger adults under exogenous attention. Yanna Ren and Ying Zhang contributed equally to this work and should be considered co-first authors.
... For the AV discrimination task, the visual non-target stimulus was a black and white checkerboard image (B/W checkerboard, 52 × 52 mm, with a visual angle of 5°), and the visual target stimulus was a B/W checkerboard image with two black dots contained within each white checkerboard (He et al., 1996; Talsma and Woldorff, 2005; Ren et al., 2016). The auditory nontarget stimulus was a 1000 Hz sinusoidal tone, and the auditory target stimulus was white noise (Yang et al., 2015; Ren et al., 2016, 2018b). The AV target stimulus was the combination of visual target and auditory target stimuli, and the AV non-target stimulus was the combination of visual non-target and auditory non-target stimuli. ...
... According to previous studies and topographic response patterns, five regions of interest (ROIs) (frontal: F7, F3, Fz, F4, F8; fronto-central: FC5, FC1, FC2, FC6; central: C3, Cz, C4; centro-parietal: CP5, CP1, CP2, CP6; and occipital: O1, Oz, O2) in the 0-600 ms time interval were selected (Yang et al., 2015; Ren et al., 2018b). One-way ANOVA showed no significant lateralization effect for any of these ROIs; therefore, we chose one representative electrode with the highest activity in each ROI (Fz, FC1, Cz, CP1, and Oz) for further analysis. ...
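The electrode-selection step in this snippet (one representative electrode per ROI, chosen by highest activity) can be sketched as a simple maximum over each ROI's electrodes. The ROI names and electrode labels follow the text; the activity values below are invented for illustration:

```python
# Hedged sketch of picking a representative electrode per region of
# interest (ROI) by highest mean activity; activity values are made up.

rois = {
    "frontal": ["F7", "F3", "Fz", "F4", "F8"],
    "central": ["C3", "Cz", "C4"],
}
# Hypothetical mean activity per electrode (arbitrary units).
activity = {"F7": 1.2, "F3": 1.8, "Fz": 2.4, "F4": 1.9, "F8": 1.1,
            "C3": 2.0, "Cz": 2.6, "C4": 2.1}

# For each ROI, keep the electrode whose activity is largest.
representative = {roi: max(names, key=activity.get)
                  for roi, names in rois.items()}
print(representative)  # -> {'frontal': 'Fz', 'central': 'Cz'}
```

With all five ROIs and real grand-average amplitudes, this selection would yield the Fz, FC1, Cz, CP1, and Oz set reported in the snippet.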
Article
Full-text available
Audio-visual integration (AVI) is higher in attended conditions than in unattended conditions. Here, we explore the AVI effect when the attentional resource is competed for by additional visual distractors, and its aging effect, using single- and dual-tasks. The results showed the highest AVI effect under the single-task-attentional-load condition compared with the no- and dual-task-attentional-load conditions (all P < 0.05) in both older and younger groups, but the AVI effect was weaker and delayed for older adults compared to younger adults for all attentional-load conditions (all P < 0.05). The non-phase-locked oscillation analysis for AVI illustrated higher theta and alpha oscillatory activity for the single-task-attentional-load condition than for the no- and dual-task-attentional-load conditions, and the AVI oscillatory activity mainly occurred in the Cz, CP1 and Oz of older adults but in the Fz, FC1, and Cz of younger adults. The AVI effect was significantly negatively correlated with FC1 (r2 = 0.1468, P = 0.05) and Cz (r2 = 0.1447, P = 0.048) theta activity and with Fz (r2 = 0.1557, P = 0.043), FC1 (r2 = 0.1042, P = 0.008), and Cz (r2 = 0.0897, P = 0.010) alpha activity for older adults but not for younger adults in the dual task. These results suggested a reduction in AVI ability for peripheral stimuli and a shift in AVI oscillation from anterior to posterior regions in older adults as an adaptive mechanism.
... Additionally, numerous studies have shown that AVI is greatly influenced by stimulus intensity, and the AVI is more pronounced for weak stimuli than for strong stimuli when presented individually, which is called inverse effectiveness (IE) (Stein, 2012; Stein & Meredith, 1993; Yang et al., 2015). However, the IE is controversial in complex tasks, such as tool and speech recognition (Tye-Murray, Sommers, Spehar, Myerson, & Hale, 2010). ...
Article
Full-text available
Introduction Previous studies have confirmed increased functional connectivity in elderly adults during processing of simple audio–visual stimuli; however, it is unclear whether elderly adults maximize their performance by strengthening their functional brain connectivity when processing dynamic audio–visual hand‐held tool stimuli. The present study aimed to explore this question using global functional connectivity. Methods Twenty‐one healthy elderly adults and 21 healthy younger adults were recruited to conduct a dynamic hand‐held tool recognition task with high/low‐intensity stimuli. Results Elderly adults exhibited higher areas under the curve for both the high‐intensity (3.5 vs. 2.7) and low‐intensity (3.0 vs. 1.2) stimuli, indicating a higher audio–visual integration ability, but a delayed and widened audio–visual integration window for both the high‐intensity (390–690 ms vs. 360–560 ms) and low‐intensity (460–690 ms vs. 430–500 ms) stimuli. Additionally, elderly adults exhibited higher theta‐band (all p < .01) but lower alpha‐, beta‐, and gamma‐band functional connectivity (all p < .05) than younger adults under both the high‐ and low‐intensity‐stimulus conditions when processing audio–visual stimuli, except for gamma‐band functional connectivity under the high‐intensity‐stimulus condition. Furthermore, higher theta‐ and alpha‐band functional connectivity were observed for the audio–visual stimuli than for the auditory and visual stimuli, and under the high‐intensity‐stimulus condition than under the low‐intensity‐stimulus condition. Conclusion The higher theta‐band functional connectivity in elderly adults was mainly due to higher attention allocation. The results further suggested that in the case of sensory processing, theta, alpha, beta, and gamma activity might participate in different stages of perception.
... Therefore, for signals of the same physical intensity, the signal is perceived as relatively stronger by younger adults than by older adults. ERP investigations showed that higher-intensity auditory signals elicited greater ERP components [40], indicating higher brain activity. It is therefore reasonable that the alerting effect declines in older adults. ...