Figure 4
Illustration of T1 display and T1 and T2 identification performance in Experiment 2. (a) T1 display: T1 and three distracters were presented at random locations on a 3 × 3 virtual matrix. Two visual stimuli were displayed on each side (left or right) of the fixation. (b) T1 accuracy and T2 accuracy (given that T1 is correct) in each of the Tone, Visual field, and Lag conditions. (c) T2 accuracy (given that T1 is correct) in each of the Tone and T2 visual field conditions. The vertical axes indicate the T2 accuracy (percent correct). Error bars represent standard errors of the mean (n = 9). doi:10.1371/journal.pone.0104131.g004

Source publication
Article
Full-text available
Even though auditory stimuli do not directly convey information related to visual stimuli, they often improve visual detection and identification performance. Auditory stimuli often alter visual perception depending on the reliability of the sensory input, with visual and auditory information reciprocally compensating for ambiguity in the other sen...

Contexts in source publication

Context 1
... visual and auditory stimuli were the same as in Experiment 1A. However, the color of T1 was black in the present experiment, and the target was presented simultaneously with three distracters (see Figure 4a). The locations of T1 and the distracters were determined randomly using a 3 × 3 (4.5 × 4.5 deg) virtual matrix. ...
Context 2
... accuracy in identifying T1 and T2, with the latter contingent upon T1 being correct, was calculated for each of the conditions (results shown in Figures 4b and 4c). The T1 identification performance was high in each condition. ...
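
For concreteness, here is a minimal analysis sketch (not the authors' code; the data layout and column names are hypothetical) of how the T1 accuracy and the conditional T2|T1 accuracy shown in Figures 4b and 4c could be computed per condition, together with the standard error of the mean used for the error bars (n = 9):

```python
# Minimal sketch, assuming one row per trial with hypothetical columns:
# subject, tone, visual_field, lag, t1_correct (0/1), t2_correct (0/1).
import numpy as np
import pandas as pd

def condition_means(trials: pd.DataFrame) -> pd.DataFrame:
    """Per-participant T1 accuracy and T2 accuracy given T1 correct,
    then the mean and SEM across participants for each condition."""
    per_subject = (
        trials.groupby(["subject", "tone", "visual_field", "lag"])
        .apply(lambda g: pd.Series({
            "t1_acc": g["t1_correct"].mean(),
            # T2 accuracy is conditional on T1 being identified correctly
            "t2_given_t1": g.loc[g["t1_correct"] == 1, "t2_correct"].mean(),
        }))
        .reset_index()
    )
    return (
        per_subject.groupby(["tone", "visual_field", "lag"])
        .agg(t1_mean=("t1_acc", "mean"),
             t2_mean=("t2_given_t1", "mean"),
             # SEM = sd / sqrt(n), matching the error bars in Figure 4
             t2_sem=("t2_given_t1",
                     lambda x: x.std(ddof=1) / np.sqrt(len(x))))
        .reset_index()
    )
```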

Citations

... To enact an intended behavior, intention memory must transfer information to intuitive behavior control. This system enables automatic sensorimotor coordination and spatial attention in the action phase (Lehéricy et al., 2006; Takeshima & Gyoba, 2014). Accordingly, intuitive behavior control includes perceptual networks that have a broad and intuitive awareness of context stimuli and flexibly adapt ongoing behavior to changes in the environment as they occur during performance (Lehéricy et al., 2006; Takeshima & Gyoba, 2014; Thill et al., 2013). ...
Article
Several psychological approaches are concerned with explaining the dynamic psychological processes and mechanisms that render personality a coherent whole, a "well-sounding concert." Building upon personality systems interactions (PSI) theory, which explains personality functioning on the basis of interactions among cognitive and affective-motivational personality systems, we demonstrate how diverse perspectives on personality coherence may be functionally integrated. To do so, we describe interactions among four cognitive personality systems considered to underlie and optimize two meta principles of personality functioning: self-growth (in terms of the integration of adverse experiences) and action control (in terms of goal pursuit). These meta principles establish different subtypes of personality coherence, each emphasized by different psychological perspectives. We highlight the interdisciplinary relevance and practical application of the present approach and conclude with implications for future research.
... Olivers and Van der Burg (2008) examined the effect of simultaneous auditory stimuli on the AB. The results showed that the AB was also reduced when targets were presented simultaneously in the auditory and visual modalities, a phenomenon attributed to an audiovisual enhancement effect (Kranczioch & Thorne, 2015; Schneider, 2013; Yasuhiro et al., 2014). However, these studies considered only that simultaneously presented audiovisual targets enhance attention, ignoring the fact that stimulus salience also contributes to attentional enhancement. ...
... Additionally, results similar to previous studies were observed in the synchronized-with-T2 condition (Kranczioch & Thorne, 2015; Yasuhiro et al., 2014). Since it is not clear whether highly salient stimuli can affect the reduction in the AB through attentional enhancement, and the enhancement appears perceptual in nature, as warning or alertness manipulations had little to no effect (especially not on T2), we established the synchronized-with-all condition to exclude the effect of salience on cross-modal attentional enhancement. ...
... Opposite changes in the accuracy of target recognition were observed in the lag-1 and lag-5 conditions in Experiment 1, which may be due to the task setting. The higher accuracy in the synchronized-with-all condition at lag 1, due to auditory-driven processing of visual stimuli, is consistent with previous studies and theoretical explanations (Yasuhiro et al., 2014), whereas the reversal exhibited at lag 5 may have been caused by increased interference from the distractors presented between the two targets as the distance between them increases. This resulted in increased attentional input from the participants to elements prior to T2 and reduced their ability to recognize subsequent targets. ...
Article
Full-text available
In the rapid serial visual presentation (RSVP) paradigm, response accuracy for the target decreases when it appears within a short time window (200–500 ms) after the previous target. This phenomenon is termed the attentional blink (AB). Although mechanisms of cross-modal processing that reduce the AB have been documented, researchers have not explored the differences across modal attentional conditions. In the present study, we used the RSVP paradigm to investigate the effect of auditory-driven visual target perceptual enhancement on the AB under modality-specific selective attention (Experiment 1) and bimodal-divided attention (Experiment 2). The results showed that cross-modal attentional enhancement was not moderated by stimulus salience. Moreover, the results also showed that accuracy was higher when the attended sound appeared simultaneously with the target. These results indicated that audiovisual enhancement reduced the AB and that stronger attentional enhancement in the bimodal-divided attentional condition led to the disappearance of the AB.
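
As a rough illustration of the timing behind the 200–500 ms window, the sketch below maps RSVP lag positions to the T1–T2 onset asynchrony, assuming a typical rate of about 100 ms per item (an assumption for illustration; the abstract states only the window itself):

```python
# Illustrative only: an RSVP rate of ~100 ms/item is assumed here,
# not taken from the cited study.
SOA_PER_ITEM_MS = 100

def in_blink_window(lag: int) -> bool:
    """True if T2 at this lag falls inside the 200-500 ms AB window."""
    t1_t2_soa = lag * SOA_PER_ITEM_MS
    return 200 <= t1_t2_soa <= 500

for lag in (1, 3, 5, 8):
    soa = lag * SOA_PER_ITEM_MS
    print(f"lag {lag}: SOA = {soa} ms, in blink window: {in_blink_window(lag)}")
```

On this assumed rate, lag 1 (100 ms) falls before the window, consistent with the well-known lag-1 sparing, while lags 3 to 5 land inside it.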
... Broad attentional scope is often facilitated by positive affect (Fredrickson, 2001; Lindquist, Satpute, Wager, Weber, & Barrett, 2015) and especially by positive emotions related to hedonic rewards (as incentive rewards narrow one's focus on the goal: Gable & Harmon-Jones, 2008; Pool et al., 2016). Another basic class of cognitive function is sensorimotor processing and coordination (Lehéricy et al., 2006; Takeshima & Gyoba, 2014). Sensorimotor processing can operate without conscious attention or deliberation, for example, when stimulus-response patterns are elicited automatically, as in non-verbal social interaction such as sensorimotor synchronization, emotional contagion, or intuitive parenting (Boccia, Piccardi, Di Marco, Pizzamiglio, & Guariglia, 2016; Dumas, Nadel, Soussignan, Martinerie, & Garnero, 2010; Keller, Chasiotis, & Runde, 1992; Miller, Xia, & Hastings, 2019). ...
... By contrast, volitional functions come into play more heavily once a goal has been decided upon and distractions or low levels of motivation render action planning, enactment, and goal maintenance difficult. With respect to low-level cognition, sensorimotor functions are inherently relevant to implementation in the action phase (Lehéricy et al., 2006; Takeshima & Gyoba, 2014). Error awareness plays a particular role in the evaluation phase and is typically accompanied by an immediate negative emotional response upon conscious detection of a deviation from expectations in goal progress. ...
Article
Full-text available
Over the last few decades, most personality psychology research has been focused on assessing personality via scores on a few broad traits and investigating how these scores predict various behaviors and outcomes. This approach does not seek to explain the causal mechanisms underlying human personality and thus falls short of explaining the proximal sources of traits as well as the variation of individuals' behavior over time and across situations. On the basis of the commonalities shared by influential process-oriented personality theories and models, we describe a general Dynamics of Personality Approach (DPA). The DPA relies heavily on theoretical principles applicable to complex adaptive systems that self-regulate via feedback mechanisms, and parses the sources of personality in terms of various psychological functions relevant in different phases of self-regulation. Thus, we consider personality to be rooted in individual differences in various cognitive, emotional-motivational, and volitional functions, as well as their causal interactions. In this article, we lay out twenty tenets for the DPA that may serve as a guideline for integrative research in personality science.
... Negative [9,10] and positive [11,12] emotional stimuli rapidly and strongly attract attention. Audio-visual integration could be modulated by visual attention [13,14]. Therefore, emotional information from visual stimuli could directly affect the audio-visual integration process. ...
... The processing speed for visual stimuli, which is controlled by visual complexity or spatial frequency, would affect fission illusion processing [22,23]. Moreover, it is difficult to induce a fission illusion with images of familiar faces and buildings [14]. While familiarity is a higher-level characteristic of visual stimuli, it influences the early stages of audio-visual integration [24]. ...
... Selective attention enhances the neural processes associated with the fission illusion [46]. Moreover, attention to one sensory modality can spread to another sensory modality and enhance multisensory integration processing [13,14]. Negative [9,10] and positive [11,12] stimuli strongly attract attention. ...
Article
Full-text available
Multisensory integration is affected by various types of information coming from different sensory stimuli. It has been suggested that emotional information also influences the multisensory integration process. The perceptual phenomena induced by audio-visual integration are modulated by emotional signals through changes in individuals' emotional states. However, the direct effects of emotional information on the multisensory integration process, without changes in emotional state, have not yet been examined. The present study investigated the effects of an emotional signal on audio-visual integration. The experiments compared the magnitude of audio-visual fission and fusion illusions using facial expression stimuli and simple geometric shapes. In Experiment 1, facial expression stimuli altered the criterion difference for discerning the number of flashes when two beeps were presented simultaneously, but did not affect the magnitude of the fission illusion. In Experiment 2, emotionally valenced simple geometric shapes induced a larger fission illusion. The present study thus found that the emotional valence of simple geometric shapes induced a larger fission illusion, and the current results suggest that emotional faces modulate the response criterion for discerning the number of flashes. Future studies should elucidate in detail the mechanism of emotional valence effects on audio-visual integration.
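
The criterion analysis mentioned in this abstract is standard signal detection theory. As a hedged sketch (an assumed analysis form, not the paper's code; the rates below are hypothetical), treating "two flashes" reports on genuine two-flash trials as hits and "two flashes" reports on one-flash/two-beep fission trials as false alarms, sensitivity d' and criterion c follow from the z-transformed rates:

```python
# Standard SDT sketch; the hit/false-alarm rates below are hypothetical.
from scipy.stats import norm

def dprime_criterion(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """d' = z(H) - z(FA); criterion c = -(z(H) + z(FA)) / 2."""
    z_h, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_h - z_fa, -(z_h + z_fa) / 2

d, c = dprime_criterion(hit_rate=0.85, fa_rate=0.40)
print(f"d' = {d:.2f}, c = {c:.2f}")
```

A shift in c with d' held constant corresponds to the pattern reported for emotional faces: the response criterion changes while the illusion's perceptual magnitude is unaffected.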
... This classic example of multisensory integration (e.g., Partan & Marler, 1999) occurs when certain pairs of incongruent visual lip movements and auditory speech stimuli are integrated, thus leading to a new percept (McGurk & MacDonald, 1976). [Footnote 1 in the citing article: Use of the term multisensory processing follows the definition put forward by Stein et al. (2010, p. 1719), that "... processing involve[s] more than one sensory modality but not necessarily specifying the exact nature of the interaction between them." Given the fact that attention may involve various types of multisensory processing that would not always lead to an outcome of multisensory integration (i.e., forming a new multisensory representation), we therefore agree with using multisensory processing to cover these heterogeneous phenomena.] [Rows from the citing article's summary table, flattened into this excerpt, list auditory facilitation of visual localization performance (spatial processing) and of visual letter identification performance (linguistic processing) by Takeshima & Gyoba (2014), together with left/right hemispace entries; Diesch (1995) is also listed.] The research shows that the McGurk effect occurs more frequently when the visual stimulus (i.e., the lip movements) is presented in the left rather than the right hemispace (Baynes, Funnell, & Fowler, 1994; Diesch, 1995). ...
... This asymmetry has been explained in terms of a right-hemisphere advantage for face processing (e.g., Borod et al., 1998; Ellis, 1983; Sergent, Ohta, & MacDonald, 1992). Takeshima and Gyoba (2014) recently demonstrated a larger auditory facilitation resulting from the presentation of a simultaneous tone on visual localization performance in the left as compared to the right hemispace. Their suggestion was that this asymmetry could be attributed to the right hemisphere being specialized for the processing of spatial information (Kimura, 1969; Umiltà et al., 1974). ...
Article
Full-text available
The extent to which attention modulates multisensory processing in a top-down fashion is still a subject of debate among researchers. Typically, cognitive psychologists interested in this question have manipulated the participants' attention in terms of single/dual tasking or focal/divided attention between sensory modalities. We suggest an alternative approach, one that builds on the extensive older literature highlighting hemispheric asymmetries in the distribution of spatial attention. Specifically, spatial attention in vision, audition, and touch is typically biased preferentially toward the right hemispace, especially under conditions of high perceptual load. We review the evidence demonstrating such an attentional bias toward the right in extinction patients and healthy adults, along with the evidence of such rightward-biased attention in multisensory experimental settings. We then evaluate those studies that have demonstrated either a more pronounced multisensory effect in right than in left hemispace, or else similar effects in the two hemispaces. The results suggest that the influence of rightward-biased attention is more likely to be observed when the crossmodal signals interact at later stages of information processing and under conditions of higher perceptual load, that is, conditions under which attention is perhaps a compulsory enhancer of information processing. We therefore suggest that the spatial asymmetry in attention may provide a useful signature of top-down attentional modulation in multisensory processing.
Article
Full-text available
In the rapid serial visual presentation (RSVP) paradigm, sound affects participants' recognition of targets. Although many studies have shown that sound improves cross-modal processing, researchers have not yet explored the effects of sound semantic information with respect to different locations and processing modalities after removing sound saliency. In this study, the RSVP paradigm was used to investigate the difference between attention under conditions of semantics consistent versus inconsistent with the target (Experiment 1), as well as the difference between top-down (Experiment 2) and bottom-up processing (Experiment 3) for sounds semantically consistent with target 2 (T2) at different sequence locations, after removing sound saliency. The results showed that cross-modal processing significantly reduced the attentional blink (AB). The early or lagged appearance of sounds consistent with T2 did not affect participants' judgments in the exogenous attentional modality. However, visual target judgments were improved with endogenous attention. The sequential location of sounds consistent with T2 influenced the judgment of auditory and visual congruency. The results illustrate the effects of sound semantic information in different locations and processing modalities.