Fig 1 - uploaded by Liad Mudrik
Experimental stimuli. In each trial of the continuous flash suppression condition (a), the test scene (either congruent or incongruent) was gradually introduced to one eye to compete with a Mondrian presented to the dominant eye. The contrast of the test scene was linearly ramped up from 0% to 100% within 1 s of the beginning of the trial; the contrast of the Mondrian decreased at a rate of 2% every 100 ms for the next 5,100 ms. In each trial of the control condition (b), the test scene was blended into the dynamic noise pattern of the Mondrian and presented binocularly; contrast of the scene was ramped up at a rate of 2.5% every 100 ms. In both conditions, each scene was shown in both an incongruent version and a congruent version in separate trials. The example scenes shown here (c) depict a woman putting either food or a chessboard in the oven, a boy holding a bow and either an arrow or a tennis racket, and two athletes playing basketball with either a ball or a watermelon.  
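The two contrast schedules in the caption can be written out as simple piecewise functions. This is an illustrative sketch only: the function names are invented, time is treated as continuous milliseconds rather than display frames, and it assumes the Mondrian's 5,100-ms decrease begins after the scene's initial 1-s ramp (my reading of "the next 5,100 ms").

```python
def cfs_contrasts(t_ms):
    """CFS condition (panel a): (scene, Mondrian) contrast in 0-1 at time t_ms.

    The scene ramps linearly from 0% to 100% over the first 1 s; the Mondrian
    then drops 2% every 100 ms for the next 5,100 ms, reaching 0% by ~6.1 s.
    """
    scene = min(t_ms / 1000.0, 1.0)
    steps = max(t_ms - 1000, 0) // 100   # completed 100-ms steps after the 1-s ramp
    mondrian = max(1.0 - 0.02 * steps, 0.0)
    return scene, mondrian

def control_contrast(t_ms):
    """Control condition (panel b): scene contrast rises 2.5% every 100 ms."""
    return min(0.025 * (t_ms // 100), 1.0)
```

For example, 3 s into a CFS trial the scene is at full contrast while the Mondrian has stepped down to 60% (20 completed 100-ms steps); in the control condition the scene reaches full contrast at 4 s.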


Source publication
Article
Full-text available
Human conscious awareness is commonly seen as the climax of evolution. However, what function (if any) it serves in human behavior is still debated. One of the leading suggestions is that the cardinal function of conscious awareness is to integrate numerous inputs, including the multitude of features and objects in a complex scene, across different lev...

Similar publications

Article
Full-text available
The lateral geniculate nucleus (LGN) of the dorsal thalamus is the primary recipient of the two eyes' outputs. Most LGN neurons are monocular in that they are activated by visual stimulation through only one (dominant) eye. However, there are both intrinsic connections and inputs from binocular structures to the LGN that could provide these neurons...
Article
Full-text available
Deprivation of visual information from one eye for a 120-minute period in normal adults results in a temporary strengthening of the patched eye’s contribution to binocular vision. This plasticity for ocular dominance in adults has been demonstrated by binocular rivalry as well as binocular fusion tasks. Here, we investigate how its dynamic...
Article
Full-text available
Cross-modal interactions can lead to enhancement of visual perception, even for visual events below awareness. However, the underlying mechanism is still unclear. Can purely bottom-up cross-modal integration break through the threshold of awareness? We used a binocular rivalry paradigm to measure perceptual switches after brief flashes or sounds wh...
Article
Full-text available
Alpha rhythms (∼10Hz) in the human brain are classically associated with idling activities, being predominantly observed during quiet restfulness with closed eyes. However, recent studies demonstrated that alpha (∼10Hz) rhythms can directly relate to visual stimulation, resulting in oscillations, which can last for as long as one second. This alpha...
Article
Full-text available
During binocular rivalry visual consciousness fluctuates between two dissimilar monocular images. We investigated the role of attention in this phenomenon by comparing event-related potentials (ERPs) when binocular-rivalry stimuli were attended with when they were unattended. Stimuli were dichoptic, orthogonal gratings that yielded binocular rivalr...

Citations

... The dominant view holds that, whereas consciousness might not be required for low-level perceptual binding, it is necessary for high-level integrative mechanisms (Mudrik et al., 2014). This common belief seems to be challenged by several studies reporting unconscious integration that requires high-level semantic processing of multiple items, such as object-scene congruency (Mudrik et al., 2011), multiple-word expressions, and complex arithmetic equations (Sklar et al., 2012). Unfortunately, some of these findings have been questioned in recent years over replication failures or inconclusive evidence (Moors & Hesselmann, 2018; Moors et al., 2016). ...
... We performed an a priori power analysis using G*Power 3.1 (Faul et al., 2007) to estimate the appropriate number of participants. The effect size (Cohen's d) was estimated to be 0.7 on the basis of previous studies detecting the unconscious integration effect with similar breaking CFS paradigms (Mudrik et al., 2011; Sklar et al., 2012; Stein et al., 2015). The power calculation yielded an estimated minimum of 24 participants to detect such an effect size with 90% power (with α set to .05). ...
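The sample size quoted in this excerpt can be reproduced with a textbook approximation. The sketch below assumes a two-sided one-sample/paired t test (the test family is my assumption; the excerpt does not name it) and uses the normal-approximation formula with Guenther's (1981) small-sample correction, n ≈ ((z_{1-α/2} + z_{power}) / d)² + z_{1-α/2}²/2.

```python
import math

def z_quantile(p: float) -> float:
    """Standard-normal quantile via bisection on the erf-based CDF."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def paired_t_sample_size(d: float, alpha: float = 0.05, power: float = 0.90) -> int:
    """Approximate N for a two-sided one-sample/paired t test on effect size d,
    using the normal approximation plus Guenther's z^2/2 correction."""
    z_a = z_quantile(1.0 - alpha / 2.0)   # two-sided alpha
    z_b = z_quantile(power)
    return math.ceil(((z_a + z_b) / d) ** 2 + z_a ** 2 / 2.0)
```

With d = 0.7, α = .05, and 90% power this gives 24, matching the minimum reported in the excerpt; G*Power's exact noncentral-t computation may differ by a participant or two for other parameter settings.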
Article
Full-text available
One central question in the scientific and philosophical study of consciousness concerns the scope of human consciousness. There is a lively debate as to whether high-level information integration necessarily depends on consciousness. This study presents a new form of unconscious integration based on the facingness between two individuals. Using a breaking continuous flash suppression paradigm, Experiments 1–3 found that two facing human heads were privileged in breaking into awareness compared with nonfacing pairs. Experiments 4 and 5 demonstrated that the breakthrough difference between facing and nonfacing pairs could not be attributed to low-level or mid-level factors. Experiments 6, 7a, and 7b showed that the unconscious priority of facing pairs was significantly diminished when the holistic processing of the two agents was disrupted. Experiments 8–11 demonstrated that the advantage of facing pairs was only observable for human agents and not for daily objects, directional arrows, or nonhuman animals. These findings have critical implications for better understanding the scope of human consciousness and the origins of social vision.
... Previous research has attributed faster CFS breakthrough (equivalently, lower contrast) to unconscious processing of suppressed images (Gayet et al., 2014;Mudrik et al., 2011). As the current study found uniform suppression depth for all tested images, even though bCFS thresholds varied, it is clear that differences in bCFS thresholds alone should not be interpreted in terms of expeditious unconscious processing of semantically relevant images. ...
... It is not clear that all variation in bCFS thresholds can be explained by low-level image properties. There may be important high-level factors that also contribute to the salience of a given target image that make it visible at lower contrasts than other images (Gayet et al., 2014;Jiang et al., 2007;Mudrik et al., 2011;Yang et al., 2007). For example, faces provide essential social information and we are very highly attuned to them. ...
Article
Full-text available
When the eyes view separate and incompatible images, the brain suppresses one image and promotes the other into visual awareness. Periods of interocular suppression can be prolonged during continuous flash suppression (CFS) – when one eye views a static ‘target’ while the other views a complex dynamic stimulus. Measuring the time needed for a suppressed image to break CFS (bCFS) has been widely used to investigate unconscious processing, and the results have generated controversy regarding the scope of visual processing without awareness. Here, we address this controversy with a new ‘CFS tracking’ paradigm (tCFS) in which the suppressed monocular target steadily increases in contrast until breaking into awareness (as in bCFS) after which it decreases until it again disappears (reCFS), with this cycle continuing for many reversals. Unlike bCFS, tCFS provides a measure of suppression depth by quantifying the difference between breakthrough and suppression thresholds. tCFS confirms that (i) breakthrough thresholds indeed differ across target types (e.g. faces vs gratings, as bCFS has shown) – but (ii) suppression depth does not vary across target types. Once the breakthrough contrast is reached for a given stimulus, all stimuli require a strikingly uniform reduction in contrast to reach the corresponding suppression threshold. This uniform suppression depth points to a single mechanism of CFS suppression, one that likely occurs early in visual processing because suppression depth was not modulated by target salience or complexity. More fundamentally, it shows that variations in bCFS thresholds alone are insufficient for inferring whether the barrier to achieving awareness exerted by interocular suppression is weaker for some categories of visual stimuli compared to others.
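The depth measure at the heart of tCFS is simply the gap between each breakthrough contrast and the following re-suppression contrast. Below is a minimal sketch, assuming depth is summarized as a contrast ratio in dB; the function name and exact convention are mine, not the paper's.

```python
import math

def suppression_depth_db(breakthroughs, resuppressions):
    """Mean suppression depth over tCFS reversal cycles.

    Each cycle contributes 20*log10(breakthrough / re-suppression contrast),
    i.e. how far (in dB) contrast must fall after breakthrough before the
    target disappears again.
    """
    depths = [20.0 * math.log10(b / s)
              for b, s in zip(breakthroughs, resuppressions)]
    return sum(depths) / len(depths)
```

A target that breaks through at 40% contrast and re-suppresses at 4% yields a depth of about 20 dB; the paper's key finding is that this gap is roughly uniform across stimulus categories even when breakthrough thresholds differ.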
... Firstly, the effect of visual perspective per se is not trivial but, rather, consistent with a number of studies using the same paradigm, suggesting that not only low-level but also high-level stimulus properties can be processed before consciousness (i.e., something typically thought not to be possible without awareness; see [35–37] for discussions of this topic). For instance, it has been demonstrated that face orientation (e.g., [24,38,39]), facial emotional content (e.g., [40–42]), eye gaze direction [43,44], familiarity and emotional valence of words [45], multimodal congruency [46], and the degree of natural content of the visual scene [47] can modulate the suppression time. Some of these high-level effects have also been replicated with more objective, accuracy-based techniques (e.g., face orientation [34] and eye gaze direction [48]). ...
Article
Full-text available
Spatial perspective and identity of visual bodily stimuli are two key cues for the self-other distinction. However, how they emerge into visual awareness is largely unknown. Here, self- or other-hands presented in first- or third-person perspective were compared in a breaking-Continuous Flash Suppression paradigm (Experiment 1), measuring the time the stimuli need to access visual awareness, and in a Binocular Rivalry paradigm (Experiment 2), measuring predominance in perceptual awareness. Results showed that, irrespective of identity, the first-person perspective speeded up access, whereas the third-person one increased dominance. We suggest that the effect of first-person perspective represents an unconscious prioritization of an egocentric body coding important for visuomotor control. On the other hand, the effect of third-person perspective indicates a conscious advantage of an allocentric body representation fundamental for detecting the presence of another intentional agent. In sum, the emergence of the self-other distinction into visual awareness would strongly depend on the interplay between spatial perspectives, with an inverse prioritization before and after conscious perception. Identity features, by contrast, might rely on post-perceptual processes.
... The current study adds solid evidence to the robustness and flexibility of subliminal semantic processing, contrary to the traditional opinion that unconscious processing is short-lived and stereotypical [81,82]. Given the close interplay of top-down attention and subliminal information, the boundary between conscious control and unconscious processing becomes vague [83]. ...
Article
Full-text available
Theories of embodied cognition suggest that hand motions and cognition are closely interconnected. An emerging technique of tracking how participants move a computer mouse (i.e., the mouse-tracking technique) has shown advantages over the traditional response time measurement to detect implicit cognitive conflicts. Previous research suggests that attention is essential for subliminal processing to take place at a semantic level. However, this assumption is challenged by evidence showing the presence of subliminal semantic processing in the near-absence of attention. The inconsistency of evidence could stem from the insufficient sensitivity in the response time measurement. Therefore, we examined the role of attention in subliminal semantic processing by analyzing participants’ hand motions using the mouse-tracking technique. The results suggest that subliminal semantic processing is not only enhanced by attention but also occurs when attention is disrupted, challenging the necessity of facilitated top-down attention for subliminal semantic processing, as claimed by a number of studies. In addition, by manipulating the color of attentional cues, our experiment shows that the cue color per se could influence participants’ response patterns. Overall, the current study suggests that attentional status and subliminal semantic processing can be reliably revealed by temporal–spatial features extracted from cursor motion trajectories.
... This, however, disagrees with previous studies suggesting that even the meaning of complex scenes can be analysed without conscious awareness. Scenes containing incongruent object relations (such as a basketball player dunking a melon) broke through masking faster in a continuous flash suppression [39] paradigm [40], or impaired response times to subsequently presented target scenes [41]. These findings have, however, been called into question [42,43], and brain activity does not differentiate between masked congruent and incongruent scenes [44]. ...
Article
Full-text available
The visual cortex contains information about stimuli even when they are not consciously perceived. However, it remains unknown whether the visual system integrates local features into global objects without awareness. Here, we tested this by measuring brain activity in human observers viewing fragmented shapes that were either visible or rendered invisible by fast counterphase flicker. We then projected measured neural responses to these stimuli back into visual space. Visible stimuli caused robust responses reflecting the positions of their component fragments. Their neural representations also strongly resembled one another regardless of local features. By contrast, representations of invisible stimuli differed from one another and, crucially, also from visible stimuli. Our results demonstrate that even the early visual cortex encodes unconscious visual information differently from conscious information, presumably by only encoding local features. This could explain previous conflicting behavioural findings on unconscious visual processing.
... In fact, when reviewing the history of the field, one might conclude that it is going in circles; at each iteration, strong claims are made about the scope of unconscious processing, which are then followed by methodological criticisms of some sort, questioning the validity of these claims (for a description of this process with respect to unconscious semantic processing, see Kouider & Dehaene, 2007). More recently, a surge of findings reporting remarkably complicated unconscious processes (Mudrik et al., 2011;Sklar et al., 2012;Van Opstal, Calderon, et al., 2011a) was followed by a wave of replication failures (Biderman & Mudrik, 2017;Moors et al., 2016;Moors & Hesselmann, 2017;Stein et al., 2020;Zerweck et al., 2021) and methodological criticism (e.g., Meyen et al., 2022;Rothkirch & Hesselmann, 2017;Rothkirch et al., 2022;Schmidt, 2015;Shanks, 2017). And so, since its inception (see again Kouider & Dehaene, 2007), the field has been characterized by pendulum-like oscillations between assigning high-level functions to unconscious processes and suggesting that they strongly depend on conscious processing. ...
Article
Full-text available
How convincing is current evidence for unconscious processing? Recently, a major criticism suggested that some, if not much, of this evidence might be explained by a mere statistical phenomenon: regression to the mean (RttM). Excluding participants based on an awareness assessment is a common practice in studies of unconscious processing, and this post hoc data selection might lead to false effects that are driven by RttM for aware participants wrongfully classified as unaware. Here, we examined this criticism using both simulations and data from 12 studies probing unconscious processing (35 effects overall). In line with the original criticism, we confirmed that the reliability of awareness measures in the field is concerningly low. Yet, using simulations, we showed that reliability measures might be unsuitable for estimating error in awareness measures. Furthermore, we examined other solutions for assessing whether an effect is genuine or reflects RttM; all suffered from substantial limitations, such as a lack of specificity to unconscious processing, lack of power, or unjustified assumptions. Accordingly, we suggest a new nonparametric solution, which enjoys high specificity and relatively high power. Together, this work emphasizes the need to account for measurement error in awareness measures and evaluate its consequences for unconscious processing effects. It further suggests a way to meet the important challenge posed by RttM, in an attempt to establish a reliable and robust corpus of knowledge in studying unconscious processing.
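The regression-to-the-mean artifact described in this abstract is easy to reproduce in simulation: give every participant the same small but genuinely nonzero awareness score, add independent measurement noise, and exclude "aware" participants post hoc. All numbers below (true score, noise SD, sample size) are arbitrary illustration values, not taken from the study.

```python
import random
import statistics

random.seed(7)                 # reproducible illustration
TRUE_AWARENESS = 0.3           # every simulated participant is slightly aware
NOISE_SD = 0.5                 # the awareness measure itself is unreliable

def measure() -> float:
    return TRUE_AWARENESS + random.gauss(0.0, NOISE_SD)

# Two independent administrations of the awareness check per participant.
pairs = [(measure(), measure()) for _ in range(20_000)]

# Post hoc exclusion: keep only participants who *looked* unaware (score <= 0).
selected_first = [first for first, second in pairs if first <= 0.0]
kept = [second for first, second in pairs if first <= 0.0]

# The selected group's first scores are negative by construction, but their
# retest scores regress toward the true, nonzero mean: the "unaware" group
# was never actually unaware.
print(round(statistics.mean(selected_first), 2), round(statistics.mean(kept), 2))
```

The first printed mean is negative (that is what the exclusion selected for), while the second sits near the true score of 0.3, which is exactly the pattern that can masquerade as an "unconscious processing" effect.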
... Unexpected stimuli are known to capture exogenous attention (reviewed in Carrasco, 2011), and exogenous feature-based attention increases initial dominance in binocular rivalry (Chong & Blake, 2006; Mitchell et al., 2004). Moreover, natural scenes with embedded incongruous objects are more likely to be initially dominant in binocular rivalry than scenes with congruous objects (Mudrik, Breska, Lamy, & Deouell, 2011), and they also have extended periods of perceptual dominance. However, unexpected natural images were found to be more likely to be initially dominant than expected images, with no significant difference in duration of dominance (Denison et al., 2016). ...
Article
Full-text available
Perception is influenced by predictions about the sensory environment. These predictions are informed by past experience and can be shaped by exposure to recurring patterns of sensory stimulation. Predictions can enhance perception of a predicted stimulus, but they can also suppress it by favoring novel and unexpected sensory information that is inconsistent with the predictions. Here we employed statistical learning to assess the effects of exposure to consistent sequences of oriented gratings on subsequent visual perceptual selection, as measured with binocular rivalry. Following statistical learning, the first portion of a learned sequence of stimulus orientations was presented to both eyes, followed by simultaneous presentation of the next grating in the sequence to one eye and an orthogonal unexpected orientation to the other eye. We found that subjects were more likely to perceive the grating that matched the orientation that was consistent with the predictive context. That is, observers were more likely to see what they expected to see, compared to the likelihood of perceiving the unexpected stimulus. Some other studies in the literature have reported the opposite effect of prediction on visual perceptual selection, and we suggest that these inconsistencies may be due to differences across studies in the level of the visual processing hierarchy at which competing perceptual interpretations are resolved.
... When attention is captured or drawn to a stimulus, it means that it has properties that are processed prior to the arrival of attention (preattentively), drawing attention to that location for further analysis. Importantly, objects in unexpected contexts, or with unexpected features, can also capture attention in change detection and change blindness tasks (Horstmann & Ansorge, 2016;Lapointe & Milliken, 2016;Mudrik et al., 2011;Underwood et al., 2008). ...
Article
The expected color of an object influences how it is perceived. For example, a banana in a greyscale photo may appear slightly yellow because bananas are expected to be yellow. This phenomenon is known as the memory color effect (MCE), and the objects with a memory color are called "color-diagnostic." The MCE is theorized to be a top-down influence of color knowledge on visual perception. However, its validity has been questioned because most evidence for the MCE is based on subjective reports. Here a change detection task is used as an objective measure of the effect and the results show that change detection differs for color-diagnostic objects. Specifically, it was predicted and found that unnaturally colored color-diagnostic objects (e.g., a blue banana) would attract attention and thus be discovered more quickly and accurately. In the experiment, two arrays alternated with the target present in one array and absent in the other while all other objects remained unchanged. Participants had to find the target as quickly and accurately as possible. In the experimental condition, the targets were color-diagnostic objects (e.g., a banana) presented in either their natural (yellow) or an unnatural (blue) color. In the control condition, non-color-diagnostic objects (e.g., a mug) were presented with the same colors as the color-diagnostic objects. Unnaturally colored color-diagnostic objects were found more quickly, which suggests that the MCE is a top-down, preattentive process that can influence a nonsubjective visual perceptual task such as change detection.
... This is possible because global image statistics of natural scenes are sufficient for a coarse understanding of those scenes (Oliva & Schyns, 2000; Torralba et al., 2006). The human visual system likely encodes scene gist representations efficiently and utilizes them for object processing (Bar et al., 2008; Munneke et al., 2013), although it is still controversial whether the scene consistency effect occurs unconsciously (Moors et al., 2016; Mudrik et al., 2011). Efficient scene gist processing may facilitate object recognition by supporting perceptual processing of object images via neural feedback or perceptual sharpening (Bar, 2004; Brandman & Peelen, 2017; Rossel et al., 2022), although several studies argued that the scene consistency effect in object recognition was attributable to relatively higher cognitive processes such as response bias (Hollingworth & Henderson, 1998). ...
Article
Visual object recognition is facilitated by contextually consistent scenes in which the object is embedded. Scene gist representations extracted from the scenery backgrounds yield this scene consistency effect. Here we examined whether the scene consistency effect is specific to the visual domain or if it is crossmodal. Through four experiments, the accuracy of the naming of briefly presented visual objects was assessed. In each trial, a 4-s sound clip was presented and a visual scene containing the target object was briefly shown at the end of the sound clip. In a consistent sound condition, an environmental sound associated with the scene in which the target object typically appears was presented (e.g., forest noise for a bear target object). In an inconsistent sound condition, a sound clip contextually inconsistent with the target object was presented (e.g., city noise for a bear). In a control sound condition, a nonsensical sound (sawtooth wave) was presented. When target objects were embedded in contextually consistent visual scenes (Experiment 1: a bear in a forest background), consistent sounds increased object-naming accuracy. In contrast, sound conditions did not show a significant effect when target objects were embedded in contextually inconsistent visual scenes (Experiment 2: a bear in a pedestrian crossing background) or in a blank background (Experiments 3 and 4). These results suggested that auditory scene context has weak or no direct influence on visual object recognition. It seems likely that consistent auditory scenes indirectly facilitate visual object recognition by promoting visual scene processing.
... Similarly, Gray et al. (2013) ... (Akechi et al., 2014; Costello et al., 2009; Jiang et al., 2007; Li & Li, 2015; Madipakkam et al., 2015; Mudrik et al., 2011; Paffen et al., 2018; Stein & Sterzer, 2012; Zhou et al., 2010). A third concern arises from the inevitable involvement of motor activity production in RT measures. Studies using bCFS assume that post-perceptual motor processes, including the decision to make a specific response (e.g., to press a particular key indicating stimulus location), preparation of the relevant motor plan, and motor activity production, all unfold at an equal rate for different stimulus categories. ...
Article
Full-text available
Detecting faces and identifying their emotional expressions are essential for social interaction. The importance of expressions has prompted suggestions that some emotionally relevant facial features may be processed unconsciously, and it has been further suggested that this unconscious processing yields preferential access to awareness. Evidence for such preferential access has predominantly come from reaction times in the breaking continuous flash suppression (bCFS) paradigm, which measures how long it takes different stimuli to overcome interocular suppression. For instance, it has been claimed that fearful expressions break through suppression faster than neutral expressions. However, in the bCFS procedure, observers can decide how much information they receive before committing to a report, so although their responses may reflect differential detection sensitivity, they may also be influenced by differences in decision criteria, stimulus identification, and response production processes. Here, we employ a procedure that directly measures sensitivity for both face detection and identification of facial expressions, using predefined exposure durations. We apply diverse psychophysical approaches—forced-choice localization, presence/absence detection, and staircase-based threshold measurement; across six experiments, we find that emotional expressions do not alter detection sensitivity to faces as they break through CFS. Our findings constrain the possible mechanisms underlying previous findings: faster reporting of emotional expressions' breakthrough into awareness is unlikely to be due to the presence of emotion affecting perceptual sensitivity; the source of such effects is likely to reside in one of the many other processes that influence response times.