Table 2 - uploaded by Hermann J Müller
Feature Contrasts (Rather Than Absolute Feature Values) of the Targets and of the Weak and Strong Additional Singletons (ASs), Relative to the Distractors 


Source publication
Article
Full-text available
In efficient search for feature singleton targets, additional singletons (ASs) defined in a nontarget dimension are frequently found to interfere with performance. All search tasks that are processed via a spatial saliency map of the display would be predicted to be subject to such AS interference. In contrast, dual-route models, such as feature in...

Similar publications

Article
Full-text available
For decades, researchers have examined visual search. Much of this work has focused on the factors (e.g., movement, set size, luminance, distractor features and proximity) that influence search speed. However, no research has explored whether people are aware of the influence of these factors. For instance, increases in set size will typically slow...
Article
Full-text available
Infants respond preferentially to faces and face‐like stimuli from birth, but past research has typically presented faces in isolation or amongst an artificial array of competing objects. In the current study infants aged 3‐ to 12‐months viewed a series of complex visual scenes; half of the scenes contained a person, the other half did not. Infants...
Preprint
Full-text available
Visual attention is an essential factor that affects how humans perceive visual signals. This report investigates how distortions in an image could distract human visual attention using Bayesian visual search models, specifically Maximum-a-posteriori (MAP) \cite{findlay1982global}\cite{eckstein2001quantifying} and Entropy Limit Minimization...
Article
Full-text available
Although diverse, theories of visual attention generally share the notion that attention is controlled by some combination of three distinct strategies: (1) exogenous cuing from locally contrasting primitive visual features, such as abrupt onsets or color singletons (e.g., L. Itti, C. Koch, & E. Niebur, 1998), (2) endogenous gain modulation of exog...

Citations

... Kumada (1999) showed that if participants were asked to detect the presence of an orientation-defined target, orientation-defined distractors caused interference, but colour-defined distractors did not; likewise, orientation-defined distractors failed to interfere with search for a colour-defined target. Whilst cross-dimensional interference can be observed in present-absent search in some limited conditions (e.g., Zehetleitner et al., 2009), it remains much weaker than the equivalent within-dimension interference. Thus, depending on the response required in a task, the influence of a salient distractor may vary. ...
... One characteristic of the current work is that the prevalence of the singleton distractor was relatively low, since the distractor appeared only on 50% of trials, and within those 50% of trials there were three different possible singletons, giving a singleton prevalence of 16.66% for each singleton. Previous work (e.g., Müller et al., 2009; Zehetleitner et al., 2009) has shown how reducing distractor prevalence can lead to increased interference from a cross-dimensional distractor. However, it is important to acknowledge that this previous work used single-feature and not conjunction search tasks. ...
Article
Full-text available
The current study reassessed the potential of salient singleton distractors to interfere in conjunction search. Experiment 1 investigated conjunctions of colour and orientation, using densely packed arrays that produced highly efficient search. The results demonstrated clear interference effects of singleton distractors in the task-relevant dimensions (colour and orientation), but no interference from those in a task-irrelevant dimension (motion). Goals exerted an influence in constraining this interference, such that singleton interference along one dimension was modulated by target relevance along the other task-relevant dimension. Colour singleton interference was much stronger when the singleton shared the target orientation, and orientation interference was much stronger when the orientation singleton shared the target colour. Experiments 2 and 3 examined singleton-distractor interference in feature search. The results showed strong interference particularly from task-relevant dimensions but a reduced role for top-down, feature-based modulation of singleton interference, compared with conjunction search. The results are consistent with a model of conjunction search based on core elements of the guided search and dimension weighting approaches, whereby weighted dimensional feature-contrast signals are combined with top-down feature guidance signals in a feature-independent map that serves to guide search.
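The combination rule this abstract describes (dimensionally weighted feature-contrast signals, optionally summed with top-down feature-guidance signals, pooled into a single feature-independent map) can be sketched in a few lines. The function and parameter names, the toy contrast values, and the weight settings below are all hypothetical illustrations, not the authors' implementation:

```python
def priority_map(contrast_maps, dim_weights, feature_guidance=None):
    """Sum dimensionally weighted feature-contrast signals (plus optional
    top-down feature-guidance signals) into one feature-independent map.

    contrast_maps: dict dimension -> list of local contrast signals,
                   one per display location.
    dim_weights:   dict dimension -> attentional weight.
    """
    n = len(next(iter(contrast_maps.values())))
    priority = [0.0] * n
    for dim, contrasts in contrast_maps.items():
        w = dim_weights.get(dim, 1.0)
        for i, c in enumerate(contrasts):
            priority[i] += w * c          # weighted dimensional contrast
    if feature_guidance is not None:      # top-down feature guidance
        priority = [p + g for p, g in zip(priority, feature_guidance)]
    return priority

# Four display locations; the conjunction target at index 2 has moderate
# contrast in both task-relevant dimensions, while a task-irrelevant
# motion singleton at index 0 is locally the strongest single signal.
maps = {
    "colour":      [0.0, 0.0, 0.6, 0.0],
    "orientation": [0.0, 0.0, 0.6, 0.0],
    "motion":      [1.5, 0.0, 0.0, 0.0],
}
weights = {"colour": 1.0, "orientation": 1.0, "motion": 0.2}  # down-weight motion
p = priority_map(maps, weights)
print(p.index(max(p)))  # prints 2: the target wins despite the motion singleton
```

With equal weights on all three dimensions, the motion singleton (1.5) would out-compete the target's summed contrast (1.2); down-weighting the task-irrelevant dimension reproduces, in miniature, the absence of motion-singleton interference described above.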
... Successor models like guided search (Wolfe, 1994) allow for weights to be set at each of these levels, with attention guided by a priority map (Fecteau & Munoz, 2006; Serences & Yantis, 2006). The priority map combines inputs from multiple features and attributes into a single representation, though it is possible to find situations where a stimulus seems to produce an attentional deployment and a response without the need for a map that combines signals from multiple dimensions (Chan & Hayward, 2009; but see Zehetleitner, Proulx, & Müller, 2009). ...
Chapter
In visual search tasks, observers typically look for one or more target items among distracting items. Visual search lies at an important intersection between vision and attention. It is impossible to fully process everything in the visual scene at once. Most acts of visual object recognition require that resources be directed to one (or a very few) items. Visual selective attention is used to restrict processing for this purpose. Explaining visual search behavior involves explaining how visual selective attention is deployed to get this done. This chapter reviews why we search, how search experiments have been conducted in the lab, and what the resulting data can (and cannot) tell us. Attention is guided by a limited set of stimulus attributes. The candidates for these attributes are discussed here. The chapter also considers how attention is guided by scene structure and the interaction of attentional mechanisms with long‐term and working memory.
... Vision science strives to determine how attention is distributed over objects in our surroundings, the relation this distribution has with upcoming eye movements, and the information that is extracted during the process. The allocation of covert attention is a competitive process jointly influenced by bottom-up and top-down factors (Cave & Wolfe, 1990; Wolfe, 1994), where each perceived object is processed according to local salience-based features (e.g., brightness) and weighted by task relevance (Müller & Krummenacher, 2006; Zehetleitner, Proulx, & Müller, 2009). Accordingly, when salient objects are displayed along with a search target, they compete for attentional resources and have been shown to cause substantial interference (Bacon & Egeth, 1994; Becker, 2007; Fecteau & Munoz, 2006; Folk & Remington, 1998; Folk, Remington, & Johnston, 1992; Lamy, Tsal, & Egeth, 2003; Theeuwes, 1991, 1992; Yantis & Hillstrom, 1994). ...
... At the beginning of the trial, the placeholders were potential target locations, requiring attention to be directed at these locations. Distractors that are physically more salient than the target provide a high incentive for suppression (Müller, Geyer, Zehetleitner, & Krummenacher, 2009; Zehetleitner, Proulx, & Müller, 2009). Thus, participants in our study could have acquired a suppression strategy (Müller et al., 2009), limiting distractor processing to initial stages. ...
Thesis
Full-text available
Our visual system is fovea-heavy, which means that in-depth processing occurs only in the centre of the retina, forcing the eyes to make constant movements in order to bring visual elements into focus. Despite this, eye movements go largely unnoticed and the environment is perceived as visually stable. Pre-saccadic shifts of attention might be guaranteeing this stability by easing the transition from one foveated image to another. Before an eye movement attention shifts to the location where the eyes will land and visual elements presented there are preferentially processed. A similar mechanism, also based on the allocation of attention in eye-centred coordinates, is known as remapping. It allows attention to be maintained on locations of interest across eye movements while accounting for the retinal displacement caused by each upcoming movement. In the current thesis, we are concerned with how the visual elements present in the environment shape the allocation of attention before eye movements. We first aimed to determine whether pre-saccadic shifts of attention are a precondition of all saccades, irrespective of goals. We showed that whether the saccade was goal-directed, to the intended target, or involuntary, erroneously directed to a capturing distractor, made little difference to the pre-saccadic shift of attention. Retinal displacement caused by involuntary saccades was also accounted for by the visual system. Next, the project focused on how the presented visual elements affect the programming of eye movements, by investigating how the decision to make an eye movement is affected by the number of target alternatives. We saw evidence that a larger set-size can reduce saccadic reaction times without increasing the error rate, a finding not predicted by a popular model. Further, whether the presence of visual elements in and around the saccade landing point influences the shifts of attention was investigated. 
We demonstrate that objects and their arrangement shape the distribution of attention, and that the effect is not driven by saccade metrics alone. Finally, we looked at the spatial and temporal distribution of visual attention when a saccade target is removed shortly before the eye movement.
... While the search slopes did indeed differ as a function of contrast, none of the search slopes was actually flat in Wolfe et al.'s experimental conditions-that is, the whole range of sampled feature contrasts produced inefficient searches. At the other end of the efficiency spectrum, several control studies from our lab (Goschy, Koch, Müller, & Zehetleitner, 2014, Footnote 3; Töllner et al., 2011, p. 3; Zehetleitner, Hegenloh, & Müller, 2011, Footnote 2; Zehetleitner, Koch, Goschy, & Müller, 2013, supplementary material; Zehetleitner, Krummenacher, Geyer, Hegenloh, & Müller, 2011, Appendix; Zehetleitner, Krummenacher, & Müller, 2009, Appendix B; Zehetleitner & Müller, 2010, p. 6; Zehetleitner, Proulx, & Müller, 2009, p. 1776) used at least two levels of feature contrast and two set sizes, but all were in the efficient range, thus again limiting the examination of parametric effects of feature contrast on search slopes. ...
Article
Full-text available
Searching for an object among distracting objects is a common daily task. These searches differ in efficiency. Some are so difficult that each object must be inspected in turn, whereas others are so easy that the target object directly catches the observer's eye. In four experiments, the difficulty of searching for an orientation-defined target was parametrically manipulated between blocks of trials via the target-distractor orientation contrast. We observed a smooth transition from inefficient to efficient search with increasing orientation contrast. When contrast was high, search slopes were flat (indicating pop-out); when contrast was low, slopes were steep (indicating serial search). At the transition from inefficient to efficient search, search slopes were flat for target-present trials and steep for target-absent trials within the same orientation-contrast block - suggesting that participants adapted their behavior on target-absent trials to the most difficult, rather than the average, target-present trials of each block. Furthermore, even when search slopes were flat, indicative of pop-out, search continued to become faster with increasing contrast. These observations provide several new constraints for models of visual search and indicate that differences between search tasks that were traditionally considered qualitative in nature might actually be due to purely quantitative differences in target discriminability.
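The efficiency measure underlying this abstract is the search slope: the least-squares slope of mean RT over set size, with flat slopes (a few ms/item) taken to indicate pop-out and steep slopes serial search. A minimal sketch, with entirely hypothetical RT values chosen to illustrate the two regimes:

```python
def search_slope(set_sizes, mean_rts):
    """Least-squares slope (ms per item) of mean RT over set size."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(mean_rts) / n
    num = sum((x - mx) * (y - my) for x, y in zip(set_sizes, mean_rts))
    den = sum((x - mx) ** 2 for x in set_sizes)
    return num / den

# Hypothetical mean RTs (ms) at set sizes 4, 8, and 16:
print(search_slope([4, 8, 16], [450, 452, 456]))  # 0.5 ms/item: pop-out
print(search_slope([4, 8, 16], [500, 620, 860]))  # 30.0 ms/item: serial search
```

Note that even a perfectly flat slope leaves the intercept free to vary, which is how baseline RT can keep decreasing with increasing contrast while the slope stays at zero, as the abstract reports.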
... Successor models like guided search (Wolfe, 1994) allow for weights to be set at each of these levels, with attention guided by a priority map (Fecteau & Munoz, 2006; Serences & Yantis, 2006). The priority map combines inputs from multiple features and attributes into a single representation, though it is possible to find situations where a stimulus seems to produce an attentional deployment and a response without the need for a map that combines signals from multiple dimensions (Chan & Hayward, 2009; but see Zehetleitner, Proulx, & Müller, 2009). ...
Article
The most recent Guided Search model (GS4, Wolfe, 2007) combines serial and parallel processes. Parallel guidance by "preattentive" features prioritizes items for serial selection. Items are selected every ~50 msec, starting a diffusion process that decides if the item is target or distractor. Each diffusion process takes ~300 msec/item. Thus, several items apparently undergo identification in parallel. New extended search paradigms involving multiple possible targets and multiple actual targets in each display require modifications to GS and to other models that assume a single search template and that consider only simple present/absent trial structures. 1: In "Hybrid Search" tasks, observers search visual arrays for any of N distinct target items, held in memory. This is quite easy, even for memory sets of 100 unique objects (Wolfe, 2012). This suggests that selection of each visual item may start accumulation of information to each of N decision boundaries in parallel. Moreover large memory sets show that observers' "search templates" aren't limited to the current contents of working memory. 2: In foraging experiments, observers look for multiple instances of the same target (e.g., berry picking). Uncertainty about the number of targets requires new search termination rules. GS5 adopts these from Optimal Foraging Theory within a Bayesian framework in which observers are continually updating their estimates of target probability in the current display. 3: Finally, in "hybrid foraging" tasks, observers easily search visual arrays for multiple instances of N items held in memory. The speed with which items are collected suggests "multi-tasking" in which observers are simultaneously clicking on one target, storing locations of others, and searching for still more. We seem to require memory for locations of targets that have been identified but not collected. 
The GS5 architecture has implications for real-world extended search tasks such as radiology or satellite image analysis. Meeting abstract presented at VSS 2015.
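The timing scheme in the GS4 abstract above (one item entering identification roughly every 50 ms, each running a diffusion toward a target/distractor boundary that takes far longer than the selection interval, so several identifications overlap in time) can be illustrated with a toy simulation. Everything below, including the function name and all parameter values, is an illustrative assumption, not the actual GS4/GS5 model:

```python
import random

def simulate_trial(n_items, target_index, select_interval=50,
                   drift=0.02, noise=0.02, bound=1.0, seed=1):
    """Toy GS4-style trial: one item enters identification every
    `select_interval` ms; each selected item runs an independent diffusion
    (1-ms steps) toward a 'target' (+bound) or 'distractor' (-bound)
    boundary. Returns the time (ms) at which the target item reaches the
    target boundary, or None on an error/timeout trial."""
    rng = random.Random(seed)
    diffusers = {}                       # item index -> accumulated evidence
    for t in range(10_000):              # 10-second timeout
        if t % select_interval == 0 and t // select_interval < n_items:
            diffusers[t // select_interval] = 0.0   # select the next item
        for idx in list(diffusers):
            d = drift if idx == target_index else -drift
            diffusers[idx] += d + rng.gauss(0.0, noise)
            if idx == target_index and diffusers[idx] >= bound:
                return t                 # target identified
            if abs(diffusers[idx]) >= bound:
                del diffusers[idx]       # distractor rejected (or error)
    return None

rt = simulate_trial(n_items=6, target_index=3)
print(rt is not None and rt > 3 * 50)   # True: the 4th item is only selected 150 ms in
```

Because several diffusions are in flight at once, total trial time grows far more slowly than (diffusion time x set size), which is the "carwash" point the abstract makes.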
... example, a salient singleton distractor might attract dimension-based attention to distractor-relevant feature maps, such that search based on target-relevant feature maps might show reduced performance. However, Zehetleitner, Proulx, and Müller (2009) showed that this explanation was insufficient because they observed attentional capture in basic feature detection that was spatial in nature. Specifically, they manipulated distance between a target and a singleton distractor in a basic feature search task, and found that capture size was larger with a shorter target-distractor distance. ...
... This explanation retains the advantage of being able to explain the broader finding that attentional capture is stronger and more robust in compound search and is most commonly near-absent in basic feature detection; the strategic engagement of attention here is specific to conditions such as a mixed trials design. However, this discrepancy of capture effects across search tasks is not clearly explained by the single-route account (e.g., Zehetleitner et al., 2009). ...
... The main motivation for the present study was to address recent data that showed attentional capture effects in basic feature search tasks. For instance, Zehetleitner et al. (2009) found that attentional capture was observable as long as distractor-present and distractor-absent trials were mixed. The capture effects they observed showed similar characteristics to those of compound search tasks, suggesting that attentional capture in basic feature search and compound search was due to a common process. ...
Article
Full-text available
An enduring question in visual attention research is whether unattended objects are subject to perceptual processing. The traditional view suggests that, whereas focal attention is required for the processing of complex features or for individuating objects, it is not required for detecting basic features. However, other models suggest that detecting basic features may be no different from object identification and also require focal attention. In the present study, we approach this problem by measuring the effect of attentional capture in simple and compound visual search tasks. To make sure measurements did not reflect strategic components of the tasks, we measured accuracy with brief displays. Results show that attentional capture influenced only compound but not basic feature searches, suggestive of a distinction between attentional requirements of the 2 tasks. We discuss our findings, together with recent results of top-down word cue effects and dimension-specific intertrial effects, in terms of the dual-route account for visual search, which suggests that the task that is being completed determines whether search is based on attentive or preattentive mechanisms.
... The fact that those three observations were made not only for the overall percentage fixation distribution, but also for the percentage fixations distribution of the 25% fastest saccadic latencies, demonstrates that very fast attentional selection, too, can be subject to top-down control. This is in line with Guided-Search-type models (e.g., Müller et al., 1995; Wolfe, 1994; see also Zehetleitner, Proulx, & Müller, 2009), which assume a top-down modulation of bottom-up salience signals at preattentive levels: Accordingly, even very fast attentional deployments should be subject to the observer's goals (e.g., ignoring an irrelevant singleton). ...
Article
Full-text available
Previous research on the contribution of top-down control to saccadic target selection has suggested that eye movements, especially short-latency saccades, are primarily salience driven. The present study was designed to systematically examine top-down influences as a function of time and relative salience difference between target and distractor. Observers performed a saccadic selection task, requiring them to make an eye movement to an orientation-defined target, while ignoring a color-defined distractor. The salience of the distractor was varied (five levels), permitting the percentage of target and distractor fixations to be analyzed as a function of the salience difference between the target and distractor. This analysis revealed the same pattern of results for both the overall and the short-latency saccades: When the target and distractor were of comparable salience, the vast majority of saccades went directly to the target; even distractors somewhat more salient than the target led to significantly fewer distractor, as compared with target, fixations. To quantify the amount of top-down control applied, we estimated the point of equal selection probability for the target and distractor. Analyses of these estimates revealed that, to be selected with equal probability to the target, a distractor had to have a considerably greater bottom-up salience, as compared with the target. This difference suggests a strong contribution of top-down control to saccadic target selection-even for the earliest saccades.
... In fact, previous studies using the visual search paradigm have shown that target-dissimilar distractors can be actively inhibited, which significantly modulates capture by salient, target-dissimilar distractors (e.g., Becker, 2007, 2010b; Geyer, Mueller, & Krummenacher, 2008; Sayim, Grubert, Herzog, & Krummenacher, 2010; Theeuwes & Burger, 1998; Zehetleitner, Proulx, & Mueller, 2009; Wolfe, Butcher, Lee, & Hyle, 2003). For example, Geyer and colleagues (2008) found that a salient color distractor captured attention and/or the gaze when it was presented only rarely, whereas it could be largely ignored when it was presented frequently. ...
... For example, Geyer and colleagues (2008) found that a salient color distractor captured attention and/or the gaze when it was presented only rarely, whereas it could be largely ignored when it was presented frequently. These results indicate that a salient distractor can be actively inhibited once it is presented frequently (Sayim et al., 2010; Zehetleitner et al., 2009). Geyer and colleagues (2008) argued that rare salient distractors are not inhibited because observers do not have enough of an incentive to inhibit the feature of the distractor. ...
... Apart from investigating the effects of different distractors on visual search performance and eye movement behavior, we also examined possible effects of repeating the distractor feature and the distractor location on distractor selection rates. Previous studies have shown that an irrelevant distractor captures less when the distractor feature is repeated over consecutive trials, indicating that inhibition of the distractor feature can automatically carry over to the next trial and modulate visual selection (e.g., Becker, 2007, 2010b; Geyer et al., 2008; Lamy & Yashar, 2008; Sayim et al., 2010; Zehetleitner et al., 2009). It is difficult to distinguish these automatic carryover effects from top-down strategies to inhibit the irrelevant distractor (or a feature-specific selection strategy), because automatic carryover effects can be cumulative and increase in strength as the number of repetitions increases (e.g., Becker, 2007, 2010b; see also Maljkovic & Nakayama, 1994). ...
Article
Full-text available
One of the most widespread views in vision research is that top-down control over visual selection is achieved by tuning attention to a particular feature value (e.g., red/yellow). Contrary to this view, previous spatial cueing studies showed that attention can be tuned to relative features of a search target (e.g., redder): An irrelevant distractor (cue) captured attention when it had the same relative color as the target (e.g., redder), and failed to capture when it had a different relative color, regardless of whether the distractor was similar or dissimilar to the target. The present study tested whether the same effects would be observed for eye movements when observers have to search for a color or shape target and when selection errors were very noticeable (resulting in an erroneous eye movement to the distractor). The results corroborated the previous findings, showing that capture by an irrelevant distractor does not depend on the distractor's similarity to the target but on whether it matches or mismatches the relative attributes of the search target. Extending on previous work, we also found that participants can be pretrained to select a color target in virtue of its exact feature value. Contrary to the prevalent feature-based view, the results suggest that visual selection is preferentially biased toward the relative attributes of a search target. Simultaneously, however, visual selection can be biased to specific color values when the task requires it, which rules out a purely relational account of attention and eye movements.
... The findings regarding target detection, however, are not so consistent. In a visual search study by Zehetleitner et al. (2009), reaction times slowed as a function of target-distractor distance when a luminance-defined distractor appeared near an orientation-defined target (but see Mounts, 2000, for opposite findings). Processes of visual search and texture segmentation, however, should not be considered identical (Wolfe, 1992; see also Meinecke and Donk, 2002; Schubö et al., 2004). ...
Article
Full-text available
The saliency map model (Itti and Koch, 2000) is a hierarchically structured computational model, simulating visual saliency processing. Iso-feature processing on feature maps and conspicuity maps precedes cross-dimensional signal processing on the master map, where the most salient location of the visual field is selected. This texture segmentation study focuses on a possible spatial structure on the master map. In four experiments the spatial distance between a texture irregularity in the stimulus ("target") and a cross-dimensional task irrelevant texture irregularity in the backward mask ("patch") was varied. The results show that the target-patch distance modulates target detection, and that this modulation is limited to critical distances around the target. We conclude that the signals from different feature dimensions compete on a spatial master map. There is first evidence that the critical distances increase with target eccentricity.
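The hierarchy this abstract describes (iso-feature processing on feature and conspicuity maps, then cross-dimensional summation on a master map, where the most salient location is selected) reduces to a very small sketch if the winner-take-all stage is approximated by a simple argmax. The names and values below are hypothetical toy choices; a real Itti-and-Koch-style model also includes centre-surround filtering and map normalization:

```python
def master_map(conspicuity_maps):
    """Sum dimension-specific conspicuity maps into a master saliency map
    and return that map plus the winning (most salient) location."""
    n = len(next(iter(conspicuity_maps.values())))
    master = [sum(m[i] for m in conspicuity_maps.values()) for i in range(n)]
    winner = max(range(n), key=master.__getitem__)
    return master, winner

# A 1-D "display" of 10 locations: an orientation-defined texture
# irregularity (the target) at location 5 competes with a stronger,
# cross-dimensional luminance irregularity (the mask patch) at location 6.
maps = {
    "orientation": [0, 0, 0, 0, 0, 0.7, 0, 0, 0, 0],
    "luminance":   [0, 0, 0, 0, 0, 0, 0.9, 0, 0, 0],
}
master, winner = master_map(maps)
print(winner)  # prints 6: the cross-dimensional patch wins the competition
```

Because both dimensions feed the same spatial map, a nearby cross-dimensional signal can out-compete the target, which is the kind of distance-dependent interference the study above measured.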
... However, it is noteworthy that Chan and Hayward's and Mortier et al.'s findings may only reflect different weights given to top-down and bottom-up factors in each search type. For instance, Zehetleitner et al. found that by manipulating the incentive to apply top-down selection in search, more effective top-down selection was observed in localization tasks as well. Therefore, further investigation is required to determine whether localization and detection involve qualitatively different search processes. ...
Article
Full-text available
Visual search is the act of looking for a predefined target among other objects. This task has been widely used as an experimental paradigm to study visual attention, and because of its influence has also become a subject of research itself. When used as a paradigm, visual search studies address questions including the nature, function, and limits of preattentive processing and focused attention. As a subject of research, visual search studies address the role of memory in search, the procedures involved in search, and factors that affect search performance. In this article, we review major theories of visual search, the ways in which preattentive information is used to guide attentional allocation, the role of memory, and the processes and decisions involved in its successful completion. We conclude by summarizing the current state of knowledge about visual search and highlight some unresolved issues. WIREs Cogn Sci 2013, 4:415-429. doi: 10.1002/wcs.1235