Figure - available from: Experimental Brain Research
Group data. a Weight of the visual cue in the two combined cue conditions (mean ± standard deviation). b, c Trial-by-trial target error variability (standard deviation) for each combined cue condition (blue) and the two corresponding single cue conditions (red visual, green haptic). Asterisks represent p values < 0.05 from comparisons between the combined cue condition and the more reliable of the two single cue conditions. Grey triangles show weights and target error variabilities predicted for the combined cue conditions from the single cue data using the maximum-likelihood (ML) estimation model for each subject (mean ± standard deviation).
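The ML estimation model referenced in the caption predicts the combined-cue weight and variability from the single-cue variances alone. A minimal sketch of that standard computation (function and variable names are illustrative, not taken from the article):

```python
def ml_combination(sigma_v, sigma_h):
    """Predict the visual-cue weight and combined-cue SD from single-cue SDs
    under maximum-likelihood (inverse-variance-weighted) integration."""
    rv, rh = 1.0 / sigma_v ** 2, 1.0 / sigma_h ** 2  # cue reliabilities
    w_v = rv / (rv + rh)                             # weight on the visual cue
    sigma_c = (1.0 / (rv + rh)) ** 0.5               # predicted combined SD
    return w_v, sigma_c

# Equally reliable cues get equal weights, and the combined SD
# drops below either single-cue SD (by a factor of sqrt(2) here).
w, s = ml_combination(1.0, 1.0)
print(w, round(s, 3))  # 0.5 0.707
```

Under this model the combined-cue SD is always at or below the more reliable single cue's SD, which is why the asterisked comparisons in the figure are made against the more reliable cue.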

Source publication
Article
To effectively interpret and interact with the world, humans weight redundant estimates from different sensory cues to form one coherent, integrated estimate. Recent advancements in physical assistance systems, where guiding forces are computed by an intelligent agent, enable the presentation of augmented cues. It is unknown, however, if cue weight...

Citations

... Sometimes, only the group-determined best cue is used as comparator, as significant effects relative to this cue can make the contrast with the group-determined worst cue redundant. The vast majority of studies that tested for cue combination used this approach (e.g., Adams, 2016; Bates & Wolbers, 2014; Bultitude & Petrini, 2021; Burr et al., 2009; Chancel et al., 2016; Chen et al., 2017; Elliott et al., 2010; Ernst & Banks, 2002; Fetsch et al., 2009; Frissen et al., 2011; Gabriel et al., 2022; Gibo et al., 2017; Goeke et al., 2016; Gori et al., 2008, 2012a, b, 2021; Helbig & Ernst, 2007; Jicol et al., 2020; Jürgens & Becker, 2006; MacNeilage et al., 2007; Nardini et al., 2008, 2010; Newman & McNamara, 2021, 2022; Petrini et al., 2014, 2016; Ramkhalawansingh et al., 2018; Risso et al., 2020; Scheller et al., 2020; Seminati et al., 2022; Senna et al., 2021; Sjolund et al., 2018; Zanchi et al., 2022; Zhao & Warren, 2015). (b) Another way in which cue combination has been evidenced at the group level is by contrasting the combined cue condition with the individually determined best cue. ...
Article
Studying how sensory signals from different sources (sensory cues) are integrated within or across multiple senses allows us to better understand the perceptual computations that lie at the foundation of adaptive behaviour. As such, determining the presence of precision gains – the classic hallmark of cue combination – is important for characterising perceptual systems, their development and functioning in clinical conditions. However, empirically measuring precision gains to distinguish cue combination from alternative perceptual strategies requires careful methodological considerations. Here, we note that the majority of existing studies that tested for cue combination either omitted this important contrast, or used an analysis approach that, unknowingly, strongly inflated false positives. Using simulations, we demonstrate that this approach enhances the chances of finding significant cue combination effects in up to 100% of cases, even when cues are not combined. We establish how this error arises when the wrong cue comparator is chosen and recommend an alternative analysis that is easy to implement but has only been adopted by relatively few studies. By comparing combined-cue perceptual precision with the best single-cue precision, determined for each observer individually rather than at the group level, researchers can enhance the credibility of their reported effects. We also note that testing for deviations from optimal predictions alone is not sufficient to ascertain whether cues are combined. Taken together, to correctly test for perceptual precision gains, we advocate for a careful comparator selection and task design to ensure that cue combination is tested with maximum power, while reducing the inflation of false positives. Supplementary Information The online version contains supplementary material available at 10.3758/s13428-023-02227-w.
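The recommended contrast — combined-cue precision versus each observer's individually best single cue, rather than the group-level best cue — can be sketched in a few lines. This is a hedged illustration with hypothetical SD values; the array layout and names are assumptions, not the article's code:

```python
import numpy as np

def precision_gain_per_observer(sd_cue_a, sd_cue_b, sd_combined):
    """Per-observer precision gain: each observer's best (lowest-SD)
    single cue minus their combined-cue SD. Positive values indicate a
    gain; the best cue is determined individually, not at the group level."""
    sd_best_single = np.minimum(sd_cue_a, sd_cue_b)
    return sd_best_single - sd_combined

# Hypothetical SDs for three observers. Observer 2's best cue (B)
# differs from the group-level best cue (A on average), which is
# exactly the case where the group-level comparator misleads.
gain = precision_gain_per_observer(
    sd_cue_a=np.array([1.0, 2.0, 1.0]),
    sd_cue_b=np.array([1.5, 1.2, 1.4]),
    sd_combined=np.array([0.9, 1.0, 1.1]),
)
print(gain)
```

Testing the per-observer gains against zero is then a within-subjects comparison that avoids the inflated false positives described in the abstract.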
... Cue recruitment studies show that people can also learn to perceive via newly learned cues or statistical associations (Backus & Haijiang, 2007; Di Luca et al., 2010; Haijiang et al., 2006; Harrison & Backus, 2010; Harrison et al., 2011; Harrison & Backus, 2012) and recent cue combination studies show that newly learned cues can be combined with familiar cues to make more precise decisions (Aston, Beierholm, & Nardini, 2022; Ernst, 2007; Gibo, Mugge, & Abbink, 2017; Negen, Wen, Thaler, & Nardini, 2018; Negen, Bird, Slater, Thaler, & Nardini, 2021). For example, a newly learned audio cue to depth is combined with visual information to improve precision in depth judgements after short-term training (Negen et al., 2018). ...
... Our results add to the growing literature showing that novel cues can be learned and integrated with familiar cues to enhance perception (Aston et al., 2022; Ernst, 2007; Gibo et al., 2017; Negen et al., 2018, 2021) and extend this finding to the domain of object recognition. We have expanded upon these previous findings by testing for a perceptual effect of the novel cue to color that we introduced. ...
Article
Reliability-weighted averaging of multiple perceptual estimates (or cues) can improve precision. Research suggests that newly learned statistical associations can be rapidly integrated in this way for efficient decision-making. Yet, it remains unclear if the integration of newly learned statistics into decision-making can directly influence perception, rather than taking place only at the decision stage. In two experiments, we implicitly taught observers novel associations between shape and color. Observers made color matches by adjusting the color of an oval to match a simultaneously presented reference. As the color of the oval changed across trials, so did its shape according to a novel mapping of axis ratio to color. Observers showed signatures of reliability-weighted averaging: a precision improvement in both experiments and reweighting of the newly learned shape cue with changes in uncertainty in Experiment 2. To ask whether this was accompanied by perceptual effects, Experiment 1 tested for forced fusion by measuring color discrimination thresholds with and without incongruent novel cues. Experiment 2 tested for a memory color effect, with observers adjusting the color of ovals with different axis ratios until they appeared gray. There was no evidence for forced fusion, and the opposite of a memory color effect was found. Overall, our results suggest that the ability to quickly learn novel cues and integrate them with familiar cues is not immediately (within the short duration of our experiments and in the domain of color and shape) accompanied by common perceptual effects.
... A limited number of studies suggest newly learned novel cues are also combined with familiar cues (Ernst, 2007; Gibo et al., 2017; Michel & Jacobs, 2008; Negen et al., 2018). Importantly, although combination of novel and familiar cues is often suboptimal, with the gain in precision from combining the two cues less than that predicted by reliability-weighted averaging (Ernst, 2007; Gibo et al., 2017; Negen et al., 2018), it is "Bayes-like" in the sense that it shows some signatures of Bayes-optimal combination, such as weighting by reliability (Negen et al., 2018). ...
... We refer to our novel cues as abstract as they do not have a natural relationship to location. This contrasts with previous studies where observers learned to use an echolocation cue to make depth judgements (Negen et al., 2018) or made movements with the assistance of a force cue that guided movements in a particular direction (Gibo et al., 2017). ...
Article
Mature perceptual systems can learn new arbitrary sensory signals (novel cues) to properties of the environment, but little is known about the extent to which novel cues are integrated into normal perception. In normal perception, multiple uncertain familiar cues are combined, often near-optimally (reliability-weighted averaging), to increase perceptual precision. We trained observers to use abstract novel cues to estimate horizontal locations of hidden objects on a monitor. In experiment 1, 4 groups of observers each learned to use a different novel cue. All groups benefited from a suboptimal but significant gain in precision using novel and familiar cues together after short-term training (3 sessions of ∼1.5 hr each), extending previous reports of novel-familiar cue combination. In experiment 2, we tested whether 2 novel cues may also be combined with each other. One pair of novel cues could be combined to improve precision but the other could not, at least not after 3 sessions of repeated training. Overall, our results provide extensive evidence that novel cues can be learned and combined with familiar cues to enhance perception, but mixed evidence for whether perceptual and decision-making systems can extend this ability to the combination of multiple novel cues with only short-term training.
... A potential explanation for this is that when sensory information is combined, the information from one sense can be used to judge the reliability of the other (Atkins et al., 2001). So, the reliability of the visual information may have been judged to be poor because it directly conflicted with information from the more contextually-relevant tactile information (Gibo et al., 2017). Importantly, this condition provides further evidence for an interaction between modality and higher-level expectations in heaviness perception. ...
Article
The material-weight illusion (MWI) demonstrates how our past experience with material and weight can create expectations that influence the perceived heaviness of an object. Here we used mixed-reality to place touch and vision in conflict, to investigate whether the modality through which materials are presented to a lifter could influence the top-down perceptual processes driving the MWI. University students lifted equally-weighted polystyrene, cork and granite cubes whilst viewing computer-generated images of the cubes in virtual reality (VR). This allowed the visual and tactile material cues to be altered, whilst all other object properties were kept constant. Representation of the objects’ material in VR was manipulated to create four sensory conditions: visual-tactile matched, visual-tactile mismatched, visual differences only and tactile differences only. A robust MWI was induced across all sensory conditions, whereby the polystyrene object felt heavier than the granite object. The strength of the MWI differed across conditions, with tactile material cues having a stronger influence on perceived heaviness than visual material cues. We discuss how these results suggest a mechanism whereby multisensory integration directly impacts how top-down processes shape perception.
... The model subsumes optimal fusion but also provides valid predictions when the weights are not optimal [71]. In human postural control and task performance, the sensorimotor system likewise tends to integrate multiple senses, such as vision and haptics [72][73][74]. Previous experimental results, where body sway is evoked by manipulation of individual and combined sensory cues, appear to be consistent with an essentially linear model [75,76]. ...
Preprint
The thesis presents contributions to the evaluation and design of a haptic guidance system for improving driving performance under normal and degraded visual information, based on behavioral experiments, modeling, and numerical simulations. The effect of shared control on driver behavior under normal and degraded visual information has been evaluated experimentally and numerically. The evaluation results indicate that the proposed haptic guidance system is capable of providing reliable haptic information and is effective in improving lane-following performance under visual occlusion of the road ahead and reduced visual attention during fatigued driving. Moreover, the appropriate degree of haptic guidance is highly related to the reliability of the visual information perceived by the driver, which suggests that designing the haptic guidance system based on the reliability of visual information would allow for greater driver acceptance. Furthermore, the parameterized driver model, which considers the integrated feedback of visual and haptic information, is capable of predicting driver behavior under shared control and has the potential to be used for designing and evaluating the haptic guidance system.
... The likelihood of the haptic assistance's accuracy on any given trial can be inferred from the variability of the previous trial-by-trial random errors. Recent studies suggest that humans can estimate the trial-by-trial variability of sensory cues [20], [21]. In [20], we investigated how people rely on haptic assistance (reliance on a haptic cue versus a visual cue) when the haptic assistance contained trial-by-trial random errors. ...
... Recent studies suggest that humans can estimate the trial-by-trial variability of sensory cues [20], [21]. In [20], we investigated how people rely on haptic assistance (reliance on a haptic cue versus a visual cue) when the haptic assistance contained trial-by-trial random errors. Here, the trial-by-trial variability of the haptic assistance (i.e., haptic cue) was kept constant. ...
... There were two possible distributions for the haptic cue, resulting in either high (arc length SD σH,t = 3.2 cm) or low (arc length SD σH,t = 0.8 cm) variability of the trial-by-trial random error. If a subject perfectly followed the force channel, the theoretical target error would have a standard deviation of 3.2 cm and 0.8 cm in the high and low haptic variability conditions, respectively. The selection of σV,t and σH,t was informed by pilot studies and related published studies [20], [24], [25]. The standard deviations were chosen to be large enough such that the trial-by-trial variability was discernible and not masked by perceptual noise. ...
Article
When using an automated system, user trust in the automation is an important factor influencing performance. Prior studies have analyzed trust during supervisory control of automation, and how trust influences reliance: the behavioral correlate of trust. Here, we investigated how reliance on haptic assistance affects performance during shared control with an automated system. Subjects made reaches towards a hidden target using a visual cue and haptic cue (assistance from the automation). We sought to influence reliance by changing the variability of trial-by-trial random errors in the haptic assistance. Reliance was quantified in terms of the subject's position at the end of the reach relative to the two cues. Our results show that subjects aimed more towards the visual cue when the variability of the haptic cue errors increased, resembling cue weighting behavior. Similar behavior was observed both when subjects had explicit knowledge about the haptic cue error variability, as well as when they had only implicit knowledge (from experience). However, the group with explicit knowledge was able to more quickly adapt their reliance on the haptic assistance. The method we introduce here provides a quantitative way to study user reliance on the information provided by automated systems with shared control.
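The abstract quantifies reliance as the subject's endpoint position relative to the two cues, which is naturally summarised as a relative weight. A hypothetical sketch of one common way to compute such a weight (the exact formula used in the article is not stated here, and all names are illustrative):

```python
def haptic_reliance(endpoint, visual_cue, haptic_cue):
    """Express a reach endpoint as a weight on the haptic cue:
    1.0 means the endpoint lies at the haptic cue, 0.0 at the visual cue,
    intermediate values indicate partial reliance on the haptic assistance."""
    if visual_cue == haptic_cue:
        raise ValueError("cues coincide; reliance is undefined on this trial")
    return (endpoint - visual_cue) / (haptic_cue - visual_cue)

# Endpoint three quarters of the way from the visual to the haptic cue
# (positions in arbitrary workspace units).
print(haptic_reliance(endpoint=7.0, visual_cue=4.0, haptic_cue=8.0))  # 0.75
```

Averaging such per-trial weights across trials would then show reliance shifting toward the visual cue as haptic-cue error variability increases, mirroring the cue-weighting behavior the abstract describes.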
Article
Teleoperation enables complex robot platforms to perform tasks beyond the scope of the current state-of-the-art robot autonomy by imparting human intelligence and critical thinking to these operations. For seamless control of robot platforms, it is essential to facilitate optimal situational awareness of the workspace for the operator through active telepresence cameras. However, the control of these active telepresence cameras adds an additional degree of complexity to the task of teleoperation. In this paper we present our results from the user study that investigates: 1) how the teleoperator learns or adapts to performing the tasks via active cameras modeled after camera placements on the TRINA humanoid robot; 2) the perception-action coupling operators implement to control active telepresence cameras; and 3) the camera preferences for performing the tasks. These findings from the human motion analysis and post-study survey will help us determine desired design features for robot teleoperation interfaces and assistive autonomy.