Article

Deaf individuals use compensatory strategies to estimate visual time events

Author affiliations:
  • École normale supérieure
  • Istituto Italiano di Tecnologia and the University of Sydney

Abstract

Temporal perception is so profoundly linked to hearing that congenitally and early deaf individuals appear to experience visual temporal impairments. However, most studies investigated visual temporal perception in deaf individuals using static stimuli, while ecological objects with which we interact in everyday life often move across space and time. Given that deafness does not impact spatial metric representations, we hypothesize that, while the temporal perception of static stimuli is altered after early hearing loss, it can be enhanced by providing additional, ecologically relevant information. To evaluate our hypothesis, deaf and hearing participants were tested using an oddball-like visual temporal task. In such a task, participants had to temporally discriminate a Target embedded in a series of static stimuli, whose spatiotemporal structure was dynamically manipulated during the presentation. Our results highlighted that deaf participants could not successfully discriminate the Target’s duration when only temporal information was manipulated, while their temporal sensitivity significantly improved when coherent spatiotemporal information was displayed. Our findings suggest that deaf participants might develop compensatory strategies based on other visual, non-temporal features to estimate external time events.

... First, we release both the model's code and the dataset used for the source spatial localization and ventriloquist effect study. We emphasize the value of this dataset in conjunction with the model, as the stimuli included in the dataset are a faithful replica of stimuli often used in psychophysics (e.g., [44,45]). All stimuli included in the dataset were manually recorded within a controlled experimental setting, in which reverberation and luminosity were strictly monitored. ...
Article
Full-text available
Our brain constantly combines sensory information into unitary percepts to build coherent representations of the environment. Even though this process may appear seamless, integrating sensory inputs from various sensory modalities must overcome several computational issues, such as recoding and statistical inference problems. Following these assumptions, we developed a neural architecture replicating humans' ability to use audiovisual spatial representations. We considered the well-known ventriloquist illusion as a benchmark to evaluate its phenomenological plausibility. Our model closely replicated human perceptual behavior, providing a faithful approximation of the brain's ability to develop audiovisual spatial representations. Considering its ability to model audiovisual performance in a spatial localization task, we release our model in conjunction with the dataset we recorded for its validation. We believe it will be a powerful tool to model and better understand multisensory integration processes in experimental and rehabilitation environments.
... We exclude that it derived from impaired memory of the group of deaf individuals, since the two groups did not differ in their performance for the space-bisection task. The time-bisection difficulty we observed agrees with research that demonstrates the importance of auditory experience for the development of timing processing skills in other sensory channels 44,45 . For example, both estimation of visual temporal durations in the range of seconds 46 and tactile temporal durations in the range of milliseconds 47 are compromised in deaf adults. ...
Article
Full-text available
It is evident that the brain is capable of large-scale reorganization following sensory deprivation, but the extent of such reorganization is to date, not clear. The auditory modality is the most accurate to represent temporal information, and deafness is an ideal clinical condition to study the reorganization of temporal representation when the audio signal is not available. Here we show that hearing, but not deaf individuals, show a strong ERP response to visual stimuli in temporal areas during a time-bisection task. This ERP response appears 50–90 ms after the flash and recalls some aspects of the N1 ERP component usually elicited by auditory stimuli. The same ERP is not evident for a visual space-bisection task, suggesting that the early recruitment of temporal cortex is specific for building a highly resolved temporal representation within the visual modality. These findings provide evidence that the lack of auditory input can interfere with typical development of complex visual temporal representations.
Article
Full-text available
Participants from ages 5 to 99 years completed 2 time estimation tasks: a temporal generalization task and a temporal bisection task. Developmental differences in overall levels of performance were found at both ends of the life span and were more marked on the generalization task than the bisection task. Older adults and children performed at lower levels than young adults, but there were also qualitative differences in the patterns of errors made by the older adults and the children. To capture these findings, the authors propose a new developmental model of temporal generalization and bisection. The model assumes developmental changes across the life span in the noisiness of initial perceptual encoding and across childhood in the extent to which long-term memory of time intervals is distorted.
Article
Full-text available
An important step when designing an empirical study is to justify the sample size that will be collected. The key aim of a sample size justification for such studies is to explain how the collected data is expected to provide valuable information given the inferential goals of the researcher. In this overview article six approaches are discussed to justify the sample size in a quantitative empirical study: 1) collecting data from (almost) the entire population, 2) choosing a sample size based on resource constraints, 3) performing an a-priori power analysis, 4) planning for a desired accuracy, 5) using heuristics, or 6) explicitly acknowledging the absence of a justification. An important question to consider when justifying sample sizes is which effect sizes are deemed interesting, and the extent to which the data that is collected informs inferences about these effect sizes. Depending on the sample size justification chosen, researchers could consider 1) what the smallest effect size of interest is, 2) which minimal effect size will be statistically significant, 3) which effect sizes they expect (and what they base these expectations on), 4) which effect sizes would be rejected based on a confidence interval around the effect size, 5) which ranges of effects a study has sufficient power to detect based on a sensitivity power analysis, and 6) which effect sizes are expected in a specific research area. Researchers can use the guidelines presented in this article, for example by using the interactive form in the accompanying online Shiny app, to improve their sample size justification, and hopefully, align the informational value of a study with their inferential goals.
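The a-priori power analysis mentioned above (approach 3) can be sketched with the standard normal approximation for a two-sided, two-sample t-test: n per group ≈ 2·((z₁₋α/₂ + z₁₋β)/d)². This is an illustrative stdlib-only sketch, not code from the article or its Shiny app; the function name and defaults are assumptions.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sided, two-sample
    t-test, via the normal approximation (slightly underestimates
    the exact t-based answer for small n)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # ~1.96 for alpha = .05
    z_beta = z(power)            # ~0.84 for power = .80
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# Medium effect (Cohen's d = 0.5), conventional alpha and power:
print(n_per_group(0.5))  # → 63 participants per group
```

As the article stresses, the hard part is not this arithmetic but justifying which effect size d to plug in.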
Article
Full-text available
Number sense is the ability to estimate the number of items, and it is common to many species. Despite the numerous studies dedicated to unveiling how numerosity is processed in the human brain, to date, it is not clear whether the representation of numerosity is supported by a single general mechanism or by multiple mechanisms. Since it is known that deafness entails a selective impairment in the processing of temporal information, we assessed the approximate numerical abilities of deaf individuals to disentangle these two hypotheses. We used a numerosity discrimination task (2AFC) and an estimation task, in both cases using sequential (temporal) or simultaneous (spatial) stimuli. The results showed a selective impairment of the deaf participants compared with the controls (hearing) in the temporal numerosity discrimination task, while no difference was found to discriminate spatial numerosity. Interestingly, the deaf and hearing participants did not differ in spatial or temporal numerosity estimation. Overall, our results suggest that the deficit in temporal processing induced by deafness also impacts perception in other domains such as numerosity, where sensory information is conveyed in a temporal format, which further suggests the existence of separate mechanisms subserving the processing of temporal and spatial numerosity.
Article
Full-text available
Recent studies have reported a strong interaction between spatial and temporal representation when visual experience is missing: blind people use temporal representation of events to represent spatial metrics. Given the superiority of audition on time perception, we hypothesized that when audition is not available complex temporal representations could be impaired, and spatial representation of events could be used to build temporal metrics. To test this hypothesis, deaf and hearing subjects were tested with a visual temporal task where conflicting and not conflicting spatiotemporal information was delivered. As predicted, we observed a strong deficit of deaf participants when only temporal cues were useful and space was uninformative with respect to time. However, the deficit disappeared when coherent spatiotemporal cues were presented and increased for conflicting spatiotemporal stimuli. These results highlight that spatial cues influence time estimations in deaf participants, suggesting that deaf individuals use spatial information to infer temporal environmental coordinates.
Article
Full-text available
When newborns leave the enclosed spatial environment of the uterus and arrive in the outside world, they are faced with a new audiovisual environment of dynamic objects, actions and events both close to themselves and further away. One particular challenge concerns matching and making sense of the visual and auditory cues specifying object motion. Previous research shows that adults prioritise the integration of auditory and visual information indicating looming and that rhesus monkeys can integrate multisensory looming, but not receding, audiovisual stimuli. Despite the clear adaptive value of correctly perceiving motion towards or away from the self — for defence against and physical interaction with moving objects — such a perceptual ability would clearly be undermined if newborns were unable to correctly match the auditory and visual cues to such motion. This multisensory perceptual skill has scarcely been studied in human ontogeny. Here we report that newborns only a few hours old are sensitive to matches between changes in visual size and in auditory intensity. This early multisensory competence demonstrates that, rather than being entirely naïve to their new audiovisual environment, newborns can make sense of the multisensory cue combinations specifying motion with respect to themselves.
Article
Full-text available
Evidence that audition dominates vision in temporal processing has come from perceptual judgment tasks. This study shows that this auditory dominance extends to the largely subconscious processes involved in sensorimotor coordination. Participants tapped their finger in synchrony with auditory and visual sequences containing an event onset shift (EOS), expected to elicit an involuntary phase correction response (PCR), and also tried to detect the EOS. Sequences were presented in unimodal and bimodal conditions, including one in which auditory and visual EOSs of opposite sign coincided. Unimodal results showed greater variability of taps, smaller PCRs, and poorer EOS detection in vision than in audition. In bimodal conditions, variability of taps was similar to that for unimodal auditory sequences, and PCRs depended more on auditory than on visual information, even though attention was always focused on the visual sequences.
Article
Full-text available
A common conceptualization of signal detection theory (SDT) holds that if the effect of an experimental manipulation is truly perceptual, then it will necessarily be reflected in a change in d′ rather than a change in the measure of response bias. Thus, if an experimental manipulation affects the measure of bias, but not d′, then it is safe to conclude that the manipulation in question did not affect perception but instead affected the placement of the internal decision criterion. However, the opposite may be true: an effect on perception may affect measured bias while having no effect on d′. To illustrate this point, we expound how signal detection measures are calculated and show how all biases—including perceptual biases—can exert their effects on the criterion measure rather than on d′. While d′ can provide evidence for a perceptual effect, an effect solely on the criterion measure can also arise from a perceptual effect. We further support this conclusion using simulations to demonstrate that the Müller-Lyer illusion, which is a classic visual illusion that creates a powerful perceptual effect on the apparent length of a line, influences the criterion measure without influencing d′. For discrimination experiments, SDT is effective at discriminating between sensitivity and bias but cannot by itself determine the underlying source of the bias, be it perceptual or response based.
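The abstract's central point — that a purely perceptual effect can move the criterion measure while leaving d′ untouched — can be verified numerically under the equal-variance Gaussian SDT model. This is a minimal sketch (not the authors' simulation code); the criterion shift of −0.4 is an arbitrary illustrative value.

```python
from statistics import NormalDist

Z = NormalDist().inv_cdf     # probit (inverse normal CDF)
PHI = NormalDist().cdf

def dprime(hit, fa):
    """Sensitivity: d' = z(H) - z(F)."""
    return Z(hit) - Z(fa)

def criterion(hit, fa):
    """Criterion location: c = -(z(H) + z(F)) / 2."""
    return -0.5 * (Z(hit) + Z(fa))

# Unbiased observer: H = .80, F = .20
h0, f0 = 0.80, 0.20
d0, c0 = dprime(h0, f0), criterion(h0, f0)   # d' ~ 1.68, c = 0

# Same sensitivity, but a bias (perceptual or decisional -- SDT
# cannot tell which) shifts the criterion by -0.4:
c_shift = -0.4
h1 = PHI(d0 / 2 - c_shift)
f1 = PHI(-d0 / 2 - c_shift)
d1, c1 = dprime(h1, f1), criterion(h1, f1)
# d1 equals d0 to numerical precision; only c has moved.
```

This mirrors the article's Müller-Lyer demonstration: the hit and false-alarm rates change, c changes, and d′ does not.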
Article
Full-text available
In everyday life moving objects often follow irregular or repetitive trajectories for which distinctive events are potentially noticeable. It is known that the perceived duration of moving objects is distorted, but whether the distortion is due to the temporal frequency of the events or to the speed of the objects remains unclear. Disentangling the contribution of these factors to perceived duration distortions is ecologically relevant: if perceived duration were dependent on speed, it should contract with the distance from the observer to the moving objects. Here, we asked observers to estimate the perceived duration of an object rotating at different speeds and radii and found that perceived duration dilated with temporal frequency of rotations, rather than speed (or perceived speed, which we also measured). We also found that the dilation was larger for two than for one object, but the increase was not large enough to make perceived duration independent of the number of objects when expressed as a function of the local frequency (the number of times an object crossed a given location per time unit). These results suggest that perceived duration of natural stimuli containing distinctive events doesn't depend on the distance of the events to the observer.
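The speed/temporal-frequency confound that this study disentangles follows from simple geometry: for circular motion, rotation frequency = speed / (2πr), so radius can be varied to hold speed constant while changing temporal frequency (or vice versa). A small sketch of that relation (illustrative only; units and values are assumptions, not the study's stimuli):

```python
import math

def rotation_frequency(speed, radius):
    """Rotations per second for an object moving at `speed` along a
    circle of `radius` (frequency = speed / circumference)."""
    return speed / (2 * math.pi * radius)

v = 10.0                                     # same linear speed (deg/s)
f_small = rotation_frequency(v, radius=1.0)
f_large = rotation_frequency(v, radius=2.0)
# Doubling the radius at constant speed halves the temporal frequency
# of the rotation -- the manipulation that lets speed-based and
# frequency-based accounts of duration dilation be pulled apart.
```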
Article
Full-text available
The performance of deaf and hearing college students was compared in a same-different task involving visual temporal patterns. The results showed equivalent performance for the two groups. For both deaf and hearing subjects, hierarchically simple patterns were easier than more complex patterns, which is consistent with a model of temporal pattern perception proposed by Martin (1972).
Article
Full-text available
Deaf children have been characterized as being impulsive, distractible, and unable to sustain attention. However, past research has tested deaf children born to hearing parents who are likely to have experienced language delays. The purpose of this study was to determine whether an absence of auditory input modulates attentional problems in deaf children with no delayed exposure to language. Two versions of a continuous performance test were administered to 37 deaf children born to Deaf parents and 60 hearing children, all aged 6-13 years. A vigilance task was used to measure sustained attention over the course of several minutes, and a distractibility test provided a measure of the ability to ignore task-irrelevant information - selective attention. Both tasks provided assessments of cognitive control through analysis of commission errors. The deaf and hearing children did not differ on measures of sustained attention. However, younger deaf children were more distracted by task-irrelevant information in their peripheral visual field, and deaf children produced a higher number of commission errors in the selective attention task. It is argued that this is not likely to be an effect of audition on cognitive processing, but may rather reflect difficulty in endogenous control of reallocated visual attention resources stemming from early profound deafness.
Article
Full-text available
Several studies have demonstrated enhanced auditory processing in the blind, suggesting that they compensate for their visual impairment in part with greater sensitivity of the other senses. However, several physiological studies show that early visual deprivation can impact negatively on auditory spatial localization. Here we report for the first time severely impaired auditory localization in the congenitally blind: thresholds for spatially bisecting three consecutive, spatially-distributed sound sources were seriously compromised, on average 4.2-fold typical thresholds, with half of the participants performing at random. In agreement with previous studies, these subjects showed no deficits on simpler auditory spatial tasks or with auditory temporal bisection, suggesting that the encoding of Euclidean auditory relationships is specifically compromised in the congenitally blind. This points to the importance of visual experience in the construction and calibration of auditory spatial maps, with implications for rehabilitation strategies for the congenitally blind.
Article
Full-text available
Motion in depth can be perceived from binocular cues alone, yet it is unclear whether these cues support speed sensitivity in the absence of the monocular cues that normally co-occur in natural viewing. We measure threshold contours in space-time for the discrimination of three-dimensional (3D) motion to determine whether observers use speed to discriminate a test 3D motion from two identical standards. We compare thresholds for random-dot stereograms (RDS) containing both binocular cues to 3D motion-interocular velocity difference and changing disparity over time-with performance for dynamic random-dot stereograms (DRDS), which contain only the second cue. Threshold contours are tilted along the axis of constant velocity in space-time for RDS stimuli at slow speeds (0.5 m/s), evidence for speed sensitivity. However, for higher speeds (1.5 m/s) and DRDS stimuli, observers rely on the component cues of duration and disparity. In a second experiment, noise of constant velocity is added to the standards to degrade the reliability of these separate components. Again there is evidence for speed tuning for RDS, but not for DRDS. Considerable variation is observed in the ability of individual observers to use the different cues in both experiments, however, in general the results emphasize the importance of interocular velocity difference as a critical cue for speed sensitivity to motion in depth, and suggest that speed sensitivity to stereomotion from binocular cues is restricted to relatively slow speeds.
Article
Full-text available
Models of discrimination based on statistical decision theory distinguish sensitivity (the ability of an observer to reflect a stimulus–response correspondence defined by the experimenter) from response bias (the tendency to favor 1 response over others). Measures of response bias have received less attention than those of sensitivity. Bias measures are classified here according to 2 characteristics. First, the distributions assumed or implied to underlie the observer's decision may be normal, logistic, or rectangular. Second, the bias index may measure criterion location, criterion location relative to sensitivity, or likelihood ratio. Both parametric and "nonparametric" indexes are classified in this manner. The various bias statistics are compared on pragmatic and theoretical grounds, and it is concluded that criterion measures have many advantages in empirical work.
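The three families of bias index the abstract classifies — criterion location c, criterion relative to sensitivity c′ = c/d′, and likelihood ratio β = exp(c·d′) — are easy to compute side by side under the equal-variance Gaussian model. A stdlib-only sketch (function name and example rates are illustrative assumptions):

```python
from math import exp
from statistics import NormalDist

Z = NormalDist().inv_cdf

def bias_indices(hit, fa):
    """Three standard SDT bias measures (equal-variance Gaussian model):
    criterion location c, relative criterion c' = c / d', and
    likelihood ratio beta = exp(c * d')."""
    d = Z(hit) - Z(fa)
    c = -0.5 * (Z(hit) + Z(fa))
    return {"c": c, "c_rel": c / d, "beta": exp(c * d)}

# A liberal observer (more "yes" responses): H = .90, F = .40
print(bias_indices(0.90, 0.40))
```

All three indices agree here that the observer is liberal (c < 0, β < 1), but as the article argues, they can diverge under other models.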
Article
Full-text available
Normal older participants (aged 60–79 yrs), with known scores on the Culture Fair Intelligence Test, were tested on 4 timing tasks (i.e., temporal generalization, bisection, differential threshold, and interval production). The data were related to the theoretical framework of scalar timing theory and ideas about information processing and aging. In general, increasing age and decreasing IQ tended to be associated with increasing variability of judgments of duration, although in all groups events could be timed on average accurately. In some cases (e.g., bisection), performance differences between the older participants and students nearly 50 years younger used in other studies were negligible.
Article
Full-text available
Adults integrate multisensory information optimally (e.g., Ernst and Banks, 2002) while children do not integrate multisensory visual-haptic cues until 8-10 years of age (e.g., Gori et al., 2008). Before that age strong unisensory dominance occurs for size and orientation visual-haptic judgments, possibly reflecting a process of cross-sensory calibration between modalities. It is widely recognized that audition dominates time perception, while vision dominates space perception. Within the framework of the cross-sensory calibration hypothesis, we investigate visual-auditory integration in both space and time with child-friendly spatial and temporal bisection tasks. Unimodal and bimodal (conflictual and not) audio-visual thresholds and PSEs were measured and compared with the Bayesian predictions. In the temporal domain, we found that both in children and adults, audition dominates the bimodal visuo-auditory task both in perceived time and precision thresholds. On the contrary, in the visual-auditory spatial task, children younger than 12 years of age show clear visual dominance (for PSEs), and bimodal thresholds higher than the Bayesian prediction. Only in the adult group did bimodal thresholds become optimal. In agreement with previous studies, our results suggest that also visual-auditory adult-like behavior develops late. We suggest that the visual dominance for space and the auditory dominance for time could reflect a cross-sensory comparison of vision in the spatial visuo-audio task and a cross-sensory comparison of audition in the temporal visuo-audio task.
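The Bayesian predictions against which the bimodal thresholds above are compared come from maximum-likelihood cue combination (Ernst and Banks, 2002): each cue is weighted by its reliability (inverse variance), and the combined variance is never larger than the best unimodal one. A minimal sketch of those standard formulas (the numeric values are illustrative, not the study's data):

```python
def mle_combination(est_a, sigma_a, est_v, sigma_v):
    """Maximum-likelihood (Bayesian-optimal) combination of two cues.
    Weights are proportional to inverse variances; the combined
    standard deviation is below either unimodal sigma."""
    w_a = (1 / sigma_a**2) / (1 / sigma_a**2 + 1 / sigma_v**2)
    w_v = 1 - w_a
    est = w_a * est_a + w_v * est_v
    sigma = (sigma_a**2 * sigma_v**2 / (sigma_a**2 + sigma_v**2)) ** 0.5
    return est, sigma

# Audition precise in time (sigma = 1), vision coarse (sigma = 3):
est, sigma = mle_combination(0.0, 1.0, 10.0, 3.0)
# The combined estimate sits close to the auditory one (w_a = 0.9),
# and the predicted bimodal threshold is below either unimodal one.
```

Under this account, "auditory dominance" in time is simply the optimal weighting of the more reliable temporal cue; the developmental finding is that children's bimodal thresholds exceed this prediction.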
Article
Full-text available
Signal detection theory (SDT) may be applied to any area of psychology in which two different types of stimuli must be discriminated. We describe several of these areas and the advantages that can be realized through the application of SDT. Three of the most popular tasks used to study discriminability are then discussed, together with the measures that SDT prescribes for quantifying performance in these tasks. Mathematical formulae for the measures are presented, as are methods for calculating the measures with lookup tables, computer software specifically developed for SDT applications, and general purpose computer software (including spreadsheets and statistical analysis software).
Chapter
Full-text available
After more than 30 years of systematic research conducted mainly on the visual abilities of profoundly deaf individuals, it is apparent that the long-standing debate as to whether perceptual and cognitive functions of deaf individuals are deficient or supranormal is far from being settled. Several reviews of this literature (e.g., Parasnis 1983; Bavelier et al. 2006; Mitchell and Maslin 2007) clearly indicate that deaf and hearing individuals perform comparably on a number of perceptual tasks. As we shall see later (see Section 22.2.1), this conclusion is strongly supported by tasks involving basic perceptual thresholds. Instead, other studies have revealed a differential performance in the two groups, either in the direction of deficient abilities in deaf than hearing participants (e.g., Quittner et al. 2004; Parasnis et al. 2003), or in the direction of supranormal abilities for the deaf population (e.g., Bottari et al. 2010; Loke and Song 1991; Neville and Lawson 1987). In this context, it should perhaps be emphasized that in the absence of clear behavioral differences between deaf and hearing participants, even the most striking differences between the two groups observed at the neural level cannot disentangle between the perceptual deficit hypothesis and the sensory compensation hypotheses. For instance, much of the renewed interest in the study of visual abilities in deaf individuals has been motivated by the seminal work of Neville et al. (1983). In that study, visual evoked potentials (VEPs) recorded from the scalp of eight congenitally deaf adults were significantly larger over both auditory and visual cortices, with respect to those of eight hearing controls, specifically for visual stimuli occurring in the periphery of the visual field (8.3°). 
Although this pioneering work implies that the lack of auditory experience from an early age can influence the organization of the human brain for visual processing [a finding that was later confirmed and extended by many other studies using different methodologies for the recording of brain responses; e.g., electroencephalogram (EEG): Neville and Lawson 1987; magnetoencephalography: Finney et al. 2003; functional magnetic resonance imaging: Bavelier et al. 2000, 2001], in the absence of a behavioral difference between the two groups it remains potentially ambiguous whether modifications at the neural level are an index of deficiency or compensation. In other words, even if one assumes that larger visual evoked components (e.g., Neville et al. 1983; Neville and Lawson 1987) or stronger BOLD responses (e.g., Bavelier et al. 2000, 2001) indicate enhanced processing of the incoming input, if this is not accompanied by behavioral enhancement it is difficult to conclude that it really serves some adaptive functional role. Unfortunately, the current evidence in the literature lacks this explicative power. With the sole exception of the work by Neville and Lawson (1987), all other neuroimaging studies focused on measures of brain response alone, instead of combined measures of brain response and behavior. Furthermore, conclusive evidence that cortical reorganization serves a functional role can only originate from the observation that interfering with the reorganized brain response [e.g., using transcranial magnetic stimulation (TMS)] impairs the supranormal behavioral performance in the sensory-deprived participants (e.g., see Cohen et al. 1997 for an example of abolished supranormal tactile discrimination in the blind, following disruption of occipital lobe function using TMS).
Article
Full-text available
Cross-modal reorganization in the auditory cortex has been reported in deaf individuals. However, it is not well understood whether this compensatory reorganization induced by auditory deprivation recedes once the sensation of hearing is partially restored through a cochlear implant. The current study used electroencephalography source localization to examine cross-modal reorganization in the auditory cortex of post-lingually deafened cochlear implant users. We analysed visual-evoked potentials to parametrically modulated reversing chequerboard images between cochlear implant users (n = 11) and normal-hearing listeners (n = 11). The results revealed smaller P100 amplitudes and reduced visual cortex activation in cochlear implant users compared with normal-hearing listeners. At the P100 latency, cochlear implant users also showed activation in the right auditory cortex, which was inversely related to speech recognition ability with the cochlear implant. These results confirm a visual take-over in the auditory cortex of cochlear implant users. Incomplete reversal of this deafness-induced cortical reorganization might limit clinical benefit from a cochlear implant and help explain the high inter-subject variability in auditory speech comprehension.
Article
Full-text available
Confronted with the loss of one type of sensory input, we compensate using information conveyed by other senses. However, losing one type of sensory information at specific developmental times may lead to deficits across all sensory modalities. We addressed the effect of auditory deprivation on the development of tactile abilities, taking into account changes occurring at the behavioral and cortical level. Congenitally deaf and hearing individuals performed two tactile tasks, the first requiring the discrimination of the temporal duration of touches and the second requiring the discrimination of their spatial length. Compared with hearing individuals, deaf individuals were impaired only in tactile temporal processing. To explore the neural substrate of this difference, we ran a TMS experiment. In deaf individuals, the auditory association cortex was involved in temporal and spatial tactile processing, with the same chronometry as the primary somatosensory cortex. In hearing participants, the involvement of auditory association cortex occurred at a later stage and selectively for temporal discrimination. The different chronometry in the recruitment of the auditory cortex in deaf individuals correlated with the tactile temporal impairment. Thus, early hearing experience seems to be crucial to develop an efficient temporal processing across modalities, suggesting that plasticity does not necessarily result in behavioral compensation.
Article
Full-text available
Six experiments investigated how changes in stimulus speed influence subjective duration. Participants saw rotating or translating shapes in three conditions: constant speed, accelerating motion, and decelerating motion. The distance moved and average speed were the same in all three conditions. In temporal judgment tasks, the constant-speed objects seemed to last longer than the decelerating objects, which in turn seemed to last longer than the accelerating stimuli. In temporal reproduction tasks, the difference between accelerating and decelerating stimuli disappeared; furthermore, watching an accelerating shape lengthened the apparent duration of the subsequent (static) display. These results (a) suggest that temporal judgment and reproduction can dissociate for moving stimuli because the stimulus influences the apparent duration of the subsequent interval, and (b) constrain theories of time perception, including those which emphasize memory storage, those which emphasize the existence of a pacemaker-accumulator timing system, and those which emphasize the division of attention between temporal and non-temporal information processing.
Article
Full-text available
When the brain is deprived of input from one sensory modality, it often compensates with supranormal performance in one or more of the intact sensory systems. In the absence of acoustic input, it has been proposed that cross-modal reorganization of deaf auditory cortex may provide the neural substrate mediating compensatory visual function. We tested this hypothesis using a battery of visual psychophysical tasks and found that congenitally deaf cats, compared with hearing cats, have superior localization in the peripheral field and lower visual movement detection thresholds. In the deaf cats, reversible deactivation of posterior auditory cortex selectively eliminated superior visual localization abilities, whereas deactivation of the dorsal auditory cortex eliminated superior visual motion detection. Our results indicate that enhanced visual performance in the deaf is caused by cross-modal reorganization of deaf auditory cortex and it is possible to localize individual visual functions in discrete portions of reorganized auditory cortex.
Article
Full-text available
This article examines age differences in individuals' ability to produce the durations of learned auditory and visual target events either in isolation (focused attention) or concurrently (divided attention). Young adults produced learned target durations equally well in focused and divided attention conditions. Older adults, in contrast, showed an age-related increase in timing variability in divided attention conditions that tended to be more pronounced for visual targets than for auditory targets. Age-related impairments were associated with a decrease in working memory span; moreover, the relationship between working memory and timing performance was largest for visual targets in divided attention conditions.
Article
Full-text available
The "ventriloquist effect" refers to the fact that vision usually dominates hearing in spatial localization, and this has been shown to be consistent with optimal integration of visual and auditory signals (Alais and Burr in Curr Biol 14(3):257-262, 2004). For temporal localization, however, auditory stimuli often "capture" visual stimuli, in what has become known as "temporal ventriloquism". We examined this quantitatively using a bisection task, confirming that sound does tend to dominate the perceived timing of audio-visual stimuli. The dominance was predicted qualitatively by considering the better temporal localization of audition, but the quantitative fit was less than perfect, with more weight being given to audition than predicted from thresholds. As predicted by optimal cue combination, the temporal localization of audio-visual stimuli was better than for either sense alone.
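The optimal cue combination the abstract above invokes has a standard quantitative form: each modality's estimate is weighted by its reliability (inverse variance), and the combined variance is lower than either unimodal variance. A minimal sketch, with illustrative numbers (the function name and the example estimates are not from the paper):

```python
def ml_combine(est_a, var_a, est_v, var_v):
    """Reliability-weighted (maximum-likelihood) combination of two cues.

    Each cue's weight is its inverse variance, normalized so the weights
    sum to one. The combined variance is below both unimodal variances,
    which is why bimodal localization can beat either sense alone.
    """
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    w_v = 1 - w_a
    combined_est = w_a * est_a + w_v * est_v
    combined_var = (var_a * var_v) / (var_a + var_v)
    return combined_est, combined_var

# Audition times events more precisely than vision, so it gets the
# larger weight and pulls the combined estimate toward itself
# (illustrative variances, in arbitrary units).
est, var = ml_combine(est_a=100.0, var_a=4.0, est_v=120.0, var_v=16.0)
# est lands nearer 100 than 120; var is smaller than both 4.0 and 16.0
```

On this account, audition "captures" visual timing simply because its temporal variance is lower, with no special temporal mechanism needed; the paper's point is that the observed auditory weight exceeded even this prediction.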
Article
Full-text available
Early deafness leads to enhanced attention in the visual periphery. Yet, whether this enhancement confers advantages in everyday life remains unknown, as deaf individuals have been shown to be more distracted by irrelevant information in the periphery than their hearing peers. Here, we show that, in a complex attentional task, a performance advantage results for deaf individuals. We employed the Useful Field of View (UFOV) which requires central target identification concurrent with peripheral target localization in the presence of distractors - a divided, selective attention task. First, the comparison of deaf and hearing adults with or without sign language skills establishes that deafness and not sign language use drives UFOV enhancement. Second, UFOV performance was enhanced in deaf children, but only after 11 years of age. This work demonstrates that, following early auditory deprivation, visual attention resources toward the periphery slowly get augmented to eventually result in a clear behavioral advantage by pre-adolescence on a selective visual attention task.
Article
Full-text available
This article has two purposes. The first is to describe four theoretical models of yes-no recognition memory and present their associated measures of discrimination and response bias. These models are then applied to a set of data from normal subjects to determine which pairs of discrimination and bias indices show independence between discrimination and bias. The second purpose is to use the indices from the acceptable models to characterize recognition memory deficits in dementia and amnesia. Young normal subjects, Alzheimer's disease patients, and parkinsonian dementia patients were tested with picture recognition tasks with repeated study–test trials. Huntington's disease patients, mixed etiology amnesics, and age-matched normals were tested by Butters, Wolfe, Martone, Granholm, and Cermak (1985) using the same paradigm with word stimuli. Three major points are emphasized. First, any index of recognition memory performance assumes an underlying model. Second, even acceptable models can lead to different conclusions about patterns of learning and forgetting. Third, efforts to characterize and ameliorate abnormal memory should address both discrimination and bias deficits.
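Of the yes-no recognition models the article above compares, the most widely used is the equal-variance Gaussian signal-detection model, whose discrimination and bias indices can be computed directly from hit and false-alarm rates. A minimal sketch of that one model (not the article's full comparison; the example rates are illustrative):

```python
from statistics import NormalDist

def dprime_and_bias(hit_rate, fa_rate):
    """Equal-variance Gaussian signal-detection indices.

    d' (discrimination) is the difference of the z-transformed hit and
    false-alarm rates; c (response bias) is their negated mean, with
    c > 0 indicating a conservative criterion. Rates of exactly 0 or 1
    would need a correction before the z-transform.
    """
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2
    return d_prime, criterion

# A moderately sensitive but conservative observer: decent hit rate,
# very few false alarms.
dp, c = dprime_and_bias(0.80, 0.10)
# dp is clearly positive (above-chance discrimination); c > 0
```

The article's second point follows directly: two patient groups can share one d' yet differ in c, so characterizing a memory deficit by hit rate alone conflates discrimination with bias.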
Article
Full-text available
We compared normally hearing individuals and congenitally deaf individuals as they monitored moving stimuli either in the periphery or in the center of the visual field. When participants monitored the peripheral visual field, greater recruitment (as measured by functional magnetic resonance imaging) of the motion-selective area MT/MST was observed in deaf than in hearing individuals, whereas the two groups were comparable when attending to the central visual field. This finding indicates an enhancement of visual attention to peripheral visual space in deaf individuals. Structural equation modeling was used to further characterize the nature of this plastic change in the deaf. The effective connectivity between MT/MST and the posterior parietal cortex was stronger in deaf than in hearing individuals during peripheral but not central attention. Thus, enhanced peripheral attention to moving stimuli in the deaf may be mediated by alterations of the connectivity between MT/MST and the parietal cortex, one of the primary centers for spatial representation and attention.
Article
Our timing estimates are often prone to distortions from non-temporal attributes such as the direction of motion. Motion direction has been reported to lead to interval dilation when the movement is toward (i.e., looming) as compared to away from the viewer (i.e., receding). This perceptual asymmetry has been interpreted based on the contextual salience and prioritization of looming stimuli that allows for timely reactions to approaching objects. This asymmetry has mainly been studied through abstract stimulation with minimal social relevance. Focusing on the latter, we utilized naturalistic displays of biological motion and examined the aforementioned perceptual asymmetry in the temporal domain. In Experiment 1, we tested visual looming and receding human movement at various intervals in a reproduction task and found no differences in the participants’ timing estimates as a function of motion direction. Given the superiority of audition in timing, in Experiment 2, we combined the looming and receding visual stimulation with sound stimulation of congruent, incongruent, or no direction information. The analysis showed an overestimation of the looming as compared to the receding visual stimulation when the sound presented was of congruent or no direction, while no such difference was noted for the incongruent condition. Both looming and receding conditions (congruent and control) led to underestimations as compared to the physical durations tested. Thus, the asymmetry obtained could be attributed to the potential perceptual negligibility of the receding stimuli instead of the often-reported salience of looming motion. The results are also discussed in terms of the optimality of sound in the temporal domain.
Article
Over the past decade, there has been an unprecedented level of interest and progress into understanding visual processing in the brain of the deaf. Specifically, when the brain is deprived of input from one sensory modality (such as hearing), it often compensates with supranormal performance in one or more of the intact sensory systems (such as vision). Recent psychophysical, functional imaging, and reversible deactivation studies have converged to define the specific visual abilities that are enhanced in the deaf, as well as the cortical loci that undergo crossmodal plasticity in the deaf and are responsible for mediating these superior visual functions. Examination of these investigations reveals that central visual functions, such as object and facial discrimination, and peripheral visual functions, such as motion detection, visual localization, visuomotor synchronization, and Vernier acuity (measured in the periphery), are specifically enhanced in the deaf, compared with hearing participants. Furthermore, the cortical loci identified to mediate these functions reside in deaf auditory cortex: BA 41, BA 42, and BA 22, in addition to the rostral area, planum temporale, Te3, and temporal voice area in humans; primary auditory cortex, anterior auditory field, dorsal zone of auditory cortex, auditory field of the anterior ectosylvian sulcus, and posterior auditory field in cats; and primary auditory cortex and anterior auditory field in both ferrets and mice. Overall, the findings from these studies show that crossmodal reorganization in auditory cortex of the deaf is responsible for the superior visual abilities of the deaf.
Article
This study examined the difference in the perception of time between young and older adults in a temporal bisection task with four different duration ranges from a few milliseconds (500 ms) to several seconds (30 s). In addition, individual cognitive capacities (short-term memory, working memory, processing speed, attention) were assessed with different neuropsychological tests. The results showed a general effect of age on the variability of time judgment, indicating a lower sensitivity to time in the older than in the younger adults, regardless of the duration range tested. In addition, the results showed that the individual differences in time sensitivity were explained by attention capacities, which decline with aging.
Article
It is not clear how audio-visual temporal perception develops in children with restored hearing. In this study we measured temporal discrimination thresholds with an audio-visual temporal bisection task in 9 deaf children with restored audition, and 22 typically hearing children. In typically hearing children, audition was more precise than vision, with no gain in multisensory conditions (as previously reported in Gori, Sandini, & Burr, 2012). However, deaf children with restored audition showed similar auditory and visual thresholds and some evidence of gain in audio-visual temporal multisensory conditions. Interestingly, we found a strong correlation between auditory weighting of multisensory signals and quality of language: patients who gave more weight to audition had better language skills. Similarly, auditory thresholds for the temporal bisection task were also a good predictor of language skills. This result supports the idea that temporal auditory processing is associated with language development.
Article
During the first years of life, sensory modalities communicate with each other. This process is fundamental for the development of unisensory and multisensory skills. The absence of one sensory input impacts the development of the other modalities. Since 2008 we have studied these aspects and developed our cross-sensory calibration theory. This theory emerged from the observation that children start to integrate multisensory information (such as vision and touch) only after 8-10 years of age. Before this age the more accurate sense teaches (calibrates) the others; when one calibrating modality is missing, the other modalities are impaired. Children with visual disability have problems in understanding the haptic or auditory perception of space, and children with motor disabilities have problems in understanding the visual dimension of objects. This review presents our recent studies on multisensory integration and cross-sensory calibration in children and adults with and without sensory and motor disabilities. The goal of this review is to show the importance of interaction between sensory systems during the early period of life in order for correct perceptual development to occur.
Article
The aim of this study was to provide the first, comprehensive meta-analysis of the neuroimaging literature regarding greater neural responses to a deviant stimulus in a stream of repeated, standard stimuli, termed here oddball effects. The meta-analysis of 75 independent studies included a comparison of auditory and visual oddball effects and task-relevant and task-irrelevant oddball effects. The results were interpreted with reference to the model in which a large-scale dorsal frontoparietal network embodies a mechanism for orienting attention to the environment, whereas a large-scale ventral frontoparietal network supports the detection of salient, environmental changes. The meta-analysis yielded three main sets of findings. First, ventral network regions were strongly associated with oddball effects and largely common to auditory and visual modalities, indicating a supramodal "alerting" system. Most ventral network components were more strongly associated with task-relevant than task-irrelevant oddball effects, indicating a dynamic interplay of stimulus saliency and internal goals in stimulus-driven engagement of the network. Second, the bilateral inferior frontal junction, an anterior core of the dorsal network, was strongly associated with oddball effects, suggesting a central role in top-down attentional control. However, other dorsal network regions showed no or only modest association with oddball effects, likely reflecting active engagement during both oddball and standard stimulus processing. Finally, prominent oddball effects outside the two networks included the sensory cortex regions, likely reflecting attentive and preattentive modulation of early sensory activity, and subcortical regions involving the putamen, thalamus, and other areas, likely reflecting subcortical involvement in alerting responses. Hum Brain Mapp, 2013. © 2013 Wiley Periodicals, Inc.
Article
Evidence from neurophysiological studies in animals as well as humans has demonstrated robust changes in neural organization and function following early-onset sensory deprivation. Unfortunately, the perceptual consequences of these changes remain largely unexplored. The study of deaf individuals who have been auditorily deprived since birth and who rely on a visual language (i.e., American Sign Language, ASL) for communication affords a unique opportunity to investigate the degree to which perception in the remaining, intact senses (e.g., vision) is modified as a result of altered sensory and language experience. We studied visual motion perception in deaf individuals and compared their performance with that of hearing subjects. Thresholds and reaction times were obtained for a motion discrimination task, in both central and peripheral vision. Although deaf and hearing subjects had comparable absolute scores on this task, a robust and intriguing difference was found regarding relative performance for left-visual-field (LVF) versus right-visual-field (RVF) stimuli: Whereas hearing subjects exhibited a slight LVF advantage, the deaf exhibited a strong RVF advantage. Thus, for deaf subjects, the left hemisphere may be specialized for motion processing. These results suggest that perceptual processes required for the acquisition and comprehension of language (motion processing, in the case of ASL) are recruited (or “captured”) by the left, language-dominant hemisphere.
Article
A book describing "the different ways in which man adapts to the temporal conditions of his existence." Sections are devoted to conditioning to time, the perception of time, and control over time. Views of philosophers and early experimental psychologists are represented as well as those based on recent experiments. (567 ref.) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
When a unique "oddball" stimulus is embedded in a train of repeated standard stimuli, its duration can seem relatively exaggerated (V. Pariyadath & D. Eagleman, 2007; P. U. Tse, J. Intriligator, J. Rivest, & P. Cavanagh, 2004). We explored the possibility of a link between this and signal intensity reductions at low levels of visual processing. In Experiment 1, we used Troxler fading as a metric of signal intensity: the apparent fading of a stimulus with prolonged viewing (I. P. V. Troxler, 1804). Fading was exaggerated by presenting oddball and standard stimuli to different eyes. However, there was no fading difference when standard stimuli were presented persistently or intermittently. These results contrast with oddball effects, which were insensitive to eye of origin, and which were contingent on intermittent standard stimuli. In Experiment 2, we show that oddball effects can be elicited with oddballs that are less intense versions of repetitive stimuli, and in Experiment 3, we show that oddball effects can scale with the discrepancy between repeated and oddball stimuli. These observations discredit any oddball effect explanation predicated on low-level neural response magnitudes to individual stimuli. Instead, our data support the view that oddball effects are driven by predictive coding (V. Pariyadath & D. Eagleman, 2007), reflecting the discrepancy between expected and actual inputs.
Article
There is little direct psychophysical evidence that the visual system contains mechanisms tuned to head-centered velocity when observers make a smooth pursuit eye movement. Much of the evidence is implicit, relying on measurements of bias (e.g., matching and nulling). We therefore measured discrimination contours in a space dimensioned by pursuit target motion and relative motion between target and background. Within this space, lines of constant head-centered motion are parallel to the main negative diagonal, so judgments dominated by mechanisms that combine individual components should produce contours with a similar orientation. Conversely, contours oriented parallel to the cardinal axes of the space indicate judgments based on individual components. The results provided evidence for mechanisms tuned to head-centered velocity: discrimination ellipses were significantly oriented away from the cardinal axes, toward the main negative diagonal. However, ellipse orientation was considerably less steep than predicted by a pure combination of components. This suggests that observers used a mixture of two strategies across trials, one based on individual components and another based on their sum. We provide a model that simulates this type of behavior and is able to reproduce the ellipse orientations we found.
Article
The present experiment examined the interactive effects of sex, age, and interval duration on individuals' time perception accuracy. Participants engaged in the duration production task and subsequently completed questionnaires designed to elicit their temporal attitudes. The overall group of 100 individuals was divided evenly between the sexes. Five groups, each composed of 10 males and 10 females, were divided by decades of age ranging from 20 to 69 years old. The specific time estimation task was an empty interval production procedure composed of 50 trials on each of four different intervals of 1, 3, 7, and 20 s, respectively. The presentation orders of these intervals were randomized across participants but yoked across the sexes within each of the respective age groups. Analysis of the production results indicated significant influences of the sex of the participant, while age did not appear to affect estimates of these short durations. Temporal attitudes, as reflected in responses to time questionnaire inquiries, did however exhibit significant differences across age. The contending theoretical accounts of such sex and age differences are considered, and explanatory accounts that present a synthesis of endogenous and exogenous causal factors are discussed in light of the present pattern of findings.
Article
Despite wide recognition that a moving object is perceived to last longer, scientists do not yet agree as to how this illusion occurs. In the present study, we conducted two experiments using two experimental methods, namely duration matching and reproduction, and systematically manipulated the temporal frequency, spatial frequency, and speed of the stimulus, to identify the determinant factor of the illusion. Our results indicated that the speed of the stimulus, rather than temporal frequency or spatial frequency per se, best described the perceived duration of a moving stimulus, with the apparent duration proportionally increasing with log speed (Experiments 1 and 2). However, in an additional experiment, we found little or no change in onset and offset reaction times for moving stimuli (Experiment 3). Arguing that speed information is made explicit in higher stages of visual information processing in the brain, we suggest that this illusion is primarily mediated by higher level motion processing stages in the dorsal pathway.
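The key quantitative claim above — apparent duration increasing proportionally with log speed — implies a distinctive signature: equal speed ratios produce equal increments of dilation. A toy sketch of that relationship (the gain `k` and reference speed are made-up parameters for illustration; the paper reports only the log-linear form, not these values):

```python
import math

def apparent_duration(physical_ms, speed, k=0.1, ref_speed=1.0):
    """Toy log-speed dilation model.

    Apparent duration grows linearly with log(speed / ref_speed);
    k scales the dilation. Both parameters are illustrative
    assumptions, not values fitted in the paper.
    """
    return physical_ms * (1 + k * math.log(speed / ref_speed))

# Doubling speed adds a constant increment of apparent duration,
# regardless of the starting speed.
d1 = apparent_duration(1000, 2.0)
d2 = apparent_duration(1000, 4.0)
d3 = apparent_duration(1000, 8.0)
# (d2 - d1) equals (d3 - d2) up to floating-point error
```

This log-linear form is what distinguishes a speed-based account from one driven by temporal frequency or spatial frequency alone, since those vary independently of speed in the paper's design.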
Article
Two experiments were conducted to determine whether young and old adults differ in the rate of a hypothetical internal clock. Clock rate was measured as the slope of the function relating actual duration to perceived duration. No age differences were apparent when subjects were asked to judge the duration of a flash of light in Exp. I, or to judge the duration of a dark interval between two light flashes in Exp. II. It was concluded that there is no evidence to support the hypothesis that perceptual and motor speed differences associated with increased age can be attributed to a slower rate of internal time.
Article
We examined the ability of observers to determine the vertical alignment of three Gabor patches (cosine gratings tapered in X and Y by Gaussians) when the grating within the middle patch was moving right or left. The comparison patches were flickered in counterphase, as was the test patch in a control condition. In all conditions, the Gabor patch itself (the envelope) was stationary. Vernier acuity (i.e. sensitivity) was almost as good with the moving as with the flickering Gabors, but there was a very pronounced positional bias in the case of the patterns in which the internal gratings were moving. The (stationary) patches appeared to be displaced in the direction of the grating movement. Thus if the grating were drifting rightwards, the observer would see the patches as being aligned only when the test patch position in fact was shifted far over to the left. This movement-related bias increased rapidly with retinal eccentricity, reaching 15 min at 8 deg eccentricity. The bias was greatest at 4-8 Hz temporal frequency, and at low spatial frequencies. Whether the patterns were on the horizontal or the vertical meridian was largely irrelevant, but larger biases were found with patterns moving towards or away from the fovea than with those moving in a tangential direction.
Article
A stationary window was cut out of a stationary random-dot pattern. When a field of dots was moved continuously behind the window (a) the window appeared to move in the same direction even though it was stationary, (b) the position of the 'kinetic edges' defining the window was also displaced along the direction of dot motion, and (c) the edges of the window tended to fade on steady fixation even though the dots were still clearly visible. The illusory displacement was enhanced considerably if the kinetic edge was equiluminous and if the 'window' region was seen as 'figure' rather than 'ground'. Since the extraction of kinetic edges probably involves the use of direction-selective cells, the illusion may provide insights into how the visual system uses the output of these cells to localize the kinetic edges.
Article
The auditory and visual modalities differ in their capacities for temporal analysis, and speech relies on more rapid temporal contrasts than does sign language. We examined whether congenitally deaf signers show enhanced or diminished capacities for processing rapidly varying visual signals in light of the differences in sensory and language experience of deaf and hearing individuals. Four experiments compared rapid temporal analysis in deaf signers and hearing subjects at three different levels: sensation, perception, and memory. Experiment 1 measured critical flicker frequency thresholds and Experiment 2, two-point thresholds to a flashing light. Experiments 3-4 investigated perception and memory for the temporal order of rapidly varying nonlinguistic visual forms. In contrast to certain previous studies, specifically those investigating the effects of short-term sensory deprivation, no significant differences between deaf and hearing subjects were found at any level. Deaf signers do not show diminished capacities for rapid temporal analysis, in comparison to hearing individuals. The data also suggest that the deficits in rapid temporal analysis reported previously for children with developmental language delay cannot be attributed to lack of experience with speech processing and production.
Article
The first stimulus in a sequential train of identical flashes of light appears to last longer than those in the middle of the train. Four flashes (each 600 or 667 ms) were presented and the first was shortened until it appeared to have the same duration as that of the next. The duration of the first stimulus was found to be overestimated by about 50%. The illusion was unaffected by stimulus contrast, size, or interflash interval (between 100 and 600 ms). For some subjects, the last stimulus in the train also appeared to be about 50% longer than the penultimate flash. The results are discussed in terms of theories of how attention, arousal, and stimulus processing can affect duration perception. The mechanisms activated are peculiar to the visual system, since no similar illusion of duration was consistently experienced with a train of auditory tones.