Drew J. McLaughlin's research while affiliated with Washington University in St. Louis and other places

Publications (38)

Preprint
Alternating between different talkers during listening typically incurs a cognitive processing cost. How these processing costs manifest, and potentially differ, in a multi-accent setting remains to be examined. Across two experiments, we investigate (1) whether talker and accent switching costs are driven by engagement of a recalibration mechanism...
Article
Full-text available
Prior research has shown that visual information, such as a speaker’s perceived race or ethnicity, prompts listeners to expect a specific sociophonetic pattern (“social priming”). Indeed, a picture of an East Asian face may facilitate perception of second language (L2) Mandarin Chinese-accented English but interfere with perception of first languag...
Article
Face masks offer essential protection but also interfere with speech communication. Here, audio-only sentences spoken through four types of masks were presented in noise to young adult listeners. Pupil dilation (an index of cognitive demand), intelligibility, and subjective effort and performance ratings were collected. Dilation increased in respon...
Article
The present study examined whether race information about speakers can promote rapid and generalizable perceptual adaptation to second-language accent. First-language English listeners were presented with Cantonese-accented English sentences in speech-shaped noise during a training session with three intermixed talkers, followed by a test session w...
Article
Listeners use more than just acoustic information when processing speech. Social information, such as a speaker’s perceived race or ethnicity, can also affect the processing of the speech signal, in some cases facilitating perception (“social priming”). We aimed to replicate and extend this line of inquiry, examining effects of multiple social prim...
Preprint
Under multi-talker listening conditions, listeners appear to rapidly accommodate variability in speaker productions. However, evidence indicates that this trial-to-trial accommodation incurs a processing cost, and is amplified by accent differences among talkers. The present study investigated how individual listener differences in working memory c...
Article
Full-text available
Prior work in speech processing indicates that listening tasks with multiple speakers (as opposed to a single speaker) result in slower and less accurate processing. Notably, the trial-to-trial cognitive demands of switching between speakers or switching between accents have yet to be examined. We used pupillometry, a physiological index of cogniti...
Preprint
Listeners rapidly “tune” to unfamiliar accented speech, and some evidence also suggests that they may improve over multiple days of exposure. The present study aimed to measure accommodation of unfamiliar second language- (L2-) accented speech over a consecutive five-day period using both a measure of listening performance (speech recognition accur...
Preprint
The present study examined whether race information about speakers can promote rapid and generalizable perceptual adaptation to second-language (L2) accent. We presented first-language (L1) English listeners with Cantonese-accented English sentences in speech-shaped noise during a training session with three intermixed talkers, followed by a test s...
Preprint
Cortical tracking of speech is vital for speech segmentation and is linked to speech intelligibility. However, there is no clear consensus as to whether reduced intelligibility leads to a decrease or an increase in cortical speech tracking, warranting further investigation of the factors influencing this relationship. One such factor is listening e...
Preprint
Face masks offer essential protection but also interfere with speech communication. Here, audio-only sentences spoken through four types of masks were presented in noise to young adult listeners. Pupil dilation (an index of cognitive demand), intelligibility, and subjective effort and performance ratings were collected. Dilation increased significa...
Article
The subjective ease of understanding accents that differ from a listener’s has typically been assessed using self-reports. This approach, however, relies on metacognitive judgments that are difficult to interpret and may not converge with objective measures of effort. To address this challenge, this study utilizes effort discounting, a paradigm bor...
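Effort discounting, as described in this abstract, is borrowed from behavioral economics: the subjective value of a reward is modeled as declining with the effort required to obtain it. A minimal sketch, assuming a hyperbolic discounting form (a common choice in this literature, not necessarily the model used in the study; the parameter names and values are illustrative):

```python
def subjective_value(reward, effort, k):
    """Hyperbolic effort discounting: value falls as required effort rises.

    reward: objective reward magnitude
    effort: effort level (e.g., scaled listening difficulty)
    k: fitted discounting rate; larger k means steeper devaluation
    """
    return reward / (1 + k * effort)

# A listener with k = 0.5 values a 10-unit reward at a hard level (effort 4)
# less than a 4-unit reward at an easy level (effort 0):
hard = subjective_value(10, 4, k=0.5)  # 10 / 3
easy = subjective_value(4, 0, k=0.5)   # 4.0
```

Fitting k per participant then gives a quantitatively precise index of how steeply each listener discounts reward by listening difficulty.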
Article
Pupillometry has a rich history in the study of perception and cognition. One perennial challenge is that the magnitude of the task-evoked pupil response diminishes over the course of an experiment, a phenomenon we refer to as a fatigue effect. Reducing fatigue effects may improve sensitivity to task effects and reduce the likelihood of confounds d...
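The fatigue effect described above is one reason pupil analyses typically work with per-trial, baseline-corrected traces rather than raw diameter, so that slow drift across the session does not masquerade as a task effect. A minimal sketch of subtractive baseline correction, assuming a 1-D trace whose first samples precede stimulus onset (window length and values are illustrative):

```python
import numpy as np

def baseline_correct(trial_trace, baseline_samples):
    """Subtractive baseline correction for one trial's pupil trace.

    trial_trace: 1-D array of pupil diameter samples, with the first
    `baseline_samples` recorded before stimulus onset.
    """
    baseline = np.nanmean(trial_trace[:baseline_samples])
    return trial_trace - baseline

# Toy trace: the pre-stimulus mean (first 4 samples) is removed, leaving
# only the stimulus-evoked change.
trace = np.array([4.0, 4.1, 3.9, 4.0, 4.5, 4.8, 4.6])
corrected = baseline_correct(trace, baseline_samples=4)
```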
Preprint
Full-text available
The potential negative impact of head movement during fMRI has long been appreciated. Although a variety of prospective and retrospective approaches have been developed to help mitigate these effects, reducing head movement in the first place remains the most appealing strategy for optimizing data quality. Real-time interventions, in which particip...
Preprint
Prior research has shown that visual information, such as a speaker’s perceived race or ethnicity, prompts listeners to expect a specific socio-phonetic pattern (“social priming”). Indeed, a picture of an East Asian face may facilitate perception of second language (L2) Mandarin Chinese-accented English but interfere with perception of first langua...
Preprint
Prior work in speech perception indicates that listening tasks with multiple speakers (as opposed to a single speaker) result in slower and less accurate processing. Notably, the trial-to-trial cognitive demands of switching between speakers or switching between accents have yet to be examined. We used pupillometry, a physiological index of cogniti...
Preprint
Pupillometry has a rich history of use for indexing perception and cognition. One perennial challenge is that the magnitude of the task-evoked pupil response diminishes over the course of an experiment, a phenomenon we refer to as a fatigue effect. Reducing fatigue effects may improve sensitivity to task effects and avoid conclusions that may be co...
Preprint
Listeners use more than just acoustic information when processing speech. Social information, such as a speaker’s race/ethnicity, can also affect listeners’ processing of the speech signal, in some cases facilitating perception. We aimed to build on this line of inquiry, beginning with a conceptual replication of work by McGowan (2015). Outcomes of...
Article
Social information, such as a speaker’s race, can affect the perception of speech. In some cases, these social primes facilitate perception, while in others they can lead to reduced speech intelligibility. For example, a picture of an East Asian face may facilitate perception of Mandarin Chinese-accented English but interf...

Article
Speech intelligibility is improved when the listener can see the talker in addition to hearing their voice. Notably, though, previous work has suggested that this “audiovisual benefit” for nonnative (i.e., foreign-accented) speech is smaller than the benefit for native speech, an effect that may be partially accounted for by listeners’ implicit rac...
Article
Prior work indicates that a speaker’s race/ethnicity can prime a listener to expect native versus nonnative (foreign) accents. In the present study, we replicate the findings of McGowan (2015) and then address novel topics including the effect of social primes on perceptual adaptation. Using a matched-guise design, we exam...
Article
In most contemporary activation-competition frameworks for spoken word recognition, candidate words compete against phonological “neighbors” with similar acoustic properties (e.g., “cap” vs. “cat”). Thus, recognizing words with more competitors should come at a greater cognitive cost relative to recognizing words with fewer competitors, due to incr...
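The competition logic in this abstract (more phonological neighbors, more candidates to suppress, greater cognitive cost) can be illustrated with a toy neighbor count. A sketch assuming a one-edit neighbor definition over orthographic strings as a stand-in for phonological transcriptions; the lexicon is invented:

```python
def edit_distance_one(a, b):
    """True if two words differ by exactly one substitution, insertion, or deletion."""
    if a == b:
        return False
    la, lb = len(a), len(b)
    if abs(la - lb) > 1:
        return False
    if la == lb:  # same length: exactly one substitution
        return sum(x != y for x, y in zip(a, b)) == 1
    if la > lb:  # ensure a is the shorter word
        a, b = b, a
    # one insertion/deletion: deleting some character of b yields a
    return any(a == b[:i] + b[i + 1:] for i in range(len(b)))

def neighborhood(word, lexicon):
    """All one-edit 'neighbors' of a word within a lexicon."""
    return sorted(w for w in lexicon if edit_distance_one(word, w))

lexicon = {"cat", "cap", "cot", "at", "cart", "dog"}
dense = neighborhood("cat", lexicon)   # many competitors
sparse = neighborhood("dog", lexicon)  # none in this toy lexicon
```

On an activation-competition account, recognizing "cat" in this lexicon should be costlier than recognizing "dog", since more candidates must be suppressed.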
Preprint
Speech intelligibility is improved when the listener can see the talker in addition to hearing their voice. Notably, however, previous work has suggested that this “audiovisual benefit” for nonnative (i.e., foreign-accented) speech is smaller than the benefit for native speech, an effect that may be partially accounted for by listeners’ implicit ra...
Preprint
In most contemporary activation-competition frameworks for spoken word recognition, candidate words compete against phonological “neighbors” with similar acoustic properties (e.g., “cap” vs. “cat”). Thus, processing words with more competitors should come at a greater cognitive cost than processing words with fewer competitors, due to increased dem...
Article
Full-text available
Purpose: Objective measures of listening effort have been gaining prominence, as they provide metrics to quantify the difficulty of understanding speech under a variety of circumstances. A key challenge has been to develop paradigms that enable the complementary measurement of subjective listening effort in a quantitatively precise manner. In this s...
Article
Objective measures of listening effort have been gaining prominence, as they provide metrics to quantify the difficulty of understanding speech. A key challenge has been to develop paradigms that enable the complementary measurement of subjective listening effort in a quantitatively precise manner. In the present study, we...
Article
In noisy settings or when listening to an unfamiliar talker or accent, it may be difficult to understand spoken language. This difficulty can result in reductions in speech intelligibility, but may also increase the effort necessary to process the speech. In the current study we used a dual-task paradigm and pupillometry t...
Article
For nearly 25 years, researchers have recognized the rich and numerous facets of native perception of non-native speech, driving a large and growing body of work that has shed light on how native listeners understand non-native speech. The bulk of this work, however, has focused on the talker. That is, most researchers have asked what perception...
Article
In noisy settings or when listening to an unfamiliar talker or accent, it can be difficult to understand spoken language. This difficulty typically results in reductions in speech intelligibility, but may also increase the effort necessary to process the speech even when intelligibility is unaffected. In this study, we used a dual-task paradigm and...
Preprint
Objective measures of listening effort have been gaining increasing prominence, as they provide metrics to quantify the difficulty of understanding speech under a variety of circumstances. A key challenge has been to develop paradigms that enable the complementary measurement of subjective listening effort in a quantitatively precise manner. In the...
Article
Unfamiliar second-language (L2) accents present a common challenge to speech understanding. However, the extent to which accurately recognized unfamiliar L2-accented speech imposes a greater cognitive load than native speech remains unclear. The current study used pupillometry to assess cognitive load for native English listeners during the percept...
Preprint
Speech perception under adverse conditions, such as those caused by noise in the environment or a speaker’s accent, can be cognitively demanding. For second language- (L2-) accented speech, mismatches between the speech patterns of an L2-accented speaker and a listener can result in poorer understanding and reduced intelligibility (i.e., fewer word...
Preprint
In noisy settings or when listening to an unfamiliar talker or accent, it may be difficult to recognize spoken language. This difficulty typically results in reductions in speech intelligibility, but may also increase the effort necessary to process the speech. In the current study, we used a dual-task paradigm and pupillometry to assess the cognit...
Article
Listening to second language- (L2-) accented speech is often described as an effortful process, even when L2 speakers are highly proficient. This increase in listening effort is likely caused by systematic segmental and suprasegmental deviations from native-speaker norms, which require additional cognitive resources to pro...
Article
During speech communication, both environmental noise and nonnative accents can create adverse conditions for the listener. Individuals recruit additional cognitive, linguistic, and/or perceptual resources when faced with such challenges. Furthermore, listeners vary in their ability to understand speech in adverse conditions. In the present study,...
Article
Although human speech recognition is often experienced as relatively effortless, a number of common challenges can render the task more difficult. Such challenges may originate in talkers (e.g., unfamiliar accents, varying speech styles), the environment (e.g., noise), or in listeners themselves (e.g., hearing loss, aging, different native language...
Article
Both environmental noise and talker-related variation (e.g., accented speech) can create adverse listening conditions for speech communication. Individuals recruit additional cognitive, linguistic, or perceptual resources when faced with such challenges, and they vary in their ability to understand degraded speech. However, it is unclear whether li...

Citations

... While relatively less problematic than the effect of removing visual cues, the use of face coverings was found to negatively impact each of the subjective aspects of communication measured. Even small effects like these can accumulate over extended periods, especially in less favourable listening conditions, requiring more cognitive resources and potentially leading to fatigue (Carraturo et al., 2023;McGarrigle et al., 2014). ...
... CTS has been suggested to reflect the capacity of neural oscillations to synchronize, or phase-lock, with quasi-rhythmic information contained in slow amplitude modulations of speech (the speech envelope). CTS is commonly observed in temporal brain regions and within the delta (<4 Hz) and theta (4-8 Hz) frequency bands, aligning with the prosodic and syllabic rhythms in the speech envelope, respectively (Peelle, Gross, and Davis, 2013; Doelling et al., 2014; Molinaro and Lizarazu, 2018; Destoky et al., 2019; Bourguignon et al., 2020; Ershaid et al., 2024). It has been suggested that CTS is an important part of speech processing because it helps separate and decode continuous speech signals into linguistic units at different timescales (Ahissar et al., 2001; Giraud and Poeppel, 2012; Peelle and Davis, 2012; Peelle et al., 2013; Zoefel and VanRullen, 2015; Ding et al., 2016; Keitel, Gross, and Kayser, 2018; Kösem et al., 2018; Meyer and Gumbert, 2018; Lizarazu, Carreiras, and Molinaro, 2023). ...
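The delta- and theta-band alignment described in this excerpt is typically computed on the speech envelope after band-pass filtering into those frequency ranges. A minimal sketch, assuming an already-extracted envelope sampled at 100 Hz and using an ideal FFT-mask filter in place of whatever filter a given study uses; the toy 2 Hz and 6 Hz rhythms stand in for prosodic and syllabic rates:

```python
import numpy as np

def fft_bandpass(signal, fs, lo, hi):
    """Ideal (brick-wall) band-pass filter via FFT masking."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    spec[(freqs < lo) | (freqs > hi)] = 0  # zero out-of-band bins
    return np.fft.irfft(spec, n=len(signal))

fs = 100  # envelope sampling rate in Hz (assumed, after downsampling)
t = np.arange(0, 10, 1 / fs)
# Toy "speech envelope": a 2 Hz prosodic rhythm plus a 6 Hz syllabic rhythm
env = np.sin(2 * np.pi * 2 * t) + 0.5 * np.sin(2 * np.pi * 6 * t)
delta = fft_bandpass(env, fs, 0.5, 4)  # delta band (<4 Hz): prosodic rate
theta = fft_bandpass(env, fs, 4, 8)    # theta band (4-8 Hz): syllabic rate
```

Tracking measures such as coherence or phase-locking are then computed between these band-limited envelopes and the neural signal in the same bands.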
... Further work has also reexamined social priming for L2 accent, with varying outcomes. In a recent conceptual replication and expansion of McGowan (2015), our lab found that White American listeners were better able to understand Mandarin-accented English when paired with an East Asian, as compared to a White, face (McLaughlin & Van Engen, 2023b). Our data further showed that this difference was significant beginning at Trial 1, demonstrating that the priming effect was extremely rapid. ...
... A manipulation of the number of talkers potentially produces variability in both acoustic (e.g., a broader distribution of acoustic features) and indexical (e.g., multiple talker identities, multiple talker genders) domains. Experimental conditions in which the talker varies trial-by-trial, as compared to single-talker or blocked-talker conditions, can impact speech processing, manifesting in slower, more effortful, or less accurate recognition of speech (Choi et al., 2018; McLaughlin et al., 2023; Mullennix et al., 1989; Sommers, 1997; Stilp & Theodore, 2020). These effects have been observed even when the talkers are highly familiar to the listeners (Magnuson et al., 2021). ...
... Listening effort, assessed using pupillometry, also increased with more challenging SNRs, with SNR 0 resulting in the greatest change in pupil diameter. Pupillometry has rapidly gained prominence as a tool to assess cognitive load and effortful listening, which is modulated by task difficulty, cognitive status, and hearing acuity 28,53,80-84. The precise neural pathway that induces pupillary changes during listening is still under study, but it is hypothesized to be mediated by the locus coeruleus-norepinephrine (LC-NE) system. ...
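SNR manipulations like the ones in this excerpt are produced by scaling the noise relative to the speech before mixing, so that the power ratio hits a target value in dB. A minimal sketch, assuming power-based SNR; the stand-in signals are illustrative:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale noise so the speech-to-noise power ratio equals snr_db, then mix."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    target_noise_power = p_speech / (10 ** (snr_db / 10))
    scaled_noise = noise * np.sqrt(target_noise_power / p_noise)
    return speech + scaled_noise

rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 16000))  # stand-in signal
noise = rng.standard_normal(16000)
mixture = mix_at_snr(speech, noise, snr_db=0)  # SNR 0: equal power
```

At SNR 0 dB the speech and the scaled noise carry equal average power, which is why that condition is typically the most demanding tested.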
... Currently, the degree of success in implementing clear speech and its effectiveness in enhancing intelligibility are assessed perceptually by clinicians. The subjective nature of this auditory-perceptual assessment poses several challenges associated with human perception, including speaker familiarity, context familiarity, and variability across clinicians (Babel and Russell, 2015;McGowan, 2015;Kutlu et al., 2022;McLaughlin and Van Engen, 2022). An acoustic-based approach would provide a solution as it is free of these biases, and identifying a biomarker for implementation of clear speech is the first step toward developing such an approach. ...
... Yi and colleagues interpreted this correlation as an indicator that biases toward Asian speakers may negatively affect the process of audiovisual integration for speech. However, in a direct replication of this study with a larger sample size (N = 260, as compared to N = 19 in Yi et al., 2013), McLaughlin et al. (2022) did not find evidence that IAT scores were related to reduced audiovisual benefit for Korean-accented English. The main difference in audiovisual benefit for Korean-accented versus American-accented speakers successfully replicated, and a follow-up experiment further demonstrated that this finding was not due to a confound of the overall difficulty level of each accent type. ...
... It remains an open question whether the amplitude of evoked pupil responses from this reduced baseline is dampened in aging. Some studies have reported higher task-evoked dilation for older adults (Piquado et al., 2010), whereas others have reported the opposite phenomenon, an issue which also depends on the statistical approach taken (McLaughlin et al., 2022). In any case, researchers should be cautious when conducting between-subjects contrasts such as younger vs. older adults because their canonical pupil response functions likely differ in ways that are not yet fully understood. ...
... For example, older and younger adults appear to be differentially motivated by monetary reward to participate in effortful listening, independent of self-reported socioeconomic status. McLaughlin et al. (2021) observed that, compared to younger adults, normal-hearing older adults chose to engage in less demanding SNRs, forgoing a larger monetary reward. A greater tendency to exhibit this pattern of discounting behavior was found among older adults with poorer hearing thresholds and smaller working memory capacities. ...
... This variability is known to make the recognition of unfamiliar talkers or accents difficult (Adank and Janse, 2010;Porretta et al., 2016). These difficulties can, however, dissipate as listeners adapt to the current input (Bradlow and Bent, 2008;Tzeng et al., 2016;Baese-Berk et al., 2020;Xie et al., 2021). For example, native listeners of English become significantly faster and more accurate in responding to Spanish-or Mandarin-accented speech within as few as 18 sentence-length utterances (Clarke and Garrett, 2004;Xie et al., 2018). ...