About the lab

Featured research (9)

Eye blinks not only serve to maintain the tear film of the eye but also seem to have a functional role in information processing. People tend to inhibit an eye blink when they expect relevant information to occur in their visual environment and blink more often when the information has been processed. Recent studies have shown that this relation also holds for auditory information processing. However, only artificial auditory stimuli like tones or controlled lists of words were used in studies so far. In the current study, we tested whether there would be a temporal association between the pauses in a continuous speech stream and the listener’s eye blinks. To this end, we analyzed the eye blinks of 35 participants who were instructed to attend to one of two simultaneously presented audiobooks. We found that the blink patterns of 13 participants were coupled with the speech pauses in the attended speech stream. These participants blinked more often during the pauses in the attended speech stream. Contrary to our prediction, participants did not inhibit their blinking preceding a pause in the attended speech stream. As expected, there was no evidence that the listeners’ blink pattern was coupled to the pauses in the ignored speech stream. Thus, the listeners’ blink patterns can reflect attention to continuous speech.
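A minimal sketch of this kind of blink–pause analysis (illustrative only, not the study's actual pipeline; `blink_times`, `pauses`, and the permutation scheme are assumptions): count how many blink onsets fall inside the pauses of the attended stream and compare that count against circularly shifted blink trains.

```python
# Minimal sketch (not the authors' code): test whether blink onsets fall
# inside pauses of the attended speech stream more often than chance.
# `blink_times` (seconds) and `pauses` [(start, end), ...] are hypothetical inputs.
import numpy as np

rng = np.random.default_rng(0)

def blinks_in_pauses(blink_times, pauses):
    """Count blink onsets that fall inside any pause interval."""
    blink_times = np.asarray(blink_times)
    return sum(((blink_times >= s) & (blink_times < e)).sum() for s, e in pauses)

def permutation_test(blink_times, pauses, duration, n_perm=1000):
    """Compare the observed count against circularly shifted blink trains."""
    observed = blinks_in_pauses(blink_times, pauses)
    null = np.empty(n_perm)
    for i in range(n_perm):
        shifted = (np.asarray(blink_times) + rng.uniform(0, duration)) % duration
        null[i] = blinks_in_pauses(shifted, pauses)
    p = (np.sum(null >= observed) + 1) / (n_perm + 1)
    return observed, p
```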
Auditory attention is an important cognitive function used to separate relevant from irrelevant auditory information. However, most findings on attentional selection have been obtained in highly controlled laboratory settings using bulky recording setups and unnaturalistic stimuli. Recent advances in electroencephalography (EEG) facilitate the measurement of brain activity outside the laboratory, and around-the-ear sensors such as the cEEGrid promise unobtrusive acquisition. In parallel, methods such as speech envelope tracking, intersubject correlations, and spectral entropy measures have emerged that allow attentional effects in the neural processing of natural, continuous auditory scenes to be studied. In the current study, we investigated whether these three attentional measures can be reliably obtained with around-the-ear EEG. To this end, we analyzed the cEEGrid data of 36 participants who attended to one of two simultaneously presented speech streams. Speech envelope tracking confirmed a reliable identification of the attended speaker from cEEGrid data. The accuracy in identifying the attended speaker increased when the classification model was fitted to the individual. Artifact correction of the cEEGrid data with artifact subspace reconstruction did not increase the classification accuracy. Intersubject correlations were higher for participants attending to the same speech stream than for those attending to different speech streams, replicating results previously obtained with high-density cap-EEG. We also found that spectral entropy decreased over time, possibly reflecting a decrease in the listener’s level of attention. Overall, these results support the idea of using ear-EEG measurements to unobtrusively monitor auditory attention to continuous speech. This knowledge may help to develop assistive devices that support listeners in separating relevant from irrelevant information in complex auditory environments.
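A rough sketch of the envelope-tracking step only (assumptions, not the lab's code): a backward, stimulus-reconstruction model maps time-lagged EEG to the speech envelope with ridge regression, and the attended speaker is identified as the one whose envelope correlates more strongly with the reconstruction. Function names, lag count, and regularization are placeholders.

```python
# Illustrative sketch: identify the attended speaker from (ear-)EEG via
# envelope tracking with a backward (stimulus-reconstruction) model.
import numpy as np
from sklearn.linear_model import Ridge

def lag_matrix(eeg, max_lag):
    """Stack time-lagged copies of the EEG (samples x channels*lags)."""
    n, ch = eeg.shape
    X = np.zeros((n, ch * max_lag))
    for lag in range(max_lag):
        X[lag:, lag * ch:(lag + 1) * ch] = eeg[:n - lag]
    return X

def decode_attention(eeg_train, env_train, eeg_test, env_a, env_b,
                     max_lag=32, alpha=1.0):
    """Fit the decoder on training data, then pick the attended envelope."""
    model = Ridge(alpha=alpha)
    model.fit(lag_matrix(eeg_train, max_lag), env_train)
    recon = model.predict(lag_matrix(eeg_test, max_lag))
    r_a = np.corrcoef(recon, env_a)[0, 1]
    r_b = np.corrcoef(recon, env_b)[0, 1]
    return ("A" if r_a > r_b else "B"), (r_a, r_b)
```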
Biological data like electroencephalography (EEG) are typically contaminated by unwanted signals, called artifacts. Therefore, many applications dealing with biological data with a low signal-to-noise ratio require robust artifact correction. For some applications, such as brain-computer interfaces (BCIs), the artifact correction needs to be real-time capable. Artifact subspace reconstruction (ASR) is a statistical method for artifact reduction in EEG. However, in its current implementation, ASR cannot easily be used for mobile data recordings on limited hardware. In this report, we add to the growing field of portable, online signal processing methods by describing an implementation of ASR for limited hardware such as single-board computers. We describe the architecture, the process of translating and compiling a Matlab codebase for a research platform, and a set of validation tests using publicly available data sets. The implementation of ASR on limited, portable hardware facilitates the online interpretation of EEG signals acquired outside of the laboratory environment.
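A heavily simplified sketch of the subspace-rejection idea behind ASR (for illustration only; the reported implementation uses robust calibration statistics, per-window eigendecompositions, and reconstructs rejected components from calibration data rather than zeroing them):

```python
# Very rough sketch of the ASR principle, not the described implementation:
# learn component-variance thresholds from clean calibration data, then, in
# sliding windows, suppress components whose variance exceeds the threshold
# and rebuild the data from the remaining subspace.
import numpy as np

def asr_calibrate(calib, cutoff=5.0):
    """Estimate per-component variance thresholds from clean calibration data."""
    cov = np.cov(calib)                      # channels x channels
    _, evecs = np.linalg.eigh(cov)           # component directions
    comps = evecs.T @ calib
    thresh = comps.std(axis=1) * cutoff      # simplified threshold rule
    return evecs, thresh

def asr_process(window, evecs, thresh):
    """Reconstruct one window, zeroing components that exceed their threshold."""
    comps = evecs.T @ window
    bad = comps.std(axis=1) > thresh
    comps[bad] = 0.0                         # real ASR reconstructs instead
    return evecs @ comps
```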
With smartphone-based mobile electroencephalography (EEG), we can investigate sound perception beyond the lab. To understand sound perception in the real world, we need to relate naturally occurring sounds to EEG data. For this, EEG and audio information need to be synchronized precisely; only then is it possible to capture fast and transient evoked neural responses and relate them to individual sounds. We have developed Android applications (AFEx and Record-a) that allow for the concurrent acquisition of EEG data and audio features, i.e., sound onsets, average signal power (RMS), and power spectral density (PSD), on a smartphone. In this paper, we evaluate these apps by computing event-related potentials (ERPs) evoked by everyday sounds. One participant listened to piano notes (played live by a pianist) and to a home-office soundscape. Timing tests showed a stable lag and a small jitter (< 3 ms), indicating a high temporal precision of the system. We calculated ERPs to sound onsets and observed the typical P1-N1-P2 complex of auditory processing. Furthermore, we show how to relate information on loudness (RMS) and spectra (PSD) to brain activity. In future studies, we can use this system to study sound processing in everyday life.
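A minimal sketch of this kind of evaluation (illustrative only; `compute_erp`, `windowed_rms`, and all parameters are assumptions, not the AFEx/Record-a implementation): epoch the EEG around detected sound onsets and average to obtain the ERP, and compute windowed RMS of the audio so loudness can be related to the neural response.

```python
# Minimal sketch: ERP from onsets plus windowed audio RMS (illustrative code).
import numpy as np

def compute_erp(eeg, onsets, fs, tmin=-0.2, tmax=0.8):
    """Average EEG epochs (channels x samples) cut around each onset (seconds)."""
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for t in onsets:
        i = int(t * fs)
        if i - pre >= 0 and i + post <= eeg.shape[1]:
            ep = eeg[:, i - pre:i + post]
            ep = ep - ep[:, :pre].mean(axis=1, keepdims=True)  # baseline correction
            epochs.append(ep)
    return np.mean(epochs, axis=0)

def windowed_rms(audio, fs, win_s=0.125):
    """Root-mean-square level of the audio in consecutive windows."""
    n = int(win_s * fs)
    trimmed = audio[:len(audio) // n * n].reshape(-1, n)
    return np.sqrt((trimmed ** 2).mean(axis=1))
```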

Lab head

Stefan Debener
Department
  • Department of Psychology

Members (7)

Sarah Blum
  • Hörzentrum gGmbH Oldenburg
Manuela Jaeger
  • Carl von Ossietzky Universität Oldenburg
Paul Maanen
  • Carl von Ossietzky Universität Oldenburg
Nadine Jacobsen
  • Carl von Ossietzky Universität Oldenburg
Björn Holtze
  • Carl von Ossietzky Universität Oldenburg
María Piñeyro Salvidegoitia
  • Carl von Ossietzky Universität Oldenburg

Alumni (8)

Maarten de Vos
  • University of Oxford
Catharina Zich
  • University of Oxford
Niclas Braun
  • University of Bonn
Jeremy D Thorne
  • Carl von Ossietzky Universität Oldenburg