Figure - available from: Trends in Hearing
Instrumental evaluation results of the input signals over all angular velocities as a function of the input SNR. The left panel shows the results of the iSNR measure, the middle panel those of the STOI measure, and the right panel those of the PESQ measure.

Source publication
Article
Full-text available
In many daily-life communication situations, several sound sources are simultaneously active. While normal-hearing listeners can easily distinguish the target sound source from interfering sound sources (as long as target and interferers are spatially or spectrally separated) and concentrate on the target, hearing-impaired listeners and cochlear impl...

Similar publications

Article
Full-text available
Although bilateral cochlear implant users receive input to both ears, they nonetheless have relatively poor localization abilities in the horizontal plane. This is likely because, of the two binaural cues, they have good sensitivity to interaural differences of level (inter-aural level differences, or ILDs) but not to those of time (inter-aural time d...

Citations

... Via this subsystem, a real-time binaural streaming connection with two microphones per device can be established between two demonstration devices. This allows binaural algorithms, e.g., a minimum variance distortionless response (MVDR) beamformer [13], to be studied and explored on the research hearing aid. ...
... Firstly, calibration software is deployed to calibrate the corresponding demonstration device's input and output levels. In addition, common algorithms like noise reduction [15], adaptive feedback cancellation [16], a dynamic compressor [14], and an MVDR beamformer [13] were evaluated on the platform. Figure 3 depicts a block diagram of an exemplary evaluation setup for a real-world use case. ...
... Different hearing aid algorithms are deployed on the SoC to evaluate the real power consumption and utilization: two 256-point FFT and iFFT assuming two microphone inputs, a binaural MVDR Beamformer [21] using four microphone inputs to steer the beam between four quadrants, and a monaural compressor. The MVDR Beamformer and monaural compressor work in the frequency domain, thus also internally computing a 256-point FFT and iFFT. ...
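As a rough illustration of the MVDR beamformer referenced in the passages above ([13], [21]), the weights for each frequency bin minimize output noise power subject to a distortionless constraint in the steering direction, w = R⁻¹d / (dᴴR⁻¹d). A minimal NumPy sketch with toy values (the four-microphone covariance and steering vector below are made up for illustration, not taken from the cited designs):

```python
import numpy as np

def mvdr_weights(R, d):
    """MVDR weights for a single frequency bin: minimize output noise
    power w^H R w subject to the distortionless constraint w^H d = 1."""
    Rinv_d = np.linalg.solve(R, d)
    return Rinv_d / (d.conj() @ Rinv_d)

# toy example: 4 microphones, one frequency bin
M = 4
rng = np.random.default_rng(0)
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
R = A @ A.conj().T + M * np.eye(M)             # Hermitian, positive definite
d = np.exp(-2j * np.pi * 0.1 * np.arange(M))   # plane-wave steering vector

w = mvdr_weights(R, d)
print(np.isclose(w.conj() @ d, 1.0))  # constraint holds: target direction passes undistorted
```

In a binaural hearing aid, one such weight vector would be computed per STFT bin, with R estimated from noise-only segments and d from the assumed target direction.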
Conference Paper
To handle the advances in hearing aid algorithms, the need for high-level programmable but low-power hardware architectures arises. Therefore, this paper presents the Smart Hearing Aid Processor (SmartHeaP), a mixed-signal system on chip (SoC) fabricated in 22 nm fully-depleted silicon-on-insulator (FD-SOI) with an adaptive body biasing (ABB) unit and a total die size of 7.36 mm². The proposed SoC consists of two application-specific instruction set processor (ASIP) architectures: firstly, a Cadence Tensilica Fusion G6 instruction set architecture, extended with custom instructions for audio processing, and secondly, a Cadence Tensilica LX7 for wireless interfacing, e.g., Bluetooth Low Energy. Furthermore, an analog front-end and digital audio interfaces are added. The large local memory of 2 MB and a high-level software environment enable memory-intensive algorithms to be deployed quickly. Typical hearing aid algorithms in a real-time setup are used to evaluate the power consumption of the SoC at different operating frequencies. At 50 MHz, a mean power consumption of less than 2.2 mW was measured, resulting in an efficiency of 34.8 µW/MHz.
... The adaptive beamforming component processes multichannel hearing aid signals, which often provides a substantial benefit for hearing aid users ( [7,8] and references therein). An adaptive procedure can exploit the spatial location of a sound source but it requires estimates such as the speaker's position (assuming that hearing-aid users would like to attend a speaker in an acoustic scene) and noise statistics [9][10][11]. Incorrect parameter estimates for DOA or classical beamforming introduce artifacts in the output signal which potentially decrease speech quality and intelligibility [12]. ...
Article
Full-text available
Current hearing aids are limited with respect to speech-specific optimization for spatial sound sources to perform speech enhancement. In this study, we therefore propose an approach for spatial detection of speech based on sound source localization and blind optimization of speech enhancement for binaural hearing aids. We have combined an estimator for the direction of arrival (DOA), featuring high spatial resolution but no specialization to speech, with a measure of speech quality with low spatial resolution obtained after directional filtering. The DOA estimator provides spatial sound source probability in the frontal horizontal plane. The measure of speech quality is based on phoneme representations obtained from a deep neural network, which is part of a hybrid automatic speech recognition (ASR) system. Three ASR-based speech quality measures (ASQM) are explored: entropy, mean temporal distance (M-Measure), and matched phoneme (MaP) filtering. We tested the approach in four acoustic scenes with one speaker and either a localized or a diffuse noise source at various signal-to-noise ratios (SNRs) in anechoic or reverberant conditions. The effects of incorrect spatial filtering and noise were analyzed. We show that two of the three ASQMs (M-Measure, MaP filtering) are suited to reliably identify the speech target in different conditions. The system is not adapted to the environment and does not require a priori information about the acoustic scene or a reference signal to estimate the quality of the enhanced speech signal. Nevertheless, our approach performs well in all tested acoustic scenes and at varying SNRs, and reliably detects incorrect spatial filtering angles.
... The beamforming stage exploits the spatial separation of the HA microphones and the resulting differences in the times of arrival (TOAs) to establish a direction-dependent pattern steered to the sound source of interest, for example, the target talker in a conversational scenario. Ranging from simple static unilateral delay-and-sum beamformers to more complex adaptive binaural beamforming algorithms, such as minimum-variance distortionless response beamformers (Van Trees, 2004; Adiloğlu et al., 2015), directional signal processing can provide substantially improved SNRs and correspondingly improved speech intelligibility (see, e.g., Picou & Ricketts, 2019; Best et al., 2015). ...
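The delay-and-sum principle mentioned in the passage above can be sketched in a few lines: each microphone channel is advanced by its TOA so that copies of the target signal add coherently, while sounds from other directions add incoherently. In this sketch the per-channel delays are assumed known; a real system would have to estimate them from the microphone signals:

```python
import numpy as np

def delay_and_sum(signals, fs, delays):
    """Fractional-delay alignment in the frequency domain, then average.
    signals: (channels, samples); delays: per-channel TOAs in seconds."""
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectra = np.fft.rfft(signals, axis=1)
    # advance each channel by its delay so all target copies line up
    spectra *= np.exp(2j * np.pi * freqs[None, :] * np.asarray(delays)[:, None])
    return np.fft.irfft(spectra, n=n, axis=1).mean(axis=0)

# toy check: the same pulse arrives 8 samples later at the second microphone
fs = 16000
t = np.arange(256) / fs
pulse = np.exp(-0.5 * ((t - 0.008) / 0.001) ** 2)
y = delay_and_sum(np.stack([pulse, np.roll(pulse, 8)]), fs, [0.0, 8 / fs])
print(np.allclose(y, pulse, atol=1e-9))  # aligned average recovers the pulse
```

Adaptive binaural beamformers such as MVDR replace the plain average with data-dependent weights, but the alignment step is the same in spirit.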
Book
Full-text available
Hearing loss (HL) has multifaceted negative consequences for individuals of all age groups. Despite individual fitting based on clinical assessment, consistent use of hearing aids (HAs) as a remedy is often discouraged by unsatisfactory HA performance. In response, the methodological complexity in the development of HA algorithms has been increased by employing virtual acoustic environments, which enable the simulation of indoor scenarios with plausible room acoustics. Inspired by the research question of how to make such environments accessible to HA users while maintaining complete signal control, a novel concept addressing combined perception via HAs and residual hearing is proposed. The specific system implementations employ a master HA and research HAs for aided signal provision, and loudspeaker-based spatial audio methods for external sound field reproduction. Systematic objective evaluations led to recommendations of configurations for reliable system operation, accounting for perceptual aspects. The results from perceptual evaluations involving adults with normal hearing revealed that the characteristics of the research HAs used primarily affect sound localisation performance, while allowing egocentric auditory distance estimates comparable to those observed when using loudspeaker-based reproduction. To demonstrate the applicability of the system, school-age children with HL fitted with research HAs were tested for speech-in-noise perception in a virtual classroom and achieved speech reception thresholds comparable to a comparison group using commercial HAs, which supports the validity of the HA simulation. Their inability to perform spatial unmasking of speech, compared to their peers with normal hearing, implies that reverberation times of 0.4 s already have extensive disruptive effects on spatial processing in children with HL.
Collectively, the results from evaluation and application indicate that the proposed systems satisfy core criteria towards their use in HA research.
... The software package includes plugins for basic operations such as calibration, filtering, resampling, amplification, and an overlap-add Fourier analysis and synthesis framework for signal processing in the spectral domain. Furthermore, a number of plugins are included that cover processing methods of the following types: multi-band dynamic range compression [39], adaptive feedback cancellation [40], directional microphones [35], binaural noise and feedback reduction [41], binaural beamforming [42][43][44], single-channel noise reduction [34], and sound source localization [45]. In some cases ([34, 35, 41, 42], adaptive beamforming [44], and a delay-and-subtract beamformer), reference configurations are provided that have been used in the same way in a research study, with the aim of enabling reproducibility of the experimental setup by other researchers. ...
Article
Full-text available
open Master Hearing Aid (openMHA) was developed and provided to the hearing aid research community as an open-source software platform with the aim to support sustainable and reproducible research towards improvement and new types of assistive hearing systems not limited by proprietary software. The software offers a flexible framework that allows users to conduct hearing aid research using tools and a number of signal processing plugins provided with the software, as well as to implement their own methods. The openMHA software is independent of specific hardware and supports Linux, macOS, and Windows operating systems as well as 32-bit and 64-bit ARM-based architectures such as those used in small portable integrated systems. www.openmha.org
... This setup has been used exclusively for investigational purposes (cf. …); such a beamformer would not be realizable in typical commercial devices and is distinct from other studies that have evaluated approaches that could be implemented in current devices (Adiloglu et al., 2015; Baumgärtel et al., 2015b; Dieudonné and Francart, 2018). The signals from the 16 microphones were filtered/delayed-and-summed to create a single-beam beamformer aimed in any specified direction within a range of angles about the head. ...
Article
Bilateral cochlear-implant (CI) users struggle to understand speech in noisy environments despite receiving some spatial-hearing benefits. One potential solution is to provide acoustic beamforming. A headphone-based experiment was conducted to compare speech understanding under natural CI listening conditions and for two non-adaptive beamformers, one single beam and one binaural, called "triple beam," which provides an improved signal-to-noise ratio (beamforming benefit) and usable spatial cues by reintroducing interaural level differences. Speech reception thresholds (SRTs) for speech-on-speech masking were measured with target speech presented in front and two maskers in co-located or narrow/wide separations. Numerosity judgments and sound-localization performance were also measured. Natural spatial cues, single-beam, and triple-beam conditions were compared. For CI listeners, there was a negligible change in SRTs when comparing co-located to separated maskers under natural listening conditions. In contrast, there were 4.9- and 16.9-dB improvements in SRTs for the single beam and 3.5- and 12.3-dB improvements for the triple beam (narrow and wide separations, respectively). Similar results were found for normal-hearing listeners presented with vocoded stimuli. The single beam improved speech-on-speech masking performance but yielded poor sound localization. The triple beam improved speech-on-speech masking performance, albeit less than the single beam, as well as sound localization. Thus, the triple beam was the most versatile across multiple spatial-hearing domains.
... Note that differences in intelligibility were not explicitly tested but inferred from the different signal-to-noise ratios between the to-be-attended and to-be-ignored speech stream. Attenuating the to-be-ignored narrative was accomplished using the steering beamformer algorithm described in Adiloglu et al. (2015). The attenuation was frequency dependent such that some frequencies of the narratives' speakers were more strongly attenuated than others (Supplementary Figure 1). ...
Article
Full-text available
Difficulties in selectively attending to one among several speakers have mainly been associated with the distraction caused by ignored speech. Thus, in the current study, we investigated the neural processing of ignored speech in a two-competing-speaker paradigm. For this, we recorded the participants' brain activity using electroencephalography (EEG) to track the neural representation of the attended and ignored speech envelope. To provoke distraction, we occasionally embedded the participant's first name in the ignored speech stream. Retrospective reports as well as the presence of a P3 component in response to the name indicate that participants noticed the occurrence of their name. As predicted, the neural representation of the ignored speech envelope increased after the name was presented therein, suggesting that the name had attracted the participant's attention. Interestingly, in contrast to our hypothesis, the neural tracking of the attended speech envelope also increased after the name occurrence. On this account, we conclude that the name might not have primarily distracted the participants, or at most only briefly, but rather alerted them to refocus on their actual task. These observations remained robust even when the sound intensity of the ignored speech stream, and thus the sound intensity of the name, was attenuated.
... The openMHA software package includes plugins for basic operations such as calibration, filtering, resampling, amplification, and an overlap-add Fourier analysis and synthesis framework for signal processing in the spectral domain. Furthermore, a number of plugins are included that cover processing methods of the following types: multi-band dynamic range compression [35], adaptive feedback cancellation [36], directional microphones [26], binaural noise and feedback reduction [37], binaural beamforming [38, 39, 40], single-channel noise reduction [20], and sound source localization [41]. In some cases ([20, 26, 37, 38], adaptive beamforming [40], and a delay-and-subtract beamformer), reference configurations are provided that have been used in the same way in a research study, with the aim of enabling reproducibility of the experimental setup by other researchers. ...
Preprint
Full-text available
open Master Hearing Aid (openMHA) was developed and provided to the hearing aid research community as an open-source software platform with the aim to support sustainable and reproducible research towards improvement and new types of assistive hearing systems not limited by proprietary software. The software offers a flexible framework that allows users to conduct hearing aid research using tools and a number of signal processing plugins provided with the software, as well as to implement their own methods. The openMHA software is independent of specific hardware and supports Linux, macOS, and Windows operating systems as well as 32-bit and 64-bit ARM-based architectures such as those used in small portable integrated systems. www.openmha.org
... A modern digital hearing aid can help reduce the effects of hearing loss with different monaural or binaural algorithms. One example algorithm to reduce these effects is an acoustic beamformer [2][3]. This algorithm steers a beam toward the desired direction of speech while attenuating the background noise. ...
... They require more complex hardware to do the processing in real time. In addition to the fixed beamforming algorithm in Fig. 1, an adaptive gain [4][5], an adaptive filter [6], a generalized sidelobe canceller (GSC) [7], and a minimum variance distortionless response (MVDR) [3][8] beamforming algorithm are used as references. The fixed beamformer is quite a simple algorithm that can be implemented using only an FIR filter with eight taps and a single MAC unit. ...
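To illustrate why an eight-tap FIR filter maps onto a single MAC unit, here is a direct-form FIR written as one multiply-accumulate per tap per sample, the way a single-MAC datapath would schedule it. The Hann-window taps are a hypothetical stand-in, not the coefficients from the cited design:

```python
import numpy as np

def fir_mac(x, taps):
    """Direct-form FIR: one multiply-accumulate per tap per output sample,
    mirroring a single-MAC hardware implementation."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = 0.0
        for k, h in enumerate(taps):
            if n - k >= 0:
                acc += h * x[n - k]   # one MAC operation
        y[n] = acc
    return y

# hypothetical 8-tap filter standing in for the fixed beamformer's FIR
taps = np.hanning(8) / np.hanning(8).sum()
x = np.random.default_rng(1).standard_normal(64)
print(np.allclose(fir_mac(x, taps), np.convolve(x, taps)[:len(x)]))  # matches convolution
```

With eight taps, each output sample costs exactly eight MAC cycles, which is why the fixed beamformer is so much cheaper than the adaptive alternatives listed above.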
... The FFTs, depicted in Fig. 1e, are not taken into account in later comparisons because they are a pre-processing step and not part of the actual beamforming algorithm. In a real hearing aid application, this beamformer is a part of an algorithmic chain, which includes the classification and estimation of the DOA, as presented in [3]. Because all the other parts are also operating in the frequency domain, the FFT is done at the beginning and the input of the beamformer is already in the frequency domain. ...
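The shared FFT pre-processing described above can be sketched as an overlap-add analysis/synthesis round trip: a 256-point FFT at the input, frequency-domain processing in the middle, and an overlap-added iFFT at the output. The square-root Hann window and 50 % overlap here are illustrative assumptions, not the configuration of the cited hardware:

```python
import numpy as np

def ola_roundtrip(x, n_fft=256, hop=128):
    """256-point FFT analysis / iFFT overlap-add synthesis around a
    (here empty) frequency-domain processing stage."""
    win = np.sqrt(np.hanning(n_fft + 1)[:-1])  # periodic sqrt-Hann; COLA at 50% overlap
    frames = [np.fft.rfft(win * x[i:i + n_fft])
              for i in range(0, len(x) - n_fft + 1, hop)]
    # ... beamforming, compression, etc. would modify `frames` here ...
    y = np.zeros(len(x))
    for j, spec in enumerate(frames):
        i = j * hop
        y[i:i + n_fft] += win * np.fft.irfft(spec, n=n_fft)
    return y

x = np.random.default_rng(3).standard_normal(1024)
y = ola_roundtrip(x)
print(np.allclose(y[128:896], x[128:896]))  # perfect reconstruction away from the edges
```

Because every algorithm in the chain works on the same frames, the FFT/iFFT cost is paid once for the whole chain rather than once per algorithm, which is why it is excluded from the per-algorithm comparisons.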
Conference Paper
Choosing a suitable processor architecture for a hearing aid is a difficult task. Various aspects have to be taken into account, such as power consumption and silicon area. The computational performance and flexibility of an architecture are also essential. Therefore, a wide variety of design goals must be weighed against each other before a final decision on the architecture can be made. In this paper, several configurable audio processors are evaluated using five commonly known acoustic beamforming algorithms. In order to reduce the exploration time, this paper presents a partly automated design space exploration framework. The hearing aid algorithms are implemented in fixed-point representation to reduce the computational complexity. This framework includes a fixed-point analysis and automated reference code generation using MATLAB tools. With the Xtensa Xplorer, different configurations of the Tensilica-based processor architecture are profiled. Finally, a case study is presented to show the usability of the proposed framework.
... The algorithms use the input signals of one or more hearing aids, so that they operate either unilaterally or bilaterally. In this study, the unilaterally operating adaptive differential microphone (ADM, [8]) and the bilaterally operating fixed minimum variance distortionless response (MVDR, [9]) beamformer were investigated. In diffuse situations with multiple interfering sources in the rear hemisphere, the ADM is expected to be able to form a super-cardioid characteristic. ...
Conference Paper
Full-text available
Linking two hearing aids not only simplifies handling but also enables more complex beamformer algorithms and thereby an improvement of the signal-to-noise ratio (SNR). Because networked hearing aids are a relatively new technology, bilaterally operating beamformers have so far been investigated only in very specific situations. This study compares the speech intelligibility benefit of unilaterally and bilaterally operating beamformers in complex, realistic situations. The results show that normal-hearing listeners hardly benefit from a bilaterally operating beamformer, whereas unilaterally implanted cochlear implant (CI) users experience a clear benefit from such beamformers.