Mcneil Glerup posted an update 5 hours, 49 minutes ago
We present simulation results for an acoustofluidic device, showing how implementing suitable ac electroosmosis suppresses the resulting electroacoustic streaming in the bulk of the device by two orders of magnitude.

Though necessary, protective mask wearing in response to the COVID-19 pandemic presents communication challenges. The present study examines how signal degradation and loss of visual information due to masks affect intelligibility and memory for native and non-native speech. We also test whether clear speech can alleviate perceptual difficulty for masked speech. One native and one non-native speaker of English recorded video clips in conversational speech without a mask and in conversational and clear speech with a mask. Native English listeners watched video clips presented in quiet or mixed with competing speech. The results showed that word recognition and recall of speech produced with a mask can be as accurate as without a mask in optimal listening conditions. Masks affected non-native speech processing at easier noise levels than native speech. Clear speech with a mask significantly improved accuracy in all listening conditions. Speaking clearly, reducing noise, using surgical masks, and providing good signal amplification can help compensate for the loss of intelligibility due to background noise, lack of visual cues, physical distancing, or non-native speech. The findings have implications for communication in classrooms and hospitals, where listeners interact with teachers and healthcare providers, oftentimes non-native speakers, through protective barriers.

Sound source localization using multichannel signal processing has been a subject of active research for decades. In recent years, the use of deep learning in audio signal processing has significantly improved performance in machine hearing tasks. This has motivated the scientific community to develop machine learning strategies for source localization applications as well. This paper presents BeamLearning, a multiresolution deep learning approach that encodes the relevant information contained in unprocessed time-domain acoustic signals captured by microphone arrays. The use of raw data aims at avoiding the simplifying hypotheses that most traditional model-based localization methods rely on. The benefits of this approach are shown for real-time two-dimensional sound source localization tasks in reverberant and noisy environments. Since supervised machine learning approaches require large, physically realistic, precisely labelled datasets, a fast graphics processing unit (GPU)-based computation of room impulse responses was developed using fractional delays for image source models. A thorough analysis of the network representation and extensive performance tests are carried out using the BeamLearning network with synthetic and experimental datasets. The results demonstrate that the BeamLearning approach significantly outperforms the wideband MUSIC and steered response power-phase transform (SRP-PHAT) methods in terms of localization accuracy and computational efficiency in the presence of heavy measurement noise and reverberation.
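As a rough sketch of the dataset-generation step mentioned in the BeamLearning abstract above, the Python snippet below builds a toy room impulse response from image-source arrivals using windowed-sinc fractional-delay kernels. It is a minimal illustration of the general technique, not the paper's GPU implementation; the function name, kernel width, and the example delays and amplitudes are all assumptions.

```python
# Minimal sketch: accumulate image-source arrivals into a room impulse
# response using Hann-windowed sinc kernels, so delays need not fall on
# integer sample positions. Illustrative only, not the paper's code.
import numpy as np

def fractional_delay_rir(delays_s, amplitudes, fs=16000, rir_len=4096, half_width=32):
    """Place each image source as a windowed-sinc kernel centred on its
    (generally non-integer) delay expressed in samples."""
    rir = np.zeros(rir_len)
    n = np.arange(rir_len)
    for tau, a in zip(delays_s, amplitudes):
        d = tau * fs  # fractional delay in samples
        lo = max(0, int(np.floor(d)) - half_width)
        hi = min(rir_len, int(np.ceil(d)) + half_width)
        idx = n[lo:hi]
        # windowed sinc interpolates a unit impulse at fractional delay d
        kernel = np.sinc(idx - d) * np.hanning(hi - lo)
        rir[lo:hi] += a * kernel
    return rir

# Toy example: direct path plus two first-order reflections
h = fractional_delay_rir(delays_s=[0.010, 0.0137, 0.0221],
                         amplitudes=[1.0, 0.6, 0.45])
```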
Older adults often report difficulty understanding speech produced by non-native talkers. These listeners can achieve rapid adaptation to non-native speech, but few studies have assessed auditory training protocols to improve non-native speech recognition in older adults. In this study, a word-level training paradigm was employed, targeting improved recognition of Spanish-accented English. Younger and older adults were trained on Spanish-accented monosyllabic word pairs containing four phonemic contrasts (initial s/z, initial f/v, final b/p, final d/t) produced in English by multiple male native Spanish speakers. Listeners completed pre-testing, training, and post-testing over two sessions. Statistical methods, such as growth curve modeling and generalized additive mixed models, were employed to describe the patterns of rapid adaptation and how they varied between listener groups and phonemic contrasts. While the training protocol failed to elicit post-test improvements in recognition of Spanish-accented speech, examination of listeners’ performance during the pre-testing period showed patterns of rapid adaptation that differed depending on the nature of the phonemes to be learned and the listener group. Normal-hearing younger and older adults showed a faster rate of adaptation for non-native stimuli that were more nativelike in their productions, while older adults with hearing impairment did not realize this benefit.

The classical guitar is a popular string instrument in which the sound results from a coupled mechanical process. The oscillation of the plucked strings is transferred through the bridge to the body, which acts as an amplifier to radiate the sound. In this contribution, a procedure to create a numerical finite element (FE) model of a classical guitar with the help of experimental data is presented. The geometry of the guitar is reverse-engineered from computed tomography scans to a very high level of detail, and care is taken to include all necessary physical influences. All five types of wood used in the guitar are modeled with their corresponding orthotropic material characteristics, and the fluid-structure interaction between the guitar body and the enclosed air is taken into account by discretizing the air volume inside the guitar with FEs in addition to the discretization of the structural parts. In addition to the numerical model, an experimental setup is proposed to identify the modal parameters of a guitar. The procedure concludes with determining reasonable material properties for the numerical model using the experimental data. The quality of the resulting model is demonstrated by comparing the numerically calculated and experimentally identified modal parameters.

An empirical model for wind-generated underwater noise is presented that was developed using an extensive dataset of acoustic field recordings and a global wind model. These data encompass more than one hundred years of recording time, capture high-wind events, and were collected both on shallow continental shelves and in open-ocean deep-water settings. The model aims to explicitly separate noise generated by wind-related sources from noise produced by anthropogenic sources. The two key wind-related sound-generating mechanisms considered are surface wave and turbulence interactions, and bubble and bubble cloud oscillations. The model for wind-generated noise shows a small frequency dependence (5 dB/decade) at low frequencies (10-100 Hz) and a larger frequency dependence (∼15 dB/decade) at higher frequencies (400 Hz-20 kHz). The relationship between noise level and wind speed is linear for low wind speeds but deviates from linearity at high wind speeds and high frequencies (>10 kHz), likely due to interaction between bubbles and screening of noise radiation in the presence of high-density bubble clouds.
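To make the reported spectral behaviour concrete, here is a hedged Python sketch of a piecewise-slope spectrum in the spirit of the wind-noise abstract above. The ∼5 and ∼15 dB/decade slopes come from the abstract, but the reference level L0, the wind-speed coefficient k_wind, the handling of the 100-400 Hz transition, and the sign convention (levels falling with frequency, as is typical for wind noise) are assumptions, not the paper's fitted model.

```python
# Illustrative piecewise spectral-slope model: 5 dB/decade below 100 Hz,
# 15 dB/decade above 400 Hz, blended in between, plus a linear
# wind-speed term. All coefficients are placeholders.
import numpy as np

def wind_noise_spectrum_db(freq_hz, wind_speed_ms, L0=45.0, k_wind=1.5):
    logf = np.log10(np.asarray(freq_hz, dtype=float))
    # slope in dB/decade, interpolated in log-frequency between 100 and 400 Hz
    slope = np.interp(logf, [np.log10(100.0), np.log10(400.0)], [5.0, 15.0])
    level = L0 - slope * (logf - np.log10(100.0))  # pivot at 100 Hz
    # linear wind-speed dependence, as reported for low wind speeds
    return level + k_wind * wind_speed_ms

levels = wind_noise_spectrum_db([10, 100, 1000, 10000], wind_speed_ms=8.0)
```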
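Returning to the classical-guitar study above: once the FE stiffness and mass matrices are assembled, the numerically calculated modal parameters follow from a generalized eigenvalue problem. The toy 3-degree-of-freedom sketch below shows only that final step; the matrices are placeholders, not the guitar model.

```python
# Modal analysis step in miniature: solve K @ phi = w^2 * M @ phi for
# natural frequencies and mode shapes. K and M here are toy matrices.
import numpy as np
from scipy.linalg import eigh

K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]]) * 1e6  # stiffness [N/m]
M = np.diag([1.0, 1.0, 0.5])              # lumped mass [kg]

w2, phi = eigh(K, M)                      # symmetric generalized eigenproblem
freqs_hz = np.sqrt(w2) / (2.0 * np.pi)    # natural frequencies
# columns of phi are the mode shapes paired with freqs_hz
```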
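Finally, for the accent-adaptation study above, a growth-curve-style analysis of pre-test adaptation can be sketched as a mixed model over trial number. The snippet below is illustrative only: the column names, the simulated data, and the polynomial fixed effects are assumptions, and the study's actual growth curve models and generalized additive mixed models were more elaborate.

```python
# Hedged sketch: accuracy as a polynomial function of trial, with a
# random intercept and slope per listener. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "listener": np.repeat(np.arange(20), 50),
    "trial": np.tile(np.arange(50), 20),
})
df["accuracy"] = 0.6 + 0.004 * df["trial"] + rng.normal(0, 0.05, len(df))

model = smf.mixedlm("accuracy ~ trial + I(trial**2)", df,
                    groups=df["listener"], re_formula="~trial")
fit = model.fit()
print(fit.summary())
```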