The real world is a constantly changing place. Can an automatic system keep up?
Can people with hearing loss hear what their backseat drivers say? Assessing performance of hearing aids in complex real-world settings.
It is apparent that hearing aid manufacturers approach the design and implementation of hearing aid automatic systems in different ways. Differences emerge when we focus our attention on the speed of processing, the number of programs available to the automatic system, how switching is performed, and the types of parameters that can be manipulated. On paper, automatic systems vary widely.
The gold standard, of course, is knowing how an automatic system impacts hearing aid user performance. At the Phonak Audiology Research Center (PARC), we set out to design a body of work to evaluate automatic systems in the real world. Most audiology research takes place in the sound booth, the optimal setting for a controlled, repeatable experiment.
The truth of the matter is that hearing in the real world is complex. It is constantly changing, with each listening scenario requiring a different prioritization among comfort, music enjoyment, and enhancement of speech intelligibility. These scenarios, oftentimes a hybrid of difficult acoustic characteristics, are not easily reproduced in the sound booth. Further, for this particular experiment, we felt it absolutely necessary to put hearing aid automatic systems to the true test, so we created a study paradigm outside the walls of the Research Center while still maintaining as much experimental control as possible.
For this purpose, we designed three acoustic “test scenes”: the car, the Listening Loft (a large, echoic room), and a coffee shop. These scenes were chosen for two specific reasons: first, they are commonly reported by hearing aid users as frequently encountered and difficult; second, they allowed us to incorporate a level of experimental control that could yield repeatability. By doing so, we were able to assess the impact of the Phonak automatic classification system, AutoSense OS, in three complex real-world settings.
Special care was taken to ensure the same car, same speed, and same ventilation settings were used for the car testing with each participant. The same coffee shop was used for every participant, chosen specifically for its consistently high noise levels throughout the day, verified by repeated noise floor measurements. All experiments were double-blinded to ensure the experimenter scoring the responses was unaware of the condition being tested.
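For readers curious how consistency of a venue's noise floor could be checked in practice, here is a minimal sketch of comparing repeated sound level readings against a tolerance. The readings, the tolerance value, and the script itself are illustrative assumptions for this article, not part of the actual study protocol or its acceptance criteria.

```python
import statistics

# Hypothetical A-weighted noise floor readings (dB SPL) taken at the coffee shop
# at different times of day; these values are illustrative only.
readings_dba = [68.2, 67.9, 68.5, 68.1, 67.7, 68.4]

# Assumed acceptance criterion: every reading stays within +/- 1.5 dB of the mean.
TOLERANCE_DB = 1.5

mean_level = statistics.mean(readings_dba)
max_deviation = max(abs(r - mean_level) for r in readings_dba)

print(f"Mean noise floor: {mean_level:.1f} dBA")
print(f"Largest deviation from mean: {max_deviation:.1f} dB")

if max_deviation <= TOLERANCE_DB:
    print("Noise floor is consistent across measurement times.")
else:
    print("Noise floor varies too much for a repeatable test scene.")
```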
The first PARC study on AutoSense OS using this methodology ultimately showed the power, accuracy, and superior performance provided by AutoSense OS over manual programs. A second PARC study on automatic technology showed that AutoSense OS resulted in better speech recognition performance than the automatic systems of two competitors in real-world listening environments.
Here is a short video explaining how we approached this interesting task and a related article from Hearing Review.