I began in audiology when multiple-program analog hearing aids were emerging (I’m that old). The argument for multiple programs was that no single combination of frequency response, gain, and compression was best for the varied sound environments hearing aid users encounter. The solution was to empower the end-user with programs they could select using a button on the aids or a remote control. We would counsel our clients that, for example, program one was for most situations, two was for noise, three was for comfort, and so on. Some research had questioned whether users actually used the different programs, but it seemed to make sense that clients were best placed to know what was right for a given situation.
Customizing programs for each client became routine practice for me, and something I enjoyed. Then automatic sound classifiers began to emerge. These classifiers use acoustic characteristics to determine what the sound environment is, for example, music or speech in noise, and then, according to some rules, determine when the hearing aid should switch from one setting to another. I like technology, but I was lukewarm on this idea.
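The idea can be sketched in a few lines of code. This is a minimal, hypothetical illustration of a rule-based classifier with a stability check before switching; the feature names, thresholds, and program labels are my own inventions for the sake of the example, not any manufacturer's actual algorithm.

```python
# Hypothetical sketch of an automatic sound classifier.
# Features, thresholds, and labels are illustrative only.

def classify_environment(level_db, snr_db, modulation_depth):
    """Map simple acoustic features to an environment label."""
    if modulation_depth > 0.6 and snr_db > 10:
        return "speech_in_quiet"
    if modulation_depth > 0.4 and snr_db <= 10:
        return "speech_in_noise"
    if level_db > 75 and modulation_depth < 0.2:
        return "loud_noise"
    return "quiet"

class ProgramSwitcher:
    """Switch programs only after a label has been stable for
    several frames, to avoid rapid toggling between settings."""

    def __init__(self, hold_frames=5):
        self.hold_frames = hold_frames
        self.current = "speech_in_quiet"
        self._candidate = None
        self._count = 0

    def update(self, label):
        if label == self.current:
            self._candidate, self._count = None, 0
        elif label == self._candidate:
            self._count += 1
            if self._count >= self.hold_frames:
                self.current = label
                self._candidate, self._count = None, 0
        else:
            self._candidate, self._count = label, 1
        return self.current
```

Real classifiers use far richer features and statistical models, but the principle is the same: the device, not the user, decides when the acoustic scene has changed enough to warrant a different program.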
My thought was that this might occasionally work, but as humans we are quite ‘picky’ about how we like to hear sound, and surely we knew best! I did become concerned that clients’ data logging showed limited use of more than one or two programs, but my position on the value of user-selected programs was reinforced because every so often there would be a case where a user was extremely successful and used many programs. I’ve since changed my mind, due to two studies I’ve been involved in.
In a cross-sectional survey of 44 clients recently fitted with hearing aids, 80% used one program, 20% used two, and none used more than two.1 So maybe clients are quite happy not changing settings? What convinced me was how poor they actually were at selecting the best setting for a given situation.2 In a carefully controlled study with 25 participants, we found that manual program preferences for given scenarios varied considerably both between and within sessions. A hearing-in-noise advantage was observed for the Phonak AutoSense OS automatic classifier over participants’ manual selections for speech in quiet, loud noise, and car noise, with sound quality ratings similar for manual and automatic selections. It appeared that if the goal was hearing speech, AutoSense OS did better than what users selected, and user selections weren’t rated as sounding better.
These findings do not mean we should abandon manual user-selected programs. There are factors other than acoustics that determine what the user wants to listen to, and how. But my recommendation would be that users be counseled on how automatic classifiers work, and that we be conservative when recommending manual programs. Data logging and follow-up counseling can guide the need for additional programs, but I don’t think they are needed in most circumstances.
So in conclusion, I do now think that many times the hearing aid does know best!
1. McMillan, A., Durai, M., & Searchfield, G. D. (2017). A survey and clinical evaluation of hearing aid data-logging: A valued but underutilized hearing aid fitting tool. Speech, Language and Hearing, 1-10.
2. Searchfield, G. D., Linford, T., Kobayashi, K., Crowhen, D., & Latzel, M. (2018). The performance of an automatic acoustic-based program classifier compared to hearing aid users’ manual selection of listening programs. International Journal of Audiology, 57(3), 201-212.