Automation counts, but it’s not always enough

Automation in hearing aids counts, but only if the best hearing aid settings are chosen for every listening environment and the transition between them is seamless. Can technology achieve this?

A noisy dinner party or a quiet walk on the beach: there is no doubt that our daily lives are filled with a variety of complex listening scenes that can be challenging for those with hearing impairments. Unless hearing aids can accommodate changes in acoustic surroundings, people with hearing loss can be robbed of a satisfying social life.

Having worked in product development for more than a decade, I was fortunate to be involved in the development of AutoSense OS, an automatic steering system that identifies acoustic scenes and blends multiple parameters and features to accurately match every listening environment – and ultimately, the end user's needs. Although automatic programs are now more common, I think it is important to recognize the incredible progress we have made with AutoSense OS through the years and across platforms.

Historically, hearing aids were equipped only with manual programs, and users pushed a button to switch between the few available options. Manual programs worked well when there were only a couple of programs with different gain and features, such as noise canceling or directionality. But as technology became increasingly complex and the number of hearing aid programs and feature options grew, automation became a must for end users to truly access all the benefits a hearing aid has to offer.

I remember when I first started at Phonak in 2005 – we provided an automatic operating system that balanced clarity and comfort across complex environments using only four different programs. Over the years, we continued to innovate and strived for our automatic technology to meet two key requirements: (1) choose the best hearing aid setting for every listening environment, and (2) provide seamless transitions between these settings.

Fast forward to 2017: I believe we have introduced a system that successfully delivers on both of these requirements. The high degree of accuracy comes from an automatic steering program that blends multiple programs to create a mix that reflects the end user's acoustic environment. By varying the proportions in which programs are mixed, the system can now activate more than 200 unique and audibly different hearing aid settings.
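To make the blending idea concrete, here is a minimal sketch of how a scene classifier's probabilities could be turned into one mixed setting by weighted averaging of per-program parameters. The program names, parameter names, and values below are purely illustrative assumptions for this sketch, not Phonak's actual programs or implementation.

```python
# Illustrative base programs with made-up parameters (gain in dB,
# noise-reduction strength 0..1, beamformer directionality 0..1).
BASE_PROGRAMS = {
    "calm":             {"gain_db": 0.0,  "noise_reduction": 0.1, "directionality": 0.0},
    "speech_in_noise":  {"gain_db": 2.0,  "noise_reduction": 0.7, "directionality": 0.9},
    "comfort_in_noise": {"gain_db": -1.0, "noise_reduction": 0.9, "directionality": 0.3},
    "music":            {"gain_db": 1.0,  "noise_reduction": 0.0, "directionality": 0.0},
}

def blend_settings(scene_probs):
    """Blend program parameters, weighting each program by the
    classifier's (normalized) probability for that scene."""
    total = sum(scene_probs.values())
    mixed = {param: 0.0 for param in next(iter(BASE_PROGRAMS.values()))}
    for program, prob in scene_probs.items():
        weight = prob / total
        for param, value in BASE_PROGRAMS[program].items():
            mixed[param] += weight * value
    return mixed

# A dinner party: mostly speech in noise, with some comfort and calm.
mixed = blend_settings({"speech_in_noise": 0.6, "comfort_in_noise": 0.3, "calm": 0.1})
```

Because the weights vary continuously with the classifier output, small changes in the acoustic scene produce small changes in the mixed setting, which is one way seamless transitions between environments can be achieved.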

This performance was built on many years of extensive testing which included evaluation in real-life situations, as we added different functionality. For example, the new Speech in Car program required lengthy testing to ensure its performance in a variety of different settings (e.g. cars, public transport and airplanes). To test this program, we went into each of those real-life situations and optimized the system until we were satisfied with the performance. This process proved to be invaluable as research has shown that this blending approach not only delivers the best hearing aid setting for every listening situation, but the transitions occur without the hearing aid wearer noticing.

I am proud to have been a part of its development, which was only made possible by decades of research and a dedication to making the best automatic program possible. With this technology, I believe automation counts.

I invite those interested in learning more about our automatic program to read the following article in Hearing Review.





2 thoughts on “Automation counts, but it’s not always enough”

  1. Autosense OS is light years away from the clunky old Soundflow programme. Thank you for developing it. It has improved my life no end.

    The biggest improvement is noticeable when I’m cooking. (The extractor on the cooker hood, combined with Soundflow, was my nemesis!) Now I can cook and chat!

    I do, however, still like to be able to manually switch to a speech in noise setting when I feel my brain just needs less input from the world outside.


