AutoSense OS™: Powered by AI to improve the listening experience

Audiologist Nicole Klutz, AuD, explains how Phonak’s AutoSense OS™ leverages AI and machine learning to enhance hearing aids, bringing us closer to replicating the natural hearing process.

Artificial Intelligence (AI) has promising potential to play a transformative role in reshaping hearing aids to meet the demands of contemporary life. AI can tackle complex problems by navigating a vast space of possibilities towards a defined goal and learning through data to achieve optimal solutions. That’s why AI can be a valuable tool for making hearing care more effective and efficient.

AI enables the creation of more personalized and adaptive solutions, tailoring amplification and noise-reduction algorithms to the unique needs of each hearing aid user in any given auditory environment. Moreover, with continuous learning capabilities, AI-based hearing aid features could improve their performance over time, gradually perfecting the listening experience.

One of the key goals of integrating AI and machine learning into hearing technology is to get closer to replicating the natural hearing process. AI-based technologies have already been incorporated into hearing aid devices available today.

For many years, AI technology using machine learning has been utilized in hearing aids to perform two tasks: 1) environmental classification and 2) program steering. Phonak first introduced this technology in 2007 with the SoundFlow operating system in Phonak Exelia hearing aids. At that time, the algorithms were less sophisticated than today's, and far less data was available for training the system.

How AI is evolving: AutoSense OS 5.0

AutoSense OS™ 5.0 is our most recent application of machine learning, developed and trained with thousands of real-world sound recordings. AutoSense OS scans the environment 700 times per second, automatically detecting and analyzing incoming sound in real time based on the listening environment.

It then instantly activates the appropriate blend of gain, programs, noise reduction and other features, intelligently choosing from more than 200 unique setting combinations. The result is a truly personalized hearing experience, meaning your client is in the right setting in every situation.
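The classify-then-steer loop described above can be sketched conceptually in code. This is a toy illustration only, not Phonak's actual algorithm: the environment labels, feature names, thresholds, and settings table below are all invented for demonstration.

```python
# Toy sketch of an environmental-classification + program-steering loop.
# All names, thresholds, and settings are invented for illustration;
# they are NOT Phonak's actual algorithm or parameters.

from dataclasses import dataclass

@dataclass
class Settings:
    gain_db: float          # overall amplification
    noise_reduction: float  # 0.0 (off) .. 1.0 (max)
    directionality: str     # microphone mode

# Hypothetical mapping from detected environment to a settings blend.
PROGRAMS = {
    "speech_in_quiet": Settings(gain_db=20.0, noise_reduction=0.1, directionality="omni"),
    "speech_in_noise": Settings(gain_db=22.0, noise_reduction=0.7, directionality="beam"),
    "music":           Settings(gain_db=15.0, noise_reduction=0.0, directionality="omni"),
    "noise":           Settings(gain_db=10.0, noise_reduction=0.9, directionality="beam"),
}

def classify(speech_level: float, noise_level: float, tonality: float) -> str:
    """Toy rule-based classifier standing in for a trained model."""
    if tonality > 0.8:
        return "music"
    if speech_level > noise_level:
        return "speech_in_quiet" if noise_level < 0.2 else "speech_in_noise"
    return "noise"

def steer(speech_level: float, noise_level: float, tonality: float) -> Settings:
    """Pick the settings blend for the current acoustic snapshot."""
    return PROGRAMS[classify(speech_level, noise_level, tonality)]

# Example: strong speech over moderate background noise
# selects the "speech_in_noise" program.
print(steer(speech_level=0.9, noise_level=0.4, tonality=0.1))
```

In a real device this loop would run continuously on streaming audio features and blend smoothly between programs rather than switching abruptly; the sketch only shows the basic classify-then-steer structure.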

By leveraging AI and machine learning to dynamically classify the wearer’s auditory environment, AutoSense OS has been clinically proven to enhance key areas of real-world hearing aid benefit and patient satisfaction.2

As technology continues to evolve, it’s important to remember the value of combining innovation with the expert care and guidance that hearing care professionals provide. AI is a helpful tool, but it can’t replace a qualified hearing care professional’s expertise and a holistic approach. Integrating advanced technology with expert care ensures we get the most out of hearing aid technology and ultimately benefit the client.

To learn more about AI in hearing technology and AutoSense OS 5.0, we invite you to read the Phonak Insight titled "Artificial intelligence in hearing aid technology."


1. Rodrigues, T., & Liebe, S. (2018). Phonak AutoSense OS™ 3.0: The new & enhanced automatic operating system. Phonak Insight. Accessed June 4, 2024.

2. Appleton-Huber, J. (2020). AutoSense OS™ 4.0 – significantly less listening effort and preferred for speech intelligibility. Phonak Field Study News. Accessed June 4, 2024.



2 thoughts on "AutoSense OS™: Powered by AI to improve the listening experience"

  1. I wear Phonak Audéo Lumity 90 hearing aids. Can the software on these hearing aids be upgraded to the AI AutoSense OS 3.0 technology? I'd be willing to pay for a software upgrade, but not to buy a new aid, since my aids are less than a year old.

  2. Sir, I am 73, from India, and use your hearing aid, BRE. I have tinnitus and trouble hearing in crowds. Most hearing aid users experience the same. In my opinion, for older persons the aid should output at a pre-calibrated decibel level irrespective of the incoming sound.


