24 years in the making: Evolution of AI in Phonak hearing technology

Artificial intelligence is set to transform the hearing industry, from enhancing hearing aid functionality to optimizing workflow processes. Learn how Phonak has used machine learning to improve the user experience for more than two decades, and what potential lies ahead.

Over the past decade, artificial intelligence (AI) has emerged as a transformative technology across industries, including healthcare. In the world of hearing technology, integrating AI via machine learning and deep learning algorithms offers exciting new possibilities for improving hearing aid performance and enhancing the user experience.

AI refers to computer systems that can perform tasks normally requiring human intelligence, such as visual perception, speech recognition, and decision making. By training AI models on vast datasets, they can learn to replicate some of the complex computational capabilities of the human brain.

While examples of AI use in Phonak hearing solutions date back as far as the year 2000, hearing healthcare as an industry has been relatively late to adopt AI compared to other fields. However, we now stand at an inflection point: with access to big data, advanced machine learning techniques such as deep learning, and more powerful AI hardware, there is tremendous potential to develop smarter, more intuitive hearing solutions.

The potential of AI in hearing healthcare includes the possibility to:

  • Better understand the highly nuanced and personalized process of human hearing.
  • Produce more natural sound processing that adaptively responds to real-world environments. 
  • Enable personalized device settings tailored to the individual’s hearing needs.
  • Reduce the need for manual adjustment by automatically adapting to the wearer.
  • Streamline and optimize workflows for both patients and hearing care professionals.
  • Develop new speech and noise processing strategies to allow for even better understanding in difficult listening situations.

Thoughtfully integrating AI where it can augment human expertise has the potential to profoundly transform hearing healthcare. The result is an exciting future in which hearing devices fit seamlessly into the wearer’s life, letting them hear their best with less conscious effort.

AI in hearing aids 

One of the key goals of integrating AI and machine learning into hearing technology is to get closer to replicating the natural hearing process. The way the human ear and brain work together to interpret sound is extremely complex and nuanced. By training AI algorithms on vast amounts of real-world sound data, we can develop more intelligent sound processing that adapts more naturally to different environments.

Phonak’s use of AI

Phonak has been utilizing machine learning in its hearing aid technology for over two decades (shown in Figure 1). One of the latest and most advanced applications of AI is AutoSense OS™. It operates using a machine learning model trained on thousands of real-world sound recordings, each meticulously tagged to indicate the environment it captures.

Figure 1: AI integrated into sound classification and noise reduction steering strategies in Phonak technology for over 24 years.

The model analyzes these recordings to identify the audio patterns that characterize environments like restaurants, conversations, music, and more. It builds an internal classification system for identifying these different sound scenes. When AutoSense OS™ runs on a Phonak hearing aid, it classifies the wearer’s environment by comparing the properties of the live incoming sound to its learned classification system. This allows the hearing aid to accurately recognize the current sound scene and automatically apply the ideal settings for that situation.
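
To make the idea concrete, here is a minimal sketch of that general pattern: tagged recordings become feature vectors, a model learns to separate the scenes, and live input is classified against the learned model. The feature set, scene labels, and random-forest model below are assumptions chosen for brevity and do not represent Phonak’s actual algorithms or training data.

```python
# Minimal illustrative sketch -- NOT Phonak's implementation.
# Idea: recordings tagged by environment are reduced to feature
# vectors, a classifier learns the scene boundaries, and live input
# is then classified against the learned model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
SCENES = ["quiet", "speech", "speech_in_noise", "music"]

def synthetic_features(scene: str, n: int) -> np.ndarray:
    """Stand-in for acoustic features (e.g., level, spectral shape,
    modulation depth) extracted from tagged recordings."""
    centers = {
        "quiet":           [0.1, 0.2, 0.1],
        "speech":          [0.5, 0.6, 0.3],
        "speech_in_noise": [0.8, 0.6, 0.7],
        "music":           [0.6, 0.9, 0.4],
    }
    return rng.normal(loc=centers[scene], scale=0.1, size=(n, 3))

# Build a tagged training set, mimicking manually labeled recordings.
X = np.vstack([synthetic_features(s, 200) for s in SCENES])
y = np.repeat(SCENES, 200)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# At runtime, features of the live incoming sound are compared to the
# learned classification system; the predicted scene would then steer
# the hearing aid's settings.
live = synthetic_features("speech_in_noise", 1)
print(model.predict(live))  # expected: ['speech_in_noise']
```

In a real hearing aid, this classification would run continuously on features computed from the microphone signal, with the predicted scene steering the signal-processing settings.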

Real-world benefits of AutoSense OS

AutoSense OS is Phonak’s proprietary operating system that utilizes machine learning (see Figure 2) to classify the wearer’s listening environment and automatically apply the ideal settings for that situation. But what do these optimized settings actually mean for the hearing aid user in real-world conditions?

Multiple independent studies have demonstrated that, compared to basic automatic hearing aid programs, AutoSense OS provides significant benefits in speech understanding, sound quality, listening effort, and overall listening satisfaction.1-4

By leveraging AI and machine learning to dynamically classify the wearer’s auditory environment, AutoSense OS has been clinically proven to enhance key areas of real-world hearing aid benefit and patient satisfaction.1 The ability to automatically adjust to the wearer’s context avoids the need for manual program switching and provides optimized hearing across different soundscapes. Figure 2 provides an example of how AutoSense classifies environmental sounds in real-time, automatically adjusting hearing aid settings.

Figure 2: Example of machine learning in AutoSense OS.

Future AI integration in hearing aids

Over the next 5-10 years, we expect AI and machine learning to enable a new level of personalization and situational awareness in hearing aids that requires less manual adjustment by the wearer. As we continue feeding more real-world training data into the algorithms and leverage new techniques like deep learning, our AI systems will become even more intelligent.

The end goal is to create a hearing solution that automatically adapts to its wearer and environment as naturally and uniquely as the human hearing system does. The hearing aid would continuously optimize itself based on the user’s preferences and habits, without the need for repetitive manual tweaks and settings adjustments.

By embedding deep learning-based auditory intelligence into hearing devices, they can intuitively sense where the wearer is and what they are doing (e.g. conversation in a crowded restaurant) and seamlessly apply the ideal sound settings for that situation. This could significantly reduce the cognitive load on the wearer previously required to manually switch programs and adjust volume/directional focus.

AI brings us meaningfully closer to replicating the natural hearing process with technology. Within the next decade, we foresee AI enabling a major shift in hearing aids from static devices needing constant adjustment, to intelligent companions that automatically adapt to enrich the wearer’s listening experience in any environment.

Other AI applications in hearing healthcare

Phonak also applies AI and machine learning in several other areas of our hearing instruments to provide the optimal hearing experience for the end user.

Motion Sensor Hearing
AI is used to train our motion sensors to detect whether the wearer is walking, standing still, or riding in a vehicle. This allows the hearing aid to shift to the appropriate program, for example focusing on speech from the front or staying open to ambient sound from all around.
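
As a rough, purely illustrative sketch of the principle, the example below classifies a window of accelerometer data into coarse activity classes. The sample rate, thresholds, and step-frequency band are invented for the example, and hand-written rules stand in for what is, in the real system, a trained model.

```python
# Minimal illustrative sketch -- thresholds, features, and sample rate
# are invented, and the real system uses trained models rather than
# these hand-written rules.
import numpy as np

def dominant_frequency(x: np.ndarray, sample_rate: float) -> float:
    """Frequency (Hz) of the strongest non-DC component in the window."""
    spectrum = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sample_rate)
    return float(freqs[spectrum.argmax()])

def classify_motion(accel_window: np.ndarray) -> str:
    """Coarse activity guess from one window of |acceleration| samples (g)."""
    if np.var(accel_window) < 0.005:
        return "still"        # little movement: stationary listener
    if 1.0 <= dominant_frequency(accel_window, sample_rate=50.0) <= 3.0:
        return "walking"      # periodic step rhythm around 1-3 Hz
    return "in_vehicle"       # sustained movement without a step rhythm

# Example: a 2-second window at 50 Hz containing a ~2 Hz step rhythm.
t = np.arange(100) / 50.0
walking = 1.0 + 0.3 * np.sin(2 * np.pi * 2.0 * t)
print(classify_motion(walking))  # expected: 'walking'
```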

Biometric calibrations
Machine learning helps customize the fit of custom in-the-ear (ITE) devices by incorporating biometric data from thousands of ear scans to train a reference ear model that captures how the external ear reflects sound. When creating an ITE device, Phonak compares this AI-trained ear model to the hearing aid wearer’s ear shape to provide a more comfortable, personalized fit and optimized directional microphone sensitivity.
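
Conceptually, the comparison resembles the sketch below: many scans are reduced to geometry features, a reference ear is derived from them, and a new wearer’s scan is measured against that reference. The feature vector and statistics are hypothetical stand-ins, not Phonak’s actual biometric pipeline.

```python
# Minimal illustrative sketch -- the features and statistics are
# hypothetical stand-ins for a real biometric pipeline.
import numpy as np

rng = np.random.default_rng(1)

# Pretend each ear scan is reduced to a small geometry feature vector
# (e.g., concha depth, canal angle, pinna size), in arbitrary units.
reference_scans = rng.normal(loc=[1.0, 0.5, 2.0], scale=0.2, size=(1000, 3))

# "Reference ear": the average geometry and its typical variation
# learned from the scan collection.
mean_ear = reference_scans.mean(axis=0)
std_ear = reference_scans.std(axis=0)

def deviation_from_reference(wearer_scan: np.ndarray) -> np.ndarray:
    """Per-feature deviation (in standard deviations) from the reference
    ear; such deviations could inform shell fit and microphone placement."""
    return (wearer_scan - mean_ear) / std_ear

wearer = np.array([1.3, 0.45, 2.4])
print(deviation_from_reference(wearer))
```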

Proactive maintenance
Our data science teams employ AI algorithms to analyze large volumes of real-world usage data flowing from our hearing aids. This allows us to identify product opportunities, optimize features, improve reliability, and enhance the overall customer experience.
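
As a simplified example of this kind of analysis, the sketch below flags devices whose telemetry deviates strongly from the fleet norm, one plausible building block of proactive maintenance. The metric, threshold, and data are invented for illustration.

```python
# Minimal illustrative sketch -- the metric, threshold, and data are
# invented; production analysis would use far richer models.
import numpy as np

rng = np.random.default_rng(2)

# Pretend telemetry: average daily battery drain (%) for each device.
fleet_drain = rng.normal(loc=70.0, scale=5.0, size=10_000)
fleet_drain[:3] = [95.0, 94.0, 20.0]  # a few unusual devices

# Flag devices far outside the fleet norm as candidates for follow-up.
z_scores = (fleet_drain - fleet_drain.mean()) / fleet_drain.std()
suspect = np.flatnonzero(np.abs(z_scores) > 4.0)
print(f"{suspect.size} devices flagged: {suspect[:10]}")
```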

AI as an enabler in hearing healthcare

While our core focus has been integrating AI and machine learning into our hearing solutions, we see great potential to apply this technology to optimize and enhance workflows for hearing care professionals.

By thoughtfully integrating AI where it can augment human intelligence, we aim to create more personalized hearing solutions that adapt to the user and drive the best possible hearing outcomes over time. Phonak is committed to using technology as an enabler that supports and enhances the work of hearing care professionals rather than replacing them. These professionals possess the human touch and expertise necessary for effective communication, counseling, and addressing complex hearing issues.

By utilizing advanced algorithms for noise reduction, speech enhancement, personalized hearing aids, fitting workflow efficiency, and predictive maintenance, AI has the potential to improve the quality of care and enhance the overall user experience in hearing care.

To learn more about the use of AI in healthcare, hearing care, hearing aids and beyond, we invite you to visit our website.


References:

  1. Appleton-Huber, J. (2020). AutoSense OS™ 4.0 – significantly less listening effort and preferred for speech intelligibility. Phonak Field Study News, retrieved from www.phonak.com/evidence.
  2. Schulte, M., Vormann, M., Heeren, J., Latzel, M., & Appleton-Huber, J. (2019). AutoSense OS – superior speech intelligibility and less listening effort in complex listening situations. Phonak Field Study News, retrieved from www.phonak.com/evidence.
  3. Latzel, M., Lesimple, C., & Woodward, J. (2023). Speech Enhancer reduces listening effort and increases intelligibility for speech from a distance. Phonak Field Study News, retrieved from www.phonak.com/evidence.
  4. Appleton-Huber, J. (2020). Motion-based beamformer steering leads to better speech understanding and overall listening experience. Phonak Field Study News, retrieved from www.phonak.com/evidence.
