Music is more than a language of emotions

Neuropsychologist and Audiological Engineer, Dr. Teresa Wenhart, explains why music is an important topic for hearing care professionals, audiological researchers, and audiological engineers.

“Where words fail, music speaks.”
– Hans Christian Andersen (repeated by many famous writers over the years)

Do you hate music? – You stumble over that question, right?

Why would anybody hate music? Everyone has specific likes and dislikes when it comes to genres of music, composers and musicians. But apart from very rare conditions in which people cannot enjoy music at all (e.g., amusia*), hardly anybody dislikes every kind of music. Why is that?

The universal language of mankind

The American poet Henry Wadsworth Longfellow once said, “Music is the universal language of mankind.” For many decades, at least going back to Charles Darwin’s “The Descent of Man”1, evolutionary anthropologists and ethnomusicologists have puzzled over how music making evolved in humans and how universal it is across cultures.

Despite cultural differences, a growing number of studies have demonstrated statistical universals across cultures, not only in common musical features (pitch, rhythm) but also in the social contexts in which music is used.2

A recent study3 showed that nearly all human cultures share four common types of song:

  1. Child songs/lullabies
  2. Dance songs
  3. Love songs
  4. Healing songs

Furthermore, these song functions are universally linked to similar musical features and forms, and participants from a variety of foreign cultures consistently identified them correctly.+ These findings support the idea that music evolved universally as a tool for social connection and group cohesion.

Music and speech share structural similarities

Music shares many similarities with language: music has a syntax; it is hierarchically structured into ‘words’, ‘sentences’ and ‘phrases’ with meaning; it has dynamics like prosody; and it can even have call-and-response patterns.

Musical compositions, be they classical symphonies and sonatas or pop and rock songs, usually share the structural features of a short story: an introduction, a main part or development, a conclusion, an arc of tension on a larger or smaller scale, and characteristic repetitions of themes and motifs that tell the story (with more or less freedom of form depending on genre and musical period).

Several neuroscientific studies show that musical and linguistic syntax elicit electrophysiological potentials of indistinguishable amplitude in the brain, and that the brain areas most important for speech understanding and production – Wernicke’s and Broca’s areas – are active in musicians during the processing of musical harmony.4

A composer is like a poet putting life into harmonies, and a musician is a narrator emotionally interpreting the story. Hence, another long-standing theory of music evolution speculates that music may have originated, at least in part, from or alongside the development of human speech, serving verbal and non-verbal communication and group cooperation – the main weapons of the human animal.

The neurochemical magic of music

Music is more than emotional words set to tones. Neuroimaging studies repeatedly show that listening to or making music involves regions nearly all over the brain, in both hemispheres – far more brain regions than are involved in speech processing.5

And that is not all: music can reduce levels of the stress hormone cortisol and elicits a cocktail of neurotransmitters and hormones in the human body:6

  • Dopamine: plays a role in feelings of pleasure, reward and motivation
  • Serotonin: the happiness hormone, also important for several bodily functions
  • Endorphins: act as pain killers and increase pleasure and well-being
  • Oxytocin: the bonding hormone released during childbirth, breastfeeding and sexual activity, and similarly during music-making in groups
  • Immunoglobulin A (s-IgA): an antibody that forms a first line of defense against bacteria and viruses; its level in saliva increases while listening to or making music, especially while singing in a choir

What to consider in audiological care

When considering music perception in audiology and in technological development, we need to go far beyond the speech banana. Even without hearing loss or hearing aids, music is a much more complex stimulus for the ear and the brain than speech, especially with respect to frequency range, timbre, dynamic range and temporal dynamics.
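
To make that gap concrete, here is a minimal sketch (in Python) contrasting the two signal types. The figures are rough textbook approximations chosen for illustration – not measurements from the studies cited here.

    import math

    # Illustrative, approximate figures – not data from this article.
    # Format: (lowest Hz, highest Hz, typical dynamic range in dB)
    stimuli = {
        "conversational speech": (250, 8_000, 30),
        "orchestral music": (20, 20_000, 90),
    }

    for name, (f_lo, f_hi, dyn_db) in stimuli.items():
        octaves = math.log2(f_hi / f_lo)  # width of the frequency span
        print(f"{name}: {f_lo}-{f_hi} Hz (~{octaves:.0f} octaves), "
              f"~{dyn_db} dB dynamic range")

Under these assumptions, music spans roughly twice as many octaves as speech and a dynamic range about 60 dB greater – one reason dedicated music programs in hearing aids typically use a wider bandwidth and gentler compression than speech-optimized settings.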

Asking a client about their musical interests can give valuable insight into how far they might require fine-tuning of their music program, or even the configuration of additional music programs. You can, for example, ask what types of music they like to listen to, whether they make music themselves, and so on.

When a client is a ‘professional ear user’

A specific group of people has recently been named “professional ear users”7 – people who rely on their hearing professionally, such as musicians, audio engineers and instrument makers. This group places much higher demands on its hearing abilities.

Given the acoustic demands of their professions, they might report changes in their hearing abilities much earlier than your typical clients. What is more, they may also be at higher risk of noise-induced hearing loss and tinnitus. Consequently, professional ear users may require more of your time and attention to fully address their needs.

What about a multi-functional hearing device?

In 2019, the World Health Organization estimated that around 50% of young people aged 12–35 years (1.1 billion people) are at risk of hearing loss due to “prolonged and excessive exposure to loud sounds, including music they listen to through personal audio devices”.8

As an audiological engineer, I imagine a future in which a single hearing device is multi-functional: it provides audibility for those with hearing loss, enhances the listening experience for those with ‘normal’ hearing, and protects all users from noise exposure. Because such a device would be used by everyone, it would be free of stigma. It is exciting to think what the future holds for the >99% of us who enjoy music.


* “Amusia” is a rare congenital or acquired condition (e.g., after a stroke), also known as “tone deafness”. Affected persons usually have a severe deficit in the perception of tune, melody and rhythm. Because of their brain’s inability to organize music perception, some of them find no pleasure in music.

+ Take the song quiz: “Music may transcend cultural boundaries to become universally human” – Harvard Gazette

References:

  1. Darwin, C. (1888). The descent of man: and selection in relation to sex. John Murray, Albemarle Street.
  2. Savage, P. E., Brown, S., Sakai, E., & Currie, T. E. (2015). Statistical universals reveal the structures and functions of human music. Proceedings of the National Academy of Sciences, 112(29), 8987-8992.
  3. Mehr, S. A., Singh, M., Knox, D., Ketter, D. M., Pickens-Jones, D., Atwood, S., … & Glowacki, L. (2019). Universality and diversity in human song. Science, 366(6468), eaax0868.
  4. Patel, A. D. (2003). Language, music, syntax and the brain. Nature Neuroscience, 6, 674–681. https://doi.org/10.1038/nn1082
  5. Altenmüller, E. O. (2001). How many music centers are in the brain? Annals of the New York Academy of Sciences, 930(1), 273-280.
  6. Chanda, M. L., & Levitin, D. J. (2013). The neurochemistry of music. Trends in cognitive sciences, 17(4), 179-193.
  7. Bächinger, D., Jecker, R., Hannig, J. C., et al. (2022). Der „Professional Ear User“ – Implikationen für die Prävention, Diagnostik und Therapie von Ohrerkrankungen [The “professional ear user” – implications for the prevention, diagnosis and therapy of ear diseases]. HNO, 70, 891–902. https://doi.org/10.1007/s00106-022-01235-0
  8. World Health Organization. (2019). Safe listening devices and systems: a WHO-ITU standard.