Scientists show how brain distinguishes lyrics from music

Washington DC: The perception of speech and music - two of the most uniquely human uses of sound - is enabled by specialized neural systems in different brain hemispheres, each adapted to respond to specific features in the acoustic structure of the song, a new study has found.

Though it's been known for decades that the two hemispheres of our brain respond to speech and music differently, this study used a unique approach to reveal why this specialization exists, showing it depends on the type of acoustical information in the stimulus.

Music and speech are often inextricably entwined, and humans' ability to recognize and separate words from melodies in a single continuous soundwave represents a significant cognitive challenge. The perception of speech is thought to rely strongly on the ability to process short-lived temporal modulations, while the perception of melody is thought to rely on the detailed spectral composition of sounds, such as fluctuations in frequency.

Previous studies have proposed a left- and right-hemisphere neural specialization for handling speech and music information, respectively.

However, whether this brain asymmetry stems from the different acoustical cues of speech and music or from domain-specific neural networks remains unclear. 

By combining ten original sentences with ten original melodies, Philippe Albouy and colleagues created a collection of 100 unique a cappella songs, which contained acoustic information in both the temporal (speech) and spectral (melodic) domains. The nature of the recordings allowed the authors to manipulate the songs and selectively degrade each one in either the temporal or the spectral domain.
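As a rough illustration of what such selective degradation involves, the sketch below blurs a song's spectrogram either along the time axis (removing fast temporal detail) or along the frequency axis (removing fine spectral detail) before resynthesising the waveform. This is a minimal sketch of the general idea and not the authors' published procedure; the blur sizes, STFT settings and the `song` array are illustrative assumptions.

    # Minimal sketch in Python, assuming a mono waveform `song` sampled at `fs` Hz.
    # Blurring the magnitude spectrogram along time removes temporal detail;
    # blurring along frequency removes spectral detail. Not the study's exact method.
    import numpy as np
    from scipy.signal import stft, istft
    from scipy.ndimage import uniform_filter1d

    def degrade(song, fs, axis, blur_bins, nperseg=1024):
        # axis=1 smears across time frames   -> degrades temporal (speech) cues
        # axis=0 smears across frequency bins -> degrades spectral (melodic) cues
        _, _, Z = stft(song, fs=fs, nperseg=nperseg)
        mag, phase = np.abs(Z), np.angle(Z)
        mag = uniform_filter1d(mag, size=blur_bins, axis=axis)  # local averaging = loss of detail
        _, degraded = istft(mag * np.exp(1j * phase), fs=fs, nperseg=nperseg)
        return degraded

    # temporally_degraded = degrade(song, 44100, axis=1, blur_bins=20)
    # spectrally_degraded = degrade(song, 44100, axis=0, blur_bins=20)

In a toy version like this, only the axis of blurring distinguishes the two conditions; in practice the degradation has to be calibrated so that speech and melody recognition can be compared fairly.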

Albouy and his team found that degradation of temporal information impaired speech recognition but not melody recognition. On the other hand, the perception of melody decreased only with spectral degradation of the song.

Concurrent fMRI brain scanning revealed asymmetrical neural activity; decoding of speech content occurred primarily in the left auditory cortex, while melodic content was handled primarily in the right.
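Here, "decoding" refers to multivariate pattern analysis: a classifier is trained on activity patterns from a brain region and tested on held-out scans to see whether the identity of the stimulus (for example, which of the ten sentences was heard) can be predicted above chance. The sketch below illustrates that general logic on simulated data; it is not the authors' analysis pipeline, and the array shapes and classifier choice are assumptions.

    # Generic decoding illustration in Python with simulated data (not the study's pipeline).
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Simulated region-of-interest data: 200 scans x 500 voxels, 10 stimulus labels.
    n_scans, n_voxels, n_stimuli = 200, 500, 10
    labels = rng.integers(0, n_stimuli, size=n_scans)
    patterns = rng.normal(size=(n_scans, n_voxels))
    patterns[np.arange(n_scans), labels] += 2.0  # inject a weak stimulus-specific signal

    # Cross-validated accuracy above chance means the region's activity
    # carries information about stimulus identity.
    accuracy = cross_val_score(LinearSVC(), patterns, labels, cv=5).mean()
    print(f"decoding accuracy: {accuracy:.2f} (chance = {1 / n_stimuli:.2f})")

Applied separately to left and right auditory cortex, an analysis of this kind is what reveals the asymmetry the study reports: sentence identity decodes better on the left, melody identity on the right.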