Scientists have reconstructed Pink Floyd’s Another Brick in the Wall by eavesdropping on people’s brainwaves – the first time a recognisable song has been decoded from recordings of electrical brain activity.
The hope is that doing so could ultimately help to restore the musicality of natural speech in patients who struggle to communicate because of disabling neurological conditions such as stroke or amyotrophic lateral sclerosis – the neurodegenerative disease that Stephen Hawking was diagnosed with.
Although members of the same laboratory had previously managed to decipher speech – and even silently imagined words – from brain recordings, “in general, all of these reconstruction attempts have had a robotic quality”, said Prof Robert Knight, a neurologist at the University of California, Berkeley, US, who conducted the study with the postdoctoral fellow Ludovic Bellier.
“Music, by its very nature, is emotional and prosodic – it has rhythm, stress, accent and intonation. It contains a much bigger spectrum of things than limited phonemes in whatever language, that could add another dimension to an implantable speech decoder.”
Whereas previous work has decoded electrical activity from the brain’s speech motor cortex – an area that controls the tiny muscle movements of the lips, jaw, tongue and larynx that form words – the current study used recordings from the brain’s auditory regions, where all aspects of sound are processed.
The team analysed brain recordings from 29 patients as they were played an approximately three-minute segment of the Pink Floyd song, taken from their 1979 album The Wall. The volunteers’ brain activity was detected by placing electrodes directly on the surface of their brains as they underwent surgery for epilepsy.
Artificial intelligence was then used to decode the recordings and reconstruct the sounds and words. Though very muffled, the phrase “All in all, it’s just another brick in the wall” comes through recognisably in the reconstructed song – with its rhythms and melody intact.
“It sounds a bit like they’re speaking underwater, but it’s our first shot at this,” said Knight.
He believes that using a higher density of electrodes might improve the quality of their reconstructions: “The average separation of the electrodes was about 5mm, but we had a couple of patients with 3mm [separations] and they were the best performers in terms of reconstruction,” Knight said.
“Now that we know how to do this, I think if we had electrodes that were like a millimetre and a half apart, the sound quality would be much better.”
As brain recording techniques improve, it may also become possible to make such recordings without the need for surgery – perhaps using sensitive electrodes attached to the scalp.
This year, researchers led by Dr Alexander Huth at the University of Texas at Austin announced that they had managed to translate brain activity into a continuous stream of text using non-invasive MRI scan data. The system was not accurate enough to decode the exact words but could detect the gist of sentences.
“This [new study] is a really nice demonstration that a lot of the same techniques that have been developed for speech decoding can also be applied to music – an under-appreciated domain in our field, given how important musical experience is in our lives,” Huth said.
“While they didn’t record brain responses while subjects were imagining music, this could be one of the things brain machine interfaces are used for in the future: translating imagined music into the real thing. It’s an exciting time.”
The research, published in PLoS Biology, also pinpointed new areas of the brain involved in detecting rhythm, and confirmed the right side of the brain was more attuned to music than the left.
A better understanding of how music and language are processed could also have practical applications, such as helping to shed light on the mystery of why people with Broca’s aphasia, who struggle to find and say the right words, can often sing words with no difficulty.