Scientists have reconstructed a famous Pink Floyd song from brain waves. Researchers at the University of California, Berkeley, used artificial intelligence to reconstruct a version of Another Brick in the Wall, Part 1 from the neural activity of patients as they listened to the music. Experts say the development gives hope to those who are unable to communicate due to neurological conditions.

Let's take a closer look.

What happened?

As per Daily Beast, scientists developed a machine-learning model that could decode brain activity. The research was published on Tuesday in the journal PLoS Biology.
They attached 2,668 electrodes directly to the brains of 29 patients who were listening to the song from Pink Floyd's 1979 album The Wall.
The brain activity of the patients, who were undergoing epilepsy surgery, was then recorded and analysed, as per The Guardian. Scientists then put artificial intelligence to work, first to decode the recordings and then to reproduce the sounds and words.

The team said they were able to recreate a rough copy of the song, as per the Daily Beast. They also discovered that the more data they fed into the model, the better the end result. Scientists said the phrase "All in all it's just another brick in the wall" is recognisable and that the song's rhythms remain intact.

"It sounds a bit like they're speaking underwater, but it's our first shot at this," Robert Knight, a neurologist and UC Berkeley professor of psychology at the Helen Wills Neuroscience Institute, was quoted as saying by EuroNews.

"We reconstructed the classic Pink Floyd song 'Another Brick in the Wall' from direct human cortical recordings, providing insights into the neural bases of music perception and into future brain decoding applications," Ludovic Bellier, a computational research scientist at UC Berkeley, said in a press release. Knight and Bellier led the study.

According to the Financial Times, researchers found that certain areas of the brain identify rhythm. Parts of the auditory cortex, located just behind and above the ear, respond to a voice or synthesiser, while others respond to sustained vocals.
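For readers curious how this kind of decoding works in principle, the sketch below shows one common approach: fitting a regularised regression model that maps electrode activity to the bins of an audio spectrogram, which can then be turned back into sound. This is a minimal, hypothetical illustration using synthetic data and a simple ridge regression, not the published model; the electrode count, window sizes, and metric are assumptions for the example only.

```python
# Minimal sketch: decode an audio spectrogram from neural recordings with
# ridge regression. Synthetic data stands in for real recordings.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins: activity from 2,668 electrodes sampled over time, and the
# magnitude spectrogram of the song at the same time points (the target).
n_timepoints, n_electrodes, n_freq_bins = 5000, 2668, 128
neural = rng.standard_normal((n_timepoints, n_electrodes))
spectrogram = rng.random((n_timepoints, n_freq_bins))

# Hold out part of the song so reconstruction is judged on unseen data,
# which is also how "more data fed in" translates into better results.
X_train, X_test, y_train, y_test = train_test_split(
    neural, spectrogram, test_size=0.2, shuffle=False
)

# One regularised linear decoder predicts all frequency bins jointly.
decoder = Ridge(alpha=1.0)
decoder.fit(X_train, y_train)
predicted_spectrogram = decoder.predict(X_test)

# Correlation between predicted and actual spectrograms is a common
# reconstruction-accuracy measure; audio itself can then be recovered from
# the predicted magnitude spectrogram with an iterative phase-reconstruction
# method such as Griffin-Lim (e.g. librosa.griffinlim).
corr = np.corrcoef(predicted_spectrogram.ravel(), y_test.ravel())[0, 1]
print(f"spectrogram reconstruction correlation: {corr:.3f}")
```

On real recordings, the decoded spectrogram is what gets converted back into audio, which is why the reconstructed song sounds muffled rather than studio-clean.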
The study also gave a fillip to scientific ideas about the roles of the brain's left and right hemispheres. Knight said that though the two work jointly, the left side handles language, while "music is more distributed, with a bias towards [the] right."
What do experts say?

That we are in a brave new world, and that this discovery could ultimately end up benefiting people with impaired speech.

"For those with amyotrophic lateral sclerosis [a condition of the nervous system] or aphasia [a language condition], who struggle to speak, we'd like a device that really sounded like you are communicating with somebody in a human way," Knight told New Scientist. "Understanding how the brain represents the musical elements of speech, including tone and emotion, could make such devices sound less robotic."

"It's a technical tour de force," Robert Zatorre, a neuroscientist at McGill University in Canada, told The New York Times.

Shailee Jain, a neuroscientist at the University of California, San Francisco, told Scientific American, "These exciting findings build on previous work to reconstruct plain speech from brain activity. Now we're able to really dig into the brain to unearth the sustenance of sound."

"This [new study] is a really nice demonstration that a lot of the same techniques that have been developed for speech decoding can also be applied to music – an under-appreciated domain in our field, given how important musical experience is in our lives," Dr Alexander Huth of the University of Texas told The Guardian.

Huth, who led a team of researchers that this year translated brain activity into text using MRI scan data, added, "While they didn't record brain responses while subjects were imagining music, this could be one of the things brain machine interfaces are used for in the future: translating imagined music into the real thing. It's an exciting time."

"It's a wonderful result," Knight added, as per EuroNews. "One of the things for me about music is it has prosody and emotional content. As this whole field of brain machine interfaces progresses, this gives you a way to add musicality to future brain implants for people who need it, someone who's got ALS or some other disabling neurological or developmental disorder compromising speech output. It gives you an ability to decode not only the linguistic content, but some of the prosodic content of speech, some of the affect. I think that's what we've really begun to crack the code on."

With inputs from agencies