A research team from the University of Malaga (Andalusia, Spain) has analyzed how the brain responds to different musical genres and has used artificial intelligence to classify the electrical signals the brain produces, distinguishing whether the sound heard is music or voice and whether the listener likes it. The data obtained could enable applications that generate musical playlists tailored to each person's tastes or individual needs.
Knowing how the brain works in response to different stimuli, and which specific areas are activated in certain circumstances, makes it possible to create tools that facilitate daily life. The researchers present an advance in the classification of brain responses to different musical genres and to sounds of differing natures (voice and music) in the article "Energy-based features and bi-LSTM neural network for EEG-based music and voice classification," published in Neural Computing and Applications.
The experts concluded that the brain responds differently to different stimuli, and they identified these differences through the energy levels of the electrical signals recorded in different brain regions. They observed that these signals change depending on the musical genre being heard and on whether the listener likes the song.
This could open the door to apps that make better music recommendations than current playlists. "If we know how the brain reacts depending on the musical style being listened to and the tastes of the listener, we can fine-tune the selection proposed to the user," Lorenzo J. Tardón, a researcher at the University of Malaga and one of the authors of the article, told Fundación Descubre.
To do this, they defined a scheme that characterizes brain activity through the relationships between electrical signals recorded at different locations by electroencephalography (EEG), and that classifies those signals in two types of tests: binary and multi-class.
"The first uses simple binary tasks, in which we differentiate between spoken voice and music. The second, more complex, analyzes brain responses when hearing songs from different musical genres: ballad, classical, metal and reggaeton. In addition, it takes into account the musical taste of each individual," adds the researcher.
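To make the idea of "energy-based features" concrete, here is a minimal sketch of how per-channel signal energy and energy relationships between channels might be computed. The function names, the pairwise-ratio formulation and the array sizes are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def channel_energy(eeg):
    """Energy of each EEG channel: sum of squared samples.
    eeg: array of shape (n_channels, n_samples)."""
    return np.sum(eeg ** 2, axis=1)

def energy_ratio_features(eeg, eps=1e-12):
    """Hypothetical feature set: pairwise energy ratios between
    channels, one way of encoding 'energy relationships' between
    brain regions (the paper's exact formulation may differ)."""
    e = channel_energy(eeg)
    n = len(e)
    return np.array([e[i] / (e[j] + eps)
                     for i in range(n) for j in range(n) if i != j])

# Example: 61 channels, 2 seconds of EEG sampled at 256 Hz
eeg = np.random.randn(61, 512)
feats = energy_ratio_features(eeg)
print(feats.shape)  # (3660,) = 61 * 60 directed channel pairs
```

Features of this kind, computed over successive time windows, yield the sequences that a recurrent classifier can then label as voice versus music, or by genre and listener preference.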
Artificial intelligence to study the brain
Neural networks are artificial intelligence tools that process data in a manner loosely inspired by the way the human brain works. The results of this work were obtained with a neural network known as a bidirectional LSTM (long short-term memory), which learns relationships in the data over both long and short time spans. It is a type of deep learning architecture that processes a sequence in two directions, from beginning to end and vice versa. In this way, the model takes both past and future context into account at each point in the sequence when building the representations used for classification.
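The following short sketch, written in PyTorch (an assumption; the article does not name the framework or hyperparameters), shows the two-direction behavior described above: each time step emits a forward and a backward hidden state, so the output dimension is doubled.

```python
import torch
import torch.nn as nn

# Illustrative sizes: 100 time steps, 61 input features, 64 hidden units
seq_len, n_features, hidden = 100, 61, 64
bilstm = nn.LSTM(input_size=n_features, hidden_size=hidden,
                 batch_first=True, bidirectional=True)

x = torch.randn(1, seq_len, n_features)  # one EEG feature sequence
out, _ = bilstm(x)

# Each step concatenates a forward and a backward hidden state, so the
# model "sees" both past and future context: output size is 2 * hidden.
print(out.shape)  # torch.Size([1, 100, 128])
```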
Specifically, the neural network used for these classification tasks has 61 inputs, which receive the data sequences together with the energy relationships between the different areas of the brain. This information is processed through the successive layers of the network to answer the questions posed about the type of sound content heard or the tastes of the person listening to a piece of music.
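Putting the pieces together, a 61-input bi-LSTM followed by a dense classification head might look like the sketch below. The class name, layer sizes and use of the final time step are assumptions for illustration; only the 61 inputs and the bidirectional LSTM come from the article.

```python
import torch
import torch.nn as nn

class EEGMusicClassifier(nn.Module):
    """Hypothetical sketch: 61-input bidirectional LSTM followed by a
    dense layer that scores the possible answers to one question."""
    def __init__(self, n_inputs=61, hidden=64, n_classes=4):
        super().__init__()
        self.bilstm = nn.LSTM(n_inputs, hidden,
                              batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                # x: (batch, time, 61)
        out, _ = self.bilstm(x)
        return self.head(out[:, -1, :])  # last step -> class scores

# Example: the four-way genre question
# (ballad / classical / metal / reggaeton)
model = EEGMusicClassifier(n_classes=4)
logits = model(torch.randn(8, 100, 61))  # 8 sequences, 100 time steps
print(logits.shape)                      # torch.Size([8, 4])
```

The same backbone could serve the binary voice-versus-music test by setting n_classes=2.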
Musical experiments
During the trials, the volunteers wore caps fitted with electrodes that picked up the brain's electrical signals while the music was played to them through speakers. Synchronization markers were recorded so that the brain activity could be matched to what was being heard at each moment of the half-hour experiment.
In the first test, participants listened in random order to 20 excerpts, each lasting 30 seconds, taken from the chorus or the catchiest part of songs from different musical genres. After each excerpt they were asked whether they liked the song, with three possible answers: "I like it," "I like it a little" or "I don't like the song." They were also asked whether they already knew the song, with the possible answers "I know the song," "It sounds familiar" or "I don't know the song."
During the second test, subjects heard 30 randomly chosen sentences in different languages: Spanish, English, German, Italian and Korean. This test was intended to identify brain activation upon hearing a voice, regardless of whether the listener knows the language being spoken.
The researchers are now continuing their studies to evaluate other types of sounds and tasks, as well as to reduce the number of EEG channels required, which would extend the model's usefulness to other applications and environments.
More information:
Isaac Ariza et al, "Energy-based features and bi-LSTM neural network for EEG-based music and voice classification," Neural Computing and Applications (2023). DOI: 10.1007/s00521-023-09061-3
Provided by
Fundación Descubre