Listening to music chosen by a machine through thought


Edited by Silvia Sanna.

The Brain-Computer Interface (BCI) is an emerging field at the intersection of computer science and biomedical engineering that tries to interface human thought with a computer. Its applications are numerous, even though the best known lately is the recent work by Neuralink (Elon Musk's BCI company). Also of considerable importance is the study conducted by the University of Reading in collaboration with the University of Plymouth: a BCI that associates different patterns of cortical brain activity with different emotional states.

We all have special memories associated with notes, tunes and songs, especially from the summer. Listening again to a song we have not heard for a long time can trigger the memories of that moment, whether beautiful or ugly, and sometimes even let us relive those same emotions. This is precisely the objective of the study: to make us relive the emotions of particular moments.


Brain-Computer Music Interface to monitor and induce affective states

As stated by the research group, the main objective of the project was to build an intelligent system that can monitor our affective states and, in addition, induce specific states through music. The induction, however, must be automatic and adapted to each subject.


Initially, the study focused on understanding how the EEG, heart rate and other physiological signals change when we listen to music that evokes particular memories. The evolution of neural activity was visualized with fMRI (functional magnetic resonance imaging), confirming that this really happens and that music produces measurable responses in the brain.

It has been shown that music has a strong impact on our emotions, especially its rhythm, which increases the level of pleasure and excitement. The study also shows how the cortical activity captured by the EEG reflects the internal activity of some deeper regions, in particular the amygdala.

Thanks to these data, a music system has been developed that is able to compose music in real time and evoke a wide range of emotions in the listener. The system has been tested on a group of healthy participants and also on a small group of people with Huntington's disease (a neurodegenerative genetic disease that affects muscle coordination and leads to cognitive and psychiatric problems).


According to the researchers, the system has great application potential, predominantly in music therapy and music education, and could help in the development of therapeutic systems: a first step towards the era of brain-guided music therapy. The system will be mainly useful for understanding the relationship between our emotions and what happens in the brain, and above all for studying neural structure and interactions in depth.

Some details of the Affective Jukebox application

The system was developed using dry electrodes and the MOBIlab+ preamplifier system. The EEG signal was then passed to MATLAB/Simulink for processing (a low-pass filter and a notch filter), feature extraction and classification (LDA and SVM). These data were then used as input for Max/MSP, which handled the selection of the music clips.
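The paper does not publish its code, but the chain just described is easy to sketch. Below is a minimal Python reconstruction (not the authors' implementation) using SciPy and scikit-learn; the sampling rate, filter settings and function names are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 256           # assumed sampling rate (Hz)
NOTCH_HZ = 50.0    # mains hum; 60 Hz in North America
LOWPASS_HZ = 40.0  # keeps the EEG bands of interest

def preprocess(raw):
    """Notch- and low-pass-filter one channel of raw EEG (1-D array)."""
    b, a = iirnotch(NOTCH_HZ, 30.0, fs=FS)   # remove mains interference
    clean = filtfilt(b, a, raw)
    b, a = butter(4, LOWPASS_HZ, btype="low", fs=FS)
    return filtfilt(b, a, clean)

def train_mood_classifier(features, labels):
    """Fit an LDA classifier on band-power features; the study also
    evaluated an SVM for the same task."""
    clf = LinearDiscriminantAnalysis()
    clf.fit(features, labels)
    return clf
```

In the real system the classifier's output would then be streamed to Max/MSP, presumably over a protocol such as OSC, to trigger the matching music clip.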

All epochs (segments of the sampled signal, here tapered with a Hanning window) exceeding a voltage threshold of 100 microvolts were discarded, to reject eye-blink noise and other artifacts. The remaining epochs were then passed through a bandpass filter to isolate their power spectrum, which was normalized over 30 windows of 10 seconds each.
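As a sketch of that epoching step, here is how the 100 µV rejection and the Hanning taper could look in Python; the sampling rate is again an assumption.

```python
import numpy as np

FS = 256            # assumed sampling rate (Hz)
EPOCH_S = 10        # 10-second windows, 30 per recording
REJECT_UV = 100.0   # rejection threshold from the study (microvolts)

def epoch_and_reject(eeg_uv):
    """Split a 1-D EEG trace (in microvolts) into Hanning-tapered
    10-second epochs, discarding any epoch whose peak amplitude
    exceeds 100 uV (eye blinks and other artifacts)."""
    n = FS * EPOCH_S
    taper = np.hanning(n)
    kept = []
    for start in range(0, len(eeg_uv) - n + 1, n):
        seg = eeg_uv[start:start + n]
        if np.max(np.abs(seg)) > REJECT_UV:
            continue          # artifact: drop the whole epoch
        kept.append(seg * taper)
    return np.array(kept)
```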

The mood was read from the fact that strong alpha-band activity indicates a relaxed state of mind, while increased beta-band activity indicates a state of excitement. The balance of activity between the two hemispheres, in turn, reflects differences in emotional valence.
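These two observations are easy to turn into numbers once band power is available. The indices below are a common formulation, not necessarily the one used in the study: a beta/alpha ratio for arousal, and inter-hemispheric alpha asymmetry for valence.

```python
import numpy as np

FS = 256                  # assumed sampling rate (Hz)
ALPHA = (8.0, 12.0)       # conventional band limits; exact cut-offs vary
BETA = (13.0, 30.0)

def band_power(epoch, lo, hi):
    """Mean FFT power of a 1-D epoch in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / FS)
    psd = np.abs(np.fft.rfft(epoch)) ** 2
    return psd[(freqs >= lo) & (freqs <= hi)].mean()

def arousal_index(epoch):
    """Beta power relative to alpha: higher -> excited, lower -> relaxed."""
    return band_power(epoch, *BETA) / band_power(epoch, *ALPHA)

def valence_index(left, right):
    """Alpha asymmetry between hemispheres, a common proxy for valence:
    relatively less alpha on the left is usually read as more positive."""
    a_l, a_r = band_power(left, *ALPHA), band_power(right, *ALPHA)
    return (a_r - a_l) / (a_r + a_l)
```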

Implementation details of the algorithm and of the real-time composition of the music have not been disclosed.

These techniques produced very good results: the system's average error in mood prediction was 0.07. More detailed data are shown in the tables below:

Tables: mood-prediction results (Credits: SystemsCuE)
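For context, a mean prediction error such as the 0.07 quoted above would typically be computed as the mean absolute difference between the predicted mood and the mood the participant reported. The snippet below is a hypothetical reconstruction; the study's exact metric is not given here.

```python
import numpy as np

def mean_prediction_error(predicted, reported):
    """Mean absolute error between the system's mood predictions and
    the participants' self-reported moods, on a shared normalized scale."""
    predicted = np.asarray(predicted, dtype=float)
    reported = np.asarray(reported, dtype=float)
    return float(np.abs(predicted - reported).mean())

# Example: mean_prediction_error([0.80, 0.20, 0.50], [0.85, 0.25, 0.45])
# -> 0.05
```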

At this point an ethical question arises: in order to always have the best music at hand, would we be willing to keep a helmet with electrodes on our heads at all times? Perhaps, to obtain these results, researchers will have to explore less cumbersome alternatives that deliver the same performance.


