AI Matches Humans in Vocal Emotion Detection



Summary: Machine learning (ML) models can accurately identify emotions from brief audio clips, achieving a level of accuracy comparable to humans. By analyzing nonsensical sentences to remove the influence of language and content, the study found that deep neural networks (DNNs) and a hybrid model (C-DNN) were particularly effective in recognizing emotions such as joy, anger, sadness, and fear from clips as short as 1.5 seconds.

This breakthrough suggests the potential for creating systems that can provide immediate feedback on emotional states in various applications, from therapy to communication technology. However, the study also acknowledges limitations, including the use of actor-spoken sentences, and calls for further research into which audio clip durations are optimal for emotion recognition.

Key Facts:

  1. ML Models vs. Human Emotion Recognition: ML models, specifically DNNs and a hybrid model, can identify emotions from audio clips with an accuracy similar to that of humans, challenging the traditional belief that emotion recognition is solely a human capability.
  2. Short Audio Clips for Emotion Detection: The study focused on audio clips 1.5 seconds long, demonstrating that this is sufficient time for both humans and machines to accurately detect emotional undertones.
  3. Potential for Real-world Applications: The findings open up possibilities for developing technology that can interpret emotional cues in real-time, promising advancements in fields requiring nuanced emotional understanding.

Source: Frontiers

Words are important to express ourselves. What we don’t say, however, may be even more instrumental in conveying emotions. Humans can often tell how the people around them feel through non-verbal cues embedded in the voice.

Now, researchers in Germany wanted to find out whether technical tools, too, can accurately predict emotional undertones in fragments of voice recordings. To do so, they compared the accuracy of three ML models at recognizing diverse emotions in audio excerpts.

Their results were published in Frontiers in Psychology.


“Here we show that machine learning can be used to recognize emotions from audio clips as short as 1.5 seconds,” said the article’s first author Hannes Diemerling, a researcher at the Center for Lifespan Psychology at the Max Planck Institute for Human Development. “Our models achieved an accuracy similar to humans when categorizing meaningless sentences with emotional coloring spoken by actors.”

Hearing how we feel

The researchers drew nonsensical sentences from two datasets – one Canadian, one German – which allowed them to investigate whether ML models can accurately recognize emotions regardless of language, cultural nuances, and semantic content.

Each clip was shortened to a length of 1.5 seconds, as this is how long humans need to recognize emotion in speech. It is also the shortest possible audio length in which overlapping of emotions can be avoided. The emotions included in the study were joy, anger, sadness, fear, disgust, and neutral.
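To make this preprocessing step concrete, here is a minimal sketch of trimming a recording to a fixed 1.5-second excerpt. It is not the study's own code; it assumes the Python library librosa, a 16 kHz sample rate, and a hypothetical file name.

```python
# Minimal sketch (assumed tooling, not the study's code): trim a recording
# to a fixed 1.5-second excerpt, zero-padding if it is shorter.
import numpy as np
import librosa

TARGET_SR = 16_000                              # assumed sample rate
CLIP_SECONDS = 1.5                              # duration used in the study
CLIP_SAMPLES = int(TARGET_SR * CLIP_SECONDS)

def trim_to_clip(path: str, offset: float = 0.0) -> np.ndarray:
    """Load an audio file and return exactly 1.5 s of samples."""
    y, _ = librosa.load(path, sr=TARGET_SR, offset=offset, duration=CLIP_SECONDS)
    if len(y) < CLIP_SAMPLES:
        y = np.pad(y, (0, CLIP_SAMPLES - len(y)))
    return y

clip = trim_to_clip("example_utterance.wav")    # hypothetical file name
```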

Based on training data, the researchers generated ML models that worked in one of three ways: Deep neural networks (DNNs) are like complex filters that analyze sound components like frequency or pitch – for example when a voice is louder because the speaker is angry – to identify underlying emotions.
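As a rough illustration of this feature-based approach, the sketch below extracts a handful of acoustic descriptors with librosa and feeds them to a small dense network in Keras. The chosen features and layer sizes are assumptions for illustration, not the architecture reported in the paper.

```python
# Illustrative sketch only (assumed features and layer sizes, not the
# published architecture): acoustic descriptors fed to a dense classifier.
import numpy as np
import librosa
import tensorflow as tf

def acoustic_features(y: np.ndarray, sr: int = 16_000) -> np.ndarray:
    """Summarize a 1.5 s clip as 15 numbers: 13 MFCCs, loudness, pitch."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)       # timbre
    loudness = float(librosa.feature.rms(y=y).mean())                     # energy
    pitch = float(np.nanmean(librosa.yin(y, fmin=50, fmax=500, sr=sr)))   # fundamental frequency
    return np.concatenate([mfcc, [loudness, pitch]])

# A small dense ("deep neural network") classifier over those features.
dnn = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(15,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(6, activation="softmax"),   # joy, anger, sadness, fear, disgust, neutral
])
dnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```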

Convolutional neural networks (CNNs) scan for patterns in the visual representation of soundtracks, much like identifying emotions from the rhythm and texture of a voice. The hybrid model (C-DNN) merges both techniques, using both the audio and its visual spectrogram to predict emotions. The models were then tested for effectiveness on both datasets.
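The spectrogram and hybrid ideas can be sketched in the same spirit: a small CNN branch reads a log-mel spectrogram of the 1.5-second clip, and a hybrid model concatenates that branch with the feature-based one from the previous snippet. The input shape and layer sizes here are illustrative guesses, not the C-DNN described in the study.

```python
# Illustrative sketch only: CNN branch over log-mel spectrograms plus a
# hybrid model that also takes the 15-dimensional acoustic feature vector.
import numpy as np
import librosa
import tensorflow as tf

def mel_spectrogram(y: np.ndarray, sr: int = 16_000) -> np.ndarray:
    """Turn a 1.5 s clip into a log-scaled mel spectrogram 'image'."""
    S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
    return librosa.power_to_db(S, ref=np.max)

# CNN branch: scans the spectrogram for local time-frequency patterns.
spec_in = tf.keras.Input(shape=(128, 47, 1), name="spectrogram")   # ~1.5 s at 16 kHz (assumed shape)
x = tf.keras.layers.Conv2D(16, 3, activation="relu")(spec_in)
x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.Conv2D(32, 3, activation="relu")(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)

# Feature branch: the same kind of input the DNN sketch above uses.
feat_in = tf.keras.Input(shape=(15,), name="acoustic_features")
f = tf.keras.layers.Dense(32, activation="relu")(feat_in)

# Hybrid: merge both views and classify into the six emotions.
merged = tf.keras.layers.Concatenate()([x, f])
out = tf.keras.layers.Dense(6, activation="softmax")(merged)
c_dnn = tf.keras.Model(inputs=[spec_in, feat_in], outputs=out)
c_dnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```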

“We found that DNNs and C-DNNs achieve a better accuracy than only using spectrograms in CNNs,” Diemerling said.

“Regardless of model, emotion classification was correct with a higher probability than can be achieved through guessing and was comparable to the accuracy of humans.”
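For context on the guessing baseline mentioned in the quote: with six emotion categories, random guessing corresponds to about 1/6, or roughly 16.7%, accuracy. The toy comparison below uses placeholder accuracy values, not the figures reported in the study.

```python
# Back-of-envelope comparison with placeholder numbers (not study results).
n_classes = 6
chance = 1 / n_classes                      # ≈ 0.167 with six emotions
model_accuracy = 0.55                       # hypothetical value
human_accuracy = 0.56                       # hypothetical value
print(f"chance baseline: {chance:.1%}")
print(f"model vs. chance: +{model_accuracy - chance:.1%} points")
print(f"model vs. human benchmark: {model_accuracy - human_accuracy:+.1%} points")
```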

As good as any human

“We wanted to set our models in a realistic context and used human prediction skills as a benchmark,” Diemerling explained.

“Had the models outperformed humans, it could mean that there might be patterns that are not recognizable by us.” The fact that untrained humans and models performed similarly may mean that both rely on similar recognition patterns, the researchers said.

The present findings also show that it is possible to develop systems that can instantly interpret emotional cues to provide immediate and intuitive feedback in a wide range of situations. This could lead to scalable, cost-efficient applications in various domains where understanding emotional context is crucial, such as therapy and interpersonal communication technology.

The researchers also pointed to some limitations in their study, for example, that actor-spoken sample sentences may not convey the full spectrum of real, spontaneous emotion. They also said that future work should investigate audio segments that last longer or shorter than 1.5 seconds to find out which duration is optimal for emotion recognition.

About this AI and emotion research news

Author: Deborah Pirchner
Source: Frontiers
Contact: Deborah Pirchner – Frontiers
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Implementing Machine Learning Techniques for Continuous Emotion Prediction from Uniformly Segmented Voice Recordings” by Hannes Diemerling et al. Frontiers in Psychology


Abstract

Implementing Machine Learning Techniques for Continuous Emotion Prediction from Uniformly Segmented Voice Recordings

Introduction: Emotional recognition from audio recordings is a rapidly advancing field, with significant implications for artificial intelligence and human-computer interaction. This study introduces a novel method for detecting emotions from short, 1.5 s audio samples, aiming to improve accuracy and efficiency in emotion recognition technologies.

Methods: We utilized 1,510 unique audio samples from two databases in German and English to train our models. We extracted various features for emotion prediction, employing Deep Neural Networks (DNN) for general feature analysis, Convolutional Neural Networks (CNN) for spectrogram analysis, and a hybrid model combining both approaches (C-DNN). The study addressed challenges associated with dataset heterogeneity, language differences, and the complexities of audio sample trimming.

Results: Our models demonstrated accuracy significantly surpassing random guessing, aligning closely with human evaluative benchmarks. This indicates the effectiveness of our approach in recognizing emotional states from brief audio clips.

Discussion: Despite the challenges of integrating diverse datasets and managing short audio samples, our findings suggest considerable potential for this methodology in real-time emotion detection from continuous speech. This could contribute to improving the emotional intelligence of AI and its applications in various areas.
