EmoRL: Continuous acoustic emotion classification using deep reinforcement learning

Link:
Author(s):
Year of publication:
2018
Media type:
Text
Keywords:
  • Speech
  • Speech recognition
  • Speech emotion
  • Models
Description:
  • Acoustically expressed emotions can make communication with a robot more efficient. Detecting emotions such as anger could give the robot a cue that a situation is unsafe or undesired. Recently, several deep neural network-based models have been proposed that establish new state-of-the-art results in affective state evaluation. These models typically start processing only at the end of each utterance, which not only requires a mechanism to detect the end of an utterance but also makes them difficult to use in real-time communication scenarios such as human-robot interaction. We propose the EmoRL model, which triggers an emotion classification as soon as it gains enough confidence while listening to a person speaking. As a result, we minimize the need to segment the audio signal for classification and achieve lower latency, because the audio signal is processed incrementally. The method achieves accuracy competitive with a strong baseline model while allowing much earlier prediction.
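  The abstract describes a model that listens to speech incrementally and triggers a classification once it is confident enough, rather than waiting for the utterance to end. The sketch below illustrates that wait-or-classify loop. Note the hedges: the paper learns the trigger decision with deep reinforcement learning, whereas this sketch substitutes a fixed confidence threshold for the learned policy, and the label set, threshold, function names, and dummy scorer are all illustrative assumptions, not the authors' code.

```python
# Minimal sketch of the incremental "listen until confident, then classify"
# behaviour described in the abstract. The paper learns when to trigger via
# deep reinforcement learning; here a fixed confidence threshold stands in
# for that learned policy. All names and values are assumptions.
import numpy as np

EMOTIONS = ["anger", "happiness", "sadness", "neutral"]  # assumed label set
CONFIDENCE_THRESHOLD = 0.9                               # assumed trigger point

def classify_incrementally(frames, score_frame):
    """Consume audio frames one by one and return (label, frames_used),
    emitting a prediction as soon as the running class posterior is
    confident enough instead of waiting for the utterance to end."""
    logits = np.zeros(len(EMOTIONS))
    probs = np.full(len(EMOTIONS), 1.0 / len(EMOTIONS))
    used = 0
    for frame in frames:
        used += 1
        logits += score_frame(frame)             # accumulate per-frame evidence
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()                     # softmax over emotion classes
        if probs.max() >= CONFIDENCE_THRESHOLD:  # "enough confidence": stop early
            break
    return EMOTIONS[int(probs.argmax())], used

if __name__ == "__main__":
    rng = np.random.default_rng(0)

    def dummy_scorer(_frame):
        # Stand-in for a trained acoustic model: noisy per-frame evidence that
        # slowly favours class 0 ("anger"), so confidence grows over time.
        evidence = rng.normal(0.0, 0.5, len(EMOTIONS))
        evidence[0] += 0.4
        return evidence

    label, used = classify_incrementally(range(200), dummy_scorer)
    print(f"Predicted '{label}' after {used} of 200 frames")
```

  The early-stopping structure is what yields the latency gain claimed in the abstract: on easy utterances the loop exits after a few frames, while hard ones fall back to the full utterance, matching end-of-utterance accuracy.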
License:
  • info:eu-repo/semantics/closedAccess
Source system:
UHH Research Information System

Internal metadata
Source record
oai:www.edit.fis.uni-hamburg.de:publications/b713f9d5-aa96-4b97-8427-aa5b70484719