A review of methods for detecting multidimensional emotions in sound, image and text
Emotional computing unifies the understanding and quantification of emotions, drawing on diverse data sources such as text, audio, and visual cues. A key challenge is distinguishing authentic emotions from those deliberately concealed through facial expressions, vocal nuances, and other communicative behaviors. Integrating multiple physiological and behavioral signals can yield deeper insight into an individual's emotional state. Historically, research has concentrated predominantly on a single facet of emotional computing. In contrast, this review offers an in-depth exploration of its pivotal domains, encompassing emotion models, databases (DBs), and contemporary developments. We begin by describing two prevalent emotion models and then survey well-known sentiment analysis DBs. Subsequently, we examine state-of-the-art emotion detection and analysis methods across the different sensory channels, elaborating on their design and operating principles. Finally, we discuss the fundamental principles of emotional computing and its real-world applications. This review aims to give researchers from academia and industry a holistic understanding of the latest progress in the field.