Talk at the Human-Robot Interaction Conference 2018, Chicago (06.04.2018)
Diana Löffler gave a talk at HRI on conveying emotion in social robots.
Multimodal Expression of Artificial Emotion in Social Robots Using Color, Motion and Sound
Artificial emotion display is a key feature of social robots for communicating internal states and behaviors in familiar human terms. While humanoid robots can draw on signals such as facial expressions or voice, appearance-constrained robots can convey emotions only through less anthropomorphic output channels. Previous work focused on identifying specific expressional designs to convey a particular emotion, but little work has been done to quantify the information content of different modalities and how effective they are in combination. Based on emotion metaphors that capture mental models of emotions, we systematically designed and validated a set of 28 different uni- and multimodal expressions for the basic emotions joy, sadness, fear and anger, using the most common output modalities of color, motion and sound. Classification accuracy and users' confidence in emotion assignment were evaluated in an empirical study with 33 participants and a robot probe. The findings are distilled into a set of recommendations about which modalities are most effective in communicating basic artificial emotions. Combining color with planar motion offered the best overall cost/benefit ratio by making use of redundant multimodal coding. Furthermore, the modalities differed in how effectively they communicated individual emotions: joy was best conveyed via color and motion, sadness via sound, fear via motion, and anger via color.
Löffler, D., Schmidt, N., & Tscharn, R. (2018). Multimodal Expression of Artificial Emotion in Social Robots Using Color, Motion and Sound. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (pp. 334-343). ACM.