Volume 3, Issue 3, December 2016, Pages 148 - 151
Emotion Recognition of a Speaker Using Facial Expression Intensity of Thermal Image and Utterance Time
Authors
Yuuki Oka, Yasunari Yoshitomi, Taro Asada, Masayoshi Tabuse
Corresponding Author
Yasunari Yoshitomi
Available Online 1 December 2016.
- DOI
- 10.2991/jrnal.2016.3.3.3
- Keywords
- Emotion recognition, Mouth and jaw area, Thermal image, Utterance judgment.
- Abstract
Herein, we propose a method for recognizing human emotions that uses the standardized mean value of facial expression intensity obtained from thermal images and the standardized mean value of utterance time. In this study, the emotions of one subject were discerned with 76.5% accuracy across 23 kinds of utterances spoken while intentionally displaying the five emotions of “anger,” “happiness,” “neutrality,” “sadness,” and “surprise.”
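The abstract describes combining two standardized mean values as features. A minimal sketch of that feature preparation, assuming z-score standardization and made-up example values (the paper does not specify the standardization formula or the classifier, so everything below is illustrative):

```python
import numpy as np

def standardize(values):
    """Z-score standardization: (x - mean) / std."""
    values = np.asarray(values, dtype=float)
    return (values - values.mean()) / values.std()

# Hypothetical per-utterance features: mean facial expression
# intensity (from thermal images) and utterance time in seconds.
intensity = [0.42, 0.58, 0.35, 0.61, 0.50]
duration = [1.2, 0.9, 1.5, 0.8, 1.1]

# Stack the two standardized features into one vector per utterance;
# a classifier would then map each row to an emotion label.
features = np.column_stack([standardize(intensity), standardize(duration)])
print(features.shape)  # (5, 2)
```

Each row is a two-dimensional feature vector for one utterance; after standardization each column has zero mean and unit variance, so neither feature dominates by scale alone.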
- Copyright
- © 2016, the Authors. Published by Atlantis Press.
- Open Access
- This is an open access article distributed under the CC BY-NC license (http://creativecommons.org/licenses/by-nc/4.0/).
Cite this article
TY - JOUR
AU - Yuuki Oka
AU - Yasunari Yoshitomi
AU - Taro Asada
AU - Masayoshi Tabuse
PY - 2016
DA - 2016/12/01
TI - Emotion Recognition of a Speaker Using Facial Expression Intensity of Thermal Image and Utterance Time
JO - Journal of Robotics, Networking and Artificial Life
SP - 148
EP - 151
VL - 3
IS - 3
SN - 2352-6386
UR - https://doi.org/10.2991/jrnal.2016.3.3.3
DO - 10.2991/jrnal.2016.3.3.3
ID - Oka2016
ER -