Emotion Detection Using Deep Learning Algorithm
Automatic emotion detection is a key task in human–machine interaction, where recognizing the user's emotional state makes the interaction more natural. In this paper, we propose an emotion detection method based on a deep learning algorithm. The proposed method uses an end-to-end convolutional neural network (CNN). To increase the computational efficiency of the deep network, we initialize its weight parameters with the trained weight parameters of MobileNet. To make the system independent of the input image size, we place a global average pooling layer on top of its last convolutional layer. The proposed system is validated for emotion detection on two benchmark datasets, viz. the Extended Cohn–Kanade (CK+) dataset and the Japanese Female Facial Expression (JAFFE) dataset. The experimental results show that the proposed method outperforms existing methods for emotion detection.
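The input-size independence claimed above rests on global average pooling: averaging each feature map of the last convolutional layer over its spatial dimensions yields a fixed-length vector whose size depends only on the channel count, not on the input resolution. A minimal NumPy sketch of this idea (not the authors' implementation; the channel count is illustrative):

```python
import numpy as np

def global_average_pooling(feature_maps):
    # feature_maps: (H, W, C) activations from the last convolutional layer.
    # Averaging over the two spatial axes yields a C-vector, so the
    # classifier placed on top no longer depends on H and W.
    return feature_maps.mean(axis=(0, 1))

# Illustrative channel count (MobileNet's final conv layer emits 1024 channels).
C = 1024
small = np.random.rand(4, 4, C)    # features from a small input image
large = np.random.rand(12, 12, C)  # features from a larger input image

# Both inputs map to the same fixed-length descriptor.
assert global_average_pooling(small).shape == (C,)
assert global_average_pooling(large).shape == (C,)
```

In practice, the same effect is obtained by placing a global-average-pooling layer between the pretrained convolutional backbone and the final classification layer, which is what allows a single classifier to serve images of varying size.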
A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, “MobileNets: Efficient convolutional neural networks for mobile vision applications,” arXiv preprint arXiv:1704.04861, 2017.
G. Donato, M. S. Bartlett, J. C. Hager, P. Ekman, and T. J. Sejnowski, “Classifying facial actions,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 10, pp. 974–989, 1999.
P. Ekman and W. V. Friesen, “Constants across cultures in the face and emotion.” Journal of personality and social psychology, vol. 17, no. 2, p. 124, 1971.
C. A. Corneanu, M. O. Simón, J. F. Cohn, and S. E. Guerrero, “Survey on RGB, 3D, thermal, and multimodal approaches for facial expression recognition: History, trends, and affect-related applications,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 8, pp. 1548–1568, 2016.
P. Burkert, F. Trier, M. Z. Afzal, A. Dengel, and M. Liwicki, “DeXpression: Deep convolutional neural network for expression recognition,” arXiv preprint arXiv:1509.05371, 2015.
A. Mollahosseini, D. Chan, and M. H. Mahoor, “Going deeper in facial expression recognition using deep neural networks,” in 2016 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 2016, pp. 1–10.
E. Barsoum, C. Zhang, C. C. Ferrer, and Z. Zhang, “Training deep networks for facial expression recognition with crowd-sourced label distribution,” in Proceedings of the 18th ACM International Conference on Multimodal Interaction, 2016, pp. 279–283.
B. Hasani and M. H. Mahoor, “Facial expression recognition using enhanced deep 3d convolutional neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2017, pp. 30–40.
H. Ding, S. K. Zhou, and R. Chellappa, “Facenet2expnet: Regularizing a deep face recognition net for expression recognition,” in 2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017). IEEE, 2017, pp. 118–126.
G. Pons and D. Masip, “Supervised committee of convolutional neural networks in automated facial expression analysis,” IEEE Transactions on Affective Computing, vol. 9, no. 3, pp. 343–350, 2017.
B.-K. Kim, J. Roh, S.-Y. Dong, and S.-Y. Lee, “Hierarchical committee of deep convolutional neural networks for robust facial expression recognition,” Journal on Multimodal User Interfaces, vol. 10, no. 2, pp. 173–189, 2016.
K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.
H. Jung, S. Lee, S. Park, I. Lee, C. Ahn, and J. Kim, “Deep temporal appearance-geometry network for facial expression recognition,” arXiv preprint arXiv:1503.01532, 2015.
H. Jung, S. Lee, J. Yim, S. Park, and J. Kim, “Joint fine-tuning in deep neural networks for facial expression recognition,” in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 2983–2991.
K. Zhang, Y. Huang, Y. Du, and L. Wang, “Facial expression recognition based on deep evolutional spatial-temporal networks,” IEEE Transactions on Image Processing, vol. 26, no. 9, pp. 4193–4203, 2017.
Y. Kim, B. Yoo, Y. Kwak, C. Choi, and J. Kim, “Deep generative-contrastive networks for facial expression recognition,” arXiv preprint arXiv:1703.07140, 2017.
M. Pantic and I. Patras, “Dynamics of facial expression: recognition of facial actions and their temporal segments from face profile image sequences,” IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 36, no. 2, pp. 433–449, 2006.
N. Sebe, M. S. Lew, Y. Sun, I. Cohen, T. Gevers, and T. S. Huang, “Authentic facial expression analysis,” Image and Vision Computing, vol. 25, no. 12, pp. 1856–1863, 2007.
Z. Zhang, M. Lyons, M. Schuster, and S. Akamatsu, “Comparison between geometry-based and Gabor-wavelets-based facial expression recognition using multi-layer perceptron,” in Proceedings Third IEEE International Conference on Automatic Face and Gesture Recognition, 1998, pp. 454–459.
M. F. Valstar, I. Patras, and M. Pantic, “Facial action unit detection using probabilistic actively learned support vector machines on tracked facial point data,” in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05)-Workshops. IEEE, 2005, pp. 76–76.
C. Shan, S. Gong, and P. W. McOwan, “Facial expression recognition based on local binary patterns: A comprehensive study,” Image and Vision Computing, vol. 27, no. 6, pp. 803–816, 2009.
C.-C. Lai and C.-H. Ko, “Facial expression recognition based on two-stage features extraction,” Optik, vol. 125, no. 22, pp. 6678–6680, 2014.
T. Jabid, M. H. Kabir, and O. Chae, “Robust facial expression recognition based on local directional pattern,” ETRI Journal, vol. 32, no. 5, pp. 784–794, 2010.
A. R. Rivera, J. R. Castillo, and O. O. Chae, “Local directional number pattern for face analysis: Face and expression recognition,” IEEE Transactions on Image Processing, vol. 22, no. 5, pp. 1740–1752, 2012.
A. R. Rivera, J. R. Castillo, and O. Chae, “Local directional texture pattern image descriptor,” Pattern Recognition Letters, vol. 51, pp. 94–100, 2015.
B. Ryu, A. R. Rivera, J. Kim, and O. Chae, “Local directional ternary pattern for facial expression recognition,” IEEE Transactions on Image Processing, vol. 26, no. 12, pp. 6006–6018, 2017.
S. H. Lee, K. N. K. Plataniotis, and Y. M. Ro, “Intra-class variation reduction using training expression images for sparse representation based facial expression recognition,” IEEE Transactions on Affective Computing, vol. 5, no. 3, pp. 340–351, 2014.
H. Mohammadzade and D. Hatzinakos, “Projection into expression subspaces for face recognition from single sample per person,” IEEE Transactions on Affective Computing, vol. 4, no. 1, pp. 69–82, 2012.
L. Zhang and D. Tjondronegoro, “Facial expression recognition using facial movement features,” IEEE Transactions on Affective Computing, vol. 2, no. 4, pp. 219–229, 2011.
S. Happy and A. Routray, “Automatic facial expression recognition using features of salient facial patches,” IEEE Transactions on Affective Computing, vol. 6, no. 1, pp. 1–12, 2014.
T. Zhang, W. Zheng, Z. Cui, Y. Zong, J. Yan, and K. Yan, “A deep neural network-driven feature learning method for multi-view facial expression recognition,” IEEE Transactions on Multimedia, vol. 18, no. 12, pp. 2528–2536, 2016.
W. Zheng, Y. Zong, X. Zhou, and M. Xin, “Cross-domain color facial expression recognition using transductive transfer subspace learning,” IEEE Transactions on Affective Computing, vol. 9, no. 1, pp. 21–37, 2016.
O. Rudovic, M. Pantic, and I. Patras, “Coupled Gaussian processes for pose-invariant facial expression recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 6, pp. 1357–1369, 2012.
P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar, and I. Matthews, “The extended Cohn-Kanade dataset (CK+): A complete dataset for action unit and emotion-specified expression,” in 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops, 2010, pp. 94–101.
M. Lyons, S. Akamatsu, M. Kamachi, and J. Gyoba, “Coding facial expressions with Gabor wavelets,” in Proceedings Third IEEE International Conference on Automatic Face and Gesture Recognition. IEEE, 1998, pp. 200–205.
M. Mandal, M. Verma, S. Mathur, S. K. Vipparthi, S. Murala, and D. K. Kumar, “Regional adaptive affinitive patterns (radap) with logical operators for facial expression recognition,” IET Image Processing, vol. 13, no. 5, pp. 850–861, 2019.
T. Kanade, J. F. Cohn, and Y. Tian, “Comprehensive database for facial expression analysis,” in Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580), 2000, pp. 46–53.