
The 2nd Korean Emotion Recognition Challenge: Methods and Results

  • Conference paper
Frontiers of Computer Vision (IW-FCV 2021)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1405)


Abstract

The \(2^{nd}\) Korean Emotion Recognition Challenge (KERC2020) is a global challenge that promotes emotion recognition technologies based on audio-visual data analysis, with a particular focus on the emotions of Korean people. KERC2020 comprises 1236 videos, each two to four seconds long, drawn from Korean movies and dramas. Around 68 participating teams competed to achieve state-of-the-art performance in recognizing stress, arousal, and valence from Korean videos in the wild. This paper provides a summary of the dataset, methods, and results of the challenge.
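The three targets named in the abstract (stress, arousal, and valence) are continuous labels attached to short clips, so a participant's first step is pairing each video file with its label vector and sampling frames from it. The Python sketch below illustrates one such loading routine; it is a rough sketch rather than the challenge's actual distribution format, and the CSV file name, its column layout (path, stress, arousal, valence), and the 16-frame sampling budget are all assumptions.

    import csv
    import cv2            # OpenCV, used here for frame decoding
    import numpy as np

    # Hypothetical label file: one row per clip. The real KERC2020 data
    # layout on Kaggle may differ; this is only an illustration.
    LABELS_CSV = "kerc2020_train_labels.csv"   # assumed file name
    FRAMES_PER_CLIP = 16                       # assumed sampling budget

    def sample_frames(video_path, n_frames=FRAMES_PER_CLIP):
        """Uniformly sample n_frames RGB frames from a 2-4 s clip."""
        cap = cv2.VideoCapture(video_path)
        total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
        # Spread the sampled indices evenly across the whole clip.
        wanted = set(np.linspace(0, max(total - 1, 0), n_frames).astype(int).tolist())
        frames, idx = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx in wanted:
                frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            idx += 1
        cap.release()
        if not frames:
            raise ValueError(f"could not decode any frames from {video_path}")
        while len(frames) < n_frames:      # pad clips shorter than expected
            frames.append(frames[-1])
        return np.stack(frames)            # shape: (n_frames, H, W, 3)

    def load_dataset(labels_csv=LABELS_CSV):
        """Yield (frames, targets) pairs, targets = [stress, arousal, valence]."""
        with open(labels_csv, newline="") as f:
            for row in csv.DictReader(f):
                targets = np.array([float(row["stress"]),
                                    float(row["arousal"]),
                                    float(row["valence"])], dtype=np.float32)
                yield sample_frames(row["path"]), targets

Each yielded pair could feed a regression model with three outputs; the official scoring protocol is defined on the challenge's Kaggle page (see Notes).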



Notes

  1. https://www.kaggle.com/c/2020kerc/overview.


Acknowledgments

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2020R1A4A1019191) and by the Korea Sanhak Foundation and the University Industrial Technology Force (UNITEF) Support Group.

Author information

Corresponding author

Correspondence to Soo-Hyung Kim.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Kim, S. et al. (2021). The 2nd Korean Emotion Recognition Challenge: Methods and Results. In: Jeong, H., Sumi, K. (eds) Frontiers of Computer Vision. IW-FCV 2021. Communications in Computer and Information Science, vol 1405. Springer, Cham. https://doi.org/10.1007/978-3-030-81638-4_14

  • DOI: https://doi.org/10.1007/978-3-030-81638-4_14

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-81637-7

  • Online ISBN: 978-3-030-81638-4

  • eBook Packages: Computer Science, Computer Science (R0)
