
Contents
- Introduction
  - State of the Art
  - Requirements
  - Challenges
- Data Collection—Ten Steps Toward a Multimodal Affect Database
  - Considering Ethics
  - Recording and Reusing
  - Collecting Metainformation
  - Synchronizing Streams
  - Modeling
  - Labeling
  - Standardizing
  - Partitioning
  - Verifying Perception and Baseline Results
  - Releasing
- Quality Assessment—Is This Really Joyful?
  - Ground Truth Versus the “Gold Standard”
  - Measuring Reliability—From Alpha to Kappa
  - Weighting Evaluators—I Don’t Trust This Labeler
- Efficiency—How to Save Annotation Labor
  - Active Learning: “Help me, I’m a machine”
  - Semisupervised Learning: “Okay, I can label this!”
  - Unsupervised Learning: “Trust me—I’m a machine”
  - Shared Learning: “Together we’re best”
  - Pooling Data: “Let’s save the environment”
- Existing Multimodal Resources—What’s There?
  - Exemplary Audiovisual Resources
  - Exemplary Resources Containing EEG and Physiological Data
- Conclusions and Future Avenues—Wrapping Up
- References
23 Multimodal Affect Databases: Collection, Challenges, and Chances
Björn W. Schuller received his diploma in 1999, his doctoral degree for his study on Automatic Speech and Emotion Recognition in 2006, and his habilitation (facultas docendi) and private lectureship (venia legendi, German PD) in the subject area of Signal Processing and Machine Intelligence for his work on Intelligent Audio Analysis in 2012, all in electrical engineering and information technology from TUM (Munich University of Technology), repeatedly ranked Germany's number one university and one of its two persistent Excellence Universities. At present, he is a Senior Lecturer (Associate Professor) in Machine Learning in the Machine Learning Group of the Logic and Artificial Intelligence Section of the Department of Computing at Imperial College London (repeatedly ranked a world top-five (QS) or top-ten (THE) university) in London, UK (since 2013), a tenured faculty member heading the Machine Intelligence and Signal Processing (MISP) Group at TUM's Institute for Human-Machine Communication (since 2006), and CEO of audEERING UG (limited). Since 2013 he has also been a permanent Visiting Professor in the School of Computer Science and Technology at the Harbin Institute of Technology, Harbin, P.R. China. In 2013 he also headed the Institute for Sensor Systems as a full professor at the University of Passau in Passau, Germany, and was a Visiting Professor at the Centre Interfacultaire en Sciences Affectives of the Université de Genève in Geneva, Switzerland, of which he remains an appointed associate. In 2012 he was with JOANNEUM RESEARCH, Institute for Information and Communication Technologies, in Graz, Austria, working in the Research Group for Remote Sensing and Geoinformation and the Research Group for Space and Acoustics; he currently serves the institute as an expert consultant.
In 2011 he was a guest lecturer at the Università Politecnica delle Marche (UNIVPM) in Ancona, Italy, and a visiting researcher in the Machine Learning Research Group of NICTA in Sydney, Australia. From 2009 to 2010 he lived in Paris, France, and was with the CNRS-LIMSI Spoken Language Processing Group in Orsay, France, working on affective and social signals in speech. In 2010 he was also a visiting scientist in Imperial College London's Department of Computing in London, UK, working on audiovisual behaviour recognition. He is best known for his work advancing Machine Intelligence and Data Mining, Information Systems and Retrieval for spoken and written language, and Affective and Mobile Computing. Dr. Schuller is president of the Association for the Advancement of Affective Computing (AAAC, the former HUMAINE Association), an Honorary Fellow and member of the TUM Institute for Advanced Study (IAS), an elected member of the IEEE Speech and Language Processing Technical Committee, and a member of the ACM, IEEE, and ISCA, and has (co-)authored 5 books and more than 390 publications in peer-reviewed books (>20), journals (>50), and conference proceedings in the field, leading to more than 5,900 citations (h-index = 39).
He was a co-founding member and secretary of the steering committee and a guest editor of the IEEE Transactions on Affective Computing, for which he still serves as an associate editor. He is also an associate and repeated guest editor for Computer Speech and Language; an associate editor for the IEEE Signal Processing Letters, the IEEE Transactions on Cybernetics, and the IEEE Transactions on Neural Networks and Learning Systems; and a guest editor for the IEEE Intelligent Systems Magazine, Neural Networks, Speech Communication, Image and Vision Computing, Cognitive Computation, and the EURASIP Journal on Advances in Signal Processing. He has served as a reviewer for more than 70 leading journals and 50 conferences in the field; as co-general chair of ACM ICMI 2014; as a workshop and challenge organizer, including the first-of-their-kind INTERSPEECH 2009 Emotion, 2010 Paralinguistic, 2011 Speaker State, 2012 Speaker Trait, and 2013 and 2014 Computational Paralinguistics Challenges and the 2011, 2012, 2013, and 2014 Audio/Visual Emotion Challenge and Workshop; as program chair of ACM ICMI 2013, IEEE SocialCom 2012, and ACII 2011; and as a program committee member of more than 100 international workshops and conferences. He is the recipient of an ERC Starting Grant 2013 (9% acceptance rate) for the iHEARu project on Intelligent systems' Holistic Evolving Analysis of Real-life Universal speaker characteristics. His further steering of, and involvement in, current and past research projects includes the European Community funded ASC-Inclusion STREP project, which he coordinates, the awarded SEMAINE project, and projects funded by the German Research Foundation (DFG) and by companies such as BMW, Continental, Daimler, HUAWEI, Siemens, Toyota, and VDO. His advisory board activities include membership as an invited expert in the W3C Emotion Incubator and Emotion Markup Language Incubator Groups.
Published: 01 April 2014
Abstract
This chapter is from the forthcoming The Oxford Handbook of Affective Computing edited by Rafael Calvo, Sidney K. D'Mello, Jonathan Gratch, and Arvid Kappas. This chapter focuses on multimodal affect databases. After a short introduction, the collection of affective data is discussed in 10 steps highlighting methodological considerations and challenges of building new resources of multimodal data and affect labels. It then touches upon quality assessment of collected emotion corpora. A section is also dedicated to “saving labor” by sharing annotation between human and machine and reusing data. Then a selection of representative audiovisual and further multimodal databases is introduced. Finally, the chapter concludes with a discussion of controversial issues and future directions.