
Contents
- Introduction
- Multimodal Affect Recognition: State of the Art and Challenges
  - Data Collection
    - Beyond Prototypical Emotions and Acted Affective Expressions
    - Annotation
  - Feature Extraction
    - Feature Representation—Frame- Versus Window-Based
    - Feature Selection in a Multimodal Context
  - Context Sensitivity
  - Classification Schemes
    - Static Versus Dynamic Modeling
    - Discrete Versus Continuous Recognition
  - Stream Fusion
    - Early Feature-Level Fusion
    - Late Semantic Fusion
    - Hybrid Fusion
- Multimodal Affect Detection for (Soft) Real-Time HCI and HRI: Methodological Considerations and Case Studies
  - Multimodal Affect Detection for a Sensitive Artificial Listener—Results and Lessons Learned from the SEMAINE Project
    - Audiovisual Affect Recognition in a Real-Life System
    - The Audiovisual Emotion Challenges
  - Multimodal Affect Recognition for a Robotic Companion—Results and Lessons Learned from the LIREC Project
    - Context-Sensitive Affect Recognition in Real-World HRI Settings
- Conclusions and Future Directions
- Acknowledgement
- References
17 Multimodal Affect Recognition for Naturalistic Human-Computer and Human-Robot Interactions
Ginevra Castellano, School of Electronic, Electrical and Computer Engineering, University of Birmingham, United Kingdom
Hatice Gunes, School of Electronic Engineering and Computer Science, Queen Mary University of London, United Kingdom
Christopher Peters, School of Computer Science and Communication, Royal Institute of Technology (KTH), Sweden
Björn W. Schuller, Department of Computing, Imperial College London, United Kingdom, and Institute for Human-Machine Communication, TUM (Munich University of Technology), Germany
Published: 01 July 2014
Abstract
This chapter provides a synthesis of research on multimodal affect recognition and discusses methodological considerations and challenges arising from the design of a multimodal affect recognition system for naturalistic human-computer and human-robot interactions. Identified challenges include the collection and annotation of spontaneous affective expressions, the choice of appropriate methods for feature representation and selection in a multimodal context, and the need for context sensitivity and for classification schemes that take into account the dynamic nature of affect and the relationship between different modalities. Finally, two examples of multimodal affect recognition systems used in (soft) real-time naturalistic human-computer and human-robot interaction frameworks are presented.
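The abstract contrasts early feature-level fusion with late semantic (decision-level) fusion. A minimal sketch of the two strategies, not taken from the chapter: the classifiers and numbers below are hypothetical toy stand-ins, used only to show where the modalities are combined in each scheme.

```python
# Two canonical multimodal fusion strategies, illustrated with toy data.
# All classifiers and values here are hypothetical placeholders.

def early_fusion(audio_feats, video_feats, classify):
    """Early (feature-level) fusion: concatenate per-modality feature
    vectors into one vector, then apply a single joint classifier."""
    return classify(audio_feats + video_feats)

def late_fusion(audio_feats, video_feats, classify_audio, classify_video):
    """Late (semantic) fusion: classify each modality separately,
    then combine the per-class scores (here by simple averaging)."""
    scores_a = classify_audio(audio_feats)
    scores_v = classify_video(video_feats)
    return [(sa + sv) / 2.0 for sa, sv in zip(scores_a, scores_v)]

# Toy stand-in classifiers returning scores for two affect classes.
toy_joint = lambda feats: [sum(feats), 1.0 - sum(feats)]
toy_audio = lambda feats: [0.8, 0.2]
toy_video = lambda feats: [0.6, 0.4]

joint = early_fusion([0.1, 0.2], [0.3], toy_joint)   # one classifier sees [0.1, 0.2, 0.3]
fused = late_fusion([0.1, 0.2], [0.3], toy_audio, toy_video)  # averaged scores
```

The design trade-off the chapter's Stream Fusion section discusses follows directly from this structure: early fusion lets a model exploit cross-modal feature correlations but requires synchronized streams, while late fusion tolerates per-modality asynchrony and failure at the cost of modeling modalities independently.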