We still need to know more about how emotions and social context interact in driving facial activity

CERE 2025 in Grenoble

CONFERENCES

AK

7/21/2025 · 6 min read

The 10th Conference of the Consortium of European Research on Emotion (CERE) took place from July 16 to 18, 2025. I have attended several of these meetings and was, in fact, present at some of the very early activities of this group; for example, I taught at the first summer school on Emotion in Amsterdam in 1992, but that is a different story for another day.

This time, I submitted a symposium titled "We still need to know more about how emotions and social context interact in driving facial activity," which was accepted. It took place on July 17 in the afternoon. What was the session about?

In various subdisciplines of psychology, as well as in fields like affective computing, facial behavior is often still treated as a direct readout of emotions—for example, assuming that smiling people are happy. When facial expressions do not match feelings, this discrepancy is frequently attributed to regulatory processes influenced by display rules. Over 30 years ago, Fridlund challenged this view by demonstrating that implicit social context can significantly shape facial behavior—findings that were inconsistent with both the readout and display-rule models. He proposed that facial expressions are best understood as a function of social context. Building on this, in 1995, Hess, Banse, and Kappas showed that facial activity in response to humorous stimuli (e.g., funny videos) varied depending on social context: with friends, facial expressions reflected both social context and stimulus intensity, but with strangers, this pattern did not emerge. This highlighted the intricate interaction between emotions and social context. Since then, research has increasingly explored the role of social context in shaping facial behavior. However, while many studies have demonstrated the influence of specific factors, we remain far from a comprehensive theory capable of predicting which facial expressions will occur in specific situations and for particular audiences. How close are we to developing a theory robust enough to guide artificial agents, such as virtual avatars or robots, in producing facial expressions aligned with their intended pragmatic goals? This symposium will address key challenges in interpreting facial behavior, review recent advances in the field, and outline future directions for disentangling the complex interplay of affect, social motivations, social structures, and individual differences.

Left to right: Raphaela Heesen (Konstanz University), Ursula Hess (Humboldt University), José-Miguel Fernández-Dols (Autonomous University of Madrid), Arvid Kappas (Constructor University), Nicole Nelson (University of Adelaide)

I was very happy when every member of my dream team for the session accepted (see photo). Here are the abstracts of their presentations:

Fernández-Dols José-Miguel - Are facial expressions context? Putting the baby in the water

There is a long tradition of studies on the interaction between expression and context, but only recently has the field progressed to using dynamic, rather than static or written, contextual inputs. As a consequence of this methodological advance, a growing number of studies show that people can correctly infer emotion from the dynamic visual context while facial expressions are masked. These studies typically use videos from real or acted situations. Their contribution is extremely important and opens the way to new views about the expression of emotion. The next step is to test whether contextual information is still as important in identical, controlled situations recorded in the laboratory. We have tried to accomplish that goal through a study in which we tested whether people can infer the emotional state of persons who were video-recorded in an experiment on the co-occurrence of expression and emotion. We collected six videos of people who experienced disgust, and six videos of people who did not experience any negative emotion before eating a worm. All participants displayed the same behavioral sequence (opening a can containing worms, making up their mind, and eating the worm while seated at a table). Then, in a large sample of judges, we tested the weight of visual context in the inference of the presence or absence of disgust when the face was masked and when it was visible. The results show that people can infer emotions from bodily movements even when these movements were limited by the physical constraints of the experimental situation. Based on these findings and those of other related studies, we speculate about an alternative view of facial expression as one of several logic gates in a communicational circuit.

Kappas Arvid - Let’s get down to business. Putting theories on facial behavior into motion

Since Darwin's The Expression of the Emotions in Man and Animals (1872), facial expressions have been central to emotion research. But there is still no consensus on why we show what we show on our faces. And why do we interpret others' expressions the way we do? One dominant view suggests that facial expressions are readouts of internal emotional states, modulated by cultural display rules. A contrasting, ecological view posits that facial actions serve social and communicative purposes, shaped by context and motivation. While such debates provide lively intellectual ping-pong, the arrival of robots and virtual agents drags these questions into the real world. Psychologists are now tasked with defining when artificial entities should express what—and how they should interpret human expressions. The challenge? Many artificial systems are built on outdated readout models, grounded in a narrow set of “basic emotions,” with little regard for the pragmatic complexities of communication and social context. The result? Systems that equate smiling with happiness, ignoring the myriad social and pragmatic functions of a smile. Not every smiling person is happy, and not every frown signals sadness. Why, then, do artificial systems persist in these simplistic mappings? In this talk, I will dissect these shortcomings, offer case studies, and propose a pragmatic framework for encoding and decoding facial behavior in artificial systems—one that embraces the nuance and contextual richness of human emotion and interaction.
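
To make the criticism of simplistic mappings concrete, here is a minimal, purely illustrative Python sketch (not from the talk, and not any real system): it contrasts a readout-style rule that equates a detected smile with happiness against a context-conditioned interpretation of the same signal. The contexts, labels, and function names are assumptions for illustration only.

```python
# Hypothetical illustration: naive "readout" mapping vs. a context-conditioned
# interpretation of the same detected smile. All labels are assumptions.
from dataclasses import dataclass


@dataclass
class Observation:
    smiling: bool   # e.g., a smile detected by some facial-analysis pipeline
    context: str    # e.g., "alone", "with_friend", "service_encounter"


def readout_model(obs: Observation) -> str:
    """Naive readout: a smile is taken as direct evidence of happiness."""
    return "happy" if obs.smiling else "not happy"


def pragmatic_model(obs: Observation) -> str:
    """Context-aware sketch: the same smile supports different readings
    depending on the (assumed) social setting."""
    if not obs.smiling:
        return "no affiliative signal detected"
    return {
        "alone": "possibly amused (low social demand)",
        "with_friend": "affiliative and/or amused",
        "service_encounter": "politeness display; weak evidence of felt happiness",
    }.get(obs.context, "smile detected; interpretation underdetermined")


if __name__ == "__main__":
    obs = Observation(smiling=True, context="service_encounter")
    print(readout_model(obs))    # -> happy
    print(pragmatic_model(obs))  # -> politeness display; weak evidence of felt happiness
```

The point of the sketch is not the toy lookup table but the shape of the problem: the interpretation is a function of signal and context together, which is exactly what readout-only systems leave out.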

Hess Ursula - The impact of social norms and expectations on emotional mimicry

Emotion communication is a social act and is heavily influenced by the social context in which it takes place. Even though this is generally recognized in psychological research, and research on the impact of context on emotion communication has blossomed in the last two decades, many open questions remain. This presentation addresses the impact of social and situational context on a specific facial reaction: emotional mimicry. Emotional mimicry is the imitation of the emotion expressions of others and is considered a marker of affiliative intent. Specifically, I will focus on normative context, that is, the social rules and expectations that are associated with a specific event, and their influence on facial mimicry reactions to the emotional behaviours of protagonists. Normative context has a strong effect on facial mimicry, such that expressions that violate normative expectations are either not mimicked or mimicked to a much lesser extent than expressions that conform to norms. The results suggest that observers try to emotionally distance themselves from individuals who violate social rules and expectations, and that not showing mimicry is a means to do so. The results also show that emotional mimicry is top-down modulated by social context.

Nelson Nicole - Expressive behaviour varies based on who you’re with, and how close you feel to them

Emotion expressions are determined by much more than just emotion. For example, the presence of another person, or the quality of the relationship between people, can substantially alter expressive behavior. We examined how expressive behavior varied in the presence of different people (friends and strangers) compared to being alone (Study 1) and then tested whether the social closeness of two people influenced this variation (Study 2). In Study 1, participants watched emotion-eliciting videos alongside a friend, a stranger, or alone, and reported their emotional experiences while their expressions were surreptitiously filmed. These expressions were then rated by naive viewers. We found that participants who were with friends produced more recognizable, positively valenced, and higher-arousal expressions than participants alone or with a stranger. Expressive behavior was similar for participants who were alone and those who were with a stranger. In Study 2, we induced social closeness between pairs of strangers by having participants complete a self-disclosure task together before viewing the emotion-eliciting videos. Participants who felt greater social closeness produced expressions that were higher in arousal, but social closeness did not influence the recognizability or valence of expressions.

Heesen Raphaela - A cross-cultural investigation of the impact of social context on human emotional face and hand movements in Uganda and the UK

Effective emotional communication is essential for navigating social interactions, with facial expressions and hand gestures serving as key channels for conveying emotional messages. While previous research has examined the impact of social context on emotional expressions, particularly in Western cultures, detailed investigations into the specific facial movements and gestures influenced by audience presence, and their comparison across different valence contexts and cultures, remain scarce. Addressing this gap, our study explored the effect of a social audience and valence context on the use of facial expressions and hand gestures among Ugandan and UK participants in response to various emotion-inducing stimuli. Overall, N = 80 UK and N = 97 Ugandan participants were video-recorded while watching amusing, fearful, or neutral video clips under both alone and social conditions. We used automated remote tracking to identify specific facial movements and applied manual gesture coding to detect emotional hand gestures exhibited under these conditions. Our findings revealed that in both populations, amusing and fearful stimuli elicited increased facial and gestural movements compared to neutral stimuli, confirming the role of these expressions in emotional responding. Furthermore, in both populations, the presence of an audience, represented by another familiar person, facilitated greater movements in lower facial areas and increased gesture use, highlighting the influence of social context on emotional signalling. Critically, however, a comparison between Ugandan and UK participants indicated a stronger audience effect on positive emotional expression in Uganda than in the UK, suggesting cultural differences especially in positive valence contexts. Overall, our study sheds light on the nuanced interplay between social/valence contexts and emotional expressions, enriching our understanding of human emotions across diverse cultural contexts and providing valuable resources for future investigations into human emotional communication.