Facial expressions can be hard to read — and not just when someone is experiencing a mild emotion or feels ambivalent. Research has suggested that when we witness someone in the throes of a particularly acute emotional state, like intense joy or pain, we find it hard to pinpoint exactly what they’re feeling.
A new study looks at a similar phenomenon, this time focusing on vocalisations such as laughter, cries, screams and moans. Writing in Scientific Reports, Natalie Holz and colleagues from the Max Planck Institute for Empirical Aesthetics find that our ability to identify emotions increases as vocalisations become more intense — but only to a point. When these sounds reach peak intensity, we find it surprisingly hard to classify them.
Participants were split into three groups and assigned different tasks; each involved listening to non-verbal vocalisations representing different emotions and intensities. The first group focused on categorising emotions, assigning one of seven possible affective states to the sounds: anger, fear, pain, achievement, positive surprise, sexual pleasure, or none of the above. This group also rated how intense, and how authentic, they felt each emotion was.
The second group took part in an emotion rating task, indicating how clearly they could perceive a specified emotion in the vocalisations. And the third group rated the sounds according to their valence (how positive or negative the sound was) and arousal (how calm or intense it was). This group also indicated how authentic they felt each sound was.
In the emotion categorisation task, participants were largely able to classify the different emotions correctly, and they could also successfully judge how intense an emotion was.
However, there appeared to be an intensity "sweet spot" at which participants were most accurate in their classifications. They judged emotions of moderate or strong intensity more accurately than those of low intensity; but once vocalisations reached "peak" intensity, accuracy dropped again.
Participants also showed some confusion when it came to categorising sounds as negative or positive. Although they heard an equal number of sounds of each valence, participants often miscategorised positive expressions as negative, particularly at high intensities.
Overall, then, the results show that while participants were able to correctly identify intensity and arousal across the sounds, working out the exact nature of the emotion expressed became more difficult as intensity reached its peak. This might be because of the "relevance" of certain vocalisations. If someone is expressing something at peak intensity, lead author Holz suggests, "the most vital job might be to detect big events and to assess relevance. A more fine-grained evaluation of affective meaning may be secondary".
In other words, extremely emotionally intense sounds indicate something is happening that requires our attention: what it is exactly, in that moment, may be less important and can come later on.
Previous studies have pinpointed clear, objective markers in intense facial expressions that in theory should allow us to determine whether they indicate positive or negative valence. In the moment, however, we frequently fail to do so. Further research could explore the cues that exist in intense emotional sounds, giving an idea of why — and how — we mix up their meaning.