There is growing evidence that postnatally depressed mothers have difficulties responding appropriately to their infants (see Murray et al., 2010
, for a review of this literature). Although these difficulties may adversely influence subsequent child development (Yarrow et al., 1984
), including problems in infant learning, attention, language, and emotional regulation (Goodman, 2004; Murray et al., 2010; Stein et al., 2008
), the mechanisms involved in this process have not been fully elucidated. In particular, the question of whether
depression-associated cognitive biases may explain the adverse effects of postnatal depression on offspring development has not been adequately addressed. One of the few studies conducted in this area (Field et al., 1993
) suggested that the problematic interactions of depressed mothers with their infants are related to their negative perceptions of their infants' behaviour. These negative perceptions are consistent with the general cognitive biases that have been found to characterise depression (Gotlib and Joormann, 2010; Mathews and MacLeod, 2005
). These biases operate most strongly when the cognitions involved are connected with interpersonal relationships (Gotlib and Hammen, 1992
) and include distorted interpretation of interpersonal information, such as others' emotions and facial expressions (Bouhuys et al., 1995).
Given that the development of a strong mother–child relationship depends on a mother's ability to respond appropriately to her infant's cues – which are largely non-verbal – and that the processing of facial expressions is an essential feature of human interactions (Darwin, 1872/1955), it is likely that difficulties in appraising infants' facial expressions underlie, to a significant extent, the problematic interactions between postnatally depressed mothers and their offspring.
A few studies have investigated the effects of depression on the processing of adult emotional facial expressions. Overall, results suggest that when compared to non-depressed participants, depressed individuals show deficits in processing emotional cues derived from facial expressions (Bouhuys et al., 1995; Cavanagh and Geisler, 2006; Coupland et al., 2004; Mogg et al., 2000a; Persad and Polivy, 1993
). However, the majority of these studies have looked at full-intensity (i.e., extreme) expressions and have generally reported mixed findings. For example, Leppänen et al. (2004)
found that depressed individuals were as accurate as controls in identifying sad and happy faces, but were significantly less accurate in identifying neutral faces, tending to attribute sadness to neutral expressions. Hale (1998) and Lee et al. (2008)
also found that depressed individuals attributed significantly more sadness to ambiguous adult facial expressions than controls. However, Suslow et al. (2001)
found that whereas depressed and control participants did not differ in their latencies to detect sad faces in a display of schematic faces, depressed participants were significantly slower than were controls to detect happy faces. Similarly, Gur et al. (1992)
found that the tendency to misinterpret happy faces as neutral best discriminated depressed patients from controls.
A minority of studies have examined responses to lower-intensity expressions. As Joormann and Gotlib (2006)
point out, processing of low-intensity emotional expressions and subtle changes in facial expressions of emotion may be stronger predictors of interpersonal functioning, given that in everyday life people are confronted with information comprising a wide range of emotional intensity, not only with full-intensity information. Results from research using low-intensity facial expressions have been fairly consistent in demonstrating that depression is associated with deficits in the processing of positive facial expressions (Deveney and Deldin, 2004; Yoon et al., 2009
). These findings are in line with recent studies that suggest that depression is characterised primarily by difficulties in the processing of positive affect, perhaps even more than by biases in the processing of negative affect (e.g., Deveney and Deldin, 2004; Gilboa-Schechtman et al., 2002; Surguladze et al., 2004
). Surguladze et al. (2004)
, for example, found that at 2-second presentations, depressed participants were less likely than their nondepressed counterparts to label 50%-intensity happy faces as happy.
One task that has been used recently to examine the effects of mood on the processing of facial expressions, including low-intensity expressions, is the ‘morphed faces task’. This task, developed by Niedenthal et al. (2000)
, represents an alternative to the typically used prototypical or static faces. In everyday life, expressions are constantly changing, and the ability to decode dynamic facial signals is crucial for social adaptation. Therefore, the morphed faces task mimics real-life processes by presenting a sequence of faces that change gradually between two emotions, typically between a neutral and a sad, happy, or angry expression (Heuer et al., 2010).
Findings from research using the morphed faces task have documented the impact of depression on the processing of happy faces. For example, studying healthy university students, Coupland et al. (2004)
found that mood was associated with emotion identification. Specifically, whereas level of positive affect predicted participants' threshold for the identification of happy morphed faces, level of negative affect was associated with the threshold for the identification of disgusted morphed faces. Similarly, Cavanagh and Geisler (2006)
showed that, overall, university students were more accurate in identifying happy than sad morphed faces and that those with higher levels of depression had greater difficulty identifying happy faces. In particular, they were slower to process this emotion, especially if the expressions were shown at lower intensities (e.g., 40%). Finally, Joormann and Gotlib (2006)
found that whereas clinically depressed participants required significantly greater intensity of emotion than participants with social phobia and control participants to correctly identify happy facial expressions, participants with social phobia needed less intensity to correctly identify angry facial expressions than did depressed and control participants.
To date, virtually all of the studies examining cognitive biases in depression and the processing of facial expressions, including those using the morphed faces task, have used adult faces as stimuli; consequently, little is known about biases in the processing of infant facial expressions. This gap is particularly important because infant faces have a different configuration from adult faces – a relatively large head, predominance of the brain capsule, large and low-lying eyes, and a bulging cheek region – a configuration posited to be important for eliciting parental responses (Lorenz, 1971
). Two studies have examined responses to infant facial expressions in the context of depression. Pearson et al. (2010)
found that whereas non-depressed pregnant women showed an engagement bias towards distressed infant faces, depressed women tended to disengage more quickly from the images. In addition, we found that mothers with postnatal depression rated negative infant faces more negatively than controls did, whereas mothers with postnatal generalised anxiety disorder (GAD) did not differ from controls (Stein et al., 2010
). However, this study used faces at the extremes of the spectrum (i.e., obviously happy or sad), unlike the morphed faces paradigm, which utilises expressions at intensities more akin to ‘real life’, and it did not explicitly measure accuracy. Given the crucial role of maternal responsiveness in later child development and the importance of processing infants' expressions in this dyadic interaction, we used a morphed faces task to examine the effects of maternal postnatal depression on the identification of emotional expression in infants' faces. To examine whether any obtained biases are specific to depression rather than characteristic of more general postnatal psychiatric disturbance, we also included a group of postnatal mothers who were experiencing anxiety. Although cognitive biases have been found in anxiety disorders, these are most typically attentional biases towards threatening stimuli rather than towards sad or more generally negative material (Mogg et al., 2000b).
We tested three hypotheses in this study. First, we predicted that all participants would classify happy faces more easily (i.e., with fewer errors and at a lower intensity) than they would sad faces. Second, based on findings from morphing studies using adult faces (Joormann and Gotlib, 2006; Surguladze et al., 2004
), we predicted that depressed participants would exhibit a bias in the identification of happy faces, but not of sad faces. Finally, we hypothesised that this bias would be disorder-specific, such that, compared to controls, participants with GAD would not exhibit differences in accuracy or intensity of identification of either sad or happy faces.