As a child, much to my parents’ confusion, I had the uncanny ability to see faces in everyday patterns and objects. Yes, my circle of friends soon extended well beyond the confines of the playground, covering cloud formations, leaves, coffee stains and my personal favourite, a wise old man who stared knowingly down from a rip in my bedroom wallpaper. This peculiarity was put down to the fact that I was an only child with a particularly overactive imagination. However, the ability has never left me and these illusory faces still pop up from time to time…although I now rarely try to engage them in conversation.
Therefore, I was quite interested to find that this phenomenon, known as facial pareidolia, is actually quite common. It is also the subject of a recent research paper entitled: ‘Seeing Jesus in Toast: Neural and behavioural correlates of face pareidolia’, which won its authors an Ig Nobel prize*. This paper explores how pareidolias (both facial and otherwise) can be used to understand the way our brains process information and how our sensory experiences often incorporate more than meets the eye.
We all know that our sensory experiences (vision, olfaction, touch etc) begin life in our sensory organs (eyes, nose, fingertips etc). Once a sensation is detected, it travels to the brain, where it is processed into a multi-sensory experience. However, we are not just passive sensors, like a camera. Instead, our sensations are often coloured by internal processes, such as mood, expectation or attention. This ‘colouring’ is known as top-down modulation and can be a particularly personal experience (think of the Rorschach ‘inkblot’ test). Facial pareidolia is a good example of top-down modulation: our sensory organs are simply receiving a random pattern of input (such as a cloud formation or coffee stain), while something else causes us to give this input a more familiar or meaningful interpretation – in this case a face.
The study described in ‘Seeing Jesus in Toast’ investigated which brain regions were active when participants experienced pareidolia for either faces or letters. Specifically, brain activity was monitored while subjects viewed one of five different categories of images:
1) obvious faces (image: A)
2) hard-to-detect faces (image: B)
3) obvious letters (image: C)
4) hard-to-detect letters (image: D)
5) pure noise – no face or letter (image: E).
Participants were told that the images they were being shown could contain either faces or letters and were asked to decide which pictures actually showed this hidden imagery. To ensure that all participants experienced pareidolia, the pesky researchers deliberately made sure that this task was really tricky.
What they discovered was that their subjects were a pretty suggestible bunch. Those who expected to see faces often spied a pair of eyes peering out at them from the pure noise stimuli, while those who were looking for letters often saw just that.
From this work, the researchers were able to identify a network of crafty brain regions which seemed to be specifically responsible for tricking us into seeing illusory faces. They suggested that when we expect to see a face, regions of the brain responsible for decision making and facial recognition (such as the prefrontal cortex) shout commands down to regions that process more basic elements of images (in this case a region known as the right fusiform face area). Such a shout forces the ‘lower’ areas to incorrectly interpret a noisy image as containing a face. Put simply, if the brain is expecting to see a face it can alter the way we interpret visual information and make us see things which aren’t actually there.
Interestingly, not all noisy images were incorrectly interpreted as showing faces. In fact, when the researchers took a closer look and compared noisy images which were mistaken for faces with those which were not, they found that the former did actually contain patterns which looked a bit like faces. Take a good long squint at the image to the left (showing a noisy image mistaken for a face) and, at least to me, it’s easy to see two eyes, a nose and an open mouth.
The researchers suggest that the system responsible for seeing faces popping out of highly ambiguous visual information may actually be adaptive. Specifically, they say “The tendency to detect faces in ambiguous visual information is perhaps highly adaptive given the supreme importance of faces in our social life and the high cost resulting from failure to detect a true face”. So, it appears that not only is seeing faces perfectly normal, but it may even be a socially adaptive trait. I guess that means I’m not really crazy and neither is this woman…
Post by: Sarah Fox
* The Ig Nobel Prizes honor achievements that first make people laugh, and then make them think, and cover a huge range of fascinating research. If you’ve not already heard of them I strongly recommend you have a browse through their website (here).