A tale of anxiety and reward – the role of stress and pleasure in addiction relapse

At the start of February we heard the horrible news that Philip Seymour Hoffman, a wonderful Academy Award winning actor, had died from a drug overdose. This followed news from last year of the death of Glee star Cory Monteith from a heroin and alcohol overdose. Perhaps the most shocking thing about these deaths was that no-one saw them coming.

From http://www.flickr.com/photos/beinggossip/

Worryingly, the reality is that drug relapses such as these are all too common, but often go unnoticed. Our understanding of the science behind these relapses has come on in leaps and bounds in recent years. We have moved from understanding how a drug makes us feel pleasure, to understanding how a drug may cause addiction and subsequent relapse.

Classically, scientists have explained addiction by focusing on how a drug affects the reward systems of the brain. Drugs have the ability to make us feel good due to their actions on this pathway. The reward system of the brain is a circuit that uses the chemical dopamine to stimulate feelings of elation and euphoria. This system has a motivational role and normally encourages survival behaviours such as obtaining food, water and sex. Drugs of addiction can hijack this system to induce euphoric feelings of their own.

Cocaine, for example, is a highly addictive drug that blocks reuptake transporters of dopamine. These transporters normally soak up excess dopamine and ensure that the reward system is not overactive. Cocaine stimulates euphoria by preventing dopamine from being retrieved, thereby increasing stimulation of the reward system. Another addictive drug, nicotine, directly stimulates the reward system to produce more dopamine.
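
To make the reuptake idea concrete, here is a minimal toy model (my own illustration with made-up numbers, not real physiology) of how blocking the transporters lets synaptic dopamine pile up:

```python
# Toy one-compartment model: dopamine is released at a constant rate and
# cleared by reuptake transporters. Cocaine is modelled crudely as a cut
# in the reuptake rate. All numbers are illustrative, not physiological.

def steady_dopamine(reuptake_rate, release=1.0, steps=500, dt=0.1):
    """Euler-integrate d(DA)/dt = release - reuptake_rate * DA."""
    da = 0.0
    for _ in range(steps):
        da += dt * (release - reuptake_rate * da)
    return da

normal = steady_dopamine(reuptake_rate=1.0)      # transporters fully working
on_cocaine = steady_dopamine(reuptake_rate=0.2)  # most transporters blocked

print(f"Baseline dopamine level:      {normal:.2f}")      # ~1.0
print(f"With reuptake mostly blocked: {on_cocaine:.2f}")  # ~5.0, a five-fold overload
```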

These classical views work well when considering the motivation to start taking drugs and to continue taking them in the initial stages. The drug stimulates feelings of euphoria, ‘rewarding’ the taker, who learns to associate the drug with these feelings of euphoria and therefore wants to take it again.

This theory can also explain some aspects of withdrawal. Just as activation of the reward system has a physiological role, so does shutting it down. It appears there is such a thing as ‘too much fun’. If we spent all of our time copulating and over-eating we’d be prime targets for predators. Due to this, the body has its own off-switches in our reward pathways that try to limit the amount of pleasure we feel. These normally work by desensitising the brain to dopamine, so that dopamine isn’t able to produce the effects it once could.

Addiction

During drug use, when dopamine levels and subsequent pleasurable feelings are sky-high, the brain works to limit the effects of this overload of dopamine. When the drug wears off, dopamine levels fall but the desensitisation to dopamine persists. This causes withdrawal: with no drugs to boost dopamine, one fails to gain pleasure from previously pleasurable day-to-day activities. The dopamine released when one has a nice meal, for example, is no longer sufficient to cause enough activity in the reward pathways and no satisfaction is felt.

Scientists believed for a while that the reward system could tell us all we need to know about addiction and how it manifests itself throughout the brain. However, as tolerance builds, the euphoric responses to these drugs begin to wane. Some users start feeling dysphoria, a horrible sombre feeling, and don’t know why they continue using these drugs, as they are no longer experiencing euphoria – the reason they took the drug in the first place.

On top of that, when doctors and therapists talk to drug addicts who relapse, the addicts often do not talk about wanting to feel pleasure or elation again. They talk of stress building up inside them, and of the release from this stress that they crave.

When asked about why they relapsed, previously clean addicts often talk of stressful events leading to their relapse – they lost their job or they broke up with their partner. First-hand accounts suggest this stress seems to be the driver of a relapse, the driver to continued addiction.

This was captured clearly back in the 19th century by the eccentric American author and poet Edgar Allan Poe:

“I have absolutely no pleasure in the stimulants in which I sometimes so madly indulge. It has not been in the pursuit of pleasure that I have periled life and reputation and reason. It has been the desperate attempt to escape from torturing memories, from a sense of insupportable loneliness and a dread of some strange impending doom.” 

Intrigued by this, scientists have now found many threads of evidence to suggest that stress pathways within the brain play a key role in addiction and relapse. For example, work into this so-called ‘anti-reward system’ has shown that stress can instigate drug-seeking behaviours in animal studies.

Our stress pathways are built around a hormone system known as the HPA axis – the hypothalamic-pituitary-adrenal axis. This axis is responsible for regulation of many biological processes but plays a crucial role in stress.

The HPA axis is the stress hormone system of the body.
CRF = corticotrophin releasing factor; ACTH = adrenocorticotropic hormone

Much like other drugs of addiction, drinking alcohol feels good due to its actions on the reward system. In line with addicts of other drugs, alcoholics commonly talk about the release of stress they want to feel. Evidence is building to suggest that alcoholics have increased activity through the HPA axis.

A hormone called cortisol is the final chemical involved in the HPA axis, released from the adrenal glands during times of stress. Compared to occasional drinkers, alcoholics have higher basal levels of cortisol and a higher basal heart rate – two common measures of HPA activity. This pattern has also been seen in other addictions. For example, in previously clean cocaine addicts, higher basal HPA axis activity correlates with an earlier relapse and higher levels of stress hormones (e.g. cortisol) can predict a higher usage of cocaine in the future.
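
One rough way to picture how ‘higher basal HPA activity’ could come about is a toy feedback loop (my own sketch, not a published model): stress drives CRF, CRF drives ACTH, ACTH drives cortisol, and cortisol normally feeds back to quieten the hypothalamus. Weaken that feedback and basal cortisol sits higher:

```python
# Toy HPA cascade with cortisol negative feedback. Rate constants are invented.

def run_hpa(feedback, stress=1.0, steps=2000, dt=0.05):
    crf = acth = cortisol = 0.0
    for _ in range(steps):
        crf += dt * (stress - feedback * cortisol - crf)  # hypothalamus
        acth += dt * (crf - acth)                         # pituitary
        cortisol += dt * (acth - cortisol)                # adrenal glands
    return cortisol

print(f"Strong feedback: basal cortisol = {run_hpa(feedback=1.0):.2f}")  # ~0.50
print(f"Weak feedback:   basal cortisol = {run_hpa(feedback=0.2):.2f}")  # ~0.83
```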

A puzzling question surrounding addiction is why most users can enjoy occasional use while, for some, it spirals uncontrollably into addiction. Differing propensities to addiction could well be explained by differences in how individuals respond to stress.

So what begins as a behaviour driven by the reward pathways appears to escalate into a behaviour dominated by stress pathways. It seems it is the stress that drives the craving and relapse, not the longing for a ‘reward’.

Armed with this knowledge, work into how we can design medicines to alleviate cravings and prevent relapse has shown early potential. Blocking the first stage of the HPA axis has been able to prevent alcohol addiction in rats. Blocking a suspected link between the stress pathways and the reward pathways has been shown to prevent stress-induced cocaine-seeking behaviour.

These compounds have yet to be tested in humans but the early promise is there. It is an intriguing theory that individual differences in susceptibility to stress may explain varying susceptibility to addiction. This idea provides a basis for further work to try to understand why some individuals can only occasionally use, whilst others become addicted. Relapse is horribly common amongst drug addicts and, with the attached stigma giving addicts substantial additional stress, the research to prevent more unnecessary deaths is well worth it. Unfortunately, this will be too late for those we have already lost, but the future is bright with continued progress in understanding these horrible ordeals.

By Oliver Freeman @ojfreeman

Flashes of brilliance in the brain – the best neuroscience images of 2013

Pretty pictures and popular neuroscience go hand-in-hand. People love to see the contours of their brain on an MRI and journalists are drawn to a brain flashing away with activity. There have been some fantastic images from neuroscience in 2013. Here are my favourites, one for each month starting way back in January 2013.

Disclaimer – We own no rights to any of the images on this page. All images are credited to the original authors and copyright holders. The MRC’s Biomedical Picture of the Day has been used as inspiration for some of the images.

January – New eyes for blindness

Blindness is a major challenge to the neuroscience field. Untreatable blindness is often caused by a degeneration of the light-sensitive cells of the retina. Here, researchers from University College London, UK have injected new photoreceptor cells into the retina of mice with retinal degeneration, restoring normal responses to light!

Host retina cells are shown in blue, injected new photoreceptive cells are shown in green. The top left is a healthy mouse. The next three images show three different types of genetic blindness models in mice – all show integration of the injected cells. From Barber et al. PNAS 110(1): 354-359

February – Pathfinding connections in the brain

This year there has been a burst of activity in the ‘connectomics’ field. Mapping the connections of the brain is the next big challenge of neuroscience and the main topic of the Human Brain Project in Europe and the BRAIN initiative in the US.

Here, researchers from the École Normale Supérieure in Paris, France looked at how neurons find their way from the thalamus in the middle of the brain to the outermost folds of the cortex. 

These figures show neurons in green making their way from the thalamus (Th) to the cortex (NCx). From: Deck et al. Neuron 77: 472-484.


March – Whole brain activity

Seemingly a burning campfire, this is actually a brain flashing with activity. In one of the most impressive images of neuroscience in 2013, researchers from Howard Hughes Medical Institute’s Janelia Farm campus in the US used calcium imaging (see July) to view the activity of a whole brain.

The brain of a zebrafish larva – imaged by light-sheet microscopy. From Ahrens et al. Nature Methods 10: 413-420.

One of the biggest challenges of neuroscience is working out how everything links up together. The most accurate measurements we currently have can only take into account a handful of cells at once. The brilliance of this technique, utilising see-through zebrafish larvae, is that the researchers were able to image more than 80% of the neurons of the brain at once. This can tell you how large populations of cells interact and how different regions work together.

For more info see this article by Mo Costandi in the Guardian.


April – See-through brains

April brought us CLARITY, another amazing technical feat designed to view how the brain links together. By dissolving the opaque fat of a brain whilst keeping the structure intact, researchers led by Karl Deisseroth at Stanford University, California, were able to image a whole mouse brain.

The hippocampus of a mouse, visualised with CLARITY. Excitatory cells are green, inhibitory cells are red, and support cells called astrocytes are blue. From Chung et al. Nature 497: 332-337.

The images from this technique are truly breath-taking. Using this technology, researchers could look in detail at the structure of the brain, giving valuable information of the wiring of different regions. They even imaged part of a post-mortem human brain from an autistic patient, finding evidence of structural defects normally associated with Down’s syndrome.

For more info, see this article in New Scientist.


May – Brainbow 3.0

‘Brainbow’ is a transgenic system designed to label different types of cells in many different colours. Prime material for pretty pictures. Take a look at these:

Multicoloured neurons. b shows the hippocampus, c and d show the cortex. From: Cai et al. Nature Methods 10: 540-547.


June – Controlling a helicopter with your mind

In June, researchers from the University of Minnesota, USA showed that a person could fly a helicopter with their mind! Watch below as the subject guides a helicopter using an EEG skullcap.

July – Better calcium sensors

More calcium imaging now. Calcium imaging works by engineering chemicals that fluoresce when they encounter calcium. When nerve cells are active, millions of calcium ions flow into the cell at once, so a flash of fluorescent light can be seen. Here, researchers from Howard Hughes Medical Institute’s Janelia Farm campus in the US have been working on better, more sensitive calcium sensors. Using these, you can colour-code neurons based on what they respond to.

Neurons colour-coded by their response properties. From Chen et al. Nature 499: 295-300.

They were also able to record a video of the electrical activity in dendritic spines, the tiny protrusions on the branches of nerve cells – see here.


August – Using electron microscopy to connect the brain

Drosophila are wonderful little flies with nervous systems simple enough to get your head around, but complicated enough to be applicable to our own.

An electron micrograph, colour coded for each individual neuron. From: Takemura et al. Nature 500: 175-181.

Here, researchers from Janelia Farm (again!) have performed electron microscopy on Drosophila brains to connect up neurons across multiple sections. An algorithm colour-codes them to line up the same neuron in different sections, in what looks like a work by Picasso.


September – Astrocytes to the rescue

Astrocytes are support cells in the brain which become highly active following brain injury. Here, researchers from Instituto Cajal, CSIC in Madrid, Spain were interested in the different characteristics astrocytes take on when a brain is injured. The injury site can be seen as a dark sphere. Astrocytes with different characteristics have been stained in different colours. For example, the turquoise-coloured astrocytes can be seen forming a protective net around the injury site.

The injury site (dark sphere) can be seen surrounded by multi-coloured astrocytes. From: Martín-López et al. PLoS ONE 8(9).


October – Preserved human skulls

Not strictly neuroscience, but these images need to be included. Published in October 2013, The New Cruelty, commissioned by True Entertainment, photographed a series of preserved human skulls.

A preserved human skull. From the New Cruelty exhibition, commissioned by True Entertainment.


November – Brain Computing

Part of the vision of the Human Brain Project and the BRAIN initiative is to marry the anatomy of the brain with computer models, to try to produce a working computer model of the brain. This image represents BrainCAT, a software tool designed to integrate information from different types of brain scan to gain added information about the functionality of the brain.

This image shows BrainCAT linking functional MRI data (blue and turquoise shapes) with connectivity data (diffusion tensor imaging – green lines). From: Marques et al. Front. Hum. Neurosci. 7: 794.


December – Men Are from Mars, Women Are from Venus.

The last month of the year gave us preposterous headlines of ‘proof’ that “Men and women’s brains are ‘wired differently’”. This finally proved why women are from Venus and men are from Mars; why men ‘are better at map reading’ and women are more ‘emotionally intelligent’… These exaggerated headlines have been kept in check recently on this blog, but there’s no denying that the research paper did show some lovely images of male and female brain connections.

The top shows the most interconnected male regions, the bottom shows the most interconnected female regions. From: Ingalhalikar et al. PNAS (online publication before print).

So that’s it. 2013 was a year of flashing brains, dodgy connections and overegged hype. Let’s hope there’s even more to come in 2014.

Post by Oliver Freeman @ojfreeman

Are we on the verge of an era of personalised medicine? Ten years on from the Human Genome Project.

“Ten years ago, James Watson testified to Congress that once we had the genome sequenced, we would have the language of life. But it turns out that it’s a language we don’t understand.” – Robert Best, 2013

From http://www.flickr.com/photos/askpang/9251502638/sizes/c/in/photostream/

In April 2003, at a cost of $2.7bn, the completion of the human genome sequence was announced to great pomp and ceremony. This would reveal the deepest secrets of biology, give us the blueprint to create a human and disclose how diseases develop. But 10 years on, has the Human Genome Project been a true medical breakthrough, and what role does it play in medical treatment today?

The Human Genome Project was headline busting. This was an international collaboration like no other, which would be the shining peak of human endeavour. Being able to look into an individual’s DNA would give us new knowledge about what caused diseases. What’s more, this would herald a new era in medicine whereby we could use this to diagnose diseases years before a patient felt any symptoms and tailor their treatment to their own personal needs.

Unfortunately, passionate headline-filling coverage has waned recently. The human genome is sadly a lot more complex than we had hoped. The quote above, from geneticist Bob Best, sums up the current state of genome sequencing. Speaking to the Guardian’s Carole Cadwalladr in her excellent first-hand account of how it feels to get your own genome sequenced, Dr. Best hits the nail on the head. We can now sequence a whole human genome for $5,000. That part of the technology has accelerated forward. What remains, however, is the burning question of what it all means.


The first problem is that the most common diseases seem more complicated than we had hoped. Instead of being one disease caused by one gene, these diseases seem to be lots of smaller diseases caused by many different genes, conspiring to produce similar results. This is most evident in cancer. People like to group cancers together into one disease but in reality, there are many, varied faulty processes that can cause a cancer to develop.

One can now run an experiment comparing the genomes of a group of cancer patients with the genomes of a group of healthy volunteers. Unfortunately, this reveals many subtle deviations and not one Holy Grail ‘cancer gene’.

Despite this disappointment, there are great success stories emerging from this area. Angelina Jolie nobly led the way earlier in the year with the announcement that she had had a preventative double mastectomy. Jolie had a genetic test which revealed she carried a faulty copy of a gene known to increase women’s chances of breast cancer to 87%. Having a complete code of the cells in your body can only increase the likelihood of finding similar preventable risks of your own. Whether you would want to is another matter entirely.

Genome sequencing has given rise to the potential for personalised treatments. We know that the best treatments we currently have do not work for all patients. This fits with the knowledge that these diseases are actually different diseases all showing similar symptoms. By understanding which small subset of disease a patient is showing and how a patient might deal with a drug, we can tailor the treatment accordingly. The way this technology can be used is explained excellently in this animation. At this stage, we have increasing numbers of great success stories from rare genetic diseases, but limited success in the most common diseases.

This could be because the genome only tells us what may be happening in a living organism. Genes contain the instructions to make life; on their own they do nothing. It is the proteins that are made from these instructions that are the true machines of the living world. Proteins are made from genes via processes known as transcription and translation (see picture below). How a protein is made from the instructions in a gene can be impacted by many lifestyle and environmental factors. This is exactly why lifestyle and environmental factors play such a huge part in disease.

One’s genes do not tell us everything, they only contain the instructions. It is the proteins they make that are the true workhorses of the living world. Genes contain the instructions to make proteins via processes known as transcription and translation. These are impacted by environment, lifestyle and disease.

There are leaps and bounds being made in the field of proteomics, the process in which proteins, as opposed to genes, are measured. By measuring the proteome (all the proteins in a cell) we can now see what the genes are doing. We can see which genes are more active than others by seeing how much of the respective protein is being made. The vast complexity arises because people can have very similar genes but make widely different combinations of proteins from them. The human genome contains roughly 21,000 protein-coding genes. These are responsible for the production of an estimated 250,000 – 1 million proteins. Measuring someone’s genome alone will not tell you exactly what is going on within their bodies.

Albeit technically difficult, analysing the proteome of patients has the potential to tell you which subset of disease a patient may have, which faulty genes are the most harmful, and where possibilities for new treatments may lie. We hope that what goes on beyond a person’s genes will unlock further understanding of disease and truly bring in the era of personalised medicine.

The Human Genome Project has and will continue to open up new realms of possibility for understanding more about life. It has given us the basis to build on our knowledge of how we are made and the beginning to personalised medicine. However, there is still a very long way to go. It is the proteins inside you that truly define health and disease. Until we understand more about how specific genes make specific proteins and how this is impacted in common diseases, we will only be scratching the surface of the potential personalised medicine has to revolutionise treatment.

Post by Oliver Freeman

@ojfreeman

Controlling the brain with light: Where are we at with optogenetics?

Optogenetics has had a blockbuster billing. This remarkable neuroscience tool has been touted as having the potential to illuminate the inner workings of the brain and allow us to understand and treat neurological diseases. But how far off are we from these goals, and what have we achieved thus far?

Optogenetics has been explained before on this blog and elsewhere and is explained excellently in the video below.

Briefly, in order to understand the way the brain works we need to be able to investigate how circuits of cells communicate with each other. Previously, we have used electrodes to record what is going on inside the brain. By presenting a stimulus and recording the brain’s response, we can try to interpret what this response means. The true test of our understanding, however, relies on us being able to produce the required behavioural response by stimulating the brain.

Using electrodes, we can record the response of a nerve cell when a mouse wants to turn right. Then we can replay this response back to the mouse’s brain and see whether it turns right. The problem with this is that we will not only stimulate the nerve cells we want, but all of their neighbours as well. The brain is precisely organised and neighbouring cells may carry out completely different roles, so this interference can be substantial.

What optogenetics allows us to do is insert a light-sensitive ion channel into the cells that we want to activate. This channel will open when we flash a burst of light onto it, in turn activating our cell. This technology thus allows us to look at the actions of specific populations of cells without the interference that comes with using electrodes.

From http://neurobyn.blogspot.se/2011/01/controlling-brain-with-lasers.html

So in just over a decade since optogenetics was first described, what have we managed to do? I’ve split my summary into two main headings. The first highlights basic research that increases our understanding of the brain. The second, the possibilities for understanding and treating disease.

Basic research

Top row shows the behavioural set-up. Mice detect whether the pole is in the right (right) or wrong (left) position and are trained to either lick (right) to receive a water reward or not lick (left). Bottom row shows the optogenetic intervention. When a laser beam is crossed (red dot), the cortex is stimulated. Mice were fooled into licking in the left condition, even though their whisker hadn’t touched a pole. From O’Connor et al. (2013).

Optogenetics has been used successfully to understand more about perception and the senses. Karel Svoboda’s group at the Janelia Farm Research Campus in the US have been able to create what they call ‘illusory touch’ in the brain of awake, behaving mice. Mice were trained to detect the position of a vertical pole, presented near to their whiskers. When the mice sensed that the pole was in the right position, they were trained to expect a drink of water and would lick (see left). When the pole was in a wrong position, they were trained to do nothing. Once trained, mice were able to give the correct response 75% of the time.

The researchers suspected they knew the area of the brain that was telling the mouse when their whiskers touched the pole, a part of the cerebral cortex. To test this, they injected an optogenetic channel into this area and replaced the pole with a laser beam and detector. The detector was linked to a light on the skull so that when the whisker passed through the laser light beam, a signal was sent to shine light onto the cerebral cortex and activate the specific cells they were interested in. They found that by activating the cortex when the whisker passed through the beam, the mouse would lick even though the whisker hadn’t touched a pole. They had created an illusion in the mouse’s brain that his whisker had touched a pole!
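
The logic of that closed loop is simple enough to sketch (the hardware functions here are invented placeholders, not the study’s actual code):

```python
# Sketch of the closed-loop substitution: the pole is replaced by a laser beam,
# and every beam crossing triggers light onto the channel-carrying cortex cells.
# beam_crossed() and flash_cortex() are hypothetical stand-ins for the real
# detector and light source.

def illusory_touch_trial(beam_crossed, flash_cortex, n_polls=10_000):
    """Poll the beam detector; each whisker crossing fires the cortical light."""
    for _ in range(n_polls):
        if beam_crossed():   # whisker swept through the laser beam
            flash_cortex()   # activate the optogenetic channels -> 'touch'
```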

Another optogenetics study that hit the headlines recently looked into memory formation in the hippocampus. Susumu Tonegawa and his team from the RIKEN-MIT Center for Neural Circuit Genetics in the USA were interested in proving the crucial role of the hippocampus in memory formation. They inserted optogenetic channels into the hippocampus of mice in such a way that when new memories were formed, the cells that were connected by this memory would also contain these light-sensitive channels.

By placing mice in an arena and allowing them to explore, the mice would form spatial memories of the arena within their hippocampus. The new cells and connections generated would form a circuit that could be stimulated with light. The next day, the researchers placed each mouse in a new arena it had no knowledge of. The researchers shone a light on the head of the mouse to activate the cells formed the previous day, while simultaneously giving the mouse an electric shock. Now, when placed back in the first arena, the mouse froze. It had associated the memory of the first arena with an electric shock, even though it had never been shocked in the first arena! The researchers described this as creating a false memory within the brain.


Treating disease

Perhaps the most exciting aspect of optogenetics is its potential to treat disease, with the fantastic Karl Deisseroth leading the way. Arguably, the most natural place to start when talking about the therapeutic options of light-responsive channels is with blindness.

A collaborative group led by Botond Roska at the Friedrich Miescher Institute for Biomedical Research in Switzerland has looked into retinitis pigmentosa, a form of blindness caused by degeneration of the retina at the back of the eye. Using two animal models, they have been able to restore vision by inserting a light-sensitive channel into the retina. This therapy worked well enough to allow the previously blind mice to be able to carry out visually guided behaviours.

A further use of therapeutic optogenetics has been mooted for the treatment of Parkinson’s disease. In Parkinson’s, a region of the brain called the basal ganglia degenerates, leading to an inability of the patient to co-ordinate movement. Increasing the activity of this region has been shown to have therapeutic benefit, and the lab of Anatol Kreitzer at the University of California, US has shown potential in an optogenetic approach. They were able to mould the activity of the basal ganglia in such a way as to create Parkinsonian-like symptoms in mice and also to reduce Parkinsonian-like symptoms in a mouse model of Parkinson’s.

So has optogenetics lived up to the hype so far? Well, although applications in humans are lacking somewhat at this point, it appears that optogenetics has already answered some vital questions. The challenge now is to develop the technology further so that we have more accurate and controllable tools for when we’d like to start using them in humans. The excitement surrounding optogenetics is still widespread and there is no evidence yet that the bubble is set to burst anytime soon.

Post by: Oliver Freeman @ojfreeman

Papers Referenced:

Sensory Perception – O’Connor et al. Nature Neuroscience (2013)

Memory Formation – Ramirez et al. Science (2013)

Blindness – Busskamp et al. Science (2010)

Parkinson’s Disease – Kravitz et al. Nature (2010)

Synesthesia: How does your name taste?

Are you convinced that Mount Everest tastes like strawberries? Or that Friday is a deep green colour? Does hearing your friend Dave’s name make you retch? If so, you might be entering the baffling world of synesthesia.


Synesthesia (synaesthesia in British English) is a neurological condition whereby one’s senses literally merge. It means ‘joined perception’ and can cause names to have a particular taste, letters to have a particular colour and a whole host of other sensory fusions.

It may sound more like a psychedelic experience in a drug-induced haze but this is a sober reality for a number of people. Some of the most famous creative types of our times have been diagnosed or suspected to have synesthesia. Marilyn Monroe was described by the biographer Norman Mailer to have a “displacement of the senses which others take drugs to find… she is like a lover of rock who sees vibrations when she hears sounds”.

Image credit: CaramelBeauty77

Stevie Wonder reportedly has sound-colour synesthesia whereby even though he is blind, he ‘sees’ the colour of the music. This added sensation is not restricted solely to arty types, however. Richard Feynman, the excellent Nobel Prize winning physicist, claimed “When I see equations, I see the letters in colours”, before quipping, “I wonder what the hell it must look like to the students”.

Estimates put the number of synesthetes at between 1 in 200 and 1 in 100,000 (which basically means we really don’t know how many people experience synesthesia). Some even claim it could be as high as 1 in 23! If any of this sounds familiar to you, you can test whether you may have synesthesia here.

Scientists are greatly fascinated by this condition. The organisation of the senses is seen to play a major role in the formation of memories. When you form a memory of a person, you link the information from multiple senses. The sound of their voice, the colour of their hair, the letters of their name all form your memory of that person. But for most of us, these senses still exist as separate aspects of our memory of that person. You don’t intrinsically link the letters of their name to their hair colour.

Synesthetes have secondary sensory experiences that are completely involuntary. Some synesthetes say that the colour of someone’s name is actually how they remember them. They may remember the name Dave by the colour orange more than the letters of the name.

This is where the science behind it all comes in. Sensory information should be contained separately but able to be integrated at will. Different sensory information is processed in largely the same way but in distinct sensory regions. You can see below the various parts of the brain that are responsible for the different senses. As you can see they are spatially separate. The beauty of the brain is connecting this information as and when it is needed.


What appears to be the case in synesthetes is that these sensory regions are intrinsically connected. For example, sound-colour synesthetes seem to have more activity in colour-processing regions of their visual cortex (known as area V4) while listening to sounds. Instead of distinct pathways being linked at will, it could be the case that these two systems are bound together and stimulation of one sense also activates another.

It is interesting to be able to see these differences in the brains of synesthetes. What is not very clear, however, is why synesthesia is around at all. Does having synesthesia benefit you in any way? It appears that actually it might.

Synesthetes have been known to be better at remembering phone numbers, for example, suggesting they may have better memories. Letter-colour synesthetes have also been shown to be better at discriminating between different colours, whilst hearing-motion synesthetes are better at visual tasks. So could this be a rise of the superhumans – an evolutionary advantage to be able to process more sensory information than the rest of us?

Further research into synesthesia will not be a purely academic exercise but will contribute further to our understanding of the human brain. How the brain links information together on request is one of the major mysteries of neuroscience, a point focused on in Obama’s BRAIN initiative. As synesthetes show remarkable differences in how they integrate information, it will be fascinating to see whether they can be valuable subjects to advance our knowledge of the brain.

Post by: Oliver Freeman @ojfreeman


The dream-reading machine

The film Inception starts with Leonardo DiCaprio and Joseph Gordon-Levitt attempting to infiltrate someone’s subconscious. They are trying to steal the target’s dreams. This wonderfully futuristic concept may be a thing of science-fiction movies, but researchers in Japan might just be on the road to seeing what you dream.

Research carried out by Yukiyasu Kamitani’s group at the Advanced Telecommunications Research Institute in Kyoto, published in the journal Science, used an fMRI (functional magnetic resonance imaging) brain scan to monitor volunteers’ brain activity whilst they drifted off to sleep. By creating a computer algorithm to predict what this brain activity meant, they were able to predict what a subject was dreaming about.

To begin, 3 volunteers were placed into an fMRI scanner and shown Google images of many different objects. The activity in the visual areas of the brain was monitored by the scanner and uploaded to a computer. Words associated with each of these images were processed and arranged into groups of like-meaning words, called synsets. For example, words such as Structure, Building, House and Hotel would be grouped together in their own synset. Words within a synset were ranked depending on their importance, and the most important was used to describe that synset. For the example above, the word Building would be the highest ranked word in that synset. This allowed the researchers to narrow down a number of possible words/objects to one word of ‘best fit’. Images of houses, hotels and offices would all be narrowed down to Building.

The computer was given information of the synsets that related to each image, along with the brain activity at the time that image was shown. This allowed the researchers to match brain activity to certain images and words. The computer now knew that when the subject saw a picture of a house, their brain responded in a certain way. This brain activity was grouped together with activity when the subject sees an office, and a hotel etc.
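
In code, the grouping step might look something like this toy stand-in (my illustration – the real study used the WordNet hierarchy and far more images):

```python
# Collapse the many labels attached to training images into a few synsets,
# each named by its highest-ranked member. The word lists are invented.

synsets = {
    "building": ["structure", "house", "hotel", "office"],  # key = top-ranked word
    "food": ["meal", "bread", "fruit"],
}

def best_fit(word):
    """Map any labelled word to its synset's representative word."""
    for representative, members in synsets.items():
        if word == representative or word in members:
            return representative
    return None

# Brain activity recorded while viewing a hotel, a house or an office all
# becomes training data for the single decoder class 'building'.
for label in ["hotel", "house", "bread"]:
    print(label, "->", best_fit(label))
```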

Now came the real test. By categorising brain activity based on what a person sees, could they read what a person was dreaming about? The 3 subjects were placed in the scanner and told to fall asleep if they felt tired. The electrical activity of their brain was recorded by EEG (electroencephalogram) in order to see when they fell into the early stages of sleep. During these early stages, one does not normally have vivid ‘dreams’ but typically experiences light hallucinations.

As these hallucinations started, the brain activity in visual areas was recorded and run through the algorithm. The algorithm came up with the synsets that were most likely to be represented by that brain activity, and used the Google images from before to present a video of what it thought the person was ‘dreaming’ about. This can be seen in the video below. To test how accurate the computer was at predicting the ‘dreams’, the volunteer was awoken and asked what they had just seen.

When the researchers compared what the participants reported they were seeing with the computer’s prediction, they found that the computer was correct in 60% of cases. This is significantly higher than getting it right by chance. The computer was able to use brain activity during the early stages of sleep to read and predict what the volunteer was seeing.
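
To get a feel for why 60% beats chance, here is a back-of-the-envelope binomial check (the trial count below is hypothetical – the paper’s exact numbers will differ):

```python
# One-sided binomial test: how likely is 60%-or-better accuracy if the
# decoder were really guessing, with a 50% chance of success per trial?
from math import comb

def p_at_least(successes, trials, chance=0.5):
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(successes, trials + 1))

# e.g. 120 correct out of a hypothetical 200 awakenings:
print(f"p = {p_at_least(120, 200):.4f}")  # ~0.003, very unlikely to be luck
```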

This study is not without its limitations. Firstly, what most of us see as ‘dreaming’ is not thought to occur in these early stages. We believe ‘dreaming’ occurs mainly during rapid-eye movement sleep, a stage of sleep that occurs around an hour later than these early stages of sleep (see below). What are measured here are hallucinations that occur as we fall asleep. Furthermore, an fMRI machine is incredibly loud due to its rapidly switching gradient coils. It is questionable whether the sleep stage observed in these participants is truly what we would regard as sleep.


A typical sleep cycle. Researchers recorded hallucinations in stages 1 and 2. We normally dream in REM (rapid eye movement) sleep. Image credit to Sleep 1102.


Secondly, a success rate of 60% is hardly news to excite those wanting to perform dream extraction. The crude prediction is not an exact match of what someone is seeing (as you can see from the video above). The computer is able to recognise that you were seeing a building, but not that you were cleaning the windows of your own house, for example. It is clear that it will take some time to really enter the realms of dream-reading. The interpretation of this crude prediction is also hampered by the fact that the study was based on only 3 participants. It is not clear whether this result will generalise to the wider public.

Despite these limitations, what the researchers have done is remarkable. They have shown that these early sleep hallucinations create very similar patterns of activity in the brain to when we are awake. They have shown a relatively accurate way to decode this activity into what the subject is seeing. And they have opened up the possibility of studying the function and nature of sleep in more detail. But don’t worry; the Thought Police won’t be after you just yet.


By Oliver Freeman @ojfreeman


News and Views: The Festival of Neuroscience – A 5 minute guide.


Between the 7th and 10th of April 2013, neuroscientists from across the globe met in London for the British Neuroscience Association’s ‘Festival of Neuroscience’. Here is my whistle-stop tour of the main talking points.


Drugs, neuroscience and society

In a session instigated by the divisive Professor David Nutt, delegates heard about research into cognitive enhancing drugs. Professor Judy Illes suggested these drugs should be labelled ‘neuro-enablers’ not ‘neuro-enhancers’ to focus on their role in improving cognition in those affected by conditions such as Down’s syndrome. Topics debated included: Should we criminalise use in healthy people? Should we allow their use in exams, job interviews etc.? Should we allow them only if declaration of use were compulsory? Would this lead to two-tiered exams – users and non-users?


Dr Paul Howard-Jones spoke of ‘neuromyths’ in education. He highlighted the oft-cited theory that children all have their own learning style (visual, kinaesthetic, auditory etc.) as having no scientific basis. ‘Neuromyths’ are routine in the field of education, he said.


Professor Emeritus Nicholas Mackintosh described findings in the Royal Society’s report into neuroscience and the law. The bottom line is that there is very little evidence thus far to suggest one can use brain scans successfully in a court of law. Only once has neuroscience been used successfully in court.


Pain, placebo and consciousness

Professor Irene Tracey gave a fantastic plenary talk on imaging pain in the brain. She gave wonderful insights into how the placebo effect is very real and can be seen in the brain. A placebo can hijack the same systems of the brain that (some) painkillers act on. This could have strong implications for the experimental painkiller vs. placebo set-up of randomised controlled trials.


Professor Ed Bullmore spoke about the connectivity of the brain. He described the intricate connections of the brain and how some regions are highly connected whilst others have only sparse connections. He noted that in a coma, highly connected regions lose connectivity and sparsely connected regions gain connectivity. This goes nicely with the work by Giulio Tononi into theories of consciousness.


Professor Yves Agid entertained with his animated talk on subconsciousness. He argued that the basal ganglia are crucial for subconscious behaviours. He showed that the basal ganglia become dysfunctional in diseases such as Tourette’s syndrome and Parkinson’s disease – both of which involve defective subconsciousness.


Great science

Professor David Attwell told a wonderful story about glia, the support cells of the brain. Glia are often forgotten about when talking about functions of the brain, but Prof Attwell described fantastic research refuting this neglect. Glia are involved in brain activity, both in health and disease. Glia are involved in regulating the speed of nerve cell communication. Glia may also be involved in learning and memory.


Professor Tim Bliss, one of the pioneers of research into memory formation, spoke about his seminal discovery of long-term potentiation. He recalled the story behind how he and his colleague Terje Lømo discovered one of the mechanisms that mammals use to store long-term memories. He even owned up to falling asleep during the published night-time experiment and failing to jot down the data for a short period! #overlyhonestmethods


Professor Anders Björklund gave a public lecture on his life’s work into stem cell therapy to treat Parkinson’s disease. He showed some wonderful results that have really made a difference to patients’ lives. This therapy shows good improvement in ~40% of cases. Work continues into why it does not work in all cases.


To follow what else was said during the conference, see #BNAneurofest.


By Oliver Freeman @ojfreeman

What is consciousness? A scientist’s perspective.

We all know what consciousness is. We can tell when we’re awake, when we’re thinking, when we’re pondering the universe, but can anyone really explain the nature of this perception? Or even what separates conscious thought from subconscious thought?

Historically, any debate over the nature of consciousness has fallen to philosophers and religious scholars rather than scientists. However, as our understanding of the brain increases, so does the number of scientists willing and able to tackle this tricky subject.

What is consciousness?

A good analogy of consciousness, based on work by Giulio Tononi, is explained here. Imagine the difference between how your brain and a digital camera handle the image of an apple. The raw image is the same whether on a camera screen or in your head. The camera treats each pixel independently and doesn’t recognise an object. Your brain, however, will combine parts of the image to identify an object, recognising that it is an apple and that it is food. Here, the camera can be seen as ‘unconscious’ and the brain as ‘conscious’.

The bigger the better?

This example works as a simple analogy of how the brain processes information, but doesn’t explain the heightened consciousness of a human in comparison to, say, a mouse. Some people believe that brain size is linked with consciousness. A human brain contains roughly 86 billion neurons whereas a mouse brain contains only 75 million (over a thousand times fewer). A person might then argue that it is because our brains are bigger and contain more nerve cells that we can form more complex thoughts. While this may hold to a certain extent, it still doesn’t really explain how consciousness arises.

To explain why brain size isn’t the only thing that matters, we need to consider our brain in terms of the different structures/areas it consists of and not just as a single entity. The human cerebellum at the base of the brain contains roughly 70 billion neurons, whereas the cerebral cortex at the top of the brain contains roughly 16 billion. If you cut off a bit of your cerebellum (don’t try this at home) then you may walk a bit lopsided, but you would still be able to form conscious thoughts. If, however, you decided to cut off a bit of your cortex, the outer-most folds of the brain, your conscious thought would be severely diminished and your life drastically impacted. So it seems that the number of brain cells we have doesn’t necessarily relate to conscious thought.


Linking information

As a general rule, the more primal areas of the brain, such as the brain stem and cerebellum, act a bit like the camera. Like the camera, they are purely responsible for receiving individual pieces of information from our sensory organs and don’t care for linking this information together. As you move higher up the brain, links form between different aspects of our sensory experiences. This linking begins in mid-brain structures (such as the thalamus), then these links are made more intricate and permanent in the cerebrum.

Tononi believes that it is this linking of information that is the basis for consciousness. As cells become more interlinked, information can be combined more readily and therefore the essence of complicated thought can be explained. The more possible links between cells, the more possible combinations there are and therefore a greater number of ‘thoughts’ are possible.

There may be more neurons in the cerebellum than the cerebrum, but because they are not as extensively linked to each other, they cannot form thoughts as complicated as the cerebrum can. When information is relayed upwards from the cerebellum, it is passed to neurons that have more connections and can therefore make more abstract links. Perhaps a neuron responsible for signalling the colour red links with a neuron responsible for the representation of a round object, giving you the notion of a red apple. If you multiply this process up a couple of times, cells soon hold a lot of combined information – smell, taste, colour etc. all come together to create your representation of the apple.

Too much connectivity

So it’s the number of connections that matter? The more connections the better? Well no, sadly it’s not quite that simple. The cells at the higher levels need to be highly interconnected but if all the cells in the brain were too interconnected then you would really be back to square one, where the whole system is either on or off. All the cells fire, or none of them do. Here, you lose all specific information and your brain doesn’t know whether it is red or round or anything, it just knows there’s something. Because along with your red apple cells, all your blue cells will fire, all your bicycle cells will fire and so on, meaning you’ll get no clear information about the apple whatsoever.

The key is that cells at the basic level need to be focused and not have their message conflicted by other information. They then pass their message up to a more connected cell that combines it with other information before passing it up a level, and so on and so forth. Now we have an entity that can build up complicated information from small bits. According to Tononi it is the ability to combine lots of information efficiently that yields the ability to analyse abstract concepts and thus gives us ‘consciousness’.
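
A toy calculation (mine, not Tononi’s formal measure) makes the trade-off concrete: total independence maximises the number of distinguishable patterns but integrates nothing, while total coupling integrates everything but leaves almost no patterns. Consciousness, on this view, needs both:

```python
# Count distinguishable activity patterns in a small network of on/off cells.
# The numbers are purely illustrative.

n_cells = 16

independent_patterns = 2 ** n_cells  # every on/off combination is distinct
fully_coupled_patterns = 2           # all-on or all-off, nothing in between

print(f"{n_cells} independent cells:   {independent_patterns} patterns")  # 65536
print(f"{n_cells} fully coupled cells: {fully_coupled_patterns} patterns")
```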

How do we become unconscious?

The true test of this theory of consciousness is whether it can also explain a loss of consciousness. Tononi believes that unconsciousness is brought on when the system becomes fragmented and connectivity in the brain decreases. This is exactly what seems to happen when in a deep sleep (when we don’t dream) or under general anaesthetic. Normally, when awake and alert, fast activity can be found all over the brain and signals can be passed between areas. When we go into a deep sleep, however, the brain moves to a state where signals cannot easily pass between different areas. Tononi believes that the cells temporarily shut off their connections with each other in order to rest and recuperate, therefore losing interconnectivity and the associated higher thought processes.


While it may seem a stretch to suggest that consciousness is purely a state of high interconnectivity, what Tononi has done is present the beginnings of a tangible scientific theory, backed by evidence that suggests interconnectivity is crucial for higher brain power. The question of why we can form conscious thoughts is more of a philosophical one, but the scientific view seems to be that it is a fundamental property of our brains. The evolution of man has led our brains to become highly efficient at processing complex information, giving us a vast repertoire of possible thoughts. This repertoire has expanded to such an extent that we can now debate our very existence and purpose. Whatever you believe about the reasons behind consciousness, however, scientists are beginning to have their say about what rules may govern consciousness in the brain.

Post by: Oliver Freeman @ojfreeman

Could Jennifer Aniston hold the key to memory formation?

Ever since her leap to fame as Rachel on the popular TV sitcom Friends, Jennifer Aniston has been one of the most recognisable actresses in the world. Now, scientists believe that the discovery of brain cells responding specifically to pictures of Jennifer Aniston may hold the key to understanding how the brain forms memories.

Imagine walking down a busy street and noticing a friend walking on the other side of the road. Even following just a brief glance from any angle your brain allows you to recognise your friend and conjure up a whole host of memories about that person; including their name, their personality and perhaps something really important you were meaning to talk to them about. This scenario provides a perfect example of how efficient the brain is when it comes to memory storage and retrieval. However, scientists still have a very limited understanding of how all this can occur so quickly and faultlessly.

The idea that single brain cells can respond exclusively to specific objects/people is not a new one. However, this idea is not widely accepted in the scientific community. One notable sceptic was Jerry Lettvin, a researcher from the Massachusetts Institute of Technology, who argued against the simplification of memory function in the late ‘60s.

Lettvin expressed his criticism of this idea through an example. He described a hypothetical brain cell specialised to respond only to the sight of your grandmother (a ‘grandmother cell’). This cell could then be linked with and activate many other cells responsible for memories of your grandmother such as the smell of her cooking or the sound of her knitting. Through this example Lettvin highlighted a number of problems with such a simple set-up; if the brain did possess a cell to recognise every single object you’ve encountered then surely the brain would run out of space at some point? Moreover, what would happen if you lost one of these cells? Would you be unable to recognise your grandmother any more?

Image credit: Jolyon Troscianko

Despite its ridicule, the ‘grandmother cell’ theory has recently been revived by the discovery of single cells in the human brain which respond specifically to recognisable people. These cells were discovered by a team operating in Los Angeles, California, led by Rodrigo Quian Quiroga from the University of Leicester, who had the unique opportunity of recording from single cells in the brains of awake, behaving humans.

The ability to record single cell activity in awake human patients is clearly very rare. However, the LA team were able to conduct their study using a special group of patients undergoing treatment for severe epilepsy. When a patient with severe epilepsy does not respond to medication, the faulty brain region responsible for seizure generation must be removed. As this area usually differs between patients, a surgeon will implant an electrode (see left) into the brain which will record electrical activity in various locations and tell the surgeon which area needs to be removed. This allowed Quiroga and his team to record single-cell activity from awake, behaving humans.

They showed these patients many pictures of objects and people in an attempt to discover what these brain cells responded to. In one of their first experiments they found a cell that appeared to respond specifically to pictures of Jennifer Aniston which they later named the ‘Jennifer Aniston cell’.

To ensure that this cell was actually responding to Jennifer Aniston and not some other feature of the pictures they were using (for example, her blonde hair, or the contrast between her and the background), they tested the cell using a huge range of Jennifer Aniston pictures. These included pictures of her face from various angles, her whole body and some of her standing next to (her then husband) Brad Pitt. These pictures were shown to the patient multiple times and mixed in with pictures of other celebrities and family members.

The results were remarkable. The cell did not respond to any of the roughly 80 other people shown, but responded specifically to pictures of Jennifer Aniston. Interestingly, the cell did not care whether it was a head shot or a picture of her whole body – two views which, from an image-processing perspective, are very different. However, the cell did not like Brad Pitt! Whenever a shot of Jennifer Aniston and Brad Pitt together was shown, the cell refused to respond. This baffled the researchers. Why would the cell fire specifically to Jennifer Aniston, but only when she was on her own?

The answer came when they found that this cell also responded (not as strongly as to Jennifer Aniston, but enough to be significant) to Lisa Kudrow, the actress who plays Phoebe in Friends. Quiroga hypothesised that the cell was not responding to a specific person, but to the ‘concept’ of ‘Rachel from Friends’. When Jennifer Aniston was shown on her own, the patient was reminded of Rachel and the cell fired. When Jennifer Aniston was shown next to Brad Pitt, that was Jennifer Aniston the actress, not Rachel, and the cell did not respond. Thus the cell, once thought to be a ‘Jennifer Aniston cell’, became known instead as a ‘concept cell’. These ‘concept cells’ would form a key part of the hypothesis Quiroga was building about memory formation.

Recording from new patients, Quiroga and his team found multiple examples of these ‘concept cells’. For example, one cell responded both to Halle Berry on her own and to Halle Berry in costume as Catwoman (pictures in which her face is almost entirely obscured). Another cell responded exclusively to either Luke Skywalker or Yoda – a Star Wars concept cell, perhaps?

To further cement the ‘concept cell’ theory, Quiroga’s team investigated whether these cells would also respond to non-visual triggers of the same concepts. To do this, they used a laptop’s text-to-speech function to play a robotic voice speaking the celebrity’s name. Amazingly, this had the same effect: the cell that responded to pictures of Halle Berry also responded to the spoken words ‘HALLE BERRY’. The pathways that process visual and auditory information are largely separate, with limited cross-over, yet somehow these two types of information are being relayed to one individual cell.

This raises some interesting questions about how we form new memories. Imagine, for example, that you meet a new person but do not catch their name. Your brain will store a visual image of this person, linking it to a ‘concept cell’. Days or weeks later you may learn the person’s name. Does this mean that this auditory information will create new links, through a completely different pathway, right back to the original ‘concept cell’ for this person?

If correct, this type of specificity in linking parts of the brain together is truly remarkable. Quiroga believes the ‘concept cells’ he has unearthed represent the building blocks of our memories and are crucial for forming the associations needed to store and retrieve multifaceted memories. Quite a claim, yet he is uneasy about labelling them ‘grandmother cells’, as that term oversimplifies what many believe to be a complex process. The next stage of this research will be crucial, as Quiroga aims to investigate how these ‘concept cells’ communicate with each other and how the timing of each cell’s activity may be the key to linking them – and therefore your memories – together.

All of the work described here is summarised in a review by Rodrigo Quian Quiroga (subscription needed).

Post by Oliver Freeman @ojfreeman

The neuroscience of race – is racism inbuilt?

The topic of race is one of fierce debate: never far from our minds, and commonly discussed both in the media and down the pub. Britain is one of the most diverse and multicultural countries on the planet, but this multiculturalism has grown from a torrid past, and race relations continue to dominate the national psyche. The ever-growing diversity of our country means that race relations are becoming ever more crucial to many socio-political advances. Indeed, a number of intergroup controversies come to the forefront every year, prominent events from this year including the allegations of racial abuse against former England football captain John Terry. Understanding what defines our prejudices and creates these racial tensions is an aspect of race relations that does not receive widespread media coverage, despite its potentially major implications for society – so what is currently known about the neuroscience of race?

Most of the early work on race relations came from the field of social psychology. Henri Tajfel and John Turner were early pioneers of ‘social identity theory’ – a theory that explores people’s beliefs and prejudices based on their membership and status within different social groups. Their work at the University of Bristol (UK) in the 1970s described the phenomena of ingroups and outgroups. They assigned volunteers to one of two groups based on relatively superficial preferences; for example, individuals might be assigned to a group according to which style of art they preferred. Individuals were then asked to rate their preference for other volunteers either within their own group (the ingroup) or in the other group (the outgroup). Tajfel and Turner consistently found a prejudice against outgroup individuals and a preference for one’s ingroup, as the sketch below illustrates. This research suggests that we have an innate preference for those we perceive to be similar to ourselves over those who are ‘different’ – no matter how insignificant or trivial that difference may be.
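To make the logic of this ‘minimal group paradigm’ concrete, here is a small illustrative Python sketch. The group labels, rating rule and numbers are invented to mimic the reported effect; they are not Tajfel and Turner’s actual procedure or data:

```python
import random
import statistics

# Hypothetical participants assigned to a group by a trivial preference
# (Tajfel famously used preference for Klee vs. Kandinsky paintings).
participants = [{"id": i, "group": random.choice(["Klee", "Kandinsky"])}
                for i in range(40)]

def rate(rater, target):
    """Toy rating rule mimicking the reported effect: raters score
    members of their own group slightly higher, plus random noise."""
    bias = 1.0 if rater["group"] == target["group"] else 0.0
    return 5.0 + bias + random.gauss(0, 0.5)

ingroup, outgroup = [], []
for rater in participants:
    for target in participants:
        if rater["id"] == target["id"]:
            continue  # no one rates themselves
        score = rate(rater, target)
        (ingroup if rater["group"] == target["group"] else outgroup).append(score)

print(f"mean ingroup rating:  {statistics.mean(ingroup):.2f}")
print(f"mean outgroup rating: {statistics.mean(outgroup):.2f}")
```

The point of the sketch is only that an entirely arbitrary group boundary is enough to produce a measurable ingroup-versus-outgroup gap in the ratings.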

Interestingly, this inbuilt prejudice can be masked, as is often the case in similar studies using different racial groups. However, recent neuroscience research suggests that prejudices may still exist despite conscious efforts to hide them.

Elizabeth Phelps and colleagues at New York University (US) believe they have uncovered one of the brain pathways involved in determining our reactions to faces of a different race, providing some intriguing insights into our views of different racial groups. Using fMRI (functional magnetic resonance imaging), Phelps and her team discovered a network of interconnected brain regions that is more active in the brains of white participants in response to a picture of a black face than to a white face.

This circuit includes the fusiform gyrus, the amygdala, the ACC (anterior cingulate cortex) and the DLPFC (dorsolateral prefrontal cortex). Activity in the fusiform gyrus is not surprising, since this region has been linked to the processing of colour information and facial recognition; intuitively, it should play a simple role in the initial recognition of a black face. The next region in the circuit is the amygdala, which is responsible for the processing and regulation of emotion, and it is here that the circuit becomes more intriguing. A simple explanation of amygdala involvement could be that black faces evoke more emotion in white participants than white faces do. Further along the circuit the roles become more complex as we move into the higher areas of the brain. The ACC and the DLPFC have both been linked to higher-order processes. The ACC is commonly reported to be active in tasks that involve conflict, such as the ‘Stroop test’: naming the ink colour of a written colour word, where the word and its ink either agree (the word BLUE printed in blue) or disagree (the word BLUE printed in red) – it is during the conflicting, mismatched trials that the ACC is active (see the sketch below). The DLPFC is one of the most sophisticated areas of the human brain, responsible for social judgement and other complex mental processes.
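For readers unfamiliar with the Stroop test, here is a minimal Python sketch that generates congruent and incongruent trials. The colour set and trial structure are illustrative only, not a clinical or experimental implementation:

```python
import random

COLOURS = ["red", "blue", "green", "yellow"]

def make_stroop_trial(congruent):
    """Build one Stroop trial: a colour word displayed in an ink colour.

    On congruent trials the word matches its ink ("BLUE" in blue);
    on incongruent trials it does not ("BLUE" in red), creating the
    conflict thought to engage the ACC.
    """
    word = random.choice(COLOURS)
    if congruent:
        ink = word
    else:
        ink = random.choice([c for c in COLOURS if c != word])
    return {"word": word.upper(), "ink": ink, "congruent": congruent}

# A small mixed block of trials; the incongruent ones are those
# expected to drive conflict-related ACC activity.
for _ in range(6):
    print(make_stroop_trial(random.random() < 0.5))
```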

A study conducted by Mahzarin Banaji and a team from Yale and Harvard Universities (US) may explain why areas involved in conflict resolution and social judgement are active when viewing ‘outgroup’ faces. This research showed that activation of these pathways was time-dependent. When images of ‘outgroup’ faces were flashed very briefly (30 milliseconds), significant activation was seen in the fusiform gyrus and amygdala, but none in the ACC or DLPFC. However, when the images were shown for longer (525 milliseconds), activity in the amygdala was virtually abolished, replaced by strong activity in the ACC and DLPFC. This yields vital insight into the roles of the ACC and DLPFC and the possible presence of inbuilt prejudice. One interpretation is that after a short presentation the ‘raw’ inbuilt response is strong, showing unintentional emotive activity to ‘outgroup’ faces, while after the longer exposure this activity is suppressed by the influence of the ACC and DLPFC, which provide a more rational regulation of the response.

This suggests that a member of today’s society knows consciously that racial prejudice is wrong, and so activity in the DLPFC could represent a conscious decision to be unbiased, while the ACC activity may represent the conflict between this conscious DLPFC process and the subconscious emotion reflected in the amygdala. Of course, a mere increase in amygdala activity does not necessarily signify negative emotion. This automatic activity may therefore not represent inbuilt racism; it may simply reflect heightened awareness and deeper processing when assessing faces from another racial group. What it does highlight, however, is a clear difference in how ‘outgroup’ faces are processed.

This research could have serious implications for our understanding of inter-race relations. Although this activity is subconscious and unlikely to be linked to conscious racial discrimination, it may still play a key role in influencing how we go about our daily lives – choosing jobs, places to live, friends and so on. However, since our brains are malleable, racial prejudice such as this can be lessened, prime examples being inter-racial friendships and marriages. It is possible that this ingroup-versus-outgroup association of race will diminish as our education and upbringing become more multicultural. But for now, easing these racial divides may take a lot of thought.

Guest Blog by: Oliver Freeman @ojfreeman

References (only accessible with subscription):

  • Kubota et al. The Neuroscience of Race. Nature Neuroscience
  • Cunningham et al. Separable Neural Components in the Processing of Black and White Faces. Psychological Science

Learn a little more about Oliver:

My research looks at the effects of diabetes on the nervous system. Diabetes is nearly four times as common as all types of cancer combined, and around half of those with diabetes have nerve damage. Most people are not aware of this very common complication, and I am trying both to raise awareness of the disorder and to understand what causes diabetic patients to feel increased pain, numbness and tingling in their hands and feet.