Your nose sniffs out friends, foes, and sickness

Image by ‘great_sea’ (Wikicommons)

The sense of smell is often unfairly overlooked, but it is becoming more and more obvious that it’s essential for a huge number of really important functions. Recently, the average human nose was estimated to be able to discriminate between more than 1 trillion different smells, each of which is a mixture of odorant molecules recognised by the nose. Of the ~24,000 genes in every person’s genetic make-up, around 400 code for smell receptors that reside in the nose. These receptors bind odorant molecules and trigger neural signals to the brain.

Our sense of smell  – or ‘olfaction’ – has long been known to be closely associated with our emotions. Strongly emotional memories often involve elements of smell (e.g. the smell of sun cream reminding you of holidays, or associating the smell of cut grass with sports day), so perhaps it isn’t surprising that a lot of emotional relationships can be helped (or hindered!) by our sense of smell too.

Sniffing out friends

Pheromones in guys’ and girls’ sweat can give a clue to their sex – to heterosexual members of the opposite sex, or to homosexual people of the same sex. In one experiment, smelling the male hormone androstadienone made it easier for women and homosexual men to identify a ‘gender-neutral’ point-light walker as ‘male’. By contrast, the smell of the female hormone estratetraenol made it easier for heterosexual men to identify a walker as ‘female’. The responses of bisexual people and homosexual women to the smell of estratetraenol were, on average, between those of heterosexual men and women. So our sense of smell can alert us, in a general way, to members of the sex we find more attractive.

In fact, a person’s smell may even determine how attractive they are to potential mates. Major Histocompatibility Complex (MHC) molecules are produced by the immune system and recognise chemical labels on the surface of all cells, including bacteria and infected cells. They help determine whether a cell belongs to you, or to a pathogen (a parasite or bacterium), or is an infected cell. If an MHC-carrying cell comes across anything dodgy, it initiates the immune response to get rid of the pathogen or diseased cell. What’s amazing is that the genes coding for MHC molecules are among the most variable in the entire genome – a disproportionate amount of the genetic variation between any two people is found in the MHC genes.

Image from Bayonetblaha (Wikicommons)

Fascinatingly, the level of MHC similarity between two people may influence how attractive they find one another – and this seems to be mediated by smell. A study from 1995 discovered that women rated the smell of T-shirts worn for two days by men with MHC genes dissimilar to their own as more attractive than the smell of T-shirts from men who shared more MHC alleles with them. (Weirdly, women on the contraceptive pill showed more or less the opposite effect, implying the pill could affect either the sense of smell or the choice of mate.) This study led to a whole load of hypotheses about how smelling out potential mates’ MHC molecules could prevent inbreeding between genetically similar people, or increase their potential babies’ defences against infections.

What’s more, babies use smell to identify their mothers in order to start suckling. People previously thought that babies start suckling in response to their mothers’ pheromones – which are perceived by a different sensory organ from the one used for regular ‘smelling’ – but experiments in mice suggest that babies instead learn the unique smell of their mother. This is interesting because it means that babies don’t instinctively know the scent of their mother through the innately programmed pheromone system; rather, recognising ‘mum’ must be learned in order for the baby to start suckling and therefore survive.

Sniffing out foes and illness

Image by Aaron Logan

A recent report from a neuroscience lab at McGill University suggests that laboratory mice used in pain research respond differently depending on the sex of the researcher. Specifically, male mice showed more signs of stress when handled by male researchers, and this stress response interfered with the feelings of pain that the researchers were trying to measure. This means that the mice appeared to feel less pain whenever male researchers were around. In fact, this happened not only when the male researchers were present, but also when the male mice were given bedding that carried the scent of the male researchers, or of other male mammals unfamiliar to the mice. So smells can affect stress and pain, and the smells that cause these responses aren’t necessarily specific to other animals of the same species.

Your sense of smell may also help you to avoid catching illnesses. When people are injected with a substance called lipopolysaccharide (LPS) they get a fever as if they have an infection, and levels of immune-system chemicals called cytokines rise in their blood. In one experiment, people were asked to rate the smell of T-shirts that had been worn by people injected with either LPS or saline (which wouldn’t have the same effect on the immune system but would still involve an injection – something which can make some people sweat!). Individuals tended to rate the LPS group’s T-shirts as less pleasant, more intense and ‘less healthy’ than the control group’s T-shirts. Amazingly, these T-shirts were only worn for 4 hours, and the ratings of ‘unhealthy’ smells correlated with the level of cytokines in the T-shirt wearer’s blood.

So our sense of smell can guide us towards people we might have more biological ‘chemistry’ with, and guard us against sick people (and male scientists!). But why is knowing about our sense of smell useful? Well, in addition to there being a condition in which individuals can’t smell anything (called anosmia), there’s some evidence that people with other, more common, disorders have deficient senses of smell. For instance, psychopaths – who show less empathy, and tend to be more manipulative and callous than other individuals – have a reduced ability to identify or discriminate smells. People with depression are also less able to detect faint odours than non-depressed people, and people at higher risk of developing schizophrenia are more likely to misidentify smells. Knowing how our sense of smell relates to the rest of brain function might help scientists to develop ways of diagnosing, treating or measuring improvement in disorders where this crucial sense goes wrong.

Post by Natasha Bray

Diving narcosis and laughing gas

Photo by Derek Keats

I watched a programme the other day about a deep sea mystery. A strangely high number of experienced deep sea divers had been lost on diving trips in a particular bay, and no one seemed to know why. The presenter, being a decent diver himself, went for a dive in the bay and noticed that he could make out the sunlight shining through the water at the other end of an underwater tunnel. His conclusion was that the now deceased divers saw this light and thought they could swim through the tunnel to the other side. What wasn’t obvious to the divers was that this light was deceptively far away and they would have to swim very fast for a long time to make it to the other end of the tunnel before running out of oxygen. But what could cause these supposedly experienced divers to make such a rash, fatal decision?

Nitrogen narcosis can give you tunnel vision, making it harder to read diving instruments. Image by RexxS

Above sea level, nitrogen is a pretty boring gas – it makes up about 80% of the air around us and doesn’t normally do us any harm. However, a problem arises when we breathe it in under high pressure – such as when diving. Several gases, including nitrogen, carbon dioxide and oxygen, are normally dissolved in our bloodstream. When you dive deep underwater, the increase in pressure exerted on your body by the surrounding water causes more of these gases to dissolve into your blood through your lungs as you breathe from the gas tank (because going deep-sea diving without a gas tank would be an even less recommendable thing to do). In fact, for every 10m a diver descends, their blood holds an extra 1.5 litres of dissolved nitrogen.

All that extra nitrogen rushing round in the bloodstream has weird, wonderful, and incompletely understood effects on the brain, collectively known as nitrogen narcosis.

Nitrogen narcosis is experienced by all divers – to varying degrees – and feels essentially like being drunk. Because of this similarity, nitrogen narcosis is often referred to as the ‘Martini effect’: divers liken every 10m below sea level to having one martini, meaning they feel increasingly intoxicated the deeper they get. Even at comparatively shallow depths (10-30m below the surface), a diver will become less co-ordinated and a bit giddy; 20m lower they’ll start making mistakes and bad decisions and may start laughing for no reason. At 50-70 metres they may start experiencing hallucinations, sleepiness, terror, poor concentration and confusion, and at 90m they risk losing consciousness or even dying.
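To put those rules of thumb together, here’s a toy sketch in Python (emphatically not dive-planning advice). It simply restates the figures quoted above – roughly one extra atmosphere of pressure, an extra ~1.5 litres of dissolved nitrogen, and about one ‘martini’ of narcosis for every 10m of depth – so the outputs are only as good as those rough rules.

```python
# Toy back-of-the-envelope sketch combining the rough rules quoted above:
# ~1 extra atmosphere of pressure, ~1.5 L of extra dissolved nitrogen in the
# blood, and ~1 'martini' of narcosis for every 10 m of depth.

def ambient_pressure_atm(depth_m):
    """Approximate absolute pressure, in atmospheres, at a given depth."""
    return 1.0 + depth_m / 10.0

def extra_dissolved_n2_litres(depth_m):
    """Extra dissolved nitrogen relative to the surface (the 1.5 L per 10 m rule)."""
    return 1.5 * depth_m / 10.0

def martini_equivalent(depth_m):
    """The 'Martini effect': roughly one martini per 10 m of depth."""
    return depth_m / 10.0

for depth in (10, 30, 50, 90):
    print(f"{depth:>2} m: ~{ambient_pressure_atm(depth):.0f} atm, "
          f"~{extra_dissolved_n2_litres(depth):.1f} L extra N2, "
          f"~{martini_equivalent(depth):.0f} martini(s)")
```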

So, the worst symptoms of nitrogen narcosis aren’t exactly like getting drunk, because even a huge amount of alcohol doesn’t give people hallucinations (though some alcoholics experience hallucinations when withdrawing from alcohol). Actually, the closest thing to nitrogen narcosis you can find on dry land is breathing laughing gas, or nitrous oxide.

A pretty sexist cartoon from ages ago showing some ‘scolding wives’ being prescribed laughing gas. I wonder why they were usually so unhappy with their husbands.

Nitrous oxide has been used by doctors to relax patients since 1794 and it is still used today as a form of pain relief for women during childbirth. It has been in the press a lot recently, dubbed ‘hippie crack’, as it’s often used recreationally (though usually not legally) for its mild hallucinogenic and euphoric ‘feel good’ effects, which have often been likened to nitrogen narcosis. So how does nitrous oxide affect the brain?

Although nitrous oxide is hugely understudied, there are several theories about how it affects the brain. Because gases like nitrous oxide and nitrogen are very fat-soluble, they may interfere with cell membranes (which are made from fatty molecules), disrupting their normal function. In the case of brain cells, this may alter the way they communicate with one another. In addition, the dissolved gas molecules may bind directly to receptors on the surface of brain and nerve cells. Nitrous oxide is used as a mild anaesthetic because it has been shown to block NMDA receptors – which normally ‘excite’ the brain – and because it activates potassium channels, which further suppress brain cell excitation. The net result is that brain activity is generally depressed, so users are more prone to making bad decisions or losing concentration.

As I mentioned before, nitrous oxide is also good for pain-relief, as it’s believed to activate opioid centres in the brain. When activated, the opioid system – the same one stimulated by drugs like heroin and morphine – then disinhibits certain adrenergic cells in the spinal cord, which dampen down any feelings of pain.

While there have been reports that nitrogen narcosis also decreases the perception of pain, it’s obviously difficult, and, well, not very practical to test the potential of high pressure deep sea diving on pain relief. Instead, what should be studied more are the effects of nitrous oxide on the nervous system. We’ve used the stuff for more than 200 years and yet the biology behind its uses and its dangers is still not fully understood. What’s more, the fact that people use nitrous oxide recreationally (and probably will continue to do so in spite of its non-legal status in many countries) means we really ought to know what its short and long term effects on the brain are. Unlike the mystery of the missing deep sea divers, the full extent of the ways in which nitrous oxide works remains unsolved.

Post by Natasha Bray

Michael Schumacher’s traumatic brain injury explained

Photo by Mark McArdle

Seven-time Formula 1 World Champion and holder of a huge number of driving records, Michael Schumacher is thought of as the greatest F1 driver of all time, a champion among champions. On 29th December last year, on a ski slope in the French Alps, Schumacher fell and hit his head on a jagged rock. Witness reports claim that he lost consciousness for about a minute, but that ten minutes later, when the emergency helicopter arrived, he was conscious and alert. Over the next two hours, however, Schumacher’s condition deteriorated, and since that day he has had two head operations and has remained in intensive care in a medically-induced coma.

An epidural haematoma caused by a fracture in the skull (indicated by the arrow). Image by Hellerhoff.

Traumatic brain injury, or TBI, is quite a vague medical term and simply describes any injury to the brain sustained by a trauma – that is, a serious whack to the head. TBI is the most common brain disorder in young people: while older people are at higher risk from degenerative diseases, young people are more likely to find themselves in car crashes, fights or…ski accidents.

The specific type of TBI that Schumacher is believed to have suffered is an epidural haematoma (click here for a seriously gory video of a haematoma clot removal – you have been warned). Between the brain and the skull are a number of protective layers: membranes which encapsulate the brain and spinal cord, all bathed in cerebrospinal fluid. An epidural haematoma is a bleed between the tough outermost membrane, called the dura mater, and the skull. Technically this sort of bleed isn’t in the brain itself, but the injury seriously affects the brain because of one important factor: pressure.

Six ways a brain squeezed by a bleed can go. Coning (indicated by number 6) is a last resort but compresses respiratory centres and can be fatal. Image by Rupert Millard.

Since the skull is an almost totally enclosed space, a bleed that grows too big could end up pushing not just on the inside of the skull, but on the brain itself – squeezing it into an ever-decreasing volume. When put under pressure, the only way the brain can physically go is down towards the spinal cord, but if this happens the brainstem at the base of the brain may become compressed. This horrible scenario, known as ‘coning’, has a really high fatality rate, as the brainstem is needed to keep our heart pumping and lungs breathing. So keeping intracranial (in-skull) pressure low after TBI is key.

Doctors have reported that, as well as a haematoma, Schumacher suffered contusion and oedema. Cerebral contusion is essentially bruising, just as you might see in other parts of the body: tiny blood vessels bleed when they suffer a serious hit – in this case either from the rock itself, or from the other side of Schumacher’s skull in a ‘rebound’-type impact. Oedema, or swelling, may come as a result of the contusion – like when a bruised knee swells – and again would need to be curbed in order to prevent the pressure inside the skull from rising dangerously.

Illustration by Max Andrews.

So what have the doctors done to treat Michael Schumacher’s condition? Well, firstly, the surgeon Prof. Stephan Chabardes has reportedly performed two operations to remove blood clots from the haematoma, as well as a craniectomy – removal of part of the skull – in order to relieve the intracranial pressure and prevent coning. Craniectomy has been used for some time to treat both TBI and stroke, but remains somewhat controversial, mainly because of the risks of further bleeds, infections, or herniation of brain tissue through the surgically-made hole in the skull.

The second main strategy in treating Michael Schumacher has been to keep him in an artificial or medically-induced coma. This involves sedating him with strong anaesthetic agents. Propofol, which quietens brain activity by boosting inhibitory GABAA (‘off’) receptor activity and by blocking sodium (‘on’) ion channels in neurons, or barbiturates, which also enhance GABAA receptors but block excitatory glutamate (‘on’) channels, could be used to keep him sedated and to ‘slow’ his brain down. This slowing is achieved through a net reduction in excitatory activity within the brain. Thus, the medically-induced coma not only spares the patient consciousness of what would no doubt be a painful experience, but also limits the amount of activity-related blood flow, curbs swelling and prevents what’s known as excitotoxicity.

Risk of post-trauma seizures after TBI by the severity of the initial injury. Graph by Delldot (wiki).

Excitotoxicity can occur after TBI or stroke, or in patients with epilepsy. The term describes what happens when brain cells run out of energy or become overloaded with excitatory inputs: cells become overexcited and die, either immediately or after a delay. Seizures, which can cause excitotoxic cell death, are fairly common after severe TBI and are thought to drive further brain damage after the original injury. Seizures after TBI can be worsened by swelling and higher temperatures, so it is likely that Schumacher has been kept a few degrees cooler than his normal body temperature to limit this risk.

Data as interpreted from Laureys S, Owen AM, Schiff ND (2004). “Brain function in coma, vegetative state, and related disorders”. The Lancet Neurology 3 (9): 537–546. Graph by Shin Andy Chung.

If he is still in a medically-induced coma, or is gradually being weaned off the anaesthetic agents, Schumacher may be undergoing physical therapy to move his limbs and joints to prevent muscle wastage, or contracture, which is irreversible muscle shortening. If his condition improves and he is able to move, his limbs will need re-strengthening.

Some conflicting reports suggest that doctors treating Michael Schumacher may have started removing him from his coma. If they are doing this, the full extent of their patient’s rehabilitation needs won’t become clear for some time. While I obviously hope the driving legend makes a full and speedy recovery, it is hugely unlikely that his brain will completely recover all its previous functions. The brain is such a delicate organ and Schumacher’s tragic case only highlights its fragility.

Hugo Lloris in 2012. Photo by Stanislav Vedmid.

Schumacher’s injuries also reopen the debate on guidelines for treating head injuries in sport. Last November, Tottenham’s Hugo Lloris lost consciousness after a blow to his head during a football match, but after waking up he was allowed to return to the pitch to finish the game. Lucid intervals, such as the one reported shortly after Schumacher’s fall, can be deceptive, and players of contact sports should always be given immediate medical attention after losing consciousness. It’s no news that losing consciousness multiple times – as many boxers do on a regular basis – has cumulative effects on the brain, and these are to be avoided at all costs.

Every year in the USA, 1.7 million TBIs – more than double the number of heart attacks – contribute to almost a third of all accidental deaths, as well as to varying levels of lasting disability. While we keep our fingers crossed for Michael Schumacher’s successful rehabilitation, we must also think of the thousands of other people around the world, and their families, dealing with the long-term aftermath of serious brain injuries.

 Post by Natasha Bray

Battle of the brain’s sex differences…or not really?

I don’t understand why some people are surprised at the very idea that there are differences between male and female brains. But what really confuses me is when journalists misinterpret research findings and overextrapolate speculative comments to fit clichéd gender stereotypes.

“Brain networks show increased connectivity from front to back and within one hemisphere in males (upper) and left to right in females (lower).”
Credit: Ragini Verma, Ph.D., Proceedings of National Academy of Sciences, from press release.

Whenever I ask my (less sciencey) friends what they’d like to read on The Brain Bank, there is a perennially raised topic. At least one, usually single, hopeful will ask desperately for a guide on how men and women’s brains differ – and why they might work in different ways, scientifically speaking. Efforts to crack the mental codes of the opposite sex started as far back as Aristotle, who claimed that women were “more mischievous,  … more easily moved to tears[,] more apt to scold and to strike[,] … more void of shame or self-respect,…of more retentive memory” (History of Animals).

White matter tracts, as imaged using diffusion tensor imaging. Author: Xavier Gigandet et al., source here.

Earlier this month, a research paper from the University of Pennsylvania used a fancy imaging technique called diffusion tensor imaging (DTI) to probe the mystery behind the different ways guys and gals think. DTI basically gives you a picture of where the white matter tracts – the wiring between the brain’s various processing areas – lie.

The technique works by looking at how water travels within the brain: water ‘prefers’ moving along bundles of fibres, such as white matter tracts. In this way, DTI examines the strength of ‘connectivity’ between various parts of the brain.
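For the curious, here’s a rough sketch of how that ‘preference’ gets turned into a number. One standard summary pulled out of each voxel’s diffusion tensor is fractional anisotropy (FA): close to 0 when water diffuses equally in all directions, close to 1 when diffusion is channelled along one direction, as in a tight fibre bundle. This is a general illustration with made-up example values, not the specific analysis pipeline used in the Penn study.

```python
# A minimal illustration of how DTI turns water diffusion into a number.
# The diffusion tensor for one voxel is a 3x3 matrix; its eigenvalues say how
# freely water moves along three perpendicular directions. Fractional
# anisotropy (FA) summarises how directional that movement is.
# The example tensors below are invented, roughly brain-like values.
import numpy as np

def fractional_anisotropy(tensor: np.ndarray) -> float:
    ev = np.linalg.eigvalsh(tensor)                 # the three diffusion eigenvalues
    numerator = np.sqrt(1.5 * np.sum((ev - ev.mean()) ** 2))
    denominator = np.sqrt(np.sum(ev ** 2))
    return float(numerator / denominator)

# Hypothetical voxels (units of 10^-3 mm^2/s):
fibre_tract_like = np.diag([1.6, 0.35, 0.30])   # diffusion mostly along one axis
fluid_like = np.diag([3.0, 2.9, 2.9])           # diffusion similar in all directions

print(f"fibre-tract-like voxel: FA ~ {fractional_anisotropy(fibre_tract_like):.2f}")  # ~0.77
print(f"fluid-like voxel:       FA ~ {fractional_anisotropy(fluid_like):.2f}")        # ~0.02
```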

Researchers, led by Prof Ragini Verma, scanned the brains of 949 youths aged 8-22. They found that, in general, the connecting pathways within each half of the brain were stronger in guys, but that in girls, the wiring between the two halves was stronger. In other words, connectivity in girls tended to be more ‘left-right’, whereas in boys, ‘front-back’ connectivity was stronger.

The researchers also reported that the girls performed better on tests involving attention; word and face memory; and social cognition, whereas boys fared better on spatial processing and sensorimotor speed tasks.

NOT REALLY. Author: Miz Mura.

This paper and its associated press release attracted some… OK, a lot of interest from the press. But then something strange happened. Something was lost in translation between the original paper and the resulting newspaper reports, which claimed that ‘hardwired’ differences between men’s and women’s brains might explain ‘why men are better at map reading’ and why women are more ‘emotionally intelligent’…

OH dear…

Seriously, NOT REALLY. Author: Miz Mura

…Then there was a knee-jerk reaction against the potentially neurosexist connotations of this ‘kind of science’, and not just because the research was published in PNAS (hehe). In my opinion, if a conclusion is based on valid and reliable science, you shouldn’t really argue with it unless you have actually read the research. If, on the other hand, the offending ‘conclusions’ are the result of a bizarre ‘Chinese Whispers’ scenario in which no one has actually read the original research, then no, they’re probably not worth listening to – but then, mistranslation isn’t based on science anyway… I digress.

While we all know that there are some obvious – and other more subtle – distinctions between men and women, this research article doesn’t actually claim to explain anything besides the physical connections between different parts of the brain. Just to clarify, here are some of the problems with treating this particular research paper as the Holy Grail of sex differences:

1. There’s no saying whether there’s a big difference or not. The authors present (undeniably) a very striking diagram, with the statistically significant bits indicated in gender-relevant colours. However, just because a difference is statistically significant doesn’t mean the effect of being male or female is a big deal. In fact, because the study uses such a large sample (949 youths), even very small differences between male and female brains may prove significant (there’s a small simulated illustration of this after the list).

2. Less wiring doesn’t necessarily mean lower ability. The authors don’t actually show anywhere in the paper that the ‘wiring’ is associated with men and women’s differing abilities on the tests – though Prof Verma has been quoted speculating on the possibility. Instead, the authors point out the brain’s physical differences and then separately comment on behavioural differences, without saying whether the two correlate.

If the hypothesis is that men or women with mega-strong left-to-right or front-to-back connectivity are respectively better at, say, language or football, you could easily test that with a bunch of correlations. Not that correlation would imply causation anyway. In fact, the strengthening or weakening of physical connections could even suggest that women’s and men’s brains change to compensate for innate differences!

3. Size/proportions might matter. It’s pretty well known that men have larger brains than women – though the situation is complicated, as women reportedly have more grey matter, less white matter and a thicker cortex than men. However – please correct me if I’m wrong – the authors don’t correct for brain size (front-back, left-right, total volume or any other measure), which could be very important, especially considering the people being imaged are aged between 8 and 22, when brains grow a lot anyway. Not to mention that girls and boys grow at different rates too. Oh well.

Social media word clouds for females (top) and males (bottom). Size = strength of the correlation; color = relative frequency of usage. Underscores (_) connect words in phrases. Words and phrases in center; topics surround. Author: H. Andrew Schwartz et al.; Source. Apologies for the bad language!

4. There are many more potential mechanisms than meet the eye. Yes, it’s very possible that exposure to sex hormones could change the brain’s connectivity. But there’s a whole host of other possible influences on a child going through puberty that can’t be ruled out, because the brain is notoriously/amazingly plastic. Environmental influences – ones that can’t ever be controlled for, such as parents, peers, teachers and the media – could just as easily alter the physical structure of the brain, or the brain’s abilities. In fact, hearing in the news that ‘men are better at map reading’ because it’s ‘hardwired’ in their brains could well make guys feel a bit more confident about navigating, while discouraging women from taking on that responsibility.
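To put some rough numbers on point 1, here’s a toy simulation. It invents two groups of 475 ‘participants’ whose scores differ by just a fifth of a standard deviation – an effect size plucked out of thin air, nothing to do with the actual Penn data – and shows that the difference can still come out as comfortably ‘statistically significant’ even though the two groups overlap almost completely.

```python
# Toy simulation for point 1: with ~950 participants, a tiny group difference
# can be highly 'statistically significant' even though the groups overlap
# almost completely. The effect size here is invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_group = 475                  # roughly half of 949 in each group
effect_size = 0.2                  # a difference of 0.2 standard deviations

group_a = rng.normal(0.0, 1.0, n_per_group)
group_b = rng.normal(effect_size, 1.0, n_per_group)

t_stat, p_value = stats.ttest_ind(group_a, group_b)
overlap = np.mean(group_a > np.median(group_b))   # how often an 'A' beats a typical 'B'

print(f"p-value: {p_value:.4f}")                                       # usually well below 0.05
print(f"fraction of group A scoring above group B's median: {overlap:.2f}")  # still around 0.4
```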

This piece of research is not the first and certainly won’t be the last to be accidentally misinterpreted or overhyped. Research into the differences between men and women will continue to fascinate us because, for whatever reasons – social, biological or otherwise – people of different sexes tend to look, sound and act differently. More seriously (and the authors of the paper give this as the motivation for their research), sex differences are linked to brain disorders like autism and depression, so the differences between ‘Martians’ and ‘Venusians’ should be properly understood, and carefully reported.

For further examination of this topic, here’s another blog article and a BuzzFeed piece with a few more reasons why it should all be taken with a pinch of salt.

Post by Natasha Bray

Why are all the bees dying?

Photo by Erik Hooymans

Bees are great. They have an amazing social hierarchy, they provide medical care for their sick, they have ruthless security ‘bouncer-bees’ and each bee travels huge distances to gather about one twelfth of a teaspoonful of honey. For us humans, the benefits of bees don’t stop at honey. About a third of our crops – approximately $220 billion-worth globally – are inadvertently pollinated by foraging bees and, from what I’ve heard, we really don’t want to have to start doing that ourselves.

The problem is that bees are dying at an alarming rate. As it happens, my father is a budding bee-keeper and has just received a letter from the Food and Environment Research Agency reporting that honey production in South-East England has halved in the last six years alone. This problem is, however, happening all over the world. Imaginatively dubbed ‘colony collapse disorder’ (CCD), a mystery disease is wiping out huge numbers of bees, yet no one can pin down exactly what the cause is. There are several theories, so I’ve taken the liberty of making a list akin to a ‘Top Six Most Wanted Villains’ of the bee world.

A varroa mite feeding on a honeybee (Wikicommons)

Varroa mites: Affectionately known as ‘vampire’ mites, these teeny-weeny bugs are big trouble. They suck hemolymph (the bee’s version of blood) from honeybees and, in so doing, weaken the bee and may even transmit deadly viruses (more later).

Neonicotinoids and other pesticides – Neonicotinoids (NNs) are chemicals designed to kill insects that feed on farmed crops. They bind to acetylcholine receptors on the cells of the insect’s nervous system, eventually blocking their normal function and causing paralysis and death. In the past couple of years, various research groups have shown that these chemicals get into bee hives at dangerous, though not lethal, concentrations. Not only that, but a paper published in Nature showed that a cocktail of these chemicals may lead to CCD by affecting bee behaviour, presumably through their effects on the bees’ brains. Bees affected by these chemicals tend to forget where they are in relation to the hive, and produce less food. Other research has shown that NNs may affect the way bees metabolise their food to produce energy. Scientists have even shown that exposure to NNs affects an important immune defence pathway, which may make bees more vulnerable to parasites and viruses.

Viruses: Viruses such as Israeli acute paralysis virus, deformed wing virus and acute bee paralysis virus are spread by varroa mites and have all been identified as possible causes of CCD. Deformed wing virus is particularly tragic: if pupae are infected and develop wing deformities, they are kicked out of the colony, and the number of healthy bees dwindles. Israeli acute paralysis virus has been shown to interfere with the cellular machinery that bees use to produce proteins.

Nosema – this is a fungus that causes intense diarrhoea when swallowed by a bee, leading to worker bees pulling a sickie, which means less food for the hive. To add insult to injury, the queen bee becomes infertile and the colony stops producing young.

Malnutrition – Bees that collect their food from a variety of sources tend to be more hardy and resistant to infection than those that rely on only one or two types of flowering plant. In the US where farms cultivating one or two crops such as wheat or corn are vast, bees may become malnourished and more susceptible to disease.

Female phorid fly laying eggs into a worker honey bee. Core A, Runckel C, Ivers J, Quock C, Siapno T, et al. (2012).

Parasitic phorid fly – Last year, a researcher found a phorid fly larva in a test tube containing a honeybee that had died from suspected CCD. Phorid flies (which apparently scuttle more than they fly) lay eggs on the bee’s abdomen, which then hatch and feed on the bee. Weirdly, bees that carry this parasite end up acting more like moths than bees (foraging at night, buzzing around bright lights) before abandoning the hive.

What’s most likely is that CCD is caused by a mixture of two or more of the culprits mentioned above working in tandem. For example, varroa mites weaken bees and give them viruses. While a colony may be able to withstand either the mites or the virus, the two knocks together could be lethal. This interplay between several different factors makes it all the more difficult for scientists and beekeepers to research and prevent CCD.

So what’s being done to stop all the bees dying? Aside from the tried-and-tested treatments for the known parasites and viruses, there are new efforts to save the bees via various industrial collaborations. Earlier this year, Monsanto set up its own Honey Bee Advisory Council, including scientists, beekeepers, and industrial and governmental representatives, to try and tackle the issue. In 2011, Monsanto also bought Beeologics, a company in Israel that researches possible solutions to CCD. One strategy used by Beeologics against harmful viruses is to deliberately infect bees with a special artificial ‘good’ virus. In turn, this good virus infects any varroa mites feeding on the bee. Amazingly, the good virus prevents the mites from being able to pass on harmful viruses to the bee. This treatment is currently going through regulatory tests, but it will hopefully represent the start of a new approach to keeping bees alive for the benefit of humanity – and not just for the honey.

Post by Natasha Bray

Science of the bloody brain

One of Mosso’s experiments. Each of the four traces on the right compares brain blood flow (red) with pulsations in the feet (black), recorded simultaneously, during 1) resting, 2) listening to the clock and church bells, 3) remembering whether Ave Maria should have been said, and 4) calculating 8×12.

Luigi Cane literally had a hole in his head. A brick had unforgivingly fallen on the back of it, smashing a section of his skull like a spoon knocking the shell off the top of a hard-boiled egg. And so, after surgery, part of the surface of his brain was left precariously unprotected except for a layer of skin. Peering through this accidental window into his head, Dr. Angelo Mosso was able to measure the pulsations of the brain’s blood supply. Cane sat in Mosso’s lab with pressure gauges strapped around his feet and a handmade instrument resting delicately on the skin over his vulnerable brain. This was to be the world première of neuroimaging.

Angelo Mosso, a 19th century physiologist and first brain imager

“What is 27 times 13?” Mosso inquired. Cane thought deeply and silently while the various contraptions showed his feet shrinking as his brain swelled with blood. This experiment was the first to reveal that when our mental ‘cogs’ turn, a boost of blood is directed to the brain. Mosso confirmed this in individuals with intact skulls using what was essentially a wobble-board bed: when people lying on the balance thought about tricky or particularly emotional questions, it would tip down towards the head end with the weight of the extra blood.

The brain is an extremely greedy part of the body when it comes to blood. While it only makes up about a fiftieth of the body’s mass, it consumes up to a fifth of the total energy and oxygen carried in the bloodstream. Charles Roy and Charles Sherrington later proved that the blood rushing to the head was actually being diverted specifically to the parts that were most active – like a bonus for the busiest brain cells. Over twelve decades later, neuroscientists are still using this same principle to observe brain activity and the accompanying ‘rush of blood’ to the head.

The brain imaging technique functional magnetic resonance imaging (fMRI) works on the principle that deoxygenated haemoglobin (the protein that carries oxygen in red blood cells) has magnetic properties. In essence, fMRI can measure how well-oxygenated different parts of the brain become when the person in the scanner performs a task – for example reading, writing, or thinking about chocolate. But information collected from this kind of experiment needs to be handled very carefully.

Computer-enhanced fMRI scan of a person who has been asked to look at faces. The image shows increased blood flow in the part of the visual cortex that recognizes faces.

Firstly, fMRI is not a direct measure of brain activity per se; rather, it measures the oxygenated blood flow response that brain activity triggers. Secondly, no one really knows what a larger blood flow response means, especially in parts of the brain that have several jobs. Lots of blood in a specific part of the brain while doing sums might mean that a person can do sums easily because their blood supply is so efficient; alternatively, it could be interpreted as suggesting that person struggles with mental arithmetic and needs more blood in their head to cope. Thirdly, fMRI data needs to be stringently tested to avoid seeing activity that isn’t there. Researchers at the University of California found that, without properly correcting their statistics for the sheer number of comparisons involved, they could ‘see’ a blood flow response in the brain of a dead salmon while it was looking at different human faces – and won an IgNobel Prize for highlighting the dangers of shoddy stats.
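To see how easily shoddy stats can conjure activity out of nothing, here’s a toy simulation (the voxel count and thresholds are arbitrary choices of mine, not numbers from the salmon study). Every simulated ‘voxel’ is pure noise, yet without a multiple-comparisons correction a handful of them still look ‘significantly active’.

```python
# Toy version of the dead-salmon problem: every 'voxel' below is pure noise,
# yet without a multiple-comparisons correction some still look 'active'.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_voxels, n_scans = 10_000, 20

noise = rng.normal(0.0, 1.0, size=(n_voxels, n_scans))    # no real signal anywhere
t_stats, p_values = stats.ttest_1samp(noise, popmean=0.0, axis=1)

alpha = 0.001
uncorrected_hits = np.sum(p_values < alpha)
bonferroni_hits = np.sum(p_values < alpha / n_voxels)      # correct for 10,000 tests

print(f"'active' voxels, uncorrected:          {uncorrected_hits}")   # expect ~10 false positives
print(f"'active' voxels, Bonferroni-corrected: {bonferroni_hits}")    # almost always 0
```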

With all this in mind, it’s perhaps unsurprising that poorly carried out fMRI experiments have been dubbed the modern phrenology – the practice of comparing measurements of people’s skulls to infer personality traits. What is perhaps more surprising, though, is that despite the questions over the validity and accuracy of fMRI, it is being used for things beyond its more traditional remit. ‘No Lie MRI’ is a company in the U.S. that advertises the use of brain imaging to detect liars or untrustworthy individuals, whether they be potential politicians, investments or romantic interests. Brain imaging techniques including fMRI have even, controversially, been used as evidence in Indian courts of law.

Example of fMRI responses to painful heat to the forehead in a cohort of 12 subjects. ACC, anterior cingulate cortex; PCC, posterior cingulate cortex (Moulton et al., unpublished observations). Borsook et al. Molecular Pain 2007 3:25

There are, however, other emerging uses for fMRI that may improve its reputation. By watching live feedback of the blood flow going to the anterior cingulate and insula – two pain centres deep within the brain – sufferers of chronic pain can consciously train the blood flow to these parts of the brain. Christopher deCharms and his colleagues at Omneuron found that people who were given real, live feedback from their insula and cingulate, and who successfully learnt to train the blood flow within these regions, said they experienced less pain than usual. Conversely, people unwittingly shown dummy feedback (random fluctuations, or blood flow levels from an unrelated part of the brain) didn’t report any substantial pain relief.

Brain imaging techniques that rely on measuring blood flow around the brain need careful interpretation, yet fMRI remains heavily used and fashionable in brain research. Technology has come a very long way since the days of wobble boards, so we should probably count ourselves lucky that we don’t need a hole in our heads to unlock the further mysteries of the blood in our brains.

Post by Natasha Bray

 

How much of ‘life’ can be patented?

A fragment of a complementary DNA array. Photo by Mangapoco

Can patents give scientists or companies the rights to ‘life’? In June this year the US Supreme Court ruled that genes cannot be patented in the States. To say the ruling was controversial would be a massive understatement; the mixed decision has led to equally mixed reactions from the public, academics and pharma/biotech companies.

Without wanting to take sides, I think high-profile cases like this are brilliant because they get people talking about what may be owned by whom, where, and under which conditions. A lot of innovation and scientific discovery is paid for and protected by patents. And since scientists are becoming ever more creative with synthetic biology, I think the questions around whether ‘life’ is patentable are increasingly important.

So I have taken the liberty of compiling a list of need-to-know biological patent questions, each with a relevant example case, in a nutshell but in no particular order…

Can you patent a gene? Association for Molecular Pathology v. Myriad Genetics, 2013

In short, Myriad made kits that test for BRCA1 and BRCA2, two genes that are involved in a certain type of breast cancer. This meant that only Myriad or people who paid for the kit could test whether someone had a gene which increases the chance of developing breast cancer. Although the US Supreme Court’s ultimate decision was that naturally occurring genes can’t be patented in the States any more, there were other important rulings made in the same case. Complementary DNA (like the other side of a molecular zip), artificially-made DNA and gene chips are still patentable in the US, so there’s arguably still room for genetic test innovation.

Can you patent genetically modified organisms? Diamond v. Chakrabarty, 1980

Ananda Mohan Chakrabarty is a genetic engineer who modified the bacterium Pseudomonas putida to create a strain that can break down crude oil (useful for oil spills) and polystyrene (useful for recycling). Chakrabarty wanted to patent his invention, but his application was refused by the US Patent Office because it was thought that no one should be able to patent a living organism. Eventually the patent was allowed, because even though the bacteria are living, the strain is technically ‘human-made’.

Can you farm genetically modified crops? Monsanto v. Schmeiser, 2004

Monsanto genetically modify crops to be resistant to weedkillers like RoundUp (which they also produce). Percy Schmeiser, a Canadian crop farmer, found some canola plants growing on his land that were RoundUp-resistant, so he harvested the crop and planted it the next year. Since Monsanto sell RoundUp-resistant canola seed, they asked Schmeiser to pay for a licence so that he could use their invention on his land. Schmeiser refused, arguing that he hadn’t got the seeds from Monsanto, and the company eventually sued him. Four years later, Schmeiser managed to bill Monsanto $660 for clearing all the RoundUp-resistant canola from his fields. You win some, you lose some.

A) shows human embryonic stem cells; B) shows neurons derived from human embryonic stem cells. Image by Nissim Benvenisty

Who owns human embryonic stem cells? Bruestle v. Greenpeace, 2011

Greenpeace challenged Professor Bruestle over his new method of treating stem cells from human embryos so that they turned into ‘beginner’ nerve cells. This case led to a ruling by the European Court of Justice that no one in Europe can patent human embryonic stem cells (or techniques that use them) where obtaining them has involved destroying an embryo. This is based on an obvious moral argument – that no one should profit from destroying human embryos – but some argue that if a technique is legal, it should be patentable.

Can you patent methods of measuring life processes? Mayo v. Prometheus, 2012

Prometheus had the rights to sell a kit which allowed doctors to 1. give patients a drug for gastrointestinal disease, 2. measure how well it was working, and 3. work out whether to increase or decrease the dose. Mayo used to buy this kit from Prometheus, but then they stopped buying it and started making their own instead. Prometheus tried suing Mayo, but in court it was argued that steps 1 and 2 were pretty standard, and that step 3 was a logical decision based on a mathematical relationship, which can’t really be patented. The kit’s patent was revoked, though the case still has an impact today on research into personalised medicine. Here’s a (spoof) video that deals with some of the emotional quandary resulting from this case.

A neem tree.

Can you patent a species? Indian Government v. WR Grace, 2005

You could try to patent substances derived from naturally occurring species, but you might become hugely unpopular. In Europe a patent was granted for a fungicide derived from neem, an Indian tree used by locals for more than two thousand years for… well, its anti-fungal, medicinal properties. Once this was pointed out by the Indian Government, the patent was revoked.

I hope you have enjoyed this list*. Incidentally, while I don’t think Buzzfeed has patented the idea of creating lists, they have created a list of totally bizarre patents, which you may also enjoy. Cheers Buzzfeed.

Post by Natasha Bray

*I should point out now a) I am not a lawyer so none of the above is advice or guaranteed and b) patent law evolves and varies hugely between countries, so some of the items on this list may be ‘invalid’ (…so to speak).

Acne bacteria to blame for back pain?

What do acne and chronic back pain have in common? Well, as it turns out, more than people once thought. A group at the University of Southern Denmark have found that the same bacterium that gives people spots might be to blame in up to 40% of patients with chronic lower back pain. What’s more, these infections can be treated with antibiotics.

Slipped disc popping out from in between the evenly grey vertebrae

Your backbone is a column of alternating vertebrae (bones) and intervertebral discs (cushions). The bones provide the strength and support, while the cushion discs allow movement and flexibility. Occasionally, thanks to a mix of age and awkward movement, the disc can bulge out from between the bones. In some cases the jelly-like goo in the disc’s centre, called the nucleus, can even ooze out – a bit like thick jam leaking out of a doughnut. If the nuclear material or the disc itself puts pressure on nerves coming in and out of the spine, it can be even more painful.

Slipping a disc is, by all accounts, excruciating, but it usually starts to heal within 6-8 weeks. However, someone can be diagnosed with chronic back pain (CBP) when the pain doesn’t subside after three months. Trouble is, this happens all too often, with an estimated 4 million people in the UK suffering from CBP at some point in their lives. The cost of CBP to the NHS is about £1 billion per annum – and that doesn’t even cover lost working hours or the loss of livelihood suffered. Treatment usually focuses on relieving pain and preventing inflammation and, more recently, on cognitive behavioural therapy to address the patient’s psychology, especially if the organic, physical cause of the pain is no longer obvious.

Recently, scientists in Denmark found a really important link between the bacteria responsible for acne, known as Propionibacterium acnes (P. acnes), and bad backs. The researchers found that in about half of their patients with slipped discs, the disc itself was infected, usually with P. acnes. A year later, 80% of the infected patients – compared to 43% of the uninfected patients – had dodgier bones on either side of the slipped disc than 12 months before. The affected bones had developed tiny fractures, and the bone marrow had been replaced with serum, the liquid found in blisters.

Acne is not to blame for bad teenage hairstyle choices.

So how did the discs get infected? Bacteria like P. acnes get into our bloodstream all the time, particularly when we brush our teeth or squeeze spots. P. acnes and other similar bacteria don’t like oxygen-rich environments and so don’t normally grow inside us. The spinal disc doesn’t have a lot of oxygen around, providing a perfect home for the bacteria. If the disc is damaged – say, after popping out from the spinal column – tiny blood vessels sprout into it, letting the bacteria move in and settle down.

There, the bacteria grow and, rather than spread anywhere else, they spit out inflammatory chemicals and acid. The acid corrodes the bone next to the disc and causes more swelling and pain around the area. This discovery is ground-breaking, since before this research it was thought that discs couldn’t get infected except in a few exceptional cases.

The Danish researchers then conducted a second study, testing whether simple antibiotics could get rid of these bacteria and therefore treat chronic lower back pain. Patients who already had the characteristic signs of bone inflammation (tiny fractures and swelling) were given a 100-day course of antibiotics and were reassessed a year after the trial began. Patients treated with antibiotics reported less pain, less ‘bothersomeness’ (yes!), took fewer days off work, made fewer visits to the doctor and, crucially, their bones looked in much better nick than those of the patients given a placebo.

Considering the huge numbers of people affected by chronic back pain, and the cost of treatments like surgery compared with a course of antibiotics, this discovery has been glorified as the stuff of Nobel prizes. The revelation that bacteria may be to blame for some cases of this mysteriously untreatable condition rings familiar: it has been likened to the discovery of the culprit bacterium behind stomach ulcers, Helicobacter pylori. Like back pain today, stomach ulcers were dismissed for years as a disease of the mind, endemic among stressed-out melodramatics or people who ate too much spicy food. (And yes, Barry Marshall did get a Nobel Prize for swallowing a Petri-dishful of H. pylori.) It would be fantastic if, instead of resorting to surgery, half a million CBP patients could be effectively cured within 100 days or less!

The bacteria in the plate on the right have become resistant to many of the antibiotics (the white discs) and so are more widespread.
Photo by Dr. Graham Beards

Unfortunately, there is a downside. Antibiotics have long been the magical cure-all, but just like fossil fuels, housing and talent on TV, we’re running out. Bacteria are becoming resistant to antibiotics faster than we can create new, effective ones. It’s an arms race and we’re losing, very quickly. What’s worse is that because of the recent negativity surrounding over-prescription, there are now restrictions on giving patients broad spectrum antibiotics. Since antibiotics can’t be used as much as they were 30 years ago, pharmaceutical companies can’t make any profit from developing new ones. And so, to further compound the problem of antibiotic resistance, there are fewer and fewer antibiotics being created every year.

In 2000 alone, UK doctors made 2.6 million prescriptions of antibiotics for acne. One study by a group in Leeds looked at the number of acne patients carrying P. acnes strains resistant to at least one type of anti-acne antibiotic. Between 1991 and 2000, the fraction of acne patients with antibiotic-resistant bacteria rose from about a third to more than a half.

The discovery that acne bacteria might be to blame for so many cases of debilitating back pain is hugely important. However, it also highlights how dependent we are on our dangerously dwindling supply of effective antibiotics, and how we might be wasting antibiotic effectiveness on comparatively trivial conditions such as spots.

Post by: Natasha Bray

News and Views: The Brain Activity Mapping Project – What’s the plan?

“If the human brain were so simple that we could understand it, we would be so simple that we couldn’t” – Dr. Emerson Pugh

Isabelle Abbey:

An ambitious project intended to unlock the inscrutable mysteries of nerve cell interactions in the brain is on its way. Labelled America’s ‘next big thing’ in neuroscience research, the ‘BRAIN’ (Brain Research through Advancing Innovative Neurotechnologies) initiative will use highly advanced technologies in an attempt to map the wiring of the human brain.

Cajal drew some of the billions of neurons in the human cortex… technology has come a long way since 1899

Also referred to as the ‘Brain Activity Map’ Project (BAM), the BRAIN initiative aims to decode the tens of thousands of connections made by each of the ~86 billion neurones that form the basis of the human brain. Scientists believe completing the map will be an invaluable step that may have huge implications for therapeutically tackling neurological pathology.

Moving forward in this manner does seem particularly appropriate. For the past 10 years, we have been reaping the benefits of technologies like fMRI and PET scanning, which have allowed us to visualize the brain in a way that has never been done before. From measuring behaviours to diagnosing abnormalities, the contribution of neuroimaging to our understanding of brain physiology and pathology is undeniable.

Paul Alivisatos, the lead author of the paper detailing the BAM proposal, aims to develop novel toolkits that can simultaneously record the activities of billions of cells in the live brain, rather than in brain slices. Eventually, these technologies should allow an accurate depiction of the flow of information in the human brain, and of how this may differ in pathological states such as Alzheimer’s or autism.

Despite the daunting nature of the task at hand, this proposal has been met with much political enthusiasm. On 2nd April Barack Obama announced the American Government would be backing the project by approving a $100m funding budget for its first year of operation.

The humble nematode worm, 1mm long

But might this project need some grounding? After all, Alivisatos and his co-authors have yet to establish the basis on which such tools can be developed, or the extent to which these technologies could be used. Years of extensive research concentrated on mapping the wiring of a simple nematode worm – whose nervous system consists of only a few hundred cells – have yet to allow us to accurately predict the worm’s behaviour. So some scepticism does seem reasonable.

While we must be cautious in predicting ambitious benefits from such a project, the map Alivisatos and his colleagues have envisaged gives reason enough to be hopeful for the next decade of our neuroscientific appreciation of human cognition.

Natasha Bray:

As a neuroscience researcher, I can’t help but take an interest in the BRAIN initiative proposed by President Obama earlier this month. It’s a massive pot of cash designed not only to further the neuroscientific knowledge base, but also to create jobs and technologies that can’t even be described yet. As Izzy mentions above, the project is an ambitious and important undertaking that merits the now fashionable label of ‘big science’.

The BRAIN initiative is funded by a big pot of money drawn from different sources, including DARPA (the Defense Advanced Research Projects Agency), the National Science Foundation, the National Institutes of Health, Google and various other institutes and charities.

So far, even defining the project and choosing suitable methods has been a challenge. The research leaders have proposed “to record every action potential from every neuron within a circuit”. Bear in mind that action potentials (nerve impulses) last only a couple of thousandths of a second, while a single circuit may encompass many millions of cells. At the moment, neuroscientists can record action potentials from up to about 100 cells simultaneously. We can work out anatomical circuits; we just can’t record from every cell within them. There is not a single tool in neuroscience’s toolbox currently capable of gathering that kind of data (yet).
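To get a feel for the scale of the problem, here’s a back-of-the-envelope sketch. All of the numbers – a million-neuron ‘circuit’, a 10 kHz sampling rate, two bytes per sample – are illustrative assumptions of mine, not figures from the BRAIN proposal, but they show why the storage task quickly runs into quadrillions of bytes.

```python
# Back-of-the-envelope data volume for recording 'every spike from every neuron'.
# All numbers are illustrative assumptions, not figures from the BRAIN proposal.

n_neurons = 1_000_000          # a 'circuit' of a million cells
sample_rate_hz = 10_000        # ~0.1 ms resolution, enough to catch ~1 ms spikes
bytes_per_sample = 2           # e.g. 16-bit samples of each neuron's signal

bytes_per_second = n_neurons * sample_rate_hz * bytes_per_sample
bytes_per_day = bytes_per_second * 60 * 60 * 24

print(f"{bytes_per_second / 1e9:.0f} GB per second")                   # ~20 GB/s
print(f"{bytes_per_day / 1e15:.1f} petabytes (quadrillions of bytes) per day")  # ~1.7 PB/day
```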

There are, however, candidate techniques that could be improved or perhaps combined. Imaging techniques – including optical methods such as calcium or voltage imaging, and magnetic methods such as fMRI and MEG – can scan on different scales in both time and space. Neurons’ electrical activity can be recorded using silicon-based nanoprobes or very tightly spaced electrodes. Researchers have even suggested synthesising DNA that records action potentials as errors in the DNA strand, like a ticker tape. Advances in all these technologies are still being made, making them the most likely candidates.

Added to the difficult choice of method is the serious task of storing and analysing quadrillions of bytes of data, plus the fact that it’ll take about ten years just to complete an activity map of the fly brain. It’s clear there are significant hurdles to jump. Then again, no one said big science would be easy…or cheap. But the potential benefits of big science are huge. The Human Genome Project had a projected cost of $3 billion, but was completed within its budget and has already proved a huge investment both intellectually and financially. It’s famously estimated that for every dollar originally invested in the Human Genome Project, an economic return of $140 has already been made.

I see the BRAIN initiative as a very worthy cause, a good example of aspirational ‘big science’ and a great endorsement for future neuroscience. One gripe I have with it, however, is that it seems a little like Obama’s catch-up effort in response to Europe’s Human Brain Project (HBP). The HBP involves 80 institutions striving to create a computer infrastructure powerful enough to mimic the human brain, right down to the molecular level. Which raises the question: surely in order to build an artificial brain you need to understand how the real one is put together in the first place? I really hope that the BRAIN initiative and the Human Brain Project put their ‘heads together’ to help each other untangle the complex workings of the brain.

Your Brain on Lies, Damned Lies and ‘Truth Serums’

Pork pies, fibs, whoppers, untruths, tall stories, fabrications, damned lies… not to mention statistics.

Apparently, every person lies an average of 1.65 times every day. However, since that average is self-reported, maybe take the figure with a pinch of salt. The truth is, most people are great at lying. The ability to conjure up a plausible alternative reality is, when you think about it, seriously impressive, but it takes practice. From about the age of 3, young children are able to make up false information at a stunning rate of one lie every 2 hours – though admittedly the lies from a toddler’s wild imagination are relatively easy to identify.

When we lie, brain cells in the prefrontal cortex – the planning ‘executive’ of the brain – work harder than when we tell the truth. This may be reflected in the physical structure of our brains as well: pathological liars have been shown to have more white ‘wiring’ matter and less grey matter in the prefrontal cortex than other people. But how can we tell if someone is telling a lie or telling the truth?

Back in the day – 2000 years ago – in ancient India, people would use the rice test to spot liars. When someone is lying, their sympathetic (‘fight or flight’) nervous system goes into overdrive, leading to a dry mouth. If you could spit out a grain of rice, you were seen to be telling the truth. If your mouth was parched and you couldn’t spit the grain out, you were lying. Since then, several different methods of catching out liars have been used – to varying levels of success.

In several books and films (Harry Potter, True Lies, The Hitchhiker’s Guide to the Galaxy and many more), a ‘truth serum’ is used to elicit accurate information from the recipient. In actual fact, however, truth serums don’t exist; outside fiction, the name is an ironic misnomer. Having said that, scientists have tried for decades to develop a failsafe ‘veritaserum’ in order to catch out liars.

Alcohol has been used as a sort of lie preventer for millennia, as the Latin phrase ‘in vino veritas’ (in wine [there is] truth) demonstrates. Alcohol acts in the brain by increasing the activity of GABA channels, leading to a general depression of brain activity. This has been thought to suppress complex inhibitions of thoughts and behaviours, loosening the drinker’s tongue. However, drinking alcohol doesn’t prevent people from giving false information, and it by no means prompts people to tell ‘the truth, the whole truth and nothing but the truth’.

A drug called scopolamine was used to sedate women during childbirth in the early 20th century, when a doctor noticed that women taking the drug would answer questions candidly and accurately. Scopolamine blocks acetylcholine (muscarinic) receptors rather than acting on GABA like alcohol does, but the end result is much the same: a person intoxicated with the drug is just as likely to give false information as someone who’s had a few stiff drinks.

Barbiturates, such as sodium amytal, are sedatives that work on the brain in a similar way to alcohol – by interfering with people’s inhibitions so that they spill the beans. Sodium amytal was used in several cases in the 1930s to interrogate suspected malingerers in the U.S. army, but the drug does not prevent lying and can even make the recipient more suggestible and prone to making inaccurate statements.

In the 1950s and 60s, the CIA’s Project ‘MK-ULTRA’ tested drugs such as LSD on unconsenting adults and children. Had LSD proved a reliable truth serum, it would have been an invaluable tool in the Cold War; instead, the tests showed that LSD was far too unreliable and unpredictable to use in interrogation.

Despite the repeated lack of success in the search for a ‘truth serum’, scientists have continued trying to develop alternative technologies for busting liars. The polygraph, used by respected institutions including the CIA, FBI and The Jeremy Kyle Show, measures changes in arousal – heart rate, blood pressure, sweating and breathing rate – in order to detect deception. However, there is a lot of scepticism surrounding polygraphy. In particular, there are several hacks to avoid getting caught out by a polygraph – most notably biting your tongue, doing difficult mental arithmetic, or tensing your inner anal sphincter without clenching your buttocks (thanks for that factual gem, QI).

The improvement of brain imaging methods – in particular functional magnetic resonance imaging or fMRI – has extended the scope of detecting liars. On the internet, one might stumble across ‘No Lie MRI’, an American firm that offers a lie detection service for individuals, lawyers, governments and corporations. They claim that this service could be used to “drastically alter/improve interpersonal relationships, risk definition, fraud detection, investor confidence [and] how wars are fought.”


Currently James Holmes, the man charged with injuring 70 people and killing 12 in the Batman cinema shooting in Aurora, Colorado, is on trial. The judge has ruled that he may be required to consent to a “medically appropriate” narcoanalytic interview and polygraph. That is, Holmes could be interviewed under the influence of sodium amytal or other similar drugs in order to determine whether or not he is feigning insanity. The use of these drugs may contravene the 5th Amendment of the U.S. Constitution – the right to remain silent. Clinical psychiatrist Professor Hoge says, “The idea that sodium amytal is a truth serum is not correct. It’s an invalid belief. It is unproven in its ability to produce reliable information and it’s not a standard procedure used by forensic psychiatrists in the assessment of the insanity defence, nor is polygraph.”

The potential benefits of a 100% reliable, valid method of lie-detection are obvious, although there are ethical grey areas that scientists and the legal/ethical community would need to tackle if the technology is ever found. For now I think the evidence for using current lie detection methods, especially for anything more serious than The Jeremy Kyle Show, is far too sparse.

Post by Natasha Bray