NEWS AND VIEWS: Standing Up For Science – Improving the relationship between science and the media

Historically, scientists and journalists have never really got along. In general, scientists tend to be a little … mistrustful of a journalist’s ability to portray their research accurately to a wider audience. In return, journalists may find that scientists can be difficult to work with, and that the research they present can be confusing or complicated. But the two need each other: scientists need journalists to get the message about their research across, and newspapers like to print science stories because their readers are interested in science.

Knowing that the science/media relationship can be somewhat antagonistic, the charity Sense About Science has set up a series of workshops as part of their “Voice of Young Science” section. The aim is to help foster a better relationship between early career scientists and the journalists that report scientific stories. These workshops encourage scientists to stand up for themselves and their subject by responding to misinformation or dubious claims in all kinds of media.

I was lucky enough to be able to attend the recent Voice of Young Science media workshop at The University of Manchester. The day was split into several panel discussions; the first involved scientists discussing their experiences with the media – both good and bad – and advising how to get the best out of their situation. Amongst the speakers was Professor Matthew Cobb, from the University of Manchester, one of the advisors on the BBC’s recent “Wonders of Life” series (this counted as one of his “good” experiences!). Professor Cobb’s main advice was to “Just say yes”, because nothing will happen if you say “No”. It may not turn out as well as you’d hoped, but the experience will still be valuable.

It’s easy for scientists to be scared about the way their results may be interpreted by the media. These fears are illustrated by a horror story from another panel member, the evolutionary biologist Dr Susanne Shultz. Dr Shultz had found that the brains of social animals have tended to increase in size over evolutionary time, whereas the brains of solitary animals have remained more or less the same. A misunderstanding somewhere along the line meant it was reported that she had discovered that dogs (social animals) were more intelligent than cats (solitary ones). These were not her results, and she had to repair quite a lot of damage. However, whilst Dr Shultz had a horrible time dealing with the misinterpretation of her research, she didn’t think it had done permanent damage to her scientific credentials, which was a relief to hear.

Another panel consisted of people on the media side of the equation, including the science journalist David Derbyshire, Radio 5 Live producer Rebekah Erlam, and Morwenna Grills, the press officer for the Faculty of Life Sciences here in Manchester. There was a sharp intake of breath when Derbyshire admitted that he has written for certain tabloids which are not particularly well regarded for their science reporting! However, he raised some very good points that I’d never thought about before. The one that stuck with me was that the turnaround time for getting a story into a newspaper is incredibly short: you’ve got to investigate the story, track down those involved, write it and send it off, sometimes in the space of a few hours. This is not ideal, as science stories in particular need proper research to make sure you thoroughly understand them, and that takes time. But what do you do if that time is not available to you? And if your piece is sub-edited into something different, is there much you can do about it?

The thing that struck me most about the workshop is that scientists and journalists really need to communicate with each other more effectively. Without journalists reporting on scientific matters, scientific research would never reach the public consciousness, and when there is an important message to get across, that would be a very bad thing. Scientific breakthroughs are usually of great interest to the general public, whether it’s a potential cure for cancer or horsemeat in our burgers. Ideally it should be a trusting relationship, so that both sides get the best out of the arrangement, yet at the moment it is often the opposite. The good thing about workshops such as this one is that they help each side see the situation from the other’s point of view; I certainly feel a bit more understanding towards science reporters. Hopefully the journalists on the panel now feel more sympathy towards scientists and why they can be quite protective of their work. Perhaps more events like this can help to heal the rift between these two opposing factions.

For more information about the Voice of Young Science media workshops, please go to:

Diabetes and Alzheimer’s: Could overeating lead to dementia?

The number of people suffering from diabetes is on the rise. This rise runs alongside a worldwide increase in obesity: around 10 percent of the world’s population now suffers from diabetes, and 12 percent is considered obese.

Although we know bad eating habits increase our risk of developing diabetes, this doesn’t seem to be enough to make us ditch the junk! I know, despite diabetes running in my family, that when the stress piles up I always crave comfort foods. But new research might soon encourage me to change these eating habits. Yes, if the long-term risks of heart disease, blindness and nerve damage aren’t enough to make me snack less, the looming threat of Alzheimer’s may just do the trick.

Numerous studies have shown that people with type 2 diabetes are twice as likely to develop Alzheimer’s as the rest of the population. But why?

Alzheimer’s is a pretty complicated problem. In fact, a confident diagnosis can still only be made post-mortem. We know that in the late stages of the disease the brain is shrunken and riddled with clusters of misfolded proteins called plaques and tangles. But what we don’t really understand is why these proteins start to misbehave in the first place.

The emerging picture is of a complex patchwork of many factors: all of which can initiate a downward cascade toward Alzheimer’s disease. Now, diabetes seems to be forming another patch on this causation quilt.

Type 2 diabetes, the kind that can develop later in life, is brought about by a number of factors, including obesity, and leads to an imbalance in insulin production. In non-diabetics, insulin is released in response to rising blood glucose, causing cells around the body to absorb glucose from the blood; a process which is necessary for regulating carbohydrate and fat metabolism. Insulin can also cross into the brain, where it has been found to aid cognitive function.

Although it may seem counter-intuitive, the chronically high levels of blood insulin seen in many diabetics actually mean that less insulin crosses into the brain. This, combined with fluctuations in blood sugar, may explain why a number of diabetics report reduced cognitive function. But this is not the end of the story. Diabetes also has an effect on the metabolism of fat, leading to an overproduction of ceramides. These ‘waxy fats’ are released into the blood and cross into the brain. Once there, they cause brain insulin resistance and encourage inflammation.

It is believed that this mixture of insulin resistance and inflammation causes Alzheimer’s related proteins to collect in the brain and form plaques. In fact, scientists have recently discovered that inducing insulin resistance in the brains of mice and rats leads to both memory loss and accumulation of plaques.

This research certainly seems compelling, although within the scientific community the jury is still out on the exact role diabetes plays in the development of Alzheimer’s. I personally doubt that diabetes alone can be hailed as a causative factor for Alzheimer’s. However, if we connect the dots the two certainly seem to be linked, perhaps through overconsumption of fatty/sugary junk foods? Whatever the outcome, I know that this research will certainly make me think twice before reaching for the snacks in future!

Post by: Sarah Fox

Your Brain on Lies, Damned Lies and ‘Truth Serums’

Pork pies, fibs, whoppers, untruths, tall stories, fabrications, damned lies… not to mention statistics.

Apparently, every person lies an average of 1.65 times a day. However, since that average is self-reported, maybe take it with a pinch of salt. The truth is, most people are great at lying. The ability to conjure up a plausible alternative reality is, when you think about it, seriously impressive, but it takes practice. From about the age of three, young children are able to make up false information at a stunning rate of one lie every two hours – though admittedly the lies from a toddler’s wild imagination are relatively easy to identify.

When we lie, brain cells in the prefrontal cortex – the brain’s planning ‘executive’ – work harder than when we tell the truth. This may be reflected in the physical structure of our brains as well: pathological liars have been shown to have more white ‘wiring’ matter and less grey matter in the prefrontal cortex than other people. But how can we tell whether someone is lying or telling the truth?

Back in the day – 2000 years ago – in ancient India, people would use the rice test to spot liars. When someone is lying, their sympathetic (‘fight or flight’) nervous system goes into overdrive, leading to a dry mouth. If you could spit out a grain of rice, you were seen to be telling the truth. If your mouth was parched and you couldn’t spit the grain out, you were lying. Since then, several different methods of catching out liars have been used – to varying levels of success.

In several books and films (Harry Potter, True Lies, The Hitchhiker’s Guide to the Galaxy and many more), a ‘truth serum’ is used to elicit accurate information from the recipient. In actual fact, truth serums don’t exist: outside fiction, the name is a misnomer. Having said that, scientists have tried for decades to develop a failsafe ‘veritaserum’ in order to catch out liars.

Alcohol has been used as a sort of lie preventer for millennia, as the Latin phrase ‘in vino veritas’ (in wine [there is] truth) demonstrates. Alcohol acts in the brain by increasing the activity of GABA-A receptor channels, leading to a general depression of brain activity. This is thought to suppress complex inhibitions of thoughts and behaviours, loosening the drinker’s tongue. However, drinking alcohol doesn’t prevent people from giving false information, and it by no means prompts people to tell ‘the truth, the whole truth and nothing but the truth’.

A drug called scopolamine gained a reputation as a ‘truth drug’ when a doctor using it to sedate women during childbirth in the early 20th century noticed that the women taking it would answer questions candidly. Scopolamine actually blocks muscarinic acetylcholine receptors rather than acting on GABA, but its sedative, disinhibiting effects are comparable to alcohol’s, and a person intoxicated with the drug is just as likely to give false information as someone who’s had a few stiff drinks.

Barbiturates, such as sodium amytal, are sedatives that work on the brain in a similar way to alcohol – by interfering with people’s inhibitions such that they spill the beans. Sodium amytal was used in several cases in the 1930s to interrogate suspected malingerers in the U.S. army, but the drug does not prevent lying and can even make the recipient more suggestible and prone to making inaccurate statements.

In the 1950s and 60s, the CIA’s Project ‘MK-ULTRA’ tested drugs such as LSD on unconsenting adults and children. Had LSD proved a reliable truth serum, it would have been an invaluable tool in the Cold War; instead, the tests showed that it was far too unreliable and unpredictable to use in interrogation.

Despite the repeated lack of success in the search for a ‘truth serum’, scientists have continued to develop alternative technologies for busting liars. The polygraph, used by respected institutions including the CIA, FBI and The Jeremy Kyle Show, measures changes in arousal – heart rate, blood pressure, sweating and breathing rate – in order to detect deception. However, there is a lot of scepticism surrounding polygraphy. In particular, there are several hacks to avoid getting caught out by a polygraph – most notably biting your tongue, doing difficult mental arithmetic, or tensing your inner anal sphincter without clenching your buttocks (thanks for that factual gem, QI).

The improvement of brain imaging methods – in particular functional magnetic resonance imaging or fMRI – has extended the scope of detecting liars. On the internet, one might stumble across ‘No Lie MRI’, an American firm that offers a lie detection service for individuals, lawyers, governments and corporations. They claim that this service could be used to “drastically alter/improve interpersonal relationships, risk definition, fraud detection, investor confidence [and] how wars are fought.”


Currently James Holmes, the man charged with injuring 70 and killing 12 at the Batman cinema shooting in Aurora, Colorado, is on trial. The judge has ruled that Holmes may be subjected to a “medically appropriate” narcoanalytic interview and polygraph: that is, Holmes could be interviewed under the influence of sodium amytal or other similar drugs in order to determine whether or not he is feigning insanity. The use of these drugs may contravene the Fifth Amendment right to remain silent. Clinical psychiatrist Professor Hoge says, “The idea that sodium amytal is a truth serum is not correct. It’s an invalid belief. It is unproven in its ability to produce reliable information and it’s not a standard procedure used by forensic psychiatrists in the assessment of the insanity defence, nor is polygraph.”

The potential benefits of a 100% reliable, valid method of lie-detection are obvious, although there are ethical grey areas that scientists and the legal/ethical community would need to tackle if the technology is ever found. For now I think the evidence for using current lie detection methods, especially for anything more serious than The Jeremy Kyle Show, is far too sparse.

Post by Natasha Bray

Cancer – when good cells go bad.

Cancer is an illness that will unfortunately affect most of us at some point in our lives – either directly or through someone we care about. The remarkable thing about cancer is that although in many ways the disease acts like a foreign invading body, it is actually our own cells that have started to misbehave. When we look at how cancer cells operate they can seem crafty, clever and at times downright evil. Of course they’re not. They’re cells – unable to think or have any emotion-like behaviour. What allows cancer cells to behave the way they do is actually the same process that allowed human beings to evolve from single-celled, swamp-dwelling amoebae: genetic variation and adaptation.

Most cells in our body behave the way they should. When they get signals from the tissue surrounding them telling them to multiply, they will divide into two new cells; when they get old or damaged, they will kill themselves in a cell-suicide process called apoptosis. Cells are very altruistic in this way; they even have the good grace to package all their remnants into little membrane-bound sacks that other cells can come along and chow down on. Cancer cells are not altruistic. While normal cells function solely to benefit the organism as a whole, cancer cells have their own agenda: to stay alive and keep dividing. The problem for our body is that when a cancer cell goes forth and multiplies uncontrollably, a mass of cells forms, and that mass is a tumour.

Cancer cells don’t set out to become harmful; the process is random. One of the first steps in a cell becoming cancerous can be losing control over its division. If genes that control cell division are mutated, cells may start to divide randomly and more often. But these types of mutations alone are not usually enough to cause cancer, and other adaptations are needed.

Potential cancer cells become really dangerous when they not only divide in an uncontrolled way but also fail to recognise when they need to commit suicide. One of the main reasons a cell might normally commit suicide is that it has miscopied its DNA. Before a cell divides it has to produce an identical copy of its entire DNA, so that each of the two resulting ‘daughter’ cells will have a full set of genes. This is no easy task for the cell: there is an estimated two metres of DNA crammed into each one. There’s a lot of room for mistakes, and they do happen. Normally the cell will detect a mistake and either rectify it or, if that’s not possible, commit suicide. If the genes that control this detection or suicide process are mutated, the cell will not kill itself and will pass faulty genes on to the two new cells. This means you now have cells that will continue to divide and easily accrue mutations, which is very bad news indeed.

The body has natural ways to keep cancer in check; one of these is our immune system. Because cancer cells don’t act normally, our white blood cells often recognise them as different and attack them in the same way they would a bacterium or a virus-infected cell. It is thought that the immune system can keep cancer cells in check for years, but eventually a cancer cell might gain a genetic mutation that allows it to evade the immune system. If this happens, the cancer cell will thrive, multiply and produce many more cells that can avoid the immune response, meaning a tumour can grow more easily.

Within a tumour there is often very little blood supply and this means less oxygen and fewer nutrients reach the cancer cells, which can inhibit growth and even cause them to die. Unfortunately, some cancer cells gain mutations that allow them to release signalling molecules that encourage blood vessels to grow towards them. Once again, the adaptation through genetic changes helps the cancer cells to survive.

In the final stages of the transformation from a normal cell into a rogue cancer cell, the cancer cells often gain mutations that allow them to move. The cells that gain this ability can migrate out of the tumour to find pastures new. They move their way into the blood supply or lymphatic system and hitch a ride to the closest organ or tissue to set up a new colony. This is how cancer spreads through the body and tragically is one of the hallmarks of advanced cancer.

Cancer cells show the ability to adapt and thrive as individual cells at the expense of the body as a whole. The body’s natural safeguards against cancer act as environmental pressures, and the cells that adapt to these pressures through random genetic mutation go on to divide and pass their cancerous traits to more cells. Although understanding these processes shows how scary and seemingly persistent cancer cells are, it also helps us understand how the disease progresses.

The advances we have made in understanding cancer over the last 30–40 years are phenomenal, and they have given rise to drugs that target cancer cells at each of these stages in their development. There is still a long way to go in the treatment of cancer and, since every type of cancer carries a different set of mutations, there is no wonder ‘cure-all’ drug. Despite this, recent years have seen huge leaps forward in DNA analysis, and it is not inconceivable that in the near future patients will be able to have the DNA of their cancer cells analysed to work out a personalised treatment plan targeting their cancer specifically. Ultimately, our understanding of the cancer cell’s hyper-adaptability may hold the key to beating the disease altogether.

Post by: Liz Granger

Twitter: @Bio_Fluff

What is consciousness? A scientist’s perspective.

We all know what consciousness is. We can tell when we’re awake, when we’re thinking, when we’re pondering the universe, but can anyone really explain the nature of this perception? Or even what separates conscious thought from subconscious thought?

Historically, any debate over the nature of consciousness has fallen to philosophers and religious scholars rather than scientists. However, as our understanding of the brain increases, so does the number of scientists willing and able to tackle this tricky subject.

What is consciousness?

A good analogy of consciousness, based on work by Giulio Tononi, is explained here. Imagine the difference between how your brain and a digital camera deal with the image of an apple. The raw image is the same whether on a camera screen or in your head, but the camera treats each pixel independently and doesn’t recognise an object. Your brain, however, will combine parts of the image to identify an object: that it is an apple, and that it is food. Here, the camera can be seen as ‘unconscious’ and the brain as ‘conscious’.
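To make the analogy concrete, here is a toy sketch in Python (my own illustration, not Tononi’s actual model – all the names and features are made up for the example): the ‘camera’ records each feature of the image independently, while the ‘brain’ integrates them into a single percept.

```python
# A toy "image" broken down into features.
image = {"colour": "red", "shape": "round", "stem": True}

def camera(image):
    # Each feature is recorded separately; nothing links them together.
    return [(feature, value) for feature, value in image.items()]

def brain(image):
    # Features are combined and interpreted as one integrated object.
    if image["colour"] == "red" and image["shape"] == "round" and image["stem"]:
        return "an apple (food)"
    return "unknown object"

print(camera(image))  # independent records, no recognition
print(brain(image))   # a single, integrated percept
```

The camera’s output carries exactly the same raw information as the brain’s input; the difference is only in whether the pieces are linked together.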

The bigger the better?

This example works as a simple analogy of how the brain processes information, but doesn’t explain the heightened consciousness of a human in comparison to, say, a mouse. Some people believe that brain size is linked with consciousness. A human brain contains roughly 86 billion neurons whereas a mouse brain contains only around 75 million (over a thousand times fewer). A person might then argue that it is because our brains are bigger and contain more nerve cells that we can form more complex thoughts. While this may hold to a certain extent, it still doesn’t really explain how consciousness arises.

To explain why brain size isn’t the only thing that matters, we need to consider the brain in terms of the different structures and areas it consists of, not just as a single entity. The human cerebellum, at the base of the brain, contains roughly 70 billion neurons, whereas the cerebral cortex, at the top, contains roughly 16 billion. If you cut off a bit of your cerebellum (don’t try this at home), you might walk a bit lopsided, but you would still be able to form conscious thoughts. If, however, you cut off a bit of your cortex, the outermost folds of the brain, your conscious thought would be severely diminished and your life drastically impacted. So the sheer number of brain cells doesn’t necessarily relate to conscious thought.


Linking information

As a general rule, the more primal areas of the brain, such as the brain stem and cerebellum, act a bit like the camera: they are purely responsible for receiving individual pieces of information from our sensory organs and don’t link this information together. As you move higher up the brain, links form between different aspects of our sensory experience. This linking begins in mid-brain structures (such as the thalamus), and the links are then made more intricate and permanent in the cerebrum.

Tononi believes that it is this linking of information that forms the basis of consciousness. As cells become more interlinked, information can be combined more readily, allowing more complicated thoughts. The more possible links between cells, the more possible combinations there are, and therefore the greater the number of possible ‘thoughts’.

There may be more neurons in the cerebellum than the cerebrum, but because they are not as extensively linked to each other, they cannot form thoughts as complicated as the cerebrum’s. When information is relayed upwards from the cerebellum, it is passed to neurons that have more connections and can therefore make more abstract links. Perhaps a neuron responsible for signalling the colour red links with a neuron representing a round object, giving you the notion of a red apple. Multiply this process up a few levels and cells soon hold a lot of combined information – smell, taste, colour and so on all come together to create your representation of the apple.

Too much connectivity

So it’s the number of connections that matter? The more connections the better? Well no, sadly it’s not quite that simple. The cells at the higher levels need to be highly interconnected but if all the cells in the brain were too interconnected then you would really be back to square one, where the whole system is either on or off. All the cells fire, or none of them do. Here, you lose all specific information and your brain doesn’t know whether it is red or round or anything, it just knows there’s something. Because along with your red apple cells, all your blue cells will fire, all your bicycle cells will fire and so on, meaning you’ll get no clear information about the apple whatsoever.

The key is that cells at the basic level need to be focused, their message undiluted by conflicting information. They then pass their message up to a more connected cell that combines it with other information before passing it up a level, and so on and so forth. Now we have an entity that can build up complicated information from small pieces. According to Tononi, it is the ability to combine lots of information efficiently that yields the ability to analyse abstract concepts, and thus gives us ‘consciousness’.
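A crude numerical sketch (again my own illustration, not a real integrated-information calculation) shows why both extremes fail. With no connections, a four-neuron network can express many distinct activity patterns, but nothing binds them together; with total interconnection, every neuron copies every other, so the whole network collapses to just two patterns – all on or all off.

```python
from itertools import product

N = 4  # a toy network of four neurons

# Regime 1 - no connections: each neuron fires independently, so the
# network can express 2**N distinct activity patterns, but no pattern
# links features together (differentiated, not integrated).
independent_states = len(list(product([0, 1], repeat=N)))

# Regime 2 - total interconnection: every neuron mirrors every other,
# so the only reachable patterns are all-on and all-off
# (integrated, not differentiated).
synchronised_states = len({(state,) * N for state in (0, 1)})

print(independent_states)   # 16 possible patterns, none linked
print(synchronised_states)  # 2 patterns, no specific information
```

On Tononi’s account, consciousness requires sitting between these extremes: enough connectivity to bind information together, and enough independence to keep each message specific.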

How do we become unconscious?

The true test of any theory of consciousness is whether it can also explain a loss of consciousness. Tononi believes that unconsciousness is brought on when the system becomes fragmented and connectivity in the brain decreases. This is exactly what seems to happen in deep, dreamless sleep or under general anaesthetic. Normally, when we are awake and alert, fast activity is found all over the brain and signals pass easily between areas. In deep sleep, however, the brain moves to a state where signals cannot easily pass between different areas. Tononi believes that the cells temporarily shut off their connections with each other in order to rest and recuperate, thereby losing interconnectivity and the associated higher thought processes.


While it may seem a stretch to suggest that consciousness is purely a state of high interconnectivity, what Tononi has done is present the beginnings of a tangible scientific theory, backed by evidence that interconnectivity is crucial for higher brain power. The question of why we can form conscious thoughts is more of a philosophical one, but the scientific view seems to be that consciousness is a fundamental property of our brains. Evolution has made our brains highly efficient at processing complex information, giving us a vast repertoire of possible thoughts – a repertoire that has expanded to such an extent that we can now debate our very existence and purpose. Whatever you believe about the reasons behind consciousness, scientists are beginning to have their say about what rules may govern it in the brain.

Post by: Oliver Freeman @ojfreeman