Will 2013 be the year of the celebrity scientist?

The end of a year is always a good time to reflect on your life: to see what you’ve achieved and plan for the year ahead. For example: this year I attempted to cycle 26 miles in a single day across the Peak District, despite not having used a bike since the 90s. Next year I intend to buy padded shorts before I even go near a bicycle. Oh, and I also intend to get a PhD, but that’s no biggie (gulp).

But what sort of year has it been for the relationship between science and the public? I’d say it’s been a good one.

It may just be that I’m more aware of the science-communication world, having finally given in and joined Twitter, but I can sense that science is losing its reputation as a refuge for the über-nerdy and slowly working its way into the mainstream. Scientific stories and issues are being reported more frequently in the media and there appears to have been a wealth of science-related shows on TV. Whether it’s Professor Brian Cox pointing at the sky, the US-based shenanigans of the nerds in The Big Bang Theory or priests taking ecstasy in the name of science live on Channel 4, science certainly seems to be taking centre stage.

This year appears to have brought about a shift in attitude towards science and scientists. This change has no doubt been helped by personalities such as Professor Brian Cox, Professor Alice Roberts and comedian and science presenter Dara O’Briain. They have helped make science a bit cooler, a bit more interesting and, above all, a bit more accessible. Perhaps because of this, the viewing public now seem more willing to adopt a questioning and enquiring attitude towards the information they are given. We now often hear people asking things like: how did they find that out, where is the evidence for this claim, how many people were surveyed to get that result and has that finding been replicated elsewhere?

The effect of this popularity, at least amongst the younger generation, can be seen in university applications. Despite an across-the-board drop in applications, the fall in students applying to study science has been much smaller than for other subjects. Applications for biological sciences dropped a mere 4.4%, compared with subjects such as non-European languages, which fell 21.5%. Biological science was also the fourth most popular choice, with 201,000 applicants, compared with 103,000 for law (which dropped 3.8%). Physical science, which is in vogue at the moment mostly due to the aforementioned Prof. Cox and the Big Bang Boys, fell by a measly 0.6%. (Source).

However, it’s not just the public who have shown a shift in their attitudes. Scientists are getting much better at communicating their views and discoveries. There is now a huge range of science blogs managed and maintained by scientists at all stages of their careers (for some outstanding examples see here). Twitter is also stuffed full of people who have “science communicator” in their bio, ranging from professional communicators such as Ed Yong to PhD students and science undergraduates. This increased willingness of scientists to communicate their work has probably contributed in a big way to the shift in public feeling towards them. They are proving that, contrary to the stereotype, scientists are perfectly able to communicate and engage with their fellow humans.

Universities are also showing a shift in attitude towards better communication between scientists and the rest of the world. The University of Manchester has now introduced a science-communication module to its undergraduate Life Sciences degree courses, and several universities offer specialised master’s degrees in science communication. People already involved in science communication, such as the Head of BBC Science Andrew Cohen, are also touring universities giving lectures on how to get a science-related job in the media. This means that current undergraduates and PhD students are learning to communicate their discoveries alongside actually making them.

So what does this mean for the coming year? Our declining interest in singing contests and structured reality shows appears to be leaving a void in our celebrity culture. Will our new-found enthusiasm for ‘nerds’ mean that scientists could fill this gap? Will Brian Cox replace Kate Middleton as the most Googled celebrity in 2013? Will we see ‘The Only Way is Quantum Physics’ grace our screens in the new year?

The answer to these questions is still likely to be no. Whilst science does seem to be in vogue at the moment, scientists themselves don’t often seek out the limelight, perhaps due to their already heavy workload or the fact that being in the public eye does not suit their nature. Figures such as Brian Cox and Dara O’Briain are exceptions to the “scientists are generally shy” rule, both having been famous prior to their scientific debuts (as a keyboardist in an ’80s group and a stand-up comedian, respectively).

Another reason that scientists aren’t likely to be the stars of tomorrow is that science communication is notoriously hard. Scientists have to condense amazingly complex concepts into something someone with no scientific background can easily understand. Many scientists are simply unwilling to reduce their work to this level, arguing that such explanations are too ‘dumbed down’ and therefore miss the subtlety necessary to really understand the work. Unfortunately, if communicators are unable to simplify their explanations it often leaves the rest of the population (myself included if it’s physics) scratching their heads. As a cell biologist, I find the hardest aspect of communicating my work is knowing what level I’m pitching at – would the audience understand what DNA is? A cell? A mitochondrion? You want to be informative without being either confusing or patronising, which is incredibly hard, and not a lot of people can do it well. This doesn’t mean that scientists won’t try to get their voices heard, though. It may just be that they achieve this through less “in your face” media such as Facebook or Twitter, rather than via newspapers, magazines or TV.

My hope is that the current increased interest in science may help shape the type of celebrity culture we see gracing our screens. It may also help to get across the idea that it’s OK to be clever and be interested in the world around you. Maybe it’ll even become socially acceptable to be more interested in the Higgs boson or the inner workings of the cell than in seeing someone fall out of Chinawhite with no pants on.

Post by: Louise Walker

Why can’t we tickle ourselves while schizophrenics can?

Have you ever tried to tickle yourself? Try it: you’ll find that the feeling is nothing like the sensation you get when someone else tickles you. But why is this the case?

The simplest answer to this question is to assume that when you tickle yourself you’re expecting the sensation, so are less likely to react. However, functional magnetic resonance imaging (fMRI) has shown that activity in an area of the brain known as the somatosensory cortex is comparable both when subjects are tickled unexpectedly and when they are warned that they are about to be tickled. This provides evidence that the brain responds to an expected sensation in the same way as it does to an unexpected one, meaning that expectation alone cannot explain our inability to tickle ourselves.

The brain is constantly receiving sensory input (information about our experiences communicated by our physical senses) from everything we touch, see, hear, taste and smell. This constant barrage of information must be sorted and processed by the sensory systems of the brain in order for us to make sense of the world around us. Arguably, the most important feature of normal brain processing is the ability to identify and extract information about externally-induced changes in our environment. Therefore, in order to differentiate between spontaneous environmental changes and those we cause ourselves, the brain categorises self-produced movements as less significant than those initiated outside our bodies. Indeed, fMRI scans have identified increased activity in the somatosensory cortex in response to externally produced tickling (as used in the above study), compared with little or no change in activity when participants tickle themselves. These data suggest that activity in the brain differs in response to externally and internally produced stimuli, reinforcing the neurological basis for our ability to consciously distinguish between the two.

Cerebellum in purple

Research suggests that this ability to recognise a self-initiated movement may depend on a structure at the back of the brain known as the cerebellum. Circuits within the cerebellum have been termed the body’s ‘central monitor’ and may be the key to distinguishing between self-produced sensations and external stimuli. Neurons of the cerebellum can generate accurate predictions about the sensory consequences of self-produced movement. This system takes predictions about our movements and compares them with the actual sensory feedback produced by the action. The difference between the two is known as an ‘error signal’. If you attempt to tickle yourself, your internal ‘central monitor’ will accurately predict the sensory consequence because the movement is self-produced, so the error signal will be small or absent. In contrast, when someone else tickles you (even if you are aware it is going to happen), you will not be able to predict exactly what the sensory stimulation will feel like; that is, its position or strength. Therefore, there will be a difference between your brain’s prediction and the actual sensory feedback.
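
To make the comparison concrete, here is a minimal Python sketch of this forward-model idea. The function names, numbers and threshold are illustrative assumptions made up for this example, not values taken from the research described above:

```python
import numpy as np

def error_signal(predicted, actual):
    """The 'error signal': difference between predicted and actual sensory feedback."""
    return np.abs(np.asarray(actual, dtype=float) - np.asarray(predicted, dtype=float))

def feels_ticklish(predicted, actual, threshold=0.5):
    """A large average prediction error registers the touch as external (ticklish)."""
    return float(np.mean(error_signal(predicted, actual))) > threshold

# Self-tickle: the 'central monitor' predicts the touch almost perfectly.
prediction = [1.0, 0.8, 0.6]   # predicted position/strength of contact
self_touch = [1.0, 0.8, 0.6]   # actual feedback closely matches the prediction
print(feels_ticklish(prediction, self_touch))    # False - little error, no tickle

# External tickle: position and strength can't be predicted exactly.
others_touch = [0.2, 1.0, 0.1]
print(feels_ticklish(prediction, others_touch))  # True - large error signal
```

On this toy account, the schizophrenia finding described below corresponds to the prediction step failing, so that even a self-produced touch generates a large error signal.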

So it seems that you can’t tickle yourself? Well, at least this is usually the case. However, research has now stumbled upon a remarkable feature of schizophrenia showing that, unlike the rest of us, schizophrenics actually have the capacity to tickle themselves! It has been suggested that this phenomenon may be caused by neurological changes in the schizophrenic brain which disable the patient’s ability to detect self-initiated actions. It is possible that biochemical or structural changes in the brain cause a malfunction in the predictive system of the cerebellum. This results in a miscommunication of information concerning internally- vs. externally-generated actions. Essentially this means that, although the patient is able to process the intent to move and is aware the movement has occurred, they cannot then link the resulting sensation (the tickle) with their internal knowledge of having made the movement. This deficit in self-awareness or monitoring could therefore leave thoughts or actions isolated from the internal appreciation that the patient is producing them. Consequently, schizophrenic patients may misinterpret internally-generated thoughts and movements as external changes in the environment.

Our ability to control the magnitude of our responses based on prior knowledge of our own actions appears to have numerous advantages. These include the ability to distinguish between real external threats, such as a poisonous spider crawling up our leg, and those we create ourselves, for example resting our own hand on our leg. Indeed, recognising the difference between an external threat and a self-induced false alarm may, in some situations, be the difference between life and death. The multifactorial basis of the tickling sensation indicates a staggering complexity in central processing in the brain. Science is currently unravelling these complexities and, with luck, this research may lead both to a better understanding of disorders such as schizophrenia and to novel treatment strategies.

Post by: Isabelle Abbey-Vital

The Placebo Effect: A treatment of the mind?

When a patient known as Mr Wright was diagnosed with terminal lymphoma (cancer of the lymphatic system), the doctors battling to prolong his life were ultimately left with no option but to try a new ‘controversial’ anti-cancer drug – Krebiozen. Although doctors and scientists remained unconvinced about the drug’s effectiveness, Mr Wright was confident that it would lead to an improvement in his condition. Despite being bed-bound and in extremely poor health, just three days after his first treatment he had enough energy to get out of bed. After ten days his tumours had shrunk significantly in size, and he was well enough to go home.

So what was this mysterious drug, and what caused the remission of his symptoms?

Krebiozen was marketed and endorsed in the 1950s by several physicians who claimed the drug possessed anti-cancer properties. One study claimed that, of 22 patients with diagnosed terminal cancer, 14 remained alive after treatment. However, other scientists failed to reproduce these results and ultimately concluded that the drug was of no benefit to cancer sufferers. The US National Cancer Institute confirmed this conclusion after finding that the drug consisted of nothing more than simple amino acids and mineral oil, with no active ingredients.

This drug was actually a placebo! Placebo treatments are usually given to patients in the form of sugar pills, but can also include injections and sham surgery. The key to a placebo’s success is ensuring the patient believes the treatment will improve their condition: this belief alone can lead to a perceived, or even actual, improvement of the condition.

The so-called ‘placebo effect’ appears, at first glance, to make no sense whatsoever. How could a simple sugar pill alleviate serious medical symptoms? The answer may be as simple as a positive mental attitude. A patient’s mental well-being and perception of their illness may be an important factor in influencing their medical prognosis. This is a remarkable concept, but how is it biologically possible? One suggestion is that the notion of medical intervention creates a mental cue which acts to kick-start an immune response within our bodies, leading to self-healing. A similar response to seemingly unrelated external cues has in fact been observed in Siberian hamsters. When exposed to light levels which mimic winter days, the hamsters show a depression of their immune response. If, however, they are exposed to light mimicking summer days, the immune response increases and healing begins.

I was first inspired to write this article by a programme I watched a few weeks ago on Channel 4 titled Derren Brown: Fear and Faith. This show demonstrated the power of the placebo effect through a fake clinical trial. Subjects on this trial were given a placebo drug (Rumyodin) and were told that it could inhibit feelings of fear. Over the course of a few weeks, we saw each of the subjects overcome their respective fears, ranging from heights and confrontation to singing in public. The drug also proved effective as a cure for smoking and allergies. The strength of the placebo was enhanced by the very convincing story behind the drug’s development, including a fictitious pharmaceutical company and the use of doctors to administer the drug. The placebo’s extremely powerful effects were probably due to this attention to detail, which left subjects convinced that the treatment would work.

So if placebos can offer such amazing results without the need for any active ingredient, or the side effects these may bring, why are they not used more regularly?

The ability of placebos to alleviate symptoms is variable, both in how often they succeed and in the strength of the resulting symptom relief. Placebos appear to be more effective when symptoms are subjective, such as pain or nausea, and less effective for objective signs such as abnormal blood pressure or heart rate.

A recent study in the US has suggested that genes may also play an important role in deciding whether or not an individual responds to a placebo. Preliminary results indicate that, if a particular gene variant is present, individuals with irritable bowel syndrome are more likely to respond to placebo acupuncture. Whether this effect can be replicated for other conditions is unclear. However, these results do offer an explanation as to why some people are more susceptible to the placebo effect than others.

A study from 1985 hypothesised that the placebo effect relies heavily upon a belief that the medicine will make you feel better. Indeed, one study showed that the attitude of the prescribing doctor towards both the drug and the patient significantly altered the patient’s prognosis. In this study, patients’ response rate to a placebo rose from 44% to 62% when the doctor prescribing the treatment made a conscious effort to be positive.

This means that scientists are faced with a paradox when it comes to the use of placebos. Although there are clear ethical issues arising from their use, such as the dishonesty they introduce into the doctor-patient relationship, ethical issues also arise from NOT using placebos. Is it unethical not to use something that could help improve a patient’s health? Despite this, the UK Parliamentary Committee for Science and Technology considers the placebo effect unreliable and believes it should not be used as a sole treatment on the NHS. In contrast, a study of GPs in Denmark showed that 48% had prescribed placebos as treatment at least ten times over the previous year. Moreover, a study in 2004 found that approximately 60% of physicians in Israel had used placebos in their practice.

What is important to remember is that placebos are not a ‘one-size-fits-all’ cure that works for everyone. Their effects can be highly variable and often unreliable. Whilst some people respond positively to treatment with placebos, others experience no change in their condition. This positive effect appears to depend not only on the type of ailment the patient is suffering from, but also on their mental attitude towards the treatment. What is clear, however, is that a lot more research needs to be carried out to investigate exactly how and why placebos work, and why their success is so variable.

Post by: Sam Lawrence

Ketamine: from drug of abuse to anti-depressant.

Ketamine is probably best known as a recreational drug and horse tranquilliser. However, it also has a number of beneficial medical uses: it is routinely used as an anaesthetic, it is given in medical research to replicate the symptoms of schizophrenia, and current work suggests it may also be an effective treatment for depression. So what do we know about ketamine, and how could a popular psychedelic drug be used to treat a psychiatric disease?

Why take ketamine?

Ketamine is classed as a psychotomimetic: this means it can induce hallucinations, delusions and feelings of dissociation from the world around you (other examples of psychotomimetics include LSD and cannabis). These effects make ketamine a desirable recreational drug, but they are not the drug’s only action. It can also cause people to appear unresponsive or apathetic, severely disrupt memory and concentration and, at high doses, even lead to temporary paralysis or coma. Because these effects are similar to the symptoms of schizophrenia, it is commonly administered to rats (and healthy human volunteers) to study schizophrenia and test potential new drugs.

Ketamine produces its psychotic effects by disrupting the way the brain perceives the world and how it processes information. Specifically, it blocks the transmission of signals between groups of neurons which use the neurotransmitter glutamate (a chemical signal that neurons use to communicate). It does this by selectively interfering with the receptors on the cell which detect glutamate, acting preferentially in certain parts of the brain, such as the prefrontal cortex.

Prefrontal cortex (Orange)

The prefrontal cortex, at the front of the brain, is responsible for higher functions such as problem-solving, reasoning, understanding social interactions and controlling behaviour. This region is important for enabling us to understand the world around us and make decisions about how to interact with it. Activity in the prefrontal cortex is managed by ‘inhibitory’ neurons (cells that prevent other neurons becoming too excited). These cells control what signals the prefrontal cortex sends to other brain regions. Ketamine selectively targets these inhibitory neurons, making them less active. This allows activity in the prefrontal cortex to continue unchecked, leading to disorganised activity within the region and disrupted communication with other brain regions. The signals coming out of the prefrontal cortex make less sense, and more irrelevant information is transmitted.
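
As a rough illustration of this disinhibition, here is a toy firing-rate sketch in Python. The linear model and all of its numbers are assumptions invented for illustration, not parameters from any of the studies discussed:

```python
def cortical_output(excitatory_drive, inhibitory_activity, inhibition_weight=1.5):
    """Toy rate model: output is excitatory drive minus weighted inhibition, floored at zero."""
    return max(0.0, excitatory_drive - inhibition_weight * inhibitory_activity)

drive = 10.0                 # constant excitatory input to the prefrontal cortex
normal_inhibition = 4.0      # baseline activity of the inhibitory neurons
ketamine_inhibition = 1.0    # ketamine dampens the inhibitory neurons

print(cortical_output(drive, normal_inhibition))    # 4.0 - output kept in check
print(cortical_output(drive, ketamine_inhibition))  # 8.5 - unchecked, 'noisier' output
```

The point of the sketch is only that reducing inhibition raises the output without any increase in the input: the extra signal carries no extra information, which is one way to picture the transmission of irrelevant information described above.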

This extra transmission of irrelevant information may be crucial to understanding the basis of delusional thinking. If normally uninteresting stimuli are flagged up by the brain as important, this could prevent the brain from making sense of the world in a normal way, leading to bizarre or irrational beliefs and disorganised thoughts.

The effects of ketamine are fairly short-lasting: in clinical studies effects usually wear off within a few hours. Regular use may, however, have longer-lasting effects on the brain and reports of users developing psychological dependence have increased in recent years.

Ketamine for depression

Recently a number of clinical trials have found that a single low dose of ketamine can improve the symptoms of depression in a matter of hours. These trials used volunteers suffering from severe depression who had previously shown no improvement with traditional anti-depressants. Within 24 hours of a single dose, the percentage of patients showing a considerable improvement in mood varied across trials from 25% to an impressive 85%. Some patients showed improvements that lasted for weeks; however, for the majority the improvement lasted only a few days. Compared with commonly prescribed anti-depressants this is an incredibly fast response: most take weeks to act, and can actually worsen symptoms before they show beneficial effects. This is an exciting development, especially for the treatment of suicidal patients, where rapid treatment is essential. Importantly, side-effects are generally mild, because the dose is far lower than a typical recreational dose.

It isn’t clear what causes ketamine’s anti-depressant effects, but they aren’t thought to be a direct result of its alteration of neuronal communication. In healthy subjects, small amounts of ketamine don’t influence mood, indicating that it may act to correct some problem found specifically in depressed patients. One possibility is that it encourages new connections to form between neurons. This may be beneficial, since research has found reduced numbers of connections in the brains of depressed patients.

However, this research is still in its early stages. The next big challenge is to determine a schedule of treatment that maintains the short-term improvements seen after one dose. This may involve combining ketamine with other anti-depressants, or repeating the dose, although further research is still needed to assess the long-term effects of ketamine as an anti-depressant.

These exciting results suggest that in the future, ketamine may be famous not as a club drug or horse tranquilliser, but as a life-changing treatment for a devastating mental condition.

Post by: Claire Scofield

What can science add to the abortion debate?

Few topics generate such a passionate division in opinion as abortion, and ultimately there is no easy answer when choosing between an unborn child’s right to life and a woman’s right to freedom over her own body. However, after reading about the uproar caused by the tragic death of Savita Halappanavar, I knew I wanted to add my own voice to this debate, which left me pondering the following question: what can a science blog bring to the table when tackling a heated moral debate like this?

The answer, I believe, is something few mainstream sources address: the development of the brain and consciousness (as we understand it) in the growing fetus.

Of course, if you are of the opinion that ‘life begins at the moment of conception’ the emergence of consciousness is probably a moot point. However, according to recent statistics, more than 60% of UK adults, and of 18-35 year-olds in the Republic of Ireland, are pro-choice. This means that, under certain circumstances, they accept abortion as a viable option, raising a particularly difficult question: assuming abortion is, in theory, acceptable, is there a point during the pregnancy when it becomes unacceptable, and how do we decide where to draw this line?

Current UK legislation states that an abortion must be carried out during the first 24 weeks of pregnancy. However, guidelines also state that the procedure should ideally be performed before 12 weeks. Current legislation bases its ‘upper limit’ on the survival rate of premature babies, which is significantly reduced prior to 24 weeks (the percentage of babies successfully discharged from hospital after premature birth is 33.6% at 24 weeks, 19.9% at 23 weeks and 9.1% at 22 weeks).

Almost 90% of UK abortions are performed within the first 12 weeks of pregnancy. During this time there is no scientific doubt that the developing fetus is incapable of any form of conscious awareness. The fetal brain does not begin to develop until 3-4 weeks into the pregnancy, at which point it is little more than a hollow tube filled with dividing neurons. Between weeks 4 and 8 this neural tissue grows, forming the major divisions of the adult brain (forebrain, midbrain, hindbrain and spinal cord). By 8 weeks recognisable facial features have developed and the cerebral cortex has separated into two distinct hemispheres. By the end of the first trimester (12 weeks) nerve cells are beginning to form rudimentary connections between different areas of the brain. However, these connections are sparse and incapable of performing the same functions as an adult brain. So by 12 weeks, although the fetus is certainly starting to look like a little human, the neural circuits responsible for conscious awareness are yet to develop.

The first trimester is also the time when around three quarters of spontaneous miscarriages occur. Miscarriages are possible throughout the pregnancy and are much more common than most people realise. One in eight women who are aware of their pregnancy experience a miscarriage, with many more occurring before the woman is even aware she has fallen pregnant.

As the complexity of the fetal brain grows, forming structures similar to those we recognise in the adult, so too does the fetus’ ability to experience and respond to its environment. Indeed, studies have shown that from 16 weeks the fetus can respond to low-frequency sound, and by 19 weeks will withdraw a limb or flinch in response to pain. An observer would certainly think these responses look very much like the start of conscious awareness. However, during these early days the neural pathways responsible for converting sensations into conscious experiences have yet to develop. This means that what we are seeing are just reflexes, probably controlled entirely by the developing brainstem and spinal cord.

In fact, we know that the brain structures necessary for the conscious experience of pain do not develop until 29-30 weeks, while the conscious processing of sound only becomes possible after the 26th week. Even once the fetal brain possesses all its adult structures, scientists are cautious about assuming it possesses what we refer to as ‘consciousness’. This is mainly because low oxygen levels and a constant barrage of sleep-inducing chemicals from the placenta ensure that, until birth, the fetus remains heavily sedated.

Ultimately, although science cannot and should not try to answer the moral questions behind abortion, it can give us some amazing insights into how the brain develops. It seems that, in the womb, a fetus is unlikely ever to experience traditional consciousness. However, we do know that from the time neural pathways are in place (the last weeks before birth) the fetus can form rudimentary memories, meaning that after birth it can show a preference for its mother’s voice and for other sounds and smells experienced in the womb – yes, newborn babies show a liking for the smell of amniotic fluid.

Therefore, although the ‘upper limit’ on abortion remains relatively arbitrary, its current position at 24 weeks appears to fit well with both premature-birth survival rates and, in terms of neural development, a time before any major connections are in place – making it, in my eyes, a pretty good point at which to draw this line.

Post by: Sarah Fox