News and Views: The importance of vaccination

There has been a story in the news recently about a measles outbreak in Swansea and certain other areas of Wales. The outbreak is attributed to low uptake of the controversial Measles, Mumps and Rubella (MMR) jab among children, and it highlights the troubled relationship between the general public and vaccination.

The drop in the number of children receiving the MMR jab can probably be traced back to a 1998 news story. A paper published in the journal The Lancet claimed there was a link between the MMR jab and cases of autism and bowel disease. The study was led by Andrew Wakefield, a former surgeon, who argued that instead of the combined MMR jab, vaccinations should be administered as three separate doses, one for each disease.

However, there were several major problems with the science in the paper. No other scientists could replicate the link between the MMR jab and autism that Wakefield and his team claimed. An investigation by the journalist Brian Deer also revealed that Wakefield had a “conflict of interest”: he was being paid by a law firm trying to prove that the MMR jab was harmful. This should have been declared to The Lancet, but wasn’t, so his motives appeared to be more financial than scientific [1]. Eventually, after a long hearing, Wakefield was struck off the medical register in 2010.

A greater problem has arisen from all this. However dishonestly Wakefield behaved, his original claim was never that “all vaccinations are bad”; he claimed that one particular vaccine had a (since disproven) link to disease. However, it appears that some people have become generally mistrustful of all vaccines and worry that they all cause serious disease. For example, Michele Bachmann, a US congresswoman contending for the Republican nomination for president in 2012, claimed that the HPV vaccine led to mental retardation. This statement was not based on any scientific research into the HPV vaccine; she was repeating a claim made by a parent, who likewise offered no evidence. Other people, including celebrities, both here and abroad, have begun to claim links between some vaccines and diseases which have never been scientifically proven. This has led to a multitude of preventable illnesses and deaths because people are unsure about whether to be vaccinated or not.

Can vaccines be harmful? They do sometimes contain “attenuated”, or less virulent, versions of the disease-causing microbe to stimulate the immune system. In theory, this could lead to a vaccinated person getting the disease if the microbe reverts to virulence. However, vaccines are rigorously tested before being administered, so any side effects can be detected and assessed before the vaccine reaches the general population. If the side effects are too severe or the vaccine is not effective enough, it will not be administered. Occasionally things can go wrong, but the benefit of preventing these diseases generally outweighs the risks of using the vaccine.

The media has apparently made little attempt to rectify the public’s mistrust of vaccines. Whilst the original story about the link between MMR and autism was blasted across the front pages of the national papers, the subsequent retraction of the paper (in 2010) and Wakefield’s dismissal have not been as heavily reported. This means that people still have a vague recollection that “vaccinations are bad” and are not being vaccinated, because the story has been so poorly clarified. Unfortunately, this has led to several outbreaks of measles, as well as of other diseases, such as whooping cough, that can be prevented by vaccination.

It is important that as many people as possible get vaccinated. When enough of the population is vaccinated against a certain disease, the spread of that disease is limited. This protects people who have not been, or cannot be, vaccinated. This concept is known as “herd immunity” but, for it to work, a large proportion of the population needs to be vaccinated. This proportion is called the “herd immunity threshold” and, for a highly infectious disease such as measles, may need to be as high as 95% of the population.
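
As a rough, back-of-the-envelope illustration (my own sketch, not part of the original post), the herd immunity threshold is often estimated from a disease’s basic reproduction number R0 – the average number of people one infected person goes on to infect in a fully susceptible population – using the formula threshold = 1 − 1/R0. The R0 values below are approximate, commonly quoted figures used purely for illustration:

# Simplified estimate of the herd immunity threshold from the basic
# reproduction number R0. This assumes a perfectly effective vaccine and a
# well-mixed population, so real-world vaccination targets are set higher.
def herd_immunity_threshold(r0: float) -> float:
    return 1.0 - 1.0 / r0

# Approximate, commonly quoted R0 values (illustrative only).
for disease, r0 in [("measles", 15.0), ("mumps", 5.0), ("rubella", 6.0)]:
    print(f"{disease}: R0 ~ {r0:.0f} -> threshold ~ {herd_immunity_threshold(r0):.0%}")
# e.g. measles: R0 ~ 15 -> threshold ~ 93%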

I’m not suggesting that you should get every vaccine which is available. However, if you or someone you know is due to have a vaccine and you’re worried, ask your doctor (and get second opinions) about potential side effects or the importance of the vaccine. It is important to make an informed decision about whether to be vaccinated or not based on scientific and medical evidence rather than hysterical celebrities or a retracted paper.

[1] Reference: http://www.bmj.com/content/342/bmj.c5347 and references therein

Post by: Louise Walker

News and Views: Standing up for science – improving the relationship between science and the media

Historically, scientists and journalists have never really got along. In general, scientists tend to be a little … mistrustful of a journalist’s ability to accurately portray their research to a wider audience. In return, journalists may find that scientists can be difficult to work with, and that the research they present can be confusing or complicated. But they need each other: scientists need journalists to get the message about their research across, and newspapers like to print science stories because their readers are interested in them.

Knowing that the science/media relationship can be somewhat antagonistic, the charity Sense About Science has set up a series of workshops as part of their “Voice of Young Science” section. The aim is to help foster a better relationship between early career scientists and the journalists that report scientific stories. These workshops encourage scientists to stand up for themselves and their subject by responding to misinformation or dubious claims in all kinds of media.

I was lucky enough to be able to attend the recent Voice of Young Science media workshop at The University of Manchester. The day was split into several panel discussions; the first involved scientists discussing their experiences with the media – both good and bad – and advising on how to get the best out of the situation. Amongst the speakers was Professor Matthew Cobb, from the University of Manchester, one of the advisors on the BBC’s recent “Wonders of Life” series (this counted as one of his “good” experiences!). Professor Cobb’s main advice was to “Just say yes”, because nothing will happen if you say “No”. It may not turn out as well as you’d hoped, but the experience will still be valuable.

It’s easy for scientists to be scared about the way their results may be interpreted by the media. These fears are illustrated by a horror story from another panel member, the evolutionary biologist Dr Susanne Shultz. Dr Shultz had found that, over evolutionary time, brain size has increased in social animals, whereas the brains of solitary animals have remained more or less the same size. A misunderstanding somewhere along the line meant it was reported that she had discovered that dogs (as social animals) were more intelligent than cats (as solitary ones). These were not her results, and she had to repair quite a lot of damage. However, whilst Dr Shultz had a horrible time dealing with the misinterpretation of her research, she didn’t think it had done permanent damage to her scientific credibility, which was a relief to hear.

Another panel consisted of people on the media side of the equation, including the science journalist David Derbyshire, as well as Radio 5 Live producer Rebekah Erlam and Morwenna Grills, the press officer for the Faculty of Life Sciences here in Manchester. There was a sharp intake of breath when Derbyshire admitted that he had written for certain tabloids which are not particularly well regarded for their science reporting! However, he raised some very good points that I’d never thought about before. The one that stuck with me was that the turnaround time for getting a story into a newspaper is incredibly short. You’ve got to investigate the story, track down those involved, write it and send it off, sometimes in the space of a few hours. This is not an ideal situation, as science stories in particular need proper research to make sure you thoroughly understand them, and this takes time. But what do you do if that time is not available to you? And if your piece is sub-edited into something different, is there much you can do about it?

The thing that struck me most about the workshop is that scientists and journalists really need to communicate with each other more effectively. Without journalists reporting on scientific matters, scientific research would never reach the public consciousness, and when you have an important message to get across that would be a very bad thing. Scientific breakthroughs are usually of great interest to the general public, whether it’s a potential cure for cancer or horsemeat in our burgers. Ideally the relationship should be a trusting one, so that both sides get the best out of the arrangement, but at the moment it is too often the opposite. The good thing about workshops such as this one is that they help each side see the situation from the other’s point of view; I certainly feel a bit more understanding towards science reporters. Hopefully the journalists on the panel now feel more sympathy towards scientists and understand why they can be quite protective about their work. Perhaps more events like this can help to heal the rift between these two opposing factions.

For more information about the Voice of Young Science media workshops, please go to: http://www.senseaboutscience.org/pages/workshops.html

Pushing Scientific Boundaries: How far is too far?

Science is nothing if not controversial. From Galileo through Darwin to modern-day researchers, certain scientists have always challenged the dogma of their era and often faced persecution because of it. These scientists usually kept up their ‘heretical’ beliefs because they were sure they were right and, in some famous examples, they were eventually vindicated.

But how does controversy affect modern-day science? We have now reached a stage where almost nothing seems impossible. We are able to do things that would have seemed outrageous a century ago: flying through the air on a regular basis, transplanting hands and faces, and curing some cancers, to name a few. A lot of scientific breakthroughs are made when people push the ethical boundaries of their time, but at what point must we say “that is enough, this has gone too far”? As each scientific taboo is broken and assimilated into modern-day research, will there ever be a time when we push too far? Even if we do, will future generations use these once-controversial techniques as freely as we now accept that the Earth revolves around the Sun?

One problem faced when deciding whether or not a technique is morally acceptable is that moral and ethical values vary significantly from person to person. For example, in October 2012 it was reported that scientists were able to create healthy baby mice from stem cells. This led to speculation that in the future infertile women may be able to give birth to healthy babies made from their own stem cells. When the story was reported in the Guardian, the comments below the article were divided. Some thought it was a great breakthrough which should be applauded for the sophistication of the science alone. Others were excited about the prospect of ending the misery of infertility. Some people, however, were more cautious. Arguments against the technique included the question of whether, in an already overpopulated world, we should really be celebrating something that could make that problem worse. Others feared the scientists were “playing God” and were scared at the thought of them having so much control over life itself. This research may have started as a simple question of whether such a technique was possible, or from a desire to help infertile women, but it has now entered a minefield of divided opinion and controversy.

One scientist who is no stranger to controversy is John Craig Venter. Venter, a genome specialist based in the USA, hit the headlines in 2010 when his team created the first synthetic organism. Venter and his colleagues built a bacterial genome entirely from synthetic DNA and nicknamed the resulting organism Synthia. Synthia acted much like a normal bacterium, replicating and passing the synthetic DNA on to her offspring. Whilst Venter was praised in many scientific corners for this remarkable achievement, others voiced concerns about his work. Venter defended his creation by pointing out a number of beneficial tasks such organisms could accomplish: for example, capturing and removing excess carbon dioxide from the atmosphere, or generating an alternative fuel source.

Interestingly, the amount of controversy generated around a discovery sometimes depends on the person who made it. Venter has previously made himself unpopular with the scientific community by turning the project to sequence the human genome into a race. He has also made moves into patenting (particularly of the synthetic genome he created), ensuring that in the future he will have full control over how Synthia may be used and will reap any financial rewards attached to this. This has angered many scientists who believe that discoveries should not be owned by any one individual, nor exploited for profit. Venter’s millionaire lifestyle and self-aggrandising quotes (for example, apparently insinuating that he deserves a Nobel Prize) have also rubbed fellow scientists up the wrong way. This behaviour may make people generally mistrustful of Venter’s motives, and that mistrust in turn makes his discoveries more controversial. Did he make Synthia because he truly wanted to help technology and the environment? Did he do it just because he could? Or because he knew it would get him publicity? Did he make it with the idea of patenting? Or is it a case of “all of the above”?

However, is Venter any different from controversial figures of the past, some of whom we now consider to be the greatest scientific minds of all time? Do we need these maverick scientists to push forward discoveries that others are too afraid to make? If Venter hadn’t turned it into a race, the human genome project would not have been finished earlier than planned. There’s certainly no denying that, whatever you think of his methods, Venter has made remarkable achievements in his career. On the other hand, do we need these boundaries pushed? How much should science interfere with nature? Is it this type of behaviour which makes scientists appear immoral or power-hungry in the minds of the public?

Unfortunately, there is no easy answer to this question. It would be nice to say that science can stay within the moral boundaries of the day and still move forwards, but that’s not the way the world works. We need mavericks and controversial figures to push scientific discoveries into the next era and, as I stated before, what is controversial at first may become normal several years later.

For my part, I’m wary of scientists who do something which they know is controversial simply because it is possible for it to be done. I call this the “Jurassic Park mentality”: doing something for no better reason than ‘because you can’. Now, before you protest that Jurassic Park is fictional, remember that sometimes truth can be stranger than fiction. Take, for example, the Harvard professor who wants a surrogate mother for a Neanderthal baby. I always like to think that research should have some greater purpose which will ultimately prove beneficial. However, I’m not sure how a Neanderthal baby would be even remotely beneficial to anything or anyone.

That said, it’s true that we can’t always tell how research will be used in the future. Sometimes small or less controversial discoveries become part of something much bigger, and there’s no way of knowing how your research may be used by other people. Just ask Albert Einstein, whose theoretical work went on to aid the development of the atomic bombs dropped on Hiroshima and Nagasaki during World War II.

Perhaps it’s best to think of it this way: when you start pushing at the boundaries of what is considered controversial or even downright immoral, maybe that’s the time to step back and ask, “What will the point of this be? Will this be helpful to humanity, the planet or the universe, or am I just doing it for publicity, fame, glory, or simply because it is possible?” And if your answer falls into the latter part of that question, then maybe you should at least carefully assess the possibility of someone getting eaten by a rampaging dinosaur before you continue.

Post by: Louise Walker

Will 2013 be the year of the celebrity scientist?

The end of a year is always a good time to reflect on your life; to see what you’ve achieved and plan for the year ahead. For example: this year I attempted to cycle 26 miles in a single day across the Peak District, despite having not used a bike since the 90s. Next year I intend to buy padded shorts before I even go near a bicycle. Oh and I also intend to get a PhD, but that’s no biggie (gulp).

But what sort of year has it been for the relationship between science and the public? I’d say it’s been a good one.

It may just be that I’m more aware of the science-communication world, having finally given in and joined Twitter, but I can sense that science is losing its reputation as a refuge for the über-nerdy and slowly working its way into the mainstream. Scientific stories and issues are being reported more frequently in the media and there appears to have been a wealth of science-related shows on TV. Whether it’s Professor Brian Cox pointing at the sky, the US-based shenanigans of the nerds in The Big Bang Theory or priests taking ecstasy in the name of science live on Channel 4, science certainly seems to be taking centre stage.

This year appears to have brought about a shift in attitude towards science and scientists. This change has no doubt been helped by personalities such as Professor Brian Cox, Professor Alice Roberts and physics-graduate-turned-comedian Dara O’Briain. They have helped make science a bit cooler, a bit more interesting and, above all, a bit more accessible. Perhaps because of this, the viewing public now seem more willing to adopt a questioning and enquiring attitude towards the information they are given. We now often hear people asking things like: how did they find that out? Where is the evidence for this claim? How many people did you survey to get that result? Has that finding been replicated elsewhere?

The effect of this popularity, at least amongst the younger generation, can be seen in university applications. Despite a drop in applications across the board, the fall in students applying to study science has been much smaller than for other subjects. Applications for biological sciences dropped a mere 4.4%, compared with subjects such as non-European languages, which fell 21.5%. Biological science was also the fourth most popular choice, with 201,000 applicants, compared to 103,000 for Law (which dropped 3.8%). Physical science, which is in vogue at the moment mostly thanks to the aforementioned Prof. Cox and the Big Bang Boys, fell by a measly 0.6%. (Source).

However, it’s not just the public who have shown a shift in their attitudes. Scientists are getting much better at communicating their views and discoveries. There is now a huge range of science blogs managed and maintained by scientists at all stages of their careers (for some outstanding examples see here). Twitter is also stuffed full of people who have “science communicator” in their bio, ranging from professional communicators such as Ed Yong to PhD students and science undergraduates. This increasing willingness of scientists to communicate their work has probably contributed in a big way to the shift in public feeling towards them. They are proving that, contrary to the stereotype, scientists are perfectly able to communicate and engage with their fellow humans.

Universities are also showing a shift in attitude towards better communication between scientists and the rest of the world. The University of Manchester has now introduced a science communication module to its undergraduate Life Sciences degree courses, and several universities offer specialised master’s degrees in science communication. Also, people who are already involved in science communication, such as the Head of BBC Science Andrew Cohen, are touring universities giving lectures on how to get a science-related job in the media. This means that current undergraduates and PhD students are learning to communicate their discoveries alongside actually making them.

So what does this mean for the coming year? Our declining interest in singing contests and structured reality shows appears to be leaving a void in our celebrity culture. Will our new-found enthusiasm for ‘nerds’ mean that scientists could fill this gap? Will Brian Cox replace Kate Middleton as the most Googled celebrity in 2013? Will we see ‘The Only Way is Quantum Physics’ grace our screens in the new year?

The answer to these questions is still likely to be no. Whilst science does seem to be in vogue at the moment, scientists themselves don’t often seek out the limelight, perhaps due to their already large workload or the fact that being in the public eye does not fit with their nature. Figures such as Brian Cox and Dara O’Briain are exceptions to the “scientists are generally shy” rule, both having been famous prior to their scientific debuts (as a keyboardist in an ’80s group and a stand-up comedian, respectively).

Another reason that scientists aren’t likely to be the stars of tomorrow is that science communication is notoriously hard. Scientists have to condense amazingly complex concepts into something someone with no scientific background can easily understand. Many scientists are simply unwilling to reduce their work to this level, arguing that such explanations are too ‘dumbed down’ and therefore miss the subtlety necessary to really understand the work. Unfortunately, if communicators are unable to simplify their explanations, it often leaves the rest of the population (myself included if it’s physics) scratching their heads. As a cell biologist, I find the hardest aspect of communicating my work is knowing what level I’m pitching at – would the audience understand what DNA is? A cell? A mitochondrion? You want to be informative without being either confusing or patronising, which is incredibly hard, and not many people can do it well. This doesn’t mean that scientists won’t try to get their voices heard, though. It may just be that they achieve this through less “in your face” media such as Facebook or Twitter, rather than via newspapers, magazines or TV.

My hope is that the current increased interest in science may help shape the type of celebrity culture we see gracing our screens. It may also help to get across the idea that maybe it’s OK to be clever and be interested in the world around you. Maybe it’ll even become socially acceptable to be more interested in the Higgs boson or the inner workings of the cell than in seeing someone fall out of Chinawhite with no pants on.

Post by: Louise Walker

The science behind the myths: Are there clinical explanations for vampires, zombies or werewolves?

When people don’t understand how something works, they often come up with their own explanations. For example, when ancient societies didn’t understand where lightning came from, they attributed it to an angry god. Thus the myth of the lightning god was born.

This tendency of humans to create their own explanations for unusual phenomena may have led to the invention of mythological creatures such as those now seen dominating fantasy writing and films. From a scientific point of view, it is interesting to investigate the source of these myths. How did they come about and why did they become so popular?

With Halloween approaching, I have decided to dedicate a blog entry to the potential ‘scientific’ explanations behind some of our favourite and most enduring mythological creatures: vampires,  zombies and werewolves!

Vampires.

Vampires have always been amongst the most popular mythological creatures, from the tales of Bram Stoker to more modern incarnations like those in Buffy the Vampire Slayer and Twilight. However, in case you have been living in a cave and these have all bypassed you, here is a brief overview of the vampire legend: vampires are generally believed to be human beings who, in life, were bitten by another vampire and then return after death to feed on the blood of other humans. Vampires are generally assumed to never die naturally but, depending on which adaptation you read, can be killed by exposure to sunlight, garlic, holy water or direct penetration through the heart with a wooden stake. Vampires are now a pretty popular part of modern culture, but how could the myth have first come about?

Although few scientific papers exist on this topic, the internet is rife with debate, which appears to point to several different medical conditions:

Probably the most popular theory of the origin of the vampire is the disease porphyria, as explained in this article in Scientific American. Porphyria is actually a term for several diseases which are all caused by irregularities in the production of heme, a component of the oxygen-carrying pigment in blood. Some forms of the condition, such as congenital erythropoietic porphyria (CEP), lead to the deposition of toxins in the skin. Sufferers are often sensitive to light, since light activates these toxins. When activated, the toxins eat away at the skin, causing disfigurement, including erosion of the lips and gums. These factors could have led to the corpse-like, fanged appearance that we associate with vampires, and to their dislike of sunlight. Interestingly, people who suffer from porphyria also have an intolerance to foods that have a high sulphur content… such as garlic.

Mycobacterium tuberculosis

Another possible explanation for vampires is tuberculosis (TB), a lung disease caused by the bacterium Mycobacterium tuberculosis. This disease has been suggested as the origin of the vampire myth because victims turn very pale, often avoid sunlight and cough up blood. The coughing of blood is actually due to the disease damaging the lungs, but it’s easy to see how it could be misinterpreted as someone having recently drunk blood. According to this study, the vampire myth may also have arisen from the fact that TB spreads rapidly and easily from person to person. The infectious nature of the disease may have led to the belief that the vampire rises from the dead to feed on his loved ones, causing them to suffer the same symptoms.

An intriguing alternative explanation is catalepsy, a condition of the central nervous system leading to a slowing of the heart and breathing rate, with sufferers often seizing up completely. These symptoms may have led people to mistakenly believe the sufferer was dead. Since such individuals were then perceived to have risen from the dead, it is easy to see how the disorder could be linked to paranormal mythology.

Zombies.

Ah, the zombie apocalypse, ever a popular scenario in films and books. Some organisations, such as the Centers for Disease Control in the USA, even run “zombie apocalypse” days so you can prepare for what to do when the end is nigh.

Zombies are usually defined as people who were once human, but have been altered in some way so they no longer have a sense of self. Usually the sufferers have died and then been re-animated with a surprising taste for human brains. Zombies pursue this delicacy relentlessly. Often, anything that has had its free will removed and is bending to the will of others is also referred to as a “zombie”.

The zombie myth is believed to have originated in Haiti. There are many examples in Haitian and voodoo folklore of corpses which have been re-animated and used as slaves by sorcerers. The existence of zombies was explored scientifically in 1982 by Dr. Wade Davis, after a man, Clairvius Narcisse, claimed to have been brought back to life by a sorcerer. Dr. Davis examined samples of the “zombie powder” which the sorcerer allegedly used to create his zombies. He found that the powder contained several toxins, including tetrodotoxin, which is found in pufferfish. Dr. Davis theorised that the tetrodotoxin caused paralysis and a death-like appearance in the sufferer, but that this state would eventually wear off, giving the illusion that the victim had been raised from the dead. He wrote two books on the subject, Passage of Darkness and The Serpent and the Rainbow (the latter of which was used as the basis for a horror film). However, some sources do not believe that Davis’s work is scientifically valid, because the tetrodotoxin levels in the “zombie powder” were actually found to be quite low. There was also some speculation that Davis’s work could have been plagued with murky ethics, following reports of alleged grave-robbing.

Film depictions of the zombie apocalypse usually hint that it is rapidly spread by a pathogen such as a bacterium or virus. This may have some root in real life, as there are a number of known pathogens that are suspected of causing behavioural changes. As explained in this blog by fellow Brain Bank-er Sarah, the parasite Toxoplasma gondii can control the behaviour of rats. The rats behave in a “zombie-like” manner, going against their natural instincts to actively seek out cats – the parasite’s true target. There have been some suggestions that Toxoplasma gondii can affect the behaviour of humans too, making men more jealous and women more ‘warm-hearted’. If T. gondii or similar parasites are ever able to modify human behaviour to such extremes, well … hello, zombie apocalypse! (In the interests of not scaring you too much, I should point out that this scenario is very, very unlikely.)

However, there are other ways of creating a zombie. Scarily, some current scientific techniques may one day be capable of creating ‘zombies’! Scientists are now able to control some aspects of behaviour in certain laboratory animals by using targeted laser light to activate groups of genetically modified neurons, a technique known as optogenetics (for more detail see this post by fellow Brain Bank-er Natasha). This notion of behavioural control, or ‘loss of free will’, is spookily similar to the depictions of the mindless zombies seen in popular culture. However, the ultimate aim of this technology is much less sinister: it is actually being used to investigate how the nervous system works and how problems may be corrected when things go wrong.

Werewolves.

Werewolves appear to be having a mini media renaissance, thanks to Professor Lupin from the Harry Potter books and all of Team Jacob. Legend has it that werewolves spend most of their time in human form but then, on the full moon, transform into a giant man-eating wolf with no human conscience. The werewolf usually turns back into a human at sunrise, with no recollection of its wolfish activities.

Lycanthropy, the clinical name given to werewolves in fiction, is actually a real medical term referring to someone who is under the delusion that they are a wolf.

Some medical theories concerning the origin of werewolves were explored in the book Why do Men have Nipples? by Billy Goldberg and Mark Leyner. One of these is once again based around porphyria, the same disease with links to the vampire myth. Some sufferers of cutaneous porphyria exhibit the canine “fang” look caused by the erosion of the gums. Also, following exposure to light, the healing blisters on sufferers’ skin often grow a fine layer of hair.

Someone suffering from congenital hypertrichosis universalis

The authors also speculate that the disease congenital hypertrichosis universalis could be a cause of the werewolf myth as this also causes excessive hair growth across the whole body. However, this disease is extremely rare so may not be prevalent enough to have bred such a popular myth.

Another possible reason behind the werewolf myth is the disease rabies. Rabies most famously affects dogs, but can also be transmitted to other animals. Its most characteristic feature is foaming at the mouth but it also causes hydrophobia (fear of water), aggressiveness, hallucinations and delirium. If an infected animal bites a human, they will suffer from similar symptoms. Possibly, in the past, someone noticed that a human bitten by a rabid dog took on the same characteristics and thought that the person was literally becoming a very aggressive dog or wolf.  However, rabies doesn’t explain the all-over hairiness or link to the lunar cycle most people associate with werewolves, particularly as, if you believe Noel Coward, sufferers of rabies famously come out in the midday sun.

According to howstuffworks.com, the idea of men turning into wolves has been  a part of folklore since ancient times, but was popularised by the 1941 film The Wolf Man. It is therefore possible that the myth of werewolves, unlike vampires and zombies, has been shaped more by popular culture than medical science.

My boyfriend suggested that being a woman may also be an origin for the werewolf myth. He decided to point out that women tend to get a bit aggressive at certain times once a month. This suggestion was met with a stony silence and being made to pay for dinner (I think it may have been a full moon).

So, there is no clear scientific explanation for these myths, but the subjects continue to fascinate and intrigue us. More and more films and books are being produced which revolve around these mythical horrors, often meaning that the origins of the myths become further buried as authors and film-makers add new characteristics and traits. (However, that doesn’t make unearthing the science behind these enduring and popular creatures any less interesting.) As you can see from some of the articles linked here, scientists are using the popularity of these myths, especially zombies, to raise awareness of very real and potentially dangerous situations, such as the rapid spread of a deadly disease. Since these stories can be used both to entertain and to educate, keep the tales coming!

Post by: Louise Walking Dead

Solving the Stem Cell Problem: A Nobel Prize-winning discovery

An important part of a scientist’s work is being aware of the ethical issues surrounding their research and the implications these have for the wider community. Some scientists dedicate their careers to resolving these ethical issues. The winners of the 2012 Nobel Prize in Physiology or Medicine, John Gurdon and Shinya Yamanaka, have pioneered a method which hopefully represents a big step towards solving the ethical issues involved in the use of stem cells.

The Stem Cell Problem

Imagine that there was a potential cure for illnesses such as Alzheimer’s, heart disease, spinal cord injuries, or any number of others. Then assume that, in order to achieve this, a human embryo has to be destroyed. What would you do? This dilemma is at the heart of the Stem Cell Problem.

Embryonic stem (ES) cells are used in a number of areas of research. The use of these cells is controversial since they are taken from human embryos initially created for in vitro fertilisation (IVF) but not implanted; these ‘spare’ embryos are then donated to scientific research. Unfortunately, during the course of the research the embryo is destroyed, a fact which has led some organisations, such as the Catholic Church, to claim that research using ES cells is tantamount to murder.

The reason ES cells are so important to medical research is that certain properties, possessed only by these young cells, make them ideal for therapeutic manipulations. Mature cells in the human body are highly specialised towards their function, whether this is in the blood, liver, brain or elsewhere. Stem cells, by contrast, are immature cells which have yet to develop into their final specialised form. When required, a stem cell is stimulated by certain factors in its environment and eventually develops into a specific mature cell type, for example a blood cell. This means that an embryonic stem cell has the potential to become any cell in the body, given the right environment! ES cells are also able to replicate themselves many times over, unlike specialised adult cells. These cells are therefore invaluable to scientists investigating cell behaviour and methods for regenerating damaged tissue. The therapeutic uses for stem cells range from understanding cancer to regenerating tissue in a whole host of degenerative disorders.

The 2012 Nobel Prize in Physiology or Medicine was recently awarded to two researchers who made extraordinary advances in cell reprogramming. Their pioneering work has given scientists a clearer understanding of how cells function and also provided a method of obtaining stem cells from adult tissue, potentially solving the ethical issues surrounding the use of ES cells in research.

Cell Reprogramming

One of the Nobel Prize recipients is Sir John Gurdon of the University of Cambridge. In 1962, Gurdon transferred a nucleus (the part of the cell which contains DNA) from a mature frog cell into a frog egg cell whose own nucleus had been removed. The egg developed into a normal tadpole, showing that DNA from a specialised adult cell could be reprogrammed to function in a developing embryo. This was a landmark discovery since, up until this point, it was thought that adult cells “lost” certain components of their DNA and so could not function as part of a developing cell. Gurdon’s work showed that this wasn’t the case, proving that mature, differentiated cells contain a full complement of DNA; it’s just that some of the DNA in mature cells is inactive. It also showed that mature adult DNA can be reverted to a previous, immature form. The work opened the door for many other scientific breakthroughs, including the cloning of Dolly the sheep in 1996.

Shinya Yamanaka, the second Nobel recipient, built on Gurdon’s work by reprogramming mature adult cells back to an immature form. Yamanaka developed a line of cells called induced pluripotent stem cells (iPSCs). He and his colleagues altered the expression of a small number of genes within adult cells, enabling them to revert back to their young, stem-cell form. These reprogrammed cells had similar characteristics to embryonic stem cells, including the ability to mature into a variety of different cell types. Yamanaka then used the same technique with human adult cells, reverting them to a state similar to that of an ES cell and extending the approach to the study of human cells and diseases. As they display many of the important properties found in ES cells, iPSCs could potentially be used as a replacement for ES cells, thus eliminating the controversy surrounding the use of embryonic cells in research.

Stem Cells in Organ Transplants

The potential of iPSCs doesn’t stop at replacing ES cells in research; they could also herald a major advance in the science behind organ transplantation. Since ES cells can both regenerate themselves many times over and become any cell type, their use in the replacement of damaged tissue is now being studied. It may therefore soon be possible for patients to receive transplants composed of reprogrammed cells (iPSCs) derived from their own bodies. This would solve the major problem of transplant rejection of donated tissues, which is caused by the recipient’s body recognising the donated organ as foreign. In theory, an iPSC-derived tissue would not be rejected, as it would be made from the patient’s own cells.

Problem Solved?

As a cell biologist, I may be slightly biased, but I think many scientists (especially biologists) would agree that the work undertaken by Gurdon, Yamanaka and their colleagues is incredibly exciting. It represents a great leap forward in our understanding of how cells work and new ways of studying them in a controversy-free environment.

So, does this Nobel prize-winning work signal a solution to the Stem Cell Problem? Alas, no. It is unclear whether iPSCs and ES cells are equivalent on the molecular level, casting doubt on the likelihood of iPSCs being able to completely replace ES cells in research. Another problem, as with many newly discovered techniques, is that the long-term effects of these technologies are unknown. For example, there are concerns that cells derived from any form of stem cells have a tendency to become cancerous. There has also been a surprising report that iPSCs still produce an immune response when transplanted in mice, which would lead to transplant rejection.

So, unfortunately we still don’t have a comprehensive solution to the Stem Cell Problem. However, this does not detract in any way from the discoveries of Gurdon, Yamanaka and their colleagues, and there is no doubt that their innovation, expertise and skill deserved to be rewarded with the Nobel Prize. Only time will tell just how useful their discoveries will prove to be.

Post by: Louise Walker

For more information on the properties and use of stem cells: http://stemcells.nih.gov/

Right to Die: Is it ever justified? – one scientist’s perspective

For this post I’m going to break from my normal light-hearted blogging to talk about a topic which is very serious and close to my heart – the “Right to Die”.

The “right to die” is the ethical entitlement of someone who is suffering from a debilitating and permanent illness, and who has no quality of life, to choose to end their life on their own terms, often through either suicide or, if necessary, assisted suicide. This is a subject which crops up in the news on a regular basis and there is, understandably, a great deal of controversy surrounding it.

Recently the “right to die” issue has resurfaced in the case of Tony Nicklinson, a 58-year-old man who suffered from a condition known as locked-in syndrome. His condition meant he was unable to move or speak, communicating solely through eye movements. Mr Nicklinson went to court in an attempt to make it possible for a doctor to end his life without fear of prosecution. He argued that he was living a miserable and undignified life and wanted to end it on his terms. However, his appeal was unsuccessful. The court ruled that it was unable to “usurp the function of parliament” and did not have the power to grant his request. His devastation was clear to see and harrowing to watch. Sadly or fortunately, however you want to look at it, Mr Nicklinson died peacefully of ‘natural causes’ at home just six days after the ruling, having refused food and finally contracted pneumonia.

I believe that the UK government urgently needs to review its policies on the right to die and voluntary euthanasia. This is partly so that people such as Mr Nicklinson don’t have to suffer needlessly. However, there is also a bigger problem society may soon have to face, one which is set to cause huge economic and ethical difficulties: the dementia time-bomb.

I have a personal interest in wishing that the law for euthanasia be changed, specifically with regard to dementia. I have seen two of my grandparents suffer from this devastating disorder – my paternal grandfather and maternal grandmother. In my grandfather’s case, he suffered from a low-level form of dementia for many years, before suddenly and rapidly going downhill. However, just a few months after his sudden deterioration, he died of an infection. When I heard about his death I was relieved: at least he didn’t have to suffer the indignity of full blown dementia for many years. However, this was the sad fate which befell my grandmother.

In 1999 my grandmother was diagnosed with suspected Alzheimer’s disease (the most common form of dementia). Our whole family, including her husband of nearly 60 years, watched her turn from a happy, chatty, busy woman into one who forgot who or where she was. She became violent and confused. She had a long, slow, painful descent into being completely helpless – unable to remember who she was or basic things like how to dress herself. After several years my mother made the heartbreaking decision to move her to a care home, since the burden of caring for her was too much for my 82-year old grandfather, who had been diagnosed with cancer. During her time in care she continued to deteriorate, a process we believe was accelerated by bad practice within the care home (we know she often went without sufficient food since no one ensured she ate her dinner – another simple necessity she had long forgotten the need for). As she descended, she slowly forgot who her grandchildren were, then her children. I still remember the moment she forgot who my grandfather was, when there was no recognition of the man she had married in 1948. This devastated my grandfather; he never really got over it, and began to give up his fight against his cancer, succumbing to the disease in 2010.

My grandmother eventually plateaued, but only after she’d forgotten how to walk, speak, go to the bathroom or do anything other than sit in a chair, constantly grinding her teeth and very occasionally mumbling a nonsensical sentence. Even those things stopped eventually and she essentially became a corpse whose heart happened to be beating … this was in no way a dignified way to live. She finally died in June 2012, after 13 long, painful years. My whole family was relieved when she passed away – she wasn’t suffering any more.

It is this experience which has strengthened my view on euthanasia. I strongly believe that it should be legal for people who are suffering enormously and have no quality of life to be able to end their life on their own terms.

Of course I’m aware of the problems surrounding legalising euthanasia. There’s a huge difference between the case of Tony Nicklinson and that of my grandmother. Tony Nicklinson was mentally sound but trapped in his non-functioning body; my grandmother was physically OK but mentally there was nothing left. These two cases would have to be treated very differently. The main difference is that in one case the person is able to state for themselves that they wish to die, whilst in the other they are no longer of sound mind and therefore unable to make that decision.

One of the main objections to legalising euthanasia is that it may put vulnerable people in harm’s way. Take, for example, people who are disabled and believe they are a burden on their relatives and carers, or a family who might just ship off their mad old grandma and be done with it, no matter what she may have wanted. These are all very real concerns which need to be addressed; however, I think that the vulnerable can be protected by implementing very strict controls around the process. These controls must ensure that euthanasia is only allowed in extreme cases, such as those mentioned above, and that each case is subject to an extensive and thorough review. I also believe that interviews and psychiatric assessments are necessary for both the patient and their chosen representative (in cases where another person will ultimately have to make the decision), and that no action should be taken unless two or more doctors agree that euthanasia is the best option. Of course, in the case of dementia it will be necessary for the patient to express their wishes whilst still of sound mind, perhaps relying on an advocate or representative to ultimately decide when euthanasia should be performed. It is also important to take into account the wishes of the doctor(s) involved in the process – no doctor should be forced to perform an act of euthanasia against their wishes, much as they cannot be forced to perform an abortion. But what’s wrong with introducing a legally binding document, such as an advance directive, stating that “if I get to a stage where my life has become devoid of any quality or dignity due to a debilitating and permanent illness, then I trust a designated person or people to decide when my life can be ended (subject to legal red tape and psychiatric evaluations)”?

My feelings on this matter don’t just come from my own personal experiences, but have also been formed through my research on Alzheimer’s disease. However, I must stress that I have had many debates with friends and colleagues (including those doing Alzheimer’s research) on the matter, and that not all scientists agree with my views. The bioethics of euthanasia are tricky – most people would only want their lives to end if they knew there was no possibility of a cure. As far as dementia goes, there is a huge amount of research being undertaken into a possible prevention or cure for the disease. A definitive prevention and/or cure is the ideal, and if this is ever achieved then there will be no need for euthanasia laws to exist. However, dementia is an extremely complicated condition, believed to be caused by a multitude of genetic and environmental factors, and a cure still seems a very, very long way off. Clinical trials for drugs take many years from conception to the drug being available on the market, so even if a breakthrough does occur at the research level, it may take ten years before any drugs are freely available. This also doesn’t take into account that dementia takes many forms – Alzheimer’s disease is the most common, but there are many other forms of dementia, including vascular dementia, dementia with Lewy bodies and fronto-temporal dementia, which all have their own causes.

In cases such as locked-in syndrome, the outlook may be even bleaker. A quick search for “locked in syndrome” on the research database PubMed doesn’t produce many papers (8,124), and few seem to be about treatment. This is a big contrast to the number of papers published on Alzheimer’s (85,847) or lung cancer (211,982). There is no cure for the syndrome; research is mostly concentrating on helping sufferers communicate. The best hope scientifically would be to prevent the syndrome by preventing strokes, which cause many cases of locked-in syndrome. Although there are a few isolated cases where people have recovered from the syndrome, these are rare.
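
For anyone who wants to repeat this kind of rough comparison, the sketch below (my own illustration, not part of the original post) queries PubMed’s public E-utilities search endpoint and prints the number of records matching each term; the exact counts will, of course, have grown since this was written.

# Rough reproduction of the PubMed paper-count comparison, using the public
# NCBI E-utilities "esearch" endpoint with JSON output. Counts change daily.
import requests

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(term: str) -> int:
    response = requests.get(
        ESEARCH_URL,
        params={"db": "pubmed", "term": term, "retmode": "json", "retmax": 0},
        timeout=30,
    )
    response.raise_for_status()
    # The total number of matching records is reported as a string.
    return int(response.json()["esearchresult"]["count"])

for term in ["locked-in syndrome", "Alzheimer's disease", "lung cancer"]:
    print(f"{term}: {pubmed_count(term)} PubMed records")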

Time is not on our side when it comes to dealing with dementia. According to the Alzheimer’s Society, there are 800,000 people suffering from dementia in the UK at the moment. By 2021, only nine years away, they estimate that this number will rise to over a million. The cost of dementia to the UK was predicted to be £23 billion in 2012, minus the £8 billion a year saved by people caring for relatives with dementia themselves. Worldwide, there are expected to be 36 million people suffering from dementia, with that number expected to rise to 115 million by 2050 (source).

I find those numbers utterly staggering. To me it seems as though politicians are metaphorically sticking their fingers in their ears and singing loudly rather than confronting the issue, which is just getting bigger and bigger. Like it or not, in nine years’ time one million people are going to be suffering from dementia, most requiring round-the-clock care. This doesn’t even take into account the pressure and emotional stress put on the families of those one million people. Maybe a cure will be forthcoming sometime soon, but something needs to be done to combat this rising crisis – increasing funding for Alzheimer’s research, improving the quality of care (for example, reclassifying it from “social” care to “medical” care) and helping people who are suffering to end their lives with dignity, and on their own terms.

Of course, I’m not saying that the minute someone is diagnosed with dementia we should ship them off to Dignitas or some future UK-based equivalent. This has to be a choice made by the individual when they are of sound mind, stated clearly by them in a legally binding document, with assurances that the decision was not made under pressure from anyone else. I just think that the opportunity should be there if it comes down to it – I know anyone in my family would prefer to end their life rather than endure the indignity my grandmother had to go through. It’s time for the government to at least seriously assess the possibility and consequences of making euthanasia legal. In my opinion, the option of euthanasia should be available to those who require it, but it should not be made easy. So my take-home message would be: make it hard, but make it possible.

Post by: Louise Walker

Science fiction vs. science fact: The use of viruses to cure disease

Several popular blockbusters, including I Am Legend and Rise of the Planet of the Apes, have envisioned the use of viruses, rigged to deliver therapeutic DNA to patients as a way of curing disease. In these films the scientists using these techniques are ecstatic when they discover that they’ve been successful, but their joy quickly turns to horror as the virus mutates out of control and begins to destroy the human population. This is undoubtedly a nightmare scenario, but how close do these films come to the truth? Can viruses, commonly known to cause disease, actually be used as a cure? How likely is it that they will mutate out of control and destroy the world? If this is the case, then why are they being used at all?

Viruses are responsible for a range of diseases, including the common cold, influenza, HIV (Human Immunodeficiency Virus) infection and Ebola. The sole, selfish function of a virus is to infect a host (e.g. a human) and then use this host to make more copies of its own DNA. It does this by entering a host cell and hijacking its DNA-making machinery, forcing it to make more viral DNA. However, it doesn’t stop there. Once the host cell has made sufficient viral DNA, the virus commandeers the host cell’s other machinery to create more intact viruses, which “bud off” from the infected cell ready to infect its neighbours.

So, if viruses are so deadly and infectious can they seriously be used to cure disease? Surprisingly, the answer is yes. The use of viruses to deliver new DNA to human cells is being investigated as part of a technique known as Gene Therapy.

Certain diseases, such as cystic fibrosis, are caused by known defects in a single gene. The idea behind gene therapy is to fix this damaged DNA. This can be achieved by swapping the defective gene for a working one, repairing the damaged gene by mutating it back to a healthy form, or “switching off” the defective gene. Viruses are being investigated as carriers, or ‘vectors’, for the delivery of new, undamaged DNA. However, gene therapy cannot offer a miracle cure for all known disorders. In fact, it is only a feasible treatment for disorders stemming from a small number of recognised genetic mutations. Therefore, the idea of a single gene therapy functioning as a cure for Alzheimer’s or all known cancers, as seen in the movies, is purely fictitious!

A typical virus consists of a viral coat (like the skin of a balloon) enclosing DNA and a small number of proteins. In gene therapy, the virus is modified so that its own DNA is replaced with the DNA required for the therapy. The virus is then injected into the patient, where it targets and infects cells, replacing damaged DNA with the new healthy DNA. The part of the virus which allows it to replicate has been removed, meaning that whilst the virus retains the ability to infect cells and alter DNA, it has lost the ability to copy itself and spread to neighbouring cells (the capacity to infect and cause disease in this way is called virulence).

Viruses currently being investigated for use in gene therapy include adenovirus (a cause of the common cold), retrovirus (HIV is a retrovirus) and herpes simplex virus (which, as the name suggests, is responsible for herpes infections, including cold sores). Part of the appeal of using viruses in gene therapy is that they can be used to deliver healthy DNA to specific cell types. This can be achieved by manufacturing viruses which recognise and infect only certain types of cell, meaning that “innocent” cells which are not expressing the disease-causing gene should not be infected.

In the films, the therapeutic virus mutates back to its virulent form, or an even more virulent one. It then spreads a fatal disease throughout the population, causing a global catastrophe. One of the concerns about using viruses for gene therapy is that this nightmare scenario might come true. This possibility is currently under intensive study within controlled research environments. Although current research has found that recombination and a return to virulence may be possible for certain viruses, this may not be the case for all viral vectors used in gene therapy. However, if this technology is proven to pose a real risk then such research will likely be discontinued.

There are also other problems with using viruses as DNA carriers. The introduction of any foreign material into the body is likely to produce an immune response. The surface of the viral coat is not smooth; it carries a number of protruding proteins which may be recognised by a patient's immune system. This means that the host's immune system may identify the virus as a foreign body and attempt to dispose of it. Strong immune responses can be fatal, especially in someone already weakened by a genetic disease. This problem could, in theory, be circumvented by removing proteins and other recognisable molecules from the outside of the virus to lessen the chances of it being flagged as a foreign object.

Currently, gene therapy using viral vectors is not approved by the U.S. Food and Drug Administration (FDA) since concerns have arisen surrounding the deaths of two patients participating in gene therapy trials. One died of a severe immune response to the viral carrier. The other appeared to develop leukaemia, leading to fears that viral vectors may cause cancer.

A liposome

There are alternatives to using viruses for gene therapy. One option is to use liposomes, small spheres of lipid (the fatty molecules which make up cell membranes) which contain the DNA needed to fix the gene. These shouldn't produce an immune response, since liposomes are made from materials found naturally in cells. However, the disadvantage of liposomes is that they can't target specific cell types. Another alternative is simply to inject the DNA directly into target cells. The advantage of this is that it is unlikely to cause an unwanted immune response. However, injecting DNA is an immensely tricky process and may only be possible with certain types of cell. Also, since this 'naked' DNA does not integrate well into cells, it may not be expressed as reliably as DNA delivered by other methods.

So how likely is it that viral vectors will be used to help cure disease in the future? Although the process is still in its infancy and concerns over its safety and feasibility need to be addressed, the principle is promising. Indeed, there have already been some positive steps towards implementing these techniques! In 2008, a group at University College London (UCL) used gene therapy to successfully improve the sight of a patient with the eye disease Leber's congenital amaurosis. The patient suffered no adverse effects, since the study used an adeno-associated virus, a virus which cannot replicate itself without the presence of a helper virus. Adeno-associated viral DNA also usually inserts itself into a specific region of chromosome 19 (whereas insertion by other viruses may be random), meaning it is less likely to interfere with other functional DNA.

Therefore there is still hope that viruses can be used as vehicles for gene therapy. If current problems can be overcome, it may prove to be a revolutionary method for treating ‘single-gene’ diseases. Indeed, it is unlikely that viruses will cause the apocalyptic effects seen in these films, since any reversion to virulence is likely to be caught long before it infects the general population. But that wouldn’t make for such entertaining viewing would it?

References:

http://learn.genetics.utah.edu/content/tech/genetherapy/

http://www.ornl.gov/sci/techresources/Human_Genome/medicine/genetherapy.shtml

Post by: Louise Walker

Science: Is it a girl thing?

Last week I was forwarded a link to a now-withdrawn advert from the European Commission. The link came from a friend with a wry sense of humour, and when I first watched the clip I automatically assumed it was a tongue-in-cheek social commentary aimed at rubbing 'feminist types' up the wrong way. To me it seemed to play on a negative female stereotype (the fashion-conscious airhead), accentuating how this attitude does not fit with the otherwise serious, male-oriented world of science. The women in the advert were all immaculately preened and shown dancing around in short skirts and high heels, whilst the only legitimate-looking scientist was a male sporting a lab coat and staring studiously down a microscope (see the clip below). It was only later that I realised that, far from being a cheeky misogynist jibe, this was an actual advert produced by the European Commission aimed at encouraging young women to study science.

The European Commission advert

It seems that, after extensive market research, the European Commission decided this was the best way to foster an interest in science amongst young girls. I assume their research showed that a large proportion of young women are more interested in fashion, beauty and music than in differential equations and the scientific method. They therefore 'logically' deduced that, to encourage more women into scientific careers, they had to show how glamorous and sexy this career choice can be. On the surface this makes sense: if you want to market a product (a scientific career) you have to make that product appeal to your target demographic (young women). However, it is not too hard for me, a female scientist, to see how this approach is short-sighted.

The life scientific is one of massive contradiction. Most people only know the romanticised/glamorised view of science as seen on TV: white coats, test tubes and Brian Cox staring knowingly into the middle distance. This view of science is easy to fall in love with. However, look beyond the inspirational sound-bites and you will find the true heart of science: the scientists. Very few of us are the stereotypical 'super genius' and, as with any worker, we struggle on a day-to-day basis with insecurity, doubt and frustration. The one thing I believe we all share is an unwavering curiosity and determination, since without this we would undoubtedly buckle under the pressures of our field. This curiosity and determination is the key to becoming a successful scientist, and it is something the European Commission's marketing ignores. Therefore, although the campaign may encourage more young women to study science, if these women enter the field believing a scientific career to be no more than a glamorous asset they can flaunt with their girlfriends over a late lunch, the outcome will undoubtedly be some rather disillusioned women and a number of ineffectual scientists.

That said, although I disagree with their approach, I cannot deny that there is a lack of women in the higher levels of academia; indeed, this is true of the higher echelons of many careers. This is undoubtedly a problem which must be addressed. However, I believe that before we stand a chance of redressing the balance we must first uncover where the problem actually lies.

As a postgraduate student in the biological sciences, I don't see a large divide between the number of male and female students in my field and at my level of study. However, there are undoubtedly fewer women holding higher academic positions (e.g. professorships) in this field. Conversely, an area where you can see a vast male/female divide, starting as early as A-level, is the physical sciences. A (male) friend of mine studied physics at university and often recounts how there were so few women in his department that they had to organise socials together with the psychology department to ensure a good male/female balance.

Statistics from the UKRC and the Athena SWAN charter, covering the period 2007-2009

This raises two questions: firstly, what is standing in the way of women achieving higher academic positions and, secondly, why do fewer women choose to study the physical sciences and maths?

The first question may simply reflect the fact that, historically, fewer women chose the scientific path. Therefore, at the higher levels of these fields, where practising academics tend to be older (40 or above), the more recent influx of women is yet to filter its way up. However, another explanation, and one that I grapple with on a regular basis, is that there is something about the academic lifestyle which does not appeal to the female mentality.

As I mentioned earlier, the life of a scientist is a constant battle. We spend most of our time forming and testing theories, many of which only lead to more questions. Then, once things begin to slot into place, we are expected to defend our methods and findings against the rest of the community, which can often send the poor researcher back to the drawing board. Although this process is certainly not pleasant, it is necessary to ensure our theories are scientifically accurate, especially since mistakes can have devastating consequences. This means that good researchers are not only thick-skinned but also highly motivated, determined and willing to dedicate the majority of their time to their work.

Along with these pressures, the financial rewards of a scientific career are often small. To gain a typical Ph.D. from a UK university the student must first have spent at least three years as an undergraduate, more often four including a Master's degree (we all know this is an expensive endeavour, more so recently). A typical Ph.D. course lasts an additional three years, during which it is rare to earn more than ~£16,000 p.a. Most courses offer an optional unpaid fourth year, which many students (even the most organised and diligent) use to finish their thesis. Assuming you can defend your work and gain a Ph.D., this qualification usually leads to one of two career paths: either a side-step into industry (an area I'm not so qualified to speak about) or a move into a 'postdoc' position, usually with the ultimate goal of gaining permanent academic employment. Unfortunately, despite the sheer amount of effort required to reach this stage, postdoc jobs are usually temporary (often lasting only 3-4 years) and do not tend to be highly paid. It is also not rare for a researcher to move through three or more postdocs before finding a permanent position. This means that many researchers are expected to spend the whole of their twenties, and often a good proportion of their thirties, in relatively low-paid, high-stress, temporary positions.

Don't get me wrong; although this may sound bleak, for the right person science is an ideal career. For the most part you get to be your own boss, you are constantly challenged by new problems, you get to travel around the world presenting your data at conferences, and you know that your work is of huge significance to the community, even if it's just a small part of a larger picture. However, for many people the lack of stability and of a sustainable work/life balance will undoubtedly become a stumbling block.

In my experience women are more likely to struggle in positions which do not offer a sustainable work/life balance, especially if they intend to start a family. This may be why you find a large number of female academics moving sideways into more flexible careers such as teaching or medical writing (a predominantly female profession). In my opinion, the high attrition rate of women in the biological sciences does not reflect a difference in intellectual capacity or capability but simply a difference in priorities; men are perhaps more willing to sacrifice relationships and financial stability for their work. If this is the case, I believe there is a problem with a system which allows a number of intelligent, motivated scientists who want a more balanced lifestyle simply to fall by the wayside.

The second question (why, at most ages, fewer women study the physical sciences and maths) stabs at the heart of the age-old nature/nurture debate. As far as I know, we cannot say whether the female mind is less inclined towards this type of thought or whether the environment in which young girls grow up discourages them from studying these subjects. Either way, I believe the key to tackling the imbalance lies in fostering an interest in these subjects early in life rather than trying to convince teenagers that science is 'cool'. Indeed, I think a good start would be to provide young girls with some more realistic academic female role models!

– but hey, don’t ask me I’m just a girl…

Post by: Sarah Fox and Louise Walker

The Flow-Chart of Science

Next time you're perusing your favourite newspaper or news website, it's quite likely you'll come across a headline announcing a new scientific discovery, perhaps something like "New drug found to reduce tumour growth in lung cancer patients". This headline seems simple enough, but don't be fooled: generating it will have involved several scientists, several years and a whole lot of blood, sweat and tears (sometimes literally). What makes it all so complicated, you may ask? Well, to answer this question I have created, for your entertainment and enjoyment, some (very generalised) flow-charts of a scientist's life, which I hope will provide you with a small window on our world:

#1 Experimentation:

All research starts out with a hypothesis (a testable idea). Normally, after months of reading around a topic, you build on existing knowledge by proposing your own question. This question may stem from something someone else has discovered, or perhaps from a hunch or suspicion based on your own previous research. For example, you may propose that "daily intake of a new experimental drug will significantly reduce growth of tumours in lung cancer patients". The difficult part is working out how to prove this! You must find the best way to design an experiment to answer the question, preferably in the quickest and cheapest way possible.

Designing the experiment is arguably the trickiest part of scientific research. It is important that you can show, beyond reasonable doubt, that the factor you are testing is the sole cause of the effect you are seeing. Taking our cancer-drug hypothesis above, you will need to show that the experimental drug is the main factor causing the reduction in tumour growth. In order to do this, you need a relevant 'control' for your experiment. A control experiment mimics the actual experiment in every respect except the factor you are testing. To test that a new experimental drug works against lung cancer, you will need two groups of individuals (humans, or perhaps experimental animals, depending on what stage of development the drug is in) with the same type of lung cancer. The first group is given the experimental drug and the effect on the size of their tumours is monitored. This is the "experimental group". However, the data from this group cannot stand alone; you need something to compare it to. This is where your control group (also known in human trials as the "placebo group") comes in. The placebo group will usually also receive a pill, but an inactive one, such as a sugar pill. You can then compare the rate of tumour shrinkage between the experimental and placebo groups. If the placebo group's tumours shrink just as much as those of the drug-treated group, then the effect is not due to your drug but to something else.

The selection of the experimental and placebo groups is immensely important; you must ensure that the only real difference between the groups is whether or not they receive the drug. For example, if your experimental group are all female but your control group are all male, you will not be able to say whether it was the drug or the effect of gender which caused your results. This is much easier to achieve in laboratory settings than in clinical studies (carried out with human subjects), since in a lab you can easily control factors such as diet and lifestyle in both groups.
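To make the logic of the last two paragraphs concrete, here is a minimal, purely illustrative sketch in Python (not part of the original post, and certainly not a real clinical analysis): it randomly assigns some made-up patients to an experimental or placebo group, simulates their tumour-size changes, and compares the average change between the two groups. All names and numbers are invented for demonstration.

```python
# Illustrative sketch only: a made-up "trial" showing random assignment to an
# experimental and a placebo group, followed by a simple comparison of the
# average change in tumour size. All numbers are invented for demonstration.
import random
from statistics import mean

random.seed(1)  # fixed seed so the example is reproducible

# Twenty hypothetical patients, identified only by number.
patients = list(range(20))

# Randomise who gets the drug and who gets the placebo, so that the only
# systematic difference between the groups is the treatment itself.
random.shuffle(patients)
experimental_group = patients[:10]
placebo_group = patients[10:]

# Simulate outcomes: percentage change in tumour size (negative = shrinkage).
# Here we pretend the drug shrinks tumours more than the placebo does.
change = {p: random.gauss(-30, 10) for p in experimental_group}
change.update({p: random.gauss(-5, 10) for p in placebo_group})

drug_changes = [change[p] for p in experimental_group]
placebo_changes = [change[p] for p in placebo_group]

print(f"Mean tumour change, drug group:    {mean(drug_changes):+.1f}%")
print(f"Mean tumour change, placebo group: {mean(placebo_changes):+.1f}%")

# A real study would go further and use a statistical test to check that any
# difference between the groups is unlikely to have arisen by chance.
```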

Ensuring your experiment works and is properly controlled can often be a serious headache and can take up a significant amount of time. However, once you've got your experiment working, it's time to get the results. If you're lucky, the results will prove your hypothesis and you can move on to the next stage. However, there's also a chance they'll disprove your hypothesis, in which case you'll need to generate a new hypothesis and start the whole process again.

So you've got your experiment working, it's well controlled and it proves your hypothesis. Now you're good to go, right? Actually, no. Science rarely gives a definitive answer; what's more likely is that answering your original question will produce more questions which need further investigation. You may have noticed several infinite loops in the flow charts (if you follow the "no" answers, anyway). The point at which you decide to break the loop and publish your data in a 'journal' for the rest of the scientific community to see can feel somewhat arbitrary.
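For the programmatically minded, the experimentation loop described above can also be sketched as a short, tongue-in-cheek Python simulation (again, not from the original post; the helper functions and probabilities are invented stand-ins for months of real work).

```python
# A playful, purely illustrative sketch of the "experimentation loop":
# hypothesis -> experiment -> (disproved? new hypothesis) -> (publish or keep going).
import random

random.seed(7)

def experiment_supports(hypothesis):
    """Pretend to run a controlled experiment; succeed ~40% of the time."""
    return random.random() < 0.4

def worth_publishing():
    """Deciding when to break the loop and publish is somewhat arbitrary."""
    return random.random() < 0.3

def do_science(hypothesis, max_years=10):
    for year in range(1, max_years + 1):
        if not experiment_supports(hypothesis):
            hypothesis = f"revised idea #{year}"        # back to the drawing board
            continue
        if worth_publishing():
            print(f"Year {year}: publish results for '{hypothesis}'")
            hypothesis = f"follow-up question #{year}"  # answers breed new questions
        else:
            print(f"Year {year}: promising, but needs more work")
    print("...and the loop never really ends.")

do_science("new drug reduces tumour growth")
```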

#2 Publishing:

There are thousands of different journals covering all specialities, from general science (maths, physics, biology, chemistry, engineering etc.) to incredibly specific areas such as Alzheimer's disease. These journals publish "papers" submitted by groups of scientists, detailing their research, any relevant data required to back up their claims and an explanation of how their work is relevant to the wider community. The important thing about publishing data is that you have a coherent story which is interesting to other people, especially fellow scientists.

So what happens when you finally decide to publish? Firstly, there is a significant advantage in having positive data (results which prove your original hypothesis). A big problem with scientific research is that negative data (results which disprove a hypothesis) are much harder to publish. A positive result is generally regarded as more interesting and a journal is more likely to accept it.

Another thing about publishing is that the data you present have to be of the highest quality. This may mean repeating experiments until the data produced are both convincing and aesthetically pleasing, which can take months to do well. Once you've got your data, you can write your paper in the style of your intended journal. This, again, can be a lengthy process.

So you've got convincing, positive data which you've written up as a paper, but now what do you do? Well, it depends on which journal you submit to, but generally the paper will first be scrutinised by the journal's editors, who decide whether it's interesting enough to publish. This can depend on the journal: top-level publications (also known as "high impact journals") will only accept the highest-quality, most interesting stories and are quite likely to reject research they don't find interesting enough. If your work is not deemed good enough for your chosen journal, the paper is 'bounced back' to you and you'll have to rewrite it for another journal with a lower "impact factor", or perform more experiments to help back up or round off your story.

If the editors accept your data, they’ll send it for review. This means giving it to (usually) three other scientists who work in a similar field. They thoroughly check the data and ensure that it makes sense. At this point, they may give the green light to publish, but they could also ask for additional experiments or data to add to the story. However, they could also decide that it isn’t interesting or convincing enough to be published in this particular journal. You can then attempt to submit it to a different journal, or you may have to scrap your whole idea and come up with a new hypothesis.

The pay-off for all of this time and frustration is finally having your work published and available to the wider scientific community. Fellow scientists around the world will now be able to see what you've been doing and what you've achieved, making all the blood, sweat and tears worth it. Hopefully your work will make a recognisable contribution to your field; even small or seemingly insignificant discoveries can turn out to be very important later on.

So you’ve published … now what? You guessed it: time for a new hypothesis. Prepare to re-enter the loop and start the whole business over again.

Post by: Louise Walker