Are we ready to live without immunisations?

The immune system is like an army, poised and ready to attack. It actually comprises two separate systems – the innate and the adaptive immune systems – which you could think of as two different kinds of soldier: foot soldiers and intelligence officers. The innate immune system responds quickly to threats and is always on guard, but it is non-specific: it does not recognise any particular threat and responds to all of them in the same way. This system could be thought of as the body’s foot soldiers, recognising all types of enemy but always launching the same type of attack. The adaptive immune system is slower, since it needs time to recognise the threat, but once this is done it can launch a more specific, targeted attack. This system also has a memory: after successive encounters with the same threat it adapts, so that the next time it meets that threat its attack is faster and more efficient. It is during this type of attack that antibodies are produced. These antibodies stick around in the body, bolstering our immune defences and speeding up future attacks (it is this process which is harnessed through immunisation/vaccination). We could think of this system as the body’s intelligence agency, gathering information on its enemy and launching a targeted attack.

A vaccine aims to induce an immune response without harming the body: the recipient is exposed to a harmless version of a virus or bacterium, triggering an immune response without making them sick. This creates an immunological memory, so that the next time we are infected by the same pathogen the immune system reacts quickly and the threat is neutralised before we show any symptoms.

Vaccines have been a major success: they have helped to eliminate most of the childhood diseases that historically caused millions of deaths, and they are very cost effective. Partly thanks to this, average life expectancy has increased from around 35 years in 1750 to above 80 years today. According to the World Health Organization, measles vaccination resulted in a 79% drop in measles deaths worldwide between 2000 and 2014, and according to UNICEF immunisation prevents around 2-3 million deaths from life-threatening diseases in children each year.

But some people still choose not to immunise their children, citing a range of reasons – from religion to the belief that vaccines are neither effective nor safe. In 1998 the Lancet published a paper by Andrew Wakefield claiming that the MMR (measles, mumps and rubella) vaccine caused autism in 12 children. Several subsequent peer-reviewed studies failed to show any association between the vaccine and autism, and the Lancet’s editors eventually fully retracted Wakefield’s paper, citing deliberate falsification. But, despite the lack of solid evidence and the paper’s retraction, vaccination rates in the UK dropped to 80% in the following years, leading to an increase in cases of measles across the UK – causing not only deaths but also measles encephalitis. By 2008, measles had become endemic in the UK due to poorly vaccinated communities.

Last year there was a case of diphtheria in Spain, when a 6-year-old non-immunised boy became infected. Diphtheria is a highly contagious disease caused by the bacterium Corynebacterium diphtheriae. Thanks to immunisation campaigns in Spain, the number of cases of diphtheria in the country dropped from 1,000 per 100,000 inhabitants in 1945 to 0.10 per 100,000 in 1965 – by 2016 diphtheria was thought to have been eliminated in the area. So when this young boy fell ill last year there was no treatment available in the country. Sadly, despite heroic efforts to import one, the child died. When his parents were asked why he had not been immunised, they said they felt tricked by anti-vaccination groups and had not been properly informed; they thought they were doing the best thing for their child.

So what happens when people decide not to immunise their children?

Provided that a large proportion of a population is immunised, non-immunised individuals may still be protected by a process known as herd immunity. Put simply, the more people who are immunised, the fewer opportunities a disease has to spread – and this confers protection on those who can’t be immunised (such as children with cancer receiving chemotherapy or radiotherapy, children treated with immunosuppressive drugs such as corticosteroids, people with weakened immune systems, or people allergic to any of the components of a vaccine). These people really depend on those around them being immunised.
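To get a feel for the numbers involved, here is a minimal sketch using the textbook threshold model, in which the share of a population that must be immune to stop sustained spread is 1 − 1/R₀ (R₀ being the basic reproduction number: the average number of people one infected person infects in a fully susceptible population). The R₀ values used are illustrative figures commonly quoted in the literature, not precise estimates:

```python
# Minimal sketch of the textbook herd immunity threshold: 1 - 1/R0.
# The R0 values are illustrative mid-range figures, not precise estimates.

def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune to halt sustained spread."""
    return 1.0 - 1.0 / r0

illustrative_r0 = {"measles": 15, "diphtheria": 6, "polio": 6}

for disease, r0 in illustrative_r0.items():
    print(f"{disease}: ~{herd_immunity_threshold(r0):.0%} must be immune")
# measles: ~93% must be immune
# diphtheria: ~83% must be immune
# polio: ~83% must be immune
```

With a measles-like R₀, coverage needs to stay in the mid-90s – which is why the post-Wakefield drop to around 80% was enough to let measles return.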


We can lose sight of the benefits of immunisation because we have no memory of living in a world without vaccines. But diseases that we thought were consigned to history will come back if we stop immunising – meaning we could find ourselves confronting epidemics of diseases with the ability to kill hundreds of thousands of children and adults every year. Sadly, some diseases can never be fully eradicated because their causes are found everywhere. For example, tetanus is a serious infection caused by the bacterium Clostridium tetani, which produces a toxin that affects the brain and nervous system. Clostridium tetani spores are found most commonly in soil, dust and manure, but exist virtually everywhere. A child playing in a sand pit, or simply on grass, can easily become infected through a cut or wound on their hands. Maintaining immunisation is therefore particularly important in the fight against this type of disease.

Although vaccines do come with some side effects, such as a high temperature or soreness at the injection site, very serious health events following immunisation are rare, and the benefits of immunisation clearly outweigh the risks of infection. For diseases like these, immunisation remains the only reliable means of prevention.

Post by: Cristina Ferreras

References:

Rappuoli R, et al. Vaccines for the twenty-first century society. Nat Rev Immunol. 2011;11(12):865-72. doi: 10.1038/nri3085. Erratum in: Nat Rev Immunol. 2012;12(3):225.

UNICEF. Immunization Facts and Figures, April 2013. http://www.unicef.org/immunization/file/UNICEF_Key_facts_and_figures_on_Immunization_April_2013%281%29.pdf


The functions of bioluminescence brought to light

When picturing the dead of night or the deepest depths of the ocean, we may be inclined to think of pure, impenetrable darkness. Yet nature has quite a different image in mind – one where darkness is pierced by flashes of light, warming glows and pulses of colour. These vibrant light displays are the result of a phenomenon known as bioluminescence – the production and emission of light from a living organism, which occurs when a molecule called luciferin combines with oxygen in the presence of the enzyme luciferase. Apart from being undeniably beautiful to watch, there is increasing evidence to suggest bioluminescence has a number of important functions for an organism.

Bioluminescence can occur in a broad range of organisms, from bacteria to fish (with the exception of higher vertebrates, e.g. reptiles and mammals), and is found across a variety of land and, more commonly, marine environments. In fact, around 60–80% of deep sea fish are thought to be bioluminescent. The pattern, frequency and wavelength (i.e. the colour) of light emitted can also differ by species and habitat. For instance, while violet or blue light is more common in deep water, bioluminescent organisms found on land tend to produce green or yellow light.

Bioluminescence lends itself to a number of functions – the first being reproductive success. The most prominent examples of bioluminescence’s advantage in this area are fireflies and glow-worms. In species of firefly where only the males are able to fly, the females attract their mate by emitting a constant glow which can be spotted by the males as they fly overhead. In other species the male fireflies are also bioluminescent and produce a flashing light in response to the female’s glow, resulting in a kind of “courtship conversation”. It has been suggested that the female’s preference may be determined by the frequency of male flashes, with higher flashing rates being more desirable. There are also a variety of fish in the deep ocean which appear to use bioluminescence to facilitate reproduction. Black dragonfish, for instance, are unusual in that they emit red, near-infrared light rather than the blue light common to deep-sea organisms. Since most deep-sea animals cannot see red light, dragonfish can use it to seek out a mate in the darkness without alerting prey to their presence.

In contrast to hiding from prey, as with the dragonfish, bioluminescence can also be used to attract prey. Deep in the Te Ana-au caves of New Zealand, fungus gnat larvae construct luminescent “fishing lines” to lure other insects. These insects become trapped on the sticky lines and make a tasty meal for the lurking larvae. Back in the ocean once more, there is evidence that a type of jellyfish known as the “flowerhat” jellyfish also uses bioluminescence to attract the small fish (e.g. young rockfishes) on which it preys. These jellyfish have fluorescent patches on the tips of their tentacles. In experiments studying how tip visibility influences predation, significantly more rockfish were attracted to the jellyfish when its fluorescence was visible than when it was indiscernible, highlighting the importance of this attribute in acquiring food.

Alternatively, bioluminescence may also protect an organism from predation. This can work in a variety of ways, from providing camouflage to warning predators of the dangers of attacking. A good example of the latter can be seen in glow-worm larvae. Unlike adult glow-worms, whose fluorescence aids courtship, glow-worm larvae emit light to warn predatory toads of their unpalatability. This has been demonstrated by researchers in Belgium, who found that wild nocturnal toads were more reluctant to attack dummies resembling the larvae if the dummies were bioluminescent. In addition, bioluminescence can protect against predation by acting as a diversion: some species of squid, for example, are able to release a luminescent secretion when under threat, confusing the predator so the squid can escape.

For now, the role of bioluminescence seems clearer in animals than in organisms outside the animal kingdom, such as fungi or bacteria. However, a recent study has suggested a potential role for bioluminescence in spreading the spores of a variety of mushroom found in Brazilian coconut forests. The investigators were able to attract nocturnal insects using an artificial light which replicated the mushroom’s green bioluminescence. This did not happen when the light was switched off, suggesting light may help this type of mushroom entice insects which can then disperse its spores.

Bioluminescence can provide its creator with a light in the darkness. It can help an organism to seek out, attract and successfully court a mate; lure unsuspecting prey to their doom; and warn off or divert the attention of predators when under attack. Yet while there are many instances where the function of bioluminescence is fairly clear, as discussed here, scientists remain very much “in the dark” in other cases. This is particularly true for those organisms, such as fungi and bacteria, which do not belong to the animal kingdom. Nevertheless, with continued research and new discoveries forever being made, it is only a matter of time before these elusive functions are brought to light.

Post by: Megan Freeman


Are you my type? The fascinating world of blood types

With our 30th birthdays on the horizon my best friend and I decided to make a bucket list. As the months ticked by one thing on my list stood out – blood donation. I must admit I feel slightly ashamed that, peering over the horizon of the big 30, I’m yet to give blood, especially since both my mum and maternal grandmother donated so regularly in their youth that they were given awards. My grandmother always told me she felt compelled to donate since she had a relatively rare blood type*. She also told me that the reason my mother was an only child was because of an incompatibility between my mother’s blood, which was positive for a blood factor, and her own, which was negative (https://en.wikipedia.org/wiki/Rh_disease). This got me thinking – how many blood types are there, and why do they exist? How is it that the blood flowing through my veins and yours can be made of the same basic elements and yet be so different?

All human blood consists of the same fundamental components – red blood cells, white blood cells, plasma and platelets. Of these, it is the red blood cells that give blood its identity or ‘type’. The surface of red blood cells is covered with molecules, and the presence or absence of particular molecules determines which blood group you belong to. The principal blood grouping system used for humans is the ABO system, which categorises people into one of four groups – A, B, AB or O – on the basis of the presence or absence of particular antigens. An antigen is a molecule capable of producing an immune response when it does not originate from within your own body. People with blood type A have the A antigen on their red cells, people with type B have the B antigen, people with type AB have both, and people with type O have neither. This is why a person with type B blood cannot receive blood from a person of type A: the A antigens on the donor’s cells would be foreign to the type-B recipient’s body and would therefore trigger an immune response. This basic grouping can be expanded to 8 groups when another important factor, ‘Rh’, is considered. The Rh antigen can either be present (+) or absent (-), so if, like my grandmother, you have A- blood it means that your red cells carry A antigens and lack the Rh factor.
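The compatibility rule this implies is simple enough to state in a few lines of code. The sketch below encodes the textbook red-cell rule only (real transfusion practice involves many more antigen systems and cross-matching): a donor’s red cells are safe for a recipient only if every antigen on the donor’s cells is one the recipient’s own blood already carries, so the immune system sees nothing foreign.

```python
# Simplified textbook rule for red-cell compatibility: a donation is safe
# only if the donor's antigens are a subset of the recipient's antigens.
# Real transfusion medicine checks many more antigen systems than this.

ANTIGENS = {
    "O-": set(),         "O+": {"Rh"},
    "A-": {"A"},         "A+": {"A", "Rh"},
    "B-": {"B"},         "B+": {"B", "Rh"},
    "AB-": {"A", "B"},   "AB+": {"A", "B", "Rh"},
}

def can_donate(donor: str, recipient: str) -> bool:
    """True if the recipient's immune system would see nothing foreign."""
    return ANTIGENS[donor] <= ANTIGENS[recipient]

print(can_donate("O-", "AB+"))  # True  - O- is the 'universal donor'
print(can_donate("A+", "B+"))   # False - the A antigen is foreign to type B
print(can_donate("A+", "A-"))   # False - Rh+ cells can sensitise an Rh- recipient
```

This subset rule is why O- donors are so sought after: their red cells carry none of the three antigens, so no recipient’s immune system objects.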

At first glance, categorising blood into different types may seem like an academic exercise or a scientific curiosity, but it has serious real-world consequences. In the 1800s many doctors noted the seemingly unnecessary loss of life from blood loss; however, few were brave enough to attempt transfusions. This reluctance stemmed from earlier attempts at transfusion in the 1600s, in which animal blood was transfused into human patients. Most of these attempts ended in disaster, and by the late 1600s animal blood transfusions were not only banned in both Britain and France but also condemned by the Vatican. However, after watching another female patient die from haemorrhaging during childbirth, the British obstetrician James Blundell resolved to find a solution. He thought back to the earlier transfusion attempts and correctly guessed that humans should only receive human blood. Unfortunately, what he didn’t realise was that any given human can only receive blood from certain other compatible humans. Blundell tried, but with mixed success, and by the late 19th century blood transfusion was still regarded as a risky and dubious procedure, largely shunned by the medical establishment.

Finally, in 1900, the Austrian doctor Karl Landsteiner made a breakthrough discovery. For years it had been noted that if you mixed patients’ blood together in test tubes, it sometimes formed clumps. However, since the blood was usually taken from people who were already ill, doctors simply assumed that clumping was caused by the patient’s illness, and the curiosity was ignored. Landsteiner was the first to wonder what happened if the blood of healthy individuals was mixed together. Immediately he noticed that some mixtures of healthy blood clumped too. He set out to investigate this clumping pattern with a simple experiment. He took blood samples from the members of his lab (including himself) and separated each sample into red blood cells and plasma. Systematically, he combined the plasma of one person with the red cells of another and noted whether it clumped. By working through each combination he sorted the individuals into three groups, which he arbitrarily named A, B and C (later renamed O). He noted that if two individuals belonged to the same group, mixing plasma from one with red blood cells of the other didn’t lead to clumping – the blood remained healthy and liquid. However, if blood from an individual in group A was mixed with a sample from an individual in group B (or vice versa), the blood would clump together. Group O individuals behaved differently: Landsteiner found that both A and B blood cells clumped when added to O plasma, yet he could add A or B plasma to O red blood cells without clumping. We now know that this is because red blood cells express antigenic molecules on their surface. If an individual is given blood of the ‘wrong’ type (i.e. one that expresses a different antigen to the host’s blood), the person’s immune system notices that the transfused blood is foreign and attacks it, causing potentially fatal clumping (agglutination) of the red cells. Our knowledge of different blood types means that we can now make safe blood transfusions from one human to another, thereby saving countless lives. In recognition of this fundamental discovery, Karl Landsteiner was awarded the Nobel Prize in Physiology or Medicine in 1930.

Since Landsteiner’s work, scientists have developed ever more powerful tools for studying blood types. However, despite increasingly detailed knowledge of the genes and molecules expressed by different blood groups, it’s still unclear why different types exist at all. In an effort to understand the origins of blood types, researchers have turned to genetic analysis. The ABO gene is responsible for producing the antigens expressed on the surface of our red blood cells. By comparing the human gene to that of other primates, researchers have found that blood groups are extremely old – both humans and gibbons have variants of the A and B blood types, and those variants come from a common ancestor around 20 million years ago.

The endurance of blood groups through millions of years of evolution has led many researchers to think that there could be an adaptive advantage to having different types. The most popular hypothesis is that blood types developed during our ancestors’ battles with disease: if a pathogen exploited common antigens as a way of infecting its host, then individuals with rarer blood types may have had a survival advantage. In support of this, several studies have linked different disease vulnerabilities to different blood groups. For example, people with type O blood are better protected against severe forms of malaria than people with type A blood, and type O blood is more common in Africa and other parts of the world with a high prevalence of malaria. This suggests that malaria may have been the selective force behind the spread of type O blood.

As I sign up for my first blood donation appointment I think back to everything I’ve learnt about blood types. I’m eager to find out what my blood type is and what that might reveal about the history of my ancestors and the disease challenges they’ve faced. Most of all, though, I’m excited to continue my family’s tradition and contribute to one of humankind’s greatest medical advancements.


Post by: Michaela Loft

*A-, which is present in only ~7% of Caucasians.

When matters of the mind meet ailments of the body: exploring the power of big data

Silhouette of a woman sitting by a window in a dim room, holding her head.

According to the charity Mind, 1 in 4 people in the UK experience some type of mental health problem each year. Importantly, those suffering from severe mental illness (SMI) also tend towards poorer physical health and higher mortality rates than those without an SMI. This link was investigated in a large longitudinal study from the University of Manchester’s Institute of Population Health and Lancaster University’s Division of Health Research, published last September in the British Medical Journal, which offers new insight into the well-established link between poor physical and mental health.

Researchers used data from the Clinical Practice Research Datalink (CPRD) – a powerful not-for-profit research service which has been collecting anonymised medical data since 1987 – to explore SMI across the UK and probe how this is linked to social factors and a range of physical conditions. The researchers collected data annually between 2000 and 2012 from patients suffering from SMI* alongside control subjects with no SMI diagnosis. Control subjects were matched for age, sex and general practice (GP) – 5 controls for each SMI sufferer. The power of this research lies in both the number of individuals studied (more than 300,000 SMI sufferers and more than 1,700,000 matched controls) and the timescale over which the data was collected (12 years) – this being the first study of its kind to analyse mental health data in this way.

From these data the researchers found that the total number of individuals diagnosed with a SMI increased over the period between 2000 and 2010, with this increase being most striking in areas of higher social deprivation (social deprivation being estimated by GP postcode). These findings also highlighted an improvement in SMI diagnosis, indicating that people are now routinely receiving an SMI diagnosis earlier in life.

With regard to the association between mental and physical health the study found that all 16 physical conditions studied** were more common in patients with a SMI than in control patients. It was also observed that, over the study period, SMI sufferers showed a higher yearly increase in diagnosis rates for a range of conditions (including: diabetes, hypothyroidism, chronic obstructive pulmonary disease (COPD), chronic kidney disease (CKD) and stroke) compared to matched controls. This increase appeared to coincide with increased prescription of atypical antipsychotic medication in the SMI group. Indeed, this and previous studies suggest that a range of complex factors may interact in SMI patients to produce this effect. Specifically, a combination of antipsychotic medication, unhealthy lifestyles, social withdrawal and challenges associated with seeking and following medical advice may all lead to poor physical health in SMI sufferers. However, further research is still needed to fully understand which factors play a role in the link between mental illness and poor physical health.

The findings concerning associations between SMI, physical health and social deprivation are more complicated. This study suggests that SMI sufferers living in deprived regions are more likely to be diagnosed with diabetes mellitus, asthma, coronary heart disease, COPD, learning disability, osteoarthritis or epilepsy, whereas those living in more affluent conditions are more often diagnosed with CKD, psoriasis, cancer, stroke or dementia.

Although the researchers involved in this study stress that there are limitations to these findings, some important points have been raised, particularly in relation to mental health care and policy. Specifically, the findings suggest that patients suffering from an SMI are indeed more likely to suffer from one or more associated physical conditions. This points to the possible benefit of increased training for mental health professionals, especially in recognising indicators of poor physical health and identifying the complex needs of the patients in their care. These results also suggest that social and regional factors might influence the treatment and diagnosis of certain physical conditions in patients with SMI. Such findings warrant further study, as this knowledge may be beneficial in shaping and targeting mental health care at the regional level.

With the cost of care for individuals with both mental and physical health problems exceeding the cost of treating either condition alone, it is important that mental health funding be prioritised and that studies such as this be encouraged.

*defined as: schizophrenia, affective disorder or other types of psychoses.

**Hypertension, diabetes (type I and II), asthma, hypothyroidism, osteoarthritis, chronic kidney disease (CKD), learning disability, coronary heart disease, epilepsy, chronic obstructive pulmonary disease (COPD), cancer, stroke, heart failure, rheumatoid arthritis, dementia and psoriasis.

Post by: Sarah Fox


The Case of the Jumping Carbons

This year the Manchester branch of the British Science Association launched its first ever science journalism competition. They presented AS and A-level students across Greater Manchester with the daunting task of interviewing an academic researcher, then using this material to create an article accessible to someone with no scientific background. This was by no means a simple task, especially since many of the researchers were working on basic research – the type of work which may not be sensational but which represents the real ‘nuts and bolts’ of scientific research, and without which no major breakthroughs would ever be made. Despite the challenges implicit in this task all our entrants stepped up, and we were astounded by the quality of work submitted.

Today we’re proud to publish our winning article written by Tilly Hancock from Oswestry School:


Imagine you are inside a nuclear reactor, a UK design. Not only are you inside it, but you are part of it: a carbon atom inside the graphite core (the ‘moderator’) which houses the control rods and fuel rods. Around you the environment is glowing with heat and radiation, all given off by the splitting (fission) of uranium-235 nuclei. The temperature of 450°C is no problem, and you remain tightly bound in a lattice arrangement with your fellow carbons.

However, when the uranium nuclei split, they spit out more neutrons which pelt towards you at high speeds. One slams into you, and you slow it down, as is your job, so it travels at a suitable speed to cause more fission events. In this process you absorb the neutron’s energy, and get knocked out of your slot in the lattice. You whiz towards your fellow carbon atoms, knocking more out of their spaces like a billiard ball, wreaking havoc in the strict order of the graphite crystal. Eventually you transfer all of your extra energy to your neighbours and come to rest, filling a vacancy left by another displaced carbon or squeezing in between the orderly lattice layers (as an ‘interstitial’). Here you wait, ready to absorb the excess energy of the next neutron. The upheaval is routine to you, as during your life in the reactor you may switch places up to 30 times.

This is just one atom, but what are the consequences of millions jumping around like this?

A finite element model of a graphite sample and how the model behaves when irradiated or heated. Image credit: Dr Graham Hall, Manchester University.

Well, the effects are unpredictable. The radiation barrage that the graphite endures can cause its material properties – thermal expansion, strength and even its dimensions – to change in strange ways. Even to the human eye, these changes would be noticeable. The moderator can change shape by up to 2%, depending on the grade of graphite; a surface that started smooth may finish rough. The dimensions may warp so that the control rods used to restrain the nuclear reaction no longer fit into their channels. It is clearly important to completely understand how the graphite will change when designing new reactors or maintaining existing ones. The problem is that we don’t.

For years, the only way to investigate the effects of the jumping carbon atoms has been using ‘materials test reactors’, which can take over 3 years and £10 million to complete a single experiment.

Is there an easier way to predict what the carbons’ dance can lead to?

As in so many fields, computers are now proving their worth. Manchester University’s Dr Graham Hall designs models which do part of the test reactors’ job. He imagines how those millions of carbon atoms move around and uses similar previous models to predict some of the complex property changes. What’s more, modelling a lump of graphite in a reactor-like environment takes just one week, as opposed to the three years required by the test reactors. Although these models are unlikely to be used to design new reactors, they are tackling the problem of variability between different grades of graphite.

The irradiation-induced dimensional changes of graphite at two irradiation temperatures, predicted from finite element models and compared with experimental data. Image and data courtesy of Dr Graham Hall, Manchester University.

Comparing computer predictions with experimental data has helped researchers advance their understanding of what those carbon atoms really get up to inside the reactors and, more importantly, how this affects the moderator as a whole. Hopefully, one day soon, materials test reactors will only be needed to calibrate models like this one, sparing millions of pounds and many, many years – but for now the jumping carbon atoms will continue to keep researchers on their toes.

Post by: Tilly Hancock – courtesy of Sarah Fox (Volunteer with the British Science Association)

For more amazing scientific articles please visit our friends at Things We Don’t Know. Things We Don’t Know is a not-for-profit organisation that seeks to explain the cutting-edge questions scientists are trying to solve, in everyday language.


Shedding Light on the Nucleus

This year the Manchester branch of the British Science Association launched its first ever science journalism competition. They presented AS and A-level students across Greater Manchester with the daunting task of interviewing an academic researcher, then using this material to create an article accessible to someone with no scientific background. This was by no means a simple task, especially since many of the researchers were working on basic research – the type of work which may not be sensational but which represents the real ‘nuts and bolts’ of scientific research, and without which no major breakthroughs would ever be made. Despite the challenges implicit in this task all our entrants stepped up, and we were astounded by the quality of work submitted.

Today we’re proud to publish one of our runner-up articles, written by Hayley Martin from Oswestry School:

“The nucleus can be thought of like the engine of a car – driving the actions of the cell.” This is an analogy made by Professor Dean Jackson at Manchester University. With a passion for the genome and forty years of research behind him, Professor Jackson has become an expert in understanding mammalian nuclei and chromosomes, and how the organisation of their structures defines the cell’s behaviour. In order for cells to function correctly the genetic code stored in the DNA of each gene has to be interpreted by a process called gene expression, whereby information from the gene is used in the synthesis of the gene product. These gene products often include proteins such as enzymes, hormones and antibodies, all vital to our survival. Gene expression is immensely complicated due to the number of processes involved. Professor Jackson has been studying these processes and has helped to shed light on exactly why this expression is so complicated.

Figure 1 – The nucleus of a human cell, showing the distribution of DNA (blue), transcription factories (green) and proteins (red) involved in further modification of RNA.

Transcription is the first process that contributes to gene expression – it is the process whereby information from DNA is copied into a new strand of RNA, which is then used to synthesise proteins. Professor Jackson has been able to tag newly formed RNA with a fluorescent antibody that can be detected using a laser scanning confocal microscope. This equipment scans a beam of light of a specific wavelength through the specimen, causing the antibodies to fluoresce. The resulting image is displayed in Figure 1. Images such as this have allowed him to locate the areas in the nucleus where RNA is formed – he refers to these areas as “transcription factories”. He has also found that these factories contain many other genes and proteins, which assemble into specific complexes. Such knowledge is key to defining the required level of synthesis of each gene product. It also provides the potential for co-regulation of genes, in that the way one gene in a complex is expressed will affect the expression of another. Recent work has concluded that a gene can have as many as 20 other genetic elements, known as enhancers, contributing to its overall expression – which is why gene expression is so complex.

Gene therapy is an exciting modern concept: it offers the prospect of improving lives without the need for drugs with potential side effects, and offers possibilities for treating diseases that previously had limited therapeutic options. So far it has been considered as an approach to replacing mutated genes with normal functioning copies, inactivating or removing damaged genes, and introducing new genes that might help the body fight off disease. With the use of new techniques such as CRISPR, gene insertion is relatively easy. However, Professor Jackson’s research has highlighted how gene therapy isn’t as simple as just inserting a gene – the gene has to be controlled in the right way by these complex processes in order for the cell to retain control of its actions. The difficulty of controlling these actions means that gene therapy is currently a risky process and is not a common treatment. Trials are underway to develop effective gene therapy methods for treating inherited disorders including haemophilia and cystic fibrosis, and viral infections such as HIV. We can hope, with advances in our understanding of nuclear structure and the processes of gene expression, that safe and effective gene therapy treatments will become a reality.

Post by: Hayley Martin – courtesy of Sarah Fox (Volunteer with the British Science Association)

Can the Onset of Psychosis Be Predicted by the Presence of Neuro-inflammation?

This year the Manchester branch of the British Science Association launched its first ever science journalism competition. They presented AS and A-level students across Greater Manchester with the daunting task of interviewing an academic researcher, then using this material to create an article accessible to someone with no scientific background. This was by no means a simple task, especially since many of the researchers were working on basic research – the type of work which may not be sensational but which represents the real ‘nuts and bolts’ of scientific research, and without which no major breakthroughs would ever be made. Despite the challenges implicit in this task all our entrants stepped up, and we were astounded by the quality of work submitted.

Today we’re proud to publish one of our runner-up articles, written by Maaham Saleem from Withington Girls’ School:

Imagine a life where the dawn of each new day is accompanied by severe hallucinations, delusions and an inability to respond to stimuli in a way that is deemed ‘normal’; where the problems that you face heavily impair your ability to carry out social interactions and leave you in a debilitated state. This life is reality for patients with psychosis, a mental health problem that causes people to perceive and interpret events differently from the average person. Psychosis can occur in a number of different conditions, such as schizophrenia and bipolar disorder.

In recent times a great deal of interest has arisen within the scientific community regarding the link between this condition and inflammation in the brain. In the late 20th century, post-mortem studies of patients with schizophrenia showed the presence of inflammation. However, these results were not always consistent, possibly due to differences in the regions of the brain which were examined. More recent studies using brain scans in living patients did find a more consistent increase in microglial activation – an indicator of neuro-inflammation – in patients with psychosis. Microglia are resident, innate immune cells in the brain which have long been connected with the pathology of neurodegenerative diseases. The activation of these cells indicates inflammation, and it has been suggested that individuals who display such inflammation may have a predisposition to developing psychotic disorders later in life.

At the Wolfson Molecular Imaging Centre of the University of Manchester, researchers are investigating whether this link between neuro-inflammation and psychosis does indeed exist. To ensure that any conclusions are valid, a large amount of supporting evidence must be generated, so the study is being conducted in collaboration with other centres around the country. Three groups of volunteers are tested: patients who have had psychosis for many years, patients for whom the onset of psychosis is recent, and healthy volunteers acting as controls. Each group consists of twenty volunteers, giving a total sample size of sixty – large enough to increase the statistical power of the results and the likelihood that they are representative of the majority of patients with psychosis.

All volunteers undergo a brain scan called positron emission tomography (PET). PET scans involve the injection of a radioactive tracer into the body, which emits positrons as it decays inside the tissues; this radiation can be detected by cameras. By using a specific radioactive tracer called [11C]PK11195, microglial activation can be measured in order to determine the amount of inflammation in the brain. Many of the results from studies investigating the link between neuro-inflammation and psychosis suggest that it does indeed exist. Although more studies must of course be carried out to confirm this hypothesis, it presents an exciting prospect: possible treatments and preventative measures to assist patients with psychosis.

Post by: Maaham Saleem – courtesy of Sarah Fox (Volunteer with the British Science Association)

“Accidents will happen” – when serendipity meets drug discovery

Few would argue that advancements in science require hard work, long hours and intellectual minds. But a few accidents along the way can certainly help as well. Serendipity, the accidental, but very fruitful, discovery of something other than what you were looking for, has played a significant role in drug discovery through the ages. Here are just three examples to whet your appetite…

Warfarin

Today, warfarin is the main blood thinner (anticoagulant) used in the treatment of blood clotting disorders, heart attacks and strokes. However, its story began on the prairies of the northern United States and Canada during the Great Depression. During the 1920s, financial hardship forced farmers in these areas to feed damp or mouldy hay to their livestock. As a result, many seemingly healthy cattle started to die from internal bleeding (haemorrhaging) – an illness later given the name ‘sweet clover disease’. Although investigators were able to attribute sweet clover disease to the consumption of mouldy hay, the underlying molecule responsible remained a mystery. Consequently, the only solutions offered to farmers at the time were to remove the mouldy hay or transfuse the bleeding animals with fresh blood.

It was not until 1939, in the lab of Karl Link, that the guilty molecule was isolated – an anticoagulant known as dicoumarol. This is a derivative of the natural molecule coumarin, responsible for the well-recognised smell of freshly cut grass. With the idea of creating a new rat poison designed to kill rodents through internal bleeding, Link’s lab investigated a number of variations of coumarin, funded by the Wisconsin Alumni Research Foundation (WARF). One of these derivatives was found to be particularly effective, and so in 1948 the rat poison warfarin was born – named after the body that funded its discovery.

Despite its clear anticoagulant properties, reservations remained about using warfarin clinically in humans. This was until 1951, when a man who had overdosed on warfarin in a suicide attempt was successfully given vitamin K to reverse the anticoagulant effects of the rat poison. With a way to reverse these effects in hand, a variation of warfarin for human use was developed under the name ‘Coumadin’. This was later used successfully to treat President Dwight Eisenhower when he suffered a heart attack, giving the drug a widespread popularity in the treatment of blood clots which has continued for the last 60 years.

Penicillin

No article on serendipity in drug discovery would be complete without at least mentioning the discovery of the antibiotic penicillin by Sir Alexander Fleming. It was 1928 when Fleming returned from vacation to his laboratory in the Department of Systematic Bacteriology at St. Mary’s in London. The laboratory was in its usual untidy and disordered state. While clearing up the used Petri dishes piled above a tray of Lysol, he noticed something odd about some of the dishes which had not come into contact with the cleaning agent. The Petri dishes contained separate colonies of staphylococci, a type of bacteria responsible for a number of infections such as boils, sepsis and pneumonia. There was also a colony of mould near the edge of one dish, around which there were no visible staphylococci, or at least only a few small and nearly transparent groups. Following a series of further experiments, Fleming concluded that a substance produced by this mould, which he named ‘penicillin’, was not only able to prevent the growth of staphylococcal bacteria but could also kill them.

However, while today the value of penicillin is well recognised, there was little interest in Fleming’s discovery in 1929. In fact, in a presentation of his findings at the Medical Research Club in February of that year, not a single question was asked. It was only when the pathologist Howard Florey from Oxford happened across Fleming’s paper on penicillin that the true clinical significance of the drug was acknowledged. In 1939, along with Ernst B. Chain and Norman Heatley, Florey conducted a series of experiments in which they infected mice with various bacterial strains, including Staphylococcus aureus. They then treated half the mice with penicillin while leaving the others as controls. While the controls died, the researchers found that penicillin was able to protect the mice against these types of infection. These exciting findings appeared in the Lancet in 1940 and eventually led to the commercial production of the drug for use in humans in the early 1940s in the United States. Since then, an estimated 200 million lives have been saved with the help of this drug.

Viagra

Despite today being widely known for its ability to help men with erectile dysfunction, Viagra was not originally designed for this purpose. The drug company Pfizer had initially hoped to develop a new treatment for angina, one that worked to reverse the constriction of blood vessels that occurs in this condition. However, clinical trials of Viagra, known then as compound UK92480, showed poor efficacy of the drug at treating the symptoms of angina. Nonetheless, it was highly effective in causing the male recipients in the trial to develop and maintain erections.

Upon further investigation, it was identified that, rather than acting to relax the blood vessels supplying the heart, Viagra caused the penile blood vessels to relax. This would lead to increased blood flow to this appendage, resulting in the man developing an erection. Following this unexpected discovery, Pfizer rebranded Viagra as an erectile dysfunction drug and launched it as such in 1998. Today, Viagra is one of the 20 most prescribed drugs in the United States and has, I’m sure, left Pfizer thanking their lucky stars they were involved in its serendipitous discovery.

The accidental discoveries of warfarin, penicillin and Viagra are just three instances of serendipity in drug discovery from a long list – one far longer than can fit in this article. Nevertheless, these cases provide excellent examples of how unplanned factors such as mysterious deaths, unexpected side effects during drug trials, and even just plain untidiness have led to significant advances in the field of drug discovery. Such serendipitous events have given us a range of indispensable medications which we still rely on today, and I’m sure they will continue to provide new and exciting breakthroughs in the future.

Post by: Megan Barrett

Palming off unsustainability

When I first landed in Singapore last October I expected to be greeted by clear skies and sunshine but, in reality, I couldn’t even see the famous skyline in front of me. Welcome to life in South East Asia during the dry season. While most people will remember the news stories about the ‘haze crisis of 2015’, what many don’t realise is that this was not an isolated event; haze is a persistent problem in this part of the world.

What causes the Haze?
Haze arises from the burning of forest areas for agriculture in the neighbouring regions of Indonesia, normally through illegal ‘slash and burn’ land clearing. Although slash and burn is not a new farming technique, the growing demand for land for palm oil and paper production now results in larger and often uncontrolled fires. In addition, the land being cleared is often peatland which, when burnt, produces denser and longer-lasting fires. The resulting ash and debris is carried to neighbouring countries, creating dense smog – think of the foggiest day in the UK, then remember that fog is just water, whereas the haze in South East Asia is formed of ash and debris.

During the dry season this makes it hard to see and, for some, hard to breathe and carry out daily tasks like walking upstairs or going to the shops. Already the PSI (Pollutant Standards Index) here in Singapore is on the rise, reaching moderate levels while I’m writing this article.

Here in the city we think mostly of the effect the haze and land burning have on us, but it’s not just people that are affected. Indonesia is one of the most bio-diverse countries on the planet, most famously home to orangutans. Destruction of the forests doesn’t just pollute their air, it also destroys their homes. According to online sources, up to 5,000 already endangered orangutans are killed every year through the destruction of their habitat for palm oil production.

The palm oil problem
Palm oil is a type of vegetable oil and a highly concentrated form of fat found in many household products, from cosmetics to junk food. Over 80% of the world’s palm oil is produced in the Indonesian regions of Sumatra and Kalimantan (Indonesian Borneo), with the majority of this traded through companies right here in Singapore. Many people have not heard of palm oil because of frequent product mislabelling: it is often listed simply as kernel oil or vegetable oil.

However, it is important to recognise that palm oil itself is not technically the problem. The main issue lies with the unsustainability of current palm oil farming practices: specifically, slash and burn land clearing, failure to replace felled trees, and the over-exploitation of land as the demand for palm oil outstrips tree growth. Even so-called sustainable methods of palm oil agriculture have come under scrutiny from environmentalists.

The crux of this issue is that, until the demand for palm oil products decreases or better farming methods are devised, the haze will continue in a futile cycle – food for thought as we tuck into ice cream on a hot day.

Post by: Stephanie Macdonald

For more information on products made using sustainable palm oil see here:

Ovarian cancer: early diagnosis is the best treatment

The 8th of May was World Ovarian Cancer Day. Ovarian cancer is considered the most lethal gynaecological malignancy, being the fourth most common cause of cancer death in women in the developed world (1). Early-stage misdiagnosis is common, especially since symptoms (such as bloating, abdominal pain, difficulty eating or constipation) can be incorrectly attributed to common stomach and digestive complaints. Indeed, of the approximately 7,000 new cases diagnosed in the UK each year, only a small percentage (around 20%) (2) are diagnosed at an early stage, where the survival rate is over 90%. The key to successful early diagnosis lies in recognising the frequency and number of symptoms a woman suffers.


Risk factors and diagnosis:

Common risk factors include: menopause (the most significant risk), mutations in DNA-repair genes such as BRCA1 and BRCA2, family history, infertility or fertility treatment, endometriosis, and being overweight.

There is unfortunately no accurate screening test for ovarian cancer. Current screening practice relies upon CA125 (a tumour marker that can be detected in the blood), alongside abdominal and transvaginal ultrasound (3). However, CA125 screening lacks both specificity and sensitivity: CA125 is detectable in a range of cancers, and in benign conditions in premenopausal women such as endometriosis, pelvic inflammatory disease or even pregnancy; it is also more accurate for late-stage than early-stage disease.
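To see why a marker without high specificity struggles as a screening tool for a relatively rare cancer, here is a back-of-the-envelope sketch using Bayes’ rule. All numbers are purely illustrative, not real CA125 performance figures:

```python
# Back-of-the-envelope: what a positive screening result actually means
# for a rare disease. All numbers are illustrative, not real CA125 figures.

def positive_predictive_value(prevalence: float, sensitivity: float,
                              specificity: float) -> float:
    """P(disease | positive test), via Bayes' rule."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Suppose 1 in 2,500 women screened has ovarian cancer, the test catches
# 80% of true cases, and it wrongly flags 10% of healthy women.
ppv = positive_predictive_value(prevalence=1 / 2500,
                                sensitivity=0.80,
                                specificity=0.90)
print(f"Chance a positive result reflects a true cancer: {ppv:.1%}")  # ~0.3%
```

Even with a decent-sounding test, the overwhelming majority of positives in a general screening population would be false alarms – which is why better markers, or combinations of markers, are needed before population screening becomes viable.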

Treatment:

The standard of care for patients with advanced disease involves surgical tumour debulking followed by chemotherapy, but sadly most patients will relapse within 18 months and eventually die from the disease. Over the years clinical trials have been set up to explore combinations of drugs which could improve the prognosis for ovarian cancer patients. One approach, called targeted cancer therapy or biological therapy, involves stopping tumour growth by interfering with tumour biology. Tumours need oxygen and nutrients, and so they stimulate the growth of new blood vessels around themselves – a process called angiogenesis, which is crucial for tumour growth and proliferation. Several anti-angiogenic drugs have been developed and are now being tested in preclinical and clinical settings. The aim is to target not the tumour itself but the surrounding structures (blood vessels) necessary for its growth. VEGF is a molecule which is overexpressed in most solid tumours, including ovarian cancer, and is crucial in tumour angiogenesis. It therefore represents a good drug target, the rationale being that inhibition of VEGF may eliminate or delay tumour growth. Bevacizumab (Avastin) is a monoclonal antibody that binds and sequesters VEGF, halting blood vessel growth and thus making it more difficult for the tumour to grow. It is the only anti-angiogenic drug licensed for use in women with advanced ovarian cancer alongside chemotherapy. However, despite promising results in initial studies, it has so far not proved beneficial to overall survival rates, and some consider it not to be cost-effective (4).


Another targeted cancer therapy used to treat ovarian cancer is inhibition of PARP1, a molecule involved in DNA repair. The drug, called olaparib, is offered to patients with deficient tumour suppressor proteins BRCA1 and BRCA2, since cells deficient in BRCA1/2 are more sensitive to PARP1 inhibitors. PARP1 and BRCA1/2 are both involved in repairing broken DNA; in patients carrying BRCA mutations, inhibition of PARP1 results in cumulative DNA damage and tumour cell death, while cells with working BRCA genes are largely spared.

There are also a range of antivascular agents (drugs which attack the blood supply of a tumour) now entering clinical trials (5). Advances are also being made in immunological therapies and in drugs targeting relevant gene mutations but, in some cases, toxic side effects make these treatments unfeasible.

The future:

Despite advances in drug research, early-stage detection is still believed to be the key to improved survival, so continued research into biological markers for early detection and diagnosis is vital. It is hoped that such research – alongside work to find biomarkers that predict disease progression and drug response, and that allow us to select the patients who will benefit most from a specific treatment – will give us a fighting chance against ovarian cancer.

References

1. Paik ES, Lee Y-Y, Lee E-J, Choi CH, Kim T-J, Lee J-W, et al. Survival analysis of revised 2013 FIGO staging classification of epithelial ovarian cancer and comparison with previous FIGO staging classification. Obstetrics & Gynecology Science. 2015;58(2):124-34.
2. Hennessy BT, Coleman RL, Markman M. Ovarian cancer. Lancet. 2009;374:1371-82.
3. Jayson GC, Kohn EC, Kitchener HC, Ledermann JA. Ovarian cancer. Lancet. 2014;384:1376-88.
4. Carlson RH. Antiangiogenic therapy’s value in ovarian cancer questioned. Oncology Times. 25 December 2014.
5. Bell-McGuinn K, Konner J, Tew W, Spriggs DR. New drugs for ovarian cancer. Ann Oncol. 2011;22 Suppl 8:viii77-viii82. doi: 10.1093/annonc/mdr531.

Links of interest:

Cancer Research UK. Ovarian cancer incidence by UK region, 2015 [cited 2016-07-01]. Available from: http://www.cancerresearchuk.org/health-professional/cancer-statistics/statistics-by-cancer-type/ovarian-cancer/incidence#ref-0.
Post by: Cristina Ferreras
