The brain bank comprises a group of scientists from the North West of England eager to enthuse and entertain with their scientific banter. To learn more about who we are, see our 'about' page. You can also find us on twitter @brainbankmanc or email us at brainbankmanc@gmail.com.
Everyone loves summer with its long days, blazing sun and ice-cold drinks – all too often accompanied by the head-splitting pains of so-called ‘brain freeze’. The culprits range from ice creams and milkshakes to chilled drinks, all having the same effect of leaving you feeling like you’ve been poked in the brain by a large sharp object. Suffering from brain freeze is not uncommon, so you’re not alone. In fact, approximately 40-80% of the population experience brain freeze at some point in their life.
The scientific name for this is sphenopalatine ganglioneuralgia, which literally means nerve pain of the sphenopalatine ganglion, a bundle of nerves found in the roof of your mouth.
Brain freeze is characterised by sharp stabbing pains in the head, particularly across the forehead and temples, associated with the quick consumption of cold food or drinks. Although the sensation can be extremely painful, the pain normally fades quickly. So what causes this mysterious and painful phenomenon?
Although it is certainly not caused by our brains actually freezing, some scientists have suggested that the pain may indeed be linked to a decrease in brain temperature. One particular study from Japan found that during bouts of brain freeze, brought about by consuming a large volume of ice, the temperature of patients’ heads (taken from the ear) actually dropped in response to the cold stimulus, suggesting that brain temperature also decreased.
So what aspect of brain ‘cooling’ causes the pain associated with brain freeze? A pioneering study carried out by Jorge Serrador at Harvard Medical School may have found the answer. The study monitored blood flow to the brain in 13 volunteers as they sipped ice-cold water (directed, with a straw, at the roof of their mouths). When brain freeze was induced, volunteers raised their hands as a signal, raising them again once the pain subsided.
It was found that one of the brain’s major suppliers of oxygenated blood, the anterior cerebral artery, dilated (increased in size) whilst the volunteers drank the iced water. This led to an increase in blood flow to the brain and was regularly followed by pain. Interestingly, as the cerebral artery shrank back to its original size, restoring regular blood flow, the symptoms associated with brain freeze subsided. Serrador concluded that the increase in blood flow, and therefore in the size of the cerebral artery, was responsible for the pain associated with brain freeze. Specifically, he speculated that the pain may be brought about by an increase in cranial pressure caused by the increased blood flow.
The brain is one of our most delicate organs and is extremely sensitive to changes in temperature. It is possible that brain freeze is a defence mechanism, protecting our brains from large fluctuations in temperature. If the ganglion in the roof of the mouth detects extreme cold, it triggers an increase in blood flow to the brain, keeping it warm. As more blood flows into the brain, the pressure inside the skull increases, causing pain. The blood vessel then constricts again to reduce the pressure.
It’s also interesting to note that migraine sufferers are actually more likely to experience brain freeze than those who don’t suffer from migraines, indicating that the two may share a common mechanism.
So, that’s a quick run-through of the science behind what is actually happening to your body during brain freeze. So next time you’re in the pub and someone tells you they have brain freeze, you can astound them with your knowledge of the science behind it!
I ‘like’ Facebook as much as the next person, or rather any of the other 950+ million users. The fact that people can stay in touch so easily in a metaphorically shrinking world without having to use a pen, paper, stamp or carrier pigeon is brilliant. However, what really amazes me about Facebook, Twitter, Tumblr, LinkedIn or any other social media is the phenomenon of ‘socialness’ itself.
In order to find the ‘social’ or ‘friend’ centre of the brain, scientists measured the size of different brain structures associated with making and maintaining friendships. In two different studies, they found that the size of an individual’s amygdala (the emotional centre of the brain) and their orbital prefrontal cortex (oPFC) were proportional to the number of real friends and social groups they had. In other words, either having a larger amygdala or oPFC makes you more sociable, or adding more friends to your social network makes those parts of the brain grow larger. In fact, it’s probably a mix of both.
Robin Dunbar, a British anthropologist interested in the evolution of society, also attempted to define how the structure of the brain is linked to the size of an individual’s social group: the equivalent of the average Facebook friend count. His early work focused on the brains of various species of monkey. From this work he found that he could predict the size of an animal’s social group from the size of its neocortex compared to the rest of its brain. He discovered that primate species which have a larger neocortex relative to the rest of the brain hang out in larger social groups, whereas those with a smaller neocortex have fewer ‘friends’. These monkeys’ brains have evolved to maintain friendships with a certain average number of other monkeys. From this Dunbar was able to predict that, if humans are like monkeys (which we are), our neocortex:brain ratio suggests that we should be cliquing into social groups of around 147.8 (with an upper limit of about 300). What’s interesting is that this is essentially the case in real life: 150 is the average size of a tribal village and the optimum size of a Roman army military unit, and the average number of friends on Facebook is actually creepily close too.
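To make the neocortex-ratio idea concrete, here is a minimal sketch of a Dunbar-style prediction in Python. The regression coefficients (0.093 and 3.389) and the human neocortex ratio of roughly 4.1 are assumptions drawn from the widely quoted form of Dunbar's published primate regression, not figures stated in this post.

```python
import math

def predicted_group_size(neocortex_ratio: float) -> float:
    """Predict mean social group size N from the neocortex ratio CR
    (neocortex volume divided by the volume of the rest of the brain),
    using the widely quoted regression log10(N) = 0.093 + 3.389*log10(CR).
    Coefficients are assumed from the published literature."""
    return 10 ** (0.093 + 3.389 * math.log10(neocortex_ratio))

# A human neocortex ratio of about 4.1 recovers the famous ~147.8 figure.
print(round(predicted_group_size(4.1), 1))  # → 147.8
```

Plugging in smaller ratios for monkey species yields correspondingly smaller predicted group sizes, which is the pattern Dunbar observed across primates.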
One question raised by Dunbar’s research is how and why the neocortex developed in the first place. The “Social Brain Hypothesis” suggests that primates evolved a larger neocortex and bigger social networks when they started eating fruit instead of leaves. Fruits contain way more calories than leaves, but are also harder to obtain and have a much shorter ‘shelf life’, meaning they pose more problems for a hungry monkey. Therefore, if a monkey is to maintain a fruit-rich diet, it is important for it to learn where to find fruit and how to tell whether or not it is ripe or safe to eat. It is thought that being part of a bigger social group allows all individuals to benefit from the group’s collective knowledge and thus from the extra energy found in fruit. Since the brain uses up so much energy to develop, it may be that this extra food source is partially responsible for the increase in neocortex size in these primate species.
Whether or not we realise it, most of us are hard-wired to seek out friendship. Our brains are social and we have evolved to cooperate and share – that’s why Facebook is such a massive phenomenon. But what does the size of our ‘friends’ section say about us? In a very modern experiment, psychologists asked people to rate another person’s attractiveness based on a fictional Facebook profile. These profiles were identical other than one factor: the number of friends each fictional person had – either 103, 303, 503 or 703. The experiments found that 303 seemed to be the magic number, with participants rating profiles with this number of friends as the most ‘attractive’. Perhaps this reflects the upper limit of Dunbar’s number. Interestingly, profiles with both lower and higher friend counts were rated as less attractive. Perhaps a low friend count is taken as an indication that a person is less sociable, whilst having too many friends may be seen as ‘trying too hard’. So when honing your online persona, it’s more than just the pictures of that dodgy night out you have to worry about.
So why is Facebook in particular so popular and, dare I say it… so addictive? There have been countless studies showing that going onto your Facebook account makes the pleasure centres of the brain, the same ones which activate when eating chocolate or having sex, ‘light up’. It seems that thinking and talking about ourselves is something we all enjoy. Psychologists found that participants in a study were happy to receive very little payment to talk about themselves, whereas if they were required to chat about someone else they generally expected at least double the amount. The participants, on average, found talking about themselves so much more enjoyable that they would actually give up money in order to avoid talking about another person instead. Maybe that helps explain why people insist on posting mundane statuses online. (It doesn’t, however, give any excuses for those who use hashtags on Facebook… #wrongsocialmedia.)
But there are many more important benefits to having a strong, optimally-sized social group. Researchers in Kenya watched wild baboons to see how long higher- and lower-ranking males took to heal or recover from naturally occurring injuries and illnesses. Despite the highest- and lowest-ranked baboons experiencing a similar amount of biological stress, the lower-ranked baboons took an average of six days longer to heal or recover than the alpha males. The researchers think this could be due to the positive impact that close friendships have on the immune and repair systems.
I’m definitely not saying that everyone should use Facebook in order to avoid getting ill, or that we should all frantically cull or add friends until our account reaches the magic ‘Dunbar’ number. But next time you log on to your account and scroll aimlessly through the trivial happenings recorded in your newsfeed or indulge in a little chat with an old friend, don’t blame yourself. Our brains are wired up to be social.
For this post I’m going to break from my normal light-hearted blogging to talk about a topic which is very serious and close to my heart – the “Right to Die”.
The “right to die” is the ethical entitlement of someone who is suffering from a debilitating and permanent illness, and who has no quality of life, to choose to end their life on their own terms, often through either suicide or, if necessary, assisted suicide. This subject crops up in the news on a regular basis and there is, understandably, a great deal of controversy surrounding it.
Recently the “right to die” issue has resurfaced with the case of Tony Nicklinson, a 58-year-old man who suffered from a condition known as locked-in syndrome. His condition meant he was unable to move or speak, communicating solely through eye movements. Mr Nicklinson went to court in an attempt to make it possible for a doctor to end his life without fear of prosecution. He argued that he was suffering a miserable and undignified life and wanted to end it on his terms. However, his appeal was unsuccessful. The court ruled that it was unable to “usurp the function of parliament” and did not have the power to grant his request. His devastation was clear to see and harrowing to watch. Sadly or fortunately, however you want to look at it, Mr Nicklinson died peacefully of ‘natural causes’ at home just six days after the court date, after refusing to eat and finally contracting pneumonia.
I believe that the UK government urgently needs to review its policies on the right to die and voluntary euthanasia. This is partly so people, such as Mr Nicklinson, don’t have to suffer needlessly. However, there is also a bigger problem society may soon have to face, one which is set to cause huge economic and ethical difficulties: the dementia time-bomb.
I have a personal interest in wishing that the law on euthanasia be changed, specifically with regard to dementia. I have seen two of my grandparents suffer from this devastating disorder – my paternal grandfather and maternal grandmother. In my grandfather’s case, he suffered from a low-level form of dementia for many years before suddenly and rapidly going downhill. However, just a few months after his sudden deterioration, he died of an infection. When I heard about his death I was relieved: at least he didn’t have to suffer the indignity of full-blown dementia for many years. However, this was the sad fate which befell my grandmother.
In 1999 my grandmother was diagnosed with suspected Alzheimer’s disease (the most common form of dementia). Our whole family, including her husband of nearly 60 years, watched her turn from a happy, chatty, busy woman into one who forgot who or where she was. She became violent and confused. She had a long, slow, painful descent into being completely helpless – unable to remember who she was or basic things like how to dress herself. After several years my mother made the heartbreaking decision to move her to a care home, since the burden of caring for her was too much for my 82-year-old grandfather, who had been diagnosed with cancer. During her time in care she continued to deteriorate, a process we believe was accelerated by bad practice within the care home (we know she often went without sufficient food since no one ensured she ate her dinner – another simple necessity she had long forgotten the need for). As she descended, she slowly forgot who her grandchildren were, then her children. I still remember the moment she forgot who my grandfather was, when there was no recognition of the man she had married in 1948. This devastated my grandfather; he never really got over it, and began to give up his fight against his cancer, succumbing to the disease in 2010.
My grandmother eventually plateaued, but only after she’d forgotten how to walk, speak, go to the bathroom or do anything other than sit in a chair, constantly grinding her teeth and very occasionally mumbling a nonsensical sentence. Even those things stopped eventually and she essentially became a corpse whose heart happened to be beating … this was in no way a dignified way to live. She finally died in June 2012, after 13 long, painful years. My whole family was relieved when she passed away – she wasn’t suffering any more.
It is this experience which has strengthened my view on euthanasia. I strongly believe that it should be legal for people who are suffering enormously and have no quality of life to be able to end their life on their own terms.
Of course I’m aware of the problems surrounding legalising euthanasia. There’s a huge difference between the case of Tony Nicklinson and my grandmother. Tony Nicklinson was mentally sound but trapped in his non-functioning body; my grandmother was OK physically, but mentally there was nothing left. These two cases would have to be treated very differently. The main difference is that, in one case, the person is able to state for themselves that they wish to die, whilst in the other they are no longer of sound mind and therefore unable to make that decision.
One of the main objections to the legalisation of euthanasia is that it may put vulnerable people in harm’s way. Take, for example, people who are disabled and believe they are a burden on their relatives and carers, or a family who might just ship off their mad old grandma and be done with it, no matter what she may have wanted. These are all very real concerns which need to be addressed; however, I think the vulnerable can be protected by implementing very strict controls around the process. These controls must ensure that euthanasia is only allowed in extreme cases, such as those mentioned above, and that each case is subject to an extensive and thorough review. I also believe that interviews and psychiatric assessments are necessary for both the patient and their chosen representative (in cases where another person will ultimately have to make the decision), and that no action should be taken unless two or more doctors agree that euthanasia is the best option. Of course, in the case of dementia it will be necessary for the patient to express their wishes whilst still of sound mind, perhaps relying on an advocate/representative to ultimately decide when euthanasia should be performed. It is also important to take into account the wishes of the doctor(s) involved in the process – no doctor should be forced to perform an act of euthanasia against their wishes, much as they cannot be forced to perform an abortion. But what’s wrong with introducing a legally binding document, such as an advance directive, stating that “if I get to a stage where my life has become devoid of any quality or dignity due to a debilitating and permanent illness, then I trust a designated person/people to decide when my life can be ended (subject to legal red tape and psychiatric evaluations)”?
My feelings on this matter don’t just come from my own personal experiences, but have also been formed through my research on Alzheimer’s disease. However, I must stress that I have had many debates with friends and colleagues (including those doing Alzheimer’s research) on the matter and that not all scientists agree with my views. The bioethics behind euthanasia are tricky – most people would only want their lives to end if they knew there was no possibility of a cure. As far as dementia goes, there is a huge amount of research being undertaken into a possible prevention or cure for the disease. A definitive prevention and/or cure is the ideal and if this ever occurs then there will be no need for euthanasia laws to exist. However, dementia is an extremely complicated condition, believed to be caused by a multitude of genetic and environmental factors. A cure still seems a very, very long way off. Clinical trials for drugs take many years from conception to being available on the market, so even if a breakthrough does occur at the research level, it may take ten years before any drugs are freely available. This also isn’t taking into account that dementia takes many forms – Alzheimer’s disease is the most common, but there are many other versions of dementia, including vascular dementia, dementia with Lewy bodies and Fronto-temporal dementia, which all have their own causes.
In cases such as locked-in syndrome, the outlook may be even bleaker. A quick search for “locked in syndrome” on the journal database PubMed produces comparatively few papers (8,124), and few of those seem to be about treatment. This is a big contrast to the number published on Alzheimer’s (85,847) or lung cancer (211,982). There is no cure for the syndrome; research is mostly concentrating on helping sufferers communicate. The best hope scientifically would be to prevent the syndrome by preventing strokes, which cause many cases of locked-in syndrome. Although there are a few isolated cases where people have recovered, such cases seem to be rare.
Time is something which is not on our side when it comes to dealing with dementia. According to the Alzheimer’s Society, there are 800,000 people suffering from dementia in the UK at the moment. By 2021, only nine years away, they estimate that this number will rise to over a million. The cost of dementia to the UK is predicted to be £23 billion in 2012, minus the £8 billion a year saved by people caring for relatives with dementia themselves. Worldwide, there are expected to be 36 million people suffering from dementia, with that number expected to rise to 115 million by 2050 (source).
I find those numbers utterly staggering. To me it seems like politicians are metaphorically sticking their fingers in their ears and singing loudly rather than confronting an issue which is just getting bigger and bigger. Like it or not, in nine years’ time, one million people are going to be suffering from dementia, most requiring round-the-clock care. This doesn’t even take into account the pressure and emotional stress put on the families of those one million people. Maybe a cure will be forthcoming sometime soon, but something needs to be done to combat this rising crisis – by increasing funding for Alzheimer’s research, improving the quality of care (for example, reclassifying it from “social” care to “medical” care) and helping people who are suffering to end their lives with dignity, and on their own terms.
Of course, I’m not saying that the minute someone is diagnosed with dementia we should ship them off to Dignitas or some future UK-based equivalent. This has to be a choice made by the individual when they are of sound mind, stated clearly by them in a legally binding document, with assurances that it was not made under pressure from anyone else. I just think that the opportunity should be there if it comes down to it – I know anyone in my family would prefer to end their lives rather than endure the indignity my grandmother had to go through. It’s time for the government to at least seriously assess the possibility and consequences of making euthanasia legal. In my opinion, the option of euthanasia should be available to those who require it, but it should not be made easy. So my take-home message would be: make it hard, but make it possible.
I have a love-hate relationship with wine – I love it and it hates me. That’s at least how it seems the morning after we’ve been in close proximity. But why does alcohol make you feel so rotten the morning after, and can the dreaded hangover be avoided? I decided to look into the science behind a hangover to see whether I can enjoy a glass of pinot without wanting to spend the next day in bed eating my own body weight in carbs.
Cause number 1: Dehydration
Most people are aware that alcohol is a diuretic, which means it makes you wee more. The result is that the next morning you run the risk of dehydration, along with a dry mouth and headache. Lovely stuff.
Prevention: Try to drink water between alcoholic drinks and/or drink water before you go to bed.
If you’ve ever tried the approach of downing a pint of water before you go to bed after a heavy night on le booze, you’ll be aware that, although it may help, it doesn’t mean you get away hangover-free. So there must be more to a hangover than just dehydration… In fact, it turns out alcohol is pretty poisonous – and not just in the “what’s your poison?” sense, more in a surprisingly toxic way.
Cause number 2: Acetaldehyde
When we drink alcohol it is absorbed into our bloodstream and works its way around our body. When it reaches the brain it makes you feel relaxed and uninhibited, which is the part we all enjoy; however, this is not the only place alcohol leaves its mark. In the liver, alcohol is metabolised (broken down) into different compounds which can then be removed from the body as waste. This process requires several steps before the final non-toxic products of water and carbon dioxide are made.
The first step is to turn the alcohol into acetaldehyde using an enzyme called alcohol dehydrogenase. The side effects of having acetaldehyde in your system include nausea, headaches and vomiting – sound familiar?
Prevention: There is none. I know – rubbish. You just have to wait for your body to metabolise the acetaldehyde into its less harmful by-products. So unfortunately, if you spend the morning having an unwanted date hugging the toilet, you just have to wait it out. As acetaldehyde is even more toxic than alcohol, moderation is probably the key.
Cause number 3: NAD+ depletion
The metabolism of alcohol and acetaldehyde uses a compound called NAD+. This NAD+ is also vital for the day-to-day health of your cells: it helps convert water, oxygen and a compound called pyruvate into energy. If the NAD+ has been used up metabolising alcohol, your cells need to make more. They do this by converting pyruvate into lactate, a reaction which regenerates NAD+. Unfortunately, long-term build-up of lactate is also linked to kidney damage. The more I read about alcohol, the more I realise it’s pretty nasty stuff! A second consequence is that when pyruvate is converted to lactate, your liver becomes less efficient at regulating your blood sugar levels, and blood sugar can become very low. Ever had the desire to eat the entire contents of your cupboards post-pinot? That’ll be the low blood sugar.
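The chemistry described above can be sketched as three standard textbook reactions. This is a simplified sketch: the post names alcohol dehydrogenase, while aldehyde dehydrogenase and lactate dehydrogenase are assumed here from standard metabolism, and acetate (rather than the final water and carbon dioxide) is shown as the intermediate product of the second step.

```latex
\begin{align*}
\text{ethanol} + \text{NAD}^+ &\xrightarrow{\text{alcohol dehydrogenase}} \text{acetaldehyde} + \text{NADH} + \text{H}^+ \\
\text{acetaldehyde} + \text{NAD}^+ + \text{H}_2\text{O} &\xrightarrow{\text{aldehyde dehydrogenase}} \text{acetate} + \text{NADH} + \text{H}^+ \\
\text{pyruvate} + \text{NADH} + \text{H}^+ &\xrightarrow{\text{lactate dehydrogenase}} \text{lactate} + \text{NAD}^+
\end{align*}
```

The first two reactions each consume a molecule of NAD+, and the third shows how converting pyruvate to lactate gives some of it back – which is exactly the depletion-and-regeneration cycle behind this cause of a hangover.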
Prevention: There’s not a lot you can do about the depleted NAD+ other than wait for your liver to do its magic (otherwise known as metabolism) and restore the natural balance. As for the low blood sugar – assuming you’re not still hugging the potty – it’s a good idea to make sure you eat. That’s a free pass for a one-way ticket to pasta-ville in my eyes.
Cause number 4: Reactive oxygen species and cell damage
I’ve grouped these together because I don’t think it’s fair to say they cause a hangover; however, for regular drinkers they probably represent the biggest danger, since they can cause longer-lasting damage.
The acetaldehyde that is made during alcohol metabolism is a bit of a renegade and can attach itself to things in the cell that it shouldn’t, including a protein called glutathione. When attached to acetaldehyde, glutathione is prevented from doing other important jobs inside the cell, which, when this happens regularly, can lead to cell damage. More worryingly, acetaldehyde can also bind to DNA and damage it, which can increase the risk of developing cancer.
There is a separate chemical pathway that your liver cells can use to metabolise alcohol. Instead of alcohol dehydrogenase it uses an enzyme called cytochrome P450. This method of alcohol breakdown still produces acetaldehyde, but has the added ‘bonus’ of churning out reactive oxygen species. These little nasties are, as the name suggests, incredibly reactive and can cause a lot of damage to your cells by reacting with proteins and DNA. This pathway is used far less by your cells than the alcohol dehydrogenase method, so the less you drink, the less likely you are to produce reactive oxygen species.
Prevention: Eat food rich in cysteine after drinking – this includes eggs, chicken and oats. Cysteine is an important building block of glutathione, so getting more into your body gives your cells a fighting chance of making more glutathione. Have a glass of vitamin C-rich orange juice, too: vitamin C is a powerful antioxidant, meaning it can interact with reactive oxygen species, preventing them from reacting with the proteins and DNA in your cells.
I’m sorry to say the best prevention for a hangover – and for damaging your health long term – is avoiding alcohol in the first place. Regular exposure to alcohol, and the damage it causes to cells, is linked to an increased chance of developing cancer. To me this is a far more important reason to avoid drinking than a fuzzy head the morning after, and is a very good argument in favour of moderation. If you feel drunk, there’s too much alcohol in your body for your liver to metabolise and you’re getting a backlog of alcohol-related nasties in your system – so moderation really is key. On that note, if someone can recommend a low-alcohol wine that doesn’t taste like a mix of sugar water and ass, please do let me know.
We are without doubt among the world’s most complicated machines. Our bodies are made up of trillions of cells, most of which contain a full copy of our own individual genome. Despite containing identical genetic information, individual cells vary hugely in structure and function – just think of the differences between skin and brain cells. This variation is achieved through the specialised way each cell reads its own copy of the genome, allowing cells to create only the components they require to function. One of the biggest challenges faced by biologists and medics today is bringing together our understanding of genes, gene processing, cellular and systems biology to gain a better understanding of how our bodies work, and what happens when things go wrong. Such research has led us to appreciate just how individual we really are! It has highlighted how a combination of genes, environment and even our own complement of bacteria can profoundly affect the way our cells function and, ultimately, our health. This ability to delve deeper into our inner workings has also spawned a new field of medicine known as personalised medicine.
Personalised medicine refers to the idea of tailoring treatment to an individual – the right drug or medical intervention for the right patient. In practice this doesn’t mean making a new set of drugs for each individual; instead, it focuses on defining specific groups of people who are more or less likely to respond to a treatment. Our current system relies on a ‘blanket’ method of treatment, i.e. everyone suffering from the same ailment is treated with the same drugs/procedures. However, we now know that differences in genes and gene expression mean that individuals are unlikely to respond in the same way to medical treatments. This may go some way to explaining why between 30% and 70% of patients fail altogether to respond to drug treatment (1). It is hoped that a better understanding of how genes influence drug metabolism and function will both improve patient prognosis and reduce unnecessary spending on unsuccessful treatments. Indeed, progress is already being made towards a more personal approach to medicine:
One of the first major successes for personalised medicine came from the breast cancer drug Herceptin (trastuzumab). In approximately 20% of invasive breast cancers, a cell-surface protein known as HER2 is overproduced, causing cells to replicate uncontrollably. Screening tests have been developed to detect this defect, and HER2-positive patients have been successfully treated with Herceptin (an antibody which binds to the HER2 protein, reducing cell replication). This breakthrough was soon followed by the discovery of a genetic abnormality, known as the Philadelphia translocation, associated with chronic myelogenous leukemia (CML). Since a high percentage of individuals with CML (~95%) express this genetic abnormality, drugs targeted at blocking the protein produced by the abnormal gene were developed as a treatment. One of the most successful of these drugs (Gleevec) has now been FDA-approved for the treatment of ten different cancers. These findings show how patient-specific genetic information can lead to improved health outcomes and novel therapeutics.
The scope of personalised medicine is not limited to drug development; it is also being used to assess the long-term prognosis and treatment requirements of some cancer patients. A number of genetic screens have been developed to assess the likelihood of tumour recurrence following initial treatment in certain types of breast cancer. One of the more widely used tests, Oncotype DX, assesses 21 different genes found in tumour cells and, from these, predicts the likelihood of regrowth. This means that patients with low recurrence scores, and therefore a good prognosis, can be spared the stress of further unnecessary therapy, and the healthcare system saves on the cost of providing unnecessary treatment. The success of this predictive screen has led to further research into similar screening for other types of cancer, and it is hoped that these tests will soon be widely available as predictive tools.
Beyond cancer, personalised medicine is also being investigated as a way of understanding the adverse side effects linked with certain drug treatments. Current research has highlighted a number of genes associated with drug hypersensitivity and variations in metabolism. It is hoped that this knowledge will, in the future, allow doctors to predict which individuals are more likely to experience adverse side effects during treatment and tailor their prescriptions and doses accordingly.
However, despite these successes, personalised medicine still has a long way to go before its full potential can be realised. One of the first and arguably most important challenges facing this field will be defining which genetic and cellular abnormalities lead to disease. In a small number of cases this has already been achieved; for example, we know exactly which gene leads to cystic fibrosis. However, most disorders are more complex, stemming from abnormalities in any of a number of different genes. Therefore, uncovering the precise risk factors for these disorders will require a staggering amount of data from a huge number of individuals. Also, just to make things a bit more complicated, the emerging field of ‘epigenetics’ is now challenging the belief that we are simply the sum of our genes.
Epigenetics is revealing how interactions with the environment can change the way our cells read our genetic code. This means that two people with identical genes could still suffer from very different ailments depending on what environmental factors they are exposed to. Finally it is important to consider the ethical implications of these procedures, i.e. data protection, patient confidentiality and access to (perhaps costly) personalised treatments (this is discussed in more depth here).
Although certainly challenging, I don’t think these problems are insurmountable and the gains of personalised treatments will certainly be worth the scientific investment. Therefore, with continued funding and research effort I hope it is only a matter of time before we see more personalised diagnosis and treatment available to the wider public.
The topic of race is one of fierce debate; never far from our minds and commonly discussed both in the media and down the pub. Britain is one of the most diverse and multicultural countries on the planet but the development of this multiculturalism has grown from a torrid past and race relations continue to dominate the national psyche. The ever-growing diversity of our country means that race relations are becoming more and more crucial to many socio-political advances. Indeed a number of intergroup interactions come to the forefront every year, with prominent events from this year including the allegations of racial abuse against former England football captain John Terry. Understanding what defines our prejudices and creates these racial tensions is an aspect of race relations which does not receive widespread media coverage, despite its potentially major implications for society – so what is currently known about the neuroscience of race?
Most of the early work on race relations came from the field of social psychology. Henri Tajfel and John Turner were early pioneers of ‘social identity theory’ – a theory which explores people’s beliefs and prejudices based on their membership and status within different social groups. Their work at the University of Bristol (UK) in the 1970s described the phenomena of ingroups and outgroups. They assigned volunteers to one of two groups based on relatively superficial preferences; for instance, individuals may have been assigned to a certain group due to their appreciation of a certain style of art. Individuals within these groups were then asked to rate their preference for other volunteers either within their own group (ingroup) or in another group (outgroup). Tajfel and Turner consistently found a prejudice against outgroup individuals and a preference for one’s ingroup. This research suggests that we have an innate mechanism of preference towards those who we perceive to be similar to ourselves over those who are ‘different’ – no matter how insignificant or trivial that difference may be.
Interestingly this inbuilt prejudice can be masked, as is often the case in similar studies using different racial groups. However, recent neuroscience research suggests that prejudices may still exist despite the conscious effort to hide them.
Elizabeth Phelps and colleagues at New York University (US) believe they have uncovered one of the brain pathways involved in determining reactions to faces of a different race. Their research provides some intriguing insights into our views of different racial groups. Using fMRI (functional magnetic resonance imaging), Phelps and her team have discovered a network of interconnected brain regions that are more active in the brains of white participants in response to a picture of a black face than to a white face.
This circuit includes the fusiform gyrus, amygdala, ACC (anterior cingulate cortex) and the DLPFC (dorsolateral prefrontal cortex). Activity in the fusiform gyrus is not surprising, since this region has been linked to the processing of colour information and facial recognition. Intuitively, this region should play a simple role in the initial recognition of a black face. The next region in this circuit is the amygdala. The amygdala is responsible for the processing and regulation of emotion, and it is here where the circuit becomes more intriguing. A simple explanation of amygdala involvement could be that black faces evoke more emotion in white participants than white faces. Further along the circuit the roles become more complex as we move into the higher areas of the brain. The ACC and the DLPFC are regions that have both been linked to higher-order processes. The ACC is commonly reported to be active in tasks that involve conflict. This region is commonly activated in tests such as the ‘Stroop test’, which involves naming the ink colour of written words where the word and its colour either agree (e.g. the word ‘BLUE’ printed in blue ink) or disagree (the word ‘BLUE’ printed in red ink). In this case, the ACC is active during the conflicting condition. The DLPFC is one of the most sophisticated areas of the human brain, responsible for social judgement and other such complex mental processes.
A study conducted by Mahzarin Banaji and a team from Yale and Harvard Universities in the USA may explain why activity is seen in areas involved in conflict resolution and social judgement when viewing ‘outgroup’ faces. This research showed that activation of these pathways was time dependent. When images of ‘outgroup’ faces were flashed for a very short time (30 milliseconds) significant activation was seen in the fusiform gyrus and amygdala but none was observed in the ACC or DLPFC. However, when these images were shown for a longer period of time (525 milliseconds) activity in the amygdala was virtually abolished, replaced by strong activity in the ACC and DLPFC. This research yields vital insight into the role of the ACC and DLPFC and the possible presence of inbuilt prejudice. One interpretation of these findings is that after a short presentation, the ‘raw’ inbuilt activity is strong, showing unintentional emotive activity to ‘outgroup’ faces, while after the longer exposure time this activity is abolished by the influence of the ACC and the DLPFC, which provide a more rational regulation of this response.
This suggests that a member of today’s society knows consciously that racial prejudice is wrong and so activity in the DLPFC could represent a conscious decision to be unbiased. The ACC activity may represent conflict between this conscious DLPFC process and the subconscious emotion seen in the amygdala activity. Obviously, a mere increase in amygdala activity does not necessarily signify negative emotion. Therefore this automatic activity may not represent inbuilt racism, instead it may simply reflect heightened awareness and deeper thought when assessing faces from another racial group. However, one thing it does highlight is the obvious differences in the processing of ‘outgroup’ faces.
This research could have serious implications for our understanding of inter-race relations. Therefore, although this activity is subconscious and unlikely to be linked with conscious racial discrimination, it may still play a key role in influencing how we go about our daily lives – choosing jobs, places to live, friends and so on. However, since our brains are malleable, racial prejudice such as this can be lessened, a prime example being through inter-racial friendships and marriages. It is possible that this ingroup vs. outgroup association of race will diminish more and more as our education and upbringing continues to become more multicultural. But for now, easing these racial divides may take a lot of thought.
Kubota, J. T., Banaji, M. R. & Phelps, E. A. (2012). “The Neuroscience of Race.” Nature Neuroscience 15:940-948.
Cunningham, W. A. et al. (2004). “Separable Neural Components in the Processing of Black and White Faces.” Psychological Science 15(12):806-813.
Learn a little more about Oliver:
My research looks into the effects of diabetes on the nervous system. Diabetes is nearly 4 times as common as all types of cancer combined and around half of those with diabetes have nerve damage. Most people are not aware of this very common condition and I am trying to increase awareness of the disorder and understand what causes diabetic patients to feel increased pain and numbness/tingling in their hands/feet.
Intestinal parasites infect more than a billion people world-wide, of which approximately 10% become ill. Although the thought of parasitic worms may be enough to turn people’s stomachs and put them off their food, for some sufferers of severe auto-immune diseases these worms may actually be able to provide relief or even remission of symptoms. We understand the negative effects worms can have such as nausea, vomiting and weight loss. However, research is now highlighting circumstances where their presence may indeed be beneficial in relieving symptoms of a number of diseases.
Helminthic therapy is a type of treatment where patients suffering from immune diseases are deliberately infected with parasitic intestinal worms in the hope that this will relieve their symptoms. Although this therapy is relatively new, there are a handful of promising studies indicating that worms may indeed represent a viable treatment for these diseases.
One of these studies was carried out by P’ng Loke, a parasitic immunologist. Loke’s work centred on the study of a man he met in 2007 who, he later found, had deliberately infected himself with parasitic worms. At first glance the man appeared to be perfectly healthy, with nothing more than a genuine interest in parasites. However, it was soon revealed that, in an attempt to cure his inflammatory bowel disease (ulcerative colitis), he had infected himself with human whipworm, which burrowed into the lining of his colon. Ulcerative colitis is an auto-immune disease characterised by open sores in the colon lining leading to intense abdominal pain and vomiting. Although the precise cause of this disorder is not well understood, severe cases have been linked to disruptions in mucus production within the colon. After coming across the controversial work of the parasitologist Joel Weinstock, the man ingested a large quantity of the worm’s eggs and was soon symptom-free.
Endoscopic image of a bowel section known as the sigmoid colon afflicted with ulcerative colitis. The internal surface of the colon is blotchy and broken in places.
Colonoscopies of his intestines following infection revealed that wherever the worms formed colonies, there was a concurrent decrease in the number of ulcers. This decrease in ulceration is believed to be a beneficial side effect of the body’s immune response against these worms. Upon infection the body’s immune system increases production of interleukin-22 (IL-22, a protein important for healing the colon lining) and boosts the number of mucus-producing cells found throughout the colon. This increase in intestinal mucus forms a protective barrier across the surface of the gut, protecting it from bacteria and thereby reducing inflammation.
Along with a possible role in the treatment of colitis in humans, studies in animals have found that infection with worms can either alleviate symptoms or entirely protect against diseases such as asthma, rheumatoid arthritis and some food allergies. A role in Crohn’s disease (a long term condition causing inflammation of the lining of the digestive system) has also been suggested. Results from a clinical trial show that ingestion of swine whipworm causes remission of symptoms in 72% of cases, and improvement but not remission of symptoms in a further 7%.
What is interesting is that the prevalence of auto-immune diseases in the developed world is high, but the incidence of parasitic worms is relatively low. In contrast, in the less-developed world where the incidence of worms is high, auto-immune diseases are rare. Could it be that in our quest for sanitation and clean water, we have damaged one of our friends, one of our body’s natural sources of defence against itself: the intestinal parasite?
Although some cases show evidence that parasite infection may play a role in protecting against certain disorders, it is still impossible to predict how any one individual will respond to such an infection. Indeed, it may be the case that in some patients the worms may cause more harm than good. Therefore continued research into safe and effective forms of helminthic therapy is required before we can truly distinguish these parasites as friend or foe!
Several popular blockbusters, including I Am Legend and Rise of the Planet of the Apes, have envisioned the use of viruses, rigged to deliver therapeutic DNA to patients as a way of curing disease. In these films the scientists using these techniques are ecstatic when they discover that they’ve been successful, but their joy quickly turns to horror as the virus mutates out of control and begins to destroy the human population. This is undoubtedly a nightmare scenario, but how close do these films come to the truth? Can viruses, commonly known to cause disease, actually be used as a cure? How likely is it that they will mutate out of control and destroy the world? If this is the case, then why are they being used at all?
Viruses are responsible for a range of diseases, including the common cold, influenza, HIV (Human Immunodeficiency Virus) and Ebola. The sole selfish function of a virus is to infect a host (e.g. a human) then use this host to make more copies of its own DNA. It does this by entering a host cell and hijacking its DNA-making machinery, forcing it to make more viral DNA. However, it doesn’t stop there. Once the host cell has made sufficient viral DNA the virus then commandeers the host cell’s other machinery to create more intact viruses which “bud off” from the infected cell ready to infect its neighbours.
So, if viruses are so deadly and infectious can they seriously be used to cure disease? Surprisingly, the answer is yes. The use of viruses to deliver new DNA to human cells is being investigated as part of a technique known as Gene Therapy.
Certain diseases, such as cystic fibrosis, are caused by known defects in a single gene. The idea behind gene therapy is to fix damaged DNA. This can be achieved by either swapping the defective gene for a working one, repairing the damaged gene by mutating it back to a healthy form or by “switching off” the defective gene. Viruses are being investigated as carriers or ‘vectors’ for delivery of new, undamaged, DNA. However, gene therapy cannot offer a miracle cure for all known disorders. In fact it is only a feasible treatment for disorders stemming from a small number of recognised genetic mutations. Therefore, the idea of a single gene therapy functioning as a cure for Alzheimer’s or all known cancers, as seen in the movies, is purely fictitious!
A typical virus consists of a viral-coat (like the skin of a balloon) enclosing DNA and a small number of proteins. In gene therapy, the virus is modified so that its DNA is replaced with DNA required for the therapy. The virus is then injected into the patient where it targets and infects cells, replacing damaged DNA with the new healthy DNA. The part of the virus which allows it to replicate has been removed, meaning that whilst the virus retains the ability to infect cells and alter DNA, it has lost the ability to replicate itself and infect neighbouring cells (this infectious ability is called virulence).
Viruses which are currently being investigated for use in gene therapy include: adenovirus (responsible for the common cold), retrovirus (HIV is a retrovirus) and Herpes Simplex virus (which, as the name suggests, is responsible for herpes infections and also cold sores). Part of the appeal of using viruses in gene therapy is that they may be used to target healthy DNA to specific cell types. This can be achieved by manufacturing viruses which can recognise and infect only certain types of cell. This means that “innocent” cells which are not expressing the disease-causing gene should not be infected.
In the films, the therapeutic virus mutates back to its virulent form, or an even more virulent one. It then spreads a fatal disease throughout the population, causing a global catastrophe. One of the concerns about using viruses for gene therapy is that this nightmare scenario might come true. This possibility is currently under intensive study within controlled research environments. Although current research has found that recombination and a return to virulence may be possible for certain viruses, this may not be the case for all viral vectors used in gene therapy. However, if this technology is proven to pose a real risk then such research will likely be discontinued.
There are also other problems with using viruses as DNA carriers. The introduction of any foreign material into the body is likely to produce an immune response. The surface of the viral coat is not smooth; it expresses a number of protruding proteins which may be recognised by a patient’s immune system. This means that the host’s immune system may recognise the foreign body and attempt to dispose of it. Strong immune responses can be fatal, especially in someone already weakened by a genetic disease. This problem could, in theory, be circumvented by removing proteins and other foreign bodies from the outside of the virus to lessen the chances of it being recognised as a foreign object.
Currently, gene therapy using viral vectors is not approved by the U.S. Food and Drug Administration (FDA) since concerns have arisen surrounding the deaths of two patients participating in gene therapy trials. One died of a severe immune response to the viral carrier. The other appeared to develop leukaemia, leading to fears that viral vectors may cause cancer.
A liposome
There are alternatives to using viruses for gene therapy. One option is to use liposomes, small balls of lipid (fat droplets found in the cell membrane) which contain the DNA needed to fix the gene. These shouldn’t produce an immune response since liposomes are made from materials found naturally in cells. However, the disadvantage of liposomes is that they can’t target specific cell types. Another alternative is to simply inject the DNA directly into target cells. The advantage of this is that it won’t cause an unwanted immune response. However, injecting DNA is an immensely tricky process and may only be possible with certain types of cell. Also, since this ‘naked’ DNA does not integrate well into cells, it may not be expressed as reliably as DNA delivered by other methods.
So how likely is it that in the future viral vectors will be used to help cure disease? Although the process is still in its infancy and concerns over its feasibility need to be addressed the principle is promising. Indeed, there have already been some positive steps towards implementing these techniques! In 2008, a group at University College London (UCL) used gene therapy to successfully improve the sight of a patient with the eye disease Leber’s congenital amaurosis. The patient suffered no adverse effects since the study used an adeno-associated virus, a strain which cannot replicate itself without the addition of a partner virus. The adeno-associated viral DNA also usually inserts itself into a specific region in chromosome 19 (whilst insertion from other viruses may be random), meaning it is less likely to interfere with other functional DNA.
Therefore there is still hope that viruses can be used as vehicles for gene therapy. If current problems can be overcome, it may prove to be a revolutionary method for treating ‘single-gene’ diseases. Indeed, it is unlikely that viruses will cause the apocalyptic effects seen in these films, since any reversion to virulence is likely to be caught long before it infects the general population. But that wouldn’t make for such entertaining viewing would it?
Salamanders are interesting creatures. When attacked by a predator, they will shed their tail and flee. The tail will wriggle around on its own for long enough to hopefully distract the predator and allow the salamander to escape. Fortunately, losing a tail is not a huge bother for the little guy; he’ll just grow back a new one.
Outside a select group of amphibians, the ability of vertebrate species to regenerate complex organs and body parts is extremely limited. In humans, it is almost non-existent (with the notable exception of the liver). But what cannot be done by nature, we can achieve through science! Or at least, it may be possible in the future…
Regenerative medicine is a relatively new field that is creating new organs and new possibilities. Last month, it was reported in The Lancet that a 10 year old Swedish girl received a successful transplant of a vein grown in a lab from her own stem cells1. The girl was suffering from a liver portal vein obstruction, a potentially fatal disease. If she had been an adult, a vein would have been removed from her neck or leg and used as a replacement. However in the case of a still growing child, this procedure carries unacceptable difficulties. Instead, they grew one.
A vein from a deceased donor was stripped of all its cells and the remaining scaffold “seeded” with the girl’s own stem cells. This scaffold was then placed in a “bioreactor” which bombarded the structure with chemicals necessary for programming the stem cells to develop in a specific way. This allowed the new vein to mature before transplantation. Following a second similar procedure the girl returned to full health.
When it comes to organ regeneration, experts have identified four levels of complexity:
A. Simple flat structures such as skin and endothelium.
B. Tubular structures such as blood vessels and tracheae.
C. “Hollow” organs such as bladders.
D. Organs with complex substructures such as kidneys or lungs.
The Lancet report is just the latest example of engineered organs being transplanted into humans. Indeed, similar results have previously been achieved with bladders, urethras and tracheae2-4. These manufactured organs have the advantage over normal transplants of being wholly compatible with the patient’s immune system, rendering the need to take dangerous immunosuppressants moot. However, whilst the scaffold technique is useful as a proof of concept, in practice problems can arise. The scaffold may not be a suitable size, causing it to mismatch with surrounding tissue. The growth of cells on the scaffold may be uneven or insufficient. Furthermore, no group has yet created a level D organ using this method.
These challenges have led scientists to look for other options. 3D printing is an industrial manufacturing technique that has recently been adapted and expanded into biomedical applications. A 3D printer creates objects by the process of “additive manufacturing”. The printer deposits layers of material (usually small, discrete pieces of metal or plastic) on top of one another onto a flat surface according to a programmed input. These layers are then melted together to form a complete object. This process is used to form many complex objects including machine components, jewellery and toys.
Very recently, 3D printers have become capable of laying down material at the micrometre resolution (one-thousandth of a millimetre), a scale which allows accuracy at the level of a single cell. Biologists have since been experimenting with using 3D printers to print human organs. Small clumps of cells are bound together with a rapidly-degrading film that allows these clumps to be printed three-dimensionally in a fairly structured manner. The cells are then deposited in a shape and structure similar to a specific organ. Whilst the arrangement of the printed cells is not quite the same as a naturally occurring organ, the cells are able to detect their environment and re-organise themselves into a more conventional structure. One example is shown below.
Large blood vessels are generally composed of the following, from the centre outwards: the internal empty space (the lumen), the internal barrier (endothelium), then smooth muscle cells and finally the fibrous tissue. a) The template (top) shows printed cylinders of smooth muscle cells stacked on top of each other. The result (bottom) shows that over time the cells assembled automatically in a manner that produces a lumen. b) The template (top) shows printed cylinders of mixed smooth muscle cells (red) and endothelial cells (green). A cross-section of a mature blood vessel shows that over time the endothelial cells migrate towards the centre. c) Top: a template consisting of fibroblast cells (red) and smooth muscle cells (green). Bottom: a cross-section of the resulting blood vessel showing clear fusion and segregation of the two cell types. From Jakab et al.5
The printed cells used for this technique can either be specific cell types as shown above or more generic stem cells. Once 3D constructs have been formed, the tissue is matured in a bioreactor to form a stable organ, similar to previous techniques. The now consolidated organ is essentially functional and in theory could be used for transplantation. However, such is the novelty of this technique that it hasn’t yet been put into practice.
Bioprinting has huge strengths over similar techniques including customisation, flexibility and reproducibility whilst not relying on donor material. In theory any cell type, body structure or tissue may be used in this technique providing the correct design and the correct type of “ink” is used. Even bone may be printable using a combination of osteogenic cells and temporary connective tissue replacements.
The potential for this technology is vast. Imagine if we were to bear the maximum fruits of regenerative research. Organ donor waiting lists would become a thing of the past. No more children on dialysis. No more diabetics on insulin. No more hormone disorders (we could replace faulty hormone-releasing organs). Men and women injured in wars or in car crashes would have limbs replaced or healed.
And what about non-therapeutic uses? Want a breast implant? No need to implant a silicone or saline bag – just add more breast. The number of animals killed for research purposes would greatly decrease. When you can test drugs and conduct experiments on organs grown in a lab, the need to extract these organs from animals would no longer exist.
Whatever the possibilities may be, research is still at an early stage. Printers of adequate sophistication cost in the range of tens of thousands of pounds, choking research in this area. We can only hope that as prices collapse, so will the barriers to progress.
Post by Chris Logie
References
1. Olausson M. et al. (2012). “Transplantation of an allogeneic vein bioengineered with autologous stem cells: a proof of concept study.” Lancet 380 9838:230-7.
2. Atala, A. et al (2006). “Tissue-engineered autologous bladders for patients needing cystoplasty”. Lancet 367 9518:1241-6.
3. Raya-Rivera, A. et al. (2011). “Tissue-engineered autologous urethras for patients who need reconstruction: an observational study.” Lancet 377 9772:1175-82.
4. Macchiarini, P. et al. (2008). “Clinical transplantation of a tissue-engineered airway.” Lancet 372 9655:2023-30.
5. Jakab, K. (2010). “Tissue engineering by self-assembly and bio-printing of living cells.” Biofabrication 2 2:022001
6. Fedorovich, N. E. et al. (2011). “Organ printing: the future of bone regeneration?” Trends Biotechnol 29 12:601-6.
In 2009 Professor David Nutt caused controversy for the UK government’s Advisory Council on the Misuse of Drugs (ACMD) after stating that cannabis should be downgraded to a Class C (rather than Class B) illegal drug. During his time on the Council, Prof. Nutt had also claimed that recreational use of ecstasy is less dangerous than horse-riding. He was sacked from his government-advice post by Alan Johnson, the then Labour UK Home Secretary, who wrote to Nutt, “I cannot have public confusion between scientific advice and policy and have therefore lost confidence in your ability to advise me as chair of the ACMD.” By that logic, Johnson must have breathed a huge sigh of relief when three more scientific experts on the Council soon resigned.
Science and politics share a very complicated relationship, and have done since time began. Galileo was condemned for stating that the Earth revolved around the sun, and was kept under life-long house arrest for his theory. Darwin’s theory of evolution has been exploited as an argument for any number of political agendas, all the way from communists to Nazis. Global warming was (and still is) a political minefield for climate scientists. The disciplines of science and politics are so intertwined that good science is intrinsically political, and policies should always be informed by science. Unfortunately, in today’s society there is a massive disconnect between scientists and politicians.
This rift is exemplified by the shocking fact that only one British Member of Parliament out of 650 has a scientific, research-based PhD (Lib Dem MP for Cambridge Julian Huppert, Biological Chemistry, in case you’re wondering). Similarly in the United States, only 3 of the 435 members of the House of Representatives have a non-medical, scientific background. And considering Prof. Nutt’s dismissal, it seems that scientists are seen by politicians as commodity experts whose advice can be cherry-picked for a bit of ‘policy-based evidence-making’. Winston Churchill once said that science should be ‘on tap but not on top’. But what’s stopping us scientists from getting properly involved in politics?
I’m tempted to argue that, to a certain degree, it’s our own fault. As scientists, we are notoriously rubbish at PR. I imagine many of us wouldn’t want to be seen dead testiculating* with the rest of the mob in Parliament. Sadly, a lot of scientists’ work is viewed as slow, expensive, secretive and not immediately socially beneficial. The current stereotype of a scientist is sadly pretty much the same. Politicians, on the other hand, work to very tough deadlines in order to combine ethical, social, moral and economic factors into their party’s policies. These policies actually make a huge difference to people’s daily lives. As if that wasn’t enough, politicians have to try and kiss babies, refrain from calling women bigots and avoid cameras when beating up youths during the Election. Makes the lab seem pretty cushy.
The main thing scientists have got going for them if they fancy residing in Downing Street is the doctrine of science itself. As Carl Sagan (the American Brian Cox of his time) said, “science is a way of thinking much more than it is a body of knowledge”. The knowledge bit isn’t bad either and it lasts far beyond the sell-by date of parties’ policies. The logic that underlies the analysis of data gathered from random, blinded, controlled trials is the perfect way of objectively testing different policies. And we’ve got buckets of that logic to share with our MPs.
U.S. President Barack Obama praises his ‘dream team’ of scientific advisors for their advice, “even when it’s inconvenient, indeed, especially when it is inconvenient”. As scientists we may not have rhetoric on our side to sugar-coat the facts; but shouldn’t that be an advantage? We should not just be ready to inform and educate policy-makers; we should be ready to objectively challenge their decisions. In return, politicians shouldn’t dispose of us when they don’t like what we have to say. Professor Nutt hasn’t given up; he has now formed the Independent Scientific Committee on Drugs (win). Personally I think he sets an outstanding example to both scientists and politicians alike.
*to testiculate (verb): to gesture animatedly whilst spouting absolute b*llocks.