Playing at a better future: Could video games improve your life?

Your brain is plastic. No, not like the picture to the right, but in the sense that everything which makes us who we are (our thoughts, beliefs and understanding of the world around us) can be subject to change. This change may come from our interactions with the world, as we learn to adapt and live in a changing environment, or it may come from within, as we make conscious decisions to view the world differently. This natural plasticity helped our ancestors adapt when their environments changed and undoubtedly played an important role in their continued survival. However, a recent media storm has grown around the way brains, especially teenage brains, may be altered in response to society’s increasing use of technology. This interest has raised concerns surrounding the impact that technology, such as social media and video games, could have on the growing brain.

Video games in particular may be thought to bring together a ‘perfect storm’ of attributes primed to alter your brain. Specifically, they provide us with challenges that stretch our abilities but that are also matched to our current gaming level, and are thus always achievable. This type of challenge makes us feel particularly good, since we feel as though we have earned our own rewards (such as in-game experience points or unlocking a new level of game play) through what we perceive to be hard work. Thus, we feel a sense of accomplishment, and our brains are thought to undergo changes which reinforce certain game-related behaviours.

A number of scientific studies have explored the negative effects gaming can have on the developing brain, and there has been a range of reactive articles exploring the notion of a dystopian future where a generation of emotionally blunted, sociopathic adults cruise around heartlessly re-enacting crimes from games such as Grand Theft Auto. However, it is important to understand that many diverse activities lead to changes in brain structure and function, and that these changes are not always negative. Indeed, some studies are now beginning to highlight the positive effects games have on development and how games may be designed to improve mental function.

Interestingly, game developers and scientists are now coming together in the hope of tackling depression, a major cause of disability, especially amongst young adults (up to a quarter of young people will have experienced a depressive disorder by the age of 19). Sadly, shortages of trained counsellors and the reluctance of some young people to seek traditional help mean that fewer than a fifth of young people with depressive disorders will actually receive treatment.

A research group, led by Professor Sally Merry at the University of Auckland, has developed a role-playing game (SPARX), based on the principles of cognitive behavioural therapy (CBT), which aims to help young people cope with depressive disorders. SPARX is an interactive first-person role-playing game which allows the user to design a playable character, who is then charged with restoring ‘balance’ to a fantasy world dominated by GNATs (Gloomy Negative Automatic Thoughts). The game leads the player through a range of interactive levels where they learn different CBT techniques aimed at interrupting and challenging negative thought patterns. At the beginning and end of each level the user interacts with a ‘guide’ who explains the purpose of the in-game activities, provides education, gauges the player’s mood and sets them real-life challenges (equivalent to homework). Players’ progress is monitored throughout, and young people who are not seen to improve are prompted to seek further help from their referring clinicians (a trailer of SPARX is available at www.sparx.org.nz).

Studies suggest that SPARX significantly reduces depression, hopelessness and anxiety in young gamers, and that the game is at least as good as traditional CBT. Game designers have also worked hard to make sure the game is engaging for young people, and this seems to have worked: 60% of players completed the whole game, 86% completed at least four levels, and the majority of young people said that they would recommend the game to their friends. These are pretty impressive statistics, since teenage gamers are notoriously hard to please and a self-help fantasy RPG certainly sounds like the kind of thing teens would dismiss as ‘lame’. The success of this intervention suggests that such games could be a great way to treat patients who do not have access to therapy or who may be reluctant to engage with conventional therapeutic methods.

Ultimately the world of gaming is huge and only getting larger. It is currently estimated that by the age of 21 the average young gamer will have spent around 10,000 hours gaming; this is almost equivalent to the time they will have spent in school! With young adults investing so much of their free time in the gaming world, it’s about time we set about understanding the influence games have on development and perhaps, as SPARX has done, start putting these games to work for us. Just think, if we could harness the pleasure gamers feel when working towards gaming-related goals, we could use this medium not only to educate but perhaps also to encourage people to ‘play’ at the biggest puzzle game around – scientific research. The future seems full of amazing possibilities, so put your game face on and join the fun!

[youtube http://www.youtube.com/watch?v=dE1DuBesGYM]

Post by: Sarah Fox

 

First patients enrolled on a study aimed at improving outcomes following brain injury.

A head CT image taken years after a traumatic brain injury, showing an empty space (marked by the arrow) where the damage occurred.

Formed from around 80–90 billion neurons, and with a consistency so soft you could cut it with a table knife, the brain is a delicate, vulnerable organ. Unfortunately, despite its hard outer shell (the skull), the brain is still susceptible to many forms of damage, both external and internal. Two common forms of brain damage are subarachnoid haemorrhage (SAH – a type of stroke caused by bleeding in and around the brain) and traumatic brain injury (TBI – occurring when an external force causes injury to the brain, e.g. hitting your head). It is not always possible to prevent these types of injury; however, scientists from Edge Therapeutics, Inc. are currently working hard to develop life-saving hospital products capable of improving the outcomes of patients following SAH and TBI.

Edge Therapeutics are currently enrolling patients on Phase I/II clinical trials for their pipeline drug EG-1962. Despite its inaccessible name, EG-1962 is designed to perform a unique and possibly life-saving function. The drug is designed to treat a state known as delayed cerebral ischaemia (DCI). DCI is a complication, and a major cause of death and disability, which occurs in patients within the first two weeks following brain injury. As the name suggests, DCI causes cellular damage through ischaemia (restriction of blood flow to the tissue). This ischaemia can result from a number of mechanisms stemming from the site of brain injury, including cerebral vasospasm (a narrowing of vessels carrying blood), cortical spreading ischaemia (decreased blood flow caused by mass activation of large populations of brain cells) and microthromboembolism (a blockage of blood flow around small, trauma-induced blood clots).

Cerebral angiogram showing the blood vessels in a brain.

EG-1962, also referred to as nimodipine microparticles, is a novel preparation of the FDA-approved drug nimodipine. This preparation encapsulates nimodipine in a biodegradable coating which can be injected directly at the site of injury, releasing nimodipine steadily over a period of 21 days. This new system is thought to be an improvement on the current method of oral delivery, which is more likely to cause nasty side effects (such as low blood pressure and lung complications) and less likely to supply sufficient drug to areas where it is needed.

E. Francois Aldrich, M.D. (an Associate Professor of Neurosurgery at the University of Maryland and the Chief of Cerebrovascular Surgery) stated that he hopes the study will help select an optimal dose of EG-1962, which could potentially prevent DCI, thereby improving the lives of a number of patients suffering from various forms of brain injury.

The current study, dubbed NEWTON (Nimodipine microparticles to Enhance recovery While reducing TOxicity after subarachNoid hemorrhage), will enrol up to 96 patients in approximately 20 centres internationally. This study aims to ensure EG-1962 is safe; to discover the most safely effective dose; and to assess whether EG-1962 offers a significant improvement over oral nimodipine. Results are expected in the first half of this year and Dr. R. Loch Macdonald, Chief Scientific Officer at Edge Therapeutics, hopes that these findings will lead to further advances in the clinical development of the drug.

Although a significant number of drugs undergoing Phase I/II trials will fail to progress any further, it is hoped that this treatment or similar preparations may soon be available to reduce the damage caused by DCI.

Post by: Sarah Fox

Should Backyard Brains bug out?

A US company, Backyard Brains, has recently been criticised for marketing a device which allows users to create their own ‘cyborg’ cockroach, using a mobile phone app to control the critter’s movements. The Kickstarter-funded project, headed by graduate students with a passion for science education, has caused serious controversy, including accusations that the device will “encourage amateurs to operate invasively on living organisms” and “encourage thinking of complex living organisms as mere machines or tools”. But is it possible that these concerns are misguided?

As a scientist with a passion for public engagement, on many occasions I’ve struggled with two fundamental and opposing concepts which make this work a very delicate balancing act:

  1. Science is complicated and often a bit dry.
  2. If you want to engage non-scientists, it is often necessary to ‘sex things up’ with provocative language and concepts which pique their interest.

And here lies the problem.

Let’s take Backyard Brains’ ‘RoboRoach’ as an example. The students who began this project noticed a fundamental problem: “One in five people are likely to suffer from a neural affliction at some point in their lives and many such disorders are currently untreatable. Thus, we are in desperate need of more research in this area”. However, unlike chemistry, physics and some other aspects of biology, there are no hands-on ways to engage young people with neuroscience.

This means that when most budding neuro-researchers reach university (myself included), they are often woefully unprepared for the work they will be doing. I still remember struggling with the concepts of electro-chemical gradients and the technology used to record signals from the living brain. After 8 years I’d say I’m finally getting there. But, with our lab looking into early Alzheimer’s diagnostics and treatments, I can’t help but wish I had been better prepared to move quickly into this complicated and immensely important field of study.

The Backyard Brains tool kit certainly ticks all the boxes as a cheap, easy-to-use method to teach future scientists. And I don’t doubt that the procedures they use balance causing the least possible harm with giving young scientists a chance to learn things they would otherwise not encounter until late in their university education. So I have no qualms with the premise behind ‘RoboRoach’. But I do see a problem with how this teaching tool has been marketed. Terms like ‘RoboRoach’ and ‘cyborg’, not to mention this t-shirt, cheapen the premise behind this project and give critics ample fodder to argue that these scientists are heartless and happy to make light of (and profit from) a serious matter.

So this is where my earlier points come into play. I understand why Backyard Brains used this marketing technique. I’ve been to a number of public engagement lectures where one message is constantly driven home: if you want people to care about your scientific work, you have to make it sound “cool”. So, to be honest Backyard Brains are following this message to a tee. If you read through their web page they even admit this:

“The name “The RoboRoach” and the tagline “Control a Living Insect from Your Smartphone” was chosen to be provocative and to capture the public’s interest. A more accurate though much drier title would have been: “The RoboRoach: Study the effect of frequency and pulse duration on activating sensory circuits in the cockroach locomotion system, and the subsequent adaptation.” This is an accurate description, and these devices are currently used by scientists at research universities. However, such a description though would have alienated novices who have never had any exposure to neuroscience or neural interface experiments. We aim to bring neuroscience to people not necessarily in graduate school and thus chose an easily understandable, provocative name.”

However, I also understand why critics have called their stance ‘disingenuous’, especially when their website contains honest, well-argued ethical considerations alongside seemingly flippant statements which appear to trivialise the whole project, like this one from their Kickstarter page: “The RoboRoach is the world’s first commercially available cyborg! That’s right… A real-life Insect Cyborg! Part cockroach and part machine”.

Unfortunately, although this marketing may have bought them funding, it has also cost them the trust of many critics.

But if you can step outside the controversy and look at the basics of this project, I do believe that this work is both timely and necessary. Here, budding researchers learn how nerve cells communicate and, on a basic level, how to interface with a living brain. The techniques they learn are similar to those used in deep brain stimulation for treatment of Parkinson’s disease; a procedure which has given many sufferers a whole new lease of life! (see video below) And, to top it off, the cockroaches in question continue on to live a full life following the experiments (a fate preferable to that of most wild roaches).

[youtube http://youtu.be/h8tWlYv1Ykc]

So, although I certainly understand the criticisms aimed at this product, I also honestly believe that, if used as intended as an academic tool, this kit could be an important first step in training future neuro-researchers, perhaps even giving them the head start they need to cure some of the most devastating neurological afflictions.

Post by: Sarah Fox

Hope for new MS drug which could repair damaged cells.

Researchers from the private biotech firm ENDECE Neural have just announced the development of a new compound they believe may have the potential to repair damage caused by multiple sclerosis (MS).

MS is the most common neurological disorder affecting young adults in the western hemisphere. Although scientists are still unsure of what causes the disorder, it is known that symptoms stem from damage to the fatty covering surrounding nerve cells, known as the myelin sheath. It is believed that in the early stages of the disease the body’s own immune cells (cells usually primed to seek out and destroy foreign agents within the body, such as viruses and parasites) mistake myelin for a foreign body and launch an attack. Since myelin is essential for fast neural communication and cell protection, the symptoms of MS stem from a slowing of neural communication and ultimately nerve cell damage.

The myelin surrounding cells in the brain and spinal cord is provided by cells called oligodendrocytes. These cells reach out a number of branching arms which wrap around segments of surrounding neurons, forming the myelin sheath. The majority of drugs available for the treatment of MS aim to reduce initial damage to this sheath. However, researchers from ENDECE are now investigating treatments which can increase the number of oligodendrocytes in the central nervous system, thus leading to remyelination of damaged cells. Dr. James G. Yarger, CEO and co-founder of ENDECE, notes: “For decades, researchers have been seeking ways to induce remyelination in diseases such as MS that are characterized by demyelination.” And now this dream may be becoming a reality.

ENDECE’s work revolves around their pipeline drug NDC-1308. Although the name isn’t likely to turn any heads, its properties just might. Following the observation that pregnant women typically do not experience the symptoms of MS during their third trimester, a number of researchers have been exploring a possible role for estrogen in the treatment of MS. ENDECE researchers created 40 separate estradiol analogues (substances similar in structure to estradiol but with a range of key modifications) and assessed their biological effects. From this work they found that one analogue (NDC-1308) had a particularly potent effect on oligodendrocyte precursor cells (OPCs – cells with the ability to become mature oligodendrocytes), causing them to differentiate into mature oligodendrocytes. In follow-up studies researchers found that treatment with NDC-1308 led to remyelination in a mouse model of MS, specifically showing a 20% increase in myelination in the hippocampus (a region of the brain known to experience demyelination in this model). NDC-1308 was also found to cause remyelination in the rat and to induce cultured OPCs to differentiate into mature oligodendrocytes. Taken together, these findings suggest that NDC-1308 may prove effective in restoring the lost myelin sheath on damaged axons in patients with MS.

Dr. Yarger states, “We envision NDC-1308 being administered either alone or in combination with current therapeutics that target the immune response and/or inflammation associated with MS. By inducing remyelination, it may be possible to restore muscle control, mobility, and cognition in patients with MS. Therefore, a drug that induces remyelination, such as NDC-1308, can potentially double the size of the current market for MS therapeutics.”

NDC-1308 is still in late preclinical development, and has yet to go through rigorous safety screening and clinical trials. However, as a drug that potentially stimulates remyelination, it represents a whole new strategy for the pharmaceutical treatment of MS patients in the future.

Post by: Sarah Fox

Cancer resistant rodents – the naked truth

The naked mole rat is a quirky little creature. These mouse-sized rodents may be curious-looking, but they are fast becoming the rising stars of cancer and ageing research. Their unusual lifestyle alone makes them interesting – unlike any other known mammal, mole rats are eusocial. They live in large underground colonies, forming a social structure more akin to a hive of bees than to any other rodent species. The colony centres around a single female, known as the queen, who mates with a handful of fertile males. The rest of the colony, which can consist of over 80 individuals, are infertile workers.

The scientific interest in naked mole rats stems from a number of intriguing observations. Firstly, the naked mole rat can live for up to 30 years, around ten times longer than a mouse or rat. In fact, relative to body size, if humans were to live as long as these little guys it wouldn’t be uncommon for us to reach our 600th birthday! Equally fascinating is the fact that these animals never appear to suffer from cancer. Long-term studies of naked mole rat colonies have consistently failed to find any incidence of naturally occurring tumours in these lucky rodents.

But there’s more than luck involved in this process. Research suggests that a specific adaptation, which originally evolved to make these rodents more manoeuvrable in tight spaces, also gives naked mole rat cells some serious personal space issues. Their cells never divide to the point of overcrowding (a process necessary for tumour development). This gifts the mole rat with resistance to cancer.

But how is this possible?

Researchers from the University of Rochester in New York have found that mole rat cells make a unique ‘gloopy’ polysaccharide known as high-molecular-mass hyaluronan (HMM-HA), which is released from specialised cells called fibroblasts. This substance is similar to, but much larger than, the hyaluronan made by humans, mice or guinea pigs (one of the mole rat’s closest relatives). When hyaluronan comes into contact with cells it causes a range of reactions, the nature of which depends on its size. High-mass hyaluronan stops cells from dividing and also shows anti-inflammatory properties, whereas low-mass hyaluronan has the opposite effect. Thus, the properties of high-mass hyaluronan may explain why cultured mole rat cells are much more ‘anti-social’ than those from other mammals, preferring to grow at a lower density than tissue from mice, humans or guinea pigs.

We love you naked mole rat! (this little guy is certainly on my Christmas list – http://tinyurl.com/nttn6gq)

It was also found that mole rat cells are resistant to manipulations which would lead to tumour growth in other mammals. However, if HMM-HA production is reduced in mole rat cells then tumours are able to form. This indicates that the interaction between HMM-HA and the cell is vital for tumour resistance.

Scientists are now investigating how HMM-HA instructs cells to stop dividing. It is hoped that in the future an understanding of these mechanisms may open new avenues in the field of cancer prevention and life extension. So perhaps the enigmatic, awkward looking, naked mole rat is proof that beauty really is only skin deep!

Post by: Sarah Fox

Welcome to the pleasuredome: How we evolved to love music

Part of an ancient cave bear femur flute discovered in Slovenia in 1995

In 2008 at Hohle Fels, a Stone Age cave in Southern Germany, archaeologists discovered what is thought to be the oldest example of a man-made musical instrument: a vulture bone flute dating back to the period when ancestors of modern humans settled in the area (~40,000 years ago). This discovery suggests that our ancestors were probably grooving to their own beat long before this time – making music, arguably, one of the most ancient human cognitive traits.

This raises an interesting question: In a time before electric duvets and home pizza delivery, how and why did our ancestors find time to indulge in such a non-essential task as the creation of music?

This was a mystery contemplated by the father of evolution Charles Darwin. In The Descent of Man he questions why a skill which appears to provide no survival advantage should have evolved at all, stating “As neither the enjoyment nor the capacity of producing musical notes are faculties of the least direct use to man in reference to his ordinary habits of life, they must be ranked among the most mysterious with which he is endowed”. However, in his autobiography he later suggests a solution to this mystery while reflecting on his own lack of musical appreciation, lamenting “If I had to live my life again, I would have made a rule to read some poetry and listen to some music at least once every week; for perhaps the parts of my brain now atrophied would thus have kept active through use. The loss of these tastes is a loss of happiness, and more probably to the moral character, by enfeebling the emotional part of our nature”. Here Darwin seems to have stumbled upon a fact with which many of us would intuitively agree, the notion that music can enrich our life by generating and enhancing emotions. But can we find a biological basis for this assumption?

Do you hear what I hear? – How our brains process and store sounds and melodies:

Scientists believe that we are unique in the way our brains process sounds. Unlike other animals, the auditory centres of our brains are strongly interlinked with regions important for storing memories, meaning we are very good at combining sounds experienced at different times. This ability may have been crucial for the evolution of complex verbal communication. For example, consider times when the meaning of a spoken sentence does not become apparent until the last word – we’d have a pretty hard time understanding each other if by the end of a sentence we had already forgotten how it started! This is a skill even our closest relatives appear to lack, and one which is necessary for the development of both language and musical appreciation.

We are also really good at forming long term memories for sounds – think about your favourite song, are you able to hear the music in your ‘mind’s ear’? Scientists have found that most people are able to imagine music with a surprising level of accuracy.

It is believed that throughout life, as we listen to our own culturally specific music styles, our brains develop a template of what music should sound like. These templates are specific to each individual, depending on what forms of music they are exposed to. From this we develop the ability to predict how certain music styles should sound and are able to tell when something doesn’t quite fit our expectations. The musical templates we develop throughout life provide us with a standard against which we judge the desirability of new melodies.

How music tickles the brain’s pleasure centres:

Life can be a bit of a maze, and there are times when we need something or someone to give us the thumbs up and let us know that we’re doing things right. Like a parent praising a child, our brains provide us with an internal ‘reward’ signal to let us know we’re on the right track. This system, in the brain’s mesolimbic area, is responsible for the hedonistic sense of pleasure produced by evolutionarily desirable behaviours, such as eating, sex or caring for offspring. Scientists are now able to see this reward system, and the behaviours which activate it, using positron emission tomography (PET) imaging. Interestingly, along with activation caused by behaviours with an obvious survival advantage, researchers have found that the strong emotional response people experience when listening to music (defined as the feeling of ‘chills’ you get when listening to a particularly emotionally charged piece) also activates this reward system.

Imaging studies reveal that the rewarding aspect of music is also a very personal phenomenon, since mesolimbic activation can be initiated by different melodies in different people. This is due to the way our brains are connected. Auditory and frontal cortex regions, which store our musical preferences, are linked to mesolimbic reward pathways, meaning that the sensation of music-induced pleasure is defined by your own personal musical preferences.

It is therefore possible that music could have started life as a way of strengthening social groups, through shared preferences – something which still happens today. Groups linked by a shared emotional experience could form stronger bonds which may ultimately have helped group survival. These findings indicate that our ability to enjoy music may be less mysterious than Darwin originally thought.

Post by: Sarah Fox

What songs give you the chills? Have you formed long lasting friendships over shared music tastes? Let us know your stories in the comments below.

Comments: The future of secondary school science

Exam time is fast approaching and once again this year pupils will not only be fretting about their potential grades, but also about the following inevitable barrage of claims concerning falling exam standards. Yes, however hard you may have worked for that A* to C grade, according to the tabloids your efforts were futile, particularly since modern GCSEs are now little more than the academic equivalent of an award for ‘taking part’: spell your name correctly and walk home with a qualification. But we all know that this is not really the case, and that the real situation is significantly more complex.

The truth is, contrary to what we hear from politicians, comparison of exam standards is not an exact science. A seminar held in 2010 by the examinations group Cambridge Assessment concluded that “it is not possible to compare standards, definitively, over long periods of time and perhaps attempting to do so is simply confounding the problem.” Professor Gordon Stobart, from the Institute of Education, compared the debate over exam standards with climbing Mount Everest, noting that: “In 1953 two people got to the top of Everest, an extraordinary achievement at the time. Yet on a single day in 1996, 39 people stood on the summit.” Does this mean that the mountain is getting easier to climb? Not necessarily; it may simply reflect the fact that more people are attempting the climb and that those who do so are now better equipped.

I took my GCSEs around 12 years ago and still remember feeling my success was tempered by claims that exams were ‘getting easier’. I can certainly vouch for the fact that they didn’t feel easy! But, then again, I had nothing to compare them to since, at that time, they were the hardest exams I’d ever taken. Interestingly, the small amount of research which exists in this area shows modern GCSEs are not equivalent to their predecessors, the O-levels. A study by the Royal Society of Chemistry (the Five Decade Challenge) found that current students had a harder time answering exam questions taken from the old O-level syllabus than questions written after the GCSE switch-over. The scores for all GCSE-style questions, irrespective of date, remained relatively stable. The study found that students performed well on tests of recall but found problem-solving and tests of quantitative skill challenging.

There are many explanations for these and similar results. It is possible that exams are getting easier. However, it’s equally possible that changes to the syllabus and style of question mean that modern students show different strengths than those required to answer O-level style questions.

Anecdotal accounts argue that a culture of ‘teaching to the test’ means that modern students are encouraged to play the system, favouring lessons on exam technique over studying all available material. A particularly worrying example of this can be seen here. To be honest, I do remember a lot of emphasis being placed on past-paper learning, knowing how to answer questions and rote learning of facts and figures – something I’m actually pretty terrible at. Add to this a survey by the Confederation of British Industry showing that “more than four out of 10 employers are unhappy with youngsters’ use of English, while 35% bemoan their numeracy skills”, and the observation that lecturers often complain about students’ lack of initiative, and a worrying picture starts to emerge.

Wherever the problems lie, I believe that it is unfair to blame the students for these failings. Constantly reminding them that the exams they agonised over for the last few years were ‘easy’ won’t solve anything, and at worst could be damaging. I also doubt teachers are at fault; they are instead victims of a culture that craves an end result without caring how it is achieved. Instead, we need to take a long hard look at the current system itself and decide whether or not it is still fit for purpose. Luckily, this is exactly what education secretary Michael Gove is doing right now. In a recent letter to Ofqual he argues that “there is an urgent need for reform, to ensure that young people have access to qualifications that set expectations that match and exceed those in the highest performing jurisdictions.”

He is embarking on a mammoth task, which I certainly don’t envy, not least when it comes to science education. With public debate ranging from GM crops to vaccinations, scientific understanding is a must in today’s society, especially since it has been argued that individuals without a working appreciation of science are more likely to be swayed by pseudo-science and unfounded propaganda. Providing our children with a strong grounding in basic science is therefore essential.

Unfortunately I worry that Mr Gove’s reforms run the risk of ‘missing the mark’ when it comes to science. They appear to concentrate heavily on standardising the format of secondary school teaching, removing emphasis on coursework and ensuring qualifications are “linear, with all assessments taken at the end of the course.” This may indeed provide “qualifications that set expectations that match and exceed those in the highest performing jurisdictions.” However, I worry it will fail to tackle the true failings in our current science curriculum.

The Science and Technology Committee Report on Science Education (2002) states that: “the current curriculum aims to engage all students with science as a preparation for life. At the same time it aims to inspire and prepare some pupils to continue with science post-16. In practice it does neither of these well.” Even more damning are the report’s observations on course structure. It states that “practical work, including fieldwork, is a vital part of science education. It helps students to develop their understanding of science, appreciate that science is based on evidence and acquire hands-on skills that are essential if students are to progress in science.” However, it recognises that, due to the pressures and time constraints placed on teachers, coursework now has “little educational value and has turned practical work into a tedious and dull activity for both students and teachers.” From this it concludes that “many students lose any feelings of enthusiasm that they once had for science… neither enjoy or engage with the subject… they develop a negative image of science which may last for life.” And I can’t see this situation improving if reform means more emphasis on achievement in a final exam and less emphasis on continuous coursework assessment.

The proposed system may place more pressure on teachers to maintain standards through exam achievement alone, running the risk of exacerbating our ‘teach to the test’ culture and marginalising the development of practical skills. I hope that, if these changes are thoughtfully implemented, such problems may be avoided; however, the outcome remains to be seen.

I wonder if there is scope for the scientific community to become further involved in secondary school science education. Successful projects such as I’m a Scientist Get me Out of Here are already gaining in popularity. But, there is still much more we can do. For example: developing online e-learning resources covering the basic curriculum whilst also enabling active scientists, working in related fields, to communicate with students through blogs and forums – placing the curriculum in the context of real-world research. I know scientists are concerned about how their subjects are taught, so perhaps it’s a good time to start building better links with schools and really getting involved?

Post by: Sarah Fox

Diabetes and Alzheimer’s: Could overeating lead to dementia?

The number of people suffering from diabetes is on the rise. This rise runs alongside a worldwide increase in obesity: around 10 percent of the population now suffer from diabetes, and 12 percent are considered obese.

Although we know bad eating habits increase our risk of developing diabetes, this doesn’t seem to be enough to make us ditch the junk! I know, despite diabetes running in my family, that when the stress piles up I always crave comfort foods. But new research might soon encourage me to change these eating habits. Yes, if the long-term risks of heart disease, blindness and nerve damage aren’t enough to make me snack less, the looming threat of Alzheimer’s may just do the trick.

Numerous studies have shown that people with type 2 diabetes are twice as likely to develop Alzheimer’s as the rest of the population. But why?

Alzheimer’s is a pretty complicated problem. In fact a confident diagnosis can still only be made following post-mortem. We know that in the late stages of the disease the brain is shrunken and riddled with clusters of misfolded proteins called plaques and tangles. But what we don’t really understand is why these proteins start to misbehave in the first place.

The emerging picture is of a complex patchwork of many factors: all of which can initiate a downward cascade toward Alzheimer’s disease. Now, diabetes seems to be forming another patch on this causation quilt.

Type 2 diabetes, the kind that can develop later in life, is brought about by a number of factors, including obesity, and leads to an imbalance in insulin production. In non-diabetics insulin is produced at tightly regulated levels, causing cells around the body to absorb glucose from the blood; a process which is necessary for regulating carbohydrate and fat metabolism. Insulin can also cross into the brain, where it has been found to aid cognitive function.

Although it may seem counter-intuitive, the chronic high levels of blood insulin seen in many diabetics actually mean less insulin crosses into the brain. This, combined with fluctuations in blood sugar, may explain why a number of diabetics report reduced cognitive function. But this is not the end of the story. Diabetes also has an effect on the metabolism of fat, leading to an overproduction of ceramides. These ‘waxy fats’ are released into the blood and cross into the brain. Once there, they cause brain insulin resistance and encourage inflammation.

It is believed that this mixture of insulin resistance and inflammation causes Alzheimer’s related proteins to collect in the brain and form plaques. In fact, scientists have recently discovered that inducing insulin resistance in the brains of mice and rats leads to both memory loss and accumulation of plaques.

This research certainly seems compelling, although within the scientific community the jury is still out on the exact role diabetes plays in the development of Alzheimer’s. I personally doubt that diabetes alone can be hailed as a causative factor for Alzheimer’s. However, if we connect the dots the two certainly seem to be linked, perhaps through overconsumption of fatty/sugary junk foods? Whatever the outcome, I know that this research will certainly make me think twice before reaching for the snacks in future!

Post by: Sarah Fox

Growing old artistically

The creation of art requires a complex interplay between brain and body. Indeed, the appearance of a finished piece is intimately linked to both the subjective experiences and mental processes of the artist. Scientists are beginning to appreciate how art can be used to study changes in body and mind as individuals age. This research is opening new doors in both our understanding of the ageing process and the way we diagnose and treat age-related disorders.

The ageing body:

Arguably a painter’s most important tool is vision. Unfortunately, it is commonplace for vision to deteriorate with advancing age. This deterioration can lead to a decrease in colour and contrast discrimination, increased glare and a decreased field of view. Perceptual changes such as these will all affect the way an artist perceives the world, an effect which can be observed through changes in their artistic style and composition. Take Monet for example. As he grew older, Monet developed severe cataracts in both eyes. By the age of 65 this disorder was already affecting his visual acuity and colour vision. He could no longer perceive a vivid colour palette, instead seeing the world as desaturated and yellow. This change was reflected in his art. Monet painted a series of canvases depicting the water lilies in the gardens of his home at Giverny. The changes in his visual perception can be seen in the two images below, showing the same scene painted before and after the development of his cataracts.

Monet: Lilies at Giverny, before cataracts
The same scene after the development of cataracts

The ageing eye also often suffers from changes in its optical media (the fluid filling the eyeball), hardening and yellowing of the lens and a decrease in pupil size. These changes all reduce the amount of light which eventually reaches the retina at the back of the eye. Indeed, it is estimated that by age 60 the retina receives only one third of the light a 20-year-old eye would have received. Overall, these changes reduce an individual’s ability to distinguish fine detail and cause a shift in colour vision towards the red end of the visual spectrum. An example of this can be seen in the later work of artists such as Rembrandt. Notice the lack of detail and shift to a yellowed palette in his later works.

Rembrandt early self portrait
Later work

Another disorder of the ageing body which has a notable effect on artistic output is arthritis. This reduces dexterity and movement, leading to less detailed work, often with larger brush strokes.

The ageing brain:

The separate elements of an artistic composition are as broad and variable as the artist’s own mind. Scientists have found that certain artistic styles can be linked to different regions of the brain, and that damage to these regions can dramatically change an artist’s style. An example of such change can be seen in ageing individuals suffering from Alzheimer’s disease. Alzheimer’s sufferers experience a loss of visuospatial skills, due to degeneration of their posterior parietal and temporal cortices*. This means that sufferers’ artwork becomes progressively more abstract and less spatially precise. However, at least in the disease’s earlier stages, this does not necessarily diminish the artistic appeal of their work. Although pieces may lose spatial precision, this is often replaced with an appealing sense of colour and form. For example, work by the artist Carolus Horn altered significantly as his Alzheimer’s progressed, becoming more two-dimensional and less detailed. However, alongside these changes his work also became more vibrant and developed a simplistic charm.

Carolus Horn: Prior to disease onset
Post Alzheimer’s

Unfortunately as the disease progresses further the sufferers’ artistic deficits become more acute, until finally images bear no resemblance to their intended subject.

Interestingly, in certain forms of dementia (especially frontotemporal dementia (FTD) with degeneration of the left anterior temporal lobe) some individuals develop artistic talents which were not present before disease onset. Many of these patients develop a compulsive need to paint, repeating the same picture many times. These compulsions may explain how patients can become relatively accomplished in a short space of time.

The study of changing artistic style in patients with degenerative dementias is giving scientists a valuable insight into how their brains function. Indeed, this area of research may one day open up a range of novel diagnostics and therapeutic interventions.

However, perhaps the most poignant observation made in recent years is the effect art can have on the lives of patients and their families. Some families have found that art represents a way to communicate with loved ones who have long since lost the ability to communicate verbally. Sufferers also benefit from focusing on their artistic strengths. This gives patients a feeling of accomplishment they previously lacked and, in some cases, can provide temporary relief from their symptoms. Art seems to have the ability to improve the quality of life for dementia sufferers and their families, whilst also offering an amazing insight into the working of their minds. Therefore, it’s great to see new research focusing on this area and organisations such as the Hilgos Foundation emerging, which offers grants for art students working with Alzheimer’s patients.

[youtube http://www.youtube.com/watch?v=I_Te-s6M4qc]

Post by: Sarah Fox

* Posterior parietal regions are important for perception of space and appreciation of movement in space and time, while temporal areas are required for perception of form and depth.

What can science add to the abortion debate?

Few topics generate such a passionate division in opinion as abortion and ultimately there is no easy answer when choosing between an unborn child’s right to life and a woman’s right to freedom over her own body. However, after reading about the uproar caused by the tragic death of Savita Halappanavar, I knew I wanted to add my own voice to this debate. Which left me pondering the following question: what can a science blog bring to the table when tackling a heated moral debate like this?

The answer, I believe, is something few mainstream sources address: the development of the brain and consciousness (as we understand it) in the growing fetus.

Of course, if you are of the opinion that ‘life begins at the moment of conception’, the emergence of consciousness is probably a moot point. However, according to recent statistics, more than 60% of UK adults, and of 18-35 year-olds in the Republic of Ireland, are pro-choice. This means that, under certain circumstances, they accept abortion as a viable option, raising a particularly difficult question: assuming abortion is, in theory, acceptable, is there a point during pregnancy when it becomes unacceptable, and how do we decide where to draw this line?

Current UK legislation states that an abortion must be carried out within the first 24 weeks of pregnancy. However, guidelines also state that the procedure should ideally be performed before 12 weeks. Current legislation bases its ‘upper limit’ on the survival rate of premature babies, which drops significantly before 24 weeks (percentage of babies successfully discharged from hospital after premature birth: 24 weeks: 33.6%; 23 weeks: 19.9%; 22 weeks: 9.1%).

Almost 90% of UK abortions are performed within the first 12 weeks of pregnancy. During this time there is no scientific doubt that the developing fetus is incapable of any form of conscious awareness. The fetal brain does not begin to develop until 3-4 weeks into the pregnancy, at which point it is little more than a hollow tube filled with dividing neurons. Between weeks 4 and 8 this neural tissue grows, forming the major divisions of the adult brain (forebrain, midbrain, hindbrain and spinal cord). By 8 weeks recognisable facial features have developed and the cerebral cortex separates into two distinct hemispheres. By the end of the first trimester (12 weeks) nerve cells are beginning to form rudimentary connections between different areas of the brain. However, these connections are sparse and incapable of performing the same functions as an adult brain. So by 12 weeks, although the fetus is certainly starting to look like a little human, the neural circuits responsible for conscious awareness are yet to develop.

The first trimester is also the time when around three quarters of spontaneous miscarriages occur. Miscarriages are possible throughout the pregnancy and are much more common than most people realise. One in eight women who are aware of their pregnancy experience a miscarriage, with many more occurring before the woman is even aware she has fallen pregnant.

As the complexity of the fetal brain grows, forming structures similar to those we recognise in the adult, so too does the fetus’ ability to experience and respond to its environment. Indeed, studies have shown that from 16 weeks the fetus can respond to low frequency sound, and by 19 weeks will withdraw a limb or flinch in response to pain. An observer would certainly think these responses look very much like the start of conscious awareness. However, during these early days the neural pathways responsible for converting senses into conscious experiences have yet to develop. This means what we are seeing are just reflexes, probably controlled entirely by the developing brainstem and spinal cord.

In fact, we know that the brain structures necessary for the conscious experience of pain do not develop until 29-30 weeks, while the conscious processing of sounds is only made possible after the 26th week. Even when the fetal brain possesses all its adult structures, scientists are cautious to assume it possesses what we refer to as ‘consciousness’. This is mainly because low oxygen levels and a constant barrage of sleep-inducing chemicals from the placenta ensure that, until birth, the fetus remains heavily sedated.

Ultimately, although science cannot and should not try to answer the moral questions behind abortion, it can give us some amazing insights into how the brain develops. It seems that, in the womb, a fetus is unlikely ever to experience traditional consciousness. However, we do know that once the relevant neural pathways are in place (in the last weeks before birth) the fetus can form rudimentary memories. This means that after birth it can show a preference for its mother’s voice and other sounds and smells experienced in the womb – yes, newborn babies show a liking for the smell of amniotic fluid.

Therefore, although the ‘upper limit’ on abortion remains relatively arbitrary, its current position at 24 weeks fits well with both premature birth survival rates and, in terms of neural development, a time before any major connections are in place – making it, in my eyes, a pretty good point at which to draw this line.

Post by: Sarah Fox