The dream-reading machine

The film Inception starts with Leonardo DiCaprio and Joseph Gordon-Levitt attempting to infiltrate someone’s subconscious. They are trying to steal the target’s dreams. This wonderfully futuristic concept may be a thing of science-fiction movies, but researchers in Japan might just be on the road to seeing what you dream.

Research carried out by Yukiyasu Kamitani’s group at the Advanced Telecommunications Research Institute in Kyoto, published in the journal Science, used fMRI (functional magnetic resonance imaging) brain scans to monitor volunteers’ brain activity whilst they drifted off to sleep. By training a computer algorithm to interpret this brain activity, the researchers were able to predict what a subject was dreaming about.

To begin, three volunteers were placed into an fMRI scanner and shown Google images of many different objects. The activity in the visual areas of the brain was monitored by the scanner and uploaded to a computer. Words associated with each of these images were processed and arranged into groups of like-meaning words, called synsets. For example, words such as Structure, Building, House and Hotel would be grouped together in their own synset. Words within a synset were ranked depending on their importance, and the most important was used to describe that synset. In the example above, Building would be the highest-ranked word in that synset. This allowed the researchers to narrow down a number of possible words/objects to one word of ‘best fit’: images of houses, hotels and offices would all be narrowed down to Building.
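To make the grouping step concrete, here is a minimal sketch in Python. The synset dictionary below is hand-built for illustration; the study itself drew its synsets from the WordNet lexical database, and its ranking procedure was more sophisticated than simply naming each group after one representative word.

```python
# Toy illustration of collapsing specific image labels to a synset's
# 'best fit' word. The groups and ranking here are hand-built examples,
# not the study's actual WordNet hierarchy.
SYNSETS = {
    "building": ["structure", "building", "house", "hotel", "office"],
}

def best_fit(label: str) -> str:
    """Return the top-ranked word of the synset containing this label."""
    for representative, members in SYNSETS.items():
        if label in members:
            return representative
    return label  # label belongs to no known synset: keep it unchanged

for label in ["house", "hotel", "office"]:
    print(label, "->", best_fit(label))  # all three collapse to 'building'
```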

The computer was given the synsets that related to each image, along with the brain activity recorded at the time each image was shown. This allowed the researchers to match brain activity to certain images and words. The computer now knew that when the subject saw a picture of a house, their brain responded in a certain way. This activity was grouped together with the activity recorded when the subject saw an office, a hotel and so on.
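In machine-learning terms, this is supervised training of a decoder: voxel patterns are the inputs, synset labels are the outputs. Here is a minimal sketch using scikit-learn with random stand-in data; the array shapes, the linear support vector classifier and the train/test split are illustrative assumptions, not the authors’ exact pipeline.

```python
# Sketch of training a decoder that maps fMRI voxel patterns to synset
# labels. The data here are random stand-ins, so accuracy would be at
# chance; with genuine visual-cortex patterns the decoder learns which
# activity goes with which synset.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 1000
X = rng.normal(size=(n_trials, n_voxels))                 # one pattern per image
y = rng.choice(["building", "food", "person"], n_trials)  # synset labels

decoder = LinearSVC().fit(X[:150], y[:150])  # learn pattern -> synset
print(decoder.predict(X[150:155]))           # decode five unseen patterns
```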

Now came the real test. Having categorised brain activity based on what a person sees, could the researchers read what a person was dreaming about? The three subjects were placed in the scanner and told to fall asleep if they felt tired. The electrical activity of their brains was recorded by EEG (electroencephalogram) in order to see when they fell into the early stages of sleep. During these early stages, one does not normally have vivid ‘dreams’, but rather light hallucinations.

As these hallucinations started, the brain activity in visual areas was recorded and run through the algorithm. The algorithm came up with the synsets that were most likely to be represented by that brain activity, and used the Google images from before to present a video of what it thought the person was ‘dreaming’ about. This can be seen in the video below. To test how accurate the computer was at predicting the ‘dreams’, the volunteer was awoken and asked what they had just seen.

When the researchers compared what the participants reported seeing with the computer’s prediction, they found that the computer was correct in 60% of cases. This is significantly higher than getting it right by chance. The computer was able to use brain activity during the early stages of sleep to read and predict what the volunteer was seeing.
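How much better than chance is 60%? A quick binomial test shows why it matters. The trial count below is assumed purely for illustration (the paper’s actual number of decoding decisions differs), with a 50% chance level for two-way choices.

```python
# Is 60% correct significantly better than a 50% coin flip? A one-sided
# binomial test, with a trial count assumed only for illustration.
from scipy.stats import binomtest

n_trials, n_correct = 200, 120  # 60% of 200 (assumed numbers)
result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"p-value: {result.pvalue:.4f}")  # ~0.003: unlikely to be luck
```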

This study is not without its limitations. Firstly, what most of us see as ‘dreaming’ is not thought to occur in these early stages. ‘Dreaming’ is believed to occur mainly during rapid eye movement (REM) sleep, a stage of sleep that begins around an hour after these early stages (see below). What is measured here are the hallucinations that occur as we fall asleep. Furthermore, an fMRI machine is incredibly loud, due to the rapid switching of its gradient coils. It is questionable whether the sleep stage observed in these participants is truly what we would regard as sleep.

 

A typical sleep cycle. Researchers recorded hallucinations in stages 1 and 2. We normally dream in REM (rapid eye movement) sleep. Image credit to Sleep 1102.


Secondly, a success rate of 60% is hardly news to excite those wanting to perform dream extraction. The crude prediction is not an exact match of what someone is seeing (as you can see from the video above). The computer was able to recognise that you were seeing a building, but not that you were cleaning the windows of your own house, for example. It is clear that it will take some time to really enter the realms of dream-reading. The interpretation of this crude prediction is also hampered by the fact that the study was based on only three participants. It is not clear whether the result will generalise to the wider public.

Despite these limitations, what the researchers have done is remarkable. They have shown that these early sleep hallucinations create very similar patterns of activity in the brain to when we are awake. They have shown a relatively accurate way to decode this activity into what the subject is seeing. And they have opened up the possibility of studying the function and nature of sleep in more detail. But don’t worry; the Thought Police won’t be after you just yet.

 

By Oliver Freeman @ojfreeman

 

News and Views: The Brain Activity Mapping Project – What’s the plan?

“If the human brain were so simple that we could understand it, we would be so simple that we couldn’t” – Dr. Emerson Pugh

Isabelle Abbey:

An ambitious project intended to unlock the inscrutable mysteries of nerve cell interactions in the brain is on its way. Labelled America’s ‘next big thing’ in neuroscience research, the ‘BRAIN’ (Brain Research through Advancing Innovative Neurotechnologies) initiative will use highly advanced technologies in an attempt to map the wiring of the human brain.

Cajal drew some of the billions of neurons in the human cortex. Technology has come a long way since 1899.

Also referred to as the Brain Activity Map (BAM) Project, the BRAIN initiative aims to decode the tens of thousands of connections made by each of the ~86 billion neurons that form the basis of the human brain. Scientists believe completing the map will be an invaluable step, with potentially huge implications for tackling neurological disease.

Moving forward in this manner does seem particularly appropriate. For the past 10 years, we have been reaping the benefits of technologies like fMRI and PET scanning, which have allowed us to visualize the brain in a way that has never been done before. From measuring behaviours to diagnosing abnormalities, the contribution of neuroimaging to our understanding of brain physiology and pathology is undeniable.

Paul Alivisatos, the lead author of the paper detailing the BAM proposal, aims to develop novel toolkits that can simultaneously record the activity of billions of cells in the live brain, rather than from slices of tissue. Eventually, these technologies should allow an accurate depiction of the flow of information in the human brain, and of how this differs in pathological states such as Alzheimer’s disease or autism.

Despite the daunting nature of the task at hand, the proposal has been met with much political enthusiasm. On 2nd April 2013, Barack Obama announced that the American government would back the project, approving a $100m funding budget for its first year of operation.

The humble nematode worm, 1mm long

But might this project need some grounding? After all, Alivisatos and his co-authors are yet to establish the basis on which such tools can be developed, or the extent to which these technologies could be used. Years of extensive research concentrated on mapping the wiring of the simple nematode worm, whose nervous system consists of only a few hundred cells, have yet to allow us to accurately predict the worm’s behaviour. So, some scepticism does seem reasonable.

While we must be cautious in predicting ambitious benefits from such a project, the map Alivisatos and his colleagues have envisaged gives reason enough to be hopeful about what the next decade will add to our neuroscientific appreciation of human cognition.

Natasha Bray:

As a neuroscience researcher, I can’t help but take an interest in the BRAIN initiative proposed by President Obama earlier this month. It’s a massive pot of cash designed not only to further the neuroscientific knowledge base, but also to create jobs and technologies that can’t even be described yet. As Izzy mentions above, the project is an ambitious and important undertaking that merits the now fashionable label of ‘big science’.

The BRAIN initiative is funded by a big pot of money from different sources, including DARPA (the Defense Advanced Research Projects Agency), the National Science Foundation, the National Institutes of Health, Google and various other institutes and charities.

So far, even defining the project and choosing suitable methods has been a challenge. The research leaders have proposed “to record every action potential from every neuron within a circuit”. Bear in mind that action potentials (nerve impulses) last just a couple of thousandths of a second, while a single circuit may encompass many millions of cells. At the moment, neuroscientists can record action potentials from up to about 100 cells simultaneously. We can work out anatomical circuits; we just can’t record from every cell within them. There is not a single tool in neuroscience’s toolbox capable of gathering that kind of data (yet).
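A back-of-the-envelope estimate shows the scale of the problem. Every number below is an illustrative assumption rather than a project specification, but together they give a feel for why the data volumes mentioned further down run into quadrillions of bytes.

```python
# Rough estimate of the raw data rate for recording every spike in a
# million-neuron circuit. Every figure here is an assumption chosen
# only to illustrate the scale of the problem.
neurons = 1_000_000       # one modest 'circuit'
sample_hz = 10_000        # 0.1 ms sampling, enough to catch ~1 ms spikes
bytes_per_sample = 2      # 16-bit samples

rate = neurons * sample_hz * bytes_per_sample   # bytes per second
print(f"{rate / 1e9:.0f} GB per second")        # 20 GB per second
print(f"{rate * 86_400 / 1e15:.1f} PB per day") # ~1.7 petabytes per day
```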

There are, however, candidate techniques that could be improved or perhaps combined. Imaging techniques, including optical calcium or voltage imaging, and magnetic methods such as fMRI and MEG, can scan on different scales in both time and space. Neurons’ electrical activity can be recorded using silicon-based nanoprobes or very tightly spaced electrodes. Researchers have even suggested synthesising DNA that records action potentials as errors in the DNA strand, like a ticker tape. Advances in all these technologies are still being made, making them the most likely candidates.

Added to the difficult choice of method is the serious task of storing and analysing quadrillions of bytes of data, plus the fact that it’ll take about ten years just to complete an activity map of the fly brain. It’s clear there are significant hurdles to jump. Then again, no one said big science would be easy…or cheap. But the potential benefits of big science are huge. The Human Genome Project had a projected cost of $3 billion, yet was completed within budget and has already paid off both intellectually and financially. It’s famously estimated that for every dollar originally invested in the Human Genome Project, an economic return of $140 has already been made.

I see the BRAIN initiative as a very worthy cause, a good example of aspirational ‘big science’ and a great endorsement for future neuroscience. One gripe I have with it, however, is that it seems a little like Obama’s catch-up effort in response to Europe’s Human Brain Project (HBP). The HBP involves 80 institutions striving to create a computer infrastructure powerful enough to mimic the human brain, right down to the molecular level. Which raises the question: surely in order to build an artificial brain you need to understand how it’s put together in the first place? I really hope that the BRAIN initiative and the Human Brain Project put their ‘heads together’ to help each other in untangling the complex workings of the brain.

Preventing mitochondrial disease: Can three (parents) be the magic number?

Since September 2012, there has been a consultation in the UK on whether to allow the creation of three-person embryos. This may sound like an odd debate to be having, but there is a good reason for trialling this technique: to reduce the risk of genetic mitochondrial disease.

What are mitochondria?

Mitochondria

Often referred to as the “generators” or “batteries” of a cell, mitochondria provide the energy required for the cell to work normally. Each mitochondrion is tiny, only about 1 μm (one thousandth of a millimetre) long, but their function is essential. Many mitochondria are found in each cell – the higher the energy requirements of the cell, the higher the number of mitochondria found there.

The curious thing about mitochondria is that they have their own little set of DNA. This DNA is responsible for producing the building blocks that make up oxidative enzymes – proteins which are important for energy generation. Mitochondrial DNA consists of 16,569 base pairs, a tiny fraction (about 0.0005%) of the 3.3 billion base pairs found in the nuclear genome.

Mitochondria have many unique features not found in any other part of the cell. Their DNA is circular – a feature normally found in bacterial cells (also known as “prokaryotic” cells) – whereas humans and other animals store their DNA as linear strands in the nucleus (these are called “eukaryotic” cells). Mitochondria also have their own unique set of ribosomes, the machines which make proteins in the cell.

Pikachu

These distinctions have led scientists to theorise that mitochondria may have a different origin to the rest of the components in a cell. It is thought that they were once free-living organisms, something like bacteria. A long time ago, in the early days of evolution, these bacteria invaded an early incarnation of a cell. Both bacteria and the cell were able to co-exist perfectly together – the cell provided the bacteria with essential proteins and the bacteria were able to generate plenty of energy which the cell could use. It’s a bit like if your house was invaded by Pikachu – he would provide you with free electricity as long as you kept him well-fed. Both of you would benefit from the arrangement.

As this partnership worked so well, the bacteria were eventually assimilated into the cell and became a permanent feature. This is known as endosymbiosis – a mutually beneficial co-development of host and invader.

Mitochondrial Disease

Mitochondrial disease affects each sufferer differently. The affected mitochondria may be confined to one tissue type or spread across several. The most commonly affected organs are the brain, muscle and kidneys, because these require a lot of energy. There is a huge variety of symptoms, making the disease very hard to diagnose. Some types of mitochondrial disease have more common symptom patterns and so are grouped under collective names, such as Alpers’ disease and Leigh syndrome. The onset is usually in childhood but it can also develop in adults. About 4,000 children a year in the US are affected by mitochondrial disease, and in severe cases it is fatal, with the child unlikely to reach adulthood. So far, there is no known cure.

Three-person embryos

Every embryo contains three separate genetic components: DNA from the father, DNA from the mother and mitochondrial DNA. These are brought together when an egg cell, containing both maternal and mitochondrial DNA, fuses with a sperm cell containing paternal DNA. In cases of mitochondrial disease, the mitochondrial DNA in the egg cell is damaged, and this damage can be passed on to the child who may then develop disease symptoms. By creating three-person embryos, scientists are hoping to prevent mitochondrial disease by replacing the faulty mitochondria with normal ones before the embryo develops.

Two techniques for creating three-person embryos are being discussed. The first is called “maternal spindle transfer”. The idea is to take an egg from the mother and remove the nucleus, which contains all of her genetic material apart from the mitochondrial DNA. A donor egg with healthy mitochondria has its own nucleus removed and replaced with the nucleus from the mother’s egg. The egg is then fertilised by the father’s sperm, in a similar way to conventional IVF.

The maternal spindle transfer technique has been successful in animal trials. In human trials, however, only about half the eggs made using this technique developed normally. The researchers involved still think the results are encouraging enough that the technique should be allowed to go to the next stage: clinical trials. Currently, this is illegal in both the US and the UK. The present government debate is about whether to change the law to allow these clinical trials to occur.

The second technique is called “pro-nuclear transfer” and involves fertilising both the mother’s and donor’s eggs with the father’s sperm. Before the eggs divide, the nucleus is removed from both eggs, and the nucleus from the mother’s egg is placed in the donor’s. Doug Turnbull and his team at Newcastle University in the UK have pioneered this technique and have successfully developed embryos to about 100 cells (the “blastocyst” stage).

A mother, a father and a little bit extra

Mitochondria contribute only a tiny amount of the DNA to a person’s genome. Therefore, a three-person embryo would consist mostly of the DNA from the father and mother, with only a small proportion coming from the donated mitochondria.

There is much controversy surrounding “three-person embryos”. For starters, the phrase itself sounds a bit weird and unnatural. What’s more, there are multiple ethical issues and moral arguments, such as “interfering with nature” or who will have parental rights. Some people are worried about what impact having three genetic parents would have on a child’s development. Others point out that this won’t cure existing sufferers; it would just prevent new babies from being born with the disease. Furthermore, it is not known what effect this technique could have on future generations.

However, the concept of “three parents” is not as bad as it sounds. The tiny mitochondrial genome is only responsible for certain basic processes. So, it appears unlikely that having the mitochondria from another person will have a big impact on the development of the characteristics of the embryo or the child, such as its appearance or personality.

It may be possible to reduce any “three-parent” risks by using mitochondria from a family member of the father. The mitochondrial genome is always inherited from the mother, as mitochondria are present in the egg at fertilisation. In the same way, the father’s mitochondria will have been inherited from his own mother. Donation of an egg from a maternal relative of the father (his mother, a sister or maternal aunt) would ensure the embryo would still inherit the exact mitochondrial DNA of one parent, in this case, the father rather than the mother.

The concepts and techniques behind mitochondrial donation have been subjected to ethical reviews, which concluded that the techniques are promising but that more research is needed. However, doing further research would require a change in the current law as genetic modification has never been tried to this extent in humans.

The future of mitochondrial donation

The Human Fertilisation and Embryology Authority (HFEA) have been consulting public opinion on three-parent embryos. They published their results in March 2013, finding that 44% of the 1,000 people surveyed approved of the technique, with 29% against it. However, an open online questionnaire found 455 people in favour and 502 against. Public opinion is clearly divided on the issue.

I think the term “three-person embryos” or “three-parent babies” should be dropped because it has alarming connotations, making the technique sound strange and unnatural – a bit like the “Frankenfood” label given to GM crops. Describing it as “mitochondrial donation” may encourage people to understand its potential benefits and may help dispel controversy. The very existence of mitochondria in our own cells proves that something that seems unnatural can be benign or even beneficial – if those proto-bacteria hadn’t invaded the host cells all those millions of years ago, life as we know it would never have developed in the first place.

The UK has always been at the forefront of scientific innovation, especially in fertility research. This was highlighted by the recent passing of Sir Robert Edwards, one of the scientists who pioneered the IVF technique (unfortunately, his death in April 2013 was somewhat overshadowed). His legacy was to bring desperately wanted children into the world, and now we have a chance to build on that by adapting his technique to reduce suffering. I sincerely hope the government gives the green light to further investigation of this concept. Of course, lots of work still needs to be done before the technique can actually be used, if it can be used at all. However, I think the researchers should be given the opportunity to develop this potentially life-saving technique.

Post by: Louise Walker

News and Views: The Festival of Neuroscience – A 5 minute guide.


 

Between 7th and 10th April 2013, neuroscientists from across the globe met in London for the British Neuroscience Association’s ‘Festival of Neuroscience’. Here is my whistle-stop tour of the main talking points.

 

 

Drugs, neuroscience and society

In a session instigated by the divisive Professor David Nutt, delegates heard about research into cognitive-enhancing drugs. Professor Judy Illes suggested these drugs should be labelled ‘neuro-enablers’ rather than ‘neuro-enhancers’, to focus on their role in improving cognition in those affected by conditions such as Down’s syndrome. Topics debated included: Should we criminalise use in healthy people? Should we allow their use in exams, job interviews etc.? Should we allow them only if declaration of use were compulsory? Would this lead to two-tiered exams – users and non-users?

 

Dr Paul Howard-Jones spoke of ‘neuromyths’ in education. He highlighted the oft-cited theory that children all have their own learning style (visual, kinaesthetic, auditory etc.) as having no scientific basis. ‘Neuromyths’ are routine in the field of education, he said.

 

Professor Emeritus Nicholas Mackintosh described the findings of the Royal Society’s report into neuroscience and the law. The bottom line is that there is very little evidence thus far to suggest that brain scans can be used successfully in a court of law. Only once has neuroscience been used successfully in court.

 

Pain, placebo and consciousness

Professor Irene Tracey gave a fantastic plenary talk on imaging pain in the brain. She gave wonderful insights into how the placebo effect is very real and can be seen in the brain. A placebo can hijack the same systems of the brain that (some) painkillers act on. This could have strong implications for the experimental painkiller vs. placebo set-up of randomised controlled trials.

 

Professor Ed Bullmore spoke about the connectivity of the brain. He described its intricate connections and how some regions are highly connected whilst others have only sparse connections. He noted that in a coma, highly connected regions lose connectivity and sparsely connected regions gain connectivity. This chimes nicely with Giulio Tononi’s work on theories of consciousness.

 

Professor Yves Agid entertained with his animated talk on subconsciousness. He argued that the basal ganglia are crucial for subconscious behaviours. He showed that the basal ganglia become dysfunctional in diseases such as Tourette’s syndrome and Parkinson’s disease – both of which involve disrupted subconscious behaviour.

 

Great science

Professor David Attwell told a wonderful story about glia, the support cells of the brain. Glia are often forgotten when talking about the functions of the brain, but Prof Attwell described fantastic research challenging this neglect. Glia are involved in brain activity, both in health and disease; they regulate the speed of nerve cell communication, and may also be involved in learning and memory.

 

Professor Tim Bliss, one of the pioneers of research into memory formation, spoke about his seminal discovery of long-term potentiation. He recalled the story behind how he and his colleague Terje Lømo discovered one of the mechanisms that mammals use to store long-term memories. He even owned up to falling asleep during the published night-time experiment and failing to jot down the data for a short period! #overlyhonestmethods

 

Professor Anders Björklund gave a public lecture on his life’s work into stem cell therapy to treat Parkinson’s disease. He showed some wonderful results that have really made a difference to patients’ lives. This therapy shows good improvement in ~40% of cases. Work continues into why it does not work in all cases.

 

To follow what else was said during the conference, see #BNAneurofest.

 

By Oliver Freeman @ojfreeman

How Bluetooth could save your life

“The iPhone is great, but what if I wanted to put it in my brain?” John A Rogers from the University of Illinois asked this question at a recent talk about electronics that work inside our bodies. From stretchy electronic devices on the surface of our skin to implanted devices that “talk” to our smartphones; the future of medicine could be getting under your skin.

We are now able to produce silicon circuits which are as flexible as a rubber band and as thin as a temporary tattoo. This means that devices can be stuck onto our skin and left to measure signs of ill health such as temperature, hydration and heartbeat. These devices are particularly useful in neonatal care. But scientists are now taking this technology even further…

Electronic devices that are safe enough to be implanted inside our bodies, and that simply dissolve away when they’ve done their job, are becoming a reality. They are made from silicon and magnesium, which exist naturally in small concentrations inside the body. The innovation that makes these dissolvable devices possible is the development of tiny silicon membranes with imprinted magnesium circuitry. These membranes can be less than a hundred-millionth of a metre (about 10 nanometres) thick and dissolve easily in the slightly alkaline conditions of our blood. Scientists can control the amount of time these devices stick around inside the body by wrapping them in a thin layer of silk protein.

Silk is harmless and dissolvable, so it makes an ideal covering material. Silk fibres (from silkworms) are broken down by boiling in salt water to create a kind of liquid silk that is then used to coat the devices. By altering the processing of the silk protein, it is possible to control how long it will take to dissolve in the body, and hence how long the device will last.

The first gadgets to be produced in this way simply heat up; these can be implanted into wounds or at the site of a bone fracture during surgery. Raising the temperature by just a few degrees at the site of a wound can be enough to kill bacteria and ensure the area remains sterile. Scientists have also made devices that can measure electrical activity within the brain, though so far these have only been tested in animals. The future of this disappearing technology is very exciting, for instance in allowing controlled drug delivery in a particular location.

Another new technology being developed in the world of medical electronics involves wireless communication from inside the body. Scientists have produced a wireless implant that can predict a heart attack. This small chip can be implanted under the skin to detect various substances circulating in the bloodstream, including a molecule called troponin, which is released by heart muscle when it is under the extreme strain that precedes a heart attack. The implant has a radio transmitter that sends signals to a patch outside the body, which can then transmit the data to a smartphone via Bluetooth. The chip is currently being trialled in patients in intensive care, but in the future it could be used by those at high risk of heart problems. Chips like this one could also be adapted to detect other metabolites in the body, so could prove useful for monitoring a wide range of conditions – in diabetes, for instance, accurate and simple monitoring of blood glucose would be extremely useful. The application of Bluetooth to medical devices that operate from inside the body could prove to be a significant step forward in the monitoring of a number of serious conditions.
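On the smartphone side, receiving such data is essentially a matter of subscribing to Bluetooth Low Energy notifications. The sketch below uses the cross-platform Python library bleak; the device address, characteristic UUID and data encoding are entirely hypothetical, since the implant’s actual protocol is not public – only the bleak API calls themselves are real.

```python
# Hypothetical sketch: listen for troponin readings pushed over
# Bluetooth Low Energy. The address, UUID and byte encoding are made up
# for illustration.
import asyncio
from bleak import BleakClient

PATCH_ADDRESS = "AA:BB:CC:DD:EE:FF"                     # hypothetical
TROPONIN_UUID = "12345678-0000-1000-8000-00805f9b34fb"  # hypothetical

def on_reading(sender, data: bytearray):
    value = int.from_bytes(data, "little")  # assumed encoding
    print(f"troponin reading: {value}")

async def main():
    async with BleakClient(PATCH_ADDRESS) as client:
        await client.start_notify(TROPONIN_UUID, on_reading)
        await asyncio.sleep(60)  # collect a minute of readings
        await client.stop_notify(TROPONIN_UUID)

asyncio.run(main())
```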

The future of biomedical devices is looking positive; the application of developments in physics and materials science to medical problems is very exciting. From preventing infection to predicting a heart attack, these devices are likely to save many lives.

Post by: Claire Scofield

News and Views: The importance of vaccination

There has been a story in the news recently about a measles outbreak in Swansea and certain other areas of Wales. The outbreak is attributed to low uptake of the controversial measles, mumps and rubella (MMR) jab among children, and it highlights the troubled relationship between the general public and vaccination.

The drop in the number of children receiving the MMR jab can probably be traced back to a 1998 news story. A paper published in the journal The Lancet claimed there was a link between the MMR jab and cases of autism and bowel disease. The study was led by Andrew Wakefield, a former surgeon, who claimed that instead of the single MMR jab, vaccinations should be administered in three single doses, one for each disease.

However, there were several major problems with the science in the paper. No other scientists could identify the link between the MMR jab and autism that Wakefield and his team claimed. Investigation by the journalist Brian Deer also revealed that Wakefield had a conflict of interest: he was being paid by a law firm trying to prove that the MMR jab was harmful. This should have been declared to The Lancet, but wasn’t, so his motives appeared to be more financial than scientific [1]. Eventually, after a long hearing, Wakefield was struck off the medical register in 2010.

A greater problem has arisen from all this. However dishonestly Wakefield behaved, his original claim was never that “all vaccinations are bad”; he claimed that one particular vaccine had a (now disproven) link to disease. However, it appears that some people have become generally mistrustful of all vaccines and worry that they all cause serious disease. For example, Michele Bachmann, a US congresswoman contending for the Republican nomination for president in 2012, claimed that the HPV vaccine led to mental retardation. This statement was not based on scientific evidence or on any research into the HPV vaccine; she was quoting a parent who had made the claim without any actual evidence. Other people, including celebrities, both here and abroad, have begun to claim links between vaccines and diseases that have never been scientifically proven. This has led to a multitude of preventable illnesses and deaths, because people are unsure about whether to be vaccinated or not.

Can vaccines be harmful? They do sometimes contain “attenuated” (less virulent) versions of the disease-causing microbe to stimulate the immune system. In theory, this could lead to a vaccinated person getting the disease, if the microbe reverts to virulence. However, vaccines are rigorously tested before being administered, so any side effects can be detected and assessed before the vaccine reaches the general population. If the side effects are too severe, or the vaccine is not effective enough, it will not be administered. Occasionally things can go wrong, but the benefit of preventing these diseases generally outweighs the risks of using the vaccine.

The media has made little apparent attempt to rectify the public’s mistrust of vaccines. Whilst the original story about the link between MMR and autism was blasted across the front pages of the national papers, the subsequent retraction of the paper (in 2010) and Wakefield’s dismissal were not as heavily reported. This means that people still have a vague memory that “vaccinations are bad”, and are not being vaccinated, because the story has been so poorly clarified. Unfortunately, this has led to several outbreaks of measles, as well as of other diseases, such as whooping cough, that can be prevented by vaccination.

It is important that as many people get vaccinated as possible. When enough of the population is vaccinated against a certain disease, the spread of that disease is limited. This protects people who have not been, or cannot be, vaccinated. The concept is known as “herd immunity” but, for it to be successful, a large proportion of the population needs to be vaccinated. This proportion is called the “herd immunity threshold” and, for a highly contagious disease like measles, may need to be as high as 95%.
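The threshold comes from a simple piece of epidemiological arithmetic: if one case infects R0 others in a fully susceptible population, the disease stops spreading once fewer than 1 in R0 contacts is susceptible, giving a threshold of 1 − 1/R0. The R0 values in the sketch below are rough, commonly cited textbook estimates, used here only for illustration.

```python
# Herd immunity threshold = 1 - 1/R0, where R0 is the average number of
# people one case infects in a fully susceptible population. R0 values
# are rough textbook estimates, for illustration only.
for disease, r0 in [("measles", 15), ("mumps", 5), ("rubella", 6)]:
    threshold = 1 - 1 / r0
    print(f"{disease}: R0 ~ {r0}, threshold ~ {threshold:.0%}")
# measles: R0 ~ 15, threshold ~ 93% - hence the figure of up to 95%
```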

I’m not suggesting that you should get every vaccine available. However, if you or someone you know is due to have a vaccine and you’re worried, ask your doctor (and get second opinions) about the potential side effects and the importance of the vaccine. It is important to make an informed decision about whether to be vaccinated based on scientific and medical evidence, rather than on hysterical celebrities or a retracted paper.

[1] http://www.bmj.com/content/342/bmj.c5347 and references therein

Post by: Louise Walker

The Superhuman Savants

Savant syndrome is an incredibly rare and extraordinary condition in which individuals with neurological disorders acquire remarkable ‘islands of genius’. What’s more, these ‘superhuman’ savants may be crucial in understanding our own brains. ‘Savant’, derived from the French verb savoir, meaning ‘to know’, describes people whose condition often has a profound impact on their ability to perform simple tasks, like walking or talking, yet who show astonishing skills that far exceed the cognitive capacities of most people in the world. Autistic savants account for 50% of people with savant syndrome, while the other 50% have other forms of developmental disability or brain injury. Quite remarkably, as many as 1 in 10 autistic people show some degree of savant skill.

Kim Peek. Copyright Darold A. Treffert, M.D. and the Wisconsin Medical Society, from WikiCommons

The best-known autistic savant was a character played by Dustin Hoffman in the 1988 film ‘Rain Man’. What few people know is that this character was based on the unbelievable skills of a real-life savant called Kim Peek. Kim Peek suffered from developmental abnormalities that meant he was born with a malformed cerebellum – which lies at the back of the brain and is important for coordinating movement and thoughts – and without the corpus callosum, the sizable stalk of nerve tissue that connects the left and right hemispheres of the brain. Known by friends as ‘Kim-puter’, his astonishing powers of memory fascinated scientists for years. Quite literally, he had a phenomenal capacity to store extraordinary quantities of information in his mental ‘hard drive’. He also had a profound ability to recall information, at close to the speed at which a search engine can scope the internet. By 2009 he had read 9,000 books, all of which he could recite by heart, and he could simultaneously read the left page of a book with his left eye and the right page with his right eye. What seems quite unbelievable is that at the age of 58 he was still unable to perform simple everyday tasks such as buttoning his clothes. He could not comprehend simple proverbs and struggled greatly in social situations, yet he is considered one of the most powerfully gifted savants of all time.

Considering the vast repertoire of human ability, it is fascinating that savant skills mostly occur in a narrow range of just five specific categories:

1. Music

Leslie Lemke was born with cerebral palsy and brain damage, and was diagnosed with a rare condition that forced doctors to remove his eyes. Leslie was severely disabled: throughout his childhood he could not talk or move. He had to be force-fed in order to teach him how to swallow and he did not learn to stand until he was 12. Then one night, when he was 16 years old, his mother woke up to the sound of Leslie playing Tchaikovsky’s Piano Concerto No. 1. Leslie, who had no classical music training, was playing the piece flawlessly after hearing it just once earlier on the television. Despite being blind and severely disabled, Leslie showcased his remarkable piano skills in concerts to sell-out crowds around the world for many years.

2. Art

Stephen Wiltshire was diagnosed as mute and severely autistic at an early age. Despite having no language or communication skills, at the age of 7 he began the first of many masterful, highly detailed architectural drawings of cityscapes, remarkable for their accuracy. Known as the ‘Human Camera’, Stephen can draw these landscapes after observing them only briefly. In 2005, Stephen completed an accurate 10 m-long panoramic drawing of the Tokyo skyline from memory, after just one short helicopter ride.

3. Calendar calculating

George and Charles Finn, known as the ‘Bronx Calendar Twins’, were both autistic savants. Their particular skill was being able to name the day of the week for any date in the past or the future. This talent extended so far that they could accurately calculate any day 40,000 years backwards or forwards (a sketch of one standard day-of-week formula follows this list).

4. Mathematics

The first documented savant, described in 1789, was Thomas Fuller, who was severely mentally handicapped but had unbelievably rapid mental calculating abilities. When asked how many seconds a man had lived who was 70 years, 17 days and 12 hours old, he gave the correct answer of 2,210,500,800 in 90 seconds, even correcting for the 17 leap years included (the sketch after this list checks the arithmetic).

5. Mechanical or Spatial Skills

Ellen Boudreaux, despite being blind and autistic, could navigate her way around without ever bumping into things. As she walks, Ellen uses echolocation: she makes chirping noises that bounce off objects in her path, and she detects the reflected sound, a bit like human sonar.
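For the curious, two of these feats can be checked mechanically. The sketch below uses Zeller’s congruence, a standard day-of-week formula (nobody knows whether the twins used anything like it), and then repeats Thomas Fuller’s seconds calculation.

```python
# Zeller's congruence: the day of the week for any Gregorian date.
def day_of_week(year: int, month: int, day: int) -> str:
    if month < 3:        # January/February are counted as months
        month += 12      # 13 and 14 of the previous year
        year -= 1
    k, j = year % 100, year // 100
    h = (day + 13 * (month + 1) // 5 + k + k // 4 + j // 4 + 5 * j) % 7
    return ["Saturday", "Sunday", "Monday", "Tuesday",
            "Wednesday", "Thursday", "Friday"][h]

print(day_of_week(2000, 1, 1))  # Saturday

# Thomas Fuller's sum: seconds in 70 years, 17 days and 12 hours,
# counting the 17 leap days.
days = 70 * 365 + 17 + 17                     # years + leap days + 17 days
seconds = days * 24 * 60 * 60 + 12 * 60 * 60  # add the final 12 hours
print(seconds)                                # 2210500800 - Fuller's answer
```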

Interestingly, savant syndrome is four times more likely to occur in men than in women. This intriguing difference has sparked much interest in the scientific community, leading to the ‘right compensation theory’ of savant ability. It appears that during foetal development, the left hemisphere of the brain develops slightly more slowly than the right, and is thus exposed to detrimental influences for longer. High levels of circulating testosterone make the male foetus more susceptible to damage, because this sex hormone can impair neuronal function and delay growth of the vulnerable hemisphere. It has been proposed that the right hemisphere may then compensate for this impaired growth by overdeveloping. So while savants may not be able to walk or talk, skill development on the other side of the brain is highly advanced, which may lead to these amazing ‘superhuman’ skills. Left-hemisphere damage is often seen in autistic patients, so this theory of ‘left damage/right compensation’ may explain how the savant brain develops differently from others’. Although the theory seems credible, the highly diverse nature of savant syndrome means that no single hypothesis can explain every case.

What is important to consider is that not all savants have developmental neurological disorders; the syndrome sometimes emerges as a consequence of severe brain injury. Orlando Serrell is an ‘acquired savant’ who, at 10 years old, was struck violently on the left-hand side of his head by a baseball. Following the incident, Orlando suddenly exhibited astonishingly complex calendar-calculating abilities and could accurately recall the weather on every day since the accident. Orlando’s case and others like it imply the intriguing possibility that a hidden potential for astonishing skills or prodigious memory exists within all of us, expressed as a consequence of complex and unknown triggers in our environment. The prospect of dormant ‘superhuman’ gifts is a much-debated topic, and may have a whole range of implications for the future.

These examples are just a few of the thousands of savants with autism and other neurological disorders living in the world today. While the anatomical and psychological evidence would seem to argue against the development of such skills, the reality of the syndrome challenges our modern understanding of ‘normal’ brain functioning. Until we can establish how savant skills emerge, it is difficult to be confident that any proposed model of human cognition and memory is a reliable representation of neurological behaviour.

By Isabelle Abbey-Vital

Fighting jet lag – a simple case of wearing more layers?

Pioneering research has found that one of the best ways to beat jet lag may be by wearing more layers, sitting by a fire and having plenty of cups of tea. Scientists have found that our biological clocks are driven not only by light, but also by our body heat.

Imagine you’ve been on a relaxing holiday. You’ve done nothing more than catch some sun, top up your tan and sip cocktails on the beach. Why, despite the relaxing nature of your holiday, do you return feeling more tired and fatigued than when you went? It is all to do with jet lag.

After a long-haul flight that crosses many time zones, you can feel excessively tired and nauseous, with poor concentration and memory. Usually, the more time zones you cross, the more severe these symptoms become, and the longer the flight, the longer it takes to recover.

So why do we get jet lag?

We suffer from jet lag because of disruptions to our internal body clock which regulates things called circadian rhythms. These rhythms control many of our bodily functions and behaviours such as body temperature, appetite, hormone release and sleep patterns. They are controlled by a part of the brain called the SCN – the suprachiasmatic nucleus, located just above the roof of our mouths.

Our body clock is synchronised to our environment by light, which signals to our brain what time of day it is. During long-haul travel, the cells in the brain’s ‘body clock’ become confused by the change in light and act out of sync with each other. This is the point at which we experience the symptoms associated with jet lag.

Scientists have known about jet lag for a long time, but we know little about how to treat it successfully. If you look on the internet you can find numerous sites giving tips on how to beat jet lag, or at least improve the symptoms. In my own experience, every time I’ve travelled to America and tried some of these, they have rarely made much difference.

If you want to avoid jet lag, the advice is to establish a new routine so that you eat and sleep according to the time zone you’re in, to avoid napping during the day, and to get as much natural light as possible. Research has shown that exposure to light during the evening delays our body clock, meaning our body’s rhythms move later in the day. If we are exposed to light during the early morning, our clock is advanced and our rhythms start earlier in the day.
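The direction of these shifts can be summarised as a crude rule of thumb. The sketch below is a toy caricature of a phase-response curve, with invented magnitudes; real human phase-response curves are continuous and measured experimentally.

```python
# Toy phase-response rule for a pulse of bright light, clock hours 0-24.
# Magnitudes are invented; only the directions follow the text above:
# evening light delays the clock, early-morning light advances it.
def phase_shift(light_hour: float) -> float:
    """Approximate clock shift in hours. Negative = delay (rhythms move
    later), positive = advance (rhythms move earlier)."""
    if 18 <= light_hour < 24:
        return -1.5   # evening light -> delay
    if 5 <= light_hour < 10:
        return +1.0   # early-morning light -> advance
    return 0.0        # midday light has comparatively little effect

print(phase_shift(22))  # -1.5: bright light at 10 pm pushes rhythms later
print(phase_shift(7))   # +1.0: bright light at 7 am pulls rhythms earlier
```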

This stuff is all pretty old news. The link between the circadian clock and temperature is, on the other hand, altogether more remarkable. Scientists have found plenty of evidence that our biological clocks are also driven by body heat. Fruit flies exposed to drastic changes in temperature exhibited changes to their body clock: researchers found that cells at the back of the brain, called ‘dorsal clock cells’, were important in synchronising the body clock at warmer temperatures, while cells at the front of the brain, ‘ventral clock cells’, synchronised the clock at cooler temperatures.

These findings may be key in helping us defeat jet lag by easing our body clock back to its status quo. It may be as simple as piling on layers of chunky jumpers, scarves and hats if you are coming from somewhere blisteringly hot and being plunged into a cold climate, and vice versa: stripping down to as little clothing as possible may help battle jet lag when returning from somewhere cold. It’s all about easing our bodies back into their normal routine, not plunging straight in at the deep end.

Post by: Samantha Lawrence