Been there done that…or have I? Understanding the phenomenon of Déjà vu

Have you ever experienced that overwhelming sense of familiarity with a place or situation, when it shouldn’t be familiar at all? For example, have you visited a restaurant in a city that you’ve never been to before and had this strange sense that you’ve been there even though you know for sure you haven’t?

This sensation is known as déjà vu which, when translated literally from French, means ‘already seen’.  It is also surprisingly common – around 70% of the population report these feelings, with most reports coming from those aged 15-25.

Déjà vu occurs randomly and with no prior warning of its onset. Because of this unpredictability, déjà vu is hard to study and, therefore, poorly understood – unfortunately, scientists in white coats holding clipboards aren’t usually waiting around to attach electrodes to you when you experience it.

The earliest reports of this phenomenon date as far back as 1876, when Émile Boirac coined the term déjà vu. Psychics were quick to latch on to the phenomenon as evidence that we had all lived past lives, explaining that these strange feelings of familiarity in unfamiliar situations came from things we had encountered in our previous lives. However, more scientific reasoning soon began to gain credibility.

Whilst déjà vu is reported in healthy individuals, there also appears to be a strong connection with epilepsy. In fact, many of the earliest reports of déjà vu came from patients with epilepsy. These unusual experiences were thought to be linked to seizure activity in the medial temporal lobe – a part of the brain involved in sensory perception, speech and memory association. During seizures, neurons in this region were found to be ‘mis-firing’ and sending confusing messages to the brain and body.

We now know that a subset of epileptic patients regularly have bouts of déjà vu before a seizure. These seizures are evoked by changes in the brain’s electrical activity, which cause over-excitation to spread across multiple regions – much like a tsunami rippling out from the epicentre of an underwater earthquake. It is these electrical disturbances that create the feeling of déjà vu. Epileptic patients may therefore hold the key to uncovering the origins of déjà vu – although the precise mechanisms likely differ from those in healthy individuals.

However, the precise mechanisms responsible for déjà vu in healthy individuals remain highly elusive.

One theory is that the phenomenon has links to regions of the brain involved in recognising familiar objects and recalling memories. During an episode of déjà vu it is thought that these parts of the brain ‘mis-fire’ and produce the feeling of things being familiar when they actually aren’t.

Another theory relates to errors in memory processing. Usually, when a memory is processed, the new experience is first transferred to our short-term memory and then, at a later stage, separately consolidated into long-term memory. During episodes of déjà vu it is thought that a novel situation bypasses the part of the brain that processes short-term memory and is instead committed directly to long-term memory. A novel experience may then feel familiar when in fact it is not, and the ‘memory’ you think you are recalling is actually the present scene.

Despite being difficult to reproduce, scientists at Colorado State University think they may have replicated an experience similar to déjà vu in the lab. The study, headed by cognitive psychologist Anne Cleary, set out to identify whether déjà vu could be induced in participants exploring a virtual world. Subjects were given a head-mounted video screen to wear, which displayed 128 pictures of different villages that were split into pairs. Unbeknownst to the subjects, objects were positioned in the same place across the pairs to create a similar layout but with minor differences. The findings suggested that déjà vu occurred most often when the layout of a new scene was very similar to a previous scene, but not so similar that the subjects were able to recognise the new scene as resembling the old one. In simple terms, déjà vu can occur when there are subconscious similarities between an old event and a new one, but the difference between them is enough that you don’t consciously recognise them as being similar.

Memories are stored in a region of the brain known as the hippocampus, where memory traces are held within groups of cells that have strong links with one another. Similar memories, such as sitting in a bar drinking a pint of lemonade and sitting in a bar drinking a glass of water a week later, are often stored across overlapping groups of cells. The brain needs a way to differentiate between these similar events, and uses a process known as pattern separation.

Scientists studying pattern separation manipulated a gene in mice which they believed to be linked to this process. These mice were placed in a box and given a small foot-shock, which made them freeze. The mice were then guided into another box but, this time, didn’t receive a shock. Mice with an intact pattern separation gene froze in the ‘safe’ box and took a while to figure out that they would not receive another shock. However, mice without the gene figured this out more quickly. Some scientists think this circuit can be used to explain déjà vu: when the pattern separation circuit misfires, the ability to separate new experiences from similar past experiences is lost, giving the feeling of déjà vu.

Déjà vu has also been linked to levels of the neurotransmitter dopamine, although research into this is sparse. The link was suggested after a healthy, middle-aged doctor was prescribed drugs known to increase the activity of dopamine in the brain. After beginning his course of treatment, the doctor had recurrent episodes of déjà vu, which subsequently disappeared once he stopped taking the drugs. Despite this apparently obvious link, little more is known about the role of dopamine during bouts of déjà vu.

Although early reports of déjà vu date as far back as the late 1800s, we still know relatively little about exactly how and why it happens. Theories range from far-fetched ideas based on psychic and spiritual origins to ideas that explain déjà vu as errors in the memory-making pathway. However, to elucidate the exact mechanisms, much more research is still needed.

Post by: Sam Lawrence

The open access debate: Should we pay for knowledge?

One of the bigger issues facing researchers today is how to access scientific information. A lot of research is published in restricted-access journals, where the information is hidden behind a paywall. But many scientists feel that this should not be the case and that all research should be accessible to anyone who needs it.

I’m going to start this post with a confession. Whilst I knew that the ‘open access’ debate was rife amongst the online scientific community, particularly on Twitter, I never really paid it much attention. The reason for this was that if there was a paper I wanted to read I just popped my university username and password into the publisher’s website and downloaded the article. I never thought about whether this information was open access or not.

The principle of open access is that scientific content should be freely available to everyone and can be read immediately online with full re-use rights (with correct attribution). However, many scientific journals are closed access, meaning that a fee must be paid in order to read a particular article.

I became aware that there was a problem with this restricted access when friends who worked at different universities complained that they couldn’t access certain articles that were important for their research. I began to realise that, whilst my university had paid for very thorough access, not everyone’s did. Amazingly, this subscription system was actually preventing researchers from accessing information that could be crucial to their research.

I have now left academia but still have an active interest in the world of research. However, since my graduation, my access to scientific journals has been revoked and I have now found the door to scientific knowledge slammed in my face.

It came as something of a surprise to me when I started my undergraduate degree that universities have to pay subscription fees to access certain journals. This includes what are considered the ‘gold standard’ journals – Science, Nature and Cell – published by AAAS, Nature Publishing Group and Cell Press respectively. The prices paid for these subscriptions are staggering. My alma mater, the University of Manchester, states on its website that it is currently spending £4.5 million a year on these subscriptions.

Beyond the lab, the wider importance of open access was brought home to me recently when I was chatting to someone who had read about a “new cure” for a previously incurable disease. When I asked how they had come across this information, the reply was “I found it on the internet”. I tried to gently tell them that the “cure” in question was not currently backed up by scientific research. However, my scepticism was immediately shot down by the reply, “Well, how can I see this scientific information?”

Here is the crux of the matter. I feel that people should be able to access the information that they need. If this person could find plenty of non-scientific articles proposing miracle cures, surely they should also be able to find the primary scientific literature to determine whether these articles reflect the actual research?

It does appear that the publishing scene is slowly changing. There are now a number of publishers who proudly declare themselves as open access. These include the Public Library of Science (PLoS) and BioMed Central. Another open-access publisher is eLife, which counts Nobel Prize winner Randy Schekman amongst its editors. Prof Schekman is outspoken about the need for open access, writing in the Guardian that “it is the quality of the science, not the journal’s brand, that matters”.

One of the arguments against open access is that journals obviously have to make money. However, journals also make money by charging the authors of scientific papers to publish in them; it can cost authors thousands of pounds to publish an article in a high-impact scientific journal. Another concern is that open access may erode the quality of scientific publishing and of science in general. Whether these concerns are founded remains to be seen.

To get around the cost issue, open-access journals charge authors for publication – BioMed Central has an article processing charge of £750-£1,520 per article, depending on the journal. One advantage of these open-access publishers is that articles are published instantly online; the lack of printing costs should therefore keep the journal’s overheads down.

Some closed-access journals are now responding to the increased pressure to make their articles freely available. AAAS has announced a new open-access journal called Science Advances. However, this move has provoked unhappiness amongst open-access advocates for two reasons. Firstly, many scientists balk at the fact that AAAS plans to charge authors a steep £3,300 for certain extras, such as a CC BY licence (which allows for full re-use of papers and is required by Research Councils UK for the researchers it funds). There is also a surcharge if the article is over 10 pages long. The other reason for the dismay of the open-access community is the appointment of Kent Anderson as the journal’s publisher; Anderson is at odds with PLoS co-founder Michael Eisen over the benefits of open-access publishing. These concerns have prompted over 100 scientists to publish an open letter to AAAS, asking them to remove the extra charges.

So, should all research be open access? I truly believe that science at its very heart should be free to anyone who wants to use it, be they researchers or interested members of the public. The shift towards open access is encouraging, and hopefully someday the big journals will understand that everyone, not just academics at rich universities, should have the right to see any scientific research that interests them.

Post by: Louise Walker

Nudge: how science is being used to influence our behaviour

Do you ever feel you are being influenced by things beyond your control? Well, you’re not alone. In 2010 the UK government put together a special unit (the Behavioural Insights Team, AKA the Nudge Unit) dedicated to using insights from behavioural economics and psychology to influence our behaviour.

Although the Nudge Unit may sound like something from a bleak dystopian future, where our every action is monitored and controlled, it’s best not to judge the idea too hastily. So, let’s take a minute to get acquainted with the ‘nudge’…

The idea behind the nudge stems from a simple fact about human behaviour: ‘no matter how smart a person is, many of the basic choices they make on a day-to-day basis will be purely impulsive with little or no logical basis‘. This may sound unusual, but if you think about it, it actually makes sense. Could you imagine how hard life would be if every mundane daily decision required deep contemplation? You’d probably never even make it out of bed in the morning!

Scientists believe that our brains accomplish tasks by relying on two different systems, or modes of thinking. System-one is a bit of an airhead: fast, automatic and emotional. System-two, on the other hand, is like your inner professor: slow, ruminating and logical. It’s no secret that when it comes to important decisions, system-two is your best bet. But we don’t always have the time or resources to engage this system, meaning that many of our everyday decisions are actually made ‘on the fly’ by system-one. To test this hypothesis, try answering the following question:


A bat and ball cost £1.10. The bat costs £1 more than the ball. How much does the ball cost?

Can you hear system-one shouting out the answer ’10p’? This answer may instinctively feel correct, but it can’t be right: a 10p ball would make the bat £1.10 and the total £1.20. With a bit of extra thought it’s easy to come to the correct answer of ‘5p’.
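If you want to see system-two’s working spelled out, here it is as a few lines of Python (a purely illustrative sketch):

```python
# The bat-and-ball problem, solved the slow, system-two way.
# Constraints: bat + ball = 1.10 and bat = ball + 1.00.
# Substituting: (ball + 1.00) + ball = 1.10  ->  2 * ball = 0.10.
ball = (1.10 - 1.00) / 2
bat = ball + 1.00
print(f"ball = {ball * 100:.0f}p, bat = £{bat:.2f}")  # ball = 5p, bat = £1.05
```

For more examples of the system-one/system-two divide, see the video below: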

 

Yes, poor impulsive system-one has many flaws. It is heavily swayed by social pressure, easily tricked, and has a tendency to favour short-term pleasure over long-term success. And with these flaws comes a certain level of predictability. It is this predictability that is now being utilised by the government’s Nudge Unit to influence our behaviour.

In the 2008 book Nudge: Improving Decisions About Health, Wealth and Happiness, behavioural scientists Richard Thaler and Cass Sunstein of the University of Chicago laid out guidelines on how to apply behavioural nudges to policy. Now, six years on, concepts from this work are being used across the world to influence everything from tax fraud to antisocial bathroom habits.

Here are a couple of examples:

Schiphol flies:

Authorities at Schiphol airport in Amsterdam were at a loss over excessive cleaning bills in their male toilets – where patrons seemed to hit everything but the urinal. However, economist Aad Kieboom had a solution. Rather than posting signs asking patrons to improve their aim, he suggested that the airport etch a small picture of a fly into each urinal. This unusual solution worked by giving men something to aim at, and reportedly reduced the airport’s lavatory cleaning bill by 80%. It is also arguably the most celebrated example of a nudge (a strategy for changing human behaviour based on an understanding of what real people are like).

Manchester tax dodgers:

In a recent document, the UK’s Nudge Unit discusses how behavioural insights can be applied to reduce fraud, error and debt. Indeed, even our own fair city has begun to participate in nudge politics. In 2011, Manchester residents claiming single person discount on their council tax were randomly sent one of three different letters asking them to fill out a form to renew their claim. The first was a standard document commonly used by the council; the other two, however, used nudges in an attempt to encourage honesty. These nudges were pretty simple, including simplified language, clear messages and a reminder that providing false information is an act of fraud. Amazingly, the study suggests that simply re-wording these forms did indeed lead to a reduction in the number of fraudulent claims.

So, our impulsive system-one certainly seems susceptible to the odd nudge, but many questions remain. For example, which nudges work best? Has anyone spotted the motorway signs stating ‘Bin your litter, other people do’? This sign is based on the theory that people are more likely to comply if they think that complying is a social norm. Personally, I find this particular nudge a bit condescending. OK, so I’m yet to throw litter out of my car window just to make a point, but I also don’t feel compelled to comply. Also, when does a nudge become a shove, and who decides the best direction to nudge people in? These are all important questions that need some serious thought. But overall I think the nudge is certainly an interesting concept, and one that could offer more insights into human behaviour.

What are your views? Has anyone spotted any more hidden nudges? Add your comment below, other people do!


Post by: Sarah Fox

Neuroinformatics: scary stuff.

At the University there are always talks and lectures happening across campus and this year I have successfully managed to sneak into some quite intellectual (and generally confusing!) talks explaining new research.

Recently I attended a lecture on ‘Neuroinformatics’ by Dan Goodman, a researcher at Harvard Medical School and creator of Brian, a computer program that simulates neural behaviour.

Neuroscience, just like modern technology, is getting awfully fiddly. Long gone are the days when relatively simple discoveries, such as Galvani’s seminal observation that a frog’s muscle twitches when a current is passed through it, were viewed as groundbreaking (Galvani, 1700s). This discovery opened the door for further work exploring the role of electricity in nerve cell communication – a field which took neuroscience research to a new level of complexity.

Thanks to work by Neher and Sakmann in the 1970s developing the patch clamp technique, we can now study the electrical activity of single brain cells (neurons), probing their electrophysiological (think electrical and living) properties one cell at a time.

We now know that neurons communicate with one another through a special cellular language involving brief fluctuations in electrophysiological activity (known as spikes), which look something like the image below.

Approximate plot of a typical action potential, showing its various phases as the action potential passes a point on a cell membrane. The membrane potential starts at -70 mV at time zero. A stimulus applied at time = 1 ms raises the membrane potential above -55 mV (the threshold potential), after which it rapidly rises to a peak of +40 mV at time = 2 ms. Just as quickly, the potential then drops and overshoots to -90 mV at time = 3 ms, and finally the resting potential of -70 mV is re-established at time = 5 ms. Schematic of an action potential (Wikipedia).
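In fact, the caption gives you all the numbers you need to sketch the curve yourself. Here is a rough, purely illustrative re-creation in Python, simply joining the caption’s key points with straight lines:

```python
import numpy as np
import matplotlib.pyplot as plt

# Key (time, voltage) points taken straight from the caption above:
# rest at -70 mV, threshold crossed after the 1 ms stimulus, peak of
# +40 mV at 2 ms, overshoot to -90 mV at 3 ms, back to rest by 5 ms.
t_pts = [0, 1, 2, 3, 5, 6]               # time in ms
v_pts = [-70, -55, 40, -90, -70, -70]    # membrane potential in mV

t = np.linspace(0, 6, 300)
v = np.interp(t, t_pts, v_pts)           # simple linear interpolation

plt.plot(t, v)
plt.axhline(-55, linestyle='--', color='grey', label='threshold (-55 mV)')
plt.xlabel('time (ms)')
plt.ylabel('membrane potential (mV)')
plt.title('Schematic action potential')
plt.legend()
plt.show()
```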

However, the brain is much more than single cells and their associated spikes: it is made up of many neurons, all interacting with and influencing each other. Studying the behaviour of single neurons is therefore a rather long, laborious process that, although informative, cannot tell us how the brain functions as a whole. This problem could be solved if we could study the human brain directly, but, as you can imagine, it is a little hard to get willing human volunteers (see left).

Typical neuroscientist at the end of a long week

Luckily, scientists such as Alan Turing toyed with the relationship between computers, mathematics and the brain, developing the idea of the ‘human computer’. Advances in technology mean that scientists can now study many neural interactions simultaneously. But, as the experiments grow in complexity, so does the amount of data to analyse… (leading to many, many sleep-deprived scientists – also see left).

So, computer scientists and mathematicians, armed with knowledge of models and computational methods, rose to the neuroscientist’s aid. Thus, the interdisciplinary age began.

Dan argued that complicated algorithms bring some neuroscientists out in cold sweats, so he set out to create easy, user-friendly software – and hence Brian was born. I could feel the itchy feet of one researcher behind me, dying to challenge the speaker to an intellectual duel (scientists are quite territorial over their fields of expertise and occasionally I have to resist the urge to stand up with a bell and shout “round 1”). The antagonist, in this case, was somewhat sceptical about Brian and its benefits over SpiNNaker, another platform designed to simulate brain circuits. Words that I would have needed to google beforehand, such as ‘GPU’, ‘Jinja’ and ‘NumPy’, were thrown around the room, and I realised that I agreed with Dan – after a 3-year Neuroscience degree, the only word I understood was ‘Android’, and that is because of my phone!

At the end of the talk, he ran a demo of Brian to show how it imitates neuronal behaviour at both the network and single-cell level. This is important because if we can create a human brain in a computer, we can manipulate it to see how diseases such as dementia occur, giving us a sneaky bit more insight into how to tackle these problems.
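For a flavour of what Brian looks like in practice, here is a minimal sketch along the lines of the Brian 2 tutorials – a single leaky integrate-and-fire neuron (the parameter values here are illustrative, not taken from the talk):

```python
from brian2 import *

# One leaky integrate-and-fire neuron: the membrane variable v drifts
# towards 1 with time constant tau; when it crosses the threshold the
# neuron 'spikes' and v is reset to 0.
tau = 10*ms
eqs = 'dv/dt = (1 - v) / tau : 1'

neuron = NeuronGroup(1, eqs, threshold='v > 0.8', reset='v = 0', method='exact')
spikes = SpikeMonitor(neuron)

run(100*ms)
print(f'{spikes.count[0]} spikes in 100 ms')
```

The point is that the model is written as plain differential equations rather than low-level code – exactly the kind of user-friendliness Dan was arguing for.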

Afterwards, several things dawned on me: specifically, the complexity of neuroscience analysis and how important it is to be a Jack-of-all-trades in the research industry, as well as having an open mind and being a little nosy when it comes to areas of research outside your own comfort zone – you never know what you might need to know these days.

Post by: Clare McCullagh

The Brain on Tetris

You’re probably all too familiar with Tetris as a procrastination tool – but did you know about its far more reputable role in psychological research?

If you’ve ever played Tetris for a while, you may have noticed its lingering effects – such as daydreaming of objects in the room slotting together. If this sounds vaguely familiar, the diagnosis is (I kid you not): Tetris Syndrome.

That this is indeed a real phenomenon is backed by the fact that it makes a respectably lengthy appearance on Wikipedia. According to said source, it “occurs when people devote sufficient time and attention to an activity that it begins to overshadow their thoughts, mental images and dreams”.

Thus Tetris not only occupies your mind during the task itself, but seems to form a lasting impression on the brain. This has led to its use in psychology, helping to understand various aspects of how our brains work.


Tetris and skill

For one, Tetris has helped shine some light on what happens in the brain when our skills develop. A study in 2009 showed that, over the course of a month of playing Tetris, brain areas linked to playing the game gradually reduced their use of glucose (the brain’s natural fuel) while skill levels continuously improved. This means that, despite our brains appearing less engaged, they’re doing a better job. The conclusion: greater skill comes from a more energy-efficient brain.

Tetris to prevent PTSD

That Tetris can help us understand skill acquisition isn’t too surprising… but what about its use as a treatment for post-traumatic stress disorder (PTSD)? Researchers at Oxford University showed that volunteers who played Tetris after watching a truly traumatic video reported half as many flashbacks over the next few days as those who did a trivia quiz. The team’s explanation is that the high cognitive demands of Tetris prevent the traumatic memory from ‘settling in’. As it takes around 6 hours for memories to enter a more long-term state, this treatment has a very limited time-window in which to work. It essentially means we’d need Tetris arcades set up in warzones – which seems somewhat questionable if you ask me.

The Tetris diet?

OK, so maybe Tetris isn’t the thing to play during exam time then. But maybe it’s worth a go when you’re feeling a bit peckish: it turns out Tetris has the potential to reduce cravings. A group of individuals who were asked about their cravings were split into two groups: one that played Tetris and one that was left to stare at the loading screen (heartless, I know). The players got over their cravings whereas the control group didn’t. Tetris: the new diet? The media definitely took it that way.

 

Dreams of Tetris

Tetris has also been moonlighting in sleep research. A curious study from Harvard University investigated what happens to the dozing brain after playing Tetris for a ridiculously long period of time. In addition to the usual healthy average Joes and Janes, the study included a few amnesics, as well as a selection of “Tetris experts” (there’s actually a global ranking system for Tetris professionals).

 


Across three days the ‘experimentees’ played a total of 7 hours of Tetris and were asked to describe what they saw when drifting off to sleep each night. The “Tetris experts” could hear music (the famous Russian Tetris theme, Korobeiniki) and see colours from versions they’d played years before, while the amnesics were pretty confused, as they didn’t have a clue what Tetris was, nor why some strange person in a lab coat was sitting in their bedroom. Yet even they described geometric shapes falling from the sky and slotting into spaces.

Besides showing that amnesics actually can form visual memories, this study seems to suggest that our daydreams and dozing thoughts serve a purpose – perhaps a kind of subconscious training and integration of old and newly learnt abilities.

So, far from being just some trivial game that you cannot actually ever win (think about it…), its power to occupy and sway the mind has actually made Tetris an extremely fascinating research tool.

 

Post by: Isabel Hutchison