Meek no more: turning mice into predators

A recent study published in the journal Cell has shown that switching on a particular group of neurons in the mouse brain can turn these otherwise timid creatures into aggressive predators. Why would anyone want to do this, you might ask? After all, with the tumultuous political events of 2016, do we really want to add killer mice to our worries? Thankfully, the researchers aren’t planning to take over the world one rodent at a time; instead, they want to understand how the brain coordinates the complex motor patterns associated with hunting.

During the evolution of vertebrates, the morphology of the head changed to allow for an articulated jaw. This is a more scientific way of describing the type of jaw most of us are familiar with: an opposable bone at the entrance of the mouth that can be used to grasp and manipulate food. This anatomical change allowed for the development of active hunting strategies and the associated neural networks to coordinate such behaviours. The researchers wanted to identify which parts of the brain contain the networks for critical hunting behaviours such as prey pursuit and biting. They began by looking at an evolutionarily old part of the brain known as the amygdala, specifically the central nucleus of the amygdala (CeA), because this area has been shown to increase its activity during hunting and has connections to parts of the brainstem controlling the head and face.

In order to study this part of the brain, the authors used a technique called optogenetics. This involves introducing the gene for a light-sensitive ion channel into specific neurons. It is then possible to ‘switch on’ those neurons (i.e. cause them to fire bursts of electrical activity) simply by shining blue light onto them. This is what the researchers did with the neurons in the CeA.

To begin with, the researchers wanted to find out what happens when you simply switch on these neurons. To test this they put small moving toys resembling crickets into the cage as ‘artificial prey’ and watched the animals’ behaviour. The mice were largely indifferent to these non-edible ‘prey’; however, as soon as the light was switched on the mice adopted a characteristic hunting posture, seized the toys, and bit them. This never occurred when the light was off. The scientists also tested the mice with live crickets (i.e. prey that mice would naturally hunt). Without light activation, the mice hunted live prey as normal. However, when the light was switched on, the researchers saw that the time needed for the mice to capture and subdue their prey was much shorter, and any captured crickets were immediately eaten. Together, these results suggest that stimulation of the CeA not only mimicked natural hunting but intensified the predatory behaviour of these mice.

One question that might spring to mind from this study is: how do we know that these mice are really hunting? Perhaps the light had unintended effects, such as making the mice particularly aggressive or very hungry? After all, both explanations could account for the increased biting of non-edible objects and the faster, more aggressive cricket hunting. To argue against increased aggression, the authors point out that CeA stimulation did not provoke more attacks on other mice – something you might expect of an overly aggressive mouse. So what about increased hunger? The scientists also think this is unlikely, because they allowed the mice access to food pellets and found no difference in how many pellets were consumed while the laser was on versus off.

So how is hunting behaviour controlled by the CeA? The hunting behaviour displayed by mice can be divided into two aspects: locomotion (prey pursuit and capture) and the coordination of craniofacial muscles for the delivery of a killing bite. The scientists hypothesised that the CeA may mediate these two different types of behaviour through connections with different parts of the brain. The two key brain regions investigated in this study were the parvocellular region of the reticular formation in the brainstem (PCRt) and a region of the midbrain called the periaqueductal grey (PAG).

By using optogenetics, the researchers were able to selectively stimulate the CeA-to-PCRt projection and found that this caused the mice to display feeding behaviours. Interestingly, stimulating this pathway seemed only to elicit the motor aspects of eating (e.g. chewing) rather than increasing the mice’s hunger. Conversely, disrupting the function of this pathway interfered with the mice’s ability to eat. When this pathway was disrupted during live hunting, the mice could still pursue their prey and subdue it using their forepaws, but they struggled to deliver a killing bite. The researchers then turned their attention to the pathway between the CeA and the PAG. They found that stimulating this projection caused mice to start hunting more quickly, pursue their prey faster, and hunt for longer. Unlike the experiment above, stimulating this pathway had no effect on feeding-type behaviours. Now the scientists geared up for the big experiment: they had shown that stimulating the CeA leads to predatory hunting, and that biting and pursuit seem to be controlled by different pathways from the CeA. So they decided to see whether activating both pathways simultaneously (CeA to PCRt and CeA to PAG) could mimic the effects of stimulating the CeA itself. Indeed, they found that stimulating these two pathways together led the mice to robustly initiate attacks on artificial prey.

So what can we learn from this study? The scientists have demonstrated that the CeA acts as a command system, coordinating the key behaviours of efficient hunting via two independent pathways. However, some key questions remain: for example, what determines whether the CeA sends those commands? The scientists hypothesise that cues such as the sight or smell of prey might cause the CeA to respond and send the command to elicit the appropriate motor actions; however, the current study cannot prove this.

Despite these limitations, this paper is a great example of how scientists can use cutting-edge tools, like optogenetics, to tease apart the brain pathways responsible for different aspects of a complex behaviour such as hunting.

Post by: Michaela Loft


Are you my type? The fascinating world of blood types

With our 30th birthdays on the horizon, my best friend and I decided to make a bucket list. As the months ticked by, one thing on my list stood out – blood donation. I must admit I feel slightly ashamed that, peering over the horizon of the big 30, I’m yet to give blood, especially since both my mum and maternal grandmother donated so regularly in their youth that they were given awards. My grandmother always told me she felt compelled to donate since she had a relatively rare blood type*. She also told me that the reason my mother was an only child was an incompatibility between my mother’s blood, which was positive for a blood factor, and her own, which was negative (https://en.wikipedia.org/wiki/Rh_disease). This got me thinking – how many blood types are there, and why do they exist? How is it that the blood flowing through my veins and yours can be made of the same basic elements and yet be so different?

All human blood consists of the same fundamental components – red blood cells, white blood cells, plasma and platelets. Of these, it is the red blood cells that give blood its identity or ‘type’. The surface of red blood cells is covered with molecules, and the presence or absence of particular molecules determines which blood group you belong to. The principal blood grouping system used for humans is the ABO system, which categorizes people into one of four groups – A, B, AB or O – on the basis of the presence or absence of two antigens. An antigen is a molecule capable of producing an immune response when it does not originate from within your own body. People with blood type A have the A antigen on their red cells, whilst type B people have the B antigen. If you belong to group AB, your red cells carry both A and B antigens, and if you are group O you have neither. This is why, for example, a person with type B blood cannot receive blood from a type A donor: the A antigens on the donor’s red cells would be foreign to the recipient’s body and would trigger an immune response. This basic grouping can be expanded to eight groups when another important factor, ‘Rh’, is considered. The Rh antigen can either be present (+) or absent (-), so if, like my grandmother, you have A- blood it means that your red cells carry the A antigen and lack the Rh factor.
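
For the programmatically minded, this compatibility rule can be written down in a few lines. The sketch below is purely my own illustration (not a clinical tool, and simplified to ignore plasma antibodies and the many minor blood group systems); it simply encodes the idea that a recipient can safely receive red cells only if the donor’s cells carry no antigen that is foreign to the recipient:

```python
# Illustrative sketch only: the antigens carried on red cells for each ABO/Rh type.
ANTIGENS = {
    "A+": {"A", "Rh"},  "A-": {"A"},
    "B+": {"B", "Rh"},  "B-": {"B"},
    "AB+": {"A", "B", "Rh"}, "AB-": {"A", "B"},
    "O+": {"Rh"},       "O-": set(),
}

def can_receive(recipient: str, donor: str) -> bool:
    """Red cells are tolerated only if the donor carries no antigen
    that is absent from (i.e. foreign to) the recipient's own cells."""
    return ANTIGENS[donor] <= ANTIGENS[recipient]

print(can_receive("B+", "A-"))   # False: the A antigen is foreign to a type-B recipient
print(can_receive("AB+", "O-"))  # True: O- cells carry none of these antigens, so any recipient tolerates them
```

Run over all 64 donor–recipient pairs, this little rule reproduces the familiar pattern: O- as the universal red-cell donor and AB+ as the universal recipient.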

At first glance, categorizing blood into different types may seem like an academic exercise or a scientific curiosity, but it has serious real-world consequences. In the 1800s many doctors noted the seemingly unnecessary loss of life from blood loss; however, few were brave enough to attempt transfusions. This reluctance stemmed from earlier attempts at transfusion in the 1600s, in which animal blood was transfused into human patients. Most of these attempts ended in disaster, and by the late 1600s animal blood transfusions were not only banned in both Britain and France but also condemned by the Vatican. However, after watching another female patient die from haemorrhaging during childbirth, the British obstetrician James Blundell resolved to find a solution. He thought back to the earlier transfusion attempts and correctly guessed that humans should only receive human blood. Unfortunately, what he didn’t realise was that any given human can only receive blood from certain other compatible humans. Blundell attempted human-to-human transfusions with mixed success, and by the late 19th century blood transfusion was still regarded as a risky and dubious procedure, largely shunned by the medical establishment.

Finally, in 1900, the Austrian doctor Karl Landsteiner made a breakthrough discovery. For years it had been noted that if you mixed patients’ blood together in test tubes, it sometimes formed clumps. However, since the blood was usually taken from people who were already ill, doctors simply assumed that the clumping was caused by the patient’s illness, and the curiosity was ignored. Landsteiner was the first to wonder what would happen if the blood of healthy individuals was mixed together. He immediately noticed that some mixtures of healthy blood clumped too. He set out to investigate this clumping pattern with a simple experiment. He took blood samples from the members of his lab (including himself) and separated each sample into red blood cells and plasma. Systematically, he combined the plasma of one person with the red cells of another and noted whether the mixture clumped. By working through each combination he sorted the individuals into three groups, which he arbitrarily named A, B and C (later renamed O). He noted that if two individuals belonged to the same group, mixing plasma from one with red blood cells of the other didn’t lead to clumping – the blood remained healthy and liquid. However, if blood from an individual in group A was mixed with a sample from an individual in group B (or vice versa), the blood would clump together. Group O individuals behaved differently: Landsteiner found that both A and B blood cells clumped when added to O plasma, yet he could add A or B plasma to O red blood cells without clumping.

We now know that this is because red blood cells express antigenic molecules on their surface. If an individual is given blood of the ‘wrong’ type (i.e. one that expresses a different antigen to the host’s blood), the person’s immune system notices that the transfused blood is foreign and attacks it, causing potentially fatal clumping of the red cells. Our knowledge of different blood types means that we can now make safe blood transfusions from one human to another, thereby saving countless lives. In recognition of this fundamental discovery, Karl Landsteiner was awarded the Nobel Prize in Physiology or Medicine in 1930.

Since Landsteiner’s work, scientists have developed ever more powerful tools for studying blood types. However, despite increasingly detailed knowledge of the genes and molecules expressed by different blood groups, it’s still unclear why different types exist at all. In an effort to understand the origins of blood types, researchers have turned to genetic analysis. The ABO gene is responsible for producing the antigens expressed on the surface of our red blood cells. By comparing the human gene with its counterparts in other primates, researchers have found that blood groups are extremely old – both humans and gibbons have variants of the A and B blood types, and these variants derive from a common ancestor that lived around 20 million years ago.

The endurance of blood groups through millions of years of evolution has led many researchers to think that there could be an adaptive advantage to having different types. The most popular hypothesis for the existence of blood types is that they developed during our ancestors’ battles with disease. For example, if a pathogen exploited common antigens as a way of infecting its host, then individuals with rarer blood types may have had a survival advantage. In support of this, several studies have linked different disease vulnerabilities to different blood groups. For example, people with type O blood are better protected against severe forms of malaria than people with type A blood, and type O blood is more common in Africa and other parts of the world with a high prevalence of malaria. This suggests that malaria may have been the selective force behind the development of type O blood.

As I sign up for my first blood donation appointment, I think back to everything I’ve learnt about blood types. I’m eager to find out what my blood type is and what that might reveal about the history of my ancestors and the disease challenges they faced. Most of all, though, I’m excited to continue my family’s tradition and contribute to one of humankind’s greatest medical advancements.


Post by: Michaela Loft

*A-, which is present in only ~7% of Caucasians.


Sensation: “it’s the rat’s whiskers!”

When I tell people I study rodent whiskers I’m often met with a slightly puzzled look. More often than not, I get asked ‘why?’, which is probably fair enough, since to the general population it might seem like a slightly odd thing to do. Unlike my colleagues studying Alzheimer’s disease or chronic pain, there’s no obvious reason why we should be interested in the whisker system – after all, humans don’t have whiskers (hipster beards aside) and at first glance there’s no obvious medical benefit. However, there are in fact many amazing features of the rodent whisker system which could provide fundamental insights in neuroscience, alongside many opportunities for technological and medical advances.

One key problem in neuroscience is understanding how the sense organs (eyes, ears, nose, tongue and skin) take information from the outside world and translate it into something we can perceive. Take olfaction, for example: a smell begins life as a small number of airborne chemicals hitting receptors in our nose. But from these humble beginnings it can – with the help of a bit of brain power – become so much more: the appetising aroma of freshly baked bread, or the memories of summers long past ingrained in the smell of petrol and freshly cut grass.

Another sense we sometimes take for granted is our sense of touch: how do we manage to distinguish between smooth and rough surfaces? Rodents are experts at navigating the world in the dark using only their sense of touch, and they use their whiskers in much the same way that we would use our fingertips – to get information about what is in front of them. Rodents rhythmically brush and tap their roughly 60 large vibrissae (whiskers) against objects to determine their size, shape, orientation, and texture; this behaviour is called ‘whisking’. When a whisker bends against an object, forces and torques are generated at the whisker base. By quantifying these mechanical signals, we can understand what information the rat’s brain is receiving.
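
To give a feel for what quantifying these mechanical signals might involve, here is a deliberately simplified toy calculation of my own (not the elastic beam models actually used in whisker research): if the whisker is idealised as a rigid lever touched at a single point, the bending moment at its base grows with both the contact force and the distance of the contact from the base, which is one reason the signals measured at the base carry information about where an object was touched.

```python
# Toy statics sketch: a whisker idealised as a rigid lever with one point contact.
# Real whiskers are tapered, elastic beams, so this is only meant to illustrate
# why the moment at the base depends on where along the whisker the contact happens.

def base_moment(force_newtons: float, contact_distance_metres: float) -> float:
    """Bending moment (N*m) at the whisker base for a force applied
    perpendicular to the whisker at the given distance from the base."""
    return force_newtons * contact_distance_metres

# The same 1 mN contact produces a three-times larger moment when it
# occurs three times farther out along the whisker.
print(base_moment(1e-3, 0.01))  # contact 1 cm from the base -> 1e-05 N*m
print(base_moment(1e-3, 0.03))  # contact 3 cm from the base -> 3e-05 N*m
```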

The fact that the whisker system uses mechanical information is a great advantage for scientists, as it means that we can measure the exact inputs arriving at the receptors at the base of the whisker (forces and bending moments are easier to measure than the equivalent input for any other sensory system). The equivalent in the visual system, for example, would be attempting to measure every single photon of light hitting the retina – something that is currently impossible. Furthermore, the whisker system has a well-defined neural pathway, meaning we know the route that information takes from the base of the whisker, through the brainstem and thalamus, all the way up into the cortex. Each whisker is faithfully represented along this pathway, so that at any stage you will find neurons that fire to the movement of a single whisker and no other. These features provide an unparalleled opportunity to study how neurons ‘code’ information from the external world and how the brain ‘decodes’ it.

If we can understand the way information from the outside world is transduced within the nervous system, we can use this knowledge for a surprisingly wide range of applications. One of the most promising is the construction of robotic whiskers, which could be used in situations where visual information is difficult to obtain: fog, darkness and glare can all interfere with optical sensors, whereas a tactile sensor could provide crucial information where optical sensors fail – for example, in fault detection in piping, machinery or ducts.

Another potential application is in the field of intravascular surgery, which would require robotic whiskers to be miniaturized in a biocompatible manner. An increasing number of surgeries are being conducted minimally invasively, and although there are several optical sensing methods available to the surgeon, none of them can replicate the sense of touch that is lost when operating this way.

Whilst there is still a way to go before whisker-based technologies can be fully used in the situations outlined above, we are certainly making significant headway – for example, take a look at the following video showing ‘Whiskerbot’, a robotic active touch system created by scientists at the University of the West of England and the University of Sheffield.

[youtube http://www.youtube.com/watch?v=TBxjChQ6SjA]

Hopefully, projects like this will not only inspire others to study this fascinating sensory system, but also pave the way for innovative technological advances.

Post by: Michaela Loft