Digital technologies: a new era in healthcare

Our NHS provides world-class care to millions of people every year. But, due to funding cuts and the challenges of providing care to an ageing population with complex health needs, this vital service is under considerable strain. At the same time, with the mobile internet at our fingertips, we have become accustomed to quick, on-demand services. Whether it’s browsing the internet, staying connected on social media or using mobile banking, our smartphones play important roles in nearly every aspect of our lives. It is therefore not surprising that over 70% of the UK population now go online for their healthcare information.

This raises a question: could digital health (in particular mobile health apps) play a role in bolstering our faltering health service?

Unfortunately, to date, healthcare has lagged behind other services in the digital revolution. When most other sectors jumped aboard the digital train, healthcare remained reluctant. Nevertheless, the potential for mobile technology to track, manage and improve patient health is being increasingly recognised.

ClinTouch, for instance, is a mobile health intervention co-created by a team of Manchester-based health researchers at The Farr Institute of Health Informatics’ Health eResearch Centre. ClinTouch is a psychiatric-symptom assessment tool developed to aid the management of psychosis (a condition affecting 1 in 100 people). The app was co-designed by health professionals and patients, ensuring that the final product reflected the needs of both patients and clinicians. A patient-focussed front end allows users to record and monitor their symptoms, while this information is simultaneously fed back to clinicians to provide an early warning of possible relapse. The project has the potential to empower patients and improve relationships between users and their physicians. Moreover, if ClinTouch can reduce relapse cases by 5%, it will save the average NHS trust £250,000 to £500,000 per year (equating to a possible saving of £119 million to the NHS over three years!).

Adopting disruptive technologies such as ClinTouch can have meaningful benefits for patients and the NHS. And there are signs that the healthcare sector is warming up to the idea. Earlier this year the National Institute for Health and Care Excellence (NICE) announced plans to apply health technology assessments to mobile health apps and, only this week, the NHS announced a £35 million investment in digital health services.

On Thursday 27th April, the North West Biotech Initiative will be hosting an interactive panel discussion on the future of digital health. We will be joined by a fantastic line-up of speakers providing a range of perspectives on the topic, including:

Professor Shôn Lewis: Principal Investigator of the ClinTouch project and Professor of Adult Psychiatry at The University of Manchester, who will be speaking about the development and potential impact of the ClinTouch app.

Tom Higham: former Executive Director at FutureEverything and a freelance digital art curator and producer, interested in the enabling power of digital technology. Tom has also been diagnosed with type 1 diabetes, has worked with the diabetes charity JDRF UK and has written about both the benefits of, and the need for improvements in, mobile apps for diabetes care.

Anna Beukenhorst: a PhD candidate currently working on the Cloudy with a Chance of Pain project, a nationwide smartphone study investigating the association between the weather and chronic pain in more than 13,000 participants.

Reina Yaidoo: founder of Bassajamba, a social enterprise whose main aim is to increase the participation of underrepresented groups in science and technology. Bassajamba are currently working with several diabetes support groups to develop self-management apps which incorporate an aspect of gamification.

Professor Tjeerd Van Staa: professor of eHealth Research in the Division of Informatics, Imaging and Data Sciences, at The University of Manchester. He is currently leading the CityVerve case on the use of Internet of Things technology to help people manage their Chronic Obstructive Pulmonary Disease (COPD).

Dr Mariam Bibi: Senior Director of Real World Evidence at Consulting for McCann Health, External advisor for Quality and Productivity at NICE and an Associate Lecturer at Manchester Metropolitan University. She will be talking about the regulatory aspect of bringing digital technology to healthcare.

The event is open to anyone with an interest in digital health, including the general public, students and academics. It is free to attend and will be a great opportunity to understand the potential role of digital technology in healthcare and to network with local business leaders, academics and students working at the forefront of digital healthcare.

Date: 27th April 2017
Venue: Moseley Theatre, Schuster Building, The University of Manchester
Time: 3.30pm – 6.00pm
Register to attend! http://bit.ly/2o4fzd7
Questions about the event? Please get in touch with us at: [email protected]

Guest post by: Fatima Chunara


Neural coding 2: Measuring information within the brain

In my previous neuroscience post, I talked about the spike-triggered averaging method scientists use to find what small part of a stimulus a neuron is capturing. This tells us what a neuron is interested in, such as the peak or trough of a sound wave, but it tells us nothing about how much information a neuron is transmitting about a stimulus to other neurons. We know from my last neuroscience post that a neuron will change its spike firing when it senses a stimulus it is tuned to. Unfortunately, neurons are not perfect and they make mistakes, sometimes staying quiet when they should fire or firing when they should be quiet. Therefore, when a neuron fires, listening neurons cannot be fully sure a stimulus has actually occurred. These mistakes lead to a loss of information as signals get sent from neuron to neuron, like Chinese whispers.

Figure 1: Chinese whispers is an example of information loss during communication. Source: jasonthomas92.blogspot.com

It is very important for neuroscientists to understand information flow within the brain because it underlies all the other computational processes that happen there. After all, to process information within the brain you must first transmit it correctly! To understand and quantify information flow, neuroscientists use a branch of mathematics known as information theory. Information theory centres around the idea of a sender and a receiver. In the brain, both the sender and receiver are normally neurons. The sender neuron encodes a message about a stimulus in a sequence of spikes. The receiving neuron (or neurons) tries to decode this spike sequence and work out what the stimulus was. Before the receiving neuron gets the signal, it has little idea what the stimulus was; we say this neuron has a high uncertainty about the stimulus. By receiving a signal from the sending neuron, this uncertainty is reduced, and the extent of this reduction depends on the amount of information carried in the signal. Just in case that is not clear, let’s use an analogy…so imagine you are a lost hiker with a map.

Figure 2: A map from one of my favourite games. Knowing where you are requires information. Source: pac-miam.deviantart.com

You have a map with 8 equally sized sectors and all you know is that you could be in any of them. You then receive a phone call telling you that you are definitely within 2 sectors on the map. This phone call actually contains a measurable amount of information. If we assume the probability of being in any part of the map prior to receiving the phone call is equal then you have a 1/8 chance of being in each part of the map. We need to calculate a measure of uncertainty and for this we use something called Shannon entropy. This measurement is related to the number of different possible areas there are in the map, so a map with 2000 different areas will have greater entropy than a map with 10 sectors. In our example we have an entropy of 3 bits. After receiving the message, the entropy drops to 1 bit because there are now only two map sectors you could be in. So the telephone call caused our uncertainty about our location to drop from 3 bits to 1 bit of entropy. The information within the phone call is equal to this drop in uncertainty which is 3 – 1 = 2 bits of information. Notice how we didn’t need to know anything about the map itself or the exact words in the telephone call, only what the probabilities of your location were before and after the call.
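
To make the arithmetic concrete, here is a minimal sketch (in Python) of how the Shannon entropy and the information carried by the phone call could be computed, using the numbers from the map example above. The function name and the uniform probabilities are illustrative assumptions, not code from any particular study.

```python
import numpy as np

def shannon_entropy(probabilities):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    p = np.asarray(probabilities, dtype=float)
    p = p[p > 0]  # ignore outcomes with zero probability
    return -np.sum(p * np.log2(p))

prior = [1/8] * 8       # before the call: 8 equally likely map sectors
posterior = [1/2] * 2   # after the call: narrowed down to 2 sectors

h_before = shannon_entropy(prior)      # 3 bits
h_after = shannon_entropy(posterior)   # 1 bit
information = h_before - h_after       # the call carried 2 bits
print(h_before, h_after, information)  # 3.0 1.0 2.0
```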

In neurons, we can calculate information without knowing the details of the stimulus a neuron is responding to. The trick is to stimulate a neuron in the same way over many repeated trials using a highly varying, white-noise stimulus (see the bottom trace in Figure 3).

Figure 3: Diagram showing a neuron’s response to 5 repeats of identical neuron input (bottom). The responses are shown as voltage traces (labelled neuron response). The spike times can be represented as points in time in a ‘raster plot’ (top).

So how does information theory apply to this? Well, recall how Shannon entropy is linked with the number of possible areas contained within a map. In a neuron’s response, entropy is related to the number of different spike sequences a neuron can produce. A neuron producing many different spike sequences has a greater entropy.

In the raster plots below (Figure 4) are the responses of three simulated neurons using computer models that closely approximate real neuron behaviour. They are responding to a noisy stimulus (not shown) similar to the one shown at the bottom of Figure 3. Each dot is a spike fired at a certain time on a particular trial.

Figure 4: Raster plots show three neuron responses transmitting different amounts of information. The first (top) transmits about 9 bits per second of response, the second (middle) transmits 2 bits/s and the third (bottom) transmits 0.7 bits/s.

In all responses, the neuron generates different spike sequences: some spikes are packed close together in time, while others are spaced further apart. This variation gives rise to entropy.

In the response of the first neuron (top), the spike sequences change over time but do not change at all across trials. This is an unrealistically perfect neuron: all its variable spike sequences follow the stimulus with 100% accuracy. When the stimulus repeats in the next trial, the neuron simply fires the same spikes as before, producing vertical lines in the raster plot. All the entropy in this neuron’s response is therefore driven by the stimulus, and so all of it is transmitting information. This neuron is highly informative; despite firing relatively few spikes it transmits about 9 bits/second…pretty good for a single neuron.

The second neuron (Figure 4, middle) also shows varying spike sequences across time, but now these sequences vary slightly across trials. We can think of this response as having two types of entropy: a total entropy, which measures the total amount of variation a neuron can produce in its response, and a noise entropy. This second entropy is caused by the neuron changing its response to unrelated influences, such as inputs from other neurons, electrical or chemical interference and random fluctuations in the signalling mechanisms within the neuron. The noise entropy causes the variability across trials in the raster plot and reduces the information transmitted by the neuron. To be more precise, the information carried in this neuron’s response is whatever remains of the total entropy once the noise entropy is subtracted from it…about 2 bits/s in this case.
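
For readers who like to see the calculation, below is a simplified Python sketch of one common way to estimate this from a raster like Figure 4 (often called the “direct method”): spike trains are chopped into short binary “words”, the total entropy is computed from the word distribution pooled over all times and trials, the noise entropy from the word distribution across trials at each time point, and the difference gives an information rate. The function names, bin size and word length are my own illustrative choices, and real analyses also need corrections for limited data, which I have left out.

```python
import numpy as np
from collections import Counter

def word_entropy(words):
    """Shannon entropy (bits) of a collection of spike 'words' (tuples of 0s and 1s)."""
    counts = np.array(list(Counter(words).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_rate(raster, bin_size_s, word_len):
    """
    raster: 2D array (trials x time bins) of 0s and 1s.
    Total entropy: variability of words pooled over all times and trials.
    Noise entropy: variability of words across trials at each time point.
    Information = total entropy - noise entropy, converted to bits per second.
    """
    n_trials, n_bins = raster.shape
    starts = range(0, n_bins - word_len + 1, word_len)

    all_words = [tuple(raster[t, s:s + word_len])
                 for t in range(n_trials) for s in starts]
    h_total = word_entropy(all_words)

    h_noise = np.mean([
        word_entropy([tuple(raster[t, s:s + word_len]) for t in range(n_trials)])
        for s in starts
    ])

    return (h_total - h_noise) / (word_len * bin_size_s)

# Toy usage: 5 trials, 1 ms bins, 8-bin words (numbers chosen purely for illustration)
raster = (np.random.rand(5, 4000) < 0.02).astype(int)
print(information_rate(raster, bin_size_s=0.001, word_len=8))
```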

In the final response (bottom), the spikes from the neuron only weakly follow the stimulus and are highly variable across trials. Interestingly, it shows the most complex spike sequences of all three examples. It therefore has a very large total entropy, which means it has the capacity to transmit a great deal of information. Unfortunately, much of this entropy is wasted because the neuron spends most of its time varying its spike patterns randomly rather than with the stimulus. This makes its noise entropy very high and the useful information low: it transmits a measly 0.7 bits/s.

So, what should you take away from this post? Firstly, that neuroscientists can accurately measure the amount of information neurons transmit. Secondly, that neurons are not perfect and cannot respond in exactly the same way even to repeated, identical stimuli. This leads to the final point: the noise within neurons limits the amount of information they can communicate to each other.

Of course, I have only shown a simple view of things in this post. In reality, neurons work together to communicate information and overcome the noise they contain. Perhaps in the future, I will elaborate on this further…

Post by: Dan Elijah.

To share or not to share: delving into health data research.

In January this year I made a bold move, well at least bold for someone who is often accused of being painfully risk averse. I waved a fond farewell to life in the lab to take on a new role where I have been able to combine my training as a researcher with my passion for science engagement. In this role I work closely with health researchers and the public, building the scaffolding needed for the two to work together and co-produce research which may improve healthcare for millions of patients across the UK. The group I work alongside are collectively known as the Health eResearch Centre (part of the world-leading Farr Institute for Health Informatics) and are proud of their mission of using de-identified electronic patient data* to improve public health.

For me, taking on this role has felt particularly poignant and has led me to think deeply about the implications and risks of sharing such personal information. This is because, like many of you, my health records contain details which I’m scared to share with a wider audience. So, with this in mind, I want to invite you inside my head to explore the reasons why I believe that, despite my concerns, sharing such data with researchers is crucial for the future of public health and the NHS.

It’s no secret that any information stored in a digital form is at risk from security breaches, theft or damage and that this risk increases when information is shared. But, it’s also important to recognise that these risks can be significantly reduced if the correct structures are put in place to protect this information. Not only this but, when weighing up these risks, I also think that it is immensely important to know the benefits sharing data can provide.

With this in mind, I was really impressed that, within the first few weeks of starting this role, I was expected to complete some very thorough data security training (which, considering I won’t actually be working directly with patient data, almost seemed like overkill). I was also introduced to the catchily titled ISO 27001 which, if my understanding is correct, certifies that an organisation is running a ‘gold standard’ framework of policies and procedures for data protection – this being something we as a group hope to obtain before the year is out. This all left me with the distinct feeling that security is a major concern for our group and that it is considered to be of paramount importance to our work. I also learned about data governance within the NHS and how each NHS organisation has an assigned data guardian who is tasked with protecting the confidentiality of patient and service-user information. So, I’m quite sure information security is taken exceedingly seriously at every step of the data-sharing chain.

But what will the public gain from sharing their health data?

We all know that, in this cyber age, most of us have quite an extensive digital-data footprint. It’s no accident that my Facebook feed is peppered with pictures of sad dogs encouraging me to donate money to animal charities while Google proudly presents me with adverts for ‘Geek gear’ and fantasy-inspired jewellery. I don’t make too much effort to ensure that my internet searches are private, so marketers probably see me as easy prey. This type of data mining happens all the time, with little benefit to you or me and, although we may install ad-blocking software, few of us make a considered effort to stop it from happening. Health data, on the other hand, is not only shared in a measured and secure manner but could offer enormous benefits to the UK’s health service and to us as individual patients.

Our NHS is being placed under increasing financial strain, with the added pressure of providing care to a growing, ageing population with complex health needs. This means it has never been more important to find innovative ways of streamlining and improving our care system. This is where health data researchers can offer a helping hand. Work using patient data can identify ‘at risk’ populations, allowing health workers to target interventions at these groups before they develop health problems. New drugs and surgical procedures can also be monitored to ensure better outcomes and fewer complications.

And this is already happening across the UK – the Farr Institute are currently putting together a list of 100 projects which have already improved patient health – you can find these here. Also, in 2014 the #datasaveslives campaign was launched. This highlights the positive impact health-data research is having in the UK by building a digital library of this work – type #datasaveslives into Google and explore this library or join the conversation on twitter.

One example is work on a procedure to unblock arteries and improve outcomes for patients suffering from coronary heart disease:

In the UK this procedure is carried out in one of two ways: Stents (a special type of scaffolding used to open up arteries and improve blood flow) can be inserted either through a patient’s leg (the transfemoral route) or via the wrist (the transradial route). Insertion through the wrist is a more modern technique which is believed to be safer and less invasive – however both methods are routinely performed across the UK.
Farr Institute researchers working between The University of Manchester’s Health eResearch Centre and Keele University used de-identified health records (with all personal information removed) to analyse the outcomes of 448,853 surgical stent insertion procedures across the UK between 2005 and 2012.

This study allowed researchers to calculate, for the first time, the true benefits of the transradial method. They showed that the use of transradial surgery increased from 14% in 2005 to 58% in 2012 – a change which is thought to have saved an estimated 450 lives. They also discovered that the South East of England had the lowest uptake of surgery via the wrist.

This work shows one example of how research use of existing health records can highlight ways of improving patient care across the country – thanks to this research the transradial route is now the dominant surgical practice adopted across the UK (leading to an estimated 30% reduction in the risk of mortality in high risk patients undergoing this procedure).

Reading through all these studies and imagining the potential for future research does convince me that, even with my concerns, the benefits of sharing my data far outweigh the risks. But, I also recognise that it is of paramount importance for patients and the public to be aware of how this process works and to play an active role in shaping research. It seems that when the public have the opportunity to question health data scientists and are fully informed about policy and privacy, many feel comfortable with sharing their data. This shows that we need to strive towards transparency and to keep an active dialogue with the public to ensure we are really addressing their needs and concerns.

This is an amazingly complex and interesting field of study, combining policy, academic research, public priority setting and oodles of engagement and involvement – so I hope over the next year to be publishing more posts covering aspects of this work in more detail.

Post by: Sarah Fox

*The kind of data which is routinely collected during doctor and hospital appointments but with all personal identifiable information removed.

 


The moons of Jupiter and the speed of light

Recently, I was setting up my telescope to image the great planet Jupiter. I was interested in capturing an eclipse of one of its largest moons, Io. Everything was ready: the batteries were charged and the telescope was aligned and tracking the planet, but there was a problem. The eclipse just wasn’t happening. My computer programme predicted it would start at 21:10 on the 12th March 2017, but nothing happened. I was more than surprised; my computer is normally accurate to the second. So I checked the settings: the time is internet controlled, so no problem there, and the computer showed other stars in their correct positions, so I knew it was not having problems with other parts of the sky. Then, at about 21:48, Io started to cast a dark circle on Jupiter. I was amazed – I have never seen a total eclipse on Earth, but I could now see one on Jupiter. But why was it more than 30 minutes late? It turns out that my confusion was shared by astronomers in the 17th century and, in an effort to explain the discrepancies in Io’s eclipse times, they inadvertently measured the speed of light.

It was the 17th century astronomers Giovanni Domenico Cassini, Ole Rømer and Jean Picard (not from Star Trek) who first studied the eclipses of Io on Jupiter whilst trying to solve the famous longitude problem: before the invention of accurate clocks, there was little way of knowing how far east or west you had sailed from a given location (normally Paris or London). Galileo himself proposed using the predictable orbits of Jupiter’s moons to calculate the time on Earth, which could then be used to calculate longitude.

Ole Rømer (left) and Giovanni Cassini (right). Along with Jean Picard these pioneering 17th century astronomers observed and studied hundreds of Jovian eclipses. (Wikipedia Commons)

Unsurprisingly, this proved too difficult a task to perform on a moving ship with the primitive optical equipment available at the time. On land, however, this method could be used to improve maps and navigation. So Cassini and Rømer set to work. They observed hundreds of Jovian eclipses over several months and were able to determine the difference in longitude between Paris and their location. Unfortunately, there was a problem: after accurately calculating the orbit of Io, Cassini found that at some times of the year eclipses were occurring earlier, while at other times they happened later than predicted. Cassini logically surmised that light had to travel at a finite speed instead of instantaneously spanning the distance from Jupiter to Earth. For instance, when the Earth and Jupiter are on near opposite sides of the Sun, the light travelling from Jupiter takes longer to reach Earth (around 54 minutes), causing the Io eclipses to appear delayed. When the Earth is between the Sun and Jupiter (an alignment called opposition), light from Jupiter takes only about 37 minutes to reach Earth, making eclipses of Io happen earlier than expected.

An eclipse of Io imaged by myself on 12-13/03/2017. The Io eclipse casts a dark spot on Jupiter’s northern cloud band. The delay of this event, caused by the finite speed of light, prompted me to write this post! (My own work)

Strangely, Cassini never followed up his discovery, but Rømer continued observing and recording Io eclipses and derived an equation relating the delay caused by the speed of light to the angle between Earth and Jupiter. However, it would not have been possible to publish an actual speed of light because the distances between the planets were not known at the time. Interestingly, Rømer could have expressed the speed of light as a ratio of Earth’s orbital speed…but for some reason he didn’t. It was another famous astronomer, Christiaan Huygens, who took that credit. He used Rømer’s detailed observations and formula to define the speed of light as 7,600 times faster than Earth’s orbital speed. This equates to a speed of 226,328 km/s, which is only about 25% lower than the true value of the speed of light.
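
Out of curiosity, the arithmetic behind Huygens’ figure is easy to reproduce. The short Python sketch below uses modern values for Earth’s orbit (not the 17th century ones), so it is only an illustration of the ratio-to-speed conversion rather than a reconstruction of his actual working.

```python
import math

AU_KM = 149.6e6                      # modern mean Earth-Sun distance
YEAR_S = 365.25 * 24 * 3600          # one year in seconds

# Earth's mean orbital speed: circumference of the orbit divided by one year
orbital_speed = 2 * math.pi * AU_KM / YEAR_S      # ~29.8 km/s

# Huygens' result: light travels ~7600 times faster than Earth orbits the Sun
light_speed_estimate = 7600 * orbital_speed

print(f"Orbital speed ≈ {orbital_speed:.1f} km/s")
print(f"Estimated speed of light ≈ {light_speed_estimate:,.0f} km/s")
# ≈ 226,000 km/s, roughly 25% below the modern value of 299,792 km/s
```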

Christiaan Huygens, a leader in 17th century science. He was the first person to define the speed of light using the eclipses of Io. (Wikipedia commons)

This was the first time a universal constant had been calculated quantitatively and since then the speed of light has played a huge role in James Clerk Maxwell’s theory of electromagnetism and Einstein’s theories of relativity. But for anyone peering into the night sky, the work of these great men more than 300 years ago shows us that starlight is old…and by looking at it we are looking back in time. We see Jupiter as it was 40-50 minutes ago, the nearest star 4 years ago and the relatively nearby Andromeda galaxy 2.56 million years ago. Not bad for 17th century science.

I think next time I’m sitting by my telescope waiting for an Io eclipse, I’ll be a bit more appreciative of the significance that 30 minute delay had on our understanding of the universe.

Post by: Dan Elijah.


I come in peace: Engaging life on a flat Earth

Did you know that the Earth is actually flat, not round, and that NASA and the government fuel the round-Earth conspiracy?….No, neither did I, but this mind-boggling world view is currently gaining momentum on the internet and has recently found its way onto my radar.

To give you a bit of background:

Alongside my vociferous online academic rantings and a day job helping researchers and the lay public work together to design and implement health research, I also spend a fair bit of time volunteering with the British Science Association (the BSA). The BSA is a charity and learned society founded in 1831 with many strings to its academic bow, including the standardisation of electrical units (among them the ohm, volt and amp). Today it is supported by a huge backbone of volunteers working tirelessly across the country to improve the public perception of science – letting everyone know that there is much more to science than mind-boggling equations and stuffy white-haired professors.

Our small group of Mancunian volunteers meet monthly to mastermind and implement a huge range of engagement activities. Over the years I’ve been with the group I’ve found myself designing an endangered species treasure hunt (based on a mash-up of Pokemon Go and geocaching), baking cake pops for an astronomy and art crossover event held on the site of Manchester city centre’s oldest observatory and, just last week, hosting over 40 AS/A-level students at a science journalism workshop.

As a group we work hard to make sure our activities are fun and open to everyone – no matter what their academic background. But we’re not naive, so we recognise that our reach is still pretty small and that there are many communities in our home city who will never have heard of us. This is why we have been working with a BSA volunteer from our Birmingham branch whose role has been to help us find out more about Manchester’s hard-to-reach communities and discover how we can offer them meaningful engagement. It was during one of our meetings that she mentioned she had been in contact with someone who runs a computer coding club for local teenagers and had noticed that some of these youngsters were adamant supporters of the ‘flat Earth’ theory – which is apparently backed by a number of celebrities, including rapper B.o.B, who recently went on an amusing and disturbing Twitter rant about the topic.

This got me thinking. If science has never really been your thing (which is fine, by the way, just like P.E. was never my thing), how do you avoid falling down the black hole of conspiracy theories (Illuminati, anti-vaccination, flat Earth)?

These theories offer an alternative world view which can, at first glance, appear to fit much better with the world we see and experience around us every day than the complex and often invisible world of science. Take flat Earth as an example. In our everyday lives we interact with both flat and round objects (compare a table top with a yoga ball) and, from these interactions, we build up an understanding of how these objects work. On a very basic level we see that things fall off a ball, that you can’t really balance things on it like you can on a table, and that it has an obvious curvature. Then take a look at the Earth. We can stand and walk along it with no obvious indication of its curvature, and water sits flat in rivers and oceans rather than running down the sides of the Earth, as it would if you spilled a glass of water onto a yoga ball. So, assuming you have little or no interest in astronomy (perhaps you live in the city centre so don’t get a good view of the night sky anyway) and the mathematics of gravity and scale makes your head hurt, it’s easy to understand why you may choose to mistrust theories which you cannot test or see for yourself.

So, with this in mind, my question is: Is it possible to design activities and interactions that don’t patronise or assume knowledge but enable people to test scientific theories in ways that make sense and allow them to simply observe the outcomes with their own eyes?

We are now hoping to meet with this community, attend some of their activities, make friends and let them know scientists are just ordinary people. Then we want to jump in and put together a small, accessible science festival where everyone can have fun and hopefully engage with science on a small scale. I get the feeling it’s not going to be an easy sell, but it will undoubtedly be worth it if done properly.

My mind is bubbling with ideas, including the possibility of sending a Go-Pro camera up on a balloon and playing back the footage – the possibilities are endless…although sadly our budget isn’t. Whatever happens, I’m excited and will keep you all updated on our progress as things move forward.

For now I want to invite anyone reading this to drop me a line in the comments below. Perhaps you’re an academic who has worked on a similar event and has some ideas, or maybe you’re keen on the flat Earth theory and want to tell us more about what you believe? Either way I’d love to hear from you.

Post by: Sarah Fox

Update: A pretty interesting gif image of a few pictures my telescope loving partner took last night showing Jupiter spinning on its axis – notice how the great red spot moves round. Perhaps we could bring our telescopes along to the festival and have a play 🙂

Neural coding 1: How to understand what a neuron is saying.

In this post I am diverting from my usual astrophotography theme and entering the world of computational neuroscience, a subject I studied for almost ten years. Computational neuroscience is a relatively new interdisciplinary branch of neuroscience that studies how areas of the brain and nervous system process and transmit information. An important and still unsolved question in computational neuroscience is how neurons transmit information between themselves. This is known as the problem of neural coding and, by solving it, we could potentially understand how all our cognitive functions are underpinned by neurons communicating with each other. So, for the rest of this post, I will attempt to discuss how we can read the neural code and why the code is so difficult to crack.

Since the 1920s we have known that excited neurons communicate through electrical pulses called action potentials or spikes (see Figure 1). These spikes can quickly travel down the long axons of neurons to distant destinations before crossing a synapse and activating another neuron (for more information on neurons and synapses see here).

Figure 1. Neural action potentials. An action potential diagram is shown on the left, as if recorded from inside a neuron (see inset). For an action potential to arise and propagate through a neuron, it must reach a certain threshold (red dashed line); if it doesn’t, the neuron will remain at rest. The right panel shows a real neuron firing spikes in the cortex of a mouse. Taken from Gentet LJ et al. (2010).

You would be forgiven for thinking that the neural coding problem is solved: neurons fire a spike when they see a stimulus they like and communicate this fact to other nearby neurons, while at other times they stay silent. Unfortunately, the situation is a bit more complex. Spikes are the basic symbol used by neurons to communicate, much like letters are the basic symbols of a written language. But letters only become meaningful when many are used together. This analogy is also true for neurons. When a neuron becomes excited it produces a sequence of spikes that, in theory, represents the stimuli the neuron is responding to. So if you can correctly interpret the meaning of spike sequences you could understand what a neuron is saying. In Figure 2, I show a hypothetical example of a neuron responding to a stimulus.

Figure 2. A stimulus (top trace) fluctuates over time (s(t)) and spikes from a hypothetical neuron are recorded. The stimulus is repeated 5 times, producing 5 responses r1,2,3…5 shown below the stimulus. Each response is composed of spikes (vertical lines) and periods of silence. By counting the number of spikes within a small time window lasting Δt seconds, we can calculate the firing rate of the neuron (bottom trace).

In this example a neuron is receiving a constantly fluctuating input. This is a bit like the signal you would expect to see from a neuron receiving a constant stream of inputs from thousands of other neurons. In response to this stimulus, the receiving neuron constantly changes its spike firing rate. If we read this rate we can get a rough idea of what this neuron is excited by. In this case, the neuron fires faster when the stimulus is high and is almost silent when the stimulus is low. There is a mathematical method that can extract the stimulus feature that produces spikes, known as reverse correlation (Figure 3).
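
As a side note, the firing rate trace at the bottom of Figure 2 is straightforward to compute. Here is a minimal Python sketch of the counting-window idea: spikes from repeated trials are binned into windows of width Δt and the counts are converted into spikes per second. The function name and the toy spike times are purely illustrative.

```python
import numpy as np

def firing_rate(spike_times_per_trial, duration_s, dt):
    """
    Estimate a time-varying firing rate from repeated trials.
    spike_times_per_trial: list of arrays of spike times (s), one per trial.
    duration_s: trial length in seconds; dt: width of the counting window (s).
    Returns the rate in spikes/s for each window (the bottom trace of Figure 2).
    """
    edges = np.arange(0.0, duration_s + dt, dt)
    counts = np.zeros(len(edges) - 1)
    for spikes in spike_times_per_trial:
        counts += np.histogram(spikes, bins=edges)[0]
    # average spike count per window, divided by the window length
    return counts / (len(spike_times_per_trial) * dt)

# Toy usage: made-up spike times from 5 repeats of the same stimulus
trials = [np.array([0.05, 0.12, 0.13, 0.40]),
          np.array([0.06, 0.11, 0.41, 0.42]),
          np.array([0.04, 0.14, 0.39]),
          np.array([0.05, 0.13, 0.40, 0.80]),
          np.array([0.06, 0.12, 0.41])]
rate = firing_rate(trials, duration_s=1.0, dt=0.05)
```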

Figure 3. Reverse correlation can identify what feature of the stimulus (top) makes a neuron fire a spike (bottom). Stimulus samples are taken before each spike (vertical lines) and then averaged to produce a single stimulus trace representing the average stimulus that precedes a spike.

The method is actually very simple: each time a spike occurs we take a sample of the stimulus just before the spike. Hopefully many spikes are fired and we end up with many stimulus samples; in Figure 3 the samples are shown as dashed boxes over the stimulus. We then take these stimulus samples and average them together. If the spikes are being fired in response to a common feature in the stimulus, this feature will show up in the average. This is therefore a simple method of finding what a neuron actually responds to when it fires a spike. However, there are limitations to this procedure. For instance, if a neuron responds to multiple features within a stimulus then these will be averaged together, leading to a misleading result. Also, this method assumes that the stimulus contains a wide selection of different fluctuations. If it doesn’t, then you can never really know what a neuron is responding to, because you may not have stimulated it with anything it likes!
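
To make the procedure concrete, below is a minimal Python sketch of spike-triggered averaging as described above. Everything here (function name, window length, the toy stimulus and spike times) is an illustrative assumption rather than code from any particular study.

```python
import numpy as np

def spike_triggered_average(stimulus, spike_times, window_s, dt):
    """
    stimulus: 1D array of stimulus values sampled every dt seconds.
    spike_times: spike times in seconds.
    window_s: length of stimulus history (s) to average before each spike.
    Returns the average stimulus segment preceding a spike (the STA).
    """
    n_samples = int(window_s / dt)
    segments = []
    for t in spike_times:
        idx = int(round(t / dt))
        if idx >= n_samples:          # skip spikes too early to have a full window
            segments.append(stimulus[idx - n_samples:idx])
    return np.mean(segments, axis=0)

# Toy usage: a white-noise stimulus and some hypothetical spike times
dt = 0.001                              # 1 ms sampling
stimulus = np.random.randn(10_000)      # 10 s of white noise
spike_times = [0.52, 1.20, 3.71, 6.48]  # would come from a recording
sta = spike_triggered_average(stimulus, spike_times, window_s=0.05, dt=dt)
```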

In my next two posts, I will discuss how more advanced methods from the realms of artificial intelligence and probability theory have helped neuroscientists more accurately extract the meaning of neural activity.

Post by: Daniel Elijah

Afforestation Vs reforestation

It is well known that deforestation is an increasing global problem. Even those with little scientific background are bombarded with information about it through social media, particularly regarding the consequences of deforestation, including global warming. Indeed, many charities, schools and individuals are now taking a stand and doing all they can to tackle this problem.

The planting of trees can be divided into two categories: afforestation and reforestation. Reforestation refers to planting trees on land that was previously forest whereas afforestation refers to planting trees on patches of land which were not previously covered in forest. The general idea behind both is: as many trees as possible, wherever possible.
However, ecology is a complex science. Are we focusing too much on carbon sequestration and not enough on the planet’s ecosystems as a whole? Are some ecosystems being neglected and forgotten? Perhaps. This article will cover some of the issues associated with afforestation and reforestation.

Reforestation is beneficial when trees have previously been removed. However, these new trees will never recreate exactly the same ecosystem as the original forest. Indeed, the original trees which were cleared may have been hundreds, even thousands, of years old, meaning that it may take many years for the new trees to catch up. In addition, rare species lost during the original deforestation may not be replaced, meaning extinction and a reduction in biodiversity could be inevitable.

Tropical grassy Biome

Afforestation can also have negative consequences, especially if the tree planters don’t consider the environment they are introducing the new trees into. The idea of afforestation is to plant trees on patches of unused, degraded land. However, land which may appear degraded may actually house its own ecosystem, for example a savanna or tropical grassy biome. Research has suggested that tropical grassy biomes are often misunderstood and neglected. These ecosystems can provide important ecological services. In addition, they could contain rare species, which could be outcompeted by the introduction of new trees. Therefore, although carbon sequestration will increase, many ecosystems will be negatively affected or lost.

It has to be noted that both reforestation and afforestation can be advantageous when tackling global warming. However, possible negative impacts must also be taken into account in order to protect the planet as a whole. This can be achieved by ensuring that deforestation is kept to a minimum and that afforestation only occurs on truly degraded land. There is a desperate need for more research into areas of land before trees are planted on them. The biggest challenge today is education. Charities, schools and individuals need to be made aware of this before it’s too late. Without awareness, irreversible damage can occur unknowingly. Effective conservation work requires more than just planting trees at random, and this needs to be considered on a global scale.
If we don’t stand up for all of our precious ecosystems, who will?

Post by: Alice Brown

References:
http://www4.ncsu.edu/~wahoffma/publications/pdf/Parr2014TREE_TropicalGrassyBiomes_MisunderstoodNeglectedAndUnderThreat.pdf
https://pixabay.com/en/photos/plain/?cat=nature

 


Meek no more: turning mice into predators

A recent study published in the journal Cell has shown that switching on a particular group of neurons in the mouse brain can turn these otherwise timid creatures into aggressive predators. Why would anyone want to do this, you might ask? After all, with the tumultuous political events of 2016, do we really want to add killer mice to our worries? Thankfully, the researchers aren’t planning to take over the world one rodent at a time; instead they want to understand how the brain coordinates the complex motor patterns associated with hunting.

During the evolution of vertebrates, the morphology of the head changed to allow for an articulated jaw. This is a more scientific way of describing the type of jaw most of us are familiar with: an opposable bone at the entrance of the mouth that can be used to grasp and manipulate food. This anatomical change allowed for the development of active hunting strategies and the associated neural networks to coordinate such behaviours. The researchers wanted to identify which parts of the brain contain the networks for critical hunting behaviours such as prey pursuit and biting. They began by looking at an evolutionarily old part of the brain known as the amygdala, specifically the central nucleus of the amygdala (CeA), because this area has been shown to increase its activity during hunting and has connections to parts of the brainstem controlling the head and face.

In order to study this part of the brain, the authors used a technique called optogenetics. This technique involves introducing the gene for a light sensitive ion channel into specific neurons. It is then possible  to ‘switch on’ the neurons (i.e. cause them to fire bursts of electrical activity) simply by shining blue light onto them. This is what the researchers did with the neurons in the CeA.

To begin with, the researchers wanted to find out what happens when you simply switch on these neurons. To test this they put small moving toys, resembling crickets, into the cage as ‘artificial prey’ and watched the animals’ behaviour. The mice were largely indifferent to these non-edible ‘prey’; however, as soon as the light was switched on, the mice adopted a characteristic hunting position, seized the toys and bit them. This never occurred when the light was off. The scientists also tested the mice with live crickets (i.e. prey that mice would naturally hunt). When using live prey, the mice (without light activation) hunted as normal. However, when the light was switched on, the researchers saw that the time needed for the mice to capture and subdue their prey was much shorter and any captured crickets were immediately eaten. The combination of these results suggests that stimulation of the central nucleus of the amygdala (CeA) not only mimicked natural hunting but increased the predatory behaviour of these mice.

One question that might spring to mind from this study is: how do we know that these mice are really hunting? Perhaps the light had unintended effects, such as making the mice particularly aggressive or maybe very hungry? After all, both explanations could account for the increased biting of non-edible objects and the faster, more aggressive cricket hunting. To argue against increased aggression levels, the authors point out that CeA stimulation did not provoke more attacks on other mice – something you might expect of an overly aggressive mouse. So what about increased hunger? The scientists in this study also think this is unlikely, because they allowed the mice access to food pellets and found no difference in how many pellets were consumed while the laser was on versus while it was off.

So how is hunting behaviour controlled by the CeA? The hunting behaviour displayed by mice can be divided into two aspects: locomotion (prey pursuit and capture) and the coordination of craniofacial muscles for the delivery of a killing bite. The scientists hypothesised that the CeA may mediate these two different types of behaviour through connections with different parts of the brain. The two key brain regions investigated in this study were the parvocellular region of the reticular formation in the brainstem (PCRt) and a region of the midbrain called the periaqueductal grey (PAG).

By using optogenetics the researchers were able to selectively stimulate the CeA to PCRt projection and found that this caused the mice to display feeding behaviours. Interestingly, stimulating this pathway seemed to elicit only the motor aspects of eating, e.g. chewing, rather than increasing the mice’s hunger. Conversely, disrupting the function of this pathway interfered with the mice’s ability to eat. Taking this into a ‘live’ setting, the mice could still pursue their prey and subdue it using their forepaws, but they struggled to deliver a killing bite. The researchers then turned their attention to the pathway between the CeA and the PAG. They found that stimulating this projection caused mice to start hunting more quickly, pursue their prey faster, and hunt for longer. Unlike the experiment above, stimulating this pathway had no effect on feeding-type behaviours. Now the scientists geared up for the big experiment: having shown that stimulating the CeA leads to predatory hunting, and that biting and pursuit seem to be controlled by different pathways from the CeA, they decided to see if activating both pathways simultaneously (CeA to PCRt and CeA to PAG) could mimic the effects of stimulating the CeA itself. Indeed, they found that stimulating these two pathways together led the mice to robustly initiate attacks on artificial prey.

So what can we learn from this study? The scientists have demonstrated that the CeA acts as a command system for co-ordinating key behaviours for efficient hunting via two independent pathways. However, there are still some key questions remaining, for example, what determines whether the CeA sends those commands? The scientists hypothesise that cues such as the sight or smell of prey might cause the CeA to respond and send the command to elicit the appropriate motor actions. However, they can’t prove this in the current study.

Despite these limitations, this paper is a great example of how scientists can use cutting edge tools, like optogenetics, to tease apart the brain pathways responsible for different aspects of a complex behaviour such as hunting.

Post by: Michaela Loft


Video astronomy: an update

Early last year I posted an article discussing the merits of webcam imaging. I had just bought some new equipment and wanted to put my enthusiasm into blog form. I was getting fed up with the annoyingly short observing time our cloudy nights in the UK provide. Traditional long exposure photography, used to capture faint galaxies and nebulae, is simply out of the question on all but the clearest of nights. However, webcam astronomy is easy to learn, cheap and quick enough to do between clouds. Not only this but, on moonlit nights when long exposure photography would produce washed-out pictures of galaxies, webcam imaging can deliver great lunar landscapes. Also, during the day, a webcam coupled with a telescope can capture the ever-changing surface of the Sun, meaning you can do astronomy without losing sleep!

So it is now time to show you some of my attempts at webcam astronomy. Before I show any processed images, I first want to demonstrate the main limitation facing astrophotography (other than light pollution): atmospheric turbulence. In Image 1, a section of the Moon is being videoed; notice how the detail constantly shifts in and out of focus. This distortion is caused by currents of air at different temperatures which bend and scatter the light passing through the atmosphere.

Image 1. A movie clip of craters Aristoteles (top left) and Eudoxus (top right). The image shimmers because of the constant turbulence in Earth’s atmosphere. Author’s own work.

Although this may look bad, atmospheric distortion can get far worse! For instance, if the Moon moves close to the horizon then light coming from its surface has to travel through far more air, which badly distorts and scatters it. Just look at how distorted the Sun appears as it is setting. Atmospheric distortion can also arise in other ways. In Image 2, the Moon was passing just above my house, which unfortunately is not well insulated. The distortion caused by hot air escaping from my house dramatically reduces the detail you can see – I’d ask my wife to keep the heating off while I’m imaging but I fear this wouldn’t go down too well.

Image 2. Another movie clip, taken when the Moon was setting just above my house. The hot air increases turbulence, causing the detail of the lunar landscape to dance and blur. Author’s own work.

Luckily, webcam astronomy possesses one amazing advantage over other forms of photography. Unlike traditional long exposure astrophotography, a video recording produces thousands of individual images (or frames) of your target, which means you can be very strict about which frames to keep and which to discard. For example, to get one high quality image, I take about 4 minutes of video containing 14,400 frames at 60 frames/sec. I then pick the best 2,000 of these frames and, using a program called PIPP, stack them together to reduce noise and improve detail (see my previous post about stacking). This procedure means I can remove all the frames that were distorted by the atmosphere.
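
For the curious, the core idea of “keep the sharpest frames, then average them” can be sketched in a few lines of Python. This is only an illustration of the principle; dedicated tools such as PIPP (and stackers like AutoStakkert or RegiStax) also align each frame before stacking and use more sophisticated quality metrics. The function names and the gradient-variance sharpness score are my own assumptions.

```python
import numpy as np

def sharpness(frame):
    """Crude focus score: variance of the intensity gradients (higher = sharper)."""
    gy, gx = np.gradient(frame.astype(float))
    return gx.var() + gy.var()

def stack_best_frames(frames, keep=2000):
    """
    frames: list of 2D greyscale frames decoded from the webcam video.
    Scores every frame, keeps the 'keep' sharpest ones and averages them
    to suppress noise. Frame alignment is deliberately omitted here.
    """
    scores = np.array([sharpness(f) for f in frames])
    best = np.argsort(scores)[-keep:]
    return np.mean([frames[i] for i in best], axis=0)

# Toy usage with random 'frames'; in practice these would come from your video file
frames = [np.random.rand(64, 64) for _ in range(100)]
stacked = stack_best_frames(frames, keep=20)
```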

So after all that processing, what am I left with? The answer is superior detail, better than any individual frame in the movie or even images taken using long exposure photography. In Image 3, lunar detail as small as 1 km across can be seen. Since the Moon was 370,000 km away at that point, this resolution is equivalent to spotting a 2 cm wide 1p coin from 7.4 km away! Quite an achievement for my small telescope, and all because I have used only the frames taken during moments of atmospheric stillness.
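
The coin comparison is just the small-angle approximation at work; here is a quick sanity check of the numbers quoted above (a Python sketch, purely for illustration):

```python
# Sanity-checking the coin comparison with the small-angle approximation
feature_km = 1.0                 # smallest lunar detail resolved
moon_distance_km = 370_000.0

angle_rad = feature_km / moon_distance_km          # ≈ 2.7e-6 rad (~0.56 arcseconds)

coin_distance_km = 7.4
coin_size_cm = angle_rad * coin_distance_km * 1e5  # convert km to cm

print(f"Angular size ≈ {angle_rad * 206265:.2f} arcsec")
print(f"Equivalent object at {coin_distance_km} km ≈ {coin_size_cm:.1f} cm across")
# ≈ 2.0 cm – about the diameter of a 1p coin
```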

Image 3. A stacked image made using the best 2000 frames of the movie (Image 1). The resolution has now improved substantially. Author’s own work.

Even during strong atmospheric turbulence, reasonable detail can be retrieved: in Image 4, lunar craters as small as about 5 km across can be seen – not as good as in Image 3, but still impressive.

Image 4. The stacked image from the movie shown in Image 2. Despite the strong atmospheric disturbance, fine detail can still be resolved. The crater to the far left is Sacrobosco. Author’s own work.

Of course webcam astronomy is not limited to the Moon. With the correct light-rejecting filters, you can turn this powerful technique onto the Sun. During July 2016 there was a fantastic chain of sunspots (see Image 5); these features change shape every day, merging, splitting and distorting, providing a very dynamic and unique astronomical sight.
Of course, before undertaking solar photography a few considerations must be addressed: (1) make sure you research how to observe and image the Sun safely – I will not be happy if you go out and blind yourself after reading this article; (2) be aware that the Sun will heat up your telescope, creating turbulent air inside the tube – to avoid this problem I covered my scope in kitchen foil.

Image 5. A stacked image of sunspots taken on 19/07/2016. The internal structure of the Sunspots can be seen as well as individual granulations across the solar surface. Author’s own work.

The planets are probably the most popular and evocative telescopic targets of all. Thankfully, webcam imaging provides an easy way to image them and build your own Solar System collection! I’ve added my own incomplete collection in Image 6. The planets are shown to scale.

Image 6. My Solar System collection: Jupiter (top left), Uranus (top middle), Neptune (top right), Mars (bottom left) and Venus (bottom middle). Author’s own work.

For the planets, I used exactly the same method as with the Moon. The hardest part is finding the planets in the night sky. If you are unfamiliar with the night sky, their locations can be found using planetarium software like Stellarium. I must also mention that you will need some experience to find Uranus and Neptune; they are faint and you will need to be able to use a finder scope to home in on them.

In conclusion, I started learning astrophotography in the wrong order: webcam astronomy provides all the excitement of capturing a new world in your back garden but without the long nights, tiresome setup and ruinously expensive equipment. So fetch that old scope out of your garage, buy a webcam and get recording – I have evidence to show you won’t be disappointed.

Post by: Daniel Elijah.

Carfentanil: The next step in the opioid crisis?

The US is in the midst of a national opioid epidemic. The use of opioids, which include prescription drugs and heroin, has quadrupled since 1999. The Centers for Disease Control and Prevention (CDC) has confirmed that these drugs now kill more people than car accidents in the US, making them the most common form of preventable death.

Opioids are a class of opium-derived compounds that relieve pain. These drugs use the same receptors as endorphins, eliciting analgesic effects by inhibiting the release of neurotransmitters in the spinal cord. Exploited for centuries, they are still considered one of the most efficacious treatments for pain, despite serious side effects including physical and psychological addiction.

Fentanyl, a synthetic opioid developed for use in surgery, was first linked with overdose deaths in 2005. Alarmingly, the number of overdose cases involving fentanyl has escalated in recent years, with its misuse regularly making the headlines due to the sheer number of deaths associated with this drug. High-profile cases, such as the death of the global star Prince, have only added to this.

Carfentanil, another drug with a similar structure to fentanyl, has recently exploded onto the scene as carfentanil-laced drugs rear their toxic heads. An analogue of fentanyl, carfentanil was first synthesised in the US in 1974 by Janssen Pharmaceutica (owned by Johnson & Johnson). This opioid was designed for use as a general anaesthetic in large animals such as rhinos – just 2 mg of carfentanil can knock out an African elephant. Due to its extreme potency, the lethal dose range of this drug is unknown in humans, which greatly amplifies the risk involved in taking it. Carfentanil is 10,000 times more potent than morphine and 100 times more potent than fentanyl. As with other opioids, carfentanil kills through respiratory depression or cardiac arrest, which can lead to death within minutes.

So, why are these drugs being increasingly abused? One explanation is that prescriptions of opioid drugs have increased since the 1970s – the result of a series of published papers downplaying the risk of addiction associated with opioid painkillers such as OxyContin and fentanyl. They were marketed to doctors as wonder drugs for treating day-to-day pain, with little addiction potential. As we now know, this turned out not to be the case. The resulting willingness of doctors to prescribe opioid painkillers increased the availability of these drugs. This problem was in turn worsened by a subset of pharmacies illegally filling multiple prescriptions and by the phenomenon of ‘doctor shopping’, where patients obtain prescriptions from multiple doctors at once. Currently, over 650,000 new opioid prescriptions are dispensed by doctors every day in the US.

A number of recent studies have found that almost half of young people using heroin had abused prescription opioids beforehand. This comes as no surprise when such potent drugs are used routinely to treat even minor sports injuries in young people. As a result of this alarming trend, new regulations were implemented in the US in 2014 in an attempt to restrict the misuse of prescription painkillers. Unfortunately, this has forced many people experiencing drug addiction to turn to prescription fraud and illegally produced pills. Cartels in Mexico, the primary supplier of heroin to the US, have stepped in to provide cheaper and more potent opiate alternatives. Evidently, the reduction in the availability of legally produced drugs has failed to remedy the issue of opioid misuse.

The unknown quantity and composition of drugs bought on the street, combined with the recent explosion in recreational use, has led to a surge in accidental overdoses. In 2016, both fentanyl and carfentanil were found as additives in heroin, cocaine and counterfeit Xanax pills in Florida, Ohio and neighbouring Michigan (including Detroit), among other states. As with any other illicit drug, users have no way of determining the strength or purity of what they have bought to any degree of accuracy.

The latest spike in overdoses has led to the DEA issuing a public health warning, with Acting Administrator Chuck Rosenberg describing carfentanil as ‘crazy dangerous’. It is hard to put a figure on the number of cases involving carfentanil, as there are issues with obtaining samples and identifying how much was taken, and some facilities are also unable to identify the compound in post-mortem toxicology reports.
The opioid antagonist naloxone (sold as Narcan™ nasal spray) also struggles to reverse the effects of fentanyl and carfentanil, with reports of patients needing up to five times the amount of naloxone recommended for a heroin overdose. As a result it can take up to five minutes to revive a patient – something that normally takes a matter of seconds – vastly increasing the chance of lasting brain damage and death.

On average, opioid overdoses kill 91 Americans every day. This disturbing figure will continue to rise unless rapid change is seen in both government policy and society as a whole. There remains no easy solution to the opioid problem and, with a single gram of carfentanil able to cause 50,000 fatal overdoses, it seems the situation will only worsen unless dramatic changes are put into effect. Continued research into the causes of and treatments for addiction, coupled with investigation into new medications to treat pain, is also necessary for the long-term management of this devastating crisis.

Post by: Sarah Lambert
