‘Hangry’ humans – why an empty stomach can make us mean

There’s no point denying it: at one point or another, we’ve all been guilty of being ‘hangry’. Whether you’re a frequent culprit or just an occasional offender, getting angry when hungry is a common crime in many households, and one that can result in arguments, fallings-out and even a night spent sleeping on the couch. But is it really our fault, or is there a biological reason to blame? A growing body of research suggests our blood glucose may be the real culprit.

The glucose we obtain from our diet is a key source of energy, required for our bodies to function and delivered to all of our cells via our blood. Out of all the organs of the body, our brain is the most energy-consuming, using around 20% of the energy our bodies produce. It also relies almost completely on glucose as its energy source, making an efficient supply of this sugar essential to maintaining proper brain function. This is particularly true for higher-order brain processes such as self-control, which require relatively high levels of energy to carry out, even for the brain. Since self-control allows us to resist such impulsive urges as out-of-control eating or aggressive outbursts, if our brain does not have sufficient energy to perform this process, our ability to stem these unwanted impulses can suffer.

Low levels of glucose in our blood can also result in an increase in certain chemicals in our body, believed to be linked to aggression and stress. Cortisol, for instance, colloquially named the ‘stress hormone’, has been shown to increase in individuals when they restrict their caloric (and therefore glucose) intake. Neuropeptide Y concentrations have also been shown to be higher in individuals with conditions associated with impulsive aggression when compared to healthy volunteers.

Given such evidence, it makes sense that low levels of blood glucose, like those experienced when we are hungry, could plausibly lead us to become more aggressive. The association between blood glucose and aggression has been observed in multiple studies, including Ralph Bolton’s 1970s research on the Qolla of highland Peru. These Peruvian highlanders are well known for their high rates of unpremeditated murder and seemingly irrational acts of violence. Having observed both this behaviour and a strong sugar craving among the Qolla, Bolton decided to investigate the possible link between hunger and aggression. In agreement with his hypothesis, Bolton found that the Qolla commonly experienced low blood glucose levels, and that those with the lowest levels tended to be the most aggressive.

In another, more recent study, similar findings were observed in college students who took part in a competitive task. Participants were randomly assigned to consume either a glucose beverage or placebo drink containing a sugar substitute. Following this, participants then competed against an opponent in a reaction time task, which has been shown previously to provide a measure of aggression. Before beginning the task, the students could set the intensity of noise their partner would be blasted with if they lost. As predicted, participants who drank the glucose drink behaved less aggressively towards their partner, choosing lower noise intensities, compared with those who had consumed a sugar substitute. This suggested that hunger-related aggression, or ‘hangriness’, could be ameliorated by boosting one’s glucose levels.

One notable (though some may argue rather dark) study into the ‘hangry’ condition investigated the relationship between blood glucose and aggressiveness in married couples. As well as pitting spouses against each other in a similar reaction time task to the one described above, participants were also given a voodoo doll of their partner and told to stick pins in the doll each evening, depending on how angry they were at their partner. (Warning, do not try this at home). As with previous studies, lower levels of blood glucose resulted in participants blasting their spouses with higher noise intensities and sticking more pins in the voodoo dolls, suggesting greater levels of anger and aggression.

While these studies do not necessarily ascertain causality, the relationship between low blood glucose and the tendency to become aggressive makes biological sense, since glucose is the main energy source our brains need to control such negative impulses. As observed in studies and experienced by many of us, ‘hangry’-related crimes can also be easily avoided by supplying the potential offender with food, further supporting the role of glucose in hunger-related anger. So next time ‘hangriness’ threatens to ruin the harmony in your household, fill your mouth with food rather than foul language, and save yourself a night banished to the couch.

Post by: Megan Freeman

How your smartphone could improve your health

Lamiece Hassan on why unlocking the potential of smartphone data could be the next frontier for health research.

I have an addiction to my smartphone. It helps me to navigate not only the streets of my adopted home city of Manchester, but life in general; everything from banking to shopping, scheduling, videoing, networking, dating and, on occasion, making phone calls.  And it helps me to monitor things, like my patterns in exercise, diet and sleep. I’m the type who posts annoying screenshots of their step count on Instagram after a big night (#danceallnight). To some this could seem a somewhat unhealthy, yet common, obsession. However, I’m keen to learn how our increasing attachment to technology can actually help to generate new insights into health and disease and benefit others.

You see, your smartphone is a sort of digital Swiss Army knife, jam-packed with vital sensors and tools that collect, process and transmit all manner of data. Furthermore, it’s a constant companion, always on and always with you, effortlessly tracking your everyday routines. To researchers like me, who would otherwise have to dedicate significant time and effort to collecting these data themselves, smartphone apps are appealing, inexpensive tools for generating a wealth of high quality data on everyday life on a mass-scale.  Moreover, this type of ‘big data’ could hold the key to better understanding and treatments for many health conditions – like seasonal allergies, dementia and Parkinson’s.

One area where patient data is currently lacking is seasonal allergies.  Allergies are basically the result of the body’s immune system ‘misfiring’ and incorrectly responding to harmless substances or ‘allergens’, such as pollen. These allergies are very common in the Western world. One in four people will experience an allergy at some point in their lives and this number is increasing.  However, the causes are unclear.  Dr Sheena Cruickshank, an immunologist at The University of Manchester, explains the situation: “The rise in seasonal allergies like hay fever could be down to all sorts of things – such as changes in pollen exposure, pollution or maybe a lack of childhood exposure to germs. We have good quality data on many of the suspected causes but we don’t know how people are actually being affected. Gathering real-time data on a mass-scale about when and where symptoms occur could really help to change all of that.”

A nationwide study is currently underway to fill in these blanks and better understand seasonal allergies, all using a smartphone app called #BritainBreathing*. Allergy sufferers act as ‘citizen sensors’, using the app to keep a daily log of their symptoms (or lack thereof), like sneezing, itchy eyes and wheezing, and track them over time. The app does the rest automatically, sharing anonymised reports with the research team, each with a time-stamp and approximate location.
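To make the idea concrete, a report of this kind might look something like the sketch below. The field names and structure here are guesses for illustration only, not the real #BritainBreathing schema; the point is simply that each daily log bundles symptoms with a time-stamp and a deliberately coarse location.

```python
# Hypothetical shape of an anonymised symptom report. All field names are
# illustrative guesses, NOT the real #BritainBreathing data format.
import json
from datetime import datetime, timezone

def make_report(user_token, sneezing, itchy_eyes, wheezing, approx_lat, approx_lon):
    """Bundle a daily symptom log with a time-stamp and a coarse location."""
    return {
        "user": user_token,  # anonymised token, not a name
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "symptoms": {
            "sneezing": sneezing,      # e.g. severity on a 0-3 scale
            "itchy_eyes": itchy_eyes,
            "wheezing": wheezing,
        },
        # Rounding to one decimal place keeps the location approximate (~11 km)
        "location": {"lat": round(approx_lat, 1), "lon": round(approx_lon, 1)},
    }

report = make_report("a3f9", sneezing=2, itchy_eyes=1, wheezing=0,
                     approx_lat=53.4808, approx_lon=-2.2426)
print(json.dumps(report, indent=2))
```

Note how the exact coordinates never leave the function: only the rounded values are stored, which is one simple way such apps can report *where* symptoms occur without pinpointing a user’s home.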

Whilst sometimes trivialised, hay fever symptoms can be severe for some people, and the condition is often associated with others, such as asthma and eczema. Caroline, now 32, has had all three since childhood: “I’ve had eczema since I was a baby, then I got hay fever and asthma later on, around primary school age. At one point I was constantly on antihistamines.” Could a smartphone app help people like Caroline get a better handle on what their triggers might be? “When you’re young everyone else manages it for you, but when you get older you need to build up a picture in your own head to start to think about triggers: what is it, where was I, what was I doing at the time? Everyone carries their phone around now so that would be a good place to start.”

Indeed, decoding data has been key to other recent breakthroughs in the world of allergy research. Whilst big is often beautiful, advances in statistical methods have arguably been just as important in unlocking the insights hidden within the data. For example, combining data from several long-term studies (which collectively tracked almost 10,000 children from birth) helped researchers to question the stereotype of the so-called “allergic march”: a supposedly classic progression of symptoms starting in childhood with eczema, then progressing to wheeze and finally hay fever. Using sophisticated analysis techniques, researchers showed that individuals fall into one of several ‘profiles’ and that this classic sequence is much less common than once thought (less than 7% followed this pattern). Findings like these strengthen the case for acknowledging how variable patterns of allergic conditions can be, with differing symptoms and trajectories.

Teaming smartphone data with data from research studies like these has, to date, been an area with largely untapped potential. However, researchers are increasingly recognising the opportunities in bringing together different sources of data – including smartphones, wearable fitness gadgets and medical records – to shed light on diseases like dementia and Parkinson’s. For example, the 100 for Parkinson’s project invited people to use a smartphone app to track aspects of their health (including sleep quality, mood, exercise, diet and stress) for 100 days and donate their data to research.

Of course, it’s not all plain sailing. Some have expressed concerns about the quality of data, the ability to produce meaningful analyses and safeguarding personal information. However, the ability to work with the public to build large datasets to allow us to gain insights into both health and disease states mean that it’s an option increasingly being considered by a large array of scientific and medical fields. Is the smartphone the future of health research or is the challenge of disentangling the complex data generated by constant tracking more trouble than it’s worth?  We’ll just have to wait and see. I, for one, think it’s an opportunity too big to pass up.

*The free Britain Breathing app is available on the App Store and Google Play now.

Post by: Lamiece Hassan

Seasons and Sefton

In temperate regions such as the UK, ecosystems change with the seasons as our moderate climate shifts slowly throughout the year. These changes follow an annual rhythm, with many species of tree blossoming in spring before shedding their leaves in an impressive, colourful autumn display, leaving just bare branches through the winter days. In sync with this, many animals breed as temperatures increase yet hibernate through cooler days.

For those of you living in Liverpool, student or otherwise, it is well known that Sefton Park is one of the most popular places to visit for its aesthetic beauty. I have lived in Liverpool for four years and have always been intrigued by the ecosystems it has to offer. Here I have documented how the park changes throughout the year by capturing photos on four occasions between September 2016 and May 2017:

(Photos: September 2016, November 2016, March 2017, May 2017)

The science behind these changes is fascinating. One of the most noticeable differences observed in the park can be seen in the trees, specifically in how their leaves reflect the fluctuating seasons. Throughout the winter months, trees enter a period of dormancy in order to survive the low temperatures. However, despite their stark dormant appearance, deep within their branches they are actually busy maintaining themselves through respiration and enzyme synthesis and preparing for the coming spring.

As spring approaches, these trees begin to bud leaves and flowers, a change brought about in response to an increase in temperature and light availability. Throughout the summer months, different shades of green dominate the park. It is the photosynthetic pigment chlorophyll which gives leaves their vibrant green colour. This pigment enables plants to absorb energy from sunlight, specifically, it absorbs light in the blue and red portions of the electromagnetic spectrum while reflecting the near-green portion, therefore producing the vivid shades of green we see throughout the summer.

The breakdown of chlorophyll in the autumn reveals carotenoids in the leaves causing them to change from green to yellow/orange and creating a variety of colour throughout the park. Eventually, leaf abscission occurs.
Leaf abscission is the controlled process by which trees shed their leaves. It occurs at the abscission zone, at the base of the leaf’s stem. Abscission zone cells differentiate early in plant growth and are able to respond to a number of environmental stressors and plant hormones. When light levels start to fall and chlorophyll is degraded, levels of the plant hormone auxin decrease, which in turn increases the abscission zone’s sensitivity to another hormone, ethylene. When the plant is exposed to ethylene, cell wall-degrading enzymes such as cellulase and polygalacturonase are activated and abscission occurs. The trees then enter dormancy and the process repeats itself, in a clear seasonal regulation of growth. And it’s not only trees which follow this cycle: other flowering plants also respond to changes in seasons and sunlight which, in turn, allows many insects and mammals to thrive, building a complex and beautiful ecosystem around these plants.

The images included in this article provide a visual representation of how our planet constantly changes. Sefton provides city dwellers with the ability to witness these changes first hand throughout the year – and we can guarantee you a mystical view on whatever day you decide to visit.

Take home message: Next time you take a trip to Sefton, have a look at the forever changing ecosystems and think about the biological processes occurring beneath the visual changes.

Post by: Alice Brown

Fractals – a bridge between maths, computing and the arts.

A few months ago, I became involved with a group called Moss Code. Their aim is to use computer coding to inspire and engage people from Moss Side, a Manchester suburb with a strong Afro-Caribbean community. I learned that Afro-Caribbean culture actually has a rich heritage of using fractal-like patterns in its art and architecture. Please see this TED talk on the subject. My hope is to make a simple computer program that people can use to generate their own unique fractal patterns, with the possibility of printing them onto t-shirts and fabric bags! So in this post I want to share some of the amazing details of fractals and how such complex behaviour arises from surprisingly simple mathematics.

Figure 1. A range of different Julia Set fractals, all share classic fractal properties including self-similarity and symmetry.

Figure 1 shows a range of different Julia Set fractals. Despite containing very different patterns, they are all generated by the same equation, z = z² + c. So how does such complex behaviour arise from this simple equation? It all hinges on how the variable z grows when you iterate the equation. To clarify, when you iterate an equation you use the answer from the previous calculation as the input to the next.

Let’s use a simple example. Say z starts at 0, and c = 1. The value c is a constant and cannot change; only z is able to change. The first iteration gives z = 0² + 1, which is 1. Now z = 1, so the next iteration gives z = 1² + 1, which is 2. The next iteration gives 5, then 26, then 677, then 458330, then 210066388901, and so on. You can clearly see that z grows very quickly.
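This iteration is easy to reproduce yourself. Here is a minimal Python sketch (using plain whole numbers, as in the example above; the function name is my own choice):

```python
def iterate(c, steps):
    """Return the successive values of z = z**2 + c, starting from z = 0."""
    z = 0
    values = []
    for _ in range(steps):
        z = z * z + c  # feed the previous answer back in as the next input
        values.append(z)
    return values

# Reproducing the worked example with c = 1:
print(iterate(1, 7))  # [1, 2, 5, 26, 677, 458330, 210066388901]
```

Each call to `iterate` just applies the rule over and over, which is all “iterating an equation” means.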

However, for some values of c, the value of z stays much the same even after many iterations. You can tweak c to find the point between z remaining stable and shooting off towards infinity. If you try this, you’ll find that there is no simple cut-off point but a complex, chaotic region, and it is this region that forms the basis of the fractal pattern. In Figure 2, I show this chaotic region by plotting the number of iterations the equation goes through before z reaches a predefined limit.
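The “count the iterations until z escapes” idea can be sketched in a few lines of Python. The limit and the iteration cap below are arbitrary choices for illustration, not the exact values behind the figures:

```python
def escape_count(c, limit=100.0, max_iter=1000):
    """Count iterations of z = z**2 + c (from z = 0) before |z| exceeds `limit`."""
    z = 0
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > limit:
            return n + 1
    return max_iter  # never escaped within max_iter: treated as stable

# A modest change in c separates rapid escape from apparent stability:
print(escape_count(0.5))  # escapes after only a handful of iterations
print(escape_count(0.2))  # stays bounded for the full 1000 iterations
```

Plotting `escape_count` for a sweep of c values is exactly what Figure 2 shows: smooth at first, then chaotic.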

Figure 2. By changing c in the above equation even by a very small amount, we can see the number of iterations needed to reach a predefined threshold changes, at first steadily, but then chaotically.

It begins changing very slowly and predictably, but at some point it becomes chaotic. Sometimes the equation requires many iterations to reach the limit, while given another very similar value of c, the number of iterations required becomes very low. What is causing this behaviour? The simplest answer is positive feedback, or a runaway effect.

Figure 3. The equation z = z² + c is iterated 30 times. The changing absolute value of z is shown for two similar values of c. Note the drastically different behaviour.

This effect is illustrated in Figure 3. Here the blue line increases sharply upwards while the green line fluctuates only slightly. The only difference between the two lines is that the value of c has been altered by 0.003577. For the blue line this change is enough to trigger a very rapid, self-sustaining increase, while the green line rises but then decreases again. It is this property of z and c that lies at the heart of the beautiful fractals in Figure 1.

Getting complex

The fact that the equation z = z² + c can decrease might be confusing. Surely, as z gets large, squaring it would just make it larger? Even if z is negative, squaring it will just turn it positive. So why doesn’t z get ridiculously large for all values of c? At this point it is important to say that the values of c and z are not actually normal numbers; they are complex numbers.

Normal numbers are exactly what you would expect: each is a single value which can be positive, negative, whole or fractional. Complex numbers are a bit more… well, complex. They contain two components: a real part, which is essentially a normal number, and an imaginary part. The imaginary unit (represented using either i or j) squares to a negative number (i² = −1), something no normal number can do. It is this imaginary component of c and z that allows the equation z = z² + c to decrease when it is iterated.
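Python has complex numbers built in (written with a `j` suffix), which makes both points easy to check for yourself: squaring the imaginary unit gives −1, and squaring a complex number whose magnitude is below one actually *shrinks* it:

```python
# The imaginary unit in Python is written 1j.
i = 1j
print(i * i)       # squaring the imaginary unit gives -1

# Squaring can shrink a complex number when its magnitude is below 1:
z = 0.3 + 0.4j
print(abs(z))      # magnitude of z, approximately 0.5
print(abs(z * z))  # magnitude of z squared, approximately 0.25 (= |z|**2)
```

This shrinking under squaring is precisely what lets z = z² + c stay bounded for some values of c.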

Now we have cleared that up, let’s break down what’s going on in a fractal image. The fractals shown in Figure 1 simply show the number of iterations needed for z to reach a threshold (in this case, 100). The two axes represent the real and imaginary components of the complex number c.

Figure 4. A fractal with the number of iterations needed for z to reach 100 labelled to 3 locations.

To get the colour of the image, we simply count the number of iterations needed for z = z² + c to reach 100. At the bottom of Figure 4, only 30 iterations were required, meaning that z increased quickly. Closer to the nucleus of the spiral, z increases more slowly, so the number of iterations rises. If you followed the spiral inwards for ever, you would find that z never reached the threshold and the number of required iterations would be infinite.
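Putting the pieces together, an image like this can be built as a grid scan: each pixel is mapped to a value of c, and its colour is the iteration count. The following is a minimal sketch of that idea (grid size, limit and iteration cap are my own choices, and this is not the code used to make the actual figures):

```python
def escape_grid(re_min, re_max, im_min, im_max, width, height,
                limit=100.0, max_iter=100):
    """For each pixel, map (x, y) to a complex c, iterate z = z**2 + c from
    z = 0, and record how many iterations |z| takes to exceed `limit`."""
    grid = []
    for y in range(height):
        im = im_min + (im_max - im_min) * y / (height - 1)
        row = []
        for x in range(width):
            re = re_min + (re_max - re_min) * x / (width - 1)
            c = complex(re, im)
            z = 0
            count = max_iter  # default: never escaped within max_iter
            for n in range(max_iter):
                z = z * z + c
                if abs(z) > limit:
                    count = n + 1
                    break
            row.append(count)
        grid.append(row)
    return grid

# A coarse 21 x 11 scan of the plane; the counts themselves are the "colours".
counts = escape_grid(-2.0, 0.5, -1.0, 1.0, 21, 11)
```

Feeding each count through a colour map (and using a much finer grid) is essentially all that separates this sketch from the full-resolution images.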

So to summarise, the amazing complexity of fractals is actually based on a simple equation or rule. In this post, I have only covered one type of fractal: the Julia Set. There are of course many others, such as the famous Mandelbrot set, the Cantor set and the Koch snowflake, each with its own set of rules and equations. In my opinion, fractals are most remarkable because these abstract mathematical patterns are seen everywhere in the natural world: from small scales, such as the alveoli in your lungs or crystals of ice on a windscreen, to large scales, like the outline of a coastline or the structure of galaxies. Fractals really bridge the gap between the simple mathematical world and the real world, whilst providing amazing beauty along the way.

Post by: Dan Elijah.

The Nuclear (Waste) War

Article by Rose Linihan, student of Xaverian College (Manchester) and winner of the British Science Association’s  2017 Science Journalism contest.

The United Kingdom currently faces nuclear threat. And no, not that kind. There is in fact a potential energy crisis on its way, involving huge energy shortages and, to be precise, 100,000 tonnes of nuclear waste.

There are currently nine nuclear power stations here in the UK, providing 22% of our total electricity. The Government have decided they want nuclear power to continue to provide a portion of our energy, alongside other low-carbon options. The general public’s perception of nuclear power is notoriously bad, and yet nuclear power is very effective. It’s a low-carbon way of producing the energy needed to power everything in the UK, from our toasters to TVs, and radioactivity is all around us – there’s even radioactivity in bananas!

Nuclear energy is produced by a process called fission, whereby a very unstable isotope of the element uranium is split into two smaller radioactive nuclei, releasing two or three neutrons and a large amount of energy. In a nuclear reactor, the uranium fuel is surrounded by moderators made of graphite (the same material found in pencils), which keep the reaction under control by slowing the neutrons down to the optimum speed for further reactions to occur. Once it has done its job inside the nuclear reactor, this graphite is classed as nuclear waste.

However, our current reactors are now old and require decommissioning and replacing with newer, more advanced models, or else there will be a national energy shortage. This leaves us with the problem of the 100,000 tonnes of radioactive nuclear waste – not to mention 300,000 tonnes worldwide. The NDA (Nuclear Decommissioning Authority) is responsible for decommissioning nuclear waste, and its present plan is to wait 100 years and then bury the waste in a geological disposal facility. Another option is to go down a similar route to the US, whereby waste is shipped in containers and then stored in underground tunnels by machines. These options are both very expensive, costing a whopping £20 billion, not to mention very time-consuming, and suitable geological sites are rare. So what do we do? Dump it at the bottom of the ocean? Bury it somewhere? Launch it into space? Or something else…

Alex Theodosiou is a postdoctoral research associate at The University of Manchester, working in the field of nuclear decommissioning as part of the Nuclear Graphite Research Group. The group works as part of a consortium to come up with novel methods of tackling the nuclear waste crisis. Alex is currently researching the thermal treatment of nuclear graphite, reacting it with oxygen at high temperatures to produce carbon dioxide. This carbon dioxide can then be managed using carbon capture techniques such as liquefaction. Alex says: ‘This will lead to a massive volume reduction in the graphite inventory and should help reduce the overall costs involved with decommissioning, as well as reduce the lengthy timescales currently predicted.’ It could also have wider applications, such as nuclear weapon disposal.

Alex’s laboratory work is small scale and involves using a few grams of nuclear grade graphite and heating it with a tube furnace under various conditions, before using a gas analyser to monitor the species formed. This lab data can then be transferred to an industrial scale by partner companies who use a plasma furnace and greater volumes of graphite, to produce results on 1000x the scale.

Alex and his colleagues hope that together they can develop a commercially viable decommissioning strategy for the nuclear sector to propose to the NDA – and hopefully win the war against nuclear waste!

Informatics for health – an interdisciplinary extravaganza.

A few weeks ago I attended the European Federation for Medical Informatics and the Farr Institute of Health Informatics Research’s Manchester-based conference – Informatics for Health 2017. The conference was a vibrant mix of academic thought topped off with a generous helping of public collaboration, showing that the field of health and medical informatics takes collaboration and public involvement very seriously.

Since health informatics covers all aspects of health-data collection, storage and processing it would be impossible to do justice to the sheer breadth of research presented at this conference in a single article. Therefore, here I will focus on a couple of my personal highlights.

On Tuesday the 25th, Susan Michie from University College London gave a keynote talk about the Human Behavioural Change Project:

With environmental, social and health concerns appearing endemic in our society, Susan noted that one of the best ways to address these issues would be through targeted behavioural change interventions. These take a huge array of forms, from subtle nudges implemented by many governments and large organisations (encouraging everything from litter reduction to targeted urinal use – see here for examples) to less-than-subtle public health campaigns. These interventions are widely documented across the academic literature and show a range of outcomes and successes. Susan outlined a vision where this literature could be used to answer the big question:

‘What behaviour change interventions work, how well, for whom, in what setting, for what behaviours and why?’

This is undoubtedly a pretty ambitious question to answer and it is made harder by the fact that the literature on this subject, although vast, is often fragmented, inconsistent and sometimes incomplete. So how do Susan’s team propose to tackle this big data problem?

The Human Behaviour-Change Project, funded by the Wellcome Trust, draws together some of the best minds in behavioural, computer and information science. Their output will depend on the close working relationships and interplay between all disciplines involved.

Behaviour scientists have been tasked with developing an ‘ontology’, basically a standardised method of categorising different behavioural change interventions. It is then hoped that this standardised ontology can be used to both sort existing literature and as a template on which new studies can be based. It is hoped that this will add some much needed order to the current fragmented literature and pave the way for further analysis. Specifically, computer scientists on this team will use Natural Language Processing (a branch of computer science which employs artificial intelligence and computational linguistics to sort and process large bodies of text) to extract and organise information from these studies, whilst also learning as they process this information.

Finally information scientists, the big data miners, will develop effective user interfaces which allow researchers to delve into this data and to untangle it in a way that reveals answers to many important research questions.

This is undoubtedly a huge task but with the combined input of so many specialists it certainly seems tractable.

On Wednesday the 26th the conference was drawn to a close with a compelling talk from Sally Okun, Vice President for Advocacy, Policy and Patient Safety at PatientsLikeMe, an online patient powered research network. The PatientsLikeMe network partners with 500,000+ patients living with 2700+ conditions and offers a platform for patients to share experiences and where researchers can learn more about treatments directly from those undergoing them. Indeed, more than 90 peer reviewed papers have already stemmed from data collected through the PatientsLikeMe network.

The theory behind this work is compelling and almost raises the question of why such networks are not yet commonplace. Indeed, it’s no secret that online marketers spend billions analysing our search histories and purchase data in an attempt to feed us highly personalised targeted marketing, so why shouldn’t patient experiences be used to tailor personalised medicine? Although there are undoubtedly greater complications linked to the use of patient data, not to mention the perils of misinformation, this is no excuse not to try to work towards a digital ideal.

Sally also discussed the launch of their new platform, the Digital Me. This platform will combine a plethora of personal health data, including genetic data, medical histories and activity tracking – basically, if you can collect it, you can include it. Their hope is that these data can be used to personalise medical treatments, tailoring them to your own individual requirements. Indeed, advances in statistical methods could take us beyond blanket prescribing and into a world where your digital profile is compared to those similar to you (similarity being based on a large number of patient characteristics) and recommendations are made based on the successes and failures of treatments for your nearest digital neighbours (those sharing most of your traits).
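The “nearest digital neighbours” idea can be illustrated with a toy sketch: rank other patients by how many traits they share with you, then look at which treatments helped the closest matches. Everything below (profile fields, trait names, the overlap-counting similarity measure) is made up for illustration and is not PatientsLikeMe’s actual matching algorithm:

```python
# Toy sketch of "nearest digital neighbours" matching. All data and the
# similarity measure are invented for illustration purposes only.

def shared_traits(a, b):
    """Number of traits two patient profiles have in common."""
    return len(set(a["traits"]) & set(b["traits"]))

def nearest_neighbours(me, others, k=2):
    """Return the k profiles sharing the most traits with `me`."""
    return sorted(others, key=lambda p: shared_traits(me, p), reverse=True)[:k]

patients = [
    {"id": 1, "traits": {"asthma", "eczema", "age_30s"}, "helped_by": "drug_A"},
    {"id": 2, "traits": {"asthma", "age_30s", "hay_fever"}, "helped_by": "drug_B"},
    {"id": 3, "traits": {"diabetes", "age_60s"}, "helped_by": "drug_C"},
]
me = {"traits": {"asthma", "hay_fever", "age_30s"}}

# Which treatments helped the patients most like me?
for p in nearest_neighbours(me, patients):
    print(p["id"], p["helped_by"])
```

A real system would of course use far richer profiles and far more sophisticated similarity measures, but the underlying principle – recommend based on outcomes for the most similar patients – is the same.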

As my first experience of an informatics-based conference, I was struck by both the breadth and depth of knowledge in the field and the ethos of working together to optimise our outputs – a skill which is often found lacking in other fields. It was also plain that researchers in this area value patient input and many elements of this conference were tailored to be accessible and engaging for a lay audience. Indeed, representatives from HeRC’s own patient public forum who attended the event enjoyed the opportunity to engage further with researchers and learn about engagement and involvement work being conducted across the field.

Post by: Sarah Fox

Vets, pets, data sets and beyond.

From the 10th to the 14th of April 2017 researchers from the UK’s flagship project on companion animal surveillance, the Small Animal Veterinary Surveillance Network (SAVSNET), set up shop at Edinburgh’s international science festival.

SAVSNET* uses big data to survey animal disease across the UK and ultimately aims to improve animal care through identification of trends in diseases observed by veterinary practitioners.

This work offers huge benefits for companion animals, meaning that interventions can be targeted towards those most at risk and risk factors for disease can be identified across the population.

There is also significant crossover between this work and that of human health data science. Indeed, lessons learned from the processing and analysis of big data from vets may be used to inform aspects of human data analysis while work on shared and zoonotic diseases, antibacterial use and resistance also offer significant benefit to human health.

So, for this week, we took our science to the public to engage, inspire, raise awareness and stimulate discussion about our work.

SAVSNET mascots Alan, Phil and PJ

The SAVSNET Liverpool team worked hard to develop a wide range of activities designed to bring data science to life and to raise awareness of their work while Dr Sarah Fox, from HeRC’s PPI team joined the fun to expand discussions beyond pets and into the realms of human health.

Our stall was designed to take the public on a data journey, a journey which began with our resident mascots Alan, Phil and PJ, who were suffering from a parasitic problem. Hidden in our fluffy friends' fur were a host of unwanted passengers – ticks (not the real thing, but small sticky models we used to represent them). Visitors helped us to remove these pests from our mascots and learned that every time this procedure is performed by a vet, a medical record is created. Indeed, vets across the country are regularly called upon to remove such pests and, provided the practice is signed up to the SAVSNET system, information on these procedures is transferred to data scientists.

The next stage of our data journey is one health researchers are very familiar with but which may remain a mystery to the general public – sorting and analysing these records.

Interactive sticker-wall showing seasonal tick prevalence.

Our stall was equipped with a large touch-screen PC, linked to the SAVSNET database and programmed to pull out and de-identify all vet records which made reference to the word 'tick'. We explained that, in order to perform a complete analysis of the prevalence of ticks across the UK, data scientists needed to manually sort through these selected records and confirm the presence or absence of a tick at the time of the recorded consultation. Visitors to our stall could then take part in their own citizen science project, helping us to sort through these records, uncovering ticks and adding their findings to our maps of regional and seasonal tick prevalence. Dogs came up trumps as the pet most likely to visit their local vet to have ticks removed, while the ticks themselves popped up indiscriminately all around the UK (even in the centre of London), with a preference for outings during the warmer summer months.

In the final stage of our data journey, visitors had the chance to get hands-on with some data science theory.

A few beautifully coloured ticks alongside our wooden data blocks.

Dr Alan Radford, a reader in infection biology from the University of Liverpool, developed a novel way of exploring sample theory and odds ratios using wooden building blocks.

This activity consisted of hundreds of wooden blocks sporting either cat or dog stickers, a subsection of which also housed a smaller tick sticker (on their rear). Visitors were told that these blocks represented all the information available on cats and dogs in the UK. After conceding that they would not be able to count all of these blocks independently, visitors were encouraged to form groups and each choose a smaller sub-sample of ten blocks. Visitors counted how many of their chosen ten blocks showed cat stickers and how many showed dog stickers. As a rule, most groups of ten contained more dogs than cats – since overall there were more dog blocks in the total population. However, we inevitably also saw variability, and some individuals chose more cat blocks than dog blocks. This tactile and visual example of sampling theory allowed a discussion of sample bias and of how increasing the number or size of samples taken would bring you closer to the true population value. Finally, visitors were asked to turn their blocks around and count how many of their dogs and cats also had ticks. In our example, cats were more likely to house a resident parasite but, with fewer cats to sample from, this was not always immediately obvious. Specifically, if a visitor chose 7 dog blocks and 3 cat blocks, then found that 4 of their dogs had ticks while only 2 of their cats did, they might be forgiven for thinking that, within our sample, dogs were more prone to ticks. However, from this data our older visitors were taught how to calculate an odds ratio, which could show that our cats were actually more likely to house ticks than our dogs. It was also noted that similar calculations are often used to calculate risk in medical studies and that it is often these values which are reported in the media.
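For anyone wanting to try the sums themselves, here is a minimal Python sketch using the illustrative numbers above (7 dogs, 4 with ticks; 3 cats, 2 with ticks). The function name and structure are my own, for illustration only – they are not part of SAVSNET's tools:

```python
def odds_ratio(exposed_events, exposed_total, control_events, control_total):
    """Odds ratio: odds of the event in group 1 divided by odds in group 2."""
    odds_exposed = exposed_events / (exposed_total - exposed_events)
    odds_control = control_events / (control_total - control_events)
    return odds_exposed / odds_control

# Worked example from the block activity:
# cats: 2 of 3 had ticks; dogs: 4 of 7 had ticks.
or_cats_vs_dogs = odds_ratio(2, 3, 4, 7)
print(round(or_cats_vs_dogs, 2))  # 1.5
```

An odds ratio above 1 means the first group has higher odds of the event – here cats come out at 1.5 times the odds of dogs, even though dogs showed more ticks in raw counts.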

The view down our microscope of our preserved pests.

Alongside our data blocks, younger visitors also had the chance to get up close and personal with real life ticks, through both a colouring exercise and by peeking down our microscope at a range of preserved specimens.

Finally, we discussed how tick data and similar veterinary information could be used to improve the health of companion animals and to better understand disease outbreaks across the country. It was at this point we also introduced the idea that similar methods could be applied to human health data in order to streamline and improve our healthcare services. Our discussions centred on the successes already shown in The Farr Institute for Health Informatics' 100 Ways case studies and HeRC's work, including improvements in surgical practice and regional health improvements from HeRC's Born in Bradford study – whilst also engaging in a frank discussion around data privacy and research transparency. Visitors were encouraged to document their views on these uses of big data on our post-it note wall, gathering comments in response to the questions: “What do you think of big data?” and “Should we use human data?” A majority of visitors chose to comment on our second question, generally expressing positive feelings on the topic, with many also noting the need for tight data privacy controls. Comments of note include:

Should we use human data?
Yes, but with controls and limited personal info
We need to get better at persuading people to change behaviour and ask the right questions to collect the right data.
Yes, it’s towards a good cause and can help people.
Using data is a good idea if it helps to make people better.
Yes, as long as there are sufficient controls in place.
Yes, but don’t sell it.
Yes, if you are careful not to breach privacy.

The data detectives.

Overall we had a great time at the festival and hope everyone who visited our stall took away a little bit of our enthusiasm and a bit more knowledge of health data science.

* co-funded by the BBSRC and in collaboration with the British Small Animal Veterinary Association (BSAVA) and the University of Liverpool.

Post by: Sarah Fox


Digital technologies: a new era in healthcare

Our NHS provides world-class care to millions of people every year. But, due to funding cuts and the challenges of providing care to an ageing population with complex health needs, this vital service is unsurprisingly under strain.  At the same time, with the mobile-internet at our fingertips, we have become accustomed to quick, on-demand services. Whether it’s browsing the internet, staying connected on social media or using mobile banking, our smartphones play important roles in nearly every aspect of our lives. It is therefore not surprising to find that over 70% of the UK population are now going online for their healthcare information.

This raises a question: could digital health (in particular mobile health apps) play a role in bolstering our faltering health service?

Unfortunately, to date, healthcare has lagged behind other services in the digital revolution. While most other sectors jumped aboard the digital train, healthcare remained reluctant. Nevertheless, the potential for mobile technology to track, manage and improve patient health is being increasingly recognised.

ClinTouch, for instance, is a mobile health intervention co-created by a team of Manchester-based health researchers at The Farr Institute of Health Informatics’ Health eResearch Centre. ClinTouch is a psychiatric-symptom assessment tool developed to aid the management of psychosis (a condition affecting 1 in 100 people). The app was co-designed by health professionals and patients, ensuring that the final output reflected the needs of both patients and clinicians. It combines a patient-focussed front end, which allows users to record and monitor their symptoms, with a system that simultaneously feeds this information back to clinicians to provide an early warning of possible relapse. The project has the potential to empower patients and improve relationships between the user and their physician. Moreover, if ClinTouch can reduce relapse cases by 5%, it will save the average NHS trust £250,000 to £500,000 per year (equating to a possible saving of £119 million to the NHS over three years!).

Adopting disruptive technologies such as ClinTouch can have meaningful benefits for patients and the NHS. And there are signs that the healthcare sector is warming up to the idea. Earlier this year the National Institute for Health and Care Excellence (NICE) announced that they are planning to apply health technology assessments to mobile health apps and only this week, the NHS announced a £35 million investment in digital health services.

On Thursday 27th April, the North West Biotech Initiative will be hosting an interactive panel discussion on the future of digital health. We will be joined by a fantastic line-up of speakers providing a range of perspectives on the topic, including:

Professor Shôn Lewis: Principal Investigator of the ClinTouch project and Professor of Adult Psychiatry at The University of Manchester, who will be speaking about the development and potential impact of the ClinTouch app.

Tom Higham: former Executive Director at FutureEverything and a freelance digital art curator and producer, interested in the enabling power of digital technology. Tom has also been diagnosed with type 1 diabetes, has worked with the diabetes charity JDRF UK and has written about the benefits of, and the need for improvements in, mobile apps for diabetes care.

Anna Beukenhorst: a PhD candidate currently working on the Cloudy with a Chance of Pain project, a nationwide smartphone study investigating the association between the weather and chronic pain in more than 13,000 participants.

Reina Yaidoo: founder of Bassajamba, a social enterprise whose main aim is to increase participation of underrepresented groups in science and technology. Bassajamba are currently working with several diabetes support groups to develop self-management apps which incorporate an aspect of gamification.

Professor Tjeerd Van Staa: professor of eHealth Research in the Division of Informatics, Imaging and Data Sciences, at The University of Manchester. He is currently leading the CityVerve case on the use of Internet of Things technology to help people manage their Chronic Obstructive Pulmonary Disease (COPD).

Dr Mariam Bibi: Senior Director of Real World Evidence at Consulting for McCann Health, External advisor for Quality and Productivity at NICE and an Associate Lecturer at Manchester Metropolitan University. She will be talking about the regulatory aspect of bringing digital technology to healthcare.

The event is open to anyone with an interest in digital health, including the general public, students and academics. It is free to attend and will be a great opportunity to understand the potential role of digital technology in healthcare and to network with local business leaders, academics and students working at the forefront of digital healthcare.

Date: 27th April 2017
Venue: Moseley Theatre, Schuster Building, The University of Manchester
Time: 3.30pm – 6.00pm
Register to attend! http://bit.ly/2o4fzd7
Questions about the event? Please get in touch with us at: [email protected]

Guest post by: Fatima Chunara


Neural coding 2: Measuring information within the brain

In my previous neuroscience post, I talked about the spike-triggered averaging method scientists use to find what small part of a stimulus a neuron is capturing. This tells us what a neuron is interested in, such as the peak or trough of a sound wave, but it tells us nothing about how much information a neuron is transmitting about a stimulus to other neurons. We know from my last neuroscience post that a neuron will change its spike firing when it senses a stimulus it is tuned to. Unfortunately, neurons are not perfect and they make mistakes, sometimes staying quiet when they should fire or firing when they should be quiet. Therefore, when a neuron fires, listening neurons cannot be fully sure a stimulus has actually occurred. These mistakes lead to a loss of information as signals get sent from neuron to neuron, like Chinese whispers.

Figure 1: Chinese whispers is an example of information loss during communication. Source: jasonthomas92.blogspot.com

It is very important for neuroscientists to measure information flow within the brain because it underpins all other computational processes that happen there. After all, before the brain can process information it must first transmit it correctly! To understand and quantify information flow, neuroscientists use a branch of mathematics known as information theory. Information theory centres on the idea of a sender and a receiver. In the brain, both the sender and receiver are normally neurons. The sender neuron encodes a message about a stimulus in a sequence of spikes. The receiving neuron (or neurons) tries to decode this spike sequence and ascertain what the stimulus was. Before the receiving neuron gets the signal, it has little idea what the stimulus was; we say this neuron has a high uncertainty about the stimulus. Receiving a signal from the sending neuron reduces this uncertainty, and the extent of this reduction depends on the amount of information carried in the signal. Just in case that is not clear, let's use an analogy: imagine you are a lost hiker with a map.

Figure 2: A map from one of my favourite games. Knowing where you are requires information. Source: pac-miam.deviantart.com

You have a map with 8 equally sized sectors and all you know is that you could be in any of them. You then receive a phone call telling you that you are definitely within 2 sectors of the map. This phone call actually contains a measurable amount of information. If we assume that, prior to receiving the phone call, you are equally likely to be anywhere, then you have a 1/8 chance of being in each sector. We need a measure of uncertainty, and for this we use something called Shannon entropy. This measurement is related to the number of different possible sectors on the map, so a map with 2000 sectors will have greater entropy than a map with 10 sectors. In our example we have an entropy of 3 bits. After receiving the message, the entropy drops to 1 bit because there are now only two sectors you could be in. So the telephone call caused our uncertainty about our location to drop from 3 bits to 1 bit of entropy. The information within the phone call is equal to this drop in uncertainty, which is 3 – 1 = 2 bits. Notice how we didn't need to know anything about the map itself or the exact words of the telephone call, only the probabilities of your location before and after the call.
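For equally likely outcomes, Shannon entropy is simply log2 of the number of possibilities. The hiker example can be checked in a few lines of Python (a sketch for illustration only):

```python
from math import log2

def entropy_bits(n_equally_likely):
    """Shannon entropy (in bits) of a uniform choice between n outcomes."""
    return log2(n_equally_likely)

# Before the call: the hiker could be in any of 8 sectors.
h_before = entropy_bits(8)   # 3.0 bits
# After the call: narrowed down to 2 sectors.
h_after = entropy_bits(2)    # 1.0 bit
# Information in the call = the reduction in uncertainty.
info = h_before - h_after
print(info)  # 2.0
```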

In neurons, we can calculate information without knowing the details of the stimulus a neuron is responding to. The trick is to stimulate a neuron in the same way over many repeated trials using a highly varying, white-noise stimulus (see the bottom trace in Figure 3).

Figure 3: Diagram showing a neuron’s response to 5 repeats of identical neuron input (bottom). The responses are shown as voltage traces (labelled neuron response). The spike times can be represented as points in time in a ‘raster plot’ (top).

So how does information theory apply to this? Well, recall how Shannon entropy is linked with the number of possible areas contained within a map. In a neuron’s response, entropy is related to the number of different spike sequences a neuron can produce. A neuron producing many different spike sequences has a greater entropy.

In the raster plots below (Figure 4) are the responses of three simulated neurons using computer models that closely approximate real neuron behaviour. They are responding to a noisy stimulus (not shown) similar to the one shown at the bottom of Figure 3. Each dot is a spike fired at a certain time on a particular trial.

Figure 4: Raster plots show three neuron responses transmitting different amounts of information. The first (top) transmits about 9 bits per second of response, the second (middle) transmits 2 bits/s and the third (bottom) transmits 0.7 bits/s.

In all responses, the neuron is generating different spike sequences, some spikes are packed close together in time, while at other times, these spikes are spaced apart. This variation gives rise to entropy.

In the response of the first neuron (top), the spike sequences change over time but do not change at all across trials. This is an unrealistically perfect neuron. All the variable spike sequences follow the stimulus with 100% accuracy. When the stimulus repeats in the next trial, the neuron simply fires the same spikes as before, producing vertical lines in the raster plot. Therefore, all the entropy in the neuron's response is driven by the stimulus and so transmits information. This neuron is highly informative; despite firing relatively few spikes it transmits about 9 bits/second…pretty good for a single neuron.

The second neuron (Figure 4, middle) also shows varying spike sequences across time, but now these sequences vary slightly across trials. We can think of this response as having two types of entropy: a total entropy, which measures the total amount of variation a neuron can produce in its response, and a noise entropy. This second entropy is caused by the neuron changing its response to unrelated influences, such as inputs from other neurons, electrical/chemical interference and random fluctuations in signalling mechanisms within the neuron. The noise entropy causes the variability across trials in the raster plot and reduces the information transmitted by the neuron. To be more precise, the information carried in this neuron's response is whatever remains of the total entropy once the noise entropy is subtracted…about 2 bits/s in this case.

In the final response (bottom), the spikes from the neuron only weakly follow the stimulus and are highly variable across trials. Interestingly, it shows the most complex spike sequences of all three examples. It therefore has a very large total entropy, which means it has the capacity to transmit a great deal of information. Unfortunately, much of this entropy is wasted because the neuron spends most of its time varying its spike patterns randomly instead of with the stimulus. This makes its noise entropy very high and the useful information low: it transmits a measly 0.7 bits/s.
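The 'total entropy minus noise entropy' calculation described above can be sketched in code. Below is a toy Python illustration of the idea (using made-up binary spike 'words' rather than real recordings; the function names are my own): total entropy is measured from all words pooled together, noise entropy from the variability across trials at each matched time point, and the difference is the information.

```python
from collections import Counter
from math import log2

def entropy(counts):
    """Shannon entropy (bits) of a distribution given by raw counts."""
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c > 0)

def information_per_word(trials):
    """Information = total entropy of spike 'words' minus noise entropy.

    trials: a list of repeated trials, each a list of spike words
    (tuples of 0/1 time bins) at matched times across repeats."""
    # Total entropy: variability of words pooled over all times and trials.
    all_words = [word for trial in trials for word in trial]
    h_total = entropy(Counter(all_words).values())
    # Noise entropy: average variability across trials at each time point.
    n_times = len(trials[0])
    h_noise = 0.0
    for t in range(n_times):
        words_at_t = Counter(trial[t] for trial in trials)
        h_noise += entropy(words_at_t.values())
    h_noise /= n_times
    return h_total - h_noise

# Toy example: a perfectly reliable neuron fires identical words on every
# trial, so its noise entropy is zero and information equals total entropy.
perfect = [[(1, 0), (0, 1), (1, 1)]] * 5
print(information_per_word(perfect))  # ≈ 1.585 bits (log2 of 3 distinct words)
```

A perfectly unreliable neuron, whose words vary randomly across trials, would instead have a noise entropy close to its total entropy and transmit almost nothing – just like the bottom raster in Figure 4.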

So, what should you take away from this post? Firstly, that neuroscientists can accurately measure the amount of information a neuron can transmit. Second, that neurons are not perfect and cannot respond in exactly the same way even to repeated identical stimuli. This leads to the final point: the noise within neurons limits the amount of information they can communicate to each other.

Of course, I have only shown a simple view of things in this post. In reality, neurons work together to communicate information and overcome the noise they contain. Perhaps in the future, I will elaborate on this further…

Post by: Dan Elijah.

To share or not to share: delving into health data research.

In January this year I made a bold move – well, at least bold for someone who is often accused of being painfully risk averse. I waved a fond farewell to life in the lab to take on a new role where I have been able to combine my training as a researcher with my passion for science engagement. In this role I work closely with health researchers and the public, building the scaffolding needed for the two to work together and co-produce research which may improve healthcare for millions of patients across the UK. The group I work alongside are collectively known as the Health eResearch Centre (part of the world-leading Farr Institute for Health Informatics) and are proud of their mission of using de-identified electronic patient data* to improve public health.

For me, taking on this role has felt particularly poignant and has led me to think deeply about the implications and risks of sharing such personal information. This is because, like many of you, my health records contain details which I'm scared to share with a wider audience. So, with this in mind, I want to invite you inside my head to explore the reasons why I believe that, despite my concerns, sharing such data with researchers is crucial for the future of public health and the NHS.

It’s no secret that any information stored in a digital form is at risk from security breaches, theft or damage and that this risk increases when information is shared. But, it’s also important to recognise that these risks can be significantly reduced if the correct structures are put in place to protect this information. Not only this but, when weighing up these risks, I also think that it is immensely important to know the benefits sharing data can provide.

With this in mind, I was really impressed that, within the first few weeks of starting this role, I was expected to complete some very thorough data security training (which, considering I won't actually be working directly with patient data, almost seemed like overkill). I was also introduced to the catchily titled ISO 27001 which, if my understanding is correct, certifies that an organisation is running a 'gold standard' framework of policies and procedures for data protection – something we as a group hope to obtain before the year is out. This all left me with the distinct feeling that security is a major concern for our group and is considered to be of paramount importance to our work. I also learned about data governance within the NHS and how each NHS organisation has an assigned data guardian who is tasked with protecting the confidentiality of patient and service-user information. So, I'm quite sure information security is taken exceedingly seriously at every step of the data sharing chain.

But what will the public gain from sharing their health data?

We all know that, in this cyber age, most of us have quite an extensive digital-data footprint. It's no accident that my Facebook feed is peppered with pictures of sad dogs encouraging me to donate money to animal charities, while Google proudly presents me with adverts for 'geek gear' and fantasy-inspired jewellery. I don't make much effort to ensure that my internet searches are private, so marketers probably see me as easy prey. This type of data mining happens all the time, with little benefit to you or me and, although we may install ad-blocking software, few of us make a concerted effort to stop it from happening. Health data, on the other hand, is not only shared in a measured and secure manner but could offer enormous benefits to the UK's health service and to us as individual patients.

Our NHS is being placed under increasing financial strain, with the added pressure of providing care to a growing, ageing population with complex health needs – meaning that it has never been more important to find innovative ways of streamlining and improving our care system. This is where health data researchers can offer a helping hand. Work using patient data can identify 'at risk' populations, allowing health workers to target interventions at these groups before they develop health problems. New drugs and surgical procedures can also be monitored to ensure better outcomes and fewer complications.

And this is already happening across the UK – the Farr Institute are currently putting together a list of 100 projects which have already improved patient health – you can find these here. Also, in 2014 the #datasaveslives campaign was launched. This highlights the positive impact health-data research is having in the UK by building a digital library of this work – type #datasaveslives into Google to explore this library or join the conversation on Twitter.

One example is work on a procedure to unblock arteries and improve outcomes for patients suffering from coronary heart disease:

In the UK this procedure is carried out in one of two ways: Stents (a special type of scaffolding used to open up arteries and improve blood flow) can be inserted either through a patient’s leg (the transfemoral route) or via the wrist (the transradial route). Insertion through the wrist is a more modern technique which is believed to be safer and less invasive – however both methods are routinely performed across the UK.
Farr Institute researchers working between The University of Manchester's Health eResearch Centre and Keele University used de-identified health records (with all personal information removed) to analyse the outcomes of 448,853 surgical stent insertion procedures across the UK between 2005 and 2012.

This study allowed researchers to calculate, for the first time, the true benefits of the transradial method. They showed that the use of transradial surgery increased from 14% in 2005 to 58% in 2012 – a change which is thought to have saved an estimated 450 lives. They also discovered that the South East of England had the lowest uptake of surgery via the wrist.

This work shows one example of how research use of existing health records can highlight ways of improving patient care across the country – thanks to this research the transradial route is now the dominant surgical practice adopted across the UK (leading to an estimated 30% reduction in the risk of mortality in high risk patients undergoing this procedure).

Reading through all these studies and imagining the potential for future research does convince me that, even with my concerns, the benefits of sharing my data far outweigh the risks. But I also recognise that it is of paramount importance for patients and the public to be aware of how this process works and to play an active role in shaping research. It seems that when the public have the opportunity to question health data scientists and are fully informed about policy and privacy, many feel comfortable with sharing their data. This shows that we need to strive towards transparency and to keep an active dialogue with the public to ensure we are really addressing their needs and concerns.

This is an amazingly complex and interesting field of study, combining policy, academic research, public priority setting and oodles of engagement and involvement – so I hope over the next year to be publishing more posts covering aspects of this work in more detail.

Post by: Sarah Fox

*The kind of data which is routinely collected during doctor and hospital appointments but with all personal identifiable information removed.

 
