Fractals – a bridge between maths, computing and the arts.

A few months ago, I became involved with a group called Moss Code. Their aim is to use computer coding to inspire and engage people from the strongly Afro-Caribbean Manchester suburb of Moss Side. I learned that Afro-Caribbean culture has a strong heritage of using fractal-like sequences in its art and architecture (please see this TED talk on the subject). My hope is to make a simple computer program that lets people generate their own unique fractal patterns, with the possibility of printing them onto t-shirts and fabric bags! So in this post I want to share some of the amazing details of fractals and how such complex behaviour arises from surprisingly simple mathematics.

Figure 1. A range of different Julia Set fractals, all sharing classic fractal properties including self-similarity and symmetry.

Figure 1 shows a range of different Julia Set fractals. Despite containing very different patterns, they are all generated by the same equation, z = z² + c. So how does such complex behaviour arise from this simple equation? It all hinges on how the variable z grows when you iterate the equation. To clarify: when you iterate an equation, you use the answer from the previous calculation as the input to the next.

Let's use a simple example. Say z starts at 0, and c = 1. The value c is a constant and cannot change; only z changes. The first iteration gives z = 0² + 1, which is 1. Now z = 1, so the next iteration is z = 1² + 1, which is 2. The next iteration gives 5, then 26, then 677, then 458330, then 210066388901, and so on. You can clearly see that z grows very quickly.
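This runaway growth is easy to check with a few lines of Python (a quick sketch of my own, using plain integers):

```python
def iterate(z, c, n):
    """Apply z -> z**2 + c repeatedly, collecting each result."""
    results = []
    for _ in range(n):
        z = z * z + c
        results.append(z)
    return results

print(iterate(0, 1, 7))
# -> [1, 2, 5, 26, 677, 458330, 210066388901]
```

After only seven iterations z is already in the hundreds of billions.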

However, for some values of c, the value of z stays much the same even after many iterations. You can tweak c to search for the point between z remaining stable and shooting off towards infinity. If you try this, you'll find that there is no simple cut-off point but a complex, chaotic region, and it is this region that forms the basis of the fractal pattern. In Figure 2, I show this chaotic region by plotting the number of iterations the equation goes through before z reaches a predefined limit.

Figure 2. By changing c in the above equation, even by a very small amount, we can see that the number of iterations needed to reach a predefined threshold changes: at first steadily, but then chaotically.

It begins changing very slowly and predictably, but at some point it becomes chaotic. Sometimes the equation requires many iterations to reach the limit, while given another very similar value of c, the number of iterations required becomes very low. What is causing this behaviour? The simplest answer is positive feedback, or a runaway effect.

Figure 3. The equation z = z² + c is iterated 30 times. The changing absolute value of z is shown for two similar values of c. Note the drastically different behaviour.

This effect is illustrated in Figure 3. Here the blue line increases sharply while the green line fluctuates only slightly. The only difference between the two lines is that the value of c has been altered by 0.003577. For the blue line, this change is enough to trigger a very rapid, self-sustaining increase, while the green line rises but then decreases again. It is this property of z and c that lies at the heart of the beautiful fractals in Figure 1.

Getting complex

The fact that the equation z = z² + c can decrease might be confusing. Surely, as z gets large, squaring it would just make it larger; even if z is negative, squaring it will just make it positive. So why doesn't z become ridiculously large for all values of c? At this point it is important to say that the values of c and z are not actually normal numbers: they are complex numbers.

Normal (real) numbers are exactly what you would expect: each is a single value which can be positive, negative, a fraction or a decimal. Complex numbers are a bit more…well, complex. They contain two components: a real part and an imaginary part. The real part behaves like a normal number, but the imaginary part (written using either i or j) squares to a negative number, something a normal number can never do. It is this imaginary component of c and z that allows the equation z = z² + c to decrease when it is iterated.
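Python's built-in complex type makes this easy to see for yourself. The constant c = -0.5 + 0.5j below is just one example I picked of a value that keeps z bounded; it is not taken from the figures:

```python
def magnitudes(c, n=30):
    """Track the absolute value of z while iterating z -> z**2 + c
    from z = 0, stopping early if z escapes past 100."""
    z = 0 + 0j
    mags = []
    for _ in range(n):
        z = z * z + c
        mags.append(abs(z))
        if abs(z) > 100:
            break
    return mags

print(max(magnitudes(-0.5 + 0.5j)))  # stays small: the imaginary part reins z in
print(len(magnitudes(1 + 0j)))       # escapes past 100 after only 5 iterations
```

For the complex constant, |z| never grows large even after 30 iterations, whereas c = 1 blows up almost immediately, just as in Figure 3.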

Now we have cleared that up, let's break down what's going on in a fractal image. The fractals in Figure 1 simply show the number of iterations needed for z to reach a threshold (in this case, 100). The two axes represent the real and imaginary components of the complex number c.

Figure 4. A fractal with the number of iterations needed for z to reach 100 labelled at three locations.

To get the colour of the image, we simply count the number of iterations needed for z = z² + c to reach 100. In the bottom of Figure 4, only 30 iterations were required, meaning that z increased quickly. Closer to the nucleus of the spiral, z increases more slowly, so the number of iterations rises. If you followed the spiral inwards forever, you would find that z never reaches the threshold and the number of required iterations is infinite.
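Putting this together, a minimal escape-time sketch in Python might look like the following (the grid range and resolution here are my own choices, not those used to make the figures):

```python
def escape_count(c, threshold=100, max_iter=50):
    """Count iterations of z -> z**2 + c (starting from z = 0)
    before the absolute value of z exceeds the threshold."""
    z = 0 + 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > threshold:
            return n + 1
    return max_iter  # never escaped: treat as the maximum

# A small grid of counts over the complex plane; mapping each
# count to a colour produces the fractal image.
size = 40
image = [[escape_count(complex(-2 + 4 * x / size, -2 + 4 * y / size))
          for x in range(size)]
         for y in range(size)]
print(image[size // 2][size // 2])  # the centre (c = 0) never escapes
```

Each pixel's colour is just its iteration count, which is why the chaotic boundary between "escapes quickly" and "never escapes" draws itself.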

So to summarise, the amazing complexity of fractals is actually based on a simple equation or rule. In this post, I have only covered one type of fractal: the Julia Set. There are of course many others, such as the famous Mandelbrot set, the Cantor set and the Koch snowflake, each with its own set of rules and equations. In my opinion, fractals are most remarkable because these abstract mathematical patterns are seen everywhere in the natural world: from small scales, such as the alveoli in your lungs or crystals of ice on a windscreen, to large scales, like the outline of a coastline or the structure of galaxies. Fractals really bridge the gap between the simple mathematical world and the real world, whilst providing amazing beauty along the way.

Post by: Dan Elijah.


The Nuclear (Waste) War

Article by Rose Linihan, student of Xaverian College (Manchester) and winner of the British Science Association's 2017 Science Journalism contest.

The United Kingdom currently faces nuclear threat. And no, not that kind. There is in fact a potential energy crisis on its way, involving huge energy shortages and 100,000 tonnes of nuclear waste, to be precise.

There are currently nine nuclear power stations here in the UK, providing 22% of our total electricity. The Government have decided they want nuclear power to continue to provide a portion of our energy, alongside other low-carbon options. Public perception of nuclear power is notoriously bad, and yet nuclear power is very effective. It's a low-carbon way of producing the energy needed to power everything in the UK, from our toasters to our TVs. And radioactivity is all around us – there's even radioactivity in bananas!

Nuclear energy itself is produced by a process called fission, whereby a very unstable isotope of uranium splits into two smaller radioactive nuclei, releasing two or three neutrons and a great deal of energy. In a nuclear reactor, the uranium fuel is surrounded by graphite moderators (the material that used to be in pencils), which keep the reaction under control by slowing the neutrons down so they're at the optimum speed for further reactions to occur. After it has done its job inside the reactor, this graphite is known as nuclear waste.

However, our current reactors are now old and require decommissioning and replacing with newer, more advanced models, or else there will be a national energy shortage. This leaves us with the problem of 100,000 tonnes of radioactive nuclear waste, not to mention 300,000 tonnes worldwide. The NDA (Nuclear Decommissioning Authority) is responsible for decommissioning nuclear waste, and its present plan is to wait 100 years and then bury the waste in a geological disposal facility. Another option is to follow a similar route to the US, whereby waste is shipped in containers and then stored in underground tunnels by machines. Both options are very expensive, costing a whopping £20 billion, not to mention very time-consuming, and suitable geological sites are rare. So what do we do? Dump it at the bottom of the ocean? Bury it somewhere? Launch it into space? Or something else…

Alex Theodosiou is a post-doctoral research associate at Manchester University, working in the field of nuclear decommissioning as part of the Nuclear Graphite Research Group. They work as part of a consortium to come up with novel methods of tackling the nuclear waste crisis. Alex is currently researching the thermal treatment of nuclear graphite by reacting it with oxygen at high temperatures to produce carbon dioxide. This carbon dioxide can then be managed using carbon capture techniques such as liquefaction. Alex says ‘This will lead to a massive volume reduction in the graphite inventory and should help reduce overall costs involved with decommissioning, as well as reduce the lengthy timescales currently predicted.’ It could also have wider applications such as nuclear weapon disposal.

Alex’s laboratory work is small scale and involves using a few grams of nuclear grade graphite and heating it with a tube furnace under various conditions, before using a gas analyser to monitor the species formed. This lab data can then be transferred to an industrial scale by partner companies who use a plasma furnace and greater volumes of graphite, to produce results on 1000x the scale.

Alex and his colleagues hope that together they can develop a commercially viable decommissioning strategy for the nuclear sector to propose to the NDA, and hopefully win the war against nuclear waste!

Informatics for health – an interdisciplinary extravaganza.

A few weeks ago I attended the European Federation for Medical Informatics and the Farr Institute of Health Informatics Research’s Manchester-based conference – Informatics for Health 2017. The conference was a vibrant mix of academic thought topped off with a generous helping of public collaboration, showing that the field of health and medical informatics takes collaboration and public involvement very seriously.

Since health informatics covers all aspects of health-data collection, storage and processing it would be impossible to do justice to the sheer breadth of research presented at this conference in a single article. Therefore, here I will focus on a couple of my personal highlights.

On Tuesday the 25th, Susan Michie from University College London gave a keynote talk about the Human Behavioural Change Project:

With environmental, social and health concerns appearing endemic in our society, Susan noted that one of the best ways to address these issues would be through targeted behavioural change interventions. These take a huge array of forms, from subtle nudges implemented by many governments and large organisations (encouraging everything from litter reduction to targeted urinal use – see here for examples), to less than subtle public health campaigns. These interventions are widely documented across academic literature and show a range of outcomes and successes. Susan outlined a vision where this literature could be used to answer the big question:

‘What behaviour change interventions work, how well, for whom, in what setting, for what behaviours and why?’

This is undoubtedly a pretty ambitious question to answer and it is made harder by the fact that the literature on this subject, although vast, is often fragmented, inconsistent and sometimes incomplete. So how do Susan’s team propose to tackle this big data problem?

The Human Behaviour-Change Project, funded by the Wellcome Trust, draws together some of the best minds in behavioural, computer and information science. Their output will depend on the close working relationships and interplay between all disciplines involved.

Behaviour scientists have been tasked with developing an ‘ontology’, basically a standardised method of categorising different behavioural change interventions. It is then hoped that this standardised ontology can be used to both sort existing literature and as a template on which new studies can be based. It is hoped that this will add some much needed order to the current fragmented literature and pave the way for further analysis. Specifically, computer scientists on this team will use Natural Language Processing (a branch of computer science which employs artificial intelligence and computational linguistics to sort and process large bodies of text) to extract and organise information from these studies, whilst also learning as they process this information.

Finally information scientists, the big data miners, will develop effective user interfaces which allow researchers to delve into this data and to untangle it in a way that reveals answers to many important research questions.

This is undoubtedly a huge task but with the combined input of so many specialists it certainly seems tractable.

On Wednesday the 26th the conference was drawn to a close with a compelling talk from Sally Okun, Vice President for Advocacy, Policy and Patient Safety at PatientsLikeMe, an online patient powered research network. The PatientsLikeMe network partners with 500,000+ patients living with 2700+ conditions and offers a platform for patients to share experiences and where researchers can learn more about treatments directly from those undergoing them. Indeed, more than 90 peer reviewed papers have already stemmed from data collected through the PatientsLikeMe network.

The theory behind this work is compelling and almost begs the question as to why such networks are not yet commonplace. Indeed, it’s no secret that online marketers spend billions analysing our search histories and purchase data in an attempt to feed us highly personalised targeted marketing, so why shouldn’t patient experiences be used to tailor personalised medicine? Although there are undoubtedly greater complications linked to the use of patient data, not to mention the perils of misinformation, this is no excuse not to try and work towards a digital ideal.

Sally also discussed the launch of their new platform, the Digital Me. This platform will combine a plethora of personal health data, including genetic data, medical histories and activity tracking – basically, if you can collect it, you can include it. Their hope is that this data can be used to personalise medical treatments, tailoring them to your own individual requirements. Indeed, advances in statistical methods could take us beyond blanket prescribing and into a world where your digital profile is compared to those similar to you (similarity being based on a large number of patient characteristics) and recommendations are made based on the successes and failures of treatments for your nearest digital neighbours (those sharing most of your traits).
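As a toy illustration of the nearest-neighbour idea (my own sketch, not PatientsLikeMe's actual method, and with entirely made-up patient traits):

```python
def nearest_neighbours(profile, others, k=2):
    """Return the k profiles most similar to `profile`,
    using simple Euclidean distance over numeric traits."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return sorted(others, key=lambda other: distance(profile, other))[:k]

# Hypothetical profiles: (age, symptom score, weekly activity hours)
patients = [(34, 7.0, 2.1), (61, 3.5, 1.0), (36, 6.8, 2.3)]
print(nearest_neighbours((35, 7.1, 2.0), patients))
# -> the two profiles closest in age, symptoms and activity
```

A real system would compare far more characteristics and weight them carefully, but the principle of recommending based on what worked for similar patients is the same.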

As my first experience of an informatics-based conference, I was struck by both the breadth and depth of knowledge in the field and the ethos of working together to optimise our outputs – a skill which is often found lacking in other fields. It was also plain that researchers in this area value patient input and many elements of this conference were tailored to be accessible and engaging for a lay audience. Indeed, representatives from HeRC’s own patient public forum who attended the event enjoyed the opportunity to engage further with researchers and learn about engagement and involvement work being conducted across the field.

Post by: Sarah Fox


Vets, pets, data sets and beyond.

From the 10th to the 14th of April 2017 researchers from the UK’s flagship project on companion animal surveillance, the Small Animal Veterinary Surveillance Network (SAVSNET), set up shop at Edinburgh’s international science festival.

SAVSNET* uses big data to survey animal disease across the UK and ultimately aims to improve animal care through identification of trends in diseases observed by veterinary practitioners.

This work offers huge benefits for companion animals, meaning that interventions can be targeted towards those most at risk and risk factors for disease can be identified across the population.

There is also significant crossover between this work and that of human health data science. Indeed, lessons learned from the processing and analysis of big data from vets may be used to inform aspects of human data analysis while work on shared and zoonotic diseases, antibacterial use and resistance also offer significant benefit to human health.

So, for this week, we took our science to the public to engage, inspire, raise awareness and stimulate discussion about our work.

SAVSNET mascots Alan, Phil and PJ

The SAVSNET Liverpool team worked hard to develop a wide range of activities designed to bring data science to life and to raise awareness of their work while Dr Sarah Fox, from HeRC’s PPI team joined the fun to expand discussions beyond pets and into the realms of human health.

Our stall was designed to take the public on a data journey, a journey which began with our resident mascots Alan, Phil and PJ, who were suffering from a parasitic problem. Hidden in our fluffy friends' fur were a host of unwanted passengers: ticks (not the real thing, but small sticky models we used to represent real ticks). Visitors helped us remove these pests from our mascots and learned that every time this procedure is performed by a vet, a medical record is created for it. Indeed, vets across the country are regularly called upon to remove such pests and, assuming the practice is signed up to the SAVSNET system, information on these procedures is transferred to data scientists.

The next stage of our data journey is one health researchers are very familiar with but which may remain a mystery amongst the general public – sorting and analysing these records.

Interactive sticker-wall showing seasonal tick prevalence.

Our stall was equipped with a large touch-screen PC, linked to the SAVSNET database and programmed to pull out and de-identify all vet records which made reference to the word tick. It was explained that, in order to perform a complete analysis of the prevalence of ticks across the UK, data scientists needed to manually sort through these selected records and confirm the presence or absence of a tick at the time of the recorded consultation. Now visitors to our stall could take part in their own citizen science project as they helped us sort through these records, uncovering ticks and adding their findings to our maps of regional and seasonal tick prevalence. Dogs came up trumps as the pet most likely to visit their local vet to have ticks removed, while the ticks themselves seemed to pop up indiscriminately all around the UK (even in the centre of London), with a preference for outings during the warmer summer months.

In the final stage of our data journey, visitors had the chance to get hands-on with some data science theory.

A few beautifully coloured ticks alongside our wooden data blocks.

Dr Alan Radford, a reader in infection biology from the University of Liverpool, developed a novel way of exploring sample theory and odds ratios using wooden building blocks.

This activity consisted of hundreds of wooden blocks sporting either cat or dog stickers, a subsection of which also housed a smaller tick sticker (on their rear). Visitors were told that these blocks represented all the information available on cats and dogs in the UK. After conceding that they would not be able to count all of these blocks independently, visitors were encouraged to form groups and choose a smaller sub-sample of ten blocks each.

Visitors counted how many of their chosen ten blocks showed cat stickers and how many showed dog stickers. As a rule, most groups of ten contained more dogs than cats, since overall there were more dog blocks in the total population. However, inevitably we also saw variability, and some individuals chose more cat blocks than dog blocks. This tactile and visual example of sample theory allowed a discussion of sample bias and how increasing the number or size of samples taken brings you closer to the correct population value.

Finally, visitors were asked to turn their blocks around and count how many of their dogs and cats also had ticks. In our example, cats were more likely to house a resident parasite but, with fewer cats to sample from, this was not always immediately obvious. Specifically, if a visitor chose 7 dog blocks and 3 cat blocks, then found that 4 of their dogs had ticks while only 2 of their cats did, they might be forgiven for thinking that within our sample dogs were more prone to ticks. However, from this data our older visitors were taught how to calculate an odds ratio, which shows that our cats were actually more likely to house ticks than our dogs. It was also noted that similar calculations are often used to calculate risk in medical studies, and that it is often these values which are reported in the media.
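The odds-ratio arithmetic from the block example can be reproduced in a few lines of Python (a sketch of the standard calculation, not SAVSNET code):

```python
def odds_ratio(a_with, a_without, b_with, b_without):
    """Odds ratio: odds of ticks in group A divided by odds in group B."""
    return (a_with / a_without) / (b_with / b_without)

# 3 cats, 2 with ticks; 7 dogs, 4 with ticks
print(odds_ratio(2, 1, 4, 3))  # about 1.5: cats' odds of ticks are 1.5x dogs'
```

Even though the sample contained more tick-bearing dogs in absolute terms, the odds ratio reveals that a sampled cat was the more likely tick carrier.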

The view down our microscope of our preserved pests.

Alongside our data blocks, younger visitors also had the chance to get up close and personal with real life ticks, through both a colouring exercise and by peeking down our microscope at a range of preserved specimens.

Finally, we discussed how tick data and similar veterinary information could be used to improve the health of companion animals and to better understand disease outbreaks across the country. It was at this point we also introduced the idea that similar methods could be applied to human health data in order to streamline and improve our healthcare services. Our discussions centred around the successes already shown in The Farr Institute for Health Informatics’ 100 Ways case studies and HeRC’s work, including improvements in surgical practice and regional health improvements from HeRC’s Born in Bradford study, whilst also engaging in a frank discussion around data privacy and research transparency. Visitors were encouraged to document their views on these uses of big data on our post-it note wall, gathering comments in response to the questions: “What do you think of big data?” and “Should we use human data?” The majority of visitors chose to comment on our second question, generally expressing positive feelings on the topic, but with many also noting the need for tight data privacy controls. Comments of note include:

Should we use human data?
Yes, but with controls and limited personal info
We need to get better at persuading people to change behaviour and ask the right questions to collect the right data.
Yes, it’s towards a good cause and can help people.
Using data is a good idea if it helps to make people better.
Yes, as long as there are sufficient controls in place.
Yes, but don’t sell it.
Yes, if you are careful not to breach privacy.

The data detectives.

Overall we had a great time at the festival and hope everyone who visited our stall took away a little bit of our enthusiasm and a bit more knowledge of health data science.

* co-funded by the BBSRC and in collaboration with the British Small Animal Veterinary Association (BSAVA) and the University of Liverpool.

Post by: Sarah Fox


Digital technologies: a new era in healthcare

Our NHS provides world-class care to millions of people every year. But, due to funding cuts and the challenges of providing care to an ageing population with complex health needs, this vital service is unsurprisingly under strain. At the same time, with the mobile internet at our fingertips, we have become accustomed to quick, on-demand services. Whether it’s browsing the internet, staying connected on social media or using mobile banking, our smartphones play important roles in nearly every aspect of our lives. It is therefore not surprising to find that over 70% of the UK population now go online for their healthcare information.

This raises a question: could digital health (in particular mobile health apps) play a role in bolstering our faltering health service?

Unfortunately, to date, healthcare has been lagging behind other services in the digital revolution. When most other sectors grabbed onto the digital train, healthcare remained reluctant. Nevertheless, the potential for mobile technology to track, manage and improve patient health, is being increasingly recognised.

ClinTouch, for instance, is a mobile health intervention co-created by a team of Manchester-based health researchers at The Farr Institute of Health Informatics’ Health eResearch Centre. ClinTouch is a psychiatric-symptom assessment tool developed to aid management of psychosis (a condition affecting 1 in 100 people). The app was co-designed by health professionals and patients, ensuring that the final output reflected the needs of both patients and clinicians. It combines a patient-focussed front end, which allows users to record and monitor their symptoms, with a system that simultaneously feeds this information back to clinicians to provide an early warning of possible relapse. The project has the potential to empower patients and improve relationships between users and their physicians. Moreover, if ClinTouch can reduce relapse cases by 5%, it will save the average NHS trust £250,000 to £500,000 per year (equating to a possible saving of £119 million to the NHS over three years!).

Adopting disruptive technologies such as ClinTouch can have meaningful benefits for patients and the NHS. And there are signs that the healthcare sector is warming up to the idea. Earlier this year the National Institute for Health and Care Excellence (NICE) announced that they are planning to apply health technology assessments to mobile health apps and only this week, the NHS announced a £35 million investment in digital health services.

On Thursday 27th April, the North West Biotech Initiative will be hosting an interactive panel discussion on the future of digital health. We will be joined by a fantastic line-up of speakers providing a range of perspectives on the topic, including:

Professor Shôn Lewis: Principal Investigator of the ClinTouch project and professor of Adult Psychiatry at The University of Manchester who will be speaking about the development of and the potential impact of the ClinTouch app. 

Tom Higham: former Executive Director at FutureEverything and a freelance digital art curator and producer, interested in the enabling power of digital technology. Tom has also been diagnosed with type 1 diabetes, has worked with the diabetes charity JDRF UK, and has written about both the benefits of and the need for improvements in mobile apps for diabetes care.

Anna Beukenhorst: a PhD candidate currently working on the Cloudy with a Chance of Pain project, a nationwide smartphone study investigating the association between the weather and chronic pain in more than 13,000 participants.

Reina Yaidoo: founder of Bassajamba, a social enterprise whose main aim is to increase participation of underrepresented groups in science and technology. Bassajamba are currently working with several diabetes support groups to develop self-management apps, which incorporate an aspect of gamification.

Professor Tjeerd Van Staa: professor of eHealth Research in the Division of Informatics, Imaging and Data Sciences, at The University of Manchester. He is currently leading the CityVerve case on the use of Internet of Things technology to help people manage their Chronic Obstructive Pulmonary Disease (COPD).

Dr Mariam Bibi: Senior Director of Real World Evidence at Consulting for McCann Health, External advisor for Quality and Productivity at NICE and an Associate Lecturer at Manchester Metropolitan University. She will be talking about the regulatory aspect of bringing digital technology to healthcare.

The event is open to anyone with an interest in digital health, including the general public, students and academics. It is free to attend and will be a great opportunity to understand the potential role of digital technology in healthcare and to network with local business leaders, academics and students working at the forefront of digital healthcare.

Date: 27th April 2017
Venue: Moseley Theatre, Schuster Building, The University of Manchester
Time: 3.30pm – 6.00pm
Register to attend! http://bit.ly/2o4fzd7
Questions about the event? Please get in touch with us at: [email protected]

Guest post by: Fatima Chunara


Neural coding 2: Measuring information within the brain

In my previous neuroscience post, I talked about the spike-triggered averaging method scientists use to find what small part of a stimulus a neuron is capturing. This tells us what a neuron is interested in, such as the peak or trough of a sound wave, but it tells us nothing about how much information a neuron is transmitting about a stimulus to other neurons. We know from my last neuroscience post that a neuron will change its spike firing when it senses a stimulus it is tuned to. Unfortunately, neurons are not perfect and they make mistakes, sometimes staying quiet when they should fire or firing when they should be quiet. Therefore, when a neuron fires, listening neurons cannot be fully sure a stimulus has actually occurred. These mistakes lead to a loss of information as signals are sent from neuron to neuron, like Chinese whispers.

Figure 1: Chinese whispers is an example of information loss during communication. Source: jasonthomas92.blogspot.com

It is very important for neuroscientists to ascertain information flow within the brain because it underlies all the other computational processes that happen there. After all, to process information within the brain, you must first transmit it correctly! To understand and quantify information flow, neuroscientists use a branch of mathematics known as Information Theory. Information theory centres around the idea of a sender and a receiver. In the brain, both the sender and receiver are normally neurons. The sender neuron encodes a message about a stimulus in a sequence of spikes. The receiving neuron (or neurons) then tries to decode this spike sequence and ascertain what the stimulus was. Before the receiving neuron gets the signal, it has little idea what the stimulus was; we say this neuron has high uncertainty about the stimulus. Receiving a signal from the sending neuron reduces this uncertainty, and the extent of the reduction depends on the amount of information carried in the signal. Just in case that is not clear, let's use an analogy: imagine you are a lost hiker with a map.

Figure 2: A map from one of my favourite games. Knowing where you are requires information. Source: pac-miam.deviantart.com

You have a map with 8 equally sized sectors, and all you know is that you could be in any of them. You then receive a phone call telling you that you are definitely within 2 sectors of the map. This phone call contains a measurable amount of information. If we assume that, prior to the phone call, you were equally likely to be in any part of the map, then you have a 1/8 chance of being in each sector. We need a measure of uncertainty, and for this we use something called Shannon entropy. This measurement is related to the number of different possible areas in the map, so a map with 2000 different sectors will have greater entropy than a map with 10. In our example, the entropy is log₂(8) = 3 bits. After receiving the message, the entropy drops to log₂(2) = 1 bit, because there are now only two map sectors you could be in. So the telephone call caused our uncertainty about our location to drop from 3 bits to 1 bit of entropy. The information within the phone call is equal to this drop in uncertainty, which is 3 – 1 = 2 bits of information. Notice that we didn't need to know anything about the map itself or the exact words of the telephone call, only the probabilities of your location before and after the call.
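The map calculation is easy to reproduce with Shannon's formula (a quick Python check of the numbers above, assuming equally likely sectors):

```python
import math

def entropy(n_outcomes):
    """Shannon entropy in bits for n equally likely outcomes."""
    p = 1 / n_outcomes
    return -sum(p * math.log2(p) for _ in range(n_outcomes))

before = entropy(8)  # 3.0 bits: you could be in any of 8 sectors
after = entropy(2)   # 1.0 bit: narrowed down to 2 sectors
print(before - after)  # -> 2.0 bits of information in the phone call
```

The same before-minus-after subtraction is what neuroscientists compute for spike trains, just with messier probability distributions.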

In neurons, we can calculate information without knowing the details of the stimulus a neuron is responding to. The trick is to stimulate a neuron in the same way over many repeated trials using a highly varying, white-noise stimulus (see the bottom trace in Figure 3).

Figure 3: Diagram showing a neuron’s response to 5 repeats of identical neuron input (bottom). The responses are shown as voltage traces (labelled neuron response). The spike times can be represented as points in time in a ‘raster plot’ (top).

So how does information theory apply to this? Well, recall how Shannon entropy is linked with the number of possible areas contained within a map. In a neuron’s response, entropy is related to the number of different spike sequences a neuron can produce. A neuron producing many different spike sequences has a greater entropy.

The raster plots below (Figure 4) show the responses of three simulated neurons, generated using computer models that closely approximate real neuron behaviour. They are responding to a noisy stimulus (not shown) similar to the one at the bottom of Figure 3. Each dot is a spike fired at a certain time on a particular trial.

Figure 4: Raster plots show three neuron responses transmitting different amounts of information. The first (top) transmits about 9 bits per second of response, the second (middle) transmits 2 bits/s and the third (bottom) transmits 0.7 bits/s.

In all three responses the neurons generate varying spike sequences: some spikes are packed close together in time, while others are spaced far apart. This variation is what gives rise to entropy.

In the response of the first neuron (top) the spike sequences change over time but do not change at all across trials. This is an unrealistically perfect neuron: all its variable spike sequences follow the stimulus with 100% accuracy. When the stimulus repeats on the next trial the neuron simply fires the same spikes as before, producing vertical lines in the raster plot. All the entropy in this neuron's response is therefore caused by the stimulus, so all of it transmits information. This neuron is highly informative; despite firing relatively few spikes it transmits about 9 bits/second…pretty good for a single neuron.

The second neuron (Figure 4, middle) also shows varying spike sequences across time, but now these sequences vary slightly across trials. We can think of this response as having two types of entropy: a total entropy, which measures the total amount of variation a neuron can produce in its response, and a noise entropy. This second entropy is caused by the neuron changing its response to unrelated influences, such as inputs from other neurons, electrical/chemical interference and random fluctuations in the neuron's signalling mechanisms. The noise entropy causes the variability across trials in the raster plot and reduces the information transmitted by the neuron. To be more precise, the information carried in this neuron's response is whatever remains of the total entropy once the noise entropy is subtracted from it…about 2 bits/s in this case.
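The total-minus-noise subtraction can be sketched in Python on a toy raster. This is only an illustrative sketch: the spike trains below are invented, the responses are chopped into two-bin "words", and real analyses need far more data plus careful corrections for sampling bias:

```python
from collections import Counter
from math import log2

def entropy_bits(counts):
    """Shannon entropy, in bits, of a histogram of observation counts."""
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c > 0)

# Toy raster: 4 trials of 8 time bins each (1 = spike, 0 = silence).
trials = [
    [1, 0, 0, 1, 1, 0, 1, 0],
    [1, 0, 0, 1, 0, 1, 1, 0],
    [1, 0, 1, 0, 1, 0, 1, 0],
    [1, 0, 0, 1, 1, 0, 0, 1],
]
W = 2  # chop each response into 2-bin 'words'

# Total entropy: variability of words pooled over all times and trials.
all_words = Counter(tuple(t[i:i + W]) for t in trials
                    for i in range(0, len(t), W))
h_total = entropy_bits(all_words.values())

# Noise entropy: variability across trials at each time point, averaged.
starts = range(0, len(trials[0]), W)
h_noise = sum(
    entropy_bits(Counter(tuple(t[i:i + W]) for t in trials).values())
    for i in starts
) / len(starts)

info_per_word = h_total - h_noise  # information, in bits per word
```

Because conditioning on the time point can only reduce uncertainty, the noise entropy never exceeds the total entropy, so the information estimate is never negative.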

In the final response (bottom), the spikes only weakly follow the stimulus and are highly variable across trials. Interestingly, it shows the most complex spike sequences of all three examples. It therefore has a very large total entropy, which means it has the capacity to transmit a great deal of information. Unfortunately, much of this entropy is wasted because the neuron spends most of its time varying its spike patterns randomly instead of with the stimulus. This makes its noise entropy very high and the useful information low: it transmits a measly 0.7 bits/s.

So, what should you take away from this post? Firstly, that neuroscientists can accurately measure the amount of information a neuron can transmit. Secondly, that neurons are not perfect and do not respond in exactly the same way even to repeated identical stimuli. This leads to the final point: the noise within neurons limits the amount of information they can communicate to each other.

Of course, I have only shown a simple view of things in this post. In reality, neurons work together to communicate information and overcome the noise they contain. Perhaps in the future, I will elaborate on this further…

Post by: Dan Elijah.

To share or not to share: delving into health data research.

In January this year I made a bold move, well at least bold for someone who is often accused of being painfully risk averse. I waved a fond farewell to life in the lab to take on a new role where I have been able to combine my training as a researcher with my passion for science engagement. In this role I work closely with health researchers and the public, building the scaffolding needed for the two to work together and co-produce research which may improve healthcare for millions of patients across the UK. The group I work alongside are collectively known as the Health eResearch Centre (part of the world-leading Farr Institute for Health Informatics) and are proud of their mission of using de-identified electronic patient data* to improve public health.

For me, taking on this role has felt particularly poignant and has led me to think deeply about the implications and risks of sharing such personal information. This is because, like many of you, my health records contain details which I'm scared to share with a wider audience. So, with this in mind, I want to invite you inside my head to explore the reasons why I believe that, despite my concerns, sharing such data with researchers is crucial for the future of public health and the NHS.

It’s no secret that any information stored in a digital form is at risk from security breaches, theft or damage and that this risk increases when information is shared. But, it’s also important to recognise that these risks can be significantly reduced if the correct structures are put in place to protect this information. Not only this but, when weighing up these risks, I also think that it is immensely important to know the benefits sharing data can provide.

With this in mind, I was really impressed that, within the first few weeks of starting this role, I was expected to complete some very thorough data security training (which, considering I won’t actually be working directly with patient data almost seemed like overkill). I was also introduced to the catchily titled ISO 27001 which, if my understanding is correct, certifies that an organisation is running a ‘gold standard’ framework of policies and procedures for data protection – this being something we as a group hope to obtain before the year is out. This all left me with the distinct feeling that security is a major concern for our group and that it is considered to be of paramount importance to our work. I also learned about data governance within the NHS and how each NHS organisation has an assigned data guardian who is tasked with protecting the confidentiality of patient and service-user information. So, I’m quite sure information security is taken exceedingly seriously at every step of the data sharing chain.

But what will the public gain from sharing their health data?

We all know that, in this cyber age, most of us have quite an extensive digital-data footprint. It's no accident that my Facebook feed is peppered with pictures of sad dogs encouraging me to donate money to animal charities while Google proudly presents me with adverts for 'Geek gear' and fantasy inspired jewellery. I don't make too much effort to ensure that my internet searches are private, so marketers probably see me as easy prey. This type of data mining happens all the time, with little benefit to you or me and, although we may install ad-blocking software, few of us make a considered effort to stop this from happening. Health data, on the other hand, is not only shared in a measured and secure manner but could offer enormous benefits to the UK's health service and to us as individual patients.

Our NHS is being placed under increasing financial strain, with the added pressure of providing care to a growing, ageing population with complex health needs. This means it has never been more important to find innovative ways of streamlining and improving our care system. This is where health data researchers can offer a helping hand. Work using patient data can identify 'at risk' populations, allowing health workers to target interventions at these groups before they develop health problems. New drugs and surgical procedures can also be monitored to ensure better outcomes and fewer complications.

And this is already happening across the UK – the Farr Institute are currently putting together a list of 100 projects which have already improved patient health – you can find these here. Also, in 2014 the #datasaveslives campaign was launched. This highlights the positive impact health-data research is having in the UK by building a digital library of this work – type #datasaveslives into Google and explore this library or join the conversation on twitter.

One example is work on a procedure to unblock arteries and improve outcomes for patients suffering from coronary heart disease:

In the UK this procedure is carried out in one of two ways: Stents (a special type of scaffolding used to open up arteries and improve blood flow) can be inserted either through a patient’s leg (the transfemoral route) or via the wrist (the transradial route). Insertion through the wrist is a more modern technique which is believed to be safer and less invasive – however both methods are routinely performed across the UK.
Farr institute researchers working between The University of Manchester’s Health eResearch Centre and Keele University used de-identified health records (with all personal information removed) to analyse the outcomes of 448,853 surgical stent insertion procedures across the UK between 2005 and 2012.

This study allowed researchers to calculate, for the first time, the true benefits of the transradial method. They showed that the use of transradial surgery increased from 14% in 2005 to 58% in 2012 – a change which is thought to have saved an estimated 450 lives. They also discovered that the South East of England had the lowest uptake of surgery via the wrist.

This work shows one example of how research use of existing health records can highlight ways of improving patient care across the country – thanks to this research the transradial route is now the dominant surgical practice adopted across the UK (leading to an estimated 30% reduction in the risk of mortality in high risk patients undergoing this procedure).

Reading through all these studies and imagining the potential for future research does convince me that, even with my concerns, the benefits of sharing my data far outweigh the risks. But I also recognise that it is of paramount importance for patients and the public to be aware of how this process works and to play an active role in shaping research. It seems that when the public have the opportunity to question health data scientists and are fully informed about policy and privacy, many feel comfortable with sharing their data. This shows that we need to strive towards transparency and keep an active dialogue with the public to ensure we are really addressing their needs and concerns.

This is an amazingly complex and interesting field of study, combining policy, academic research, public priority setting and oodles of engagement and involvement – so I hope over the next year to be publishing more posts covering aspects of this work in more detail.

Post by: Sarah Fox

*The kind of data which is routinely collected during doctor and hospital appointments but with all personal identifiable information removed.


The moons of Jupiter and the speed of light

Recently, I was setting up my telescope to image the great planet Jupiter. I was interested in capturing an eclipse of one of its largest moons, Io. Everything was ready: the batteries were charged, and the telescope was aligned and tracking the planet, but there was a problem. The eclipse just wasn't happening. My computer program predicted it would start at 21:10 on the 12th March 2017, but nothing happened. I was more than surprised; my computer is normally accurate to the second. So I checked the settings: the time is internet-controlled, so no problem there, and the computer showed other stars in their correct positions, so I knew it was not having problems with other parts of the sky. Then, at about 21:48, Io started to cast a dark circle on Jupiter. I was amazed; I have never seen a total eclipse on Earth but I could now see one on Jupiter. But why was it more than 30 minutes late? It turns out that my confusion was shared by astronomers in the 17th century and, in an effort to explain the discrepancies in Io's eclipse times, they inadvertently measured the speed of light.

It was the 17th century astronomers Giovanni Domenico Cassini, Ole Rømer and Jean Picard (not from Star Trek) who first studied the eclipses of Io whilst trying to solve the famous longitude problem: before the invention of accurate clocks, there was little way of knowing how far east or west you had sailed from a given location (normally Paris or London). Galileo himself proposed using the predictable orbits of Jupiter's moons to calculate the time on Earth, which could then be used to calculate longitude.

Ole Rømer (left) and Giovanni Cassini (right). Along with Jean Picard these pioneering 17th century astronomers observed and studied hundreds of Jovian eclipses. (Wikipedia Commons)

Unsurprisingly, this proved too difficult a task to do on a moving ship with the primitive optical equipment available at the time. On land, however, the method could be used to improve maps and navigation, so Cassini and Rømer set to work. They observed hundreds of Jovian eclipses over several months and were able to determine the difference in longitude between Paris and their location. Unfortunately, there was a problem; after accurately calculating the orbit of Io, Cassini found that at some times of the year eclipses occurred earlier, while at other times they happened later than predicted. Cassini logically surmised that light had to travel at a finite speed instead of instantaneously spanning the distance from Jupiter to Earth. For instance, when the Earth and Jupiter are on nearly opposite sides of the Sun, the light travelling from Jupiter takes longer to reach Earth (around 54 minutes), which makes the Io eclipses appear delayed. When the Earth is between the Sun and Jupiter (a configuration called opposition), light from Jupiter takes only about 37 minutes to reach Earth, making eclipses of Io happen earlier than expected.
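Those light-travel times are easy to sanity-check. A rough calculation, using modern values and taking Jupiter near its maximum (aphelion) distance of about 5.46 AU from the Sun, reproduces the figures quoted above; the exact numbers vary as both planets move along their elliptical orbits:

```python
AU_KM = 149_597_870.7        # one astronomical unit, in km
C_KM_S = 299_792.458         # speed of light, km/s
JUPITER_AU = 5.46            # Sun-Jupiter distance near aphelion, in AU

# Earth nearly on the far side of the Sun: distance ≈ 5.46 + 1 AU.
t_far_min = (JUPITER_AU + 1) * AU_KM / C_KM_S / 60    # ≈ 54 minutes
# Earth between Sun and Jupiter (opposition): distance ≈ 5.46 − 1 AU.
t_near_min = (JUPITER_AU - 1) * AU_KM / C_KM_S / 60   # ≈ 37 minutes
```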

An eclipse of Io imaged by myself on 12-13/03/2017. Io casts a dark spot on Jupiter's northern cloud band. The delay of this event, caused by the finite speed of light, prompted me to write this post! (My own work)

Strangely, Cassini never followed up his discovery. Rømer, however, continued observing and recording Io eclipses and derived an equation relating the delay caused by the speed of light to the angle between Earth and Jupiter. It would not have been possible to publish an actual speed of light at the time because the distances between the planets were not yet known. Interestingly, Rømer could have expressed the speed of light as a multiple of Earth's orbital speed…but for some reason he didn't. It was another famous astronomer, Christiaan Huygens, who took that credit. He used Rømer's detailed observations and formula to define the speed of light as 7600 times faster than Earth's orbital speed. This equates to a speed of about 226,328 km/s, which is only around 25% lower than the true value of light speed.
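Huygens' figure is simple to recreate with modern numbers (his own value for Earth's orbital speed differed slightly, so this is only an approximate reconstruction):

```python
EARTH_ORBITAL_SPEED = 29.78   # Earth's mean orbital speed, km/s (modern value)
HUYGENS_RATIO = 7600          # light is 7600x faster, per Rømer's data

c_estimate = HUYGENS_RATIO * EARTH_ORBITAL_SPEED   # ≈ 226,328 km/s
c_true = 299_792.458                               # modern value, km/s
shortfall = 1 - c_estimate / c_true                # ≈ 0.25, i.e. about 25% low
```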

Christiaan Huygens, a leader in 17th century science. He was the first person to define the speed of light using the eclipses of Io. (Wikipedia Commons)

This was the first time a universal constant had been calculated quantitatively and since then the speed of light has played a huge role in James Clerk Maxwell’s theory of electromagnetism and Einstein’s theories of relativity. But for anyone peering into the night sky, the work of these great men more than 300 years ago shows us that starlight is old…and by looking at it we are looking back in time. We see Jupiter as it was 40-50 minutes ago, the nearest star 4 years ago and the relatively nearby Andromeda galaxy 2.56 million years ago. Not bad for 17th century science.

I think next time I’m sitting by my telescope waiting for an Io eclipse, I’ll be a bit more appreciative of the significance that 30 minute delay had on our understanding of the universe.

Post by: Dan Elijah.


I come in peace: Engaging life on a flat Earth

Did you know that the Earth is actually flat, not round, and that NASA and the government fuel the round Earth conspiracy?…No, neither did I, but this mind-boggling world view is currently gaining momentum on the internet and has recently found its way onto my radar.

To give you a bit of background:

Alongside my vociferous online academic rantings and my day job helping researchers and the lay public work together to design and implement health research, I also spend a fair bit of time volunteering with the British Science Association (the BSA). The BSA is a charity and learned society founded in 1831 with many strings to its academic bow, including the standardisation of electrical units (the ohm, volt and amp). Today it is supported by a huge backbone of volunteers working tirelessly across the country to improve the public perception of science, letting everyone know that there is much more to science than mind-boggling equations and stuffy white-haired professors.

Our small group of Mancunian volunteers meets monthly to mastermind and implement a huge range of engagement activities. Over the years I've been with the group I've found myself designing an endangered species treasure hunt (based on a mash-up of Pokemon Go and geocaching), baking cake pops for an astronomy and art crossover event held on the site of Manchester city centre's oldest observatory and, just last week, hosting over 40 AS/A-level students at a science journalism workshop.

As a group we work hard to make sure our activities are fun and open to everyone, no matter what their academic background. But we're not naive: we recognise that our reach is still pretty small and that there are many communities in our home city who will never have heard of us. This is why we have been working with a BSA volunteer from our Birmingham branch, whose role has been to help us find out more about Manchester's hard-to-reach communities and discover how we can offer them meaningful engagement. It was during one of our meetings that she mentioned she had been in contact with someone who runs a computer coding club for local teenagers and had noticed that some of these youngsters were adamant supporters of the 'flat Earth' theory, which is apparently backed up by a number of celebrities including rapper B.o.B, who recently went on an amusing and disturbing Twitter rant about the topic.

This got me thinking. If science has never really been your thing (which is fine, by the way; P.E. was never my thing), how do you avoid falling down the black hole of conspiracy theories (Illuminati, anti-vaccination, flat Earth)?

These theories offer an alternative world view which can, at first glance, appear to fit much better with the world we see and experience around us every day than the complex and often invisible world of science. Take flat Earth as an example. In our everyday lives we interact with both flat and round objects (compare a table top with a yoga ball) and, from these interactions, we build up an understanding of how these objects work. On a very basic level we see that things fall off a ball, you can't really balance things on it like you can on a table, and it has an obvious curvature. Then take a look at the Earth. We can stand and walk along it with no obvious indication of its curvature, and water sits flat in rivers and oceans; it doesn't run down the sides of the Earth as it would if you spilled a glass of water onto a yoga ball. So, assuming you have little or no interest in astronomy (perhaps you live in the city centre, so don't get a good view of the night sky anyway) and the mathematics of gravity and scale makes your head hurt, it's easy to understand why you might choose to mistrust theories which you cannot test or see for yourself.

So, with this in mind, my question is: Is it possible to design activities and interactions that don’t patronise or assume knowledge but enable people to test scientific theories in ways that make sense and allow them to simply observe the outcomes with their own eyes?

We are now hoping to meet with this community, attend some of their activities, make friends and let them know scientists are just ordinary people. Then we want to jump in and put together a small accessible science festival where everyone can have fun and hopefully engage with science on a small scale. I get the feeling it’s not going to be an easy sell but will undoubtedly be worth it if done properly.

My mind is bubbling with ideas, including the possibility of sending a Go-Pro camera up on a balloon and playing back the footage – the possibilities are endless…although sadly our budget isn’t. Whatever happens, I’m excited and will keep you all updated on our progress as things move forward.

For now I want to invite anyone reading this to drop me a line in the comments below. Perhaps you’re an academic who has worked on a similar event and has some ideas, or maybe you’re keen on the flat Earth theory and want to tell us more about what you believe? Either way I’d love to hear from you.

Post by: Sarah Fox

Update: A pretty interesting gif image of a few pictures my telescope-loving partner took last night showing Jupiter spinning on its axis – notice how the great red spot moves round. Perhaps we could bring our telescopes along to the festival and have a play 🙂

Neural coding 1: How to understand what a neuron is saying.

In this post I am diverting from my usual astrophotography theme and entering the world of computational neuroscience, a subject I studied for almost ten years. Computational neuroscience is a relatively new interdisciplinary branch of neuroscience that studies how areas of the brain and nervous system process and transmit information. An important and still unsolved question in computational neuroscience is how neurons transmit information between themselves. This is known as the problem of neural coding and, by solving it, we could potentially understand how all our cognitive functions are underpinned by neurons communicating with each other. So for the rest of this post I will discuss how we can read the neural code and why the code is so difficult to crack.

Since the 1920s we have known that excited neurons communicate through electrical pulses called action potentials, or spikes (see Figure 1). These spikes can quickly travel down the long axons of neurons to distant destinations before crossing a synapse and activating another neuron (for more information on neurons and synapses see here).

Figure 1. Neural action potentials. An action potential is shown on the left as if recorded from inside a neuron (see inset). For an action potential to arise and propagate through a neuron, it must reach a certain threshold (red dashed line). If it doesn't, the neuron will remain at rest. The right panel shows real neurons firing spikes in the cortex of a mouse. Taken from Gentet LJ et al. (2010).

You would be forgiven for thinking that the neural coding problem is solved: neurons fire a spike when they see a stimulus they like, communicating this fact to other nearby neurons, while at other times they stay silent. Unfortunately, the situation is a bit more complex. Spikes are the basic symbols used by neurons to communicate, much like letters are the basic symbols of a written language. But letters only become meaningful when many are used together, and this analogy also holds for neurons. When a neuron becomes excited it produces a sequence of spikes that, in theory, represents the stimuli the neuron is responding to. So if you can correctly interpret the meaning of spike sequences, you can understand what a neuron is saying. In Figure 2, I show a hypothetical example of a neuron responding to a stimulus.

Figure 2. A stimulus (top trace) fluctuates over time (s(t)) and spikes from a hypothetical neuron are recorded. The stimulus is repeated 5 times, producing 5 responses (r1, r2, …, r5) shown below the stimulus. Each response is composed of spikes (vertical lines) and periods of silence. By counting the number of spikes within a small time window lasting Δt seconds, we can calculate the firing rate of the neuron (bottom trace).

In this example a neuron receives a constantly fluctuating input. This is a bit like the signal you would expect to see in a neuron receiving a constant stream of inputs from thousands of other neurons. In response to this stimulus, the receiving neuron constantly changes its spike firing rate. If we read this rate we can get a rough idea of what the neuron is excited by. In this case, the neuron fires faster when the stimulus is high and is almost silent when the stimulus is low. There is also a mathematical method, known as reverse correlation, that can extract the stimulus feature that produces spikes (Figure 3).
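The firing-rate calculation from Figure 2 is just counting spikes in windows of width Δt and averaging over trials. A minimal sketch in Python, where the spike times are randomly generated stand-ins for real recordings:

```python
import random

random.seed(0)
DURATION, DT = 1.0, 0.05          # 1 s stimulus, 50 ms counting window
N_TRIALS, N_SPIKES = 5, 20

# Hypothetical spike times (in seconds) from repeated identical trials.
trials = [sorted(random.uniform(0, DURATION) for _ in range(N_SPIKES))
          for _ in range(N_TRIALS)]

# Count spikes per window, average over trials, divide by Δt → spikes/s.
n_bins = round(DURATION / DT)
counts = [0] * n_bins
for spikes in trials:
    for t in spikes:
        counts[min(int(t / DT), n_bins - 1)] += 1
rate = [c / (N_TRIALS * DT) for c in counts]   # firing rate in each window
```

The choice of Δt matters: too wide and fast stimulus-driven fluctuations are smoothed away; too narrow and each window contains too few spikes to give a reliable rate.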

Figure 3. Reverse correlation can identify what feature of the stimulus (top) makes a neuron fire a spike (bottom). Stimulus samples are taken before each spike (vertical lines) and then averaged to produce a single stimulus trace representing the average stimulus that precedes a spike.

The method is actually very simple: each time a spike occurs we take a sample of the stimulus just before the spike. Hopefully many spikes are fired, so we end up with many stimulus samples; in Figure 3 the samples are shown as dashed boxes over the stimulus. We then average these stimulus samples together. If the spikes are being fired in response to a common feature in the stimulus, we will be able to see it. This is therefore a simple method of finding what a neuron actually responds to when it fires a spike. However, there are limitations to this procedure. For instance, if a neuron responds to multiple features within a stimulus, these will be averaged together, giving a misleading result. Also, the method assumes that the stimulus contains a wide selection of different fluctuations. If it doesn't, then you can never really know what the neuron responds to, because you may not have stimulated it with anything it likes!
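Reverse correlation (often called the spike-triggered average) is short enough to sketch in full. Below, a toy "neuron" fires whenever the recent average of a random stimulus is high, and averaging the stimulus segments preceding each spike recovers that preference. Everything here (the stimulus, the firing rule, the window length) is invented purely for illustration:

```python
import random

random.seed(1)
T = 5000                     # number of stimulus samples
WINDOW = 20                  # stimulus samples kept before each spike

# White-noise stimulus, as in Figure 3 (top).
stim = [random.gauss(0, 1) for _ in range(T)]

# Toy neuron: fire a spike whenever the mean of the last 5 samples is high.
spike_times = [t for t in range(WINDOW, T)
               if sum(stim[t - 5:t]) / 5 > 1.0]

# Spike-triggered average: mean stimulus segment preceding each spike.
sta = [sum(stim[t - WINDOW + i] for t in spike_times) / len(spike_times)
       for i in range(WINDOW)]
# sta rises towards its final entries: the neuron 'likes' a high stimulus
# just before it fires, and the average of pre-spike segments reveals this.
```

Note how the averaging washes out the random parts of the stimulus, leaving only the feature the spikes have in common, which is exactly the logic described above.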

In my next two posts, I will discuss how more advanced methods from the realms of artificial intelligence and probability theory have helped neuroscientists more accurately extract the meaning of neural activity.

Post by: Daniel Elijah
