To share or not to share: delving into health data research.

In January this year I made a bold move, well at least bold for someone who is often accused of being painfully risk averse. I waved a fond farewell to life in the lab to take on a new role where I have been able to combine my training as a researcher with my passion for science engagement. In this role I work closely with health researchers and the public, building the scaffolding needed for the two to work together and co-produce research which may improve healthcare for millions of patients across the UK. The group I work alongside are collectively known as the Health eResearch Centre (part of the world-leading Farr Institute for Health Informatics) and are proud of their mission of using de-identified electronic patient data* to improve public health.

For me, taking on this role has felt particularly poignant and has led me to think deeply about the implications and risks of sharing such personal information. This is because, like many of you, my health records contain details which I’m scared to share with a wider audience. So, with this in mind, I want to invite you inside my head to explore the reasons why I believe that, despite my concerns, sharing such data with researchers is crucial for the future of public health and the NHS.

It’s no secret that any information stored in a digital form is at risk from security breaches, theft or damage, and that this risk increases when information is shared. But it’s also important to recognise that these risks can be significantly reduced if the correct structures are put in place to protect this information. Not only this but, when weighing up these risks, it is also immensely important to understand the benefits sharing data can provide.

With this in mind, I was really impressed that, within the first few weeks of starting this role, I was expected to complete some very thorough data security training (which, considering I won’t actually be working directly with patient data, almost seemed like overkill). I was also introduced to the catchily titled ISO 27001 which, if my understanding is correct, certifies that an organisation is running a ‘gold standard’ framework of policies and procedures for data protection – this being something we as a group hope to obtain before the year is out. This all left me with the distinct feeling that security is a major concern for our group and that it is considered to be of paramount importance to our work. I also learned about data governance within the NHS and how each NHS organisation has an assigned data guardian who is tasked with protecting the confidentiality of patient and service-user information. So, I’m quite sure information security is taken exceedingly seriously at every step of the data sharing chain.

But what will the public gain from sharing their health data?

We all know that, in this cyber age, most of us have quite an extensive digital-data footprint. It’s no accident that my Facebook feed is peppered with pictures of sad dogs encouraging me to donate money to animal charities while Google proudly presents me with adverts for ‘Geek gear’ and fantasy inspired jewellery. I don’t make too much effort to ensure that my internet searches are private, so marketers probably see me as easy prey. This type of data mining happens all the time, with little benefit to you or me and, although we may install ad-blocking software, few of us make a considered effort to stop it from happening. Health data, on the other hand, is not only shared in a measured and secure manner but could offer enormous benefits to the UK’s health service and to us as individual patients.

Our NHS is being placed under increasing financial strain, with the added pressure of providing care to a growing, ageing population with complex health needs. This means it has never been more important to find innovative ways of streamlining and improving our care system. This is where health data researchers can offer a helping hand. Work using patient data can identify ‘at risk’ populations, allowing health workers to target interventions at these groups before they develop health problems. New drugs and surgical procedures can also be monitored to ensure better outcomes and fewer complications.

And this is already happening across the UK – the Farr Institute are currently putting together a list of 100 projects which have already improved patient health – you can find these here. Also, in 2014 the #datasaveslives campaign was launched. This highlights the positive impact health-data research is having in the UK by building a digital library of this work – type #datasaveslives into Google to explore this library, or join the conversation on Twitter.

One example is work on a procedure to unblock arteries and improve outcomes for patients suffering from coronary heart disease:

In the UK this procedure is carried out in one of two ways: stents (a special type of scaffolding used to open up arteries and improve blood flow) can be inserted either through a patient’s leg (the transfemoral route) or via the wrist (the transradial route). Insertion through the wrist is a more modern technique which is believed to be safer and less invasive; however, both methods are routinely performed across the UK.
Farr Institute researchers working between The University of Manchester’s Health eResearch Centre and Keele University used de-identified health records (with all personal information removed) to analyse the outcomes of 448,853 surgical stent insertion procedures across the UK between 2005 and 2012.

This study allowed researchers to calculate, for the first time, the true benefits of the transradial method. They showed that the use of the transradial route increased from 14% of procedures in 2005 to 58% in 2012 – a change which is thought to have saved an estimated 450 lives. They also discovered that the South East of England had the lowest uptake of surgery via the wrist.

This work is one example of how research using existing health records can highlight ways of improving patient care across the country – thanks to this research the transradial route is now the dominant surgical practice adopted across the UK (leading to an estimated 30% reduction in the risk of mortality in high-risk patients undergoing this procedure).

Reading through all these studies and imagining the potential for future research does convince me that, even with my concerns, the benefits of sharing my data far outweigh the risks. But I also recognise that it is vitally important for patients and the public to be aware of how this process works and to play an active role in shaping research. It seems that when the public have the opportunity to question health data scientists and are fully informed about policy and privacy, many feel comfortable with sharing their data. This shows that we need to strive towards transparency and to keep up an active dialogue with the public to ensure we are really addressing their needs and concerns.

This is an amazingly complex and interesting field of study, combining policy, academic research, public priority setting and oodles of engagement and involvement – so I hope over the next year to be publishing more posts covering aspects of this work in more detail.

Post by: Sarah Fox

*The kind of data which is routinely collected during doctor and hospital appointments but with all personally identifiable information removed.

 


The moons of Jupiter and the speed of light

Recently, I was setting up my telescope to image the great planet Jupiter. I was interested in capturing an eclipse of one of its largest moons, Io. Everything was ready: the batteries were charged and the telescope was aligned and tracking the planet, but there was a problem. The eclipse just wasn’t happening. My computer program predicted it would start at 21:10 on 12th March 2017, but nothing happened. I was more than surprised; my computer is normally accurate to the second. So I checked the settings: the time is internet-controlled, so no problem there, and the computer showed other stars in their correct positions, so I knew it was not having problems with other parts of the sky. Then, at about 21:48, Io started to cast a dark circle on Jupiter. I was amazed; I have never seen a total eclipse on Earth, but I could now see one on Jupiter. But why was it more than 30 minutes late? It turns out that my confusion was shared by astronomers in the 17th century and, in an effort to explain the discrepancies in Io’s eclipse times, they inadvertently measured the speed of light.

It was the 17th century astronomers Giovanni Domenico Cassini, Ole Rømer and Jean Picard (not from Star Trek) who first studied the eclipses of Io on Jupiter whilst trying to solve the famous longitude problem: before the invention of accurate clocks, there was no reliable way of knowing how far east or west you were sailing from a given location (normally Paris or London). Galileo himself proposed using the predictable orbits of Jupiter’s moons to calculate the time on Earth, which could then be used to calculate longitude.

Ole Rømer (left) and Giovanni Cassini (right). Along with Jean Picard these pioneering 17th century astronomers observed and studied hundreds of Jovian eclipses. (Wikipedia Commons)

Unsurprisingly, this proved too difficult a task to do on a moving ship with the primitive optical equipment available at the time. On land, however, this method could be used to improve maps and navigation. So Cassini and Rømer set to work. They observed hundreds of Jovian eclipses over several months and were able to determine the difference in longitude between Paris and their location. Unfortunately, there was a problem; after accurately calculating the orbit of Io, Cassini found that at some times of the year eclipses were occurring earlier than predicted, while at other times they happened later. Cassini logically surmised that light had to travel at a finite speed instead of instantaneously spanning the distance from Jupiter to Earth. For instance, when the Earth and Jupiter are on nearly opposite sides of the Sun, the light travelling from Jupiter takes longer to reach Earth (around 54 minutes). This causes the Io eclipses to appear delayed. When the Earth is between the Sun and Jupiter (an alignment called opposition), light from Jupiter takes only about 37 minutes to reach Earth, making eclipses of Io happen earlier than expected.
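
If you fancy sanity-checking those travel times yourself, here is a minimal sketch in Python (assuming simple circular, coplanar orbits of 1 AU for Earth and roughly 5.2 AU for Jupiter; Jupiter’s real orbit is slightly eccentric, which is why the figures above come out a couple of minutes different):

```python
# Rough light-travel times from Jupiter, assuming circular, coplanar
# orbits: Earth at 1 AU, Jupiter at roughly 5.2 AU.
AU_KM = 1.496e8          # one astronomical unit in km
C_KM_S = 299_792.458     # speed of light in km/s

def light_delay_minutes(distance_au):
    """Time for light to cross a distance given in astronomical units."""
    return distance_au * AU_KM / C_KM_S / 60

# At opposition, Earth sits between the Sun and Jupiter: 5.2 - 1 = 4.2 AU.
print(f"Opposition:       {light_delay_minutes(5.2 - 1.0):.0f} minutes")  # ~35
# Near conjunction, they are on opposite sides of the Sun: 5.2 + 1 = 6.2 AU.
print(f"Near conjunction: {light_delay_minutes(5.2 + 1.0):.0f} minutes")  # ~52
```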

An eclipse of Io imaged by myself on 12-13/03/2017. The Io eclipse casts a dark spot on Jupiter’s northern cloud band. The delay of this event caused by the speed of light prompted me to write this post! (My own work)

Strangely, Cassini never followed up his discovery, but Rømer continued observing and recording Io eclipses and defined an equation relating the delay caused by the speed of light to the angle between Earth and Jupiter. However, it would not have been possible to publish an actual speed of light because the distances between the planets were not accurately known at the time. Interestingly, Rømer could have expressed the speed of light as a ratio of Earth’s orbital speed…but for some reason he didn’t. It was another famous astronomer, Christiaan Huygens, who took that credit. He used Rømer’s detailed observations and formula to define the speed of light as 7600 times faster than Earth’s orbital speed. This equates to a speed of about 226,000 km/s, which is only around 25% lower than the true speed of light.
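
Huygens’ figure is easy to check with modern numbers. Taking Earth’s mean orbital speed as about 29.8 km/s (a value unavailable to him in this form):

$$c \approx 7600 \times v_{\text{Earth}} \approx 7600 \times 29.8\ \text{km/s} \approx 2.26 \times 10^{5}\ \text{km/s},$$

roughly 75% of the modern value of 299,792 km/s.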

Christiaan Huygens, a leader in 17th century science. He was the first person to define the speed of light using the eclipses of Io. (Wikipedia commons)

This was the first time a universal constant had been quantitatively measured, and since then the speed of light has played a huge role in James Clerk Maxwell’s theory of electromagnetism and Einstein’s theories of relativity. But for anyone peering into the night sky, the work of these great men more than 300 years ago shows us that starlight is old…and by looking at it we are looking back in time. We see Jupiter as it was 40-50 minutes ago, the nearest star 4 years ago and the relatively nearby Andromeda galaxy 2.56 million years ago. Not bad for 17th century science.

I think next time I’m sitting by my telescope waiting for an Io eclipse, I’ll be a bit more appreciative of the significance that 30-minute delay had on our understanding of the universe.

Post by: Dan Elijah.


I come in peace: Engaging life on a flat Earth

Did you know that the Earth is actually flat, not round, and that NASA and the government fuel the round Earth conspiracy?…No, neither did I, but this mind-boggling world view is currently gaining momentum on the internet and has recently found its way onto my radar.

To give you a bit of background:

Alongside my vociferous online academic rantings and day job helping researchers and the lay public work together to design and implement health research, I also spend a fair bit of time volunteering with the British Science Association (the BSA). The BSA is a charity and learned society founded in 1831 with many strings to its academic bow, including the standardisation of electrical units (the ohm, volt and amp). Today it is supported by a huge backbone of volunteers working tirelessly across the country to improve the public perception of science – letting everyone know that there is much more to science than just mind-boggling equations and stuffy white-haired professors.

Our small group of Mancunian volunteers meet monthly to mastermind and implement a huge range of engagement activities. Over the years I’ve been with the group I’ve found myself designing an endangered species treasure hunt (based on a mash-up of Pokemon Go and geocaching), baking cake pops for an astronomy and art crossover event held on the site of Manchester city centre’s oldest observatory and, just last week, hosting over 40 AS/A-level students at a science journalism workshop.

As a group we work hard to make sure our activities are fun and open to everyone – no matter what their academic background. But we’re not naive, so we recognise that our reach is still pretty small and that there are many communities in our home city who will never have heard of us. This is why we have been working with a BSA volunteer from our Birmingham branch whose role has been to help us find out more about Manchester’s hard-to-reach communities and discover how we can offer them meaningful engagement. It was during one of our meetings that she said she had been in contact with someone who runs a computer coding club for local teenagers and had noticed that some of these youngsters were adamant supporters of the ‘flat Earth’ theory – which is apparently backed up by a number of celebrities including rapper B.o.B, who recently went on an amusing and disturbing Twitter rant about the topic.

This got me thinking. If science has never really been your thing (which is fine, by the way, just like P.E. was never my thing), how do you avoid falling down the black hole of conspiracy theories (Illuminati, anti-vaccination, flat Earth)?

These theories offer an alternative world view which can, at first glance, appear to fit much better with the world we see and experience around us every day than the complex and often invisible world of science. Take flat Earth as an example. In our everyday lives we interact with both flat and round objects (compare a table top with a yoga ball) and, from these interactions, we build up an understanding of how these objects work. On a very basic level we see that things fall off a ball, you can’t really balance things on it like you can a table, and it has an obvious curvature. Then take a look at the Earth. We can stand and walk along it with no obvious indication of its curvature, and water sits flat in rivers and oceans; it doesn’t run down the sides of the Earth as it would if you spilled a glass of water onto a yoga ball. So, assuming you have little or no interest in astronomy (perhaps you live in the city centre so don’t get a good view of the night sky anyway) and the mathematics of gravity and scale makes your head hurt, it’s easy to understand why you may choose to mistrust theories which you cannot test or see for yourself.

So, with this in mind, my question is: Is it possible to design activities and interactions that don’t patronise or assume knowledge but enable people to test scientific theories in ways that make sense and allow them to simply observe the outcomes with their own eyes?

We are now hoping to meet with this community, attend some of their activities, make friends and let them know scientists are just ordinary people. Then we want to jump in and put together a small accessible science festival where everyone can have fun and hopefully engage with science on a small scale. I get the feeling it’s not going to be an easy sell but will undoubtedly be worth it if done properly.

My mind is bubbling with ideas, including the possibility of sending a GoPro camera up on a balloon and playing back the footage – the possibilities are endless…although sadly our budget isn’t. Whatever happens, I’m excited and will keep you all updated on our progress as things move forward.

For now I want to invite anyone reading this to drop me a line in the comments below. Perhaps you’re an academic who has worked on a similar event and has some ideas, or maybe you’re keen on the flat Earth theory and want to tell us more about what you believe? Either way I’d love to hear from you.

Post by: Sarah Fox

Update: A pretty interesting gif made from a few pictures my telescope-loving partner took last night showing Jupiter spinning on its axis – notice how the Great Red Spot moves round. Perhaps we could bring our telescopes along to the festival and have a play 🙂

Neural coding 1: How to understand what a neuron is saying.

In this post I am diverting from my usual astrophotography theme and entering the world of computational neuroscience, a subject I studied for almost ten years. Computational neuroscience is a relatively new interdisciplinary branch of neuroscience that studies how areas of the brain and nervous system process and transmit information. An important and still unsolved question in computational neuroscience is how neurons transmit information between themselves. This is known as the problem of neural coding; by solving it, we could potentially understand how all our cognitive functions are underpinned by neurons communicating with each other. So, for the rest of this post, I will attempt to discuss how we can read the neural code and why the code is so difficult to crack.

Since the 1920s we have known that excited neurons communicate through electrical pulses called action potentials or spikes (see Figure 1). These spikes can quickly travel down the long axons of neurons to distant destinations before crossing a synapse and activating another neuron (for more information on neurons and synapses see here).

Figure 1. Neural action potentials. An action potential diagram is shown on the left as if recorded from inside a neuron (see inset). For an action potential to arise and propagate through a neuron, it must reach a certain threshold (red dashed line). If it doesn’t, the neuron will remain at rest. The right panel shows a real neuron firing spikes in the cortex of a mouse. Taken from Gentet LJ et al. (2010).

You would be forgiven for thinking that the neural coding problem is solved: neurons fire a spike when they see a stimulus they like and communicate this fact to other nearby neurons, while at other times they stay silent. Unfortunately, the situation is a bit more complex. Spikes are the basic symbol used by neurons to communicate, much like letters are the basic symbols of a written language. But letters only become meaningful when many are used together. This analogy is also true for neurons. When a neuron becomes excited it produces a sequence of spikes that, in theory, represents the stimuli the neuron is responding to. So if you can correctly interpret the meaning of spike sequences you could understand what a neuron is saying. In Figure 2, I show a hypothetical example of a neuron responding to a stimulus.

Figure 2. A stimulus (top trace) fluctuates over time (s(t)) and spikes from a hypothetical neuron are recorded. The stimulus is repeated 5 times producing 5 responses r1,2,3…5 shown below the stimulus. Each response is composed of spikes (vertical lines) and periods of silence. By counting the number of spikes within a small time window lasting Δt seconds, we can calculate the firing rate of the neuron (bottom trace).

In this example a neuron is receiving a constantly fluctuating input. This is a bit like the signal you would expect to see in a neuron receiving a constant stream of inputs from thousands of other neurons. In response to this stimulus the receiving neuron constantly changes its spike firing rate. If we read this rate we can get a rough idea of what this neuron is excited by. In this case, the neuron fires faster when the stimulus is high and is almost silent when the stimulus is low. There is a mathematical method that can extract the stimulus feature that produces spikes, known as reverse correlation (Figure 3).

Figure 3. Reverse correlation can identify what feature of the stimulus (top) makes a neuron fire a spike (bottom). Stimulus samples are taken before each spike (vertical lines) and then averaged to produce a single stimulus trace representing the average stimulus that precedes a spike.

The method is actually very simple: each time a spike occurs we take a sample of the stimulus just before the spike. Hopefully many spikes are fired and we end up with many stimulus samples; in Figure 3 the samples are shown as dashed boxes over the stimulus. We then take these stimulus samples and average them together. If these spikes are being fired in response to a common feature in the stimulus, we will be able to see this. This is therefore a simple method of finding what a neuron actually responds to when it fires a spike. However, there are limitations to this procedure. For instance, if a neuron responds to multiple features within a stimulus then these will be averaged together, leading to a misleading result. Also, this method assumes that the stimulus contains a wide selection of different stimulus fluctuations. If it doesn’t then you can never really know what a neuron is responding to, because you may not have stimulated it with anything it likes!
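
For the computationally inclined, reverse correlation (often called the spike-triggered average) takes only a few lines of NumPy. Below is a minimal sketch; the white-noise stimulus and the toy rule generating the spikes are fabricated purely for illustration:

```python
import numpy as np

def spike_triggered_average(stimulus, spike_times, dt, window):
    """Average the stimulus segments immediately preceding each spike.

    stimulus    : 1-D array, the stimulus sampled every dt seconds
    spike_times : spike times in seconds
    dt          : stimulus sampling interval (seconds)
    window      : how far back before each spike to look (seconds)
    """
    n_lags = int(window / dt)
    segments = [stimulus[i - n_lags:i]
                for i in (int(round(t / dt)) for t in spike_times)
                if i >= n_lags]           # skip spikes too near the start
    return np.mean(segments, axis=0)      # the average stimulus before a spike

# Toy example: a 'neuron' that fires when the recent stimulus average is high.
rng = np.random.default_rng(0)
dt = 0.001                                          # 1 ms resolution
s = rng.standard_normal(100_000)                    # white-noise stimulus
recent = np.convolve(s, np.ones(20) / 20)[:len(s)]  # causal moving average
spike_times = np.flatnonzero(recent > 0.5) * dt

sta = spike_triggered_average(s, spike_times, dt, window=0.05)
# 'sta' now shows the upward stimulus swing that precedes each spike.
```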

In my next two posts, I will discuss how more advanced methods from the realms of artificial intelligence and probability theory have helped neuroscientists more accurately extract the meaning of neural activity.

Post by: Daniel Elijah

Afforestation vs reforestation

It is well known that deforestation is an increasing global problem. Even those with little scientific background are bombarded with information through social media, particularly regarding the consequences of deforestation, including global warming. Indeed, many charities, schools and individuals are now taking a stand and doing all they can to tackle this problem.

The planting of trees can be divided into two categories: afforestation and reforestation. Reforestation refers to planting trees on land that was previously forest, whereas afforestation refers to planting trees on patches of land which were not previously covered in forest. The general idea behind both is: as many trees as possible, wherever possible.

However, ecology is a complex science. Are we focusing too much on carbon sequestration and not enough on the planet’s ecosystems as a whole? Are some ecosystems being neglected and forgotten? Perhaps. This article will cover some issues associated with afforestation and reforestation.

Reforestation is beneficial when trees have been previously removed. However, these new trees will never recreate exactly the same ecosystem as the original forest. Indeed, the original trees which were cleared may have been hundreds, even thousands, of years old, meaning it may take many years for the new trees to catch up. In addition to this, rare species lost during the original deforestation may not be replaced, meaning extinctions and a loss of biodiversity may be unavoidable.

A tropical grassy biome.

Afforestation can also have negative consequences, especially if the tree planters don’t consider the environment they are introducing the new trees into. The idea of afforestation is to plant trees on patches of unused, degrading land. However, land which may appear degraded may actually house its own ecosystem, for example a savanna or tropical grassy biome. Research has suggested that tropical grassy biomes are often misunderstood and neglected. These ecosystems can provide important ecological services. In addition to this, they could contain rare species, which could be outcompeted by the introduction of new trees. Therefore, although carbon sequestration will increase, many ecosystems will be negatively affected or lost.

It has to be noted that both reforestation and afforestation can be advantageous when tackling global warming. However, possible negative impacts must also be taken into account in order to protect the planet as a whole. This can be achieved by ensuring that deforestation is kept to a minimum and afforestation only occurs on truly degraded land. There is a desperate need for more research into areas of land before trees are planted on them. The biggest challenge today is education. Charities, schools and individuals need to be made aware of this before it’s too late. Without awareness, irreversible damage can occur unknowingly. Effective conservation work requires more than just planting trees at random, and this needs to be considered on a global scale.
If we don’t stand up for all of our precious ecosystems, who will?

Post by: Alice Brown

References:

Parr, C.L. et al. (2014) Tropical grassy biomes: misunderstood, neglected, and under threat. Trends in Ecology & Evolution.

https://pixabay.com/en/photos/plain/?cat=nature

 


Meek no more: turning mice into predators

A recent study published in the journal Cell has shown that switching on a particular group of neurons in the mouse brain can turn these otherwise timid creatures into aggressive predators. Why would anyone want to do this, you might ask? After all, with the tumultuous political events of 2016, do we really want to add killer mice to our worries? Thankfully, the researchers aren’t planning to take over the world one rodent at a time; instead they want to understand how the brain coordinates the complex motor patterns associated with hunting.

During the evolution of vertebrates, the morphology of the head changed to allow for an articulated jaw. This is a more scientific way of describing the type of jaw most of us are familiar with: an opposable bone at the entrance of the mouth that can be used to grasp and manipulate food. This anatomical change allowed for the development of active hunting strategies and the associated neural networks to coordinate such behaviours. The researchers wanted to identify which parts of the brain contain the networks for critical hunting behaviours such as prey pursuit and biting. They began by looking at an evolutionarily old part of the brain known as the amygdala, specifically the central nucleus of the amygdala (CeA), because this area has been shown to increase its activity during hunting and has connections to parts of the brainstem controlling the head and face.

In order to study this part of the brain, the authors used a technique called optogenetics. This technique involves introducing the gene for a light-sensitive ion channel into specific neurons. It is then possible to ‘switch on’ the neurons (i.e. cause them to fire bursts of electrical activity) simply by shining blue light onto them. This is what the researchers did with the neurons in the CeA.

To begin with, the researchers wanted to find out what happens when you simply switch on these neurons. To test this they put small moving toys, resembling crickets, into the cage as ‘artificial prey’ and watched the animals’ behaviour. The mice were largely indifferent to these non-edible ‘prey’; however, as soon as the light was switched on the mice adopted a characteristic hunting position, seized the toys, and bit them. This never occurred when the light was off. The scientists also tested the mice with live crickets (i.e. prey that mice would naturally hunt). When using live prey the mice (without the light activation) hunted as normal. However, when the light was switched on the researchers saw that the time needed for the mice to capture and subdue their prey was much shorter and any captured crickets were immediately eaten. The combination of these results suggests that stimulation of the central nucleus of the amygdala (CeA) not only mimicked natural hunting but increased the predatory behaviour of these mice.

One question that might spring to mind from this study is: How do we know that these mice are really hunting? Perhaps the light had unintended effects such as making the mice particularly aggressive or maybe very hungry? After all, both explanations could account for the increased biting of non-edible objects and the faster, more aggressive cricket hunting. To argue against increased aggression levels, the authors point out that CeA stimulation did not provoke more attacks on other mice – something you might expect of an overly aggressive mouse. So what about increased hunger? The scientists in this study also think this is unlikely because they allowed the mice access to food pellets and found no difference in how many pellets were consumed during the time the laser was on versus the time the laser was off.

So how is hunting behaviour controlled by the CeA? The hunting behaviour displayed by mice can be divided into two aspects: locomotion (prey pursuit and capture) and the coordination of craniofacial muscles for the delivery of a killing bite. The scientists hypothesised that the CeA may mediate these two different types of behaviour through connections with different parts of the brain. The two key brain regions investigated in this study were the parvocellular region of the reticular formation in the brainstem (PCRt) and a region of the midbrain called the periaqueductal grey (PAG).

By using optogenetics the researchers were able to selectively stimulate the CeA to PCRt projection and found that this caused the mice to display feeding behaviours. Interestingly, stimulating this pathway seemed to only elicit the motor aspects of eating e.g. chewing rather than increasing the mice’s hunger. Conversely, disrupting the function of this pathway interfered with the mice’s ability to eat. Taking this into a ‘live’ setting, the mice could still pursue their prey and subdue it using their forepaws, but they struggled to deliver a killing bite. The researchers then turned their attention to the pathway between the CeA and the PAG. They found that stimulating this projection caused mice to start hunting more quickly, pursue their prey faster, and hunt for longer. Unlike the experiment above, stimulating this pathway had no effect on feeding-type behaviours. Now the scientists geared up for the big experiment: they’ve shown that stimulating the CeA leads to predatory hunting. They’ve shown that biting and pursuit seem to be controlled by different pathways from the CeA. So they decided to see if activating both pathways simultaneously (CeA to PCRt and CeA to PAG) could mimic the effects of stimulating the CeA itself. Indeed, they found that stimulating these two pathways together led the mice to robustly initiate attacks on artificial prey.

So what can we learn from this study? The scientists have demonstrated that the CeA acts as a command system for co-ordinating key behaviours for efficient hunting via two independent pathways. However, there are still some key questions remaining, for example, what determines whether the CeA sends those commands? The scientists hypothesise that cues such as the sight or smell of prey might cause the CeA to respond and send the command to elicit the appropriate motor actions. However, they can’t prove this in the current study.

Despite these limitations, this paper is a great example of how scientists can use cutting-edge tools, like optogenetics, to tease apart the brain pathways responsible for different aspects of a complex behaviour such as hunting.

Post by: Michaela Loft


Video astronomy: an update

Early last year I posted an article discussing the merits of webcam imaging. I had just bought some new equipment and wanted to put my enthusiasm into blog form. I was getting fed up with the annoyingly short observing time our cloudy nights provide us in the UK. Traditional long exposure photography, used to capture faint galaxies and nebulae, is simply out of the question on all but the clearest of nights. However, webcam astronomy is easy to learn, cheap and quick enough to do between clouds. Not only this but, on moonlit nights when long exposure photography would produce washed out pictures of galaxies, webcam imaging can deliver great lunar landscapes. Also, during the day, a webcam coupled with a telescope can capture the ever-changing surface of the Sun, meaning you can do astronomy without losing sleep!

So it is now time to show you some of my attempts at webcam astronomy. Before I show any processed images, I first want to demonstrate the main limitation facing astrophotography (other than light pollution): atmospheric turbulence. In Image 1, a section of the Moon is being videoed; notice how the detail is constantly shifting in and out of focus. This distortion is caused by currents of air at different temperatures which bend and scatter the light passing through the atmosphere.

Image 1. A movie clip of craters Aristoteles (top left) and Eudoxus (top right). The image shimmers because of the constant turbulence in Earth’s atmosphere. Author’s own work.

Although this may look bad, atmospheric distortion can get far worse! For instance, if the Moon moves close to the horizon then light coming from its surface has to travel through far more air, which badly distorts and scatters this light. Just look at how distorted the Sun appears as it is setting. Atmospheric distortion can also be caused in other ways. In Image 2, the Moon was passing just above my house, which unfortunately is not well insulated. The distortion caused by hot air escaping from my house dramatically reduces the detail you can see – I’d ask my wife to keep the heating off while I’m imaging but I fear this wouldn’t go down too well.

Image 2. Another movie clip taken when the Moon was setting just above my house. The hot air causes increased turbulence that causes the detail of the lunar landscape to dance and blur. Author’s own work.

Luckily, webcam astronomy possesses one amazing advantage over other forms of photography. Unlike traditional long exposure astrophotography, video recording produces thousands of individual images (or frames) of your target; this means you can be very strict about which frames to keep and which to discard. For example, to get one high quality image, I take about 4 minutes of video containing 14,400 frames at 60 frames/sec. I then pick the best 2000 of these frames and, using a program called PIPP, I stack them together to reduce noise and improve detail (see my previous post about stacking). This procedure means I can remove all the frames that were distorted by the atmosphere.
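
For the curious, the heart of that selection-and-stacking step is easy to sketch. The example below is a hypothetical illustration using OpenCV (the filename is made up, and real tools such as PIPP also align the frames before averaging, a step skipped here for brevity):

```python
import cv2
import numpy as np

# Score every frame for sharpness, keep the best 2000, and average them.
# All frames are held in memory for simplicity; long captures would need
# two passes, and proper stacking software aligns the frames first.
cap = cv2.VideoCapture("moon.avi")       # hypothetical capture file
frames, scores = [], []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Variance of the Laplacian: a cheap, standard sharpness measure.
    scores.append(cv2.Laplacian(gray, cv2.CV_64F).var())
    frames.append(gray.astype(np.float64))
cap.release()

best = np.argsort(scores)[-2000:]        # indices of the sharpest frames
stacked = np.mean([frames[i] for i in best], axis=0)
cv2.imwrite("moon_stacked.png", np.clip(stacked, 0, 255).astype(np.uint8))
```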

So after all that processing, what am I left with? The answer is superior detail: better than any individual frame in the movie, or even images taken using long exposure photography. In Image 3, lunar detail as small as 1 km across can be seen. Since the Moon was 370,000 km away at that point, this resolution is equivalent to spotting a 2 cm wide 1p coin from 7.4 km away! Quite an achievement for my small telescope, all because I have used only the frames taken during atmospheric stillness.
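
That equivalence is just a ratio of small angles: a 1 km feature at the Moon’s distance subtends

$$\theta \approx \frac{1\ \text{km}}{370{,}000\ \text{km}} \approx 2.7\ \mu\text{rad}, \qquad 2.7\ \mu\text{rad} \times 7.4\ \text{km} \approx 2\ \text{cm},$$

which is the same angle a 2 cm coin subtends from 7.4 km away.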

Image 3. A stacked image taken using the best 2000 frames of the movie in Image 1. The resolution has now improved substantially. Author’s own work.

Even during strong atmospheric turbulence, reasonable detail can be retrieved. In Image 4, lunar craters as small as about 5 km can be seen: not as good as Image 3, but still impressive.

Image 4. The stacked image from the movie shown in Image 2. Despite the strong atmospheric disturbance, fine detail can still be resolved. The crater to the far left is Sacrobosco. Author’s own work.

Of course, webcam astronomy is not limited to the Moon. With the correct light-rejecting filters, you can turn this powerful technique onto the Sun. During July 2016 there was a fantastic chain of sunspots (see Image 5); these features changed shape every day – merging, splitting and distorting – providing a very dynamic and unique astronomical sight.
Of course, before undertaking solar photography a few considerations must be addressed. (1) Make sure you research how to observe/image the Sun safely – I will not be happy if you go out and blind yourself after reading this article. (2) Be aware that the Sun will heat up your telescope, creating turbulent air inside the tube; to avoid this problem I covered my scope in kitchen foil.

Image 5. A stacked image of sunspots taken on 19/07/2016. The internal structure of the Sunspots can be seen as well as individual granulations across the solar surface. Author’s own work.

The planets are probably the most popular and evocative telescopic targets of all. Thankfully, webcam imaging provides an easy way to image them and make your own Solar System collection! I’ve added my own incomplete collection in Image 6. The apparent sizes of the planets are to scale.

Image 6. My Solar System collection: Jupiter (top left), Uranus (top middle), Neptune (top right), Mars (bottom left) and Venus (bottom middle). Author’s own work.

For the planets, I used exactly the same method as with the Moon. The hardest part is finding the planets in the night sky. If you are unfamiliar with the night sky then their locations can be found using planetarium software like Stellarium. I must also mention that you will need some experience to find Uranus and Neptune; they are faint, and you will need to be able to use a finder scope to home in on them.

In conclusion, I started learning astrophotography in the wrong order. Webcam astronomy provides all the excitement of capturing a new world in your back garden but without the long nights, tiresome setup and ruinously expensive equipment. So fetch that old scope out of your garage, buy a webcam and get recording – I have evidence to show you won’t be disappointed.

Post by: Daniel Elijah.

Carfentanil: The next step in the opioid crisis?

The US is in the midst of a national opioid epidemic. The use of opioids, which include prescription drugs and heroin, has quadrupled since 1999. The Centers for Disease Control and Prevention (CDC) has confirmed that these drugs now kill more people than car accidents in the US, making overdose the most common form of preventable death.

Opioids are a class of compounds, both opium-derived and synthetic, that relieve pain. These drugs act on the same receptors as endorphins, eliciting analgesic effects by inhibiting the release of neurotransmitters in the spinal cord. Exploited for centuries, they are still considered one of the most efficacious treatments for pain, despite serious side effects including physical and psychological addiction.

Fentanyl, a synthetic opioid developed for use in surgery, was first linked with overdose deaths in 2005. Alarmingly, the number of overdose cases involving fentanyl has escalated in recent years, with its misuse regularly making the headlines due to the sheer number of deaths associated with this drug. High-profile cases, such as the death of the global star Prince, have only added to this.

Carfentanil, another drug with a similar structure to fentanyl, has recently exploded onto the scene as carfentanil-laced drugs rear their toxic heads. An analogue of fentanyl, carfentanil was first synthesised in 1974 by Janssen Pharmaceutica (owned by Johnson & Johnson). This opioid was designed for use as a general anaesthetic in large animals such as rhinos – just 2 mg of carfentanil can knock out an African elephant. Due to its extreme potency, the lethal dose range of this drug is unknown in humans, which greatly amplifies the risk involved in taking it. Carfentanil is 10,000 times more potent than morphine, and 100 times more potent than fentanyl. As with other opioids, carfentanil kills by respiratory depression or cardiac arrest, which can cause death within minutes.

So, why are these drugs being increasingly abused? One explanation is that prescriptions of opioid drugs have increased since the 1970s – the result of a series of published papers downplaying the risk of addiction associated with opioid painkillers such as OxyContin and fentanyl. They were marketed to doctors as wonder drugs for treating day-to-day pain, with little addiction potential. As we now know, this turned out not to be the case. The resulting willingness of doctors to prescribe opioid painkillers increased the availability of these drugs. This problem was in turn worsened by a subset of pharmacies illegally filling out multiple prescriptions and the phenomenon of ‘doctor shopping’, where patients obtain prescriptions from multiple doctors at once. Currently, over 650,000 opioid prescriptions are dispensed by doctors every day in the US.

A number of recent studies found that almost half of young people using heroin had abused prescription opioids beforehand. This comes as no surprise when such potent drugs are used routinely to treat even minor sports injuries in young people. As a result of this alarming trend, new regulations were implemented in the US in 2014 to attempt to restrict the misuse of prescription painkillers. Unfortunately, this has forced many people experiencing drug addiction to turn to prescription fraud and illegally produced pills. Cartels in Mexico, the primary supplier of heroin to the US, have stepped in to provide cheaper and more potent opiate alternatives. Evidently, the reduction in the availability of legally-produced drugs has failed to remedy the issue of opioid misuse.

The unknown quantity and composition of drugs bought on the street, combined with the recent explosion in recreational use, has led to a surge in accidental overdoses. In 2016, both fentanyl and carfentanil were found as additives in heroin, cocaine and counterfeit Xanax pills in Florida, Ohio and neighbouring Michigan (including Detroit), among other states. As with any other illicit drug, users have no way of accurately determining the strength or purity of what they have bought.

The latest spike in overdoses has led to the DEA issuing a public health warning, with Acting Administrator Chuck Rosenberg describing carfentanil as ‘crazy dangerous’. It is hard to put a figure on the number of cases involving carfentanil as there are issues with obtaining samples and identifying how much was taken, with some facilities also unable to identify the compound in post-mortem toxicology reports.
The opioid antagonist naloxone (sold as Narcan™ nasal spray) also struggles to reverse the effects of fentanyl and carfentanil, with reports of patients needing up to five times the amount of naloxone recommended for a heroin overdose. As a result it can take up to five minutes to revive a patient – a process that normally takes a matter of seconds – vastly increasing the chance of lasting brain damage and death.

On average, opioid overdoses kill 91 Americans every day. This disturbing figure will continue to rise unless rapid change is seen in both government policy and society as a whole. There remains no easy solution to the opioid problem, and with a single gram of carfentanil able to cause 50,000 fatal overdoses, it seems the situation will only worsen unless dramatic changes are put into effect. Continued research into the causes of addiction and its treatment, coupled with investigation into new medications to treat pain, is also necessary for long-term management of this devastating crisis.

Post by: Sarah Lambert


Why the rat pack don’t do drugs

From awkward school seminars to the topical banter of South Park, we’ve all heard the message loud and clear: ‘Drugs are bad….ok?’. And yes, as a rule, messing with your brain chemistry is probably not a great idea. But there are certain nuances surrounding drug use and addiction that you may not be aware of, and which could have important implications for how we understand addiction and work with addicts.

Many of us may have heard about studies in the 1960s involving lab rats and cocaine. In these relatively simplistic studies, researchers offered caged rats a choice between regular drinking water and water laced with cocaine. Most animals studied didn’t just favour the drug-laced drinking water; they actively drank so much that they eventually killed themselves. These shocking findings led many researchers and politicians to believe that drugs such as heroin and cocaine were so dangerously addictive that they caused individuals to lose control over their own behaviour. And yes, these drugs can certainly be dangerous; however, there was more to this story than these researchers realised.

In the 1970s Bruce Alexander, a curious psychologist from Vancouver, noticed a big problem with this research. He recognised that all the rats studied in these addiction experiments were housed in small wire cages with no access to any of the things that make a wild rat’s life worth living (i.e. space to explore, a network of furry friends and lovers, and things to play with). So Alexander re-ran these early experiments but with one important difference: his rats all lived in the lap of rodent luxury. These lucky rats were residents of Alexander’s Rat Park, where they had space to explore, tunnels to scamper through and friends to interact with. Amazingly, although the residents of Rat Park were curious enough to try drug-laced drinking water, most would then shun this water – consuming less than a quarter of the drugs the isolated rats used – and, most importantly, none of Alexander’s rats died from overdoses.

On top of these findings Alexander also discovered that isolated and addicted rats which were subsequently released from their enforced isolation and introduced into Rat Park soon gave up their destructive habits in favour of a normal life.

So how does this change our understanding of addiction?

Professor Alexander argued that his discovery showed that addiction was more than simply a disease which chemically hijacked the brain, instead it could be an adaptation to an individual’s environment and social situation – i.e. addiction is not about you, it’s all about your cage.

In favour of Alexander’s ‘Rat Park theory’ we know that, although many individuals are prescribed the painkiller diamorphine (a medical name for heroin) following an injury, we rarely have problems with these patients becoming addicts. Could this be because the patients are able to return home after their stay in hospital to loving, supportive families and rewarding careers, so no longer need to rely on these drugs?

Although these findings are compelling, and perhaps suggest useful social interventions with regard to treating addicts, it is still important to understand the limitations of Rat Park and Alexander’s theory. Indeed, it is important to recognise that ‘Rat Park’ oversimplifies a complex societal and biological problem, and that this oversimplification may not be beneficial. Research still suggests that certain people have a physical predisposition towards addiction and, despite living socially enriched lives, these individuals can still fall foul of the addiction cycle. The myriad of research into the biological substrates of addiction could make up a post in its own right, so I will attempt to cover this in more detail in a later article. However, for now it’s important to recognise that even though environment is likely to play a role in addictive behaviours, biology is also important in shaping our vulnerability to addictive drugs and our subsequent success in kicking the habit. This research should all be considered together if we really want to successfully tackle the problems raised by drugs in our society.

Post by: Sarah Fox


The Adventures of Cornish Cod in the land of Scousers

Over two years ago I began a university course in Liverpool. Having travelled across the Pennines from the glorious lands of Yorkshire, my accent stood out, while I also found the Scouse accent particularly confusing – especially when drunk (student life). But it’s not just students living far from home who are getting confused. Climate change has been warming our oceans so much that cold-water species have started to migrate further north. This means that Cornish cod are now visiting Liverpudlian waters. It’s the start of a real North/South aquatic mixer! We all recognise the differences in culture between the North and South of England, but it’s also likely that these differences appear in fish culture, especially regional accents. I know aquatic accents may sound a bit fishy, but this is a real phenomenon.

Cod attract a mate by making sounds: a highly specialised ‘sonic muscle’ is drummed against the swim bladder to produce a thumping sound. But, as the Cornish cod and the Scouse cod start to mingle, the differences in their ‘accents’ could actually prevent them from communicating. This reminds me of nights out in Liverpool when the native guys try to chat you up – can anyone understand them, or are the chat-up lines just so bad that you’re really better off not understanding?

Males produce sounds (back to cod now, in case anyone was confused) which stimulate females to release their eggs; this allows the males to synchronise the release of their sperm to fertilise the eggs. If the fish aren’t able to understand each other it could seriously damage their reproductive success. Even if Cornish cod and Scouse cod can set aside their differences and develop an understanding, there is still the issue of noise pollution.

Noise pollution in an area can drown out the sweet mating sounds of male cod. Boats being driven past spawning grounds could have serious effects on cod communication. It could be that the species manages to adapt over time to overcome this dilemma (similar to how we exaggerate our gestures when the music is too loud to ask our mates if they want a drink). Perhaps the male cod will develop some epic dance moves to seduce their lady friends, or they may have to start signalling louder to be heard. This would, however, require more energy, meaning the cod would need to hunt more often, which could have detrimental effects on the rest of the ecosystem.

Poor Scouse cod: not only do they have to cope with noise pollution, but now they are being invaded by Southerners! Could life get any worse for them? Let’s just hope there isn’t a boom in the fish and chip shop trade…

Post by: Jennifer Rasal

References:

http://www.telegraph.co.uk/news/2016/10/04/cod-speak-with-regional-accents-scientists-believe/

https://commons.wikimedia.org/wiki/File:Portrait_of_Cod.jpg

http://www.bbc.co.uk/news/science-environment-37557945

http://www.fishecology.org/soniferous/waquoitposter.htm

