Neural coding 2: Measuring information within the brain

In my previous neuroscience post, I talked about spike-triggered averaging, the method scientists use to find which small part of a stimulus a neuron is capturing. This tells us what a neuron is interested in, such as the peak or trough of a sound wave, but it tells us nothing about how much information the neuron transmits about that stimulus to other neurons. We know from my last neuroscience post that a neuron will change its spike firing when it senses a stimulus it is tuned to. Unfortunately, neurons are not perfect and they make mistakes, sometimes staying quiet when they should fire or firing when they should be quiet. So when the neuron fires, listening neurons cannot be fully sure a stimulus has actually occurred. These mistakes lead to a loss of information as signals are passed from neuron to neuron, like Chinese whispers.

Figure 1: Chinese whispers is an example of information loss during communication. Source: jasonthomas92.blogspot.com

It is very important for neuroscientists to ascertain information flow within the brain because it underlies all other computational processes. After all, to process information within the brain you must first transmit it correctly! To understand and quantify information flow, neuroscientists use a branch of mathematics known as Information Theory. Information theory centers around the idea of a sender and a receiver. In the brain, both the sender and the receiver are normally neurons. The sender neuron encodes a message about a stimulus in a sequence of spikes. The receiving neuron (or neurons) tries to decode this spike sequence and work out what the stimulus was. Before the receiving neuron gets the signal, it has little idea what the stimulus was; we say this neuron has a high uncertainty about the stimulus. Receiving a signal from the sending neuron reduces this uncertainty, and the extent of the reduction depends on the amount of information carried in the signal. In case that is not clear, let's use an analogy: imagine you are a lost hiker with a map.

Figure 2: A map from one of my favourite games. Knowing where you are requires information. Source: pac-miam.deviantart.com

You have a map with 8 equally sized sectors and all you know is that you could be in any of them. You then receive a phone call telling you that you are definitely within 2 sectors of the map. This phone call actually contains a measurable amount of information. If we assume the probability of being in any sector of the map prior to the phone call is equal, then you have a 1/8 chance of being in each sector. We need a measure of uncertainty, and for this we use something called Shannon entropy. This measure is related to the number of different possible sectors in the map, so a map with 2000 sectors has a greater entropy than a map with 10 sectors. In our example, with 8 equally likely sectors, the entropy is log2(8) = 3 bits. After receiving the message, the entropy drops to log2(2) = 1 bit, because there are now only two map sectors you could be in. So the telephone call caused our uncertainty about our location to drop from 3 bits to 1 bit of entropy. The information within the phone call is equal to this drop in uncertainty, which is 3 - 1 = 2 bits of information. Notice that we didn't need to know anything about the map itself or the exact words in the telephone call, only the probabilities of your location before and after the call.
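To make that arithmetic concrete, here is a minimal Python sketch of the calculation. The function name and the uniform probabilities are my own illustrative assumptions; the point is simply that entropy is computed from probabilities alone.

```python
import numpy as np

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p)), ignoring zero probabilities."""
    probs = np.asarray(probs, dtype=float)
    probs = probs[probs > 0]
    return float(-np.sum(probs * np.log2(probs)))

# Before the call: equally likely to be in any of the 8 sectors.
before = shannon_entropy([1 / 8] * 8)   # 3.0 bits

# After the call: narrowed down to 2 equally likely sectors.
after = shannon_entropy([1 / 2] * 2)    # 1.0 bit

# The information in the call is the drop in uncertainty.
print(f"Information in the call: {before - after:.1f} bits")  # 2.0 bits
```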

In neurons, we can calculate information without knowing the details of the stimulus a neuron is responding to. The trick is to stimulate a neuron in the same way over many repeated trials using a highly varying, white-noise stimulus (see the bottom trace in Figure 3).

Figure 3: Diagram showing a neuron’s response to 5 repeats of identical neuron input (bottom). The responses are shown as voltage traces (labelled neuron response). The spike times can be represented as points in time in a ‘raster plot’ (top).

So how does information theory apply to this? Well, recall how Shannon entropy is linked to the number of possible sectors in a map. In a neuron's response, entropy is related to the number of different spike sequences a neuron can produce. A neuron producing many different spike sequences has a greater entropy.
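As a rough illustration of this idea (the spike patterns and function below are made up for the example, not data from the figures), we can describe each short stretch of a response as a binary spike sequence and compute the entropy of how often each sequence occurs:

```python
import random
from collections import Counter
from math import log2

def sequence_entropy(sequences):
    """Shannon entropy (bits) of the distribution of observed spike sequences."""
    counts = Counter(sequences)
    n = len(sequences)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A neuron that only ever produces two 4-bin spike patterns...
repetitive = ["1010", "0101"] * 50
# ...versus one that produces many different patterns.
varied = [format(random.randrange(16), "04b") for _ in range(100)]

print(sequence_entropy(repetitive))  # 1.0 bit: only two possible sequences
print(sequence_entropy(varied))      # close to 4 bits: up to 16 possible sequences
```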

The raster plots below (Figure 4) show the responses of three simulated neurons, generated with computer models that closely approximate real neuron behaviour. They are responding to a noisy stimulus (not shown) similar to the one at the bottom of Figure 3. Each dot is a spike fired at a certain time on a particular trial.

Figure 4: Raster plots show three neuron responses transmitting different amounts of information. The first (top) transmits about 9 bits per second of response, the second (middle) transmits 2 bits/s and the third (bottom) transmits 0.7 bits/s.

In each response, the neuron generates different spike sequences: some spikes are packed close together in time, while others are spaced further apart. This variation gives rise to entropy.

In the response of the first neuron (top), the spike sequences change over time but do not change at all across trials. This is an unrealistically perfect neuron: every spike sequence follows the stimulus with 100% accuracy. When the stimulus repeats on the next trial, the neuron simply fires the same spikes as before, producing vertical lines in the raster plot. All the entropy in the neuron's response is therefore driven by the stimulus, so all of it transmits information. This neuron is highly informative; despite firing relatively few spikes it transmits about 9 bits/second…pretty good for a single neuron.

The second neuron (Figure 4, middle) also shows spike sequences that vary over time, but now these sequences also vary slightly across trials. We can think of this response as having two types of entropy: a total entropy, which measures the total amount of variation a neuron can produce in its response, and a noise entropy. This second entropy is caused by the neuron changing its response to unrelated influences, such as inputs from other neurons, electrical and chemical interference, and random fluctuations in the neuron's signalling mechanisms. The noise entropy causes the variability across trials in the raster plot and reduces the information transmitted by the neuron. To be more precise, the information carried in this neuron's response is whatever remains of the total entropy once the noise entropy is subtracted from it…about 2 bits/s in this case.
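Here is a rough Python sketch of that total-minus-noise calculation, in the spirit of the "direct method" used in the field. The bin size, word length and the spike_train_information name are assumptions for illustration, not the exact procedure behind the figures:

```python
import numpy as np
from collections import Counter

def entropy_bits(words):
    """Shannon entropy (bits) of a list of hashable spike 'words'."""
    counts = Counter(words)
    probs = np.array(list(counts.values()), dtype=float) / len(words)
    return float(-np.sum(probs * np.log2(probs)))

def spike_train_information(trials, word_length=8):
    """Information per word = total entropy minus noise entropy.

    `trials` is a binary array (n_trials x n_bins), 1 where a spike occurred.
    Total entropy: variability of words pooled over all trials and times.
    Noise entropy: variability of words across trials at each fixed time, averaged.
    """
    n_trials, n_bins = trials.shape
    starts = range(0, n_bins - word_length + 1, word_length)

    # Total entropy: how many different words the neuron produces overall.
    all_words = [tuple(trials[t, s:s + word_length])
                 for t in range(n_trials) for s in starts]
    total_entropy = entropy_bits(all_words)

    # Noise entropy: how much the word varies across trials at the same moment.
    noise_entropy = np.mean([
        entropy_bits([tuple(trials[t, s:s + word_length]) for t in range(n_trials)])
        for s in starts
    ])
    return total_entropy - noise_entropy
```

Dividing this bits-per-word figure by the duration of a word gives, roughly, the bits-per-second numbers quoted in the figure caption.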

In the final response (bottom), the spikes only weakly follow the stimulus and are highly variable across trials. Interestingly, it shows the most complex spike sequences of all three examples. It therefore has a very large total entropy, which means it has the capacity to transmit a great deal of information. Unfortunately, much of this entropy is wasted because the neuron spends most of its time varying its spike patterns randomly instead of with the stimulus. This makes its noise entropy very high and the useful information low; it transmits a measly 0.7 bits/s.
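To see the two extremes, the spike_train_information sketch above can be run on toy data (illustrative only, not the simulations behind Figure 4): a neuron that repeats its response perfectly, and one whose spikes are unrelated to the stimulus.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_bins = 200, 400

# Perfect neuron: every trial repeats exactly the same spike pattern,
# so the noise entropy is zero and all of the total entropy is information.
template = rng.integers(0, 2, size=n_bins)
perfect = np.tile(template, (n_trials, 1))

# Noisy neuron: spikes differ randomly on every trial,
# so the total and noise entropies nearly cancel and little information remains.
noisy = (rng.random((n_trials, n_bins)) < 0.2).astype(int)

print(spike_train_information(perfect, word_length=4))  # all entropy is informative
print(spike_train_information(noisy, word_length=4))    # close to zero bits per word
```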

So, what should you take away from this post? Firstly, that neuroscientists can accurately measure the amount of information a neuron can transmit. Secondly, that neurons are not perfect and cannot respond in exactly the same way even to repeated, identical stimuli. This leads to the final point: the noise within neurons limits the amount of information they can communicate to each other.

Of course, I have only shown a simple view of things in this post. In reality, neurons work together to communicate information and overcome the noise they contain. Perhaps in the future, I will elaborate on this further…

Post by: Dan Elijah.
