An information theory perspective on probability

Mar 11, 2018

This article is an excerpt from the 44-page article “a Probability Perspective” by Eric C.R. Hehner. You can find the full article here. In it, Hehner gives the perspective on information theory presented below, then the Bayesian perspective, and finally the modeling of real-world events using a new probabilistic formalism based on programming, which he calls the programmer’s perspective. His formalism makes it possible to resolve classical paradoxes in a simple and unambiguous way.

In 1948, Claude Shannon invented information theory based on probability theory. The basic definition is entropy. Given a set of messages $m_i$, each occurring with probability $p_i$, their entropy is defined as

$$H = \sum_i p_i \log_2(1/p_i)$$

where $\log_2$ is the logarithm base 2. The messages could be letters in an alphabet, or words in a language, and the idea is that a long sequence of messages is sent from a sender to a receiver. The probability $p_i$ is the relative frequency of message $m_i$ in the sequence. Shannon referred to entropy as a measure of “uncertainty” on the part of the receiver, before receiving a message, about what message would be received next. It is independent of representation.
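
For concreteness, here is a minimal Python sketch (mine, not from Hehner's paper) of that definition; the example probabilities are made up for illustration.

```python
from math import log2

def entropy(probabilities):
    """Average information content, in bits, of a message drawn from
    the given distribution (the probabilities must sum to 1)."""
    return sum(p * log2(1 / p) for p in probabilities if p > 0)

# A toy alphabet of four messages with unequal relative frequencies.
print(entropy([0.5, 0.25, 0.125, 0.125]))  # 1.75 bits
```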

The word “entropy” comes from statistical mechanics, where it originally represented the amount of “disorder” in a large collection of molecules. Currently it is explained as the average energy carried by a molecule, which is related to the temperature by the Boltzmann constant. Although temperature is considered a macro property, and one may be reluctant to talk about the average value in a set that contains only one value, there is no harm in relating energy to temperature even for a single molecule.

Similarly, Shannon was reluctant to talk about the information content of each message individually, but there is no harm in doing so. If we define the information content of message $m_i$ as

$$I_i = \log_2(1/p_i)$$

then the entropy

$$H = \sum_i p_i I_i$$

is the average information content of a message, measured in bits.
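
A small sketch of that identity, again with made-up probabilities: the entropy computed above is just the probability-weighted average of the per-message information contents.

```python
from math import log2

def information(p):
    """Information content, in bits, of a message with probability p."""
    return log2(1 / p)

probs = [0.5, 0.25, 0.125, 0.125]
average_information = sum(p * information(p) for p in probs)
print(average_information)  # 1.75 bits, the same value as the entropy
```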

In 1948 it made good sense to explain information in terms of probability; information (as a mathematical theory) was unknown, and probability (as a mathematical theory) was already well developed. But today it might make better sense to explain probability in terms of information. Most people today have a quantitative idea of what information and memory are; they talk about bits and bytes; they buy an amount of memory, and hold it in their hand; they wait for a download, and complain about the bandwidth. Many people already understand the important difference between information and memory; they compress files before sending them, and they decompress files upon receiving them.

Information theory talks about messages, but it could just as well talk about events, or outcomes of an experiment. (Perhaps a message is just a special case of event, or perhaps an event is just a special case of message.) Let us be more abstract, and dispense with events and messages. The information (in bits) associated with probability $p$ is

$$I = \log_2(1/p)$$

which is easily inverted:

$$p = 2^{-I}$$

to allow us to define probability in terms of information. The suggestion to define probability in terms of information is intended as a pedagogical technique: define the less familiar in terms of the more familiar, or perhaps I mean define the less understood in terms of the more understood. Henceforth I will be neutral on this point, making use of the relationship between them, without taking either one of them to be more basic.
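
A minimal sketch of the two directions of that conversion (the function names are mine, not Hehner's):

```python
from math import log2

def bits_from_probability(p):
    """Information, in bits, associated with probability p."""
    return log2(1 / p)

def probability_from_bits(i):
    """Probability associated with i bits of information."""
    return 2 ** -i

p = 0.125
i = bits_from_probability(p)        # 3.0 bits
print(i, probability_from_bits(i))  # round-trips back to 0.125
```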

Shannon explained the amount of information carried by a message as a measure of how surprised one is to learn the message. Probability is also a measure of surprise, or inversely, expectation. If there are two possibilities A and B, and I say it will probably be A, I mean that I expect to see A, and I will be surprised to see B. A numeric probability expresses the strength of my expectation to see A, and the amount of my surprise if I see B. One’s expectation and surprise may be shaped by past frequencies, or they could be shaped by considerations that apply to one-time-only events.

Scale

There are two temperature scales in common use: Fahrenheit (in the USA) and Celsius (in the rest of the world). There are formulas to convert each to the other:

$$F = \tfrac{9}{5} C + 32$$

and

$$C = \tfrac{5}{9} (F - 32)$$

Whenever two physical quantities can be converted, each to the other, they measure the same thing on different scales. (More generally, every physical law says that there are fewer things to measure than there are variables in the law.) So energy and mass measure the same thing on different scales: $e = mc^2$ and $m = e/c^2$.
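
As a quick sanity check on the two temperature formulas, a tiny sketch:

```python
def fahrenheit_from_celsius(c):
    return c * 9 / 5 + 32

def celsius_from_fahrenheit(f):
    return (f - 32) * 5 / 9

print(fahrenheit_from_celsius(100))                           # 212.0
print(celsius_from_fahrenheit(32))                            # 0.0
print(celsius_from_fahrenheit(fahrenheit_from_celsius(100)))  # round-trips back to 100.0
```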

More to the point, information and probability measure the same thing on different scales:

$$I = \log_2(1/p)$$

and

$$p = 2^{-I}$$

I am not sure what to call the “thing” measured on these two scales; rather than introduce a new word I shall just call it “information”.

There is another scale in common use for measuring information: the number of possible states. (This same scale applies to energy-temperature-mass too.) This is the scale preferred by people who build “model checkers” to verify the correctness of computer hardware or software. They like to say they can handle up to $10^{60}$ states, which is something like the number of atoms in our galaxy. That is a truly impressive number, until we realize that $10^{60}$ is about $2^{200}$, which is the state space of 200 bits, or about six 32-bit variables; we rapidly descend from $10^{60}$ states to 6 program variables!
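
The arithmetic behind that deflation, as a quick sketch:

```python
from math import log2

states = 2 ** 200
print(f"{states:.1e}")    # about 1.6e+60 states
print(log2(states))       # 200.0 bits
print(log2(states) / 32)  # 6.25, i.e. about six 32-bit variables
```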

In order to write the conversion formulas among the three scales neatly, I need unit names for each of them. We already have the “bit” and the “state”; I am missing a unit for the probability scale, so let me invent the “chance”. (All three of these units are non-physical; they are alternative names for unity (pure numbers).) Here are the conversions.

b bit = 2^b state = 2^(-b) chance
s state = 1/s chance = log2(s) bit
c chance = log2(1/c) bit = 1/c state
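
A sketch of those three conversions as code (the function names and unit ordering are mine):

```python
from math import log2

def bit_to_state_chance(b):
    return 2 ** b, 2 ** -b         # states, chance

def state_to_chance_bit(s):
    return 1 / s, log2(s)          # chance, bits

def chance_to_bit_state(c):
    return log2(1 / c), 1 / c      # bits, states

print(bit_to_state_chance(3))      # (8, 0.125)
print(state_to_chance_bit(8))      # (0.125, 3.0)
print(chance_to_bit_state(0.125))  # (3.0, 8.0)
```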

Let’s look at three example points on these scales.

0 bit = 1 state = 1 chance
1 bit = 2 state = 0.5 chance
∞ bit = ∞ state = 0 chance

On the middle line, 1 bit is the amount of information needed to tell us which of 2 states we are in, or has occurred, or will occur, and that corresponds to probability 1/2 chance for each state. On the top line, 0 bits is the amount of information needed to tell us which state we are in if there is only 1 state, and that corresponds to 1 chance (certainty). On the bottom line, it takes ∞ bits to tell us that something impossible is occurring (Shannon would say that we are infinitely surprised). (I say “certain” for probability 1 and “impossible” for probability 0 and I don’t care about any measure-theoretic difference.) […]
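
Those three points can be checked with the conversion sketch above; the impossible case is taken as a limit:

```python
from math import log2, inf

def bits(chance):
    """Bits needed to identify a state that occurs with probability `chance`."""
    return inf if chance == 0 else log2(1 / chance)

print(bits(1))    # 0.0 bits: certainty, only 1 state
print(bits(0.5))  # 1.0 bit: 2 equally likely states
print(bits(0))    # inf: an impossible event is infinitely surprising
```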

The “problem of prior probabilities” is the problem of how Bayesians justify the assumption that the initial probability distribution is uniform across all states. I suggest that there is no “assumption” being made, and so no need for “justification”.

Saying that there are 4 states is saying, on another scale, that the probability is 1/4, and on yet another scale that 2 bits are required to specify the situation. If we then learn that one of the states never occurs, we adjust: there are 3 states (that occur); each of the (occurring) states has probability 1/3 (and any nonoccurring state has probability 0); it takes about 1.585 bits (that is, $\log_2 3$) to identify a state that occurs, and infinitely many bits to identify any nonoccurring state.
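
A quick sketch of that adjustment:

```python
from math import log2

# Before: 4 equally likely states.
print(log2(4), 1 / 4)  # 2.0 bits, probability 0.25

# After learning that one state never occurs: 3 equally likely states.
print(log2(3), 1 / 3)  # about 1.585 bits, probability 0.333...
```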

To be less extreme, if we learn that one of the four states rarely occurs, then we adjust: as a measure of information, there are fewer than 4 but more than 3 states; each commonly occurring state has a probability between 1/4 and 1/3, and the rarely occurring state has a probability between 0 and 1/4; it takes somewhere between 1.585 and 2 bits to identify any of the commonly occurring states, and somewhere between 2 and ∞ bits to identify the rarely occurring state. In general, having no prior information about which of n states occurs is probability 1/n for each state, not by assumption, but by a change of scale. […]
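
To make that concrete, here is a sketch with one made-up distribution in which the rare state has probability 0.1 (the numbers are mine, chosen only to fall inside Hehner's bounds; reading "between 3 and 4 states" as 2 raised to the entropy is also my interpretation):

```python
from math import log2

probs = [0.3, 0.3, 0.3, 0.1]        # three common states and one rare state

for p in probs:
    print(p, round(log2(1 / p), 3)) # common: ~1.737 bits; rare: ~3.322 bits

# One way to read "fewer than 4 but more than 3 states": 2 ** entropy.
entropy = sum(p * log2(1 / p) for p in probs)
print(round(2 ** entropy, 3))       # about 3.72 "states"
```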

What is the point of having several scales on which to measure the same quantity? If they are Fahrenheit and Celsius for measuring temperature, there is no point at all; they are linear translations of each other, and the duplication is just annoying. A slide rule multiplies two numbers by transforming them to a logarithmic scale, where the multiplication is transformed into the simpler operation of addition, and then transforms the result back. Fourier transforms are used for the same reason. Similarly, perhaps some information calculations are easier on the chance (probability) scale, others on the bit scale, and still others on the state scale. Thus they might all be useful.
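
As one illustration of a calculation that is simpler on one scale than another (my example, not Hehner's): for independent events, probabilities multiply on the chance scale, while information simply adds on the bit scale, exactly as a slide rule turns multiplication into addition.

```python
from math import log2

p_a, p_b = 0.25, 0.125                # two independent events

# Chance scale: multiply.
print(p_a * p_b)                      # 0.03125

# Bit scale: add, then convert back.
bits = log2(1 / p_a) + log2(1 / p_b)  # 2 + 3 = 5 bits
print(2 ** -bits)                     # 0.03125 again
```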

See how to construct probability theory based on those insights in my next article: Probability theory, the logic of uncertainty