
Why Bayesian inference is more powerful than logic

Mar 17, 2018

In a previous article, I showed that the inference rules of propositional logic can be obtained from probability calculus. But we can actually obtain much more, and even explain why most people don't reason the way propositional logic prescribes.

In this article, we will see that there is more to the traditional implication than propositional logic suggests. Using probability calculus and Bayesian inference, we will show why most people mistakenly use the $A \Rightarrow B$ implication in reverse… and show that they are not completely mistaken after all.

Let's take an example. Yesterday evening, my friend Bob was on his way to a party. He told me this: "If I can kiss Alice during the party, I will go to the cinema with her tomorrow evening." I haven't seen Bob since the party, but a friend saw Alice and him at the cinema's evening showing today.

Did you assume that Bob managed to get a kiss from Alice during the party? Logicians know this shortcut too well. In propositional logic, nothing tells us that Bob kissed Alice. Maybe they didn’t kiss and still went to the cinema today.

But using probability calculus, we can show that our probability estimate for their kiss increased when we learned they went to the cinema. This is what I will prove now.

Without further ado, let’s dive in.

A and B are propositions, and I use the convention $p_e(\cdot) = p(\cdot \mid e)$. If you need a cheatsheet about probability calculus or the notations I use, check this out.

If we let $e$ = "$A \Rightarrow B$", then $p_e(B \mid A) = 1$ by definition of $e$. I already showed in a previous article that $e$ is enough to derive the usual equivalent forms, $A \Rightarrow B \equiv \bar{B} \Rightarrow \bar{A} \equiv \bar{A} + B$, using probability calculus. That article showed the following rules:

If A is true then B is true
If B is false then A is false
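These two rules can be spot-checked numerically. Below is a minimal Python sketch using an assumed joint distribution over A and B (the numbers are illustrative, not from the article) in which the rule $A \Rightarrow B$ holds, i.e. the cell "A true, B false" has probability zero:

```python
# Assumed joint distribution p(A, B) satisfying the rule A => B:
# the (A=True, B=False) cell has probability 0, so p(B | A) = 1.
joint = {
    (True, True):  0.30,
    (True, False): 0.00,
    (False, True): 0.20,
    (False, False): 0.50,
}

def cond(joint, target, given):
    """p(target | given), where target/given are ("A" or "B", bool) pairs."""
    idx = {"A": 0, "B": 1}
    num = sum(p for ab, p in joint.items()
              if ab[idx[target[0]]] == target[1] and ab[idx[given[0]]] == given[1])
    den = sum(p for ab, p in joint.items() if ab[idx[given[0]]] == given[1])
    return num / den

# "If A is true then B is true": p(B | A) = 1
print(cond(joint, ("B", True), ("A", True)))    # 1.0
# Contrapositive, "if B is false then A is false": p(not A | not B) = 1
print(cond(joint, ("A", False), ("B", False)))  # 1.0
```

Any joint distribution with a zero in that cell would do; the specific values are arbitrary.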

But actually, given this rule $A \Rightarrow B$, we can show that:

If B is true then A is more likely
If A is false then B is less likely

Part 1: when A is more likely

I will now show that we can rewind the arrow: given a rule such as $A \Rightarrow B$, we will show that the probability of A increases when we gain information about B. From the propositional logic vantage point, this is surprising, because information about B doesn't tell us anything about A. As we will see, in probability calculus it's a completely different story. This could explain why most people mistakenly use $A \Rightarrow B$ as $B \Rightarrow A$, even though the two are completely different in propositional logic.

Let $e$ be evidence such that $p_e(B) < p_e(B \mid A)$. For instance, the rule $A \Rightarrow B$ is such evidence, given that $p_e(B) \neq 1$. But so is the weaker rule: $A \Rightarrow \mathrm{more\_plausible}(B)$.

We will show that given evidence $e$, we also have $p_e(A) < p_e(A \mid B)$, which means that evidence for B increases our belief in A.

We have: $p_e(A \cap B) = p_e(A \mid B)\, p_e(B)$ and $p_e(A \cap B) = p_e(B \mid A)\, p_e(A)$, so $p_e(A \mid B)/p_e(A) = p_e(B \mid A)/p_e(B)$.
Hence: $p_e(B) < p_e(B \mid A) \Rightarrow 1 < p_e(B \mid A)/p_e(B) \Rightarrow 1 < p_e(A \mid B)/p_e(A) \Rightarrow p_e(A) < p_e(A \mid B)$

And that's why, given the rule $A \Rightarrow B$, the probability of A increases when we know that B is true, even though propositional logic doesn't tell us anything about A.
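The derivation above can be checked numerically. Here is a minimal Python sketch using an assumed joint distribution under the evidence $e$ = "$A \Rightarrow B$" (the probabilities are made up for illustration; only the zero cell is forced by $e$):

```python
# Assumed joint distribution under evidence e = "A => B":
# the (A, not B) cell is 0, so p_e(B | A) = 1.
p_joint = {
    (True, True):  0.30,   # A and B
    (True, False): 0.00,   # A and not B -- ruled out by e
    (False, True): 0.20,   # not A and B
    (False, False): 0.50,  # not A and not B
}

p_A = p_joint[(True, True)] + p_joint[(True, False)]   # prior for A
p_B = p_joint[(True, True)] + p_joint[(False, True)]   # prior for B
p_A_given_B = p_joint[(True, True)] / p_B              # posterior for A given B

print(p_A, p_A_given_B)  # 0.3 0.6 -- learning B raised our belief in A
```

With these numbers, $p_e(A) = 0.3$ rises to $p_e(A \mid B) = 0.6$: exactly the "rewound arrow" effect, without B ever logically implying A.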

Part 2: when B is less likely

Actually, we can even show that when $A \Rightarrow B$, the probability estimate for B decreases when we know that A is false!

I will now show that if $p_e(B) < p_e(B \mid A)$, then $p_e(B \mid \bar{A}) < p_e(B)$:

$p_e(B \mid \bar{A}) = \dfrac{p_e(\bar{A} \mid B)}{p_e(\bar{A})}\, p_e(B) = \dfrac{1 - p_e(A \mid B)}{1 - p_e(A)}\, p_e(B)$

But we showed that $p_e(A) < p_e(A \mid B)$, so $1 - p_e(A \mid B) < 1 - p_e(A)$ and the fraction is less than 1. This proves that:

$p_e(B \mid \bar{A}) < p_e(B)$
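This second result can be spot-checked numerically too. The sketch below uses an assumed joint distribution of the same shape as before (illustrative numbers; the zero cell encodes $A \Rightarrow B$):

```python
# Assumed joint distribution under e = "A => B" (illustrative numbers).
p_joint = {
    (True, True):  0.30,
    (True, False): 0.00,   # impossible under A => B
    (False, True): 0.20,
    (False, False): 0.50,
}

p_B = p_joint[(True, True)] + p_joint[(False, True)]        # prior for B
p_not_A = p_joint[(False, True)] + p_joint[(False, False)]  # p(not A)
p_B_given_not_A = p_joint[(False, True)] / p_not_A          # posterior for B

print(p_B_given_not_A, p_B)  # learning "not A" lowered our belief in B
```

Here $p_e(B) = 0.5$ drops to $p_e(B \mid \bar{A}) = 0.2/0.7 \approx 0.29$: learning that Bob did not get his kiss makes the cinema outing less plausible, even though propositional logic is silent on the matter.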