In a previous article I showed that the inference rules of propositional logic can be obtained from probability calculus. But actually, we can obtain much more, and even explain why most people don’t think the propositional logic way.

In this article, we will see that there is more to the traditional implication than propositional logic lets on. Using probability calculus and Bayesian inference, we will show why most people mistakenly use the implication… and show that they are not completely mistaken after all.

Let’s take an example. Yesterday evening, my friend Bob was on his way to a party. He told me this: “If I can kiss Alice during the party, I will go to the cinema with her tomorrow evening”. I haven’t seen Bob since the party, but a friend saw Alice and him at the cinema’s evening showing today.

Did you assume that Bob managed to get a kiss from Alice during the party? Logicians know this shortcut too well. In propositional logic, nothing tells us that Bob kissed Alice. Maybe they didn’t kiss and still went to the cinema today.

But using probability calculus, we can show that our probability estimate for their kiss increased when we learned they went to the cinema. This is what I will prove now.
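Before the general proof, the story itself can be run as a quick Bayesian update. The sketch below is in Python, and every number in it (the prior chance of a kiss, the chance they go to the cinema anyway) is an assumption made up for illustration:

```python
# Hypothetical numbers, assumed for illustration:
p_kiss = 0.3                   # prior: p(kiss at the party)
p_cinema_given_kiss = 1.0      # Bob's rule: "if kiss, then cinema"
p_cinema_given_no_kiss = 0.2   # they might go to the cinema anyway

# Law of total probability: p(cinema)
p_cinema = (p_cinema_given_kiss * p_kiss
            + p_cinema_given_no_kiss * (1 - p_kiss))

# Bayes' theorem: p(kiss | cinema)
p_kiss_given_cinema = p_cinema_given_kiss * p_kiss / p_cinema

print(p_kiss_given_cinema)  # ≈ 0.68, up from the prior of 0.3
```

Seeing them at the cinema does not prove the kiss, but under these assumptions it more than doubles its probability.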

Without further ado, let’s dive in.

$A$ and $B$ are propositions, and I use the conventions $p_e(X) = p(X \mid e)$ and $\bar{A}$ = “not $A$”. If you need a cheatsheet about probability calculus or the notations I use, check this out.

If we let $R$ = “$A \Rightarrow B$”, then $p(B \mid AR) = 1$ by definition of $\Rightarrow$. I already showed in a previous article that $p(B \mid AR) = 1$ is enough to derive the usual equivalent forms of the implication using probability calculus. This article showed the following rules:

| Given the rule $R$ | we conclude | in probability calculus |
| --- | --- | --- |
| If $A$ is true | then $B$ is true | $p(B \mid AR) = 1$ |
| If $B$ is false | then $A$ is false | $p(\bar{A} \mid \bar{B}R) = 1$ |

But actually, given this rule $R$, we can show that:

| Given the rule $R$ | we conclude | in probability calculus |
| --- | --- | --- |
| If $B$ is true | then $A$ is more likely | $p(A \mid BR) > p(A \mid R)$ |
| If $A$ is false | then $B$ is less likely | $p(B \mid \bar{A}R) < p(B \mid R)$ |
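All four rules can be checked on a concrete distribution. Here is a minimal Python sketch; the joint probabilities are arbitrary assumptions, chosen only so that $p(B \mid A) = 1$, i.e. so that the rule $A \Rightarrow B$ holds:

```python
# Arbitrary joint distribution over (A, B), chosen so that p(B|A) = 1
# (the numbers themselves are assumptions for illustration):
p_ab, p_anb = 0.3, 0.0      # p(A and B), p(A and not-B)
p_nab, p_nanb = 0.2, 0.5    # p(not-A and B), p(not-A and not-B)

p_a, p_b = p_ab + p_anb, p_ab + p_nab   # marginals p(A) = 0.3, p(B) = 0.5
p_na, p_nb = 1 - p_a, 1 - p_b

print(p_ab / p_a)     # p(B|A)           = 1.0: if A then B
print(p_nanb / p_nb)  # p(not-A | not-B) = 1.0: if not B then not A
print(p_ab / p_b)     # p(A|B)           = 0.6 > p(A) = 0.3: A more likely
print(p_nab / p_na)   # p(B | not-A)     ≈ 0.29 < p(B) = 0.5: B less likely
```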

## Part 1: when $B$ is true, $A$ is more likely

I will now show that we can rewind the arrow: **given a rule such as $A \Rightarrow B$, we will show that the probability for $A$ increases when we gain information about $B$**. From the propositional logic vantage point, this is surprising because information about $B$ doesn’t tell us anything about $A$. As we will see, in probability calculus, it’s a completely different story. This could explain why most people mistakenly use $A \Rightarrow B$ as $B \Rightarrow A$, even though both are completely different in propositional logic.

Let $e$ be an evidence such that $p_e(B) < p_e(B \mid A)$. For instance, the rule $A \Rightarrow B$ is such evidence, given that it makes $p_e(B \mid A) = 1$ (as long as $p_e(B) < 1$). But so is the weaker rule: “if $A$ is true, then $B$ is more likely”.

We will show that given evidence $e$, we also have $p_e(A) < p_e(A \mid B)$, which means that evidence for $B$ increases our belief in $A$.

We have:

$$\begin{align} p_e(AB) &= p_e(A \mid B)\,p_e(B) \\ \text{ and } p_e(AB) &= p_e(B \mid A)\,p_e(A) \\ \Rightarrow \color{blue}{p_e(A \mid B) / p_e(A)} &= \color{red}{p_e(B \mid A) / p_e(B)} \end{align}$$

Hence:

$$\begin{align} p_e(B) &< p_e(B \mid A) \\ \Rightarrow 1 &< \color{red}{p_e(B \mid A) / p_e(B)} \\ \Rightarrow 1 &< \color{blue}{p_e(A \mid B) / p_e(A)} \\ \Rightarrow p_e(A) &< p_e(A \mid B) \end{align}$$
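Note that the derivation only uses $p_e(B) < p_e(B \mid A)$, so it applies even when the rule is uncertain. A quick Python check (the joint distribution is an arbitrary assumption, this time with $p_e(B \mid A) < 1$):

```python
# Assumed joint distribution over (A, B) with p(B|A) > p(B) but p(B|A) < 1:
p_ab, p_anb = 0.25, 0.05    # p(A and B), p(A and not-B)
p_nab = 0.20                # p(not-A and B); p(not-A and not-B) = 0.50

p_a = p_ab + p_anb          # p(A) = 0.30
p_b = p_ab + p_nab          # p(B) = 0.45
p_b_given_a = p_ab / p_a    # ≈ 0.83: B is more likely given A
p_a_given_b = p_ab / p_b    # ≈ 0.56

# The two ratios from the derivation are equal...
print(abs(p_a_given_b / p_a - p_b_given_a / p_b) < 1e-12)  # True
# ...so p(B) < p(B|A) indeed forces p(A) < p(A|B)
print(p_a < p_a_given_b)  # True
```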

And that’s why, given the rule $A \Rightarrow B$, the probability for $A$ increases when we learn that $B$ is true, even though propositional logic doesn’t tell us anything about $A$.

## Part 2: when $A$ is false, $B$ is less likely

Actually, we can even show that when $p_e(B) < p_e(B \mid A)$, the probability estimate for $B$ decreases when we learn that $A$ is false!

I will now show that if $p_e(B) < p_e(B \mid A)$, then $p_e(B \mid \bar{A}) < p_e(B)$. As in Part 1, we have:

$$\begin{align} p_e(\bar{A}B) &= p_e(B \mid \bar{A})\,p_e(\bar{A}) \\ \text{ and } p_e(\bar{A}B) &= p_e(\bar{A} \mid B)\,p_e(B) \\ \Rightarrow p_e(B \mid \bar{A}) / p_e(B) &= p_e(\bar{A} \mid B) / p_e(\bar{A}) \end{align}$$

But we showed in Part 1 that $p_e(A) < p_e(A \mid B)$, hence $p_e(\bar{A} \mid B) < p_e(\bar{A})$, so the fraction $p_e(\bar{A} \mid B) / p_e(\bar{A})$ is less than $1$. Thus the proof that:

$$p_e(B \mid \bar{A}) < p_e(B)$$
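This too can be verified numerically. A Python sketch with an assumed joint distribution where $B$ is more likely given $A$ without being certain:

```python
# Assumed joint distribution over (A, B) with p(B) < p(B|A) < 1:
p_ab, p_anb = 0.25, 0.05    # p(A and B), p(A and not-B)
p_nab = 0.20                # p(not-A and B); p(not-A and not-B) = 0.50

p_a = p_ab + p_anb                   # p(A) = 0.30
p_b = p_ab + p_nab                   # p(B) = 0.45
p_b_given_a = p_ab / p_a             # ≈ 0.83 > p(B): the premise holds
p_b_given_not_a = p_nab / (1 - p_a)  # ≈ 0.29

# Part 2's claim: learning "not A" lowers our estimate for B
print(p_b_given_not_a < p_b)  # True
```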