
Given two events $A$ and $B$, it's common to see the concept of independence defined via various combinations of:

  1. $P(A|B)=P(A)$
  2. $P(B|A)=P(B)$
  3. $P(A\cap B)=P(A)P(B)$

What is correct to say? Is condition 1 necessary and sufficient to establish what is called independence (because it would imply 2)? Are conditions 1 and 2 the necessary ones? Does condition 3 imply 1 and 2, and is it the only one necessary? Are some of them special cases of others?

RobPratt
  • 50,938
Rick
  • 17

2 Answers

3

Independence of two events $A, B \in \mathcal F$ in a probability space $(\Omega,\mathcal F, \mathbb P)$, where $\mathcal F$ is the sigma-algebra, is defined as

$$\Pr(A\cap B)= \Pr(A)\Pr(B)$$

The other two, $\Pr(A\mid B)=\Pr(A)$ and $\Pr(B \mid A) = \Pr(B)$, cannot be used as definitions of independence because, for instance, $\Pr(A\mid B)=\Pr(A)$ requires $\Pr(B)\neq 0.$
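
As noted in the comments, if $A$ and $B$ both have nonzero probabilities, then conditions (1), (2), and (3) are all equivalent. A short chain, assuming $\Pr(A)>0$ and $\Pr(B)>0$:

$$\Pr(A\mid B)=\Pr(A)\iff\frac{\Pr(A\cap B)}{\Pr(B)}=\Pr(A)\iff\Pr(A\cap B)=\Pr(A)\Pr(B)\iff\frac{\Pr(A\cap B)}{\Pr(A)}=\Pr(B)\iff\Pr(B\mid A)=\Pr(B).$$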

Independent events cannot always be understood intuitively as "knowing about event $B$ gives you no information about event $A$." The best example is explained here:

In the experiment of throwing a dart at the real line (say, uniformly at random on $[0,1]$, so the probability measure is Lebesgue measure on that interval), let $A=\{\text{lands on a rational number}\}$ and $B=\{\text{lands on an irrational number}\}$. In this case $A\cap B=\emptyset$ and $\Pr(A\cap B)=0.$ Hence these events are independent: this is consistent with $\Pr(A) \Pr(B)=0\cdot 1=0$, since $\Pr(A)=0$. Yet knowing that the dart has landed on an irrational number rules out the possibility of its having landed on a rational number.

An even more mind-blowing example is the extension to the event $B$ (landing on an irrational) being independent of itself: the probability of landing on an irrational under the Lebesgue measure is $1$, and therefore $\Pr(B \cap B)= \Pr(B)\Pr(B).$
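
To spell out the arithmetic: $\Pr(B\cap B)=\Pr(B)=1$ and $\Pr(B)\Pr(B)=1\cdot 1=1$, so the defining product equation holds. More generally, an event $E$ is independent of itself exactly when $\Pr(E)=\Pr(E)^2$, i.e. when $\Pr(E)$ is $0$ or $1$.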

  • Great, that's what I expected; I will vote it as an answer. Can you just add the important information that "If events A and B both have nonzero probabilities, then (1), (2), (3) are equivalent statements," as @ryang said in the comments? Also mention the equivalence between (1) and (2) as a consequence of the definition of P(A|B). And put the timestamp around 45:45 (or at 46:20 if you want to point exactly to the experiment) in the link. – Rick Aug 14 '24 at 19:58
  • @Rick My interest was in dispelling the intuition of independent events based on information entropy. That being said, I believe the answer is complete wrt the condition of $P(B) \neq 0$. The link is just intellectual honesty - listening to the entire lecture is highly recommended. – Antoni Parellada Aug 14 '24 at 20:43
  • Yes, it's a good answer. Just a couple of extra explicit facts that made everything clearer to me (and answer the questions I asked in the post) and could be useful to others :) – Rick Aug 14 '24 at 21:01
0

First, I would just clarify the concept of independence.

So let's imagine you have two events $A$ and $B$, and we want to define independence. The easiest way to think about two events being independent is that knowing anything about event $B$ gives you no information about event $A$. Written in formal mathematics, that is $P(A|B)=P(A)$: the probability of $A$ given any knowledge of $B$ is the same as just the probability of $A$, $P(A)$.

And all the definitions you gave are the same: the first and the second are exactly the same one, just with $A$ and $B$ swapped, and the third comes from the conditional probability definition $P(A|B)=\frac{P(A\cap B)}{P(B)}$.
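
To sketch how (1) yields the others (this needs $P(A)>0$ and $P(B)>0$): from (1) and the definition above, $P(A\cap B)=P(A|B)\,P(B)=P(A)P(B)$, which is (3); dividing (3) by $P(A)$ then gives $P(B|A)=\frac{P(A\cap B)}{P(A)}=P(B)$, which is (2).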

  • I found the proof of equivalence between the first two; however, it's not just a matter of swapping the labels, or else P(A|B) would be the same as P(B|A). It's a consequence of the definition of P(A|B): https://proofwiki.org/wiki/Event_Independence_is_Symmetric – Rick Aug 14 '24 at 19:06