
For two events $E$ and $F$, we have $P(E)$ and $P(F)$ as the respective probabilities of their occurrence.

Denote $P(EF)$ as the probability of their simultaneous occurrence. If $P(EF) = P(E) \cdot P(F)$ then the events are independent.

But I have a confusion:

If we take the numbers (universal set) $1,2,\ldots 20$ and we do an experiment of choosing one number among them. Let the events be \begin{align} E \equiv \text{choosing a number among}~1 \sim 10 \\ F \equiv \text{choosing a number among}~6 \sim 15 \end{align}

Then $P(E) = 1/2$ and $P(F) = 1/2$, and indeed $P(EF) = 5/20 = 1/4 = P(E)\,P(F)$.

Are the events independent?

If we take the same event $E$ but let $F$ be choosing among $5 \sim 14$, then the experiment is equivalent to the first one. However, in this case $P(EF) = 6/20 = 3/10 \neq 1/4 = P(E)P(F)$, so according to the definition the events should not be independent.
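A quick numerical check of both scenarios (a sketch in Python; the event boundaries are taken directly from the examples above):

```python
from fractions import Fraction

# Sample space: the numbers 1..20, each equally likely.
S = set(range(1, 21))

def prob(event):
    """Probability of an event under the uniform distribution on S."""
    return Fraction(len(event & S), len(S))

E = set(range(1, 11))        # choosing a number among 1..10

# First scenario: F = 6..15
F1 = set(range(6, 16))
print(prob(E & F1), prob(E) * prob(F1))   # equal (1/4 each): independent

# Second scenario: F = 5..14
F2 = set(range(5, 15))
print(prob(E & F2), prob(E) * prob(F2))   # 3/10 vs 1/4: dependent
```

Shifting $F$ by one outcome changes $P(EF)$ from $5/20$ to $6/20$ even though $P(F)$ itself stays at $1/2$.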

I think that in both cases $E$ and $F$ are dependent, because they lie in the same universal set and overlap.

  • In the first case, indeed $E$ and $F$ are independent, because the probability of $F$ occurring does not depend upon whether $E$ occurred or not: in other words, $P(F) = P(F|E)$. There is a difference between being mutually exclusive and being independent: mutually exclusive events cannot occur together, i.e. their intersection occurs with probability zero. Independence means that the occurrence of one event does not affect the probability with which the other occurs: they may very well intersect, as $E,F$ are doing at this point. – Sarvesh Ravichandran Iyer Jan 20 '19 at 15:46
  • The second and the first are not the same, or equivalent, experiments: you are changing $F$, even if you are not changing its probability. This will definitely affect how $F$ behaves with other events. – Sarvesh Ravichandran Iyer Jan 20 '19 at 15:50
  • You are shifting $F$. To see this is not the same as the original situation, keep shifting so that $F$ becomes $\{1, \ldots, 10\}$, which is the same as $E$, and now it is clear that this shifting has created a perfect dependence: $P[E \cap E] = P[E]$. – Michael Jan 20 '19 at 15:53
  • Related, possibly helpful. https://math.stackexchange.com/questions/1355102/how-independence-and-mutually-exclusive-connected/1355176#1355176 – Ethan Bolker Jan 20 '19 at 16:06
  • "...then the experiment is equivalent to the first one." What exactly do you mean by that? Is there e.g. also equivalence if $F$ is (just like $E$) defined as choosing a number among $1\sim10$? – drhab Jan 20 '19 at 16:19
  • The bottom of this question ("ADDENDUM") is another answer to this question. – ryang Feb 26 '21 at 06:46

2 Answers


When you are able to compute the probability of each event and the probability of their intersection, there is really only one question to ask in deciding whether two events are independent:

Is $P(E \cap F) = P(E) P(F)$?

If the answer is yes, the events are independent, notwithstanding your ability to come up with a modified version of one of the events that looks just as good to you but is not independent from the other event.

If the answer is no, the events are dependent.

This is the definition of dependence. It has nothing to do with whether $E$ and $F$ have the "same sample space" or "different sample spaces." You can have independent events specified over the exact same set (used as a sample space), as in your first example. Alternatively, you can have dependent events specified over different sets (so that you have to use the Cartesian product of the sets in order to describe the complete sample space of the events).

If you use a different definition of independence, then you will not be able to properly understand or be understood by people who speak the language of mathematical probability. It would be as if a boy were raised to believe that "sit down" meant "leave the room." When he is older, at his first job interview, the interviewer says, "Would you like to sit down?" and he says, "No, I'd rather stay in the room." This sort of thing can cause all kinds of confusion.


One way to understand why $F$ as defined in the original example is independent of $E,$ while your modified $F$ is not, is to look at how much $F$ "overlaps" $E$ relative to the size of $E.$ Specifically, in the original example, exactly half of the outcomes in $E$ are in $F,$ and since we suppose the outcomes are equally likely, $F$ "overlaps" exactly half of the probability measure of $E.$ Since the probability of $F$ by itself is $\frac12,$ the formula $P(E \cap F) = P(E) P(F)$ tells us that this exact amount of "overlap" is when we get independence.

When you give $F$ more than half of the outcomes in $E,$ rather than exactly half, you give it too much "overlap." If you were to change $F$ so that it is for the outcomes $7,\ldots,16,$ you would have too little "overlap."
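The overlap argument can be checked directly: with $P(F) = 1/2$, independence holds exactly when $F$ covers half of the outcomes in $E$. A sketch (the three starting points for $F$ below correspond to the original, the "too much," and the "too little" overlap cases):

```python
from fractions import Fraction

S = set(range(1, 21))
E = set(range(1, 11))

def prob(event):
    return Fraction(len(event & S), len(S))

for lo in (6, 5, 7):                       # F = lo .. lo+9
    F = set(range(lo, lo + 10))
    overlap = Fraction(len(E & F), len(E))     # fraction of E covered by F
    indep = prob(E & F) == prob(E) * prob(F)
    print(f"F starts at {lo}: overlap {overlap}, independent: {indep}")
```

Only the starting point 6 gives overlap exactly $1/2$, hence independence; 5 gives $6/10$ (too much) and 7 gives $4/10$ (too little).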

David K
  • In both cases E and F should be dependent, because if the outcome is 1 then E occurs but not F. Both events can occur simultaneously but depend on each other. I think we should call two events independent if their sample spaces are different: E has sample space S1 and F has S2, and we take the Cartesian product of the sample spaces. – Alapan Das Feb 21 '19 at 07:23
  • Independence is not defined that way. End of story. I have reworded the answer in an attempt to make this clearer. – David K Feb 21 '19 at 11:35

The key to understanding the confusion you mention is to ask: "independence of what?" It is NOT independence of the occurrences themselves; as your example demonstrates, events are always dependent in that sense, and that is not what we mean in probability theory.

What we mean here is independence of the probability of occurrence, as a number. Prior events always constrain future events, so in a loose, non-technical sense events are never independent. However, if the occurrence of one event narrows the realm of all remaining possibilities, and at the same time shrinks the possibilities for another event in such a way that the probability of that other event does not change, then we say the two are independent in this context.

To paraphrase: if something happens, we know its complement has not happened. That knowledge eliminates possibilities, subtracting the size of the complement set from the original sample size. Meanwhile, the complement also tells us that some of the possibilities for another event have been eliminated. If the probability of this other event does not change after these eliminations, then, in the aftermath of this occurrence or knowledge, we call the two events (somewhat awkwardly) independent.

The definition which could help with this intuitive vision is as follows:

$$P(E|F) = {{n_E-n_{{F^c}\cap E}} \over {n_S-n_{F^c}}}.$$

where $S$ is the original sample space, $n_A$ denotes the number of equally likely outcomes in the event $A$, and $F^c$ is the complement of $F$.
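As a sketch, the counting formula above can be evaluated for the two versions of $F$ from the question; $E$ is independent of $F$ exactly when $P(E|F)$ equals $P(E)$:

```python
from fractions import Fraction

S = set(range(1, 21))
E = set(range(1, 11))

def cond_prob(E, F):
    """P(E|F) = (n_E - n_{F^c ∩ E}) / (n_S - n_{F^c}), per the formula above."""
    Fc = S - F                            # complement of F in S
    return Fraction(len(E) - len(Fc & E), len(S) - len(Fc))

F1 = set(range(6, 16))                    # original F: 6..15
F2 = set(range(5, 15))                    # shifted F: 5..14
print(cond_prob(E, F1))                   # equals P(E) = 1/2: independent
print(cond_prob(E, F2))                   # 3/5, not 1/2: dependent
```

The eliminations caused by knowing $F$ leave $P(E)$ untouched in the first case but inflate it in the second.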

Taking some liberty with terminology: it is as if the later event perceives that its agency remains intact, not jeopardized by the prior event.

Hope this lens helps.