2

Given some game between two players A and B, after 10 wins by player A and none by player B, what is the probability that player A wins again? One can certainly assume that player A is the more skilled player, since he won all the games, but it also feels wrong to assume that player A now has a 100% chance of winning. How can I factor in that B still has a small chance of winning, and it just didn't show up in this sample? Are there different ways to calculate that?

EDIT To clarify the matter further: I am not looking for a calculation that uses a given winning probability for A. Instead, I want to understand what knowledge is contained in the fact that I have already observed 10 wins by A (e.g. a basketball team won 10 matches against another one). What I can clearly say from this observation is that A is stronger than B, but I cannot say for sure that A will win all future games to come.

grackkle
  • 493
  • Are the outcomes of games independent of previous games? – Alex Jan 03 '17 at 22:10
  • 1
    Yes, they always start under the same conditions and so the outcome of the last game does not influence the next one. However, the skill of the players does. – grackkle Jan 03 '17 at 22:37
  • Seems difficult to justify any choice of probability that A wins again. Maybe the rule is "The more skilled player wins every game." Maybe it's "The more skilled player wins 95% of the time when playing a less skilled opponent." Any such number or rule comes down to guesswork, so I don't think there's any calculation to do here. – manofbear Jan 03 '17 at 22:44
  • 1
    Well, how about not assuming an underlying rule in order to get the probability, but instead using the current knowledge (10:0 wins) to estimate the strength of A versus B? In the case of 0:0 we would have no knowledge and could only assume a 50/50 chance of either of them winning. A 1:0 would already suggest that A might be stronger, whereas a 10:0 would increase the magnitude. However, no number of wins will ever tell us that A would definitely win all games to come. I hope this explanation made it more clear. – grackkle Jan 03 '17 at 23:35

1 Answer

3

This is a situation in which reasonable people might have different subjective probabilities. So one reasonable way to answer is to use a Bayesian approach. Suppose at the start (before any games were played) you had a very neutral opinion about the probability $\theta$ that A will win any one game, expressed by the prior distribution $\theta \sim Beta(1,1),$ which is also $Unif(0,1).$

Then the ten games are played giving you a likelihood proportional to $\theta^{10}.$ According to the version of Bayes' Theorem that states $$\text{POSTERIOR} \propto \text{PRIOR} \times \text{LIKELIHOOD},$$ the kernel of the posterior beta distribution is $\theta^{10}(1-\theta)^0,$ so that the posterior distribution is $\theta \sim Beta(11, 1).$
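As a minimal sketch of this conjugate update in R (the variable names here are illustrative, not from the original answer):

    # Beta-binomial conjugate update: a Beta(a, b) prior combined with
    # w wins and l losses yields a Beta(a + w, b + l) posterior.
    a <- 1; b <- 1    # flat Beta(1, 1) prior, i.e. Unif(0, 1)
    w <- 10; l <- 0   # observed record: 10 wins for A, 0 for B
    c(a + w, b + l)   # posterior parameters: 11, 1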

From there, various statements are possible, including a 95% posterior probability interval of $(.715, .998),$ computed in R statistical software as follows:

    # 95% equal-tailed posterior interval for theta under Beta(11, 1)
    qbeta(c(.025, .975), 11, 1)
    ## 0.7150858 0.9977010

This can be interpreted as saying A's chances of winning the next game are between 71.5% and 99.8%. If you want a single value for the random variable $\theta,$ you could use the median (93.9%) or the mean ($11/12 \approx 91.7\%$) of this beta distribution for A's chances of winning; thus 6.1% or 8.3%, respectively, for B's chances of winning.
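These point summaries can be checked directly in R; a quick sketch:

    # posterior median and mean of Beta(11, 1)
    qbeta(.5, 11, 1)  # median: about 0.939
    11 / (11 + 1)     # mean: 11/12, about 0.917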

The PDF of $Beta(11,1)$ is shown below.

[Figure: PDF of the Beta(11, 1) posterior distribution]
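A sketch along these lines would reproduce the figure (the plotting details are assumptions, not taken from the original):

    # plot the posterior density of theta under Beta(11, 1)
    curve(dbeta(x, 11, 1), from = 0, to = 1,
          xlab = expression(theta), ylab = "Density",
          main = "PDF of Beta(11, 1)")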

The idea of this sort of Bayesian analysis is to combine the initial subjective view with empirical data to get a posterior distribution that reflects both. If you started out with the view that the opponents might be roughly equally matched, you might use the (parabolic) prior $Beta(2,2),$ obtaining the posterior $Beta(12,2)$ and a slightly different probability interval, median, and mean.
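For comparison, a sketch of the same computations under the $Beta(2,2)$ prior (the numeric values in the comments are approximate):

    # posterior under a Beta(2, 2) prior with 10 wins, 0 losses: Beta(12, 2)
    qbeta(c(.025, .975), 12, 2)  # 95% interval, roughly (0.64, 0.98)
    qbeta(.5, 12, 2)             # median, roughly 0.874
    12 / (12 + 2)                # mean: 6/7, about 0.857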

BruceET
  • 52,418