
In probability theory, the **law of large numbers** (**LLN**) is a theorem that describes the result of performing the same experiment a large number of times. According to the law, the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed.

The LLN is important because it guarantees stable long-term results for the averages of some random events. For example, while a casino may lose money in a single spin of the roulette wheel, its earnings will tend towards a predictable percentage over a large number of spins. Any winning streak by a player will eventually be overcome by the parameters of the game.

It is important to remember that the law only applies (as the name indicates) when a *large number* of observations is considered. There is no principle that

- a small number of observations will coincide with the expected value (the results of opening a single Pro Kit Box can deviate significantly from its official drop rates), or that
- a streak of one value will immediately be "balanced" by the others. (This is called the "gambler's fallacy": For example, even if a **Tech Box** grants Early and Initial Tech with the same probability, a streak of 20 **Early** Tech cards does not mean that a streak of **Initial** Tech or even a single Initial Tech card will follow.)
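The independence claim behind the gambler's fallacy can be checked with a small simulation. The following is a hedged sketch assuming a hypothetical 50/50 box (not official drop rates): the relative frequency of a result immediately after a streak is no different from its overall relative frequency.

```python
import random

random.seed(42)

# Simulate 200,000 independent 50/50 draws (True = "Early", False = "Initial").
flips = [random.random() < 0.5 for _ in range(200_000)]

# Overall relative frequency of "Early".
overall = sum(flips) / len(flips)

# Relative frequency of "Early" immediately after a streak of 5 "Early" draws.
after_streak = [flips[i] for i in range(5, len(flips)) if all(flips[i - 5:i])]
conditional = sum(after_streak) / len(after_streak)

print(f"overall: {overall:.3f}, after a 5-streak: {conditional:.3f}")
```

Both frequencies come out close to 0.5: a streak of five "Early" results does not make "Initial" any more likely on the next draw.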

## **Forms**

There are two different versions of the law of large numbers, called the **weak** and the **strong** law of large numbers. Both versions state that, given a sequence of independent and identically distributed random variables $ (X_n)_{n \in \N} = X_1, X_2, \ldots $ with expected values $ \mathrm E[X_1] = \mathrm E[X_2] = \ldots = \mu $ (where $ \mathrm E[|X_1|] < \infty $), the sample average

- $ \overline{X}_n=\frac1n(X_1+\cdots+X_n) $

converges to the expected value:

- $ \overline{X}_n \, \to \, \mu \qquad\mathrm{for}\ n \to \infty $.

The difference between the strong and the weak version is concerned with the mode of convergence being asserted. For interpretation of these modes, see Convergence of random variables.

### **Weak law of large numbers**

*An illustration of the weak law of large numbers: As the number of trials (cards) increases, the margin around the expected value (drop rate) of 30.94 % gets smaller and smaller.*

The **weak law of large numbers** states that the sample average of the above-mentioned sequence converges in probability towards the expected value:

- $ \overline{X}_n\ \xrightarrow{\mathrm P}\ \mu \qquad\mathrm{when}\ n \to \infty $

That is, for any positive number $ \varepsilon $,

- $ \displaystyle \lim_{n\to\infty}\mathrm P\!\left(\,|\overline{X}_n-\mu| > \varepsilon\,\right) = 0 $.

Interpreting this result, the weak law states that for any nonzero margin $ \varepsilon $ specified, no matter how small, with a sufficiently large sample there will be a very high probability that the average of the observations will be close to the expected value and within the margin.
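The rate of this convergence can be quantified when the variance $ \sigma^2 $ is finite: Chebyshev's inequality gives $ \mathrm P(|\overline{X}_n-\mu| > \varepsilon) \le \sigma^2/(n\varepsilon^2) $, so the probability of missing the margin shrinks at least like $ 1/n $. A minimal sketch, using the 30.94 % drop rate from the illustration above as an example value:

```python
def chebyshev_bound(variance: float, eps: float, n: int) -> float:
    """Upper bound on P(|sample mean - mu| > eps) after n i.i.d. trials."""
    return variance / (n * eps ** 2)

# A Bernoulli trial with success probability p has variance p * (1 - p).
p = 0.3094                 # example drop rate from the illustration
var = p * (1 - p)

for n in (1_000, 10_000, 100_000):
    bound = chebyshev_bound(var, 0.02, n)
    print(f"n = {n:6d}: P(|mean - p| > 0.02) <= {bound:.4f}")
```

The bound is crude (the true probability falls much faster), but it already suffices to prove the weak law for variables with finite variance.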

### **Strong law of large numbers**

The **strong law of large numbers** states that the sample average of the above-mentioned sequence converges almost surely towards the expected value:

- $ \overline{X}_n\ \xrightarrow{\mathrm{a.\ s.}}\ \mu \qquad\mathrm{when}\ n \to \infty $

That is,

- $ \mathrm P\!\left(\displaystyle \lim_{n\to\infty}\overline{X}_n = \mu \right) = 1 $.

What this means is that the probability that the average of the observations converges to the expected value, as the number of trials $ n $ goes to infinity, is equal to one.

#### **Borel's strong law of large numbers**

**Borel's strong law of large numbers**, named after the French mathematician Émile Borel, asserts that if the sequence $ (X_n)_{n \in \N} $ is independent and Bernoulli distributed with success probability $ p \in \mathopen{]}0, 1\mathclose{[} $, then the sample average converges almost surely towards $ p $:

- $ \overline{X}_n\ \xrightarrow{\mathrm{a.\ s.}}\ p \qquad\mathrm{when}\ n \to \infty $

That is,

- $ \mathrm P\!\left(\displaystyle \lim_{n\to\infty}\overline{X}_n = p \right) = 1 $.^{[1]}

This theorem makes rigorous the intuitive notion of probability as the long-run relative frequency of an event's occurrence: As a Bernoulli distributed random variable only takes the values 1 for success and 0 for failure, the sum $ S_n = X_1 + \cdots + X_n $ is just the number of successes among the first $ n $ trials. The sample average $ \overline{X}_n=\frac{S_n}n $ is then simply the number of successes divided by the number of trials, which is exactly the definition of the relative frequency (and which, according to the law, converges to the probability $ p $).
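Borel's law can be illustrated with a seeded simulation. The success probability $ p = 0.3 $ here is a hypothetical example value: the number of successes divided by the number of trials (the relative frequency) lands near $ p $.

```python
import random

random.seed(2019)

p = 0.3        # hypothetical success probability of a Bernoulli trial
n = 100_000

# Number of successes among n independent Bernoulli trials.
successes = sum(random.random() < p for _ in range(n))

# The sample average is exactly the relative frequency of success.
relative_frequency = successes / n
print(f"relative frequency after {n} trials: {relative_frequency:.4f}")
```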

## **Examples and implications**

- The law of large numbers, especially the **strong** version, is of great importance for every random process in the Asphalt games. As official drop rates are expected values, the law is the guarantee that a player's own results will approach the official drop rates in the long run.
- However, it should not be neglected that the random variables have to be independent and identically distributed. For example, this means for a Pro Kit Box:
  - A drawn card must not influence the possible results of the following cards. This is not the case for Shuffle Boxes like the **Optimal Shuffle Box** where a rejected card is removed from the pool (sample space) for the following draws, so the results are not independent.
  - The possible results have to be the same for all cards of a box (they have to have the same probability distribution). This is not the case for boxes like the **Daily Kit Box** where 8 cards only give parts, 1 card only gives engines and 1 card only gives Mid-Tech or Advanced Tech. These three distribution types have to be treated separately.
- The law of large numbers allows conclusions even if the game does not provide drop rates. As the statistical outcomes will converge to the expected values, one can infer the drop rates from converging statistical values if the sample size is large enough. A good example is the Tech card of the **Daily Kit Box**: Since the 2019 Spring Update, the average relative frequencies for the Tech card have converged to 70 % Mid-Tech and 30 % Advanced Tech, so a player can expect this ratio in the long run.
- Almost all random processes in the Asphalt games can be regarded as Bernoulli trials, as it is mostly a question of getting a desired item or not (success or failure). Therefore **Borel's law of large numbers** applies, and this is the reason why players can deduce the probability of getting an item from the drop rates, both official ones and those calculated with statistical methods by WikiProject Statistics. For example, the probability of getting a Mid-Tech card from a Daily Kit Box is 30 % = 0.3.
- As the expected value (= drop rate) of a Bernoulli distribution equals the success probability, drop rates can also be used to calculate the minimum number of trials required to get a desired item with a given probability, say, at least 90 %. For example, a player has to reveal at least 96 cards from **Finish Line Boxes** to get a V12 Engine with a probability of 90 %. The tables on the minimum card requirements page make direct use of Borel's law of large numbers.
- If statistical values do **not** converge, or if they show constant deviations from official drop rates, this is a sign that game mechanisms have been changed without notice or that the official drop rates have not been calculated correctly. The **Finish Line Box**, for example, is notorious for displaying wrong drop rates due to sudden content changes.
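The minimum-trials calculation mentioned above can be made explicit: for independent Bernoulli trials with success probability $ p $, the chance of at least one success in $ n $ trials is $ 1 - (1-p)^n $, so the smallest $ n $ reaching a target probability $ q $ is $ \lceil \ln(1-q) / \ln(1-p) \rceil $. A small sketch with an example drop rate (not an official value):

```python
import math

def min_trials(p: float, target: float) -> int:
    """Smallest n with P(at least one success in n trials) >= target."""
    return math.ceil(math.log(1 - target) / math.log(1 - p))

# Example: a 30 % drop rate needs this many cards for a >= 90 % chance
# of at least one success.
print(min_trials(0.3, 0.9))  # -> 7
```

The same formula, applied with a box's actual drop rates, is the kind of calculation behind the minimum card requirements tables.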


## **References**

- ↑ Wen, Liu (February 1991). “An Analytic Technique to Prove Borel's Strong Law of Large Numbers”. *The American Mathematical Monthly* **98** (2): 146. Retrieved on 2019-07-31.