Bernoulli process

In probability theory and statistics, a Bernoulli process (named after Jacob Bernoulli) is a finite or infinite sequence of binary random variables, so it is a stochastic process that takes only two values, canonically 0 and 1. The component Bernoulli variables $$X_i$$ are identically distributed and independent. Prosaically, a Bernoulli process is repeated coin flipping, possibly with an unfair coin (but with consistent unfairness). Every variable $$X_i$$ in the sequence is associated with a Bernoulli trial or experiment, and all of them have the same Bernoulli distribution.

Definition
A Bernoulli process is a finite or infinite sequence of independent random variables X1, X2, X3, ..., such that


 * For each i, the value of Xi is either 0 or 1;
 * For all values of i, the probability that Xi = 1 is the same number p.

In other words, a Bernoulli process is a sequence of independent identically distributed Bernoulli trials.

Independence of the trials implies that the process is memoryless. Given that the probability p is known, past outcomes provide no information about future outcomes. (If p is unknown, however, the past informs about the future indirectly, through inferences about p.)

If the process is infinite, then from any point the future trials constitute a Bernoulli process identical to the whole process; this is the fresh-start property.
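
As a concrete illustration, the following is a minimal simulation sketch in Python (assuming NumPy is available; the seed and variable names are purely illustrative). It draws one long sample path and checks memorylessness empirically: the frequency of a success immediately after a success is close to the unconditional frequency p.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
p = 0.3                        # success probability of each trial
x = rng.random(1_000_000) < p  # one sample path of i.i.d. Bernoulli(p) trials

# Memorylessness: conditioning on the previous outcome does not change
# the success frequency, so both printed values should be close to p.
print(x.mean())                # unconditional frequency of success
print(x[1:][x[:-1]].mean())    # frequency of success right after a success
```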

Interpretation
The two possible values of each $$X_i$$ are often called "success" and "failure". Thus, when expressed as a number 0 or 1, the outcome may be called the number of successes on the $$i$$th "trial".

Two other common interpretations of the values are true or false and yes or no. Under any interpretation of the two values, the individual variables Xi may be called Bernoulli trials with parameter p.

In many applications time passes between trials as the index i increases. In effect, the trials X1, X2, ..., Xi, ... happen at "points in time" 1, 2, ..., i, .... That passage of time and the associated notions of "past" and "future" are not necessary, however. Most generally, any Xi and Xj in the process are simply two random variables from a collection indexed by {1, 2, ..., n} in the finite case or by {1, 2, 3, ...} in the infinite case.

Several random variables and probability distributions beside the Bernoullis may be derived from the Bernoulli process:


 * The number of successes in the first n trials, which has a binomial distribution B(n, p)
 * The number of failures needed to get r successes, which has a negative binomial distribution NB(r, p)
 * The number of failures needed to get one success, which has a geometric distribution NB(1, p), a special case of the negative binomial distribution

The negative binomial variables may be interpreted as random waiting times.
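
As a rough illustration, the sketch below (Python with NumPy; the seed and names are illustrative, not standard) samples all three variables directly from simulated Bernoulli trials.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
p, n, r = 0.3, 20, 3

# Binomial: number of successes in the first n trials, ~ B(n, p).
binomial_count = (rng.random(n) < p).sum()

# Geometric and negative binomial: failures before the 1st / r-th success.
stream = rng.random(100_000) < p            # long enough to contain r successes
success_positions = np.flatnonzero(stream)  # indices at which a success occurs
geometric_failures = success_positions[0]                        # ~ NB(1, p)
negative_binomial_failures = success_positions[r - 1] - (r - 1)  # ~ NB(r, p)

print(binomial_count, geometric_failures, negative_binomial_failures)
```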

Formal definition
The Bernoulli process can be formalized in the language of probability spaces as a random sequence of independent realisations of a random variable that can take values of heads or tails. The state space for an individual value is denoted by $$2=\{H,T\} .$$

Specifically, one considers the countably infinite direct product of copies of $$2=\{H,T\}$$. It is common to examine either the one-sided set $$\Omega=2^\mathbb{N}=\{H,T\}^\mathbb{N}$$ or the two-sided set $$\Omega=2^\mathbb{Z}$$. There is a natural topology on this space, called the product topology. The basic open sets in this topology are determined by finite sequences of coin flips, that is, finite-length strings of H and T, with the rest of the (infinitely long) sequence taken as "don't care". These sets of finite sequences are referred to as cylinder sets in the product topology. The cylinder sets generate a sigma algebra, specifically, a Borel algebra. The resulting measurable space is commonly written as $$(\Omega, \mathcal{F})$$, where $$\mathcal{F}$$ is the sigma algebra generated by the cylinder sets.

If the chances of flipping heads or tails are given by the probabilities $$\{p,1-p\}$$, then one can define a natural measure on the product space, given by $$P=\{p, 1-p\}^\mathbb{N}$$ (or by $$P=\{p, 1-p\}^\mathbb{Z}$$ for the two-sided process). Given a cylinder set, that is, a specific sequence of coin flip results $$[\omega_1, \omega_2,\cdots\omega_n]$$ at times $$1,2,\cdots,n$$, the probability of observing this particular sequence is given by
 * $$P([\omega_1, \omega_2,\cdots ,\omega_n])= p^k (1-p)^{n-k}$$

where k is the number of times that H appears in the sequence, and n−k is the number of times that T appears. Several different notations exist for the above; a common one is to write
 * $$P(X_1=x_1, X_2=x_2,\cdots, X_n=x_n)= p^k (1-p)^{n-k}$$

where each $$X_i$$ is a binary-valued random variable with $$x_i=[\omega_i=H]$$ in Iverson bracket notation, meaning either $$1$$ if $$\omega_i=H$$ or $$0$$ if $$\omega_i=T$$. This probability $$P$$ is commonly called the Bernoulli measure.
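
A short sketch of the Bernoulli measure of a cylinder set (plain Python; the function name is made up for illustration):

```python
def cylinder_probability(omega, p):
    """Probability p^k (1-p)^(n-k) of the cylinder set given by a
    finite string omega of 'H' and 'T' containing k occurrences of H."""
    k = omega.count("H")
    return p**k * (1 - p)**(len(omega) - k)

print(cylinder_probability("HHT", 0.5))  # 0.125: fair coin, all 8 length-3 strings equal
print(cylinder_probability("HHT", 0.3))  # 0.3 * 0.3 * 0.7 = 0.063
```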

Note that the probability of any specific, infinitely long sequence of coin flips is exactly zero: when $$0<p<1$$, the probability of matching such a sequence on its first $$n$$ flips is at most $$\max(p,1-p)^n$$, which tends to zero as $$n\to\infty$$. In other words, every given infinite sequence has measure zero. Nevertheless, one can still say that some classes of infinite sequences of coin flips are far more likely than others; this is made precise by the asymptotic equipartition property.

To conclude the formal definition, a Bernoulli process is then given by the probability triple $$(\Omega, \mathcal{F}, P)$$, as defined above.

Law of large numbers, binomial distribution and central limit theorem
Let us assume the canonical process with $$ H $$ represented by $$ 1 $$ and $$ T $$ represented by $$ 0 $$. The law of large numbers states that the sample average of the sequence, i.e., $$ \bar{X}_{n}:=\frac{1}{n}\sum_{i=1}^{n}X_{i} $$, will approach the expected value almost surely; that is, the events which do not satisfy this limit have zero probability. The expected value of flipping heads, assumed to be represented by 1, is given by $$p$$. In fact, one has
 * $$\mathbb{E}[X_i]=\mathbb{P}([X_i=1])=p,$$

for any given random variable $$X_i$$ out of the infinite sequence of Bernoulli trials that compose the Bernoulli process.
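
The convergence can be observed numerically; the following sketch (Python with NumPy, illustrative seed) tracks the running sample mean of a simulated path as it drifts toward p.

```python
import numpy as np

rng = np.random.default_rng(seed=2)
p = 0.3
x = rng.random(1_000_000) < p
running_mean = np.cumsum(x) / np.arange(1, x.size + 1)  # sample mean for every n
for n in (10, 1_000, 100_000, 1_000_000):
    print(n, running_mean[n - 1])  # approaches p = 0.3 as n grows
```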

One is often interested in knowing how often one will observe H in a sequence of n coin flips. This is given by simply counting: Given n successive coin flips, that is, given the set of all possible strings of length n, the number N(k,n) of such strings that contain k occurrences of H is given by the binomial coefficient
 * $$N(k,n) = {n \choose k}=\frac{n!}{k! (n-k)!}$$
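
For small n this count is easy to verify by brute-force enumeration (plain Python):

```python
from itertools import product
from math import comb

n, k = 6, 2
all_strings = ["".join(s) for s in product("HT", repeat=n)]
print(sum(s.count("H") == k for s in all_strings))  # 15, by direct enumeration
print(comb(n, k))                                   # 15, the binomial coefficient
```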

If the probability of flipping heads is given by p, then the total probability of seeing a string of length n with k heads is
 * $$\mathbb{P}([S_n=k]) = {n\choose k} p^k (1-p)^{n-k}, $$

where $$ S_n=\sum_{i=1}^{n}X_i $$. The probability measure thus defined is known as the binomial distribution.
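
As a sanity check, here is a simulation sketch (Python with NumPy; seed and sample sizes are illustrative) comparing the empirical distribution of $$S_n$$ with the binomial formula:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(seed=3)
p, n, reps = 0.3, 10, 200_000

s = (rng.random((reps, n)) < p).sum(axis=1)  # one value of S_n per repetition
for k in range(n + 1):
    empirical = (s == k).mean()
    exact = comb(n, k) * p**k * (1 - p)**(n - k)
    print(k, round(empirical, 4), round(exact, 4))  # the two columns agree closely
```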

Of particular interest is the question of the value of $$S_{n}$$ for sufficiently long sequences of coin flips, that is, in the limit $$n\to\infty$$. In this case, one may make use of Stirling's approximation to the factorial, and write
 * $$n! = \sqrt{2\pi n} \; n^n e^{-n} \left(1 + \mathcal{O}\left(\frac{1}{n}\right)\right)$$

Inserting this into the expression for $$\mathbb{P}([S_n=k])$$, one obtains the normal distribution; this is the content of the central limit theorem, and the Bernoulli process is the simplest example thereof.
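
A quick numerical sketch of this normal (de Moivre–Laplace) approximation in plain Python, comparing the exact binomial probability at k with the Gaussian density of mean np and variance np(1−p); the chosen n and k values are illustrative:

```python
from math import comb, exp, pi, sqrt

p, n = 0.3, 100
mu, sigma = n * p, sqrt(n * p * (1 - p))  # mean np and standard deviation
for k in (20, 25, 30, 35, 40):
    binom = comb(n, k) * p**k * (1 - p)**(n - k)
    normal = exp(-(k - mu)**2 / (2 * sigma**2)) / (sigma * sqrt(2 * pi))
    print(k, f"{binom:.5f}", f"{normal:.5f}")  # near-identical around the peak
```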

The combination of the law of large numbers with the central limit theorem leads to an interesting and perhaps surprising result: the asymptotic equipartition property. Put informally, one notes that, over many coin flips, one will observe H a fraction p of the time, and that this corresponds exactly with the peak of the Gaussian. The asymptotic equipartition property essentially states that this peak is infinitely sharp, with infinite fall-off on either side. That is, given the set of all possible infinitely long strings of H and T occurring in the Bernoulli process, this set is partitioned into two: those strings that occur with probability 1, and those that occur with probability 0. This partitioning follows from the Kolmogorov 0-1 law.

The size of this set is also interesting and can be explicitly determined: its logarithm is exactly the entropy of the Bernoulli process. Once again, consider the set of all strings of length n. The size of this set is $$2^n$$. Of these, only a certain subset is likely; the size of that subset is $$2^{nH}$$ for $$H\le 1$$. By using Stirling's approximation, inserting it into the expression for $$\mathbb{P}([S_n=k])$$, solving for the location and width of the peak, and finally taking $$n\to\infty$$, one finds that


 * $$H=-p\log_2 p - (1-p)\log_2(1-p)$$

This value is the Bernoulli entropy of a Bernoulli process. Here, H stands for entropy; do not confuse it with the same symbol H standing for heads.
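
The entropy formula and the size comparison between $$2^{nH}$$ and $$2^n$$ are easy to check numerically; a small sketch in plain Python (the choice p = 0.3, n = 100 is illustrative):

```python
from math import comb, log2

def binary_entropy(p):
    """Bernoulli entropy H(p) in bits per flip."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

p, n = 0.3, 100
print(binary_entropy(p))  # ≈ 0.8813 bits per flip, so n*H ≈ 88.13
# Strings with about p*n heads dominate; their number grows like 2^(n*H),
# a vanishing fraction of the 2^n total strings whenever p != 1/2.
print(log2(comb(n, 30)))  # ≈ 84.6, close to n*H up to O(log n) corrections
```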

John von Neumann posed a curious question about the Bernoulli process: is it ever possible that a given process is isomorphic to another, in the sense of the isomorphism of dynamical systems? The question long defied analysis, but was finally and completely answered with the Ornstein isomorphism theorem. This breakthrough resulted in the understanding that the Bernoulli process is unique and universal; in a certain sense, it is the single most random process possible; nothing is 'more' random than the Bernoulli process (although one must be careful with this informal statement; there are, for example, systems that are mixing yet are not isomorphic to a Bernoulli shift. However, such systems do not consist of independent random variables: indeed, many purely deterministic, non-random systems can be mixing).