Mathematics and Probability


Basics

In probability theory, or rather probability calculus, one starts with given probabilities and computes new ones. As long as one agrees to measure probability on an increasing linear scale between 0 and 1, the following rules are obvious.
  1. Probabilities are numbers between 0 and 1, endpoints included.
  2. If an event surely occurs, its probability equals 1.
  3. If an event can be decomposed into smaller, mutually exclusive events, its probability equals the sum of the probabilities of these subevents. As a consequence, the probability that an event does not occur equals one minus the probability that it does occur.
  4. To compute the probability that two events occur simultaneously, one multiplies the probability of one of them by the conditional probability of the other, given the first, as illustrated below.
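
As a hypothetical numerical illustration of rules 3 and 4: suppose, say, that P(rain tomorrow) = 0.7 and that the conditional probability of wind tomorrow, given rain tomorrow, is 0.4. Rule 4 gives

P(rain and wind tomorrow) = 0.7 · 0.4 = 0.28,

and, since the event of rain decomposes into rain with wind and rain without wind, rule 3 gives

P(rain but no wind tomorrow) = 0.7 − 0.28 = 0.42.
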
The above means that probabilities are treated axiomatically; for instance, if one claims

P(rain tomorrow) = 0.7,

then, whether this means that 70% of all days are rainy, or that 70% of the days at this time of the year are rainy, or that some meteorological method has been used, or that the figure merely reflects a pessimistic feeling, one has to admit that

P(not rain tomorrow) = 0.3,

since otherwise one contradicts oneself. In probability theory such reasoning is formalised into rules which guarantee consistent, i.e. contradiction-free, results.
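
A minimal sketch of such a consistency check, written here in Python with purely illustrative numbers and function name:

  # Check that claimed probabilities for an event and its complement
  # obey rules 1 and 3: both lie in [0, 1] and they sum to 1.
  def consistent(p_event, p_not_event, tol=1e-12):
      in_range = 0 <= p_event <= 1 and 0 <= p_not_event <= 1
      return in_range and abs(p_event + p_not_event - 1) < tol

  print(consistent(0.7, 0.3))   # True: the two claims are compatible
  print(consistent(0.7, 0.4))   # False: together they contradict rule 3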

Relation to measure

The events considered are of the type: the outcome of some observed random phenomenon falls within a given set. Their probabilities are written

P(outcome ∈ A) = μ(A),

where A is a subset of all possible outcomes of the phenomenon in question and μ is a suitable function, which is thought of as assigning probability mass to the subsets. Since

P(outcome ∈ A or outcome ∈ B) = P(outcome ∈ A ∪ B),
P(outcome ∈ A and outcome ∈ B) = P(outcome ∈ A ∩ B),
and
P(outcome ∉ A) = P(outcome ∈ A')

where A' denotes the complementary set to A, the above rules for probabilities correspond to the following rules for μ, called axioms:
  1. μ(Ω) = 1, where Ω is the set of all possible outcomes, and, if μ(A) and μ(B) are defined, then so are μ(A'), μ(A∪B), and μ(A∩B).
  2. μ(A) ≥ 0.
  3. μ(A∪B) = μ(A) + μ(B) if A and B are disjoint.
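
Such a μ can be written down explicitly for a finite Ω. A minimal Python sketch, taking a fair six-sided die as a purely illustrative example, checks the three axioms and the correspondence between event operations and set operations:

  # Illustrative sketch: mu(A) is the total weight of the outcomes in A,
  # here with equal weight 1/6 on each face of a fair die.
  from fractions import Fraction

  omega = {1, 2, 3, 4, 5, 6}
  weight = {o: Fraction(1, 6) for o in omega}

  def mu(A):
      return sum(weight[o] for o in A)

  A = {2, 4, 6}     # the outcome is even
  B = {4, 5, 6}     # the outcome is at least 4

  assert mu(omega) == 1                          # axiom 1
  assert all(mu({o}) >= 0 for o in omega)        # axiom 2
  assert mu(A | {1, 3}) == mu(A) + mu({1, 3})    # axiom 3, disjoint union

  print(mu(A | B))       # P(even or at least 4)  = 2/3
  print(mu(A & B))       # P(even and at least 4) = 1/3
  print(mu(omega - A))   # P(not even) = 1 - mu(A) = 1/2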

Probability theory

Luckily, it turns out (Kolmogorov (1933) calls this the Fundamental Theorem) that practically all μ's encountered in practice satisfy the following: if A₁, A₂, … are pairwise disjoint sets on which μ is defined and their union also belongs to μ's domain, then μ(A₁ ∪ A₂ ∪ …) = μ(A₁) + μ(A₂) + … . Such a μ is called a probability measure defined on an algebra of subsets of a given set. The Extension Theorem says that the domain of definition of μ can be extended so that one remains within it even after forming countably infinite unions and intersections. The extended μ is called a probability measure defined on a σ-algebra.
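
A standard example shows why this extension matters: take, for instance, Ω = {1, 2, 3, …} and μ({k}) = 1/2^k for k = 1, 2, …. Countable additivity gives

μ(Ω) = 1/2 + 1/4 + 1/8 + … = 1,

and for the event that the outcome is even, which is the countable union {2} ∪ {4} ∪ {6} ∪ …,

μ({2, 4, 6, …}) = 1/4 + 1/16 + 1/64 + … = 1/3.

The set of even numbers is neither finite nor the complement of a finite set, so it does not belong to the algebra of finite and cofinite subsets of Ω; assigning it a probability is precisely the kind of step that requires the extended μ defined on a σ-algebra.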


References

Kolmogorov A (1933) Grundbegriffe der Wahrscheinlichkeitsrechnung. Springer, Berlin. An English translation, together with Kolmogorov's Fundamental Theorem of Probability Calculus, is available at www.mathematik.com.
Bengt Ringnér, Centre for Mathematical Sciences, Lund University, Lund, Sweden
http://www.maths.lth.se/matstat/staff/bengtr