
# Gaussian distribution

The Gaussian distribution, or normal distribution, was in fact in use long before C.F. Gauss was born in 1777; it goes back at least to an article by A. de Moivre in 1733. The main reason for the importance of the normal distribution is the central limit theorem (CLT). A simple version of the CLT states that if S_{n} is a sum of n independent and identically distributed (iid) random variables with zero expectation and finite variance σ^{2}, then S_{n}/(n^{1/2}σ) converges in distribution to a normal distribution with expectation zero and variance 1 as n goes to infinity. This makes it reasonable to approximate many random variables with a normal distribution.

An important application of the CLT is to measure the rate of convergence in the law of large numbers (LLN). The LLN says that the average x̄_{n} of n iid random variables with finite expectation μ converges with probability one to μ as n goes to infinity. If the random variables also have finite variance σ^{2}, the CLT gives us that n^{1/2}(x̄_{n}-μ) converges in distribution to a normal distribution with expectation zero and variance σ^{2}, so the convergence rate in the LLN is n^{1/2}.

This also makes it possible to approximate the error made when estimating the expectation μ of a distribution by an average of n independent observations from it, i.e. μ ≈ x̄_{n} + σG/n^{1/2}, where G is a standard Gaussian random variable (expectation zero and variance 1). It is then possible to find a constant λ_{α/2} such that P(|μ-x̄_{n}| ≥ σλ_{α/2}/n^{1/2}) ≈ α, or, put differently, such that the probability that μ lies in the interval [x̄_{n}-σλ_{α/2}/n^{1/2}, x̄_{n}+σλ_{α/2}/n^{1/2}] is approximately 1-α. The constant λ_{α/2} can be found by solving the equation N(x) = 1-α/2 for x, where N is the distribution function of the standard normal distribution. Neither N nor its inverse is obtainable in closed form, but both are fairly easy to approximate.
## Characteristics of the normal distribution

- The density function f(x) of a normal distribution with mean μ and variance σ^{2} is given by f(x) = e^{-((x-μ)/σ)^{2}/2}/(2πσ^{2})^{1/2}.
- A multivariate n-dimensional normal distribution with mean vector Μ and covariance matrix Σ has density function f(x) = e^{-(x-Μ)^{T}Σ^{-1}(x-Μ)/2}/((2π)^{n/2}|Σ|^{1/2}).
- The moment generating function of a normal random variable X with mean μ and variance σ^{2} is given by M(z) = E[exp(zX)] = e^{μz+(zσ)^{2}/2}.
- The moment generating function of a normal random vector X with mean Μ and covariance matrix Σ is given by M(z) = E[exp(z^{T}X)] = e^{Μ^{T}z+z^{T}Σz/2}.
- The normal distribution is a location/scale family, where μ is the location parameter and σ the scale parameter.
- A normal random variable X with mean μ and variance σ^{2} can be obtained by an affine transformation of a standard normal random variable G: X = μ + σG.
- A normal random vector X with mean Μ and covariance matrix Σ can be obtained by an affine transformation of a standard normal random vector G: X = Μ + Σ^{1/2}G.
- The sum of two normal random variables X and Y with means μ_{X} and μ_{Y} and standard deviations σ_{X} and σ_{Y} is normally distributed with mean μ_{X}+μ_{Y} and variance σ_{X}^{2}+σ_{Y}^{2}+2C(X,Y), provided X and Y are jointly (multivariate) normal.
- Two jointly normally distributed random variables X and Y are independent if and only if they have zero covariance (or correlation).
- If X and Y are independent and X+Y is normally distributed, then X and Y must each be normally distributed (Cramér's theorem).
- A remarkable fact about the normal distribution is that it is the only distribution such that if X and Y are independent with the same variance, then X+Y and X-Y are also independent. More generally, a zero-mean multivariate normal vector with iid components has a distribution that is invariant under orthogonal transformations.
- The only non-trivial random process with independent, stationary increments and continuous sample paths is a Gaussian stochastic process called Brownian motion.

Questions: Magnus Wiktorsson

Last update: 2016 Sep 04 14:00:10.

Centre for Mathematical Sciences, Box 118, SE-22100, Lund.