ℹ️ Skipped - page is already crawled
| Filter | Status | Condition | Details |
|---|---|---|---|
| HTTP status | PASS | download_http_code = 200 | HTTP 200 |
| Age cutoff | PASS | download_stamp > now() - 6 MONTH | 0.1 months ago (distributed domain, exempt) |
| History drop | PASS | isNull(history_drop_reason) | No drop reason |
| Spam/ban | PASS | fh_dont_index != 1 AND ml_spam_score = 0 | ml_spam_score=0 |
| Canonical | PASS | meta_canonical IS NULL OR = '' OR = src_unparsed | Not set |

| Property | Value |
|---|---|
| URL | https://en.wikipedia.org/wiki/Normal_distribution |
| Last Crawled | 2026-04-09 01:27:17 (2 days ago) |
| First Indexed | 2013-08-08 16:24:53 (12 years ago) |
| HTTP Status Code | 200 |
| Meta Title | Normal distribution - Wikipedia |
| Meta Description | null |
| Meta Canonical | null |
| Boilerpipe Text | (extracted article text below) |

Normal distribution

[Infobox residue: figure captions "Probability density function: the red curve is the standard normal distribution" and "Cumulative distribution function"; row labels Notation, Parameters (\(\mu\) = mean/location, \(\sigma^2\) = variance/squared scale), Support, PDF, CDF, Quantile, Mean, Median, Mode, Variance, MAD, AAD, Skewness, Excess kurtosis, Entropy, MGF, CF, Fisher information, Kullback–Leibler divergence; the corresponding formula values were not extracted.]
In probability theory and statistics, a normal distribution or Gaussian distribution is a type of continuous probability distribution for a real-valued random variable. The general form of its probability density function is[2][3][4]

\[ f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}} . \]

The parameter \(\mu\) is the mean or expectation of the distribution (and also its median and mode), while the parameter \(\sigma^2\) is the variance. The standard deviation of the distribution is the positive value \(\sigma\) (sigma). A random variable with a Gaussian distribution is said to be normally distributed and is called a normal deviate.
Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known.[5][6] Their importance is partly due to the central limit theorem. It states that the average of many statistically independent samples (observations) of a random variable with finite mean and variance is itself a random variable whose distribution converges to a normal distribution as the number of samples increases. Therefore, physical quantities that are expected to be the sum of many independent processes, such as measurement errors, often have distributions that are nearly normal.[7]

Moreover, Gaussian distributions have some unique properties that are valuable in analytic studies. For instance, any linear combination of a fixed collection of independent normal deviates is a normal deviate. Many results and methods, such as propagation of uncertainty and least squares[8] parameter fitting, can be derived analytically in explicit form when the relevant variables are normally distributed.

A normal distribution is sometimes informally called a bell curve.[9][10] However, many other distributions are bell-shaped (such as the Cauchy, Student's t, and logistic distributions). (For other names, see Naming.)

The univariate probability distribution is generalized for vectors in the multivariate normal distribution and for matrices in the matrix normal distribution.
Standard normal distribution

The simplest case of a normal distribution is known as the standard normal distribution or unit normal distribution. This is a special case when \(\mu = 0\) and \(\sigma^2 = 1\), and it is described by this probability density function (or density):[11]

\[ \varphi(z) = \frac{1}{\sqrt{2\pi}}\, e^{-z^2/2} . \]

The variable \(z\) has a mean of 0 and a variance and standard deviation of 1. The density \(\varphi(z)\) has its peak value \(1/\sqrt{2\pi}\) at \(z = 0\) and inflection points at \(z = +1\) and \(z = -1\).

Although the density above is most commonly known as the standard normal, a few authors have used that term to describe other versions of the normal distribution. Carl Friedrich Gauss, for example, once defined the standard normal as \(\varphi(z) = e^{-z^2}/\sqrt{\pi}\), which has a variance of \(1/2\), and Stephen Stigler once defined the standard normal as \(\varphi(z) = e^{-\pi z^2}\), which has a simple functional form and a variance of \(1/(2\pi)\).[12]
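A minimal Python sketch of this density, for illustration (the printed values are the peak \(1/\sqrt{2\pi}\) and the value at the inflection point):

```python
import math

def standard_normal_pdf(z: float) -> float:
    """Density of the standard normal distribution, phi(z)."""
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

# Peak value 1/sqrt(2*pi) at z = 0; inflection points at z = +1 and z = -1.
print(standard_normal_pdf(0))   # 0.3989422804014327
print(standard_normal_pdf(1))   # 0.24197072451914337
```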
General normal distribution

If \(Z\) is a standard normal deviate, then \(X = \sigma Z + \mu\) will have a normal distribution with expected value \(\mu\) and standard deviation \(\sigma\). This is equivalent to saying that the standard normal distribution \(Z\) can be scaled/stretched by a factor of \(\sigma\) and shifted by \(\mu\) to yield a different normal distribution, called \(X\).

Conversely, if \(X\) is a normal deviate with parameters \(\mu\) and \(\sigma^2\), then this \(X\) distribution can be re-scaled and shifted via the formula \(Z = (X - \mu)/\sigma\) to convert it to the standard normal distribution. This variate is also called the standardized form of \(X\).

In particular, the probability density function for \(X\) can be written in terms of the standard normal density \(\varphi\) (with zero mean and unit variance):

\[ f(x) = \frac{1}{\sigma}\,\varphi\!\left(\frac{x-\mu}{\sigma}\right). \]

The probability density must be scaled by \(1/\sigma\) so that the integral is still 1.
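A short Python check of this standardization, using the standard library's NormalDist (the parameters here are arbitrary examples, not from the article):

```python
from statistics import NormalDist

mu, sigma = 10.0, 2.0   # hypothetical parameters
x = 13.0

z = (x - mu) / sigma                      # standardized form of x
general = NormalDist(mu, sigma).cdf(x)    # P(X <= x) for X ~ N(mu, sigma^2)
standard = NormalDist().cdf(z)            # P(Z <= z) for Z ~ N(0, 1)
print(z, general, standard)               # 1.5, and the two probabilities agree
```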
The probability density of the standard Gaussian distribution (standard normal distribution, with zero mean and unit variance) is often denoted with the Greek letter \(\phi\) (phi).[13] The variant form of the Greek letter phi, \(\varphi\), is also used quite often.

The normal distribution is often referred to as \(N(\mu, \sigma^2)\) or \(\mathcal{N}(\mu, \sigma^2)\).[14] Thus when a random variable \(X\) is normally distributed with mean \(\mu\) and standard deviation \(\sigma\), one may write

\[ X \sim \mathcal{N}(\mu, \sigma^2) . \]
Alternative parameterizations

Some authors advocate using the precision \(\tau\) as the parameter defining the width of the distribution, instead of the standard deviation \(\sigma\) or the variance \(\sigma^2\). The precision is normally defined as the reciprocal of the variance, \(\tau = 1/\sigma^2\).[15] The formula for the distribution then becomes

\[ f(x) = \sqrt{\frac{\tau}{2\pi}}\, e^{-\tau(x-\mu)^2/2} . \]

This choice is claimed to have advantages in numerical computations when \(\sigma\) is very close to zero, and simplifies formulas in some contexts, such as in the Bayesian inference of variables with multivariate normal distribution.

Alternatively, the reciprocal of the standard deviation \(\tau' = 1/\sigma\) might be defined as the precision, in which case the expression of the normal distribution becomes

\[ f(x) = \frac{\tau'}{\sqrt{2\pi}}\, e^{-(\tau')^2 (x-\mu)^2/2} . \]

According to Stigler, this formulation is advantageous because of a much simpler and easier-to-remember formula, and simple approximate formulas for the quantiles of the distribution.
Normal distributions form an exponential family with natural parameters \(\theta_1 = \mu/\sigma^2\) and \(\theta_2 = -1/(2\sigma^2)\), and natural statistics \(x\) and \(x^2\). The dual expectation parameters for the normal distribution are \(\eta_1 = \mu\) and \(\eta_2 = \mu^2 + \sigma^2\).
Cumulative distribution function

The cumulative distribution function (CDF) of the standard normal distribution, usually denoted with the capital Greek letter \(\Phi\), is the integral

\[ \Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-t^2/2}\, dt . \]

The related error function \(\operatorname{erf}(x)\) gives the probability of a random variable, with normal distribution of mean 0 and variance 1/2, falling in the range \([-x, x]\). That is:

\[ \operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^2}\, dt . \]

These integrals cannot be expressed in terms of elementary functions, and are often said to be special functions. However, many numerical approximations are known; see below for more. The two functions are closely related, namely

\[ \Phi(x) = \frac{1}{2}\left[ 1 + \operatorname{erf}\!\left(\frac{x}{\sqrt{2}}\right) \right] . \]

For a generic normal distribution with density \(f\), mean \(\mu\) and variance \(\sigma^2\), the cumulative distribution function is

\[ F(x) = \Phi\!\left(\frac{x-\mu}{\sigma}\right) = \frac{1}{2}\left[ 1 + \operatorname{erf}\!\left(\frac{x-\mu}{\sigma\sqrt{2}}\right) \right] . \]

The probability that \(x\) lies between \(a\) and \(b\) with \(a < b\) is therefore[16]:84

\[ F(b) - F(a) = \Phi\!\left(\frac{b-\mu}{\sigma}\right) - \Phi\!\left(\frac{a-\mu}{\sigma}\right) . \]
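A minimal Python sketch of this CDF via the erf relation, using only the standard library:

```python
import math

def norm_cdf(x: float, mu: float = 0.0, sigma: float = 1.0) -> float:
    """CDF of N(mu, sigma^2), via the erf relation above."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Probability that x lies between a and b is F(b) - F(a).
a, b = -1.0, 1.0
print(norm_cdf(b) - norm_cdf(a))   # ~0.6827 for the standard normal
```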
The complement of the standard normal cumulative distribution function, \(Q(x) = 1 - \Phi(x)\), is often called the Q-function, especially in engineering texts.[17][18] It gives the probability that the value of a standard normal random variable \(X\) will exceed \(x\): \(P(X > x)\). Other definitions of the \(Q\)-function, all of which are simple transformations of \(\Phi\), are also used occasionally.[19]

The graph of the standard normal cumulative distribution function \(\Phi\) has 2-fold rotational symmetry around the point (0, 1/2); that is, \(\Phi(-x) = 1 - \Phi(x)\). Its antiderivative (indefinite integral) can be expressed as follows:

\[ \int \Phi(x)\, dx = x\,\Phi(x) + \varphi(x) + C . \]

An asymptotic expansion of the cumulative distribution function for large \(x\) can be derived using integration by parts:

\[ 1 - \Phi(x) = \frac{\varphi(x)}{x}\left[ 1 + \sum_{n=1}^{N} \frac{(-1)^n (2n-1)!!}{x^{2n}} \right] + O\!\left(x^{-2N-2}\varphi(x)\right), \]

where \(!!\) denotes the double factorial. For more, see Error function § Asymptotic expansion.[20]
Taylor series representation

The Taylor series for the normal distribution density \(\varphi(x)\) can be derived by substituting \(-x^2/2\) into the Taylor series for the exponential function:[21]

\[ \varphi(x) = \frac{1}{\sqrt{2\pi}} \sum_{k=0}^{\infty} \frac{(-1)^k}{2^k\, k!}\, x^{2k} . \]

This series can be integrated term by term to obtain the Taylor series for the cumulative distribution function:[22]

\[ \Phi(x) = \frac{1}{2} + \frac{1}{\sqrt{2\pi}} \sum_{k=0}^{\infty} \frac{(-1)^k}{2^k\, k!\,(2k+1)}\, x^{2k+1} . \]

However, this series is ineffective for calculation due to slow convergence, except when \(x\) is small.[22] Both of these series describe entire functions, which converge for all real and complex values of \(x\).
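A short Python sketch of this term-by-term series for \(\Phi\) (a partial sum; as noted, only practical for small \(x\)):

```python
import math

def phi_taylor(x: float, terms: int = 40) -> float:
    """Standard normal CDF via the integrated Taylor series above."""
    total, term = 0.0, x            # term holds x^(2k+1) / (2^k k!)
    for k in range(terms):
        total += term / (2 * k + 1)
        term *= -x * x / (2 * (k + 1))
    return 0.5 + total / math.sqrt(2 * math.pi)

exact = 0.5 * (1 + math.erf(1 / math.sqrt(2)))
print(phi_taylor(1.0), exact)   # both ~0.841345 for x = 1
```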
Recursive computation with Taylor series

The recurrence relation for Hermite polynomials \(\mathrm{He}_n(x)\) may be used to efficiently construct the Taylor series expansion about any point \(x_0\), since the derivatives of the standard normal density satisfy \(\varphi^{(n)}(x) = (-1)^n \mathrm{He}_n(x)\,\varphi(x)\). [The explicit expansion and its coefficients were not extracted.]
Standard deviation and coverage

For the normal distribution, the values less than one standard deviation from the mean account for 68.27% of the set, two standard deviations from the mean account for 95.45%, and three standard deviations account for 99.73%. About 68% of values drawn from a normal distribution are within one standard deviation \(\sigma\) of the mean; about 95% of the values lie within two standard deviations; and about 99.7% are within three standard deviations.[9] This is known as the 68–95–99.7 (empirical) rule, or the 3-sigma rule.

More precisely, the probability that a normal deviate lies in the range between \(\mu - n\sigma\) and \(\mu + n\sigma\) is given by

\[ F(\mu + n\sigma) - F(\mu - n\sigma) = \Phi(n) - \Phi(-n) = \operatorname{erf}\!\left(\frac{n}{\sqrt{2}}\right) . \]

To 12 significant digits, the values for \(n = 1, \ldots, 6\) are:
| \(n\) | \(p = F(\mu+n\sigma) - F(\mu-n\sigma)\) | \(1 - p\) | odds: 1 in \(1/(1-p)\) | OEIS |
|---|---|---|---|---|
| 1 | 0.682689492137 | 0.317310507863 | 3.15148718753 | A178647 |
| 2 | 0.954499736104 | 0.045500263896 | 21.9778945080 | A110894 |
| 3 | 0.997300203937 | 0.002699796063 | 370.398347345 | A270712 |
| 4 | 0.999936657516 | 0.000063342484 | 15787.1927673 | |
| 5 | 0.999999426697 | 0.000000573303 | 1744277.89362 | |
| 6 | 0.999999998027 | 0.000000001973 | 506797345.897 | |
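These table values can be reproduced from the erf formula above; a minimal check in Python:

```python
import math

# Probability mass within n standard deviations of the mean: erf(n / sqrt(2)).
for n in range(1, 7):
    p = math.erf(n / math.sqrt(2))
    print(n, f"{p:.12f}", f"1 in {1 / (1 - p):,.3f}")
```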
For large \(n\), one can use the approximation

\[ 1 - p \approx \frac{e^{-n^2/2}}{n}\sqrt{\frac{2}{\pi}} . \]

Quantile function

The quantile function of a distribution is the inverse of the cumulative distribution function. The quantile function of the standard normal distribution is called the probit function, and can be expressed in terms of the inverse error function:

\[ \Phi^{-1}(p) = \sqrt{2}\,\operatorname{erf}^{-1}(2p - 1), \qquad p \in (0, 1). \]

For a normal random variable with mean \(\mu\) and variance \(\sigma^2\), the quantile function is

\[ F^{-1}(p) = \mu + \sigma\,\Phi^{-1}(p) = \mu + \sigma\sqrt{2}\,\operatorname{erf}^{-1}(2p - 1). \]

The quantile \(\Phi^{-1}(p)\) of the standard normal distribution is commonly denoted as \(z_p\). These values are used in hypothesis testing, construction of confidence intervals and Q–Q plots. A normal random variable \(X\) will exceed \(\mu + z_p\sigma\) with probability \(1 - p\), and will lie outside the interval \(\mu \pm z_p\sigma\) with probability \(2(1 - p)\). In particular, the quantile \(z_{0.975}\) is 1.96; therefore a normal random variable will lie outside the interval \(\mu \pm 1.96\sigma\) in only 5% of cases.
The following table gives the quantile \(z_p\) such that \(X\) will lie in the range \(\mu \pm z_p\sigma\) with a specified probability \(p\). These values are useful to determine tolerance intervals for sample averages and other statistical estimators with normal (or asymptotically normal) distributions.[23] Note that the following table shows \(\sqrt{2}\,\operatorname{erf}^{-1}(p) = \Phi^{-1}\!\left(\frac{p+1}{2}\right)\), not \(\Phi^{-1}(p)\) as defined above.
| \(p\) | \(z_p\) | \(p\) | \(z_p\) |
|---|---|---|---|
| 0.80 | 1.281551565545 | 0.999 | 3.290526731492 |
| 0.90 | 1.644853626951 | 0.9999 | 3.890591886413 |
| 0.95 | 1.959963984540 | 0.99999 | 4.417173413469 |
| 0.98 | 2.326347874041 | 0.999999 | 4.891638475699 |
| 0.99 | 2.575829303549 | 0.9999999 | 5.326723886384 |
| 0.995 | 2.807033768344 | 0.99999999 | 5.730728868236 |
| 0.998 | 3.090232306168 | 0.999999999 | 6.109410204869 |
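These entries follow directly from the relation \(z_p = \Phi^{-1}((p+1)/2)\); a quick Python reproduction:

```python
from statistics import NormalDist

# z_p such that X lies in mu +/- z_p * sigma with probability p.
for p in (0.80, 0.90, 0.95, 0.99, 0.999):
    z = NormalDist().inv_cdf((p + 1) / 2)
    print(p, round(z, 12))
```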
For small \(p\), the quantile function has the useful asymptotic expansion[citation needed]

\[ \Phi^{-1}(p) = -\sqrt{\ln\frac{1}{p^2} - \ln\ln\frac{1}{p^2} - \ln(2\pi)} + o(1) \quad \text{as } p \to 0 . \]
Using root finding to compute the quantile function

Any of the described approaches for computing the cumulative distribution function can be used with Newton's method (or another root-finding algorithm such as Halley's method) to find the value of \(x\) for which \(\Phi(x) = p\) for some desired quantile \(p\). For example, starting with an initial, approximately correct guess \(x_0\), increasingly better approximations \(x_1\), \(x_2\), ... can be calculated iteratively using Newton's method with

\[ x_{n+1} = x_n - \frac{\Phi(x_n) - p}{\varphi(x_n)} , \]

since the density \(\varphi\) is the derivative of \(\Phi\).
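A minimal Python sketch of this Newton iteration (CDF via erf; the starting guess and tolerance are arbitrary choices):

```python
import math

def phi(x):   # standard normal density
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):   # standard normal CDF via erf
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def probit_newton(p: float, x0: float = 0.0, tol: float = 1e-12) -> float:
    """Solve Phi(x) = p by Newton's method; Phi' = phi."""
    x = x0
    for _ in range(100):
        step = (Phi(x) - p) / phi(x)
        x -= step
        if abs(step) < tol:
            break
    return x

print(probit_newton(0.975))   # ~1.959963984540
```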
Properties

The normal distribution is the only distribution whose cumulants beyond the first two (i.e., other than the mean and variance) are zero. It is also the continuous distribution with the maximum entropy for a specified mean and variance.[24][25] Geary has shown, assuming that the mean and variance are finite, that the normal distribution is the only distribution where the mean and variance calculated from a set of independent draws are independent of each other.[26][27]

The normal distribution is a subclass of the elliptical distributions. The normal distribution is symmetric about its mean, and is non-zero over the entire real line. As such it may not be a suitable model for variables that are inherently positive or strongly skewed, such as the weight of a person or the price of a share of stock. Such variables may be better described by other distributions, such as the log-normal distribution or the Pareto distribution.

The value of the normal density is practically zero when the value \(x\) lies more than a few standard deviations away from the mean (e.g., a spread of three standard deviations covers all but 0.27% of the total distribution). Therefore, it may not be an appropriate model when one expects a significant fraction of outliers, values that lie many standard deviations away from the mean, and least squares and other statistical inference methods that are optimal for normally distributed variables often become highly unreliable when applied to such data. In those cases, a more heavy-tailed distribution should be assumed and appropriate robust statistical inference methods applied.

The Gaussian distribution belongs to the family of stable distributions, which are the attractors of sums of independent, identically distributed distributions whether or not the mean or variance is finite. Except for the Gaussian, which is a limiting case, all stable distributions have heavy tails and infinite variance. It is one of the few distributions that are stable and that have probability density functions that can be expressed analytically, the others being the Cauchy distribution and the Lévy distribution.
Symmetries and derivatives

The normal distribution with density \(f(x)\) (mean \(\mu\) and variance \(\sigma^2\)) has a number of symmetry and derivative properties, and the density \(\varphi\) of the standard normal distribution (i.e. \(\mu = 0\) and \(\sigma = 1\)) has further such properties. [The two itemized property lists were not extracted.]

Moments

The plain and absolute moments of a variable \(X\) are the expected values of \(X^p\) and \(|X|^p\), respectively. If the expected value \(\mu\) of \(X\) is zero, these parameters are called central moments; otherwise, these parameters are called non-central moments. Usually we are interested only in moments with integer order \(p\).
If \(X\) has a normal distribution, the non-central moments exist and are finite for any \(p\) whose real part is greater than −1. For any non-negative integer \(p\), the plain central moments are:[31]

\[ \operatorname{E}\!\left[(X-\mu)^p\right] = \begin{cases} 0 & \text{if } p \text{ is odd,} \\ \sigma^p (p-1)!! & \text{if } p \text{ is even.} \end{cases} \]

Here \(n!!\) denotes the double factorial, that is, the product of all numbers from \(n\) to 1 that have the same parity as \(n\).

The central absolute moments coincide with plain moments for all even orders, but are nonzero for odd orders. For any non-negative integer \(p\),

\[ \operatorname{E}\!\left[|X-\mu|^p\right] = \sigma^p (p-1)!! \cdot \begin{cases} \sqrt{2/\pi} & \text{if } p \text{ is odd,} \\ 1 & \text{if } p \text{ is even.} \end{cases} \]

The last formula is valid also for any non-integer \(p > -1\). When the mean \(\mu = 0\), the plain and absolute moments can be expressed in terms of confluent hypergeometric functions \({}_1F_1\) and \(U\).[32] These expressions remain valid even when \(p\) is not an integer. See also generalized Hermite polynomials.
| Order \(p\) | Non-central moment \(\operatorname{E}[X^p]\) | Central moment \(\operatorname{E}[(X-\mu)^p]\) |
|---|---|---|
| 0 | \(1\) | \(1\) |
| 1 | \(\mu\) | \(0\) |
| 2 | \(\mu^2 + \sigma^2\) | \(\sigma^2\) |
| 3 | \(\mu^3 + 3\mu\sigma^2\) | \(0\) |
| 4 | \(\mu^4 + 6\mu^2\sigma^2 + 3\sigma^4\) | \(3\sigma^4\) |
| 5 | \(\mu^5 + 10\mu^3\sigma^2 + 15\mu\sigma^4\) | \(0\) |
| 6 | \(\mu^6 + 15\mu^4\sigma^2 + 45\mu^2\sigma^4 + 15\sigma^6\) | \(15\sigma^6\) |
| 7 | \(\mu^7 + 21\mu^5\sigma^2 + 105\mu^3\sigma^4 + 105\mu\sigma^6\) | \(0\) |
| 8 | \(\mu^8 + 28\mu^6\sigma^2 + 210\mu^4\sigma^4 + 420\mu^2\sigma^6 + 105\sigma^8\) | \(105\sigma^8\) |
The expectation of \(X\) conditioned on the event that \(X\) lies in an interval \([a, b]\) is given by

\[ \operatorname{E}[X \mid a < X < b] = \mu - \sigma^2\,\frac{f(b) - f(a)}{F(b) - F(a)} , \]

where \(f\) and \(F\) respectively are the density and the cumulative distribution function of \(X\). For \(b = \infty\) this is known as the inverse Mills ratio. Note that above, the density \(f\) of \(X\) is used instead of the standard normal density as in the inverse Mills ratio, so here we have \(\sigma^2\) instead of \(\sigma\).
Fourier transform and characteristic function

The Fourier transform of a normal density \(f\) with mean \(\mu\) and variance \(\sigma^2\) is[33]

\[ \hat{f}(t) = \int_{-\infty}^{\infty} f(x)\, e^{-itx}\, dx = e^{-i\mu t}\, e^{-\frac{1}{2}(\sigma t)^2} , \]

where \(i\) is the imaginary unit. If the mean \(\mu = 0\), the first factor is 1, and the Fourier transform is, apart from a constant factor, a normal density on the frequency domain, with mean 0 and variance \(1/\sigma^2\). In particular, the standard normal distribution \(\varphi\) is an eigenfunction of the Fourier transform.

In probability theory, the Fourier transform of the probability distribution of a real-valued random variable \(X\) is closely connected to the characteristic function \(\varphi_X(t)\) of that variable, which is defined as the expected value of \(e^{itX}\), as a function of the real variable \(t\) (the frequency parameter of the Fourier transform). This definition can be analytically extended to a complex-value variable \(t\).[34] The relation between both is:

\[ \varphi_X(t) = \hat{f}(-t) = e^{i\mu t}\, e^{-\frac{1}{2}(\sigma t)^2} . \]

The real and imaginary parts of this expression give the expected values of the basic trigonometric functions of \(X\), and analogous identities hold for the hyperbolic functions. [The explicit formulas were not extracted.] These formulas, evaluated at particular arguments, give the expected value of these basic trigonometric and hyperbolic functions over a Gaussian random variable \(X\), which also could be seen as consequences of Isserlis's theorem.
Moment- and cumulant-generating functions

The moment generating function of a real random variable \(X\) is the expected value of \(e^{tX}\), as a function of the real parameter \(t\). For a normal distribution with density \(f\), mean \(\mu\) and variance \(\sigma^2\), the moment generating function exists and is equal to

\[ M(t) = \operatorname{E}\!\left[e^{tX}\right] = e^{\mu t}\, e^{\sigma^2 t^2/2} . \]

For any \(k\), the coefficient of \(t^k/k!\) in the moment generating function (expressed as an exponential power series in \(t\)) is the normal distribution's expected value \(\operatorname{E}[X^k]\).

The cumulant generating function is the logarithm of the moment generating function, namely

\[ g(t) = \ln M(t) = \mu t + \tfrac{1}{2}\sigma^2 t^2 . \]

The coefficients of this exponential power series define the cumulants, but because this is a quadratic polynomial in \(t\), only the first two cumulants are nonzero, namely the mean \(\mu\) and the variance \(\sigma^2\).

Some authors prefer to instead work with the characteristic function \(\operatorname{E}[e^{itX}] = e^{i\mu t - \sigma^2 t^2/2}\) and \(\ln \operatorname{E}[e^{itX}] = i\mu t - \tfrac{1}{2}\sigma^2 t^2\).
Stein operator and class

Within Stein's method the Stein operator and class of a random variable \(X \sim \mathcal{N}(\mu, \sigma^2)\) are \(\mathcal{A}f(x) = \sigma^2 f'(x) - (x - \mu)f(x)\) and the class of all absolutely continuous functions \(f\) such that \(\operatorname{E}[|f'(X)|] < \infty\).
Zero-variance limit

In the limit when \(\sigma^2\) approaches zero, the probability density \(f(x)\) approaches zero everywhere except at \(x = \mu\), where it grows without bound, while its integral remains equal to 1. An extension of the normal distribution to the case with zero variance can be defined using the Dirac delta measure, although the resulting random variables are not absolutely continuous and thus do not have probability density functions. The cumulative distribution function of such a random variable is then the Heaviside step function translated by the mean \(\mu\), namely

\[ F(x) = \mathbf{1}\{x \ge \mu\} . \]
Maximum entropy

Of all probability distributions over the reals with a specified finite mean \(\mu\) and finite variance \(\sigma^2\), the normal distribution \(f(x; \mu, \sigma^2)\) is the one with maximum entropy.[24] To see this, let \(X\) be a continuous random variable with probability density \(f(x)\). The entropy of \(X\) is defined as[35][36][37]

\[ H(X) = -\int_{-\infty}^{\infty} f(x)\ln f(x)\, dx , \]

where \(f(x)\ln f(x)\) is understood to be zero whenever \(f(x) = 0\). This functional can be maximized, subject to the constraints that the distribution is properly normalized and has a specified mean and variance, by using variational calculus. A function with three Lagrange multipliers is defined:

\[ L = -\int f(x)\ln f(x)\, dx - \lambda_0\!\left(1 - \int f(x)\, dx\right) - \lambda_1\!\left(\mu - \int f(x)\,x\, dx\right) - \lambda_2\!\left(\sigma^2 - \int f(x)(x-\mu)^2\, dx\right). \]

At maximum entropy, a small variation \(\delta f(x)\) about \(f(x)\) will produce a variation \(\delta L\) about \(L\) which is equal to 0:

\[ \delta L = \int \delta f(x)\left(-\ln f(x) - 1 + \lambda_0 + \lambda_1 x + \lambda_2 (x-\mu)^2\right) dx = 0 . \]

Since this must hold for any small \(\delta f(x)\), the factor multiplying \(\delta f(x)\) must be zero, and solving for \(f(x)\) yields:

\[ f(x) = \exp\!\left(-1 + \lambda_0 + \lambda_1 x + \lambda_2 (x-\mu)^2\right). \]

The Lagrange constraints that \(f(x)\) is properly normalized and has the specified mean and variance are satisfied if and only if \(\lambda_0\), \(\lambda_1\), and \(\lambda_2\) are chosen so that

\[ f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}} . \]

The entropy of a normal distribution \(X \sim \mathcal{N}(\mu, \sigma^2)\) is equal to

\[ H(X) = \tfrac{1}{2}\ln\!\left(2\pi e\sigma^2\right), \]

which is independent of the mean \(\mu\).
If the characteristic function \(\varphi_X\) of some random variable \(X\) is of the form \(\varphi_X(t) = e^{Q(t)}\) in a neighborhood of zero, where \(Q(t)\) is a polynomial, then the Marcinkiewicz theorem (named after Józef Marcinkiewicz) asserts that \(Q\) can be at most a quadratic polynomial, and therefore \(X\) is a normal random variable.[38] The consequence of this result is that the normal distribution is the only distribution with a finite number (two) of non-zero cumulants.
If \(X\) and \(Y\) are jointly normal and uncorrelated, then they are independent. The requirement that \(X\) and \(Y\) should be jointly normal is essential; without it the property does not hold.[39][40][proof] For non-normal random variables uncorrelatedness does not imply independence.
The Kullback–Leibler divergence of one normal distribution \(X_1 \sim \mathcal{N}(\mu_1, \sigma_1^2)\) from another \(X_2 \sim \mathcal{N}(\mu_2, \sigma_2^2)\) is given by:[41]

\[ D_{\mathrm{KL}}(X_1 \parallel X_2) = \frac{(\mu_1 - \mu_2)^2}{2\sigma_2^2} + \frac{1}{2}\left(\frac{\sigma_1^2}{\sigma_2^2} - 1 - \ln\frac{\sigma_1^2}{\sigma_2^2}\right). \]

The Hellinger distance between the same distributions is equal to

\[ H^2(X_1, X_2) = 1 - \sqrt{\frac{2\sigma_1\sigma_2}{\sigma_1^2 + \sigma_2^2}}\; e^{-\frac{(\mu_1 - \mu_2)^2}{4(\sigma_1^2 + \sigma_2^2)}} . \]
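A small Python sketch of the KL formula above (the test values are arbitrary):

```python
import math

def kl_normal(mu1, s1, mu2, s2):
    """KL divergence D(N(mu1, s1^2) || N(mu2, s2^2)), per the formula above."""
    return ((mu1 - mu2) ** 2 / (2 * s2 ** 2)
            + 0.5 * (s1 ** 2 / s2 ** 2 - 1 - math.log(s1 ** 2 / s2 ** 2)))

print(kl_normal(0, 1, 0, 1))   # 0.0 for identical distributions
print(kl_normal(1, 1, 0, 2))   # positive, and asymmetric in its arguments
```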
The Fisher information matrix for a normal distribution w.r.t. \(\mu\) and \(\sigma^2\) is diagonal and takes the form

\[ \mathcal{I}(\mu, \sigma^2) = \begin{pmatrix} \dfrac{1}{\sigma^2} & 0 \\ 0 & \dfrac{1}{2\sigma^4} \end{pmatrix} . \]
The conjugate prior of the mean of a normal distribution is another normal distribution.[42] Specifically, if \(x_1, \ldots, x_n\) are iid \(\mathcal{N}(\mu, \sigma^2)\) and the prior is \(\mu \sim \mathcal{N}(\mu_0, \sigma_0^2)\), then the posterior distribution for the estimator of \(\mu\) will be

\[ \mu \mid x_1, \ldots, x_n \;\sim\; \mathcal{N}\!\left(\frac{\frac{\sigma^2}{n}\mu_0 + \sigma_0^2\bar{x}}{\frac{\sigma^2}{n} + \sigma_0^2},\; \left(\frac{n}{\sigma^2} + \frac{1}{\sigma_0^2}\right)^{-1}\right). \]
The family of normal distributions not only forms an exponential family (EF), but in fact forms a natural exponential family (NEF) with quadratic variance function (NEF-QVF). Many properties of normal distributions generalize to properties of NEF-QVF distributions, NEF distributions, or EF distributions generally. NEF-QVF distributions comprise six families, including Poisson, gamma, binomial, and negative binomial distributions, while many of the common families studied in probability and statistics are NEF or EF.
In information geometry, the family of normal distributions forms a statistical manifold with constant curvature \(-1/2\). The same family is flat with respect to the (±1)-connections \(\nabla^{(e)}\) and \(\nabla^{(m)}\).[43]
If \(X_1, \ldots, X_n\) are distributed according to \(\mathcal{N}(0, \sigma^2)\), then \(\operatorname{E}\!\left[\max_i X_i\right] \le \sigma\sqrt{2\ln n}\). Note that there is no assumption of independence.[44]
Central limit theorem

[Figure: as the number of discrete events increases, the function begins to resemble a normal distribution. Comparison of probability density functions \(p(k)\) for the sum of \(n\) fair 6-sided dice, showing their convergence to a normal distribution with increasing \(n\), in accordance with the central limit theorem. In the bottom-right graph, smoothed profiles of the previous graphs are rescaled, superimposed and compared with a normal distribution (black curve).]
The central limit theorem states that under certain (fairly common) conditions, the sum of many random variables will have an approximately normal distribution. More specifically, where \(X_1, \ldots, X_n\) are independent and identically distributed random variables with the same arbitrary distribution, zero mean, and variance \(\sigma^2\), and \(Z\) is their mean scaled by \(\sqrt{n}\),

\[ Z = \sqrt{n}\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right), \]

then, as \(n\) increases, the probability distribution of \(Z\) will tend to the normal distribution with zero mean and variance \(\sigma^2\).
The theorem can be extended to variables \(X_i\) that are not independent and/or not identically distributed if certain constraints are placed on the degree of dependence and the moments of the distributions.

Many test statistics, scores, and estimators encountered in practice contain sums of certain random variables in them, and even more estimators can be represented as sums of random variables through the use of influence functions. The central limit theorem implies that those statistical parameters will have asymptotically normal distributions.

The central limit theorem also implies that certain distributions can be approximated by the normal distribution, for example:

- The binomial distribution \(B(n, p)\) is approximately normal with mean \(np\) and variance \(np(1-p)\) for large \(n\) and for \(p\) not too close to 0 or 1.
- The Poisson distribution with parameter \(\lambda\) is approximately normal with mean \(\lambda\) and variance \(\lambda\), for large values of \(\lambda\).
- The chi-squared distribution \(\chi^2(k)\) is approximately normal with mean \(k\) and variance \(2k\), for large \(k\).
- Student's t-distribution \(t(\nu)\) is approximately standard normal when \(\nu\) is large.

Whether these approximations are sufficiently accurate depends on the purpose for which they are needed, and the rate of convergence to the normal distribution. It is typically the case that such approximations are less accurate in the tails of the distribution. A general upper bound for the approximation error in the central limit theorem is given by the Berry–Esseen theorem; improvements of the approximation are given by the Edgeworth expansions. This theorem can also be used to justify modeling the sum of many uniform noise sources as Gaussian noise. See AWGN.
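A tiny Monte Carlo illustration of the statement above, using uniform variables (sample size and seed are arbitrary choices): the scaled, centered mean of \(n\) iid \(U(0,1)\) draws should have variance close to \(\sigma^2 = 1/12\).

```python
import random
import statistics

random.seed(0)
n, trials = 48, 20000
samples = [
    (sum(random.random() for _ in range(n)) / n - 0.5) * n ** 0.5
    for _ in range(trials)
]
print(statistics.mean(samples))      # ~0
print(statistics.variance(samples))  # ~1/12 = 0.0833...
```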
Operations and functions of normal variables

Operations on a single normal variable

If \(X\) is distributed normally with mean \(\mu\) and variance \(\sigma^2\), then, for example, \(aX + b\) is also normally distributed, with mean \(a\mu + b\) and variance \(a^2\sigma^2\); \(e^X\) is distributed log-normally; and \((X - \mu)^2/\sigma^2\) has the chi-squared distribution with one degree of freedom. [The full itemized list was not extracted.]
Operations on two independent normal variables

If \(X_1\) and \(X_2\) are two independent normal random variables, with means \(\mu_1\), \(\mu_2\) and variances \(\sigma_1^2\), \(\sigma_2^2\), then their sum \(X_1 + X_2\) will also be normally distributed,[proof] with mean \(\mu_1 + \mu_2\) and variance \(\sigma_1^2 + \sigma_2^2\).

In particular, if \(X\) and \(Y\) are independent normal deviates with zero mean and variance \(\sigma^2\), then \(X + Y\) and \(X - Y\) are also independent and normally distributed, with zero mean and variance \(2\sigma^2\). This is a special case of the polarization identity.[46]
If \(X_1\), \(X_2\) are two independent normal deviates with mean \(\mu\) and variance \(\sigma^2\), and \(a\), \(b\) are arbitrary real numbers, then the variable

\[ X = \frac{aX_1 + bX_2 - (a+b)\mu}{\sqrt{a^2 + b^2}} + \mu \]

is also normally distributed with mean \(\mu\) and variance \(\sigma^2\). It follows that the normal distribution is stable (with exponent \(\alpha = 2\)).

If \(X_1\), \(X_2\) are normal distributions, then their normalized geometric mean is a normal distribution. [The parameters of the resulting distribution were not extracted.]
Operations on two independent standard normal variables

If \(X_1\) and \(X_2\) are two independent standard normal random variables with mean 0 and variance 1, then, for example, their sum and difference are normal with variance 2, the squared norm \(X_1^2 + X_2^2\) has the chi-squared distribution with two degrees of freedom, and the ratio \(X_1/X_2\) follows the standard Cauchy distribution. [The full itemized list was not extracted.]
Operations on multiple independent normal variables

A quadratic form of a normal vector, i.e. a quadratic function of multiple independent or correlated normal variables, is a generalized chi-square variable.

Operations on the density function

The split normal distribution is most directly defined in terms of joining scaled sections of the density functions of different normal distributions and rescaling the density to integrate to one. The truncated normal distribution results from rescaling a section of a single density function.
Infinite divisibility and Cramér's theorem

For any positive integer \(n\), any normal distribution with mean \(\mu\) and variance \(\sigma^2\) is the distribution of the sum of \(n\) independent normal deviates, each with mean \(\mu/n\) and variance \(\sigma^2/n\). This property is called infinite divisibility.[51]

Conversely, if \(X_1\) and \(X_2\) are independent random variables and their sum \(X_1 + X_2\) has a normal distribution, then both \(X_1\) and \(X_2\) must be normal deviates.[52] This result is known as Cramér's decomposition theorem, and is equivalent to saying that the convolution of two distributions is normal if and only if both are normal. Cramér's theorem implies that a linear combination of independent non-Gaussian variables will never have an exactly normal distribution, although it may approach it arbitrarily closely.[38]
The Kac–Bernstein theorem

The Kac–Bernstein theorem states that if \(X\) and \(Y\) are independent and \(X + Y\) and \(X - Y\) are also independent, then both \(X\) and \(Y\) must necessarily have normal distributions.[53][54]

More generally, if \(X_1, \ldots, X_n\) are independent random variables, then two distinct linear combinations \(\sum_k a_k X_k\) and \(\sum_k b_k X_k\) will be independent if and only if all \(X_k\) are normal and \(\sum_k a_k b_k \sigma_k^2 = 0\), where \(\sigma_k^2\) denotes the variance of \(X_k\).[53]
Extensions

The notion of normal distribution, being one of the most important distributions in probability theory, has been extended far beyond the standard framework of the univariate (that is, one-dimensional) case (Case 1). All these extensions are also called normal or Gaussian laws, so a certain ambiguity in names exists.

The multivariate normal distribution describes the Gaussian law in the \(k\)-dimensional Euclidean space. A vector \(X \in \mathbb{R}^k\) is multivariate-normally distributed if any linear combination of its components \(\sum_{j=1}^{k} a_j X_j\) has a (univariate) normal distribution. The variance of \(X\) is a \(k \times k\) symmetric positive-definite matrix \(V\). The multivariate normal distribution is a special case of the elliptical distributions. As such, its iso-density loci in the \(k = 2\) case are ellipses and in the case of arbitrary \(k\) are ellipsoids.
- Rectified Gaussian distribution: a rectified version of the normal distribution, with all the negative elements reset to 0.
- Complex normal distribution deals with the complex normal vectors. A complex vector \(X \in \mathbb{C}^k\) is said to be normal if both its real and imaginary components jointly possess a \(2k\)-dimensional multivariate normal distribution. The variance-covariance structure of \(X\) is described by two matrices: the variance matrix \(\Gamma\), and the relation matrix \(C\).
- Matrix normal distribution describes the case of normally distributed matrices.
- Gaussian processes are the normally distributed stochastic processes. These can be viewed as elements of some infinite-dimensional Hilbert space \(H\), and thus are the analogues of multivariate normal vectors for the case \(k = \infty\). A random element \(h \in H\) is said to be normal if for any constant \(a \in H\) the scalar product \((a, h)\) has a (univariate) normal distribution. The variance structure of such a Gaussian random element can be described in terms of the linear covariance operator \(K: H \to H\). Several Gaussian processes became popular enough to have their own names: Brownian motion; Brownian bridge; and Ornstein–Uhlenbeck process.
- Gaussian q-distribution is an abstract mathematical construction that represents a q-analogue of the normal distribution.
- The q-Gaussian is an analogue of the Gaussian distribution, in the sense that it maximises the Tsallis entropy, and is one type of Tsallis distribution. This distribution is different from the Gaussian q-distribution above.
- The Kaniadakis κ-Gaussian distribution is a generalization of the Gaussian distribution which arises from the Kaniadakis statistics, being one of the Kaniadakis distributions.
A random variable \(X\) has a two-piece normal distribution if it has a distribution

\[ f(x) \propto \begin{cases} \mathcal{N}(\mu, \sigma_1^2) & \text{if } x \le \mu, \\ \mathcal{N}(\mu, \sigma_2^2) & \text{if } x \ge \mu, \end{cases} \]

where \(\mu\) is the mean and \(\sigma_1^2\) and \(\sigma_2^2\) are the variances of the distribution to the left and right of the mean respectively. The mean \(\operatorname{E}(X)\), variance \(\operatorname{V}(X)\), and third central moment \(\operatorname{T}(X)\) of this distribution have been determined.[55]
One of the main practical uses of the Gaussian law is to model the empirical distributions of many different random variables encountered in practice. In such a case, a possible extension would be a richer family of distributions, having more than two parameters and therefore being able to fit the empirical distribution more accurately. Examples of such extensions are:

- Pearson distribution: a four-parameter family of probability distributions that extend the normal law to include different skewness and kurtosis values.
- The generalized normal distribution, also known as the exponential power distribution, allows for distribution tails with thicker or thinner asymptotic behaviors.
Statistical inference

Estimation of parameters

It is often the case that we do not know the parameters of the normal distribution, but instead want to estimate them. That is, having a sample \(x_1, \ldots, x_n\) from a normal \(\mathcal{N}(\mu, \sigma^2)\) population we would like to learn the approximate values of parameters \(\mu\) and \(\sigma^2\). The standard approach to this problem is the maximum likelihood method, which requires maximization of the log-likelihood function:

\[ \ln \mathcal{L}(\mu, \sigma^2) = \sum_{i=1}^{n} \ln f(x_i; \mu, \sigma^2) = -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln\sigma^2 - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i - \mu)^2 . \]

Taking derivatives with respect to \(\mu\) and \(\sigma^2\) and solving the resulting system of first order conditions yields the maximum likelihood estimates:

\[ \hat{\mu} = \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad \hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2 . \]

Then the maximized log-likelihood is as follows:

\[ \ln \mathcal{L}(\hat{\mu}, \hat{\sigma}^2) = -\frac{n}{2}\left(1 + \ln(2\pi\hat{\sigma}^2)\right). \]
Sample mean

Estimator \(\hat{\mu}\) is called the sample mean, since it is the arithmetic mean of all observations. The statistic \(\bar{x}\) is complete and sufficient for \(\mu\), and therefore by the Lehmann–Scheffé theorem, \(\hat{\mu}\) is the uniformly minimum variance unbiased (UMVU) estimator.[56] In finite samples it is distributed normally:

\[ \hat{\mu} \sim \mathcal{N}\!\left(\mu, \frac{\sigma^2}{n}\right). \]

The variance of this estimator is equal to the \(\mu\mu\)-element of the inverse Fisher information matrix. This implies that the estimator is finite-sample efficient. Of practical importance is the standard error of \(\hat{\mu}\) being proportional to \(1/\sqrt{n}\); that is, if one wishes to decrease the standard error by a factor of 10, one must increase the number of points in the sample by a factor of 100. This fact is widely used in determining sample sizes for opinion polls and the number of trials in Monte Carlo simulations.

From the standpoint of the asymptotic theory, \(\hat{\mu}\) is consistent, that is, it converges in probability to \(\mu\) as \(n \to \infty\). The estimator is also asymptotically normal, which is a simple corollary of it being normal in finite samples:

\[ \sqrt{n}(\hat{\mu} - \mu) \xrightarrow{d} \mathcal{N}(0, \sigma^2) . \]
Sample variance

The estimator \(\hat{\sigma}^2\) is called the sample variance, since it is the variance of the sample (\(x_1, \ldots, x_n\)). In practice, another estimator is often used instead of the \(\hat{\sigma}^2\). This other estimator is denoted \(s^2\), and is also called the sample variance, which represents a certain ambiguity in terminology; its square root \(s\) is called the sample standard deviation. The estimator \(s^2\) differs from \(\hat{\sigma}^2\) by having \((n-1)\) instead of \(n\) in the denominator (the so-called Bessel's correction):

\[ s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2 = \frac{n}{n-1}\,\hat{\sigma}^2 . \]

The difference between \(s^2\) and \(\hat{\sigma}^2\) becomes negligibly small for large \(n\)'s. In finite samples however, the motivation behind the use of \(s^2\) is that it is an unbiased estimator of the underlying parameter \(\sigma^2\), whereas \(\hat{\sigma}^2\) is biased. Also, by the Lehmann–Scheffé theorem the estimator \(s^2\) is uniformly minimum variance unbiased (UMVU),[56] which makes it the "best" estimator among all unbiased ones. However it can be shown that the biased estimator \(\hat{\sigma}^2\) is better than the \(s^2\) in terms of the mean squared error (MSE) criterion. In finite samples both \(s^2\) and \(\hat{\sigma}^2\) have scaled chi-squared distribution with \((n-1)\) degrees of freedom:

\[ s^2 \sim \frac{\sigma^2}{n-1}\,\chi^2_{n-1}, \qquad \hat{\sigma}^2 \sim \frac{\sigma^2}{n}\,\chi^2_{n-1} . \]

The first of these expressions shows that the variance of \(s^2\) is equal to \(2\sigma^4/(n-1)\), which is slightly greater than the \(\sigma\sigma\)-element of the inverse Fisher information matrix, which is \(2\sigma^4/n\). Thus, \(s^2\) is not an efficient estimator for \(\sigma^2\), and moreover, since \(s^2\) is UMVU, we can conclude that the finite-sample efficient estimator for \(\sigma^2\) does not exist.

Applying the asymptotic theory, both estimators \(s^2\) and \(\hat{\sigma}^2\) are consistent, that is they converge in probability to \(\sigma^2\) as the sample size \(n \to \infty\). The two estimators are also both asymptotically normal:

\[ \sqrt{n}(s^2 - \sigma^2) \xrightarrow{d} \mathcal{N}(0, 2\sigma^4), \qquad \sqrt{n}(\hat{\sigma}^2 - \sigma^2) \xrightarrow{d} \mathcal{N}(0, 2\sigma^4) . \]

In particular, both estimators are asymptotically efficient for \(\sigma^2\).
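A minimal Python sketch of these estimators on simulated data (the true parameters 5 and 2 are arbitrary choices for illustration):

```python
import random
import statistics

random.seed(1)
data = [random.gauss(5, 2) for _ in range(1000)]   # hypothetical sample

n = len(data)
mu_hat = sum(data) / n                                 # sample mean
sigma2_hat = sum((x - mu_hat) ** 2 for x in data) / n  # MLE (biased)
s2 = sum((x - mu_hat) ** 2 for x in data) / (n - 1)    # Bessel-corrected

print(mu_hat, sigma2_hat, s2)
print(statistics.variance(data))   # library version of s^2, matches
```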
Confidence intervals

By Cochran's theorem, for normal distributions the sample mean \(\hat{\mu}\) and the sample variance \(s^2\) are independent, which means there can be no gain in considering their joint distribution. There is also a converse theorem: if in a sample the sample mean and sample variance are independent, then the sample must have come from the normal distribution. The independence between \(\hat{\mu}\) and \(s\) can be employed to construct the so-called t-statistic:

\[ t = \frac{\hat{\mu} - \mu}{s/\sqrt{n}} . \]

This quantity \(t\) has the Student's t-distribution with \((n-1)\) degrees of freedom, and it is an ancillary statistic (independent of the value of the parameters). Inverting the distribution of this t-statistic will allow us to construct the confidence interval for \(\mu\);[57] similarly, inverting the \(\chi^2\) distribution of the statistic \(s^2\) will give us the confidence interval for \(\sigma^2\):[58]

\[ \hat{\mu} \pm t_{n-1,\,1-\alpha/2}\,\frac{s}{\sqrt{n}}, \qquad \left[\frac{(n-1)s^2}{\chi^2_{n-1,\,1-\alpha/2}},\; \frac{(n-1)s^2}{\chi^2_{n-1,\,\alpha/2}}\right], \]

where \(t_{k,p}\) and \(\chi^2_{k,p}\) are the \(p\)th quantiles of the \(t\)- and \(\chi^2\)-distributions respectively. These confidence intervals are of the confidence level \(1 - \alpha\), meaning that the true values \(\mu\) and \(\sigma^2\) fall outside of these intervals with probability (or significance level) \(\alpha\). In practice people usually take \(\alpha = 5\%\), resulting in the 95% confidence intervals. The confidence interval for \(\sigma\) can be found by taking the square root of the interval bounds for \(\sigma^2\).
Approximate formulas can be derived from the asymptotic distributions of \(\hat{\mu}\) and \(s^2\):

\[ \hat{\mu} \pm |z_{\alpha/2}|\,\frac{s}{\sqrt{n}}, \qquad s^2 \pm |z_{\alpha/2}|\,\sqrt{\frac{2}{n}}\, s^2 . \]

The approximate formulas become valid for large values of \(n\), and are more convenient for the manual calculation since the standard normal quantiles \(z_{\alpha/2}\) do not depend on \(n\). In particular, the most popular value of \(\alpha = 5\%\) results in \(|z_{0.025}| = 1.96\).
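A short Python sketch of the large-\(n\) approximate intervals above (simulated data; exact t- and \(\chi^2\)-based intervals would need those quantiles, which the standard library does not provide):

```python
import math
import random
from statistics import NormalDist

random.seed(2)
data = [random.gauss(5, 2) for _ in range(500)]   # hypothetical sample
n = len(data)
mean = sum(data) / n
s2 = sum((x - mean) ** 2 for x in data) / (n - 1)

alpha = 0.05
z = NormalDist().inv_cdf(1 - alpha / 2)           # |z_{alpha/2}| = 1.96

mu_ci = (mean - z * math.sqrt(s2 / n), mean + z * math.sqrt(s2 / n))
var_ci = (s2 - z * math.sqrt(2 / n) * s2, s2 + z * math.sqrt(2 / n) * s2)
print(mu_ci, var_ci)
```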
Normality tests

Normality tests assess the likelihood that the given data set \(\{x_1, \ldots, x_n\}\) comes from a normal distribution. Typically the null hypothesis \(H_0\) is that the observations are distributed normally with unspecified mean \(\mu\) and variance \(\sigma^2\), versus the alternative \(H_a\) that the distribution is arbitrary. Many tests (over 40) have been devised for this problem. The more prominent of them are outlined below:
Diagnostic plots are more intuitively appealing but subjective at the same time, as they rely on informal human judgement to accept or reject the null hypothesis.

- Q–Q plot, also known as normal probability plot or rankit plot: a plot of the sorted values from the data set against the expected values of the corresponding quantiles from the standard normal distribution. That is, it is a plot of points of the form \((\Phi^{-1}(p_k), x_{(k)})\), where the plotting points \(p_k\) are equal to \(p_k = (k - \alpha)/(n + 1 - 2\alpha)\) and \(\alpha\) is an adjustment constant, which can be anything between 0 and 1. If the null hypothesis is true, the plotted points should approximately lie on a straight line. (A code sketch follows this list.)
- P–P plot: similar to the Q–Q plot, but used much less frequently. This method consists of plotting the points \((\Phi(z_{(k)}), p_k)\), where \(z_{(k)} = (x_{(k)} - \hat{\mu})/\hat{\sigma}\) are the standardized order statistics. For normally distributed data this plot should lie on a straight line between (0, 0) and (1, 1).
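A minimal Python sketch of the Q–Q plot points described above (the choice \(\alpha = 0.375\) is one common plotting-position convention, an assumption here rather than the article's):

```python
from statistics import NormalDist

def qq_points(data, alpha=0.375):
    """(theoretical quantile, observed value) pairs for a normal Q-Q plot."""
    xs = sorted(data)
    n = len(xs)
    nd = NormalDist()
    return [
        (nd.inv_cdf((k - alpha) / (n + 1 - 2 * alpha)), x)
        for k, x in enumerate(xs, start=1)
    ]

pts = qq_points([2.1, 1.9, 3.2, 2.8, 2.5, 2.2, 2.7])
for q, x in pts:
    print(f"{q:+.3f}  {x:.1f}")   # roughly linear for normal-looking data
```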
Goodness-of-fit tests:

Moment-based tests:
- D'Agostino's K-squared test
- Jarque–Bera test
- Shapiro–Wilk test: This is based on the line in the Q–Q plot having the slope of \(\sigma\). The test compares the least squares estimate of that slope with the value of the sample variance, and rejects the null hypothesis if these two quantities differ significantly.

Tests based on the empirical distribution function:
- Anderson–Darling test
- Lilliefors test (an adaptation of the Kolmogorov–Smirnov test)
Bayesian analysis of the normal distribution

Bayesian analysis of normally distributed data is complicated by the many different possibilities that may be considered:

- Either the mean, or the variance, or neither, may be considered a fixed quantity.
- When the variance is unknown, analysis may be done directly in terms of the variance, or in terms of the precision, the reciprocal of the variance. The reason for expressing the formulas in terms of precision is that the analysis of most cases is simplified.
- Both univariate and multivariate cases need to be considered.
- Either conjugate or improper prior distributions may be placed on the unknown variables.

An additional set of cases occurs in Bayesian linear regression, where in the basic model the data is assumed to be normally distributed, and normal priors are placed on the regression coefficients. The resulting analysis is similar to the basic cases of independent identically distributed data. The formulas for the non-linear-regression cases are summarized in the conjugate prior article.
Sum of two quadratics

The following auxiliary formula is useful for simplifying the posterior update equations, which otherwise become fairly tedious.

\[ a(x-y)^2 + b(x-z)^2 = (a+b)\left(x - \frac{ay + bz}{a+b}\right)^2 + \frac{ab}{a+b}(y-z)^2 . \]

This equation rewrites the sum of two quadratics in \(x\) by expanding the squares, grouping the terms in \(x\), and completing the square. Note the following about the complex constant factors attached to some of the terms:

- The factor \(\frac{ay + bz}{a+b}\) has the form of a weighted average of \(y\) and \(z\).
- \(\frac{ab}{a+b} = \frac{1}{\frac{1}{a} + \frac{1}{b}} = \left(a^{-1} + b^{-1}\right)^{-1}\). This shows that this factor can be thought of as resulting from a situation where the reciprocals of quantities \(a\) and \(b\) add directly, so to combine \(a\) and \(b\) themselves, it is necessary to reciprocate, add, and reciprocate the result again to get back into the original units. This is exactly the sort of operation performed by the harmonic mean, so it is not surprising that \(\frac{ab}{a+b}\) is one-half the harmonic mean of \(a\) and \(b\).
A similar formula can be written for the sum of two vector quadratics: If \(\mathbf{x}\), \(\mathbf{y}\), \(\mathbf{z}\) are vectors of length \(k\), and \(A\) and \(B\) are symmetric, invertible matrices of size \(k \times k\), then

\[ (\mathbf{y} - \mathbf{x})' A (\mathbf{y} - \mathbf{x}) + (\mathbf{x} - \mathbf{z})' B (\mathbf{x} - \mathbf{z}) = (\mathbf{x} - \mathbf{c})'(A+B)(\mathbf{x} - \mathbf{c}) + (\mathbf{y} - \mathbf{z})'\left(A^{-1} + B^{-1}\right)^{-1}(\mathbf{y} - \mathbf{z}), \]

where

\[ \mathbf{c} = (A+B)^{-1}(A\mathbf{y} + B\mathbf{z}) . \]

The form \(\mathbf{x}' A \mathbf{x}\) is called a quadratic form and is a scalar:

\[ \mathbf{x}' A \mathbf{x} = \sum_{i,j} a_{ij} x_i x_j . \]

In other words, it sums up all possible combinations of products of pairs of elements from \(\mathbf{x}\), with a separate coefficient for each. In addition, since \(x_i x_j = x_j x_i\), only the sum \(a_{ij} + a_{ji}\) matters for any off-diagonal elements of \(A\), and there is no loss of generality in assuming that \(A\) is symmetric. Furthermore, if \(A\) is symmetric, then the form \(\mathbf{x}' A \mathbf{y} = \mathbf{y}' A \mathbf{x}\).
Sum of differences from the mean

Another useful formula is as follows:

\[ \sum_{i=1}^{n}(x_i - \mu)^2 = \sum_{i=1}^{n}(x_i - \bar{x})^2 + n(\bar{x} - \mu)^2 , \]

where \(\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i\).
With known variance

For a set of i.i.d. normally distributed data points \(\mathbf{X}\) of size \(n\) where each individual point \(x\) follows \(x \sim \mathcal{N}(\mu, \sigma^2)\) with known variance \(\sigma^2\), the conjugate prior distribution is also normally distributed. This can be shown more easily by rewriting the variance as the precision, i.e. using \(\tau = 1/\sigma^2\). Then if \(x \sim \mathcal{N}(\mu, 1/\tau)\) and \(\mu \sim \mathcal{N}(\mu_0, 1/\tau_0)\), we proceed as follows.

First, the likelihood function is (using the formula above for the sum of differences from the mean):

\[ p(\mathbf{X} \mid \mu, \tau) = \prod_{i=1}^{n}\sqrt{\frac{\tau}{2\pi}}\, e^{-\frac{\tau}{2}(x_i - \mu)^2} = \left(\frac{\tau}{2\pi}\right)^{n/2} \exp\!\left[-\frac{\tau}{2}\left(\sum_{i=1}^{n}(x_i - \bar{x})^2 + n(\bar{x} - \mu)^2\right)\right]. \]

Then, we proceed as follows. [The intermediate algebra was not extracted.] In the above derivation, we used the formula above for the sum of two quadratics and eliminated all constant factors not involving \(\mu\). The result is the kernel of a normal distribution, with mean \(\frac{n\tau\bar{x} + \tau_0\mu_0}{n\tau + \tau_0}\) and precision \(n\tau + \tau_0\), i.e.

\[ p(\mu \mid \mathbf{X}) \sim \mathcal{N}\!\left(\frac{n\tau\bar{x} + \tau_0\mu_0}{n\tau + \tau_0},\; \frac{1}{n\tau + \tau_0}\right). \]

This can be written as a set of Bayesian update equations for the posterior parameters in terms of the prior parameters:

\[ \tau_0' = \tau_0 + n\tau, \qquad \mu_0' = \frac{n\tau\bar{x} + \tau_0\mu_0}{n\tau + \tau_0} . \]

That is, to combine \(n\) data points with total precision of \(n\tau\) (or equivalently, total variance of \(\sigma^2/n\)) and mean of values \(\bar{x}\), derive a new total precision simply by adding the total precision of the data to the prior total precision, and form a new mean through a precision-weighted average, i.e. a weighted average of the data mean and the prior mean, each weighted by the associated total precision. This makes logical sense if the precision is thought of as indicating the certainty of the observations: In the distribution of the posterior mean, each of the input components is weighted by its certainty, and the certainty of this distribution is the sum of the individual certainties. (For the intuition of this, compare the expression "the whole is (or is not) greater than the sum of its parts". In addition, consider that the knowledge of the posterior comes from a combination of the knowledge of the prior and likelihood, so it makes sense that we are more certain of it than of either of its components.)

The above formula reveals why it is more convenient to do Bayesian analysis of conjugate priors for the normal distribution in terms of the precision. The posterior precision is simply the sum of the prior and likelihood precisions, and the posterior mean is computed through a precision-weighted average, as described above. The same formulas can be written in terms of variance by reciprocating all the precisions, yielding the more ugly formulas

\[ {\sigma_0^2}' = \frac{1}{\frac{n}{\sigma^2} + \frac{1}{\sigma_0^2}}, \qquad \mu_0' = \frac{\frac{n\bar{x}}{\sigma^2} + \frac{\mu_0}{\sigma_0^2}}{\frac{n}{\sigma^2} + \frac{1}{\sigma_0^2}} . \]
With known mean

For a set of i.i.d. normally distributed data points \(\mathbf{X}\) of size \(n\) where each individual point \(x\) follows \(x \sim \mathcal{N}(\mu, \sigma^2)\) with known mean \(\mu\), the conjugate prior of the variance has an inverse gamma distribution or a scaled inverse chi-squared distribution. The two are equivalent except for having different parameterizations. Although the inverse gamma is more commonly used, we use the scaled inverse chi-squared for the sake of convenience. The prior for \(\sigma^2\) is as follows:

\[ \sigma^2 \sim \operatorname{Scale-inv-}\chi^2(\nu_0, \sigma_0^2) . \]

The likelihood function from above, written in terms of the variance, is:

\[ p(\mathbf{X} \mid \mu, \sigma^2) = \left(\frac{1}{2\pi\sigma^2}\right)^{n/2} \exp\!\left[-\frac{S}{2\sigma^2}\right], \qquad S = \sum_{i=1}^{n}(x_i - \mu)^2 . \]

Then: [the intermediate algebra was not extracted]. The above is also a scaled inverse chi-squared distribution where

\[ \nu_0' = \nu_0 + n, \qquad \nu_0'\,{\sigma_0^2}' = \nu_0\sigma_0^2 + S, \]

or equivalently

\[ {\sigma_0^2}' = \frac{\nu_0\sigma_0^2 + S}{\nu_0 + n} . \]

Reparameterizing in terms of an inverse gamma distribution, the result is:

\[ \sigma^2 \mid \mathbf{X} \sim \operatorname{IG}\!\left(\alpha_0 + \frac{n}{2},\; \beta_0 + \frac{S}{2}\right). \]
With unknown mean and unknown variance

For a set of i.i.d. normally distributed data points \(\mathbf{X}\) of size \(n\) where each individual point \(x\) follows \(x \sim \mathcal{N}(\mu, \sigma^2)\) with unknown mean \(\mu\) and unknown variance \(\sigma^2\), a combined (multivariate) conjugate prior is placed over the mean and variance, consisting of a normal-inverse-gamma distribution. Logically, this originates as follows:

- From the analysis of the case with unknown mean but known variance, we see that the update equations involve sufficient statistics computed from the data consisting of the mean of the data points and the total variance of the data points, computed in turn from the known variance divided by the number of data points.
- From the analysis of the case with unknown variance but known mean, we see that the update equations involve sufficient statistics over the data consisting of the number of data points and sum of squared deviations.
Keep in mind that the posterior update values serve as the prior distribution when further data is handled. Thus, we should logically think of our priors in terms of the sufficient statistics just described, with the same semantics kept in mind as much as possible.
To handle the case where both mean and variance are unknown, we could place independent priors over the mean and variance, with fixed estimates of the average mean, total variance, number of data points used to compute the variance prior, and sum of squared deviations. Note however that in reality, the total variance of the mean depends on the unknown variance, and the sum of squared deviations that goes into the variance prior (appears to) depend on the unknown mean. In practice, the latter dependence is relatively unimportant: Shifting the actual mean shifts the generated points by an equal amount, and on average the squared deviations will remain the same. This is not the case, however, with the total variance of the mean: As the unknown variance increases, the total variance of the mean will increase proportionately, and we would like to capture this dependence.
This suggests that we create a conditional prior of the mean on the unknown variance, with a hyperparameter specifying the mean of the pseudo-observations associated with the prior, and another parameter specifying the number of pseudo-observations. This number serves as a scaling parameter on the variance, making it possible to control the overall variance of the mean relative to the actual variance parameter. The prior for the variance also has two hyperparameters, one specifying the sum of squared deviations of the pseudo-observations associated with the prior, and another specifying once again the number of pseudo-observations. Each of the priors has a hyperparameter specifying the number of pseudo-observations, and in each case this controls the relative variance of that prior. These are given as two separate hyperparameters so that the variance (aka the confidence) of the two priors can be controlled separately.
This leads immediately to the normal-inverse-gamma distribution, which is the product of the two distributions just defined, with conjugate priors used (an inverse gamma distribution over the variance, and a normal distribution over the mean, conditional on the variance) and with the same four parameters just defined.

The priors are normally defined as follows:

\[ p(\mu \mid \sigma^2;\, \mu_0, n_0) \sim \mathcal{N}\!\left(\mu_0, \frac{\sigma^2}{n_0}\right), \qquad p(\sigma^2;\, \nu_0, \sigma_0^2) \sim \operatorname{Scale-inv-}\chi^2(\nu_0, \sigma_0^2) . \]

The update equations can be derived, and look as follows:

\[ \mu_0' = \frac{n_0\mu_0 + n\bar{x}}{n_0 + n}, \qquad n_0' = n_0 + n, \qquad \nu_0' = \nu_0 + n, \]
\[ \nu_0'\,{\sigma_0^2}' = \nu_0\sigma_0^2 + \sum_{i=1}^{n}(x_i - \bar{x})^2 + \frac{n_0 n}{n_0 + n}(\bar{x} - \mu_0)^2 . \]

The respective numbers of pseudo-observations add the number of actual observations to them. The new mean hyperparameter is once again a weighted average, this time weighted by the relative numbers of observations. Finally, the update for \(\nu_0'\,{\sigma_0^2}'\) is similar to the case with known mean, but in this case the sum of squared deviations is taken with respect to the observed data mean rather than the true mean, and as a result a new interaction term needs to be added to take care of the additional error source stemming from the deviation between prior and data mean.
Occurrence and applications

The occurrence of normal distribution in practical problems can be loosely classified into four categories:

1. Exactly normal distributions;
2. Approximately normal laws, for example when such approximation is justified by the central limit theorem;
3. Distributions modeled as normal, the normal distribution being the distribution with maximum entropy for a given mean and variance; and
4. Regression problems, the normal distribution being found after systematic effects have been modeled sufficiently well.

Exact normality

A normal distribution occurs in some physical theories; for example, the ground state of a quantum harmonic oscillator has the Gaussian distribution.
Approximate normality

Approximately normal distributions occur in many situations, as explained by the central limit theorem. When the outcome is produced by many small effects acting additively and independently, its distribution will be close to normal. The normal approximation will not be valid if the effects act multiplicatively (instead of additively), or if there is a single external influence that has a considerably larger magnitude than the rest of the effects.

- In counting problems, where the central limit theorem includes a discrete-to-continuum approximation and where infinitely divisible and decomposable distributions are involved, such as binomial random variables, associated with binary response variables, and Poisson random variables, associated with rare events.
- Thermal radiation has a Bose–Einstein distribution on very short time scales, and a normal distribution on longer timescales due to the central limit theorem.
Assumed normality

[Figure: histogram of sepal widths for Iris versicolor from Fisher's Iris flower data set, with superimposed best-fitting normal distribution.]

> I can only recognize the occurrence of the normal curve – the Laplacian curve of errors – as a very abnormal phenomenon. It is roughly approximated to in certain distributions; for this reason, and on account for its beautiful simplicity, we may, perhaps, use it as a first approximation, particularly in theoretical investigations.

There are statistical methods to empirically test that assumption; see the above Normality tests section.
In biology, the logarithm of various variables tends to have a normal distribution, that is, they tend to have a log-normal distribution (after separation on male/female subpopulations), with examples including:

- Measures of size of living tissue (length, height, skin area, weight);[62]
- The length of inert appendages (hair, claws, nails, teeth) of biological specimens, in the direction of growth; presumably the thickness of tree bark also falls under this category;
- Certain physiological measurements, such as blood pressure of adult humans.
In finance, in particular the Black–Scholes model, changes in the logarithm of exchange rates, price indices, and stock market indices are assumed normal (these variables behave like compound interest, not like simple interest, and so are multiplicative). Some mathematicians such as Benoit Mandelbrot have argued that log-Lévy distributions, which possess heavy tails, would be a more appropriate model, in particular for the analysis of stock market crashes. The use of the assumption of normal distribution occurring in financial models has also been criticized by Nassim Nicholas Taleb in his works.

Measurement errors in physical experiments are often modeled by a normal distribution. This use of a normal distribution does not imply that one is assuming the measurement errors are normally distributed; rather, using the normal distribution produces the most conservative predictions possible given only knowledge about the mean and variance of the errors.[63]
In standardized testing, results can be made to have a normal distribution by either selecting the number and difficulty of questions (as in the IQ test) or transforming the raw test scores into output scores by fitting them to the normal distribution. For example, the SAT's traditional range of 200–800 is based on a normal distribution with a mean of 500 and a standard deviation of 100.

[Figure: fitted cumulative normal distribution to October rainfalls; see distribution fitting.]

Many scores are derived from the normal distribution, including percentile ranks (percentiles or quantiles), normal curve equivalents, stanines, z-scores, and T-scores. Additionally, some behavioral statistical procedures assume that scores are normally distributed; for example, t-tests and ANOVAs. Bell curve grading assigns relative grades based on a normal distribution of scores.

In hydrology the distribution of long duration river discharge or rainfall, e.g. monthly and yearly totals, is often thought to be practically normal according to the central limit theorem.[64] The plot on the right illustrates an example of fitting the normal distribution to ranked October rainfalls showing the 90% confidence belt based on the binomial distribution. The rainfall data are represented by plotting positions as part of the cumulative frequency analysis.
Methodological problems and peer review

John Ioannidis argued that using normally distributed standard deviations as standards for validating research findings leaves falsifiable predictions about phenomena that are not normally distributed untested. This includes, for example, phenomena that only appear when all necessary conditions are present and one cannot be a substitute for another in an addition-like way, and phenomena that are not randomly distributed. Ioannidis argues that standard-deviation-centered validation gives a false appearance of validity to hypotheses and theories where some but not all falsifiable predictions are normally distributed, since the portion of falsifiable predictions that there is evidence against may be, and in some cases is, in the non-normally distributed parts of the range of falsifiable predictions. It also baselessly dismisses hypotheses for which none of the falsifiable predictions are normally distributed, as if they were unfalsifiable, when in fact they do make falsifiable predictions. Ioannidis argues that many cases of mutually exclusive theories being accepted as validated by research journals are caused by failure of the journals to take in empirical falsifications of non-normally distributed predictions, not because mutually exclusive theories are true (which they cannot be, although two mutually exclusive theories can both be wrong and a third one correct).[65]
Computational methods

Generating values from normal distribution

The bean machine, a device invented by Francis Galton, can be called the first generator of normal random variables. This machine consists of a vertical board with interleaved rows of pins. Small balls are dropped from the top and then bounce randomly left or right as they hit the pins. The balls are collected into bins at the bottom and settle down into a pattern resembling the Gaussian curve.
In computer simulations, especially in applications of the Monte-Carlo method, it is often desirable to generate values that are normally distributed. The algorithms listed below all generate the standard normal deviates, since a \(\mathcal{N}(\mu, \sigma^2)\) can be generated as \(X = \mu + \sigma Z\), where \(Z\) is standard normal. All these algorithms rely on the availability of a random number generator \(U\) capable of producing uniform random variates.
The most straightforward method is based on the probability integral transform property: if \(U\) is distributed uniformly on (0,1), then \(\Phi^{-1}(U)\) will have the standard normal distribution. The drawback of this method is that it relies on calculation of the probit function \(\Phi^{-1}\), which cannot be done analytically. Some approximate methods are described in Hart (1968) and in the erf article. Wichura gives a fast algorithm for computing this function to 16 decimal places,[66] which is used by R to compute random variates of the normal distribution.
An easy-to-program approximate approach that relies on the central limit theorem is as follows: generate 12 uniform U(0,1) deviates, add them all up, and subtract 6; the resulting random variable will have approximately standard normal distribution. In truth, the distribution will be Irwin–Hall, which is a 12-section eleventh-order polynomial approximation to the normal distribution. This random deviate will have a limited range of (−6, 6).[67] Note that in a true normal distribution, only 0.00034% of all samples will fall outside ±6σ.
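A direct Python sketch of this sum-of-12-uniforms generator:

```python
import random

def approx_std_normal(rng=random.random):
    """CLT-based approximation: sum of 12 U(0,1) deviates minus 6."""
    return sum(rng() for _ in range(12)) - 6.0

random.seed(3)
print([round(approx_std_normal(), 3) for _ in range(5)])
```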
The Box–Muller method uses two independent random numbers \(U\) and \(V\) distributed uniformly on (0,1). Then the two random variables

\[ X = \sqrt{-2\ln U}\,\cos(2\pi V), \qquad Y = \sqrt{-2\ln U}\,\sin(2\pi V) \]

will both have the standard normal distribution, and will be independent. This formulation arises because for a bivariate normal random vector \((X, Y)\) the squared norm \(X^2 + Y^2\) will have the chi-squared distribution with two degrees of freedom, which is an easily generated exponential random variable corresponding to the quantity \(-2\ln(U)\) in these equations; and the angle is distributed uniformly around the circle, chosen by the random variable \(V\).
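A minimal Python sketch of the Box–Muller transform (mapping the uniform draw to (0, 1] avoids log(0), a defensive choice not discussed in the text):

```python
import math
import random

def box_muller(rng=random.random):
    """One Box-Muller draw: two independent standard normal deviates."""
    u = 1.0 - rng()                       # in (0, 1], so log(u) is finite
    v = rng()
    r = math.sqrt(-2.0 * math.log(u))     # radius: sqrt of a chi^2(2) draw
    theta = 2.0 * math.pi * v             # uniform angle
    return r * math.cos(theta), r * math.sin(theta)

random.seed(4)
print(box_muller())
```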
The Marsaglia polar method is a modification of the Box–Muller method which does not require computation of the sine and cosine functions. In this method, \(U\) and \(V\) are drawn from the uniform (−1,1) distribution, and then \(S = U^2 + V^2\) is computed. If \(S\) is greater or equal to 1, then the method starts over, otherwise the two quantities

\[ X = U\sqrt{\frac{-2\ln S}{S}}, \qquad Y = V\sqrt{\frac{-2\ln S}{S}} \]

are returned. Again, \(X\) and \(Y\) are independent, standard normal random variables.
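A Python sketch of the polar method as just described:

```python
import math
import random

def marsaglia_polar(rng=random.random):
    """Marsaglia polar method: rejection version of Box-Muller, no trig."""
    while True:
        u = 2.0 * rng() - 1.0
        v = 2.0 * rng() - 1.0
        s = u * u + v * v
        if 0.0 < s < 1.0:                 # reject points outside the unit disk
            factor = math.sqrt(-2.0 * math.log(s) / s)
            return u * factor, v * factor

random.seed(5)
print(marsaglia_polar())
```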
The Ratio method[68] is a rejection method. The algorithm proceeds as follows:

1. Generate two independent uniform deviates \(U\) and \(V\);
2. Compute \(X = \sqrt{8/e}\,(V - 0.5)/U\);
3. Optional: if \(X^2 \le 5 - 4e^{1/4}U\) then accept \(X\) and terminate algorithm;
4. Optional: if \(X^2 \ge 4e^{-1.35}/U + 1.4\) then reject \(X\) and start over from step 1;
5. If \(X^2 \le -4\ln U\) then accept \(X\); otherwise start over the algorithm.
The two optional steps allow the evaluation of the logarithm in the last step to be avoided in most cases. These steps can be greatly improved[69] so that the logarithm is rarely evaluated. A code sketch of the basic algorithm follows.
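A Python sketch of the five steps above, including the optional squeeze tests (the guard against \(U = 0\) is a defensive assumption, not part of the listed steps):

```python
import math
import random

SQRT_8_OVER_E = math.sqrt(8.0 / math.e)

def ratio_method(rng=random.random):
    """Ratio (rejection) method as listed above; one standard normal deviate."""
    while True:
        u = rng()
        if u == 0.0:
            continue                                   # avoid division by zero
        v = rng()
        x = SQRT_8_OVER_E * (v - 0.5) / u
        x2 = x * x
        if x2 <= 5.0 - 4.0 * math.exp(0.25) * u:       # optional quick accept
            return x
        if x2 >= 4.0 * math.exp(-1.35) / u + 1.4:      # optional quick reject
            continue
        if x2 <= -4.0 * math.log(u):                   # exact test
            return x

random.seed(6)
print([round(ratio_method(), 3) for _ in range(3)])
```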
The ziggurat algorithm[70] is faster than the Box–Muller transform and still exact. In about 97% of all cases it uses only two random numbers, one random integer and one random uniform, one multiplication and an if-test. Only in 3% of the cases, where the combination of those two falls outside the "core of the ziggurat" (a kind of rejection sampling using logarithms), do exponentials and more uniform random numbers have to be employed.
Integer arithmetic can be used to sample from the standard normal distribution.[71][72] This method is exact in the sense that it satisfies the conditions of ideal approximation;[73] i.e., it is equivalent to sampling a real number from the standard normal distribution and rounding this to the nearest representable floating point number.
There is also some investigation[74] into the connection between the fast Hadamard transform and the normal distribution, since the transform employs just addition and subtraction, and by the central limit theorem random numbers from almost any distribution will be transformed into the normal distribution. In this regard a series of Hadamard transforms can be combined with random permutations to turn arbitrary data sets into normally distributed data.
Numerical approximations for the normal cumulative distribution function and normal quantile function

The standard normal cumulative distribution function is widely used in scientific and statistical computing. The values Φ(x) may be approximated very accurately by a variety of methods, such as numerical integration, Taylor series, asymptotic series and continued fractions. Different approximations are used depending on the desired level of accuracy.
Zelen & Severo (1964) give the approximation for Φ(x) for x > 0 with the absolute error |Δ(x)| < 7.5·10⁻⁞ (algorithm 26.2.17):

Φ(x) = 1 − φ(x)(b₁t + b₂t² + b₃t³ + b₄t⁎ + b₅t⁔) + Δ(x),  with t = 1/(1 + b₀x),

where φ(x) is the standard normal probability density function, and b₀ = 0.2316419, b₁ = 0.319381530, b₂ = −0.356563782, b₃ = 1.781477937, b₄ = −1.821255978, b₅ = 1.330274429.
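A sketch of this approximation in Python (coefficients as listed; negative arguments are handled through the symmetry Φ(−x) = 1 − Φ(x)):

```python
import math

B0, B1, B2, B3, B4, B5 = (0.2316419, 0.319381530, -0.356563782,
                          1.781477937, -1.821255978, 1.330274429)

def std_normal_cdf(x):
    """Zelen & Severo (26.2.17) approximation; |error| < 7.5e-8 for x > 0."""
    if x < 0.0:
        return 1.0 - std_normal_cdf(-x)              # symmetry for x < 0
    t = 1.0 / (1.0 + B0 * x)
    pdf = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    poly = t * (B1 + t * (B2 + t * (B3 + t * (B4 + t * B5))))
    return 1.0 - pdf * poly
```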
Hart (1968) lists dozens of approximations, by means of rational functions with or without exponentials, for the erfc() function, where erfc(x) = 1 − erf(x). His algorithms vary in the degree of complexity and the resulting precision, with a maximum absolute precision of 24 digits. An algorithm by West (2009) combines Hart's algorithm 5666 with a continued fraction approximation in the tail to provide a fast computation algorithm with 16-digit precision. Cody (1969), after noting that the Hart (1968) solution is not suited for erf, gave a solution for both erf and erfc, with a maximal relative error bound, via rational Chebyshev approximation.
Marsaglia (2004) suggested a simple algorithm[note 1] based on the Taylor series expansion

Φ(x) = 1/2 + φ(x)·(x + x³/3 + x⁔/(3·5) + x⁷/(3·5·7) + ...)

for calculating Φ(x) with arbitrary precision. Its drawback is a comparatively slow calculation time: for example, over 300 iterations are needed to calculate the function with 16 digits of precision when x = 10.
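A sketch of this series in Python (double precision only; the stopping rule, ending when a term no longer changes the running sum at the requested tolerance, is a detail of this sketch):

```python
import math

def std_normal_cdf_series(x, tol=1e-17):
    """Taylor-series evaluation of Phi(x); increasingly slow for large |x|."""
    term = x
    total = x
    n = 0
    while abs(term) > tol * max(abs(total), 1.0):
        n += 1
        term *= x * x / (2 * n + 1)    # next odd-power term of the series
        total += term
    return 0.5 + total * math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
```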
The GNU Scientific Library calculates values of the standard normal cumulative distribution function using Hart's algorithms and approximations with Chebyshev polynomials.
Dia (2023) proposes an approximation of the complementary cumulative distribution function 1 − Φ(x) with a very small maximum relative error, using separate explicit expressions over two complementary ranges of the argument.
Shore (1982) introduced simple approximations that may be incorporated in stochastic optimization models of engineering and operations research, like reliability engineering and inventory analysis. Denoting p = Φ(z), the simplest approximation for the quantile function is

z = Φ⁻¹(p) ≈ 5.5556·[1 − ((1 − p)/p)^0.1186],  p ≄ 1/2.

This approximation delivers for z a maximum absolute error of 0.026 (for 0.5 ≀ p ≀ 0.9999, corresponding to 0 ≀ z ≀ 3.719). For p < 1/2, replace p by 1 − p and change the sign. Another approximation, somewhat less accurate, is the single-parameter approximation

z ≈ −0.4115·{(1 − p)/p + ln[(1 − p)/p] − 1},  p ≄ 1/2.

The latter had served to derive a simple approximation for the loss integral of the normal distribution, defined by

L(z) = ∫ from z to ∞ of (u − z)φ(u) du = φ(z) − z[1 − Φ(z)].

This approximation is particularly accurate for the right far-tail (maximum error of 10⁻³ for z ≄ 1.4). Highly accurate approximations for the cumulative distribution function, based on Response Modeling Methodology (RMM; Shore, 2011, 2012), are shown in Shore (2005).
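A sketch of the simplest quantile approximation in Python (formula and error bounds as stated above):

```python
def shore_quantile(p):
    """Shore (1982) approximation to the standard normal quantile.
    Maximum absolute error about 0.026 for 0.5 <= p <= 0.9999."""
    if p < 0.5:
        return -shore_quantile(1.0 - p)   # symmetry for the lower half
    return 5.5556 * (1.0 - ((1.0 - p) / p) ** 0.1186)
```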
Some more approximations can be found at Error function § Approximation with elementary functions. In particular, a small relative error on the whole domain for the cumulative distribution function Φ, and for the quantile function Φ⁻¹ as well, is achieved via an explicitly invertible formula by Sergei Winitzki in 2008.
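As an illustration (a sketch only: the article names the 2008 formula, but the exact form used here, Winitzki's widely quoted erf approximation with a = 8(π − 3)/(3π(4 − π)) ≈ 0.140, is an assumption of this example):

```python
import math

A = 8.0 * (math.pi - 3.0) / (3.0 * math.pi * (4.0 - math.pi))  # ~0.140012

def erf_approx(x):
    """Explicitly invertible approximation to erf(x)."""
    x2 = x * x
    y = math.sqrt(1.0 - math.exp(-x2 * (4.0 / math.pi + A * x2) / (1.0 + A * x2)))
    return math.copysign(y, x)

def norm_cdf_approx(x):
    return 0.5 * (1.0 + erf_approx(x / math.sqrt(2.0)))
```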
History

Development

Some authors[75][76] attribute the discovery of the normal distribution to de Moivre, who in 1738[note 2] published in the second edition of his The Doctrine of Chances the study of the coefficients in the binomial expansion of (a + b)ⁿ. De Moivre proved that the middle term in this expansion has the approximate magnitude of 2ⁿ·√(2/(πn)), and that "If m or Âœn be a Quantity infinitely great, then the Logarithm of the Ratio, which a Term distant from the middle by the Interval ℓ, has to the middle Term, is −2ℓℓ/n."[77] Although this theorem can be interpreted as the first obscure expression for the normal probability law, Stigler points out that de Moivre himself did not interpret his results as anything more than an approximate rule for the binomial coefficients, and in particular de Moivre lacked the concept of the probability density function.[78]
In 1809, Carl Friedrich Gauss showed that the normal distribution provides a way to rationalize the method of least squares.

In 1823 Gauss published his monograph Theoria combinationis observationum erroribus minimis obnoxiae, where among other things he introduces several important statistical concepts, such as the method of least squares, the method of maximum likelihood, and the normal distribution. Gauss used M, M′, M″, ... to denote the measurements of some unknown quantity V, and sought the most probable estimator of that quantity: the one that maximizes the probability φ(M − V) · φ(M′ − V) · φ(M″ − V) · ... of obtaining the observed experimental results. In his notation, φΔ is the probability density function of measurement errors of magnitude Δ. Not knowing what the function φ is, Gauss requires that his method should reduce to the well-known answer: the arithmetic mean of the measured values.[note 3] Starting from these principles, Gauss demonstrates that the only law that rationalizes the choice of arithmetic mean as an estimator of the location parameter is the normal law of errors:[79]

φ(Δ) = (h/√π)·e^(−h²Δ²),

where h is "the measure of the precision of the observations". Using this normal law as a generic model for errors in the experiments, Gauss formulates what is now known as the non-linear weighted least squares method.[80]
Pierre-Simon Laplace proved the central limit theorem in 1810, consolidating the importance of the normal distribution in statistics.

Although Gauss was the first to suggest the normal distribution law, Laplace made significant contributions.[note 4] It was Laplace who first posed the problem of aggregating several observations in 1774,[81] although his own solution led to the Laplacian distribution. It was Laplace who first calculated the value of the integral ∫ e^(−t²) dt = √π in 1782, providing the normalization constant for the normal distribution.[82] For this accomplishment, Gauss acknowledged the priority of Laplace.[83] Finally, it was Laplace who in 1810 proved and presented to the academy the fundamental central limit theorem, which emphasized the theoretical importance of the normal distribution.[84]
It is of interest to note that in 1809 the Irish-American mathematician Robert Adrain published two insightful but flawed derivations of the normal probability law, simultaneously and independently of Gauss.[85] His works remained largely unnoticed by the scientific community until 1871, when they were exhumed by Abbe.[86]
In the middle of the 19th century Maxwell demonstrated that the normal distribution is not just a convenient mathematical tool, but may also occur in natural phenomena:[59] the number of particles whose velocity, resolved in a certain direction, lies between x and x + dx is

N·(1/(α√π))·e^(−x²/α²) dx.
Naming

Today, the concept is usually known in English as the normal distribution or Gaussian distribution. Other, less common names include Gauss distribution, Laplace–Gauss distribution, the law of error, the law of facility of errors, Laplace's second law, and Gaussian law.

Gauss himself apparently coined the term with reference to the "normal equations" involved in its applications, with normal having its technical meaning of orthogonal rather than usual.[87] However, by the end of the 19th century some authors[note 5] had started using the name normal distribution, with the word "normal" used as an adjective; the term is now seen as a reflection of the fact that this distribution was viewed as typical, common, and thus normal. Peirce (one of those authors) once defined "normal" thus: "... the 'normal' is not the average (or any other kind of mean) of what actually occurs, but of what would, in the long run, occur under certain circumstances."[88]
Around the turn of the 20th century Pearson popularized the term normal as a designation for this distribution.[89]

Many years ago I called the Laplace–Gaussian curve the normal curve, which name, while it avoids an international question of priority, has the disadvantage of leading people to believe that all other distributions of frequency are in one sense or another 'abnormal'.

Also, it was Pearson who first wrote the distribution in terms of the standard deviation σ, as in modern notation. Soon after this, in the year 1915, Fisher added the location parameter to the formula for the normal distribution, expressing it in the way it is written nowadays:

df = (1/√(2σ²π))·e^(−(x − m)²/(2σ²)) dx.
The term standard normal distribution, which denotes the normal distribution with zero mean and unit variance, came into general use around the 1950s, appearing in the popular textbooks Introduction to Mathematical Statistics by P. G. Hoel (1947) and Introduction to the Theory of Statistics by Alexander M. Mood (1950).[90][91][92]
Notes

^ For example, this algorithm is given in the article Bc programming language.
^ De Moivre first published his findings in 1733, in a pamphlet Approximatio ad Summam Terminorum Binomii (a + b)ⁿ in Seriem Expansi that was designated for private circulation only. It was not until the year 1738 that he made his results publicly available. The original pamphlet was reprinted several times; see for example Walker (1985).
^ "It has been customary certainly to regard as an axiom the hypothesis that if any quantity has been determined by several direct observations, made under the same circumstances and with equal care, the arithmetical mean of the observed values affords the most probable value, if not rigorously, yet very nearly at least, so that it is always most safe to adhere to it." – Gauss (1809, section 177)
^ "My custom of terming the curve the Gauss–Laplacian or normal curve saves us from proportioning the merit of discovery between the two great astronomer mathematicians." Quote from Pearson (1905, p. 189).
^ Besides those specifically referenced here, such use is encountered in the works of Peirce, Galton (Galton (1889, chapter V)) and Lexis (Lexis (1878), Rohrbasser & Véron (2003)) c. 1875.[citation needed]
References

Citations

^ Norton, Matthew; Khokhlov, Valentyn; Uryasev, Stan (2019). "Calculating CVaR and bPOE for common probability distributions with application to portfolio optimization and density estimation" (PDF). Annals of Operations Research. 299 (1–2). Springer: 1281–1315. arXiv:1811.11301. doi:10.1007/s10479-019-03373-1. S2CID 254231768. Archived from the original (PDF) on March 31, 2023. Retrieved February 27, 2023.
^ Tsokos, Chris; Wooten, Rebecca, eds. (January 1, 2016). The Joy of Finite Mathematics. Boston: Academic Press. pp. 231–263. doi:10.1016/b978-0-12-802967-1.00007-3. ISBN 978-0-12-802967-1.
^ Harris, Frank E., ed. (January 1, 2014). Mathematics for Physical Science and Engineering. Boston: Academic Press. pp. 663–709. doi:10.1016/b978-0-12-801000-6.00018-3. ISBN 978-0-12-801000-6.
^ Hoel (1947, p. 31) and Mood (1950, p. 109) give this definition with slightly different notation.
^ "Normal Distribution", Gale Encyclopedia of Psychology.
^ Casella & Berger (2001, p. 102)
^ Lyon, A. (2014). "Why are Normal Distributions Normal?". The British Journal for the Philosophy of Science.
^ Nocedal, Jorge; Wright, Stephen J. (2006). Numerical Optimization (2nd ed.). Springer. p. 249. ISBN 978-0387-30303-1.
^ a b "Normal Distribution". www.mathsisfun.com. Retrieved August 15, 2020.
^ "bell curve". Merriam-Webster.com Dictionary. Retrieved May 25, 2025.
^ Mood (1950, p. 112) explicitly defines the standard normal distribution. In contrast, Hoel (1947) explicitly defines the standard normal curve (p. 33) and introduces the term standard normal distribution (p. 69).
^ Stigler (1982)
^ Halperin, Hartley & Hoel (1965, item 7)
^ McPherson (1990, p. 110)
^ Bernardo & Smith (2000, p. 121)
^ Park, Kun Il (2018). Fundamentals of Probability and Stochastic Processes with Applications to Communications. Springer. ISBN 978-3-319-68074-3.
^ Scott, Clayton; Nowak, Robert (August 7, 2003). "The Q-function". Connexions.
^ Barak, Ohad (April 6, 2006). "Q Function and Error Function" (PDF). Tel Aviv University. Archived from the original (PDF) on March 25, 2009.
^ Weisstein, Eric W. "Normal Distribution Function". MathWorld.
^ Abramowitz, Milton; Stegun, Irene Ann, eds. (1983) [June 1964]. "Chapter 26, eqn 26.2.12". Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. Vol. 55 (ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington, D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. p. 932. ISBN 978-0-486-61272-0. LCCN 64-60036. MR 0167642. LCCN 65-12253.
^ Duff, Michael (2003). "Normal Distribution Algorithms". The Mathematical Gazette. 87 (509): 331–336. JSTOR 3621062.
^ a b Stuart, Alan; Ord, J. Keith (1987). "The normal d.f.". Kendall's Advanced Theory of Statistics. Vol. 1: Distribution Theory (5th ed.; originally by Maurice Kendall). Charles Griffin & Co. § 5.37, pp. 183–185. ISBN 0-85264-285-7.
^ Vaart, A. W. van der (October 13, 1998). Asymptotic Statistics. Cambridge University Press. doi:10.1017/cbo9780511802256. ISBN 978-0-511-80225-6.
^ a b Cover & Thomas (2006), p. 254.
^ Park, Sung Y.; Bera, Anil K. (2009). "Maximum Entropy Autoregressive Conditional Heteroskedasticity Model" (PDF). Journal of Econometrics. 150 (2): 219–230. Bibcode:2009JEcon.150..219P. CiteSeerX 10.1.1.511.9750. doi:10.1016/j.jeconom.2008.12.014. Archived from the original (PDF) on March 7, 2016. Retrieved June 2, 2011.
^ Geary, R. C. (1936). "The distribution of 'Student's' ratio for the non-normal samples". Supplement to the Journal of the Royal Statistical Society. 3 (2): 178–184.
^ Lukacs, Eugene (March 1942). "A Characterization of the Normal Distribution". Annals of Mathematical Statistics. 13 (1): 91–93. doi:10.1214/AOMS/1177731647. ISSN 0003-4851. JSTOR 2236166. MR 0006626. Zbl 0060.28509. Wikidata Q55897617.
^ a b c Patel & Read (1996, [2.1.4])
^ Fan (1991, p. 1258)
^ Patel & Read (1996, [2.1.8])
^ Papoulis, Athanasios. Probability, Random Variables and Stochastic Processes (4th ed.). p. 148.
^ Winkelbauer, Andreas (2012). "Moments and Absolute Moments of the Normal Distribution". arXiv:1209.4340 [math.ST].
^ Bryc (1995, p. 23)
^ Bryc (1995, p. 24)
^ Williams, David (2001). Weighing the Odds: A Course in Probability and Statistics (reprinted ed.). Cambridge: Cambridge University Press. pp. 197–199. ISBN 978-0-521-00618-7.
^ Bernardo, José M.; Smith, Adrian F. M. (2000). Bayesian Theory (reprint ed.). Chichester: Wiley. pp. 209, 366. ISBN 978-0-471-49464-5.
^ O'Hagan, A. (1994). Kendall's Advanced Theory of Statistics, Vol. 2B: Bayesian Inference. Edward Arnold. ISBN 0-340-52922-9. (Section 5.40)
^ a b Bryc (1995, p. 35)
^ UIUC, Lecture 21. The Multivariate Normal Distribution, 21.6: "Individually Gaussian Versus Jointly Gaussian".
^ Melnick, Edward L.; Tenenbein, Aaron (November 1982). "Misspecifications of the Normal Distribution". The American Statistician. 36 (4): 372–373.
^ "Kullback Leibler (KL) Distance of Two Normal (Gaussian) Probability Distributions". Allisons.org. December 5, 2007. Retrieved March 3, 2017.
^ Jordan, Michael I. (February 8, 2010). "Stat260: Bayesian Modeling and Inference: The Conjugate Prior for the Normal Distribution" (PDF).
^ Amari & Nagaoka (2000)
^ "Expectation of the maximum of gaussian random variables". Mathematics Stack Exchange. Retrieved April 7, 2024.
^ "Normal Approximation to Poisson Distribution". Stat.ucla.edu. Retrieved March 3, 2017.
^ Bryc (1995, p. 27)
^ Weisstein, Eric W. "Normal Product Distribution". MathWorld. wolfram.com.
^ Lukacs, Eugene (1942). "A Characterization of the Normal Distribution". The Annals of Mathematical Statistics. 13 (1): 91–93. doi:10.1214/aoms/1177731647. ISSN 0003-4851. JSTOR 2236166.
^ Basu, D.; Laha, R. G. (1954). "On Some Characterizations of the Normal Distribution". Sankhyā. 13 (4): 359–362. ISSN 0036-4452. JSTOR 25048183.
^ Lehmann, E. L. (1997). Testing Statistical Hypotheses (2nd ed.). Springer. p. 199. ISBN 978-0-387-94919-2.
^ Patel & Read (1996, [2.3.6])
^ Galambos & Simonelli (2004, Theorem 3.5)
^ a b Lukacs & King (1954)
^ Quine, M. P. (1993). "On three characterisations of the normal distribution". Probability and Mathematical Statistics. 14 (2): 257–263.
^ John, S. (1982). "The three parameter two-piece normal family of distributions and its fitting". Communications in Statistics – Theory and Methods. 11 (8): 879–885. doi:10.1080/03610928208828279.
^ a b Krishnamoorthy (2006, p. 127)
^ Krishnamoorthy (2006, p. 130)
^ Krishnamoorthy (2006, p. 133)
^ a b Maxwell (1860), p. 23.
^ Bryc (1995), p. 1.
^ Larkoski, Andrew J. (2023). Quantum Mechanics: A Mathematical Introduction. Cambridge University Press. pp. 120–121. ISBN 978-1-009-12222-1. Retrieved May 30, 2025.
^ Huxley (1932)
^ Jaynes, Edwin T. (2003). Probability Theory: The Logic of Science. Cambridge University Press. pp. 592–593. ISBN 9780521592710.
^ Oosterbaan, Roland J. (1994). "Chapter 6: Frequency and Regression Analysis of Hydrologic Data" (PDF). In Ritzema, Henk P. (ed.). Drainage Principles and Applications, Publication 16 (second revised ed.). Wageningen, The Netherlands: International Institute for Land Reclamation and Improvement (ILRI). pp. 175–224. ISBN 978-90-70754-33-4.
^ Ioannidis, John P. A. (2005). "Why Most Published Research Findings Are False".
^ Wichura, Michael J. (1988). "Algorithm AS241: The Percentage Points of the Normal Distribution". Applied Statistics. 37 (3): 477–484. doi:10.2307/2347330. JSTOR 2347330.
^ Johnson, Kotz & Balakrishnan (1995, Equation (26.48))
^ Kinderman & Monahan (1977)
^ Leva (1992)
^ Marsaglia & Tsang (2000)
^ Karney (2016)
^ Du, Fan & Wei (2022)
^ Monahan (1985, section 2)
^ Wallace (1996)
^ Johnson, Kotz & Balakrishnan (1994, p. 85)
^ Le Cam & Lo Yang (2000, p. 74)
^ De Moivre, Abraham (1733), Corollary I – see Walker (1985, p. 77)
^ Stigler (1986, p. 76)
^ Gauss (1809, section 177)
^ Gauss (1809, section 179)
^ Laplace (1774, Problem III)
^ Pearson (1905, p. 189)
^ Gauss (1809, section 177)
^ Stigler (1986, p. 144)
^ Stigler (1978, p. 243)
^ Stigler (1978, p. 244)
^ Jaynes, Edwin T. Probability Theory: The Logic of Science, Ch. 7.
^ Peirce, Charles S. (c. 1909 MS), Collected Papers v. 6, paragraph 327.
^ Kruskal & Stigler (1997).
^ "Earliest Uses... (Entry Standard Normal Curve)".
^ Hoel (1947) introduces the terms standard normal curve (p. 33) and standard normal distribution (p. 69).
^ Mood (1950) explicitly defines the standard normal distribution (p. 112).
^ Sun, Jingchao; Kong, Maiying; Pal, Subhadip (June 22, 2021). "The Modified-Half-Normal distribution: Properties and an efficient sampling scheme". Communications in Statistics – Theory and Methods. 52 (5): 1591–1613. doi:10.1080/03610926.2021.1934700. ISSN 0361-0926. S2CID 237919587.
Sources

Aldrich, John; Miller, Jeff. "Earliest Uses of Symbols in Probability and Statistics".
Aldrich, John; Miller, Jeff. "Earliest Known Uses of Some of the Words of Mathematics". In particular, the entries for "bell-shaped and bell curve", "normal (distribution)", "Gaussian", and "Error, law of error, theory of errors, etc.".
Amari, Shun'ichi; Nagaoka, Hiroshi (2000). Methods of Information Geometry. Oxford University Press. ISBN 978-0-8218-0531-2.
Bernardo, José M.; Smith, Adrian F. M. (2000). Bayesian Theory. Wiley. ISBN 978-0-471-49464-5.
Bryc, Wlodzimierz (1995). The Normal Distribution: Characterizations with Applications. Springer-Verlag. ISBN 978-0-387-97990-8.
Casella, George; Berger, Roger L. (2001). Statistical Inference (2nd ed.). Duxbury. ISBN 978-0-534-24312-8.
Cody, William J. (1969). "Rational Chebyshev Approximations for the Error Function". Mathematics of Computation. 23 (107): 631–638. Bibcode:1969MaCom..23..631C. doi:10.1090/S0025-5718-1969-0247736-4.
Cover, Thomas M.; Thomas, Joy A. (2006). Elements of Information Theory. John Wiley and Sons. ISBN 9780471241959.
Dia, Yaya D. (2023). "Approximate Incomplete Integrals, Application to Complementary Error Function". SSRN. doi:10.2139/ssrn.4487559. S2CID 259689086.
de Moivre, Abraham (2000) [First published 1738]. The Doctrine of Chances. American Mathematical Society. ISBN 978-0-8218-2103-9.
Du, Y.; Fan, B.; Wei, B. (2022). "An improved exact sampling algorithm for the standard normal distribution". Computational Statistics. 37 (2): 721–737. arXiv:2008.03855. doi:10.1007/s00180-021-01136-w.
Fan, Jianqing (1991). "On the optimal rates of convergence for nonparametric deconvolution problems". The Annals of Statistics. 19 (3): 1257–1272. doi:10.1214/aos/1176348248. JSTOR 2241949.
Galton, Francis (1889). Natural Inheritance (PDF). London, UK: Richard Clay and Sons.
Galambos, Janos; Simonelli, Italo (2004). Products of Random Variables: Applications to Problems of Physics and to Arithmetical Functions. Marcel Dekker, Inc. ISBN 978-0-8247-5402-0.
Gauss, Carolo Friderico (1809). Theoria motvs corporvm coelestivm in sectionibvs conicis Solem ambientivm [Theory of the Motion of the Heavenly Bodies Moving about the Sun in Conic Sections] (in Latin). Hambvrgi: Svmtibvs F. Perthes et I. H. Besser. English translation.
Gould, Stephen Jay (1981). The Mismeasure of Man (first ed.). W. W. Norton. ISBN 978-0-393-01489-1.
Halperin, Max; Hartley, Herman O.; Hoel, Paul G. (1965). "Recommended Standards for Statistical Symbols and Notation. COPSS Committee on Symbols and Notation". The American Statistician. 19 (3): 12–14. doi:10.2307/2681417. JSTOR 2681417.
Hart, John F.; et al. (1968). Computer Approximations. New York, NY: John Wiley & Sons, Inc. ISBN 978-0-88275-642-4.
"Normal Distribution", Encyclopedia of Mathematics, EMS Press, 2001 [1994].
Herrnstein, Richard J.; Murray, Charles (1994). The Bell Curve: Intelligence and Class Structure in American Life. Free Press. ISBN 978-0-02-914673-6.
Hoel, Paul G. (1947). Introduction to Mathematical Statistics. New York: Wiley.
Huxley, Julian S. (1972) [First published 1932]. Problems of Relative Growth. London. ISBN 978-0-486-61114-3. OCLC 476909537.
Johnson, Norman L.; Kotz, Samuel; Balakrishnan, Narayanaswamy (1994). Continuous Univariate Distributions, Volume 1. Wiley. ISBN 978-0-471-58495-7.
Johnson, Norman L.; Kotz, Samuel; Balakrishnan, Narayanaswamy (1995). Continuous Univariate Distributions, Volume 2. Wiley. ISBN 978-0-471-58494-0.
Karney, C. F. F. (2016). "Sampling exactly from the normal distribution". ACM Transactions on Mathematical Software. 42 (1): 3:1–14. arXiv:1303.6257. doi:10.1145/2710016. S2CID 14252035.
Kinderman, Albert J.; Monahan, John F. (1977). "Computer Generation of Random Variables Using the Ratio of Uniform Deviates". ACM Transactions on Mathematical Software. 3 (3): 257–260. doi:10.1145/355744.355750. S2CID 12884505.
Krishnamoorthy, Kalimuthu (2006). Handbook of Statistical Distributions with Applications. Chapman & Hall/CRC. ISBN 978-1-58488-635-8.
Kruskal, William H.; Stigler, Stephen M. (1997). Spencer, Bruce D. (ed.). Normative Terminology: 'Normal' in Statistics and Elsewhere. Statistics and Public Policy. Oxford University Press. ISBN 978-0-19-852341-3.
Laplace, Pierre-Simon de (1774). "Mémoire sur la probabilité des causes par les événements" [Memoir on the probability of the causes of events]. Mémoires de l'Académie Royale des Sciences de Paris (Savants étrangers), Tome 6: 621–656. Translated by Stephen M. Stigler in Statistical Science 1 (3), 1986: JSTOR 2245476.
Laplace, Pierre-Simon (1812). Théorie analytique des probabilités [Analytical Theory of Probabilities]. Paris: Ve. Courcier.
Le Cam, Lucien; Lo Yang, Grace (2000). Asymptotics in Statistics: Some Basic Concepts (second ed.). Springer. ISBN 978-0-387-95036-5.
Leva, Joseph L. (1992). "A fast normal random number generator" (PDF). ACM Transactions on Mathematical Software. 18 (4): 449–453. CiteSeerX 10.1.1.544.5806. doi:10.1145/138351.138364. S2CID 15802663. Archived from the original (PDF) on July 16, 2010.
Lexis, Wilhelm (1878). "Sur la durée normale de la vie humaine et sur la théorie de la stabilité des rapports statistiques" [On the normal length of human life and on the theory of the stability of statistical ratios]. Annales de Démographie Internationale. II. Paris: 447–462.
Lukacs, Eugene; King, Edgar P. (1954). "A Property of Normal Distribution". The Annals of Mathematical Statistics. 25 (2): 389–394. doi:10.1214/aoms/1177728796. JSTOR 2236741.
McPherson, Glen (1990). Statistics in Scientific Investigation: Its Basis, Application and Interpretation. Springer-Verlag. ISBN 978-0-387-97137-7.
Marsaglia, George; Tsang, Wai Wan (2000). "The Ziggurat Method for Generating Random Variables". Journal of Statistical Software. 5 (8). doi:10.18637/jss.v005.i08.
Marsaglia, George (2004). "Evaluating the Normal Distribution". Journal of Statistical Software. 11 (4). doi:10.18637/jss.v011.i04.
Maxwell, James Clerk (1860). "V. Illustrations of the dynamical theory of gases. – Part I: On the motions and collisions of perfectly elastic spheres". Philosophical Magazine. Series 4. 19 (124): 19–32. Bibcode:1860LEDPM..19...19M. doi:10.1080/14786446008642818.
Monahan, J. F. (1985). "Accuracy in random number generation". Mathematics of Computation. 45 (172): 559–568. doi:10.1090/S0025-5718-1985-0804945-X.
Mood, Alexander McFarlane (1950). Introduction to the Theory of Statistics. New York: McGraw-Hill.
Patel, Jagdish K.; Read, Campbell B. (1996). Handbook of the Normal Distribution (2nd ed.). CRC Press. ISBN 978-0-8247-9342-5.
Pearson, Karl (1901). "On Lines and Planes of Closest Fit to Systems of Points in Space" (PDF). Philosophical Magazine. 6. 2 (11): 559–572. doi:10.1080/14786440109462720. S2CID 125037489.
Pearson, Karl (1905). "'Das Fehlergesetz und seine Verallgemeinerungen durch Fechner und Pearson'. A rejoinder" ["The law of error and its generalizations by Fechner and Pearson". A rejoinder]. Biometrika. 4 (1): 169–212. doi:10.2307/2331536. JSTOR 2331536.
Pearson, Karl (1920). "Notes on the History of Correlation". Biometrika. 13 (1): 25–45. doi:10.1093/biomet/13.1.25. JSTOR 2331722.
Rohrbasser, Jean-Marc; Véron, Jacques (2003). "Wilhelm Lexis: The Normal Length of Life as an Expression of the 'Nature of Things'". Population. 58 (3): 303–322. doi:10.3917/pope.303.0303.
Shore, H. (1982). "Simple Approximations for the Inverse Cumulative Function, the Density Function and the Loss Integral of the Normal Distribution". Journal of the Royal Statistical Society. Series C (Applied Statistics). 31 (2): 108–114. doi:10.2307/2347972. JSTOR 2347972.
Shore, H. (2005). "Accurate RMM-Based Approximations for the CDF of the Normal Distribution". Communications in Statistics – Theory and Methods. 34 (3): 507–513. doi:10.1081/sta-200052102. S2CID 122148043.
Shore, H. (2011). "Response Modeling Methodology". WIREs Comput Stat. 3 (4): 357–372. doi:10.1002/wics.151. S2CID 62021374.
Shore, H. (2012). "Estimating Response Modeling Methodology Models". WIREs Comput Stat. 4 (3): 323–333. doi:10.1002/wics.1199. S2CID 122366147.
Stigler, Stephen M. (1978). "Mathematical Statistics in the Early States". The Annals of Statistics. 6 (2): 239–265. doi:10.1214/aos/1176344123. JSTOR 2958876.
Stigler, Stephen M. (1982). "A Modest Proposal: A New Standard for the Normal". The American Statistician. 36 (2): 137–138. doi:10.2307/2684031. JSTOR 2684031.
Stigler, Stephen M. (1986). The History of Statistics: The Measurement of Uncertainty before 1900. Harvard University Press. ISBN 978-0-674-40340-6.
Stigler, Stephen M. (1999). Statistics on the Table. Harvard University Press. ISBN 978-0-674-83601-3.
Walker, Helen M. (1985). "De Moivre on the Law of Normal Probability" (PDF). In Smith, David Eugene (ed.). A Source Book in Mathematics. Dover. ISBN 978-0-486-64690-9.
Wallace, C. S. (1996). "Fast pseudo-random generators for normal and exponential variates". ACM Transactions on Mathematical Software. 22 (1): 119–127. doi:10.1145/225545.225554. S2CID 18514848.
Weisstein, Eric W. "Normal Distribution". MathWorld.
West, Graeme (2009). "Better Approximations to Cumulative Normal Functions" (PDF). Wilmott Magazine: 70–76. Archived from the original (PDF) on February 29, 2012.
Zelen, Marvin; Severo, Norman C. (1972) [First published 1964]. "Probability Functions (chapter 26)". In Abramowitz, M.; Stegun, I. A. (eds.). Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. National Bureau of Standards. New York, NY: Dover. ISBN 978-0-486-61272-0.
"Normal distribution"
,
Encyclopedia of Mathematics
,
EMS Press
, 2001 [1994]
Normal distribution calculator |
| Markdown | [Jump to content](https://en.wikipedia.org/wiki/Normal_distribution#bodyContent)
Main menu
Main menu
move to sidebar
hide
Navigation
- [Main page](https://en.wikipedia.org/wiki/Main_Page "Visit the main page [z]")
- [Contents](https://en.wikipedia.org/wiki/Wikipedia:Contents "Guides to browsing Wikipedia")
- [Current events](https://en.wikipedia.org/wiki/Portal:Current_events "Articles related to current events")
- [Random article](https://en.wikipedia.org/wiki/Special:Random "Visit a randomly selected article [x]")
- [About Wikipedia](https://en.wikipedia.org/wiki/Wikipedia:About "Learn about Wikipedia and how it works")
- [Contact us](https://en.wikipedia.org/wiki/Wikipedia:Contact_us "How to contact Wikipedia")
Contribute
- [Help](https://en.wikipedia.org/wiki/Help:Contents "Guidance on how to use and edit Wikipedia")
- [Learn to edit](https://en.wikipedia.org/wiki/Help:Introduction "Learn how to edit Wikipedia")
- [Community portal](https://en.wikipedia.org/wiki/Wikipedia:Community_portal "The hub for editors")
- [Recent changes](https://en.wikipedia.org/wiki/Special:RecentChanges "A list of recent changes to Wikipedia [r]")
- [Upload file](https://en.wikipedia.org/wiki/Wikipedia:File_upload_wizard "Add images or other media for use on Wikipedia")
- [Special pages](https://en.wikipedia.org/wiki/Special:SpecialPages "A list of all special pages [q]")
[  ](https://en.wikipedia.org/wiki/Main_Page)
[Search](https://en.wikipedia.org/wiki/Special:Search "Search Wikipedia [f]")
Appearance
- [Donate](https://donate.wikimedia.org/?wmf_source=donate&wmf_medium=sidebar&wmf_campaign=en.wikipedia.org&uselang=en)
- [Create account](https://en.wikipedia.org/w/index.php?title=Special:CreateAccount&returnto=Normal+distribution "You are encouraged to create an account and log in; however, it is not mandatory")
- [Log in](https://en.wikipedia.org/w/index.php?title=Special:UserLogin&returnto=Normal+distribution "You're encouraged to log in; however, it's not mandatory. [o]")
Personal tools
- [Donate](https://donate.wikimedia.org/?wmf_source=donate&wmf_medium=sidebar&wmf_campaign=en.wikipedia.org&uselang=en)
- [Create account](https://en.wikipedia.org/w/index.php?title=Special:CreateAccount&returnto=Normal+distribution "You are encouraged to create an account and log in; however, it is not mandatory")
- [Log in](https://en.wikipedia.org/w/index.php?title=Special:UserLogin&returnto=Normal+distribution "You're encouraged to log in; however, it's not mandatory. [o]")
## Contents
move to sidebar
hide
- [(Top)](https://en.wikipedia.org/wiki/Normal_distribution)
- [1 Definitions](https://en.wikipedia.org/wiki/Normal_distribution#Definitions)
Toggle Definitions subsection
- [1\.1 Standard normal distribution](https://en.wikipedia.org/wiki/Normal_distribution#Standard_normal_distribution)
- [1\.2 General normal distribution](https://en.wikipedia.org/wiki/Normal_distribution#General_normal_distribution)
- [1\.3 Notation](https://en.wikipedia.org/wiki/Normal_distribution#Notation)
- [1\.4 Alternative parameterizations](https://en.wikipedia.org/wiki/Normal_distribution#Alternative_parameterizations)
- [1\.5 Cumulative distribution function](https://en.wikipedia.org/wiki/Normal_distribution#Cumulative_distribution_function)
- [1\.5.1 Taylor series representation](https://en.wikipedia.org/wiki/Normal_distribution#Taylor_series_representation)
- [1\.5.2 Recursive computation with Taylor series](https://en.wikipedia.org/wiki/Normal_distribution#Recursive_computation_with_Taylor_series)
- [1\.5.3 Standard deviation and coverage](https://en.wikipedia.org/wiki/Normal_distribution#Standard_deviation_and_coverage)
- [1\.5.4 Quantile function](https://en.wikipedia.org/wiki/Normal_distribution#Quantile_function)
- [1\.5.5 Using root finding to compute the quantile function](https://en.wikipedia.org/wiki/Normal_distribution#Using_root_finding_to_compute_the_quantile_function)
- [2 Properties](https://en.wikipedia.org/wiki/Normal_distribution#Properties)
Toggle Properties subsection
- [2\.1 Symmetries and derivatives](https://en.wikipedia.org/wiki/Normal_distribution#Symmetries_and_derivatives)
- [2\.2 Moments](https://en.wikipedia.org/wiki/Normal_distribution#Moments)
- [2\.3 Fourier transform and characteristic function](https://en.wikipedia.org/wiki/Normal_distribution#Fourier_transform_and_characteristic_function)
- [2\.4 Moment- and cumulant-generating functions](https://en.wikipedia.org/wiki/Normal_distribution#Moment-_and_cumulant-generating_functions)
- [2\.5 Stein operator and class](https://en.wikipedia.org/wiki/Normal_distribution#Stein_operator_and_class)
- [2\.6 Zero-variance limit](https://en.wikipedia.org/wiki/Normal_distribution#Zero-variance_limit)
- [2\.7 Maximum entropy](https://en.wikipedia.org/wiki/Normal_distribution#Maximum_entropy)
- [2\.8 Other properties](https://en.wikipedia.org/wiki/Normal_distribution#Other_properties)
- [3 Related distributions](https://en.wikipedia.org/wiki/Normal_distribution#Related_distributions)
Toggle Related distributions subsection
- [3\.1 Central limit theorem](https://en.wikipedia.org/wiki/Normal_distribution#Central_limit_theorem)
- [3\.2 Operations and functions of normal variables](https://en.wikipedia.org/wiki/Normal_distribution#Operations_and_functions_of_normal_variables)
- [3\.2.1 Operations on a single normal variable](https://en.wikipedia.org/wiki/Normal_distribution#Operations_on_a_single_normal_variable)
- [3\.2.1.1 Operations on two independent normal variables](https://en.wikipedia.org/wiki/Normal_distribution#Operations_on_two_independent_normal_variables)
- [3\.2.1.2 Operations on two independent standard normal variables](https://en.wikipedia.org/wiki/Normal_distribution#Operations_on_two_independent_standard_normal_variables)
- [3\.2.2 Operations on multiple independent normal variables](https://en.wikipedia.org/wiki/Normal_distribution#Operations_on_multiple_independent_normal_variables)
- [3\.2.3 Operations on multiple correlated normal variables](https://en.wikipedia.org/wiki/Normal_distribution#Operations_on_multiple_correlated_normal_variables)
- [3\.3 Operations on the density function](https://en.wikipedia.org/wiki/Normal_distribution#Operations_on_the_density_function)
- [3\.4 Infinite divisibility and Cramér's theorem](https://en.wikipedia.org/wiki/Normal_distribution#Infinite_divisibility_and_Cram%C3%A9r's_theorem)
- [3\.5 The KacâBernstein theorem](https://en.wikipedia.org/wiki/Normal_distribution#The_Kac%E2%80%93Bernstein_theorem)
- [3\.6 Extensions](https://en.wikipedia.org/wiki/Normal_distribution#Extensions)
- [4 Statistical inference](https://en.wikipedia.org/wiki/Normal_distribution#Statistical_inference)
Toggle Statistical inference subsection
- [4\.1 Estimation of parameters](https://en.wikipedia.org/wiki/Normal_distribution#Estimation_of_parameters)
- [4\.1.1 Sample mean](https://en.wikipedia.org/wiki/Normal_distribution#Sample_mean)
- [4\.1.2 Sample variance](https://en.wikipedia.org/wiki/Normal_distribution#Sample_variance)
- [4\.2 Confidence intervals](https://en.wikipedia.org/wiki/Normal_distribution#Confidence_intervals)
- [4\.3 Normality tests](https://en.wikipedia.org/wiki/Normal_distribution#Normality_tests)
- [4\.4 Bayesian analysis of the normal distribution](https://en.wikipedia.org/wiki/Normal_distribution#Bayesian_analysis_of_the_normal_distribution)
- [4\.4.1 Sum of two quadratics](https://en.wikipedia.org/wiki/Normal_distribution#Sum_of_two_quadratics)
- [4\.4.1.1 Scalar form](https://en.wikipedia.org/wiki/Normal_distribution#Scalar_form)
- [4\.4.1.2 Vector form](https://en.wikipedia.org/wiki/Normal_distribution#Vector_form)
- [4\.4.2 Sum of differences from the mean](https://en.wikipedia.org/wiki/Normal_distribution#Sum_of_differences_from_the_mean)
- [4\.5 With known variance](https://en.wikipedia.org/wiki/Normal_distribution#With_known_variance)
- [4\.5.1 With known mean](https://en.wikipedia.org/wiki/Normal_distribution#With_known_mean)
- [4\.5.2 With unknown mean and unknown variance](https://en.wikipedia.org/wiki/Normal_distribution#With_unknown_mean_and_unknown_variance)
- [5 Occurrence and applications](https://en.wikipedia.org/wiki/Normal_distribution#Occurrence_and_applications)
Toggle Occurrence and applications subsection
- [5\.1 Exact normality](https://en.wikipedia.org/wiki/Normal_distribution#Exact_normality)
- [5\.2 Approximate normality](https://en.wikipedia.org/wiki/Normal_distribution#Approximate_normality)
- [5\.3 Assumed normality](https://en.wikipedia.org/wiki/Normal_distribution#Assumed_normality)
- [5\.4 Methodological problems and peer review](https://en.wikipedia.org/wiki/Normal_distribution#Methodological_problems_and_peer_review)
- [6 Computational methods](https://en.wikipedia.org/wiki/Normal_distribution#Computational_methods)
Toggle Computational methods subsection
- [6\.1 Generating values from normal distribution](https://en.wikipedia.org/wiki/Normal_distribution#Generating_values_from_normal_distribution)
- [6\.2 Numerical approximations for the normal cumulative distribution function and normal quantile function](https://en.wikipedia.org/wiki/Normal_distribution#Numerical_approximations_for_the_normal_cumulative_distribution_function_and_normal_quantile_function)
- [7 History](https://en.wikipedia.org/wiki/Normal_distribution#History)
Toggle History subsection
- [7\.1 Development](https://en.wikipedia.org/wiki/Normal_distribution#Development)
- [7\.2 Naming](https://en.wikipedia.org/wiki/Normal_distribution#Naming)
- [8 See also](https://en.wikipedia.org/wiki/Normal_distribution#See_also)
- [9 Notes](https://en.wikipedia.org/wiki/Normal_distribution#Notes)
- [10 References](https://en.wikipedia.org/wiki/Normal_distribution#References)
Toggle References subsection
- [10\.1 Citations](https://en.wikipedia.org/wiki/Normal_distribution#Citations)
- [10\.2 Sources](https://en.wikipedia.org/wiki/Normal_distribution#Sources)
- [11 External links](https://en.wikipedia.org/wiki/Normal_distribution#External_links)
Toggle the table of contents
# Normal distribution
73 languages
- [Alemannisch](https://als.wikipedia.org/wiki/Normalverteilung "Normalverteilung â Alemannic")
- [ۧÙŰč۱ۚÙŰ©](https://ar.wikipedia.org/wiki/%D8%AA%D9%88%D8%B2%D9%8A%D8%B9_%D8%A7%D8%AD%D8%AA%D9%85%D8%A7%D9%84%D9%8A_%D8%B7%D8%A8%D9%8A%D8%B9%D9%8A "ŰȘÙŰČÙŰč ۧŰŰȘÙ
ۧÙÙ Ű·ŰšÙŰčÙ â Arabic")
- [Asturianu](https://ast.wikipedia.org/wiki/Distribuci%C3%B3n_normal "DistribuciĂłn normal â Asturian")
- [AzÉrbaycanca](https://az.wikipedia.org/wiki/Normal_paylanma "Normal paylanma â Azerbaijani")
- [ŰȘÛ۱کۏÙ](https://azb.wikipedia.org/wiki/%D9%86%D9%88%D8%B1%D9%85%D8%A7%D9%84_%D8%AF%D8%A7%D8%BA%DB%8C%D9%84%DB%8C%D9%85 "ÙÙ۱Ù
Ű§Ù ŰŻŰ§ŰșÛÙÛÙ
â South Azerbaijani")
- [ĐДлаŃŃŃĐșаŃ](https://be.wikipedia.org/wiki/%D0%9D%D0%B0%D1%80%D0%BC%D0%B0%D0%BB%D1%8C%D0%BD%D0%B0%D0%B5_%D1%80%D0%B0%D0%B7%D0%BC%D0%B5%D1%80%D0%BA%D0%B0%D0%B2%D0%B0%D0%BD%D0%BD%D0%B5 "ĐаŃĐŒĐ°Đ»ŃĐœĐ°Đ” ŃĐ°Đ·ĐŒĐ”ŃĐșаĐČĐ°ĐœĐœĐ” â Belarusian")
- [ĐŃлгаŃŃĐșĐž](https://bg.wikipedia.org/wiki/%D0%9D%D0%BE%D1%80%D0%BC%D0%B0%D0%BB%D0%BD%D0%BE_%D1%80%D0%B0%D0%B7%D0%BF%D1%80%D0%B5%D0%B4%D0%B5%D0%BB%D0%B5%D0%BD%D0%B8%D0%B5 "ĐĐŸŃĐŒĐ°Đ»ĐœĐŸ ŃазпŃĐ”ĐŽĐ”Đ»Đ”ĐœĐžĐ” â Bulgarian")
- [Bosanski](https://bs.wikipedia.org/wiki/Normalna_raspodjela "Normalna raspodjela â Bosnian")
- [CatalĂ ](https://ca.wikipedia.org/wiki/Distribuci%C3%B3_normal "DistribuciĂł normal â Catalan")
- [ÄeĆĄtina](https://cs.wikipedia.org/wiki/Norm%C3%A1ln%C3%AD_rozd%C4%9Blen%C3%AD "NormĂĄlnĂ rozdÄlenĂ â Czech")
- [ЧÓĐČаŃла](https://cv.wikipedia.org/wiki/%D0%93%D0%B0%D1%83%D1%81%D1%81_%D0%B2%D0%B0%D0%BB%D0%B5%C3%A7%C4%95%D0%B2%C4%95 "ĐаŃŃŃ ĐČалДçÄĐČÄ â Chuvash")
- [Cymraeg](https://cy.wikipedia.org/wiki/Dosraniad_normal "Dosraniad normal â Welsh")
- [Dansk](https://da.wikipedia.org/wiki/Normalfordeling "Normalfordeling â Danish")
- [Deutsch](https://de.wikipedia.org/wiki/Normalverteilung "Normalverteilung â German")
- [ÎλληΜÎčÎșÎŹ](https://el.wikipedia.org/wiki/%CE%9A%CE%B1%CE%BD%CE%BF%CE%BD%CE%B9%CE%BA%CE%AE_%CE%BA%CE%B1%CF%84%CE%B1%CE%BD%CE%BF%CE%BC%CE%AE "ÎÎ±ÎœÎżÎœÎčÎșÎź ÎșαÏÎ±ÎœÎżÎŒÎź â Greek")
- [Esperanto](https://eo.wikipedia.org/wiki/Normala_distribuo "Normala distribuo â Esperanto")
- [Español](https://es.wikipedia.org/wiki/Distribuci%C3%B3n_normal "DistribuciĂłn normal â Spanish")
- [Eesti](https://et.wikipedia.org/wiki/Normaaljaotus "Normaaljaotus â Estonian")
- [Euskara](https://eu.wikipedia.org/wiki/Banaketa_normal "Banaketa normal â Basque")
- [Ùۧ۱۳Û](https://fa.wikipedia.org/wiki/%D8%AA%D9%88%D8%B2%DB%8C%D8%B9_%D9%86%D8%B1%D9%85%D8%A7%D9%84 "ŰȘÙŰČÛŰč Ù۱Ù
Ű§Ù â Persian")
- [Suomi](https://fi.wikipedia.org/wiki/Normaalijakauma "Normaalijakauma â Finnish")
- [Français](https://fr.wikipedia.org/wiki/Loi_normale "Loi normale â French")
- [Nordfriisk](https://frr.wikipedia.org/wiki/Normoolferdialang "Normoolferdialang â Northern Frisian")
- [Gaeilge](https://ga.wikipedia.org/wiki/D%C3%A1ileadh_normalach "DĂĄileadh normalach â Irish")
- [Galego](https://gl.wikipedia.org/wiki/Distribuci%C3%B3n_normal "DistribuciĂłn normal â Galician")
- [ŚąŚŚšŚŚȘ](https://he.wikipedia.org/wiki/%D7%94%D7%AA%D7%A4%D7%9C%D7%92%D7%95%D7%AA_%D7%A0%D7%95%D7%A8%D7%9E%D7%9C%D7%99%D7%AA "ŚŚȘŚ€ŚŚŚŚȘ Ś ŚŚšŚŚŚŚȘ â Hebrew")
- [à€čà€żà€šà„à€Šà„](https://hi.wikipedia.org/wiki/%E0%A4%AA%E0%A5%8D%E0%A4%B0%E0%A4%B8%E0%A4%BE%E0%A4%AE%E0%A4%BE%E0%A4%A8%E0%A5%8D%E0%A4%AF_%E0%A4%AC%E0%A4%82%E0%A4%9F%E0%A4%A8 "à€Șà„à€°à€žà€Ÿà€źà€Ÿà€šà„à€Ż à€Źà€à€à€š â Hindi")
- [Hrvatski](https://hr.wikipedia.org/wiki/Normalna_raspodjela "Normalna raspodjela â Croatian")
- [Magyar](https://hu.wikipedia.org/wiki/Norm%C3%A1lis_eloszl%C3%A1s "NormĂĄlis eloszlĂĄs â Hungarian")
- [ŐŐĄŐ”Ő„ÖŐ„Ő¶](https://hy.wikipedia.org/wiki/%D5%86%D5%B8%D6%80%D5%B4%D5%A1%D5%AC_%D5%A2%D5%A1%D5%B7%D5%AD%D5%B8%D6%82%D5%B4 "ŐŐžÖŐŽŐĄŐŹ ŐąŐĄŐ·ŐŐžÖŐŽ â Armenian")
- [Bahasa Indonesia](https://id.wikipedia.org/wiki/Distribusi_normal "Distribusi normal â Indonesian")
- [Ăslenska](https://is.wikipedia.org/wiki/Normaldreifing "Normaldreifing â Icelandic")
- [Italiano](https://it.wikipedia.org/wiki/Distribuzione_normale "Distribuzione normale â Italian")
- [æ„æŹèȘ](https://ja.wikipedia.org/wiki/%E6%AD%A3%E8%A6%8F%E5%88%86%E5%B8%83 "æŁèŠććž â Japanese")
- [á„áá ááŁáá](https://ka.wikipedia.org/wiki/%E1%83%9C%E1%83%9D%E1%83%A0%E1%83%9B%E1%83%90%E1%83%9A%E1%83%A3%E1%83%A0%E1%83%98_%E1%83%92%E1%83%90%E1%83%9C%E1%83%90%E1%83%AC%E1%83%98%E1%83%9A%E1%83%94%E1%83%91%E1%83%90 "ááá ááááŁá á áááááŹááááá â Georgian")
- [ÒазаÒŃа](https://kk.wikipedia.org/wiki/%D2%9A%D0%B0%D0%BB%D1%8B%D0%BF%D1%82%D1%8B_%D0%B4%D0%B8%D1%81%D0%BF%D0%B5%D1%80%D1%81%D0%B8%D1%8F "ÒалŃĐżŃŃ ĐŽĐžŃпДŃŃĐžŃ â Kazakh")
- [íê”ìŽ](https://ko.wikipedia.org/wiki/%EC%A0%95%EA%B7%9C_%EB%B6%84%ED%8F%AC "ì ê· ë¶íŹ â Korean")
- [Latina](https://la.wikipedia.org/wiki/Distributio_normalis "Distributio normalis â Latin")
- [Lombard](https://lmo.wikipedia.org/wiki/Distribuzzion_normala "Distribuzzion normala â Lombard")
- [LietuviĆł](https://lt.wikipedia.org/wiki/Normalusis_skirstinys "Normalusis skirstinys â Lithuanian")
- [LatvieĆĄu](https://lv.wikipedia.org/wiki/Norm%C4%81lais_sadal%C4%ABjums "NormÄlais sadalÄ«jums â Latvian")
- [ĐаĐșĐ”ĐŽĐŸĐœŃĐșĐž](https://mk.wikipedia.org/wiki/%D0%9D%D0%BE%D1%80%D0%BC%D0%B0%D0%BB%D0%BD%D0%B0_%D1%80%D0%B0%D1%81%D0%BF%D1%80%D0%B5%D0%B4%D0%B5%D0%BB%D0%B1%D0%B0 "ĐĐŸŃĐŒĐ°Đ»ĐœĐ° ŃаŃĐżŃДЎДлба â Macedonian")
- [à€źà€°à€Ÿà€ à„](https://mr.wikipedia.org/wiki/%E0%A4%B8%E0%A4%BE%E0%A4%AE%E0%A4%BE%E0%A4%A8%E0%A5%8D%E0%A4%AF_%E0%A4%B5%E0%A4%BF%E0%A4%A4%E0%A4%B0%E0%A4%A3 "à€žà€Ÿà€źà€Ÿà€šà„à€Ż à€”à€żà€€à€°à€Ł â Marathi")
- [Bahasa Melayu](https://ms.wikipedia.org/wiki/Taburan_normal "Taburan normal â Malay")
- [Nederlands](https://nl.wikipedia.org/wiki/Normale_verdeling "Normale verdeling â Dutch")
- [Norsk nynorsk](https://nn.wikipedia.org/wiki/Normalfordeling "Normalfordeling â Norwegian Nynorsk")
- [Norsk bokmĂ„l](https://no.wikipedia.org/wiki/Normalfordeling "Normalfordeling â Norwegian BokmĂ„l")
- [Polski](https://pl.wikipedia.org/wiki/Rozk%C5%82ad_normalny "RozkĆad normalny â Polish")
- [PiemontĂšis](https://pms.wikipedia.org/wiki/Distribussion_%C3%ABd_Gauss "Distribussion Ă«d Gauss â Piedmontese")
- [PortuguĂȘs](https://pt.wikipedia.org/wiki/Distribui%C3%A7%C3%A3o_normal "Distribuição normal â Portuguese")
- [RomĂąnÄ](https://ro.wikipedia.org/wiki/Distribu%C8%9Bia_Gauss "DistribuÈia Gauss â Romanian")
- [Đ ŃŃŃĐșĐžĐč](https://ru.wikipedia.org/wiki/%D0%9D%D0%BE%D1%80%D0%BC%D0%B0%D0%BB%D1%8C%D0%BD%D0%BE%D0%B5_%D1%80%D0%B0%D1%81%D0%BF%D1%80%D0%B5%D0%B4%D0%B5%D0%BB%D0%B5%D0%BD%D0%B8%D0%B5 "ĐĐŸŃĐŒĐ°Đ»ŃĐœĐŸĐ” ŃаŃĐżŃĐ”ĐŽĐ”Đ»Đ”ĐœĐžĐ” â Russian")
- [Srpskohrvatski / ŃŃĐżŃĐșĐŸŃ
ŃĐČаŃŃĐșĐž](https://sh.wikipedia.org/wiki/Normalna_raspodjela "Normalna raspodjela â Serbo-Croatian")
- [Simple English](https://simple.wikipedia.org/wiki/Normal_distribution "Normal distribution â Simple English")
- [SlovenÄina](https://sk.wikipedia.org/wiki/Norm%C3%A1lne_rozdelenie "NormĂĄlne rozdelenie â Slovak")
- [SlovenĆĄÄina](https://sl.wikipedia.org/wiki/Normalna_porazdelitev "Normalna porazdelitev â Slovenian")
- [Shqip](https://sq.wikipedia.org/wiki/Shp%C3%ABrndarja_normale "ShpĂ«rndarja normale â Albanian")
- [ĐĄŃĐżŃĐșĐž / srpski](https://sr.wikipedia.org/wiki/%D0%9D%D0%BE%D1%80%D0%BC%D0%B0%D0%BB%D0%BD%D0%B0_%D1%80%D0%B0%D1%81%D0%BF%D0%BE%D0%B4%D0%B5%D0%BB%D0%B0 "ĐĐŸŃĐŒĐ°Đ»ĐœĐ° ŃаŃĐżĐŸĐŽĐ”Đ»Đ° â Serbian")
- [Sunda](https://su.wikipedia.org/wiki/Sebaran_normal "Sebaran normal â Sundanese")
- [Svenska](https://sv.wikipedia.org/wiki/Normalf%C3%B6rdelning "Normalfördelning â Swedish")
- [àź€àźźàźżàźŽàŻ](https://ta.wikipedia.org/wiki/%E0%AE%87%E0%AE%AF%E0%AE%B2%E0%AF%8D%E0%AE%A8%E0%AE%BF%E0%AE%B2%E0%AF%88%E0%AE%AA%E0%AF%8D_%E0%AE%AA%E0%AE%B0%E0%AE%B5%E0%AE%B2%E0%AF%8D "àźàźŻàźČàŻàźšàźżàźČàŻàźȘàŻ àźȘàź°àź”àźČàŻ â Tamil")
- [àčàžàžą](https://th.wikipedia.org/wiki/%E0%B8%81%E0%B8%B2%E0%B8%A3%E0%B9%81%E0%B8%88%E0%B8%81%E0%B9%81%E0%B8%88%E0%B8%87%E0%B8%9B%E0%B8%A3%E0%B8%81%E0%B8%95%E0%B8%B4 "àžàžČàžŁàčàžàžàčàžàžàžàžŁàžàžàžŽ â Thai")
- [Tagalog](https://tl.wikipedia.org/wiki/Distribusyong_normal "Distribusyong normal â Tagalog")
- [TĂŒrkçe](https://tr.wikipedia.org/wiki/Normal_da%C4%9F%C4%B1l%C4%B1m "Normal daÄılım â Turkish")
- [йаŃаŃŃа / tatarça](https://tt.wikipedia.org/wiki/%D0%93%D0%B0%D1%83%D1%81%D1%81_%D0%B1%D2%AF%D0%BB%D0%B5%D0%BD%D0%B5%D1%88%D0%B5 "ĐаŃŃŃ Đ±ÒŻĐ»Đ”ĐœĐ”ŃĐ” â Tatar")
- [ĐŁĐșŃаŃĐœŃŃĐșа](https://uk.wikipedia.org/wiki/%D0%9D%D0%BE%D1%80%D0%BC%D0%B0%D0%BB%D1%8C%D0%BD%D0%B8%D0%B9_%D1%80%D0%BE%D0%B7%D0%BF%D0%BE%D0%B4%D1%96%D0%BB "ĐĐŸŃĐŒĐ°Đ»ŃĐœĐžĐč ŃĐŸĐ·ĐżĐŸĐŽŃĐ» â Ukrainian")
- [ۧ۱ۯÙ](https://ur.wikipedia.org/wiki/%D9%86%D8%A7%D8%B1%D9%85%D9%84_%D8%AA%D9%82%D8%B3%DB%8C%D9%85 "Ùۧ۱Ù
Ù ŰȘÙŰłÛÙ
â Urdu")
- [Tiáșżng Viá»t](https://vi.wikipedia.org/wiki/Ph%C3%A2n_ph%E1%BB%91i_chu%E1%BA%A9n "PhĂąn phá»i chuáș©n â Vietnamese")
- [ćŽèŻ](https://wuu.wikipedia.org/wiki/%E6%AD%A3%E6%80%81%E5%88%86%E5%B8%83 "æŁæććž â Wu")
- [ŚŚÖŽŚŚŚ©](https://yi.wikipedia.org/wiki/%D7%A0%D7%90%D7%A8%D7%9E%D7%90%D7%9C%D7%A2_%D7%A4%D7%90%D7%A8%D7%98%D7%99%D7%99%D7%9C%D7%95%D7%A0%D7%92 "Ś ŚŚšŚŚŚŚą Ś€ŚŚšŚŚŚŚŚŚ Ś â Yiddish")
- [é©ćèȘ / BĂąn-lĂąm-gĂ](https://zh-min-nan.wikipedia.org/wiki/Si%C3%B4ng-th%C3%A0i_hun-p%C3%B2%CD%98 "SiĂŽng-thĂ i hun-pĂČÍ â Minnan")
- [çČ”èȘ](https://zh-yue.wikipedia.org/wiki/%E5%B8%B8%E6%85%8B%E5%88%86%E4%BD%88 "ćžžæ
ćäœ â Cantonese")
- [äžæ](https://zh.wikipedia.org/wiki/%E6%AD%A3%E6%80%81%E5%88%86%E5%B8%83 "æŁæććž â Chinese")
[Edit links](https://www.wikidata.org/wiki/Special:EntityPage/Q133871#sitelinks-wikipedia "Edit interlanguage links")
- [Article](https://en.wikipedia.org/wiki/Normal_distribution "View the content page [c]")
- [Talk](https://en.wikipedia.org/wiki/Talk:Normal_distribution "Discuss improvements to the content page [t]")
English
- [Read](https://en.wikipedia.org/wiki/Normal_distribution)
- [Edit](https://en.wikipedia.org/w/index.php?title=Normal_distribution&action=edit "Edit this page [e]")
- [View history](https://en.wikipedia.org/w/index.php?title=Normal_distribution&action=history "Past revisions of this page [h]")
Tools
Tools
move to sidebar
hide
Actions
- [Read](https://en.wikipedia.org/wiki/Normal_distribution)
- [Edit](https://en.wikipedia.org/w/index.php?title=Normal_distribution&action=edit "Edit this page [e]")
- [View history](https://en.wikipedia.org/w/index.php?title=Normal_distribution&action=history)
General
- [What links here](https://en.wikipedia.org/wiki/Special:WhatLinksHere/Normal_distribution "List of all English Wikipedia pages containing links to this page [j]")
- [Related changes](https://en.wikipedia.org/wiki/Special:RecentChangesLinked/Normal_distribution "Recent changes in pages linked from this page [k]")
- [Upload file](https://en.wikipedia.org/wiki/Wikipedia:File_Upload_Wizard "Upload files [u]")
- [Permanent link](https://en.wikipedia.org/w/index.php?title=Normal_distribution&oldid=1344852379 "Permanent link to this revision of this page")
- [Page information](https://en.wikipedia.org/w/index.php?title=Normal_distribution&action=info "More information about this page")
- [Cite this page](https://en.wikipedia.org/w/index.php?title=Special:CiteThisPage&page=Normal_distribution&id=1344852379&wpFormIdentifier=titleform "Information on how to cite this page")
- [Get shortened URL](https://en.wikipedia.org/w/index.php?title=Special:UrlShortener&url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FNormal_distribution)
Print/export
- [Download as PDF](https://en.wikipedia.org/w/index.php?title=Special:DownloadAsPdf&page=Normal_distribution&action=show-download-screen "Download this page as a PDF file")
- [Printable version](https://en.wikipedia.org/w/index.php?title=Normal_distribution&printable=yes "Printable version of this page [p]")
In other projects
- [Wikimedia Commons](https://commons.wikimedia.org/wiki/Category:Normal_distribution)
- [Wikidata item](https://www.wikidata.org/wiki/Special:EntityPage/Q133871 "Structured data on this page hosted by Wikidata [g]")
Appearance
move to sidebar
hide
From Wikipedia, the free encyclopedia
Probability distribution
"Bell curve" redirects here. For other uses, see [Bell curve (disambiguation)](https://en.wikipedia.org/wiki/Bell_curve_\(disambiguation\) "Bell curve (disambiguation)").
| Normal distribution | |
|---|---|
| Probability density function[](https://en.wikipedia.org/wiki/File:Normal_Distribution_PDF.svg)The red curve is the [*standard normal distribution*](https://en.wikipedia.org/wiki/Normal_distribution#Standard_normal_distribution). | |
| Cumulative distribution function[](https://en.wikipedia.org/wiki/File:Normal_Distribution_CDF.svg) | |
| Notation | N ( ÎŒ , Ï 2 ) {\\displaystyle {\\mathcal {N}}(\\mu ,\\sigma ^{2})}  |
| |
|---|
| Part of a series on [statistics](https://en.wikipedia.org/wiki/Statistics "Statistics") |
| [Probability theory](https://en.wikipedia.org/wiki/Probability_theory "Probability theory") |
| [](https://en.wikipedia.org/wiki/File:Standard_deviation_diagram_micro.svg) |
In [probability theory](https://en.wikipedia.org/wiki/Probability_theory "Probability theory") and [statistics](https://en.wikipedia.org/wiki/Statistics "Statistics"), a **normal distribution** or **Gaussian distribution** is a type of [continuous probability distribution](https://en.wikipedia.org/wiki/Continuous_probability_distribution "Continuous probability distribution") for a [real-valued](https://en.wikipedia.org/wiki/Real_number "Real number") [random variable](https://en.wikipedia.org/wiki/Random_variable "Random variable"). The general form of its [probability density function](https://en.wikipedia.org/wiki/Probability_density_function "Probability density function") is[\[2\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-The_Joy_of_Finite_Mathematics-2)[\[3\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Mathematics_for_Physical_Science_and_Engineering-3)[\[4\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-4)

$$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right).$$

The parameter $\mu$ is the [mean](https://en.wikipedia.org/wiki/Mean#Mean_of_a_probability_distribution "Mean") or [expectation](https://en.wikipedia.org/wiki/Expected_value "Expected value") of the distribution (and also its [median](https://en.wikipedia.org/wiki/Median "Median") and [mode](https://en.wikipedia.org/wiki/Mode_\(statistics\) "Mode (statistics)")), while the parameter $\sigma^2$ is the [variance](https://en.wikipedia.org/wiki/Variance "Variance"). The [standard deviation](https://en.wikipedia.org/wiki/Standard_deviation "Standard deviation") of the distribution is the positive value $\sigma$ (sigma). A random variable with a Gaussian distribution is said to be **normally distributed** and is called a **normal deviate**.
Normal distributions are important in [statistics](https://en.wikipedia.org/wiki/Statistics "Statistics") and are often used in the [natural](https://en.wikipedia.org/wiki/Natural_science "Natural science") and [social sciences](https://en.wikipedia.org/wiki/Social_science "Social science") to represent real-valued [random variables](https://en.wikipedia.org/wiki/Random_variable "Random variable") whose distributions are not known.[\[5\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-5)[\[6\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-6) Their importance is partly due to the [central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem "Central limit theorem"). It states that the average of many [statistically independent](https://en.wikipedia.org/wiki/Statistically_independent "Statistically independent") samples (observations) of a random variable with finite mean and variance is itself a random variable whose distribution [converges](https://en.wikipedia.org/wiki/Convergence_in_distribution "Convergence in distribution") to a normal distribution as the number of samples increases. Therefore, physical quantities that are expected to be the sum of many independent processes, such as [measurement errors](https://en.wikipedia.org/wiki/Measurement_error "Measurement error"), often have distributions that are nearly normal.[\[7\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-7)
Moreover, Gaussian distributions have some unique properties that are valuable in analytic studies. For instance, any [linear combination](https://en.wikipedia.org/wiki/Linear_combination "Linear combination") of a fixed collection of independent normal deviates is a normal deviate. Many results and methods, such as [propagation of uncertainty](https://en.wikipedia.org/wiki/Propagation_of_uncertainty "Propagation of uncertainty") and [least squares](https://en.wikipedia.org/wiki/Least_squares "Least squares")[\[8\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-8) parameter fitting, can be derived analytically in explicit form when the relevant variables are normally distributed.
A normal distribution is sometimes informally called a **bell curve**.[\[9\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-www.mathsisfun.com-9)[\[10\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-10) However, many other distributions are [bell-shaped](https://en.wikipedia.org/wiki/Bell-shaped_function "Bell-shaped function") (such as the [Cauchy](https://en.wikipedia.org/wiki/Cauchy_distribution "Cauchy distribution"), [Student's t](https://en.wikipedia.org/wiki/Student%27s_t-distribution "Student's t-distribution"), and [logistic](https://en.wikipedia.org/wiki/Logistic_distribution "Logistic distribution") distributions). (For other names, see *[Naming](https://en.wikipedia.org/wiki/Normal_distribution#Naming)*.)
The [univariate probability distribution](https://en.wikipedia.org/wiki/Univariate_distribution "Univariate distribution") is generalized for [vectors](https://en.wikipedia.org/wiki/Vector_\(mathematics_and_physics\) "Vector (mathematics and physics)") in the [multivariate normal distribution](https://en.wikipedia.org/wiki/Multivariate_normal_distribution "Multivariate normal distribution") and for matrices in the [matrix normal distribution](https://en.wikipedia.org/wiki/Matrix_normal_distribution "Matrix normal distribution").
## Definitions
### Standard normal distribution
The simplest case of a normal distribution is known as the **standard normal distribution** or **unit normal distribution**. This is a special case when $\mu = 0$ and $\sigma^2 = 1$, and it is described by this [probability density function](https://en.wikipedia.org/wiki/Probability_density_function "Probability density function") (or density):[\[11\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-11)

$$\varphi(z) = \frac{e^{-z^2/2}}{\sqrt{2\pi}}.$$

The variable $z$ has a mean of 0 and a variance and standard deviation of 1. The density $\varphi(z)$ has its peak value $\frac{1}{\sqrt{2\pi}}$ at $z = 0$ and [inflection points](https://en.wikipedia.org/wiki/Inflection_point "Inflection point") at $z = +1$ and $z = -1$.
Although the density above is most commonly known as the *standard normal*, a few authors have used that term to describe other versions of the normal distribution. [Carl Friedrich Gauss](https://en.wikipedia.org/wiki/Carl_Friedrich_Gauss "Carl Friedrich Gauss"), for example, once defined the standard normal as $\varphi(z) = \frac{1}{\sqrt{\pi}} e^{-z^2},$ which has a variance of $\tfrac{1}{2}$, and [Stephen Stigler](https://en.wikipedia.org/wiki/Stephen_Stigler "Stephen Stigler") once defined the standard normal as $\varphi(z) = e^{-\pi z^2},$ which has a simple functional form and a variance of $\sigma^2 = \frac{1}{2\pi}$.[\[12\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-12)
### General normal distribution
If $Z$ is a [standard normal deviate](https://en.wikipedia.org/wiki/Standard_normal_deviate "Standard normal deviate"), then $X = \sigma Z + \mu$ will have a normal distribution with expected value $\mu$ and standard deviation $\sigma$. This is equivalent to saying that the standard normal distribution $Z$ can be scaled/stretched by a factor of $\sigma$ and shifted by $\mu$ to yield a different normal distribution, called $X$.
Conversely, if $X$ is a normal deviate with parameters $\mu$ and $\sigma^2$, then this $X$ distribution can be re-scaled and shifted via the formula $Z = (X - \mu)/\sigma$ to convert it to the standard normal distribution. This variate is also called the standardized form of $X$.
In particular, the probability density function for $X$ can be written in terms of the standard normal density $\varphi$ (with zero mean and unit variance):

$$f(x \mid \mu, \sigma^2) = \frac{1}{\sigma}\,\varphi\left(\frac{x - \mu}{\sigma}\right).$$

The probability density must be scaled by $1/\sigma$ so that the [integral](https://en.wikipedia.org/wiki/Integral "Integral") is still 1.
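To make the standardization concrete, here is a minimal Python sketch (the helper names are illustrative, not from the article) that evaluates the general density both directly and as the rescaled standard normal density:

```python
import math

def phi(z):
    """Standard normal density: exp(-z^2/2) / sqrt(2*pi)."""
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def normal_pdf(x, mu, sigma):
    """General normal density written as (1/sigma) * phi((x - mu)/sigma)."""
    return phi((x - mu) / sigma) / sigma

def normal_pdf_direct(x, mu, sigma):
    """The same density evaluated directly from the general form."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

x, mu, sigma = 1.3, 2.0, 0.5
assert abs(normal_pdf(x, mu, sigma) - normal_pdf_direct(x, mu, sigma)) < 1e-12
```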
### Notation
The probability density of the standard Gaussian distribution (standard normal distribution, with zero mean and unit variance) is often denoted with the Greek letter $\phi$ ([phi](https://en.wikipedia.org/wiki/Phi "Phi")).[\[13\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-13) The variant form of the Greek letter phi, $\varphi$, is also used quite often.
The normal distribution is often referred to as $N(\mu, \sigma^2)$ or $\mathcal{N}(\mu, \sigma^2)$.[\[14\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-14) Thus when a random variable $X$ is normally distributed with mean $\mu$ and standard deviation $\sigma$, one may write

$$X \sim \mathcal{N}(\mu, \sigma^2).$$
### Alternative parameterizations
Some authors advocate using the [precision](https://en.wikipedia.org/wiki/Precision_\(statistics\) "Precision (statistics)") $\tau$ as the parameter defining the width of the distribution, instead of the standard deviation $\sigma$ or the variance $\sigma^2$. The precision is normally defined as the reciprocal of the variance, $1/\sigma^2$.[\[15\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-15) The formula for the distribution then becomes

$$f(x) = \sqrt{\frac{\tau}{2\pi}}\, e^{-\tau(x-\mu)^2/2}.$$
This choice is claimed to have advantages in numerical computations when $\sigma$ is very close to zero, and it simplifies formulas in some contexts, such as the [Bayesian inference](https://en.wikipedia.org/wiki/Bayesian_statistics "Bayesian statistics") of variables with a [multivariate normal distribution](https://en.wikipedia.org/wiki/Multivariate_normal_distribution "Multivariate normal distribution").
Alternatively, the reciprocal of the standard deviation $\tau' = 1/\sigma$ might be defined as the *precision*, in which case the expression of the normal distribution becomes

$$f(x) = \frac{\tau'}{\sqrt{2\pi}}\, e^{-(\tau')^2(x-\mu)^2/2}.$$
According to Stigler, this formulation is advantageous because of a much simpler and easier-to-remember formula, and simple approximate formulas for the [quantiles](https://en.wikipedia.org/wiki/Quantile "Quantile") of the distribution.
Normal distributions form an [exponential family](https://en.wikipedia.org/wiki/Exponential_family "Exponential family") with [natural parameters](https://en.wikipedia.org/wiki/Natural_parameter "Natural parameter") $\theta_1 = \frac{\mu}{\sigma^2}$ and $\theta_2 = -\frac{1}{2\sigma^2}$, and natural statistics $x$ and $x^2$. The dual expectation parameters for the normal distribution are $\eta_1 = \mu$ and $\eta_2 = \mu^2 + \sigma^2$.
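The variance, precision, and reciprocal-standard-deviation parameterizations are algebraically interchangeable; the following sketch (function names are illustrative) checks that they produce the same density value:

```python
import math

def pdf_variance(x, mu, sigma2):
    """Density parameterized by the variance sigma^2."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)

def pdf_precision(x, mu, tau):
    """Density parameterized by the precision tau = 1/sigma^2."""
    return math.sqrt(tau / (2 * math.pi)) * math.exp(-tau * (x - mu) ** 2 / 2)

def pdf_inv_sd(x, mu, tau_prime):
    """Density parameterized by tau' = 1/sigma."""
    return tau_prime / math.sqrt(2 * math.pi) * math.exp(-(tau_prime ** 2) * (x - mu) ** 2 / 2)

x, mu, sigma2 = 0.7, 1.0, 4.0
vals = (pdf_variance(x, mu, sigma2),
        pdf_precision(x, mu, 1 / sigma2),
        pdf_inv_sd(x, mu, 1 / math.sqrt(sigma2)))
assert max(vals) - min(vals) < 1e-12
```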
### Cumulative distribution function
The [cumulative distribution function](https://en.wikipedia.org/wiki/Cumulative_distribution_function "Cumulative distribution function") (CDF) of the standard normal distribution, usually denoted with the capital Greek letter $\Phi$, is the integral

$$\Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-t^2/2}\, dt\,.$$
The related [error function](https://en.wikipedia.org/wiki/Error_function "Error function") $\operatorname{erf}(x)$ gives the probability of a random variable, with normal distribution of mean 0 and variance 1/2, falling in the range $[-x, x]$. That is:

$$\operatorname{erf}(x) = \frac{1}{\sqrt{\pi}} \int_{-x}^{x} e^{-t^2}\, dt = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^2}\, dt\,.$$
These integrals cannot be expressed in terms of elementary functions, and are often said to be [special functions](https://en.wikipedia.org/wiki/Special_function "Special function"). However, many numerical approximations are known; see [below](https://en.wikipedia.org/wiki/Normal_distribution#Numerical_approximations_for_the_normal_cumulative_distribution_function_and_normal_quantile_function) for more.
The two functions are closely related, namely

$$\Phi(x) = \frac{1}{2}\left[1 + \operatorname{erf}\left(\frac{x}{\sqrt{2}}\right)\right].$$
For a generic normal distribution with density $f$, mean $\mu$ and variance $\sigma^2$, the cumulative distribution function is

$$F(x) = \Phi\left(\frac{x-\mu}{\sigma}\right) = \frac{1}{2}\left[1 + \operatorname{erf}\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)\right].$$
The probability that $x$ lies between $a$ and $b$ with $a < b$ is therefore[\[16\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-KunIlPark-16): 84

$$\operatorname{P}(a < x \le b) = \frac{1}{2}\left[\operatorname{erf}\left(\frac{b-\mu}{\sigma\sqrt{2}}\right) - \operatorname{erf}\left(\frac{a-\mu}{\sigma\sqrt{2}}\right)\right].$$
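Because $\Phi$ reduces to the error function, these probabilities can be computed with just the standard library; a minimal Python sketch (helper names are illustrative):

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """F(x) = Phi((x - mu)/sigma), expressed through the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def prob_between(a, b, mu=0.0, sigma=1.0):
    """P(a < X <= b) as a difference of two erf terms."""
    return normal_cdf(b, mu, sigma) - normal_cdf(a, mu, sigma)

# About 68.27% of the mass lies within one standard deviation of the mean.
print(prob_between(-1, 1))  # ~ 0.6826894921370859
```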
The complement of the standard normal cumulative distribution function, $Q(x) = 1 - \Phi(x)$, is often called the [Q-function](https://en.wikipedia.org/wiki/Q-function "Q-function"), especially in engineering texts.[\[17\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-17)[\[18\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-18) It gives the probability that the value of a standard normal random variable $X$ will exceed $x$: $P(X > x)$. Other definitions of the $Q$-function, all of which are simple transformations of $\Phi$, are also used occasionally.[\[19\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-19)
The [graph](https://en.wikipedia.org/wiki/Graph_of_a_function "Graph of a function") of the standard normal cumulative distribution function $\Phi$ has 2-fold [rotational symmetry](https://en.wikipedia.org/wiki/Rotational_symmetry "Rotational symmetry") around the point $(0, 1/2)$; that is, $\Phi(-x) = 1 - \Phi(x)$. Its [antiderivative](https://en.wikipedia.org/wiki/Antiderivative "Antiderivative") (indefinite integral) can be expressed as follows:

$$\int \Phi(x)\, dx = x\Phi(x) + \varphi(x) + C.$$
An [asymptotic expansion](https://en.wikipedia.org/wiki/Asymptotic_expansion "Asymptotic expansion") of the cumulative distribution function for large $x$ can be derived using [integration by parts](https://en.wikipedia.org/wiki/Integration_by_parts "Integration by parts"):

$$\Phi(x) = \frac{1}{2} + \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2} \sum_{n=0}^{\infty} \frac{x^{2n+1}}{(2n+1)!!}\,,$$

where $!!$ denotes the [double factorial](https://en.wikipedia.org/wiki/Double_factorial "Double factorial"). For more, see [Error function § Asymptotic expansion](https://en.wikipedia.org/wiki/Error_function#Asymptotic_expansion "Error function").[\[20\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-20)
#### Taylor series representation
The [Taylor series](https://en.wikipedia.org/wiki/Taylor_series "Taylor series") for the normal distribution $\varphi$ can be derived by substituting $-\tfrac{1}{2}x^2$ into the [Taylor series for the exponential function](https://en.wikipedia.org/wiki/Exponential_function#Power_series "Exponential function"):[\[21\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-duff-21)

$$\varphi(x) = \frac{1}{\sqrt{2\pi}} \sum_{n=0}^{\infty} \frac{(-1)^n}{n!\, 2^n}\, x^{2n}.$$
This series can be integrated term by term to obtain the Taylor series for the cumulative distribution function:[\[22\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-kendall-22)

$$\Phi(x) = \frac{1}{2} + \frac{1}{\sqrt{2\pi}} \sum_{n=0}^{\infty} \frac{(-1)^n}{n!\, 2^n (2n+1)}\, x^{2n+1}.$$

However, this series is ineffective for calculation due to slow convergence, except when $x$ is small.[\[22\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-kendall-22)
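As a sanity check of this series, a short Python sketch comparing the truncated sum with the erf-based CDF (function names are illustrative; the series converges for all $x$, but floating-point cancellation degrades it as $|x|$ grows):

```python
import math

def normal_cdf_series(x, terms=40):
    """Truncated Taylor series for Phi(x) about 0."""
    total = 0.0
    for n in range(terms):
        total += (-1) ** n * x ** (2 * n + 1) / (math.factorial(n) * 2 ** n * (2 * n + 1))
    return 0.5 + total / math.sqrt(2 * math.pi)

# Reference value via the error function.
exact = 0.5 * (1 + math.erf(1.0 / math.sqrt(2)))
print(normal_cdf_series(1.0), exact)  # both ~ 0.841345
```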
Both of these series describe [entire functions](https://en.wikipedia.org/wiki/Entire_function "Entire function"), which converge for all real and complex values of $x$.
#### Recursive computation with Taylor series
The recurrence relation for [Hermite polynomials](https://en.wikipedia.org/wiki/Hermite_polynomials "Hermite polynomials") $\operatorname{He}_n(x)$ may be used to efficiently construct the [Taylor series](https://en.wikipedia.org/wiki/Taylor_series "Taylor series") expansion about any point $x_0$:

$$\Phi(x) = \sum_{n=0}^{\infty} \frac{\Phi^{(n)}(x_0)}{n!}\,(x - x_0)^n\,,$$

where:

$$\begin{aligned}\Phi^{(0)}(x_0) &= \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x_0} e^{-t^2/2}\, dt\\ \Phi^{(1)}(x_0) &= \frac{1}{\sqrt{2\pi}}\, e^{-x_0^2/2}\\ \Phi^{(n)}(x_0) &= -\left(x_0\,\Phi^{(n-1)}(x_0) + (n-2)\,\Phi^{(n-2)}(x_0)\right), \quad n \ge 2\,.\end{aligned}$$
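A minimal Python sketch of this recurrence (illustrative names; the base case $\Phi(x_0)$ is taken from the erf relation above rather than by integration):

```python
import math

def normal_cdf_about(x, x0, terms=30):
    """Evaluate Phi(x) from a Taylor expansion about x0, using the
    derivative recurrence Phi^(n) = -(x0*Phi^(n-1) + (n-2)*Phi^(n-2))."""
    d = [0.0] * terms
    d[0] = 0.5 * (1 + math.erf(x0 / math.sqrt(2)))         # Phi(x0)
    d[1] = math.exp(-x0 * x0 / 2) / math.sqrt(2 * math.pi)  # Phi'(x0) = phi(x0)
    for n in range(2, terms):
        d[n] = -(x0 * d[n - 1] + (n - 2) * d[n - 2])
    h = x - x0
    return sum(d[n] * h ** n / math.factorial(n) for n in range(terms))

print(normal_cdf_about(1.3, 1.0))                 # ~ 0.9032
print(0.5 * (1 + math.erf(1.3 / math.sqrt(2))))   # reference value
```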
#### Standard deviation and coverage
Further information: [Interval estimation](https://en.wikipedia.org/wiki/Interval_estimation "Interval estimation") and [Coverage probability](https://en.wikipedia.org/wiki/Coverage_probability "Coverage probability")
*Figure: For the normal distribution, the values less than one standard deviation from the mean account for 68.27% of the set; two standard deviations from the mean account for 95.45%; and three standard deviations account for 99.73%.*
About 68% of values drawn from a normal distribution are within one standard deviation $\sigma$ from the mean; about 95% of the values lie within two standard deviations; and about 99.7% are within three standard deviations.[\[9\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-www.mathsisfun.com-9) This is known as the [68–95–99.7 (empirical) rule](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule "68–95–99.7 rule"), or the *3-sigma rule*.
More precisely, the probability that a normal deviate lies in the range between $\mu - n\sigma$ and $\mu + n\sigma$ is given by

$$F(\mu + n\sigma) - F(\mu - n\sigma) = \Phi(n) - \Phi(-n) = \operatorname{erf}\left(\frac{n}{\sqrt{2}}\right).$$

To 12 significant digits, the values for $n = 1, 2, \ldots, 6$ are:
| $n$ | $p = F(\mu + n\sigma) - F(\mu - n\sigma)$ | $1 - p$ | or 1 in $(1 - p)$ | [OEIS](https://en.wikipedia.org/wiki/On-Line_Encyclopedia_of_Integer_Sequences "On-Line Encyclopedia of Integer Sequences") |
|---|---|---|---|---|
| 1 | 0.682689492137 | 0.317310507863 | 3.15148718753 | [A178647](https://oeis.org/A178647 "oeis:A178647") |
| 2 | 0.954499736104 | 0.045500263896 | 21.9778945080 | [A110894](https://oeis.org/A110894 "oeis:A110894") |
| 3 | 0.997300203937 | 0.002699796063 | 370.398347345 | [A270712](https://oeis.org/A270712 "oeis:A270712") |
| 4 | 0.999936657516 | 0.000063342484 | 15787.1927673 | |
| 5 | 0.999999426697 | 0.000000573303 | 1744277.89362 | |
| 6 | 0.999999998027 | 0.000000001973 | 506797345.897 | |
For large $n$, one can use the approximation

$$1 - p \approx \frac{\sqrt{2}}{n\sqrt{\pi e^{n^2}}}\,.$$
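A short Python sketch comparing the exact coverage $\operatorname{erf}(n/\sqrt{2})$ with this large-$n$ approximation (names are illustrative):

```python
import math

def coverage(n):
    """Exact P(mu - n*sigma < X < mu + n*sigma) = erf(n / sqrt(2))."""
    return math.erf(n / math.sqrt(2))

def tail_approx(n):
    """Large-n approximation: 1 - p ~ sqrt(2) / (n * sqrt(pi * e^(n^2)))."""
    return math.sqrt(2) / (n * math.sqrt(math.pi) * math.exp(n * n / 2))

for n in range(1, 7):
    print(n, 1 - coverage(n), tail_approx(n))  # approximation improves with n
```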
#### Quantile function
Further information: [Quantile function § Normal distribution](https://en.wikipedia.org/wiki/Quantile_function#Normal_distribution "Quantile function")
The [quantile function](https://en.wikipedia.org/wiki/Quantile_function "Quantile function") of a distribution is the inverse of the cumulative distribution function. The quantile function of the standard normal distribution is called the [probit function](https://en.wikipedia.org/wiki/Probit_function "Probit function"), and can be expressed in terms of the inverse [error function](https://en.wikipedia.org/wiki/Error_function "Error function"):

$$\Phi^{-1}(p) = \sqrt{2}\,\operatorname{erf}^{-1}(2p - 1), \quad p \in (0, 1).$$

For a normal random variable with mean $\mu$ and variance $\sigma^2$, the quantile function is

$$F^{-1}(p) = \mu + \sigma\,\Phi^{-1}(p) = \mu + \sigma\sqrt{2}\,\operatorname{erf}^{-1}(2p - 1), \quad p \in (0, 1).$$

The [quantile](https://en.wikipedia.org/wiki/Quantile "Quantile") $\Phi^{-1}(p)$ of the standard normal distribution is commonly denoted as $z_p$. These values are used in [hypothesis testing](https://en.wikipedia.org/wiki/Hypothesis_testing "Hypothesis testing"), construction of [confidence intervals](https://en.wikipedia.org/wiki/Confidence_interval "Confidence interval") and [Q–Q plots](https://en.wikipedia.org/wiki/Q%E2%80%93Q_plot "Q–Q plot"). A normal random variable $X$ will exceed $\mu + z_p\sigma$ with probability $1 - p$, and will lie outside the interval $\mu \pm z_p\sigma$ with probability $2(1 - p)$. In particular, the quantile $z_{0.975}$ is [1.96](https://en.wikipedia.org/wiki/1.96 "1.96"); therefore a normal random variable will lie outside the interval $\mu \pm 1.96\sigma$ in only 5% of cases.
The following table gives the quantile $z_p$ such that $X$ will lie in the range $\mu \pm z_p\sigma$ with a specified probability $p$. These values are useful to determine [tolerance intervals](https://en.wikipedia.org/wiki/Tolerance_interval "Tolerance interval") for [sample averages](https://en.wikipedia.org/wiki/Sample_mean_and_sample_covariance#Sample_mean "Sample mean and sample covariance") and other statistical [estimators](https://en.wikipedia.org/wiki/Estimator "Estimator") with normal (or [asymptotically](https://en.wikipedia.org/wiki/Asymptotic "Asymptotic") normal) distributions.[\[23\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-23) The table shows $\sqrt{2}\,\operatorname{erf}^{-1}(p) = \Phi^{-1}\left(\frac{p+1}{2}\right)$, not $\Phi^{-1}(p)$ as defined above.
| $p$ | $z_p = \sqrt{2}\,\operatorname{erf}^{-1}(p)$ | $p$ | $z_p = \sqrt{2}\,\operatorname{erf}^{-1}(p)$ |
|---|---|---|---|
| 0.80 | 1.281552 | 0.995 | 2.807034 |
| 0.90 | 1.644854 | 0.998 | 3.090232 |
| 0.95 | 1.959964 | 0.999 | 3.290527 |
| 0.98 | 2.326348 | 0.9999 | 3.890592 |
| 0.99 | 2.575829 | 0.99999 | 4.417173 |
For small $p$, the quantile function has the useful [asymptotic expansion](https://en.wikipedia.org/wiki/Asymptotic_expansion "Asymptotic expansion")

$$\Phi^{-1}(p) = -\sqrt{\ln\frac{1}{p^2} - \ln\ln\frac{1}{p^2} - \ln(2\pi)} + o(1).$$\[*[citation needed](https://en.wikipedia.org/wiki/Wikipedia:Citation_needed "Wikipedia:Citation needed")*\]
#### Using root finding to compute the quantile function
Any of the described approaches for computing the cumulative distribution function $\Phi(x)$ can be used with [Newton's method](https://en.wikipedia.org/wiki/Newton%27s_method "Newton's method") (or another [root-finding algorithm](https://en.wikipedia.org/wiki/Root-finding_algorithm "Root-finding algorithm") such as [Halley's method](https://en.wikipedia.org/wiki/Halley%27s_method "Halley's method")) to find the value of $x$ for which $\Phi(x) = q$ for some desired quantile $q$. For example, starting with an initial, approximately correct guess $x_0$, increasingly better approximations $x_1, x_2, \ldots$ can be calculated iteratively using Newton's method with

$$x_n = x_{n-1} - \frac{\Phi(x_{n-1}) - q}{\varphi(x_{n-1})}\,.$$
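A minimal Python sketch of this iteration, using the erf-based CDF from earlier as the $\Phi$ oracle (function names are illustrative):

```python
import math

def phi(x):
    """Standard normal density."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def probit(q, x0=0.0, tol=1e-12, max_iter=50):
    """Solve Phi(x) = q with Newton steps x <- x - (Phi(x) - q)/phi(x)."""
    x = x0
    for _ in range(max_iter):
        step = (Phi(x) - q) / phi(x)
        x -= step
        if abs(step) < tol:
            break
    return x

print(probit(0.975))  # ~ 1.96, the z_{0.975} quantile mentioned above
```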
## Properties
The normal distribution is the only distribution whose [cumulants](https://en.wikipedia.org/wiki/Cumulant "Cumulant") beyond the first two (i.e., other than the mean and [variance](https://en.wikipedia.org/wiki/Variance "Variance")) are zero. It is also the continuous distribution with the [maximum entropy](https://en.wikipedia.org/wiki/Maximum_entropy_probability_distribution "Maximum entropy probability distribution") for a specified mean and variance.[\[24\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-FOOTNOTECoverThomas2006254-24)[\[25\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-25) Geary has shown, assuming that the mean and variance are finite, that the normal distribution is the only distribution where the mean and variance calculated from a set of independent draws are independent of each other.[\[26\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Geary_RC-26)[\[27\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-27)
The normal distribution is a subclass of the [elliptical distributions](https://en.wikipedia.org/wiki/Elliptical_distribution "Elliptical distribution"). The normal distribution is [symmetric](https://en.wikipedia.org/wiki/Symmetric_distribution "Symmetric distribution") about its mean, and is non-zero over the entire real line. As such it may not be a suitable model for variables that are inherently positive or strongly skewed, such as the [weight](https://en.wikipedia.org/wiki/Weight "Weight") of a person or the price of a [share of stock](https://en.wikipedia.org/wiki/Share_\(finance\) "Share (finance)"). Such variables may be better described by other distributions, such as the [log-normal distribution](https://en.wikipedia.org/wiki/Log-normal_distribution "Log-normal distribution") or the [Pareto distribution](https://en.wikipedia.org/wiki/Pareto_distribution "Pareto distribution").
The value of the normal density is practically zero when the value $x$ lies more than a few [standard deviations](https://en.wikipedia.org/wiki/Standard_deviation "Standard deviation") away from the mean (e.g., a spread of three standard deviations covers all but 0.27% of the total distribution). Therefore, it may not be an appropriate model when one expects a significant fraction of [outliers](https://en.wikipedia.org/wiki/Outlier "Outlier"), values that lie many standard deviations away from the mean, and least squares and other [statistical inference](https://en.wikipedia.org/wiki/Statistical_inference "Statistical inference") methods that are optimal for normally distributed variables often become highly unreliable when applied to such data. In those cases, a more [heavy-tailed](https://en.wikipedia.org/wiki/Heavy-tailed "Heavy-tailed") distribution should be assumed and appropriate [robust statistical inference](https://en.wikipedia.org/wiki/Robust_statistics "Robust statistics") methods applied.
The Gaussian distribution belongs to the family of [stable distributions](https://en.wikipedia.org/wiki/Stable_distribution "Stable distribution") which are the attractors of sums of [independent, identically distributed](https://en.wikipedia.org/wiki/Independent,_identically_distributed "Independent, identically distributed") distributions whether or not the mean or variance is finite. Except for the Gaussian which is a limiting case, all stable distributions have heavy tails and infinite variance. It is one of the few distributions that are stable and that have probability density functions that can be expressed analytically, the others being the [Cauchy distribution](https://en.wikipedia.org/wiki/Cauchy_distribution "Cauchy distribution") and the [Lévy distribution](https://en.wikipedia.org/wiki/L%C3%A9vy_distribution "Lévy distribution").
### Symmetries and derivatives
The normal distribution with density $f(x)$ (mean $\mu$ and variance $\sigma^2 > 0$) has the following properties:
- It is symmetric around the point $x = \mu,$ which is at the same time the [mode](https://en.wikipedia.org/wiki/Mode_\(statistics\) "Mode (statistics)"), the [median](https://en.wikipedia.org/wiki/Median "Median") and the [mean](https://en.wikipedia.org/wiki/Mean "Mean") of the distribution.[\[28\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Patel-28)
- It is [unimodal](https://en.wikipedia.org/wiki/Unimodal "Unimodal"): its first [derivative](https://en.wikipedia.org/wiki/Derivative "Derivative") is positive for $x < \mu,$ negative for $x > \mu,$ and zero only at $x = \mu.$
- The area bounded by the curve and the $x$-axis is unity (i.e. equal to one).
- Its first derivative is $f'(x) = -\frac{x - \mu}{\sigma^2}\, f(x).$
- Its second derivative is $f''(x) = \frac{(x - \mu)^2 - \sigma^2}{\sigma^4}\, f(x).$
- Its density has two [inflection points](https://en.wikipedia.org/wiki/Inflection_point "Inflection point") (where the second derivative of $f$ is zero and changes sign), located one standard deviation away from the mean, namely at $x = \mu - \sigma$ and $x = \mu + \sigma.$[\[28\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Patel-28)
- Its density is [log-concave](https://en.wikipedia.org/wiki/Logarithmically_concave_function "Logarithmically concave function").[\[28\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Patel-28)
- Its density is infinitely [differentiable](https://en.wikipedia.org/wiki/Differentiable "Differentiable"), indeed [supersmooth](https://en.wikipedia.org/wiki/Supersmooth "Supersmooth") of order 2.[\[29\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-29)
Furthermore, the density $\varphi$ of the standard normal distribution (i.e. $\mu = 0$ and $\sigma = 1$) also has the following properties:
- Its first derivative is $\varphi'(x) = -x\,\varphi(x).$
- Its second derivative is $\varphi''(x) = (x^2 - 1)\,\varphi(x).$
- More generally, its $n$th derivative is $\varphi^{(n)}(x) = (-1)^n \operatorname{He}_n(x)\,\varphi(x),$ where $\operatorname{He}_n(x)$ is the $n$th (probabilist) [Hermite polynomial](https://en.wikipedia.org/wiki/Hermite_polynomial "Hermite polynomial").[\[30\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-30)
- The probability that a normally distributed variable $X$ with known $\mu$ and $\sigma^2$ is in a particular set can be calculated given that the fraction $Z = (X - \mu)/\sigma$ has a standard normal distribution.
### Moments
See also: [List of integrals of Gaussian functions](https://en.wikipedia.org/wiki/List_of_integrals_of_Gaussian_functions "List of integrals of Gaussian functions")
The plain and absolute [moments](https://en.wikipedia.org/wiki/Moment_\(mathematics\) "Moment (mathematics)") of a variable $X$ are the expected values of $X^p$ and $|X|^p$, respectively. If the expected value $\mu$ of $X$ is zero, these parameters are called *central moments*; otherwise, these parameters are called *non-central moments*. Usually we are interested only in moments with integer order $p$.
If $X$ has a normal distribution, the non-central moments exist and are finite for any $p$ whose real part is greater than −1. For any non-negative integer $p$, the plain central moments are:[\[31\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-31)

$$\operatorname{E}\left[(X - \mu)^p\right] = \begin{cases}0 & \text{if } p \text{ is odd,}\\ \sigma^p\,(p-1)!! & \text{if } p \text{ is even.}\end{cases}$$

Here $n!!$ denotes the [double factorial](https://en.wikipedia.org/wiki/Double_factorial "Double factorial"), that is, the product of all numbers from $n$ to 1 that have the same parity as $n.$
The central absolute moments coincide with plain moments for all even orders, but are nonzero for odd orders. For any non-negative integer $p,$

$$\begin{aligned}\operatorname{E}\left[|X - \mu|^p\right] &= \sigma^p\,(p-1)!! \cdot \begin{cases}\sqrt{\frac{2}{\pi}} & \text{if } p \text{ is odd}\\ 1 & \text{if } p \text{ is even}\end{cases}\\ &= \sigma^p \cdot \frac{2^{p/2}\,\Gamma\left(\frac{p+1}{2}\right)}{\sqrt{\pi}}.\end{aligned}$$

The last formula is valid also for any non-integer $p > -1.$ When the mean $\mu \neq 0,$ the plain and absolute moments can be expressed in terms of [confluent hypergeometric functions](https://en.wikipedia.org/wiki/Confluent_hypergeometric_function "Confluent hypergeometric function") ${}_1F_1$ and $U.$[\[32\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-32)

$$\begin{aligned}\operatorname{E}\left[X^p\right] &= \sigma^p \cdot \left(-i\sqrt{2}\right)^p\, U\left(-\frac{p}{2}, \frac{1}{2}, -\frac{\mu^2}{2\sigma^2}\right),\\ \operatorname{E}\left[|X|^p\right] &= \sigma^p \cdot 2^{p/2}\, \frac{\Gamma\left(\frac{1+p}{2}\right)}{\sqrt{\pi}}\, {}_1F_1\left(-\frac{p}{2}, \frac{1}{2}, -\frac{\mu^2}{2\sigma^2}\right).\end{aligned}$$
These expressions remain valid even when $p > -1$ is not an integer. See also [generalized Hermite polynomials](https://en.wikipedia.org/wiki/Hermite_polynomials#%22Negative_variance%22 "Hermite polynomials").
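A quick numerical check of the even central moments against the double-factorial formula, via Monte Carlo (a sketch using only the standard library; names are illustrative):

```python
import math
import random

def central_moment_formula(p, sigma):
    """E[(X - mu)^p] = 0 for odd p, sigma^p * (p-1)!! for even p."""
    if p % 2 == 1:
        return 0.0
    double_fact = 1
    for k in range(p - 1, 0, -2):
        double_fact *= k
    return sigma ** p * double_fact

random.seed(0)
mu, sigma, n = 1.0, 2.0, 200_000
samples = [random.gauss(mu, sigma) for _ in range(n)]
for p in (2, 4):
    estimate = sum((x - mu) ** p for x in samples) / n
    print(p, estimate, central_moment_formula(p, sigma))  # ~ 4 and ~ 48
```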
| Order | Non-central moment, $\operatorname{E}\left[X^p\right]$ |
|---|---|
| 1 | $\mu$ |
| 2 | $\mu^2 + \sigma^2$ |
| 3 | $\mu^3 + 3\mu\sigma^2$ |
| 4 | $\mu^4 + 6\mu^2\sigma^2 + 3\sigma^4$ |
The expectation of $X$ conditioned on the event that $X$ lies in an interval $[a, b]$ is given by

$$\operatorname{E}\left[X \mid a < X < b\right] = \mu - \sigma^2\, \frac{f(b) - f(a)}{F(b) - F(a)}\,,$$

where $f$ and $F$ respectively are the density and the cumulative distribution function of $X$. For $b = \infty$ this is known as the [inverse Mills ratio](https://en.wikipedia.org/wiki/Inverse_Mills_ratio "Inverse Mills ratio"). Note that above, the density $f$ of $X$ is used instead of the standard normal density as in the inverse Mills ratio, so here we have $\sigma^2$ instead of $\sigma$.
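A sketch cross-checking this conditional-expectation formula against simulation (illustrative helper names, standard library only):

```python
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def normal_cdf(x, mu, sigma):
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def truncated_mean(a, b, mu, sigma):
    """E[X | a < X < b] = mu - sigma^2 * (f(b) - f(a)) / (F(b) - F(a))."""
    num = normal_pdf(b, mu, sigma) - normal_pdf(a, mu, sigma)
    den = normal_cdf(b, mu, sigma) - normal_cdf(a, mu, sigma)
    return mu - sigma ** 2 * num / den

random.seed(1)
mu, sigma, a, b = 0.0, 1.0, 0.5, 2.0
inside = [x for x in (random.gauss(mu, sigma) for _ in range(500_000)) if a < x < b]
print(truncated_mean(a, b, mu, sigma), sum(inside) / len(inside))  # both ~ 1.04
```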
### Fourier transform and characteristic function
The [Fourier transform](https://en.wikipedia.org/wiki/Fourier_transform "Fourier transform") of a normal density $f$ with mean $\mu$ and variance $\sigma^2$ is[\[33\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-33)

$$\hat{f}(t) = \int_{-\infty}^{\infty} f(x)\, e^{-itx}\, dx = e^{-i\mu t}\, e^{-\frac{1}{2}\sigma^2 t^2}\,,$$
where $i$ is the [imaginary unit](https://en.wikipedia.org/wiki/Imaginary_unit "Imaginary unit"). If the mean $\mu = 0$, the first factor is 1, and the Fourier transform is, apart from a constant factor, a normal density on the [frequency domain](https://en.wikipedia.org/wiki/Frequency_domain "Frequency domain"), with mean 0 and variance $1/\sigma^2$. In particular, the standard normal distribution $\varphi$ is an [eigenfunction](https://en.wikipedia.org/wiki/Fourier_transform#Eigenfunctions "Fourier transform") of the Fourier transform.
In probability theory, the Fourier transform of the probability distribution of a real-valued random variable $X$ is closely connected to the [characteristic function](https://en.wikipedia.org/wiki/Characteristic_function_\(probability_theory\) "Characteristic function (probability theory)") $\varphi_X(t)$ of that variable, which is defined as the [expected value](https://en.wikipedia.org/wiki/Expected_value "Expected value") of $e^{itX}$, as a function of the real variable $t$ (the [frequency](https://en.wikipedia.org/wiki/Frequency "Frequency") parameter of the Fourier transform). This definition can be analytically extended to a complex-valued variable $t$.[\[34\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-34) The relation between both is:

$$\varphi_X(t) = \hat{f}(-t)\,.$$
The real and imaginary parts of $\hat{f}(t) = \operatorname{E}[e^{-itx}] = e^{-i\mu t}\, e^{-\frac{1}{2}\sigma^2 t^2}$ give:

$$\operatorname{E}[\cos(tx)] = \cos(\mu t)\, e^{-\frac{1}{2}\sigma^2 t^2} \quad\text{and}\quad \operatorname{E}[\sin(tx)] = \sin(\mu t)\, e^{-\frac{1}{2}\sigma^2 t^2}.$$

Similarly,

$$\operatorname{E}[\cosh(tx)] = \cosh(\mu t)\, e^{\frac{1}{2}\sigma^2 t^2} \quad\text{and}\quad \operatorname{E}[\sinh(tx)] = \sinh(\mu t)\, e^{\frac{1}{2}\sigma^2 t^2}.$$
These formulas evaluated at $t = 1$ give the expected value of these basic trigonometric and hyperbolic functions over a Gaussian random variable $X \sim N(\mu, \sigma^2)$, which also could be seen as consequences of [Isserlis's theorem](https://en.wikipedia.org/wiki/Isserlis%27s_theorem "Isserlis's theorem").
### Moment- and cumulant-generating functions
The [moment generating function](https://en.wikipedia.org/wiki/Moment_generating_function "Moment generating function") of a real random variable $X$ is the expected value of $e^{tX}$, as a function of the real parameter $t$. For a normal distribution with density $f$, mean $\mu$ and variance $\sigma^2$, the moment generating function exists and is equal to

$$M(t) = \operatorname{E}\left[e^{tX}\right] = \hat{f}(it) = e^{\mu t}\, e^{\sigma^2 t^2/2}\,.$$

For any $k$, the coefficient of $t^k/k!$ in the moment generating function (expressed as an [exponential power series](https://en.wikipedia.org/wiki/Generating_function#Exponential_generating_function_\(EGF\) "Generating function") in $t$) is the normal distribution's expected value $\operatorname{E}[X^k]$.
The [cumulant generating function](https://en.wikipedia.org/wiki/Cumulant_generating_function "Cumulant generating function") is the logarithm of the moment generating function, namely

$$g(t) = \ln M(t) = \mu t + \tfrac{1}{2}\sigma^2 t^2\,.$$
The coefficients of this exponential power series define the cumulants, but because this is a quadratic polynomial in $t$, only the first two [cumulants](https://en.wikipedia.org/wiki/Cumulant "Cumulant") are nonzero, namely the mean $\mu$ and the variance $\sigma^2$.
Some authors prefer to instead work with the [characteristic function](https://en.wikipedia.org/wiki/Characteristic_function_\(probability_theory\) "Characteristic function (probability theory)") $\operatorname{E}[e^{itX}] = e^{i\mu t - \sigma^2 t^2/2}$ and $\ln \operatorname{E}[e^{itX}] = i\mu t - \tfrac{1}{2}\sigma^2 t^2$.
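A quick Monte Carlo check of the closed-form moment generating function (a sketch using only the standard library; names are illustrative):

```python
import math
import random

def mgf_closed_form(t, mu, sigma):
    """M(t) = exp(mu*t + sigma^2 * t^2 / 2) for X ~ N(mu, sigma^2)."""
    return math.exp(mu * t + sigma ** 2 * t ** 2 / 2)

random.seed(2)
mu, sigma, t, n = 0.5, 1.5, 0.3, 200_000
mc = sum(math.exp(t * random.gauss(mu, sigma)) for _ in range(n)) / n
print(mc, mgf_closed_form(t, mu, sigma))  # both ~ 1.286
```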
### Stein operator and class
Within [Stein's method](https://en.wikipedia.org/wiki/Stein%27s_method "Stein's method") the Stein operator and class of a random variable $X \sim \mathcal{N}(\mu, \sigma^2)$ are $\mathcal{A}f(x) = \sigma^2 f'(x) - (x - \mu)f(x)$ and $\mathcal{F}$ the class of all absolutely continuous functions $f : \mathbb{R} \to \mathbb{R}$ such that $\operatorname{E}[|f'(X)|] < \infty$.
### Zero-variance limit
In the [limit](https://en.wikipedia.org/wiki/Limit_\(mathematics\) "Limit (mathematics)") when $\sigma^2$ approaches zero, the probability density $f$ approaches zero everywhere except at $\mu$, where it approaches $\infty$, while its integral remains equal to 1. An extension of the normal distribution to the case with zero variance can be defined using the [Dirac delta measure](https://en.wikipedia.org/wiki/Dirac_measure "Dirac measure") $\delta_\mu$, although the resulting random variables are not [absolutely continuous](https://en.wikipedia.org/wiki/Absolutely_continuous_random_variable "Absolutely continuous random variable") and thus do not have [probability density functions](https://en.wikipedia.org/wiki/Probability_density_function "Probability density function"). The cumulative distribution function of such a random variable is then the [Heaviside step function](https://en.wikipedia.org/wiki/Heaviside_step_function "Heaviside step function") translated by the mean $\mu$, namely

$$F(x) = \begin{cases}0 & \text{if } x < \mu\\ 1 & \text{if } x \ge \mu.\end{cases}$$
### Maximum entropy
Of all probability distributions over the reals with a specified finite mean $\mu$ and finite variance $\sigma^2$, the normal distribution $N(\mu, \sigma^2)$ is the one with [maximum entropy](https://en.wikipedia.org/wiki/Maximum_entropy_probability_distribution "Maximum entropy probability distribution").[\[24\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-FOOTNOTECoverThomas2006254-24) To see this, let $X$ be a [continuous random variable](https://en.wikipedia.org/wiki/Continuous_random_variable "Continuous random variable") with [probability density](https://en.wikipedia.org/wiki/Probability_density "Probability density") $f(x)$. The entropy of $X$ is defined as[\[35\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-35)[\[36\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-36)[\[37\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-37)

$$H(X) = -\int_{-\infty}^{\infty} f(x)\ln f(x)\, dx\,,$$

where $f(x)\log f(x)$ is understood to be zero whenever $f(x) = 0$. This functional can be maximized, subject to the constraints that the distribution is properly normalized and has a specified mean and variance, by using [variational calculus](https://en.wikipedia.org/wiki/Variational_calculus "Variational calculus"). A function with three [Lagrange multipliers](https://en.wikipedia.org/wiki/Lagrange_multipliers "Lagrange multipliers") is defined:

$$L = -\int_{-\infty}^{\infty} f(x)\ln f(x)\, dx - \lambda_0\left(1 - \int_{-\infty}^{\infty} f(x)\, dx\right) - \lambda_1\left(\mu - \int_{-\infty}^{\infty} f(x)\,x\, dx\right) - \lambda_2\left(\sigma^2 - \int_{-\infty}^{\infty} f(x)(x - \mu)^2\, dx\right)\,.$$
At maximum entropy, a small variation $\delta f(x)$ about $f(x)$ will produce a variation $\delta L$ about $L$ which is equal to 0:

$$0 = \delta L = \int_{-\infty}^{\infty} \delta f(x)\left(-\ln f(x) - 1 + \lambda_0 + \lambda_1 x + \lambda_2 (x - \mu)^2\right)\, dx\,.$$
Since this must hold for any small $\delta f(x)$, the factor multiplying $\delta f(x)$ must be zero, and solving for $f(x)$ yields:

$$f(x) = \exp\left(-1 + \lambda_0 + \lambda_1 x + \lambda_2 (x - \mu)^2\right)\,.$$
The Lagrange constraints that $f(x)$ is properly normalized and has the specified mean and variance are satisfied if and only if $\lambda_0$, $\lambda_1$, and $\lambda_2$ are chosen so that

$$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}\,.$$

The entropy of a normal distribution $X \sim N(\mu, \sigma^2)$ is equal to

$$H(X) = \tfrac{1}{2}(1 + \ln 2\sigma^2\pi)\,,$$

which is independent of the mean $\mu$.
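The closed-form entropy can be checked by numerically integrating $-f \ln f$; a minimal sketch (the grid width and step count are arbitrary choices):

```python
import math

def normal_entropy_closed_form(sigma):
    """H = (1/2) * (1 + ln(2*pi*sigma^2)), independent of mu."""
    return 0.5 * (1 + math.log(2 * math.pi * sigma ** 2))

def normal_entropy_numeric(mu, sigma, half_width=12.0, steps=100_000):
    """-Integral of f*ln(f) by a midpoint rule over mu +/- half_width*sigma."""
    h = 2 * half_width * sigma / steps
    total = 0.0
    for k in range(steps):
        x = mu - half_width * sigma + (k + 0.5) * h
        f = math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
        if f > 0:
            total -= f * math.log(f) * h
    return total

print(normal_entropy_closed_form(2.0), normal_entropy_numeric(5.0, 2.0))  # both ~ 2.112
```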
### Other properties
1. If the characteristic function {\\textstyle \\phi \_{X}} of some random variable {\\displaystyle X} is of the form {\\textstyle \\phi \_{X}(t)=\\exp Q(t)} in a neighborhood of zero, where {\\textstyle Q(t)} is a [polynomial](https://en.wikipedia.org/wiki/Polynomial "Polynomial"), then the **Marcinkiewicz theorem** (named after [JĂłzef Marcinkiewicz](https://en.wikipedia.org/wiki/J%C3%B3zef_Marcinkiewicz "JĂłzef Marcinkiewicz")) asserts that {\\displaystyle Q} can be at most a quadratic polynomial, and therefore {\\displaystyle X} is a normal random variable.[\[38\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Bryc_1995_35-38) The consequence of this result is that the normal distribution is the only distribution with a finite number (two) of non-zero [cumulants](https://en.wikipedia.org/wiki/Cumulant "Cumulant").
2. If {\\displaystyle X} and {\\displaystyle Y} are [jointly normal](https://en.wikipedia.org/wiki/Jointly_normal "Jointly normal") and [uncorrelated](https://en.wikipedia.org/wiki/Uncorrelated "Uncorrelated"), then they are [independent](https://en.wikipedia.org/wiki/Independence_\(probability_theory\) "Independence (probability theory)"). The requirement that {\\displaystyle X} and {\\displaystyle Y} be *jointly* normal is essential; without it the property does not hold.[\[39\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-39)[\[40\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-40)[\[proof\]](https://en.wikipedia.org/wiki/Normally_distributed_and_uncorrelated_does_not_imply_independent "Normally distributed and uncorrelated does not imply independent") For non-normal random variables uncorrelatedness does not imply independence.
3. The [Kullback–Leibler divergence](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence "Kullback–Leibler divergence") of one normal distribution {\\textstyle X\_{1}\\sim N(\\mu \_{1},\\sigma \_{1}^{2})} from another {\\textstyle X\_{2}\\sim N(\\mu \_{2},\\sigma \_{2}^{2})} is given by:[\[41\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-41) {\\displaystyle D\_{\\mathrm {KL} }(X\_{1}\\parallel X\_{2})={\\frac {(\\mu \_{1}-\\mu \_{2})^{2}}{2\\sigma \_{2}^{2}}}+{\\frac {1}{2}}\\left({\\frac {\\sigma \_{1}^{2}}{\\sigma \_{2}^{2}}}-1-\\ln {\\frac {\\sigma \_{1}^{2}}{\\sigma \_{2}^{2}}}\\right)} The [Hellinger distance](https://en.wikipedia.org/wiki/Hellinger_distance "Hellinger distance") between the same distributions is equal to {\\displaystyle H^{2}(X\_{1},X\_{2})=1-{\\sqrt {\\frac {2\\sigma \_{1}\\sigma \_{2}}{\\sigma \_{1}^{2}+\\sigma \_{2}^{2}}}}\\exp \\left(-{\\frac {1}{4}}{\\frac {(\\mu \_{1}-\\mu \_{2})^{2}}{\\sigma \_{1}^{2}+\\sigma \_{2}^{2}}}\\right)} (A numerical check of the divergence formula appears after this list.)

4. The [Fisher information matrix](https://en.wikipedia.org/wiki/Fisher_information_matrix "Fisher information matrix") for a normal distribution w.r.t. {\\displaystyle \\mu } and {\\textstyle \\sigma ^{2}} is diagonal and takes the form {\\displaystyle {\\mathcal {I}}(\\mu ,\\sigma ^{2})={\\begin{pmatrix}{\\frac {1}{\\sigma ^{2}}}&0\\\\0&{\\frac {1}{2\\sigma ^{4}}}\\end{pmatrix}}}

5. The [conjugate prior](https://en.wikipedia.org/wiki/Conjugate_prior "Conjugate prior") of the mean of a normal distribution is another normal distribution.[\[42\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-42) Specifically, if {\\textstyle x\_{1},\\ldots ,x\_{n}} are iid {\\textstyle \\sim N(\\mu ,\\sigma ^{2})} and the prior is {\\textstyle \\mu \\sim N(\\mu \_{0},\\sigma \_{0}^{2})}, then the posterior distribution for the estimator of {\\displaystyle \\mu } will be {\\displaystyle \\mu \\mid x\_{1},\\ldots ,x\_{n}\\sim {\\mathcal {N}}\\left({\\frac {{\\frac {\\sigma ^{2}}{n}}\\mu \_{0}+\\sigma \_{0}^{2}{\\bar {x}}}{{\\frac {\\sigma ^{2}}{n}}+\\sigma \_{0}^{2}}},\\left({\\frac {n}{\\sigma ^{2}}}+{\\frac {1}{\\sigma \_{0}^{2}}}\\right)^{-1}\\right)} (This update is sketched in code after this list.)

6. The family of normal distributions not only forms an [exponential family](https://en.wikipedia.org/wiki/Exponential_family "Exponential family") (EF), but in fact forms a [natural exponential family](https://en.wikipedia.org/wiki/Natural_exponential_family "Natural exponential family") (NEF) with quadratic [variance function](https://en.wikipedia.org/wiki/Variance_function "Variance function") ([NEF-QVF](https://en.wikipedia.org/wiki/NEF-QVF "NEF-QVF")). Many properties of normal distributions generalize to properties of NEF-QVF distributions, NEF distributions, or EF distributions generally. NEF-QVF distributions comprise six families, including the Poisson, gamma, binomial, and negative binomial distributions, while many of the common families studied in probability and statistics are NEF or EF.
7. In [information geometry](https://en.wikipedia.org/wiki/Information_geometry "Information geometry"), the family of normal distributions forms a [statistical manifold](https://en.wikipedia.org/wiki/Statistical_manifold "Statistical manifold") with [constant curvature](https://en.wikipedia.org/wiki/Constant_curvature "Constant curvature") {\\displaystyle -1}. The same family is [flat](https://en.wikipedia.org/wiki/Flat_manifold "Flat manifold") with respect to the (±1)-connections {\\textstyle \\nabla ^{(e)}} and {\\textstyle \\nabla ^{(m)}}.[\[43\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-43)
8. If {\\textstyle X\_{1},\\dots ,X\_{n}} are distributed according to {\\textstyle N(0,\\sigma ^{2})}, then {\\textstyle E\[\\max \_{i}X\_{i}\]\\leq \\sigma {\\sqrt {2\\ln n}}}. Note that there is no assumption of independence.[\[44\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-44)
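Several of the quantitative claims above are easy to probe numerically. The sketch below (our own, assuming NumPy and SciPy are available) verifies the closed-form Kullback–Leibler divergence of item 3 against direct integration, runs the conjugate posterior update of item 5 on simulated data, and checks the expected-maximum bound of item 8 by Monte Carlo:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

rng = np.random.default_rng(0)

# Item 3: closed-form KL divergence vs. numerical integration.
m1, s1, m2, s2 = 0.5, 1.2, -0.3, 2.0
closed = ((m1 - m2) ** 2 / (2 * s2**2)
          + 0.5 * (s1**2 / s2**2 - 1 - np.log(s1**2 / s2**2)))
integrand = lambda x: norm.pdf(x, m1, s1) * (norm.logpdf(x, m1, s1)
                                             - norm.logpdf(x, m2, s2))
numeric, _ = quad(integrand, -40, 40)
print(closed, numeric)                     # agree to high precision

# Item 5: posterior of mu with known sigma^2 and prior mu ~ N(mu0, sigma0^2).
sigma2, mu0, sigma0_2 = 4.0, 0.0, 10.0
x = rng.normal(2.5, np.sqrt(sigma2), size=50)   # data with true mean 2.5
n, xbar = len(x), x.mean()
post_mean = ((sigma2 / n) * mu0 + sigma0_2 * xbar) / (sigma2 / n + sigma0_2)
post_var = 1.0 / (n / sigma2 + 1.0 / sigma0_2)
print(post_mean, post_var)                 # concentrates near the true mean

# Item 8: E[max] of n standard normal draws vs. the bound sqrt(2 ln n);
# iid draws are used for simplicity, though the bound needs no independence.
n_max = 100
draws = rng.normal(size=(10_000, n_max))
print(draws.max(axis=1).mean())            # ~2.50
print(np.sqrt(2 * np.log(n_max)))          # bound: ~3.03
```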
## Related distributions
### Central limit theorem
As the number of discrete events increases, the function begins to resemble a normal distribution.
Comparison of probability density functions *p*(*k*) for the sum of *n* fair 6-sided dice, showing their convergence to a normal distribution with increasing *n*, in accordance with the central limit theorem. In the bottom-right graph, smoothed profiles of the previous graphs are rescaled, superimposed and compared with a normal distribution (black curve).
Main article: [Central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem "Central limit theorem")
The central limit theorem states that under certain (fairly common) conditions, the sum of many random variables will have an approximately normal distribution. More specifically, if {\\textstyle X\_{1},\\ldots ,X\_{n}} are [independent and identically distributed](https://en.wikipedia.org/wiki/Independent_and_identically_distributed "Independent and identically distributed") random variables with the same arbitrary distribution, zero mean, and variance {\\textstyle \\sigma ^{2}}, and {\\displaystyle Z} is their mean scaled by {\\textstyle {\\sqrt {n}}}, {\\displaystyle Z={\\sqrt {n}}{\\biggl (}{\\frac {1}{n}}\\sum \_{i=1}^{n}X\_{i}{\\biggr )},} then, as {\\displaystyle n} increases, the probability distribution of {\\displaystyle Z} will tend to the normal distribution with zero mean and variance {\\displaystyle \\sigma ^{2}}.
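A small simulation (our own, with centered uniform summands) makes the scaled-mean construction concrete:

```python
# Z = sqrt(n) * mean(X_i) for X_i ~ Uniform(-1, 1), which has mean 0 and
# variance 1/3; Z should be close to N(0, 1/3) for large n.
import numpy as np

rng = np.random.default_rng(3)
n = 500
xi = rng.uniform(-1, 1, size=(100_000, n))
z = np.sqrt(n) * xi.mean(axis=1)

print(z.mean(), z.var())   # ~0 and ~1/3
# excess kurtosis near 0 is a crude check of normality
print(((z - z.mean()) ** 4).mean() / z.var() ** 2 - 3)
```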
The theorem can be extended to variables {\\textstyle (X\_{i})} that are not independent and/or not identically distributed if certain constraints are placed on the degree of dependence and the moments of the distributions.
Many [test statistics](https://en.wikipedia.org/wiki/Test_statistic "Test statistic"), [scores](https://en.wikipedia.org/wiki/Score_\(statistics\) "Score (statistics)"), and [estimators](https://en.wikipedia.org/wiki/Estimator "Estimator") encountered in practice contain sums of random variables, and even more estimators can be represented as sums of random variables through the use of [influence functions](https://en.wikipedia.org/wiki/Influence_function_\(statistics\) "Influence function (statistics)"). The central limit theorem implies that those statistical parameters will have asymptotically normal distributions.
The central limit theorem also implies that certain distributions can be approximated by the normal distribution, for example:
- The [binomial distribution](https://en.wikipedia.org/wiki/Binomial_distribution "Binomial distribution") {\\textstyle B(n,p)} is [approximately normal](https://en.wikipedia.org/wiki/De_Moivre%E2%80%93Laplace_theorem "De Moivre–Laplace theorem") with mean {\\textstyle np} and variance {\\textstyle np(1-p)} for large {\\displaystyle n} and for {\\displaystyle p} not too close to 0 or 1.
- The [Poisson distribution](https://en.wikipedia.org/wiki/Poisson_distribution "Poisson distribution") with parameter {\\displaystyle \\lambda } is approximately normal with mean {\\displaystyle \\lambda } and variance {\\displaystyle \\lambda }, for large values of {\\displaystyle \\lambda }.[\[45\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-45)
- The [chi-squared distribution](https://en.wikipedia.org/wiki/Chi-squared_distribution "Chi-squared distribution") {\\textstyle \\chi ^{2}(k)} is approximately normal with mean {\\displaystyle k} and variance {\\textstyle 2k}, for large {\\displaystyle k}.
- The [Student's t-distribution](https://en.wikipedia.org/wiki/Student%27s_t-distribution "Student's t-distribution") {\\textstyle t(\\nu )} is approximately normal with mean 0 and variance 1 when {\\displaystyle \\nu } is large.
Whether these approximations are sufficiently accurate depends on the purpose for which they are needed, and the rate of convergence to the normal distribution. It is typically the case that such approximations are less accurate in the tails of the distribution.
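As an illustration of both the approximation and its weaker accuracy in the tails, the following sketch (ours, assuming SciPy) compares exact binomial probabilities with the normal approximation, using a continuity correction:

```python
# Normal approximation to Binomial(n, p): mean np, variance np(1-p).
import numpy as np
from scipy.stats import binom, norm

n, p = 400, 0.3
mean, sd = n * p, np.sqrt(n * p * (1 - p))

for k in (120, 130, 90):                   # near the center vs. in the tail
    exact = binom.cdf(k, n, p)
    approx = norm.cdf(k + 0.5, mean, sd)   # continuity correction
    print(k, exact, approx)                # tail values agree less well
```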
A general upper bound for the approximation error in the central limit theorem is given by the [Berry–Esseen theorem](https://en.wikipedia.org/wiki/Berry%E2%80%93Esseen_theorem "Berry–Esseen theorem"); improvements of the approximation are given by the [Edgeworth expansions](https://en.wikipedia.org/wiki/Edgeworth_expansion "Edgeworth expansion").
This theorem can also be used to justify modeling the sum of many uniform noise sources as [Gaussian noise](https://en.wikipedia.org/wiki/Gaussian_noise "Gaussian noise"). See [AWGN](https://en.wikipedia.org/wiki/AWGN "AWGN").
### Operations and functions of normal variables
#### Operations on a single normal variable
If {\\displaystyle X} is distributed normally with mean {\\displaystyle \\mu } and variance {\\textstyle \\sigma ^{2}}, then:
- {\\textstyle aX+b}, for any real numbers {\\displaystyle a} and {\\displaystyle b}, is also normally distributed, with mean {\\textstyle a\\mu +b} and variance {\\textstyle a^{2}\\sigma ^{2}}. That is, the family of normal distributions is closed under [linear transformations](https://en.wikipedia.org/wiki/Linear_transformations "Linear transformations").
- The exponential of {\\displaystyle X} is distributed [log-normally](https://en.wikipedia.org/wiki/Log-normal_distribution "Log-normal distribution"): {\\textstyle e^{X}\\sim \\ln(N(\\mu ,\\sigma ^{2}))}.
- The standard [sigmoid](https://en.wikipedia.org/wiki/Logistic_function "Logistic function") of {\\displaystyle X} is [logit-normally distributed](https://en.wikipedia.org/wiki/Logit-normal_distribution "Logit-normal distribution"): {\\textstyle \\sigma (X)\\sim P({\\mathcal {N}}(\\mu ,\\,\\sigma ^{2}))}.
- The absolute value of {\\displaystyle X} has a [folded normal distribution](https://en.wikipedia.org/wiki/Folded_normal_distribution "Folded normal distribution"): {\\textstyle {\\left\|X\\right\|\\sim N\_{f}(\\mu ,\\sigma ^{2})}}. If {\\textstyle \\mu =0} this is known as the [half-normal distribution](https://en.wikipedia.org/wiki/Half-normal_distribution "Half-normal distribution").
- The absolute value of normalized residuals, {\\textstyle \|X-\\mu \|/\\sigma }, has a [chi distribution](https://en.wikipedia.org/wiki/Chi_distribution "Chi distribution") with one degree of freedom: {\\textstyle \|X-\\mu \|/\\sigma \\sim \\chi \_{1}}.
- The square of {\\textstyle X/\\sigma } has the [noncentral chi-squared distribution](https://en.wikipedia.org/wiki/Noncentral_chi-squared_distribution "Noncentral chi-squared distribution") with one degree of freedom: {\\textstyle X^{2}/\\sigma ^{2}\\sim \\chi \_{1}^{2}(\\mu ^{2}/\\sigma ^{2})}. If {\\textstyle \\mu =0}, the distribution is called simply [chi-squared](https://en.wikipedia.org/wiki/Chi-squared_distribution "Chi-squared distribution").
- The log-likelihood of a normal variable {\\displaystyle x} is simply the log of its [probability density function](https://en.wikipedia.org/wiki/Probability_density_function "Probability density function"): {\\displaystyle \\ln p(x)=-{\\frac {1}{2}}\\left({\\frac {x-\\mu }{\\sigma }}\\right)^{2}-\\ln \\left(\\sigma {\\sqrt {2\\pi }}\\right).} Since this is a scaled and shifted square of a standard normal variable, it is distributed as a scaled and shifted [chi-squared](https://en.wikipedia.org/wiki/Chi-squared_distribution "Chi-squared distribution") variable.
- The distribution of the variable {\\displaystyle X} restricted to an interval {\\textstyle \[a,b\]} is called the [truncated normal distribution](https://en.wikipedia.org/wiki/Truncated_normal_distribution "Truncated normal distribution").
- {\\textstyle (X-\\mu )^{-2}} has a [Lévy distribution](https://en.wikipedia.org/wiki/L%C3%A9vy_distribution "Lévy distribution") with location 0 and scale {\\textstyle \\sigma ^{-2}}.
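Several of the transformations above are implemented directly in SciPy, which makes them easy to spot-check empirically (a sketch, not an exhaustive test; parameter values are arbitrary):

```python
# Check |X| against the folded normal and X^2/sigma^2 against the
# noncentral chi-squared with one degree of freedom.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
mu, sigma = 1.0, 2.0
x = rng.normal(mu, sigma, size=100_000)

# scipy's foldnorm takes shape c = mu/sigma and scale sigma
p1 = stats.kstest(np.abs(x), stats.foldnorm(c=mu / sigma, scale=sigma).cdf).pvalue
# noncentral chi-squared with df=1 and noncentrality (mu/sigma)^2
p2 = stats.kstest(x**2 / sigma**2, stats.ncx2(df=1, nc=(mu / sigma) ** 2).cdf).pvalue

print(p1, p2)   # large p-values: no evidence against either claim
```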
##### Operations on two independent normal variables
- If {\\textstyle X\_{1}} and {\\textstyle X\_{2}} are two [independent](https://en.wikipedia.org/wiki/Independence_\(probability_theory\) "Independence (probability theory)") normal random variables, with means {\\textstyle \\mu \_{1}}, {\\textstyle \\mu \_{2}} and variances {\\textstyle \\sigma \_{1}^{2}}, {\\textstyle \\sigma \_{2}^{2}}, then their sum {\\textstyle X\_{1}+X\_{2}} will also be normally distributed,[\[proof\]](https://en.wikipedia.org/wiki/Sum_of_normally_distributed_random_variables "Sum of normally distributed random variables") with mean {\\textstyle \\mu \_{1}+\\mu \_{2}} and variance {\\textstyle \\sigma \_{1}^{2}+\\sigma \_{2}^{2}}.
- In particular, if {\\displaystyle X} and {\\displaystyle Y} are independent normal deviates with zero mean and variance {\\textstyle \\sigma ^{2}}, then {\\textstyle X+Y} and {\\textstyle X-Y} are also independent and normally distributed, with zero mean and variance {\\textstyle 2\\sigma ^{2}}. This is a special case of the [polarization identity](https://en.wikipedia.org/wiki/Polarization_identity "Polarization identity").[\[46\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-46)
- If {\\textstyle X\_{1}}, {\\textstyle X\_{2}} are two independent normal deviates with mean {\\displaystyle \\mu } and variance {\\textstyle \\sigma ^{2}}, and {\\displaystyle a}, {\\displaystyle b} are arbitrary real numbers, then the variable {\\displaystyle X\_{3}={\\frac {aX\_{1}+bX\_{2}-(a+b)\\mu }{\\sqrt {a^{2}+b^{2}}}}+\\mu } is also normally distributed with mean {\\displaystyle \\mu } and variance {\\textstyle \\sigma ^{2}}. It follows that the normal distribution is [stable](https://en.wikipedia.org/wiki/Stable_distribution "Stable distribution") (with exponent {\\textstyle \\alpha =2}).
- If {\\textstyle X\_{k}\\sim {\\mathcal {N}}(m\_{k},\\sigma \_{k}^{2})}, {\\textstyle k\\in \\{0,1\\}} are normal distributions, then their normalized [geometric mean](https://en.wikipedia.org/wiki/Geometric_mean "Geometric mean") {\\textstyle {\\frac {1}{\\int \_{\\mathbb {R} ^{n}}X\_{0}^{\\alpha }(x)X\_{1}^{1-\\alpha }(x)\\,{\\text{d}}x}}X\_{0}^{\\alpha }X\_{1}^{1-\\alpha }} is a normal distribution {\\textstyle {\\mathcal {N}}(m\_{\\alpha },\\sigma \_{\\alpha }^{2})} with {\\textstyle m\_{\\alpha }={\\frac {\\alpha m\_{0}\\sigma \_{1}^{2}+(1-\\alpha )m\_{1}\\sigma \_{0}^{2}}{\\alpha \\sigma \_{1}^{2}+(1-\\alpha )\\sigma \_{0}^{2}}}} and {\\textstyle \\sigma \_{\\alpha }^{2}={\\frac {\\sigma \_{0}^{2}\\sigma \_{1}^{2}}{\\alpha \\sigma \_{1}^{2}+(1-\\alpha )\\sigma \_{0}^{2}}}}.
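The first item in this list is simple to verify empirically (a sketch, with arbitrary parameter values):

```python
# Sum of independent N(1, 2^2) and N(-3, 0.5^2) draws: the sample mean and
# variance should approach -2 and 2^2 + 0.5^2 = 4.25.
import numpy as np

rng = np.random.default_rng(5)
x1 = rng.normal(1.0, 2.0, size=500_000)
x2 = rng.normal(-3.0, 0.5, size=500_000)
s = x1 + x2

print(s.mean(), s.var())   # ~ -2.0 and ~ 4.25
```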
##### Operations on two independent standard normal variables
If {\\textstyle X\_{1}} and {\\textstyle X\_{2}} are two independent standard normal random variables with mean 0 and variance 1, then:
- Their sum and difference are each distributed normally with mean zero and variance two: {\\textstyle X\_{1}\\pm X\_{2}\\sim {\\mathcal {N}}(0,2)}.
- Their product {\\textstyle Z=X\_{1}X\_{2}} follows the [product distribution](https://en.wikipedia.org/wiki/Product_distribution#Independent_central-normal_distributions "Product distribution")[\[47\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-47) with density function {\\textstyle f\_{Z}(z)=\\pi ^{-1}K\_{0}(\|z\|)}, where {\\textstyle K\_{0}} is the [modified Bessel function of the second kind](https://en.wikipedia.org/wiki/Macdonald_function "Macdonald function"). This distribution is symmetric around zero, unbounded at {\\textstyle z=0}, and has the [characteristic function](https://en.wikipedia.org/wiki/Characteristic_function_\(probability_theory\) "Characteristic function (probability theory)") {\\textstyle \\phi \_{Z}(t)=(1+t^{2})^{-1/2}}.
- Their ratio follows the standard [Cauchy distribution](https://en.wikipedia.org/wiki/Cauchy_distribution "Cauchy distribution"): {\\textstyle X\_{1}/X\_{2}\\sim \\operatorname {Cauchy} (0,1)}.
- Their Euclidean norm {\\textstyle {\\sqrt {X\_{1}^{2}+X\_{2}^{2}}}} has the [Rayleigh distribution](https://en.wikipedia.org/wiki/Rayleigh_distribution "Rayleigh distribution").
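These facts lend themselves to quick Monte Carlo spot-checks (a sketch assuming SciPy; sample sizes are arbitrary):

```python
# Ratio ~ Cauchy(0,1), Euclidean norm ~ Rayleigh, and sum/difference
# both have variance 2, for independent standard normal pairs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
x1 = rng.normal(size=100_000)
x2 = rng.normal(size=100_000)

print(stats.kstest(x1 / x2, stats.cauchy.cdf).pvalue)             # ratio
print(stats.kstest(np.hypot(x1, x2), stats.rayleigh.cdf).pvalue)  # norm
print((x1 + x2).var(), (x1 - x2).var())                           # both ~2
```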
#### Operations on multiple independent normal variables
- Any [linear combination](https://en.wikipedia.org/wiki/Linear_combination "Linear combination") of independent normal deviates is a normal deviate.
- If {\\textstyle X\_{1},X\_{2},\\ldots ,X\_{n}} are independent standard normal random variables, then the sum of their squares has the [chi-squared distribution](https://en.wikipedia.org/wiki/Chi-squared_distribution "Chi-squared distribution") with {\\displaystyle n} degrees of freedom: {\\displaystyle X\_{1}^{2}+\\cdots +X\_{n}^{2}\\sim \\chi \_{n}^{2}.}
- If {\\textstyle X\_{1},X\_{2},\\ldots ,X\_{n}} are independent normally distributed random variables with means {\\displaystyle \\mu } and variances {\\textstyle \\sigma ^{2}}, then their [sample mean](https://en.wikipedia.org/wiki/Sample_mean "Sample mean") is independent from the sample [standard deviation](https://en.wikipedia.org/wiki/Standard_deviation "Standard deviation"),[\[48\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-48) which can be demonstrated using [Basu's theorem](https://en.wikipedia.org/wiki/Basu%27s_theorem "Basu's theorem") or [Cochran's theorem](https://en.wikipedia.org/wiki/Cochran%27s_theorem "Cochran's theorem").[\[49\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-49) The ratio of these two quantities will have the [Student's t-distribution](https://en.wikipedia.org/wiki/Student%27s_t-distribution "Student's t-distribution") with {\\textstyle n-1} degrees of freedom: {\\displaystyle t={\\frac {{\\overline {X}}-\\mu }{S/{\\sqrt {n}}}}={\\frac {{\\frac {1}{n}}(X\_{1}+\\cdots +X\_{n})-\\mu }{\\sqrt {{\\frac {1}{n(n-1)}}\\left\[(X\_{1}-{\\overline {X}})^{2}+\\cdots +(X\_{n}-{\\overline {X}})^{2}\\right\]}}}\\sim t\_{n-1}.}
- If {\\textstyle X\_{1},X\_{2},\\ldots ,X\_{n}}, {\\textstyle Y\_{1},Y\_{2},\\ldots ,Y\_{m}} are independent standard normal random variables, then the ratio of their normalized sums of squares will have the [F-distribution](https://en.wikipedia.org/wiki/F-distribution "F-distribution") with (*n*, *m*) degrees of freedom:[\[50\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-50) {\\displaystyle F={\\frac {\\left(X\_{1}^{2}+X\_{2}^{2}+\\cdots +X\_{n}^{2}\\right)/n}{\\left(Y\_{1}^{2}+Y\_{2}^{2}+\\cdots +Y\_{m}^{2}\\right)/m}}\\sim F\_{n,m}.}
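The F-ratio construction above can likewise be simulated (a sketch assuming SciPy; the degrees of freedom are arbitrary):

```python
# Ratio of normalized sums of squares of independent standard normals
# should follow F(n, m).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, m, reps = 5, 8, 100_000
x = rng.normal(size=(reps, n))
y = rng.normal(size=(reps, m))
f = ((x**2).sum(axis=1) / n) / ((y**2).sum(axis=1) / m)

print(stats.kstest(f, stats.f(dfn=n, dfd=m).cdf).pvalue)  # large p expected
```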

#### Operations on multiple correlated normal variables
- A [quadratic form](https://en.wikipedia.org/wiki/Quadratic_form "Quadratic form") of a normal vector, i.e. a quadratic function {\\textstyle q=\\sum x\_{i}^{2}+\\sum x\_{j}+c} of multiple independent or correlated normal variables, is a [generalized chi-square](https://en.wikipedia.org/wiki/Generalized_chi-square_distribution "Generalized chi-square distribution") variable.
### Operations on the density function
The [split normal distribution](https://en.wikipedia.org/wiki/Split_normal_distribution "Split normal distribution") is most directly defined in terms of joining scaled sections of the density functions of different normal distributions and rescaling the density to integrate to one. The [truncated normal distribution](https://en.wikipedia.org/wiki/Truncated_normal_distribution "Truncated normal distribution") results from rescaling a section of a single density function.
### Infinite divisibility and Cramér's theorem
For any positive integer {\\textstyle n}, any normal distribution with mean {\\displaystyle \\mu } and variance {\\textstyle \\sigma ^{2}} is the distribution of the sum of {\\textstyle n} independent normal deviates, each with mean {\\textstyle {\\frac {\\mu }{n}}} and variance {\\textstyle {\\frac {\\sigma ^{2}}{n}}}. This property is called [infinite divisibility](https://en.wikipedia.org/wiki/Infinite_divisibility_\(probability\) "Infinite divisibility (probability)").[\[51\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-51)
Conversely, if {\\textstyle X\_{1}} and {\\textstyle X\_{2}} are independent random variables and their sum {\\textstyle X\_{1}+X\_{2}} has a normal distribution, then both {\\textstyle X\_{1}} and {\\textstyle X\_{2}} must be normal deviates.[\[52\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-52)
This result is known as [Cramér's decomposition theorem](https://en.wikipedia.org/wiki/Cram%C3%A9r%27s_decomposition_theorem "Cramér's decomposition theorem"), and is equivalent to saying that the [convolution](https://en.wikipedia.org/wiki/Convolution "Convolution") of two distributions is normal if and only if both are normal. Cramér's theorem implies that a linear combination of independent non-Gaussian variables will never have an exactly normal distribution, although it may approach it arbitrarily closely.[\[38\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Bryc_1995_35-38)
### The KacâBernstein theorem
The [Kac–Bernstein theorem](https://en.wikipedia.org/wiki/Kac%E2%80%93Bernstein_theorem "Kac–Bernstein theorem") states that if {\\textstyle X} and {\\displaystyle Y} are independent and {\\textstyle X+Y} and {\\textstyle X-Y} are also independent, then both *X* and *Y* must necessarily have normal distributions.[\[53\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Lukacs-53)[\[54\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-54)
More generally, if {\\textstyle X\_{1},\\ldots ,X\_{n}} are independent random variables, then two distinct linear combinations {\\textstyle \\sum {a\_{k}X\_{k}}} and {\\textstyle \\sum {b\_{k}X\_{k}}} will be independent if and only if all {\\textstyle X\_{k}} are normal and {\\textstyle \\sum {a\_{k}b\_{k}\\sigma \_{k}^{2}=0}}, where {\\textstyle \\sigma \_{k}^{2}} denotes the variance of {\\textstyle X\_{k}}.[\[53\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Lukacs-53)
### Extensions
The notion of normal distribution, being one of the most important distributions in probability theory, has been extended far beyond the standard framework of the univariate (that is, one-dimensional) case. All these extensions are also called *normal* or *Gaussian* laws, so a certain ambiguity in names exists.
- The [multivariate normal distribution](https://en.wikipedia.org/wiki/Multivariate_normal_distribution "Multivariate normal distribution") describes the Gaussian law in the *k*\-dimensional [Euclidean space](https://en.wikipedia.org/wiki/Euclidean_space "Euclidean space"). A vector *X* ∈ **R***k* is multivariate-normally distributed if any linear combination of its components {\\textstyle \\sum \_{j=1}^{k}a\_{j}X\_{j}} has a (univariate) normal distribution. The variance of *X* is a *k* × *k* symmetric positive-definite matrix *V*. The multivariate normal distribution is a special case of the [elliptical distributions](https://en.wikipedia.org/wiki/Elliptical_distribution "Elliptical distribution"). As such, its iso-density loci in the *k* = 2 case are [ellipses](https://en.wikipedia.org/wiki/Ellipse "Ellipse") and in the case of arbitrary *k* are [ellipsoids](https://en.wikipedia.org/wiki/Ellipsoid "Ellipsoid").
- [Rectified Gaussian distribution](https://en.wikipedia.org/wiki/Rectified_Gaussian_distribution "Rectified Gaussian distribution") – a rectified version of the normal distribution with all the negative elements reset to 0.
- [Complex normal distribution](https://en.wikipedia.org/wiki/Complex_normal_distribution "Complex normal distribution") deals with the complex normal vectors. A complex vector *X* ∈ **C***k* is said to be normal if both its real and imaginary components jointly possess a 2*k*\-dimensional multivariate normal distribution. The variance-covariance structure of *X* is described by two matrices: the *variance* matrix Γ, and the *relation* matrix *C*.
- [Matrix normal distribution](https://en.wikipedia.org/wiki/Matrix_normal_distribution "Matrix normal distribution") describes the case of normally distributed matrices.
- [Gaussian processes](https://en.wikipedia.org/wiki/Gaussian_process "Gaussian process") are the normally distributed [stochastic processes](https://en.wikipedia.org/wiki/Stochastic_process "Stochastic process"). These can be viewed as elements of some infinite-dimensional [Hilbert space](https://en.wikipedia.org/wiki/Hilbert_space "Hilbert space") *H*, and thus are the analogues of multivariate normal vectors for the case *k* = ∞. A random element *h* ∈ *H* is said to be normal if for any constant *a* ∈ *H* the [scalar product](https://en.wikipedia.org/wiki/Scalar_product "Scalar product") (*a*, *h*) has a (univariate) normal distribution. The variance structure of such a Gaussian random element can be described in terms of the linear *covariance operator* *K*: *H* → *H*. Several Gaussian processes became popular enough to have their own names:
- [Brownian motion](https://en.wikipedia.org/wiki/Wiener_process "Wiener process");
- [Brownian bridge](https://en.wikipedia.org/wiki/Brownian_bridge "Brownian bridge"); and
- [OrnsteinâUhlenbeck process](https://en.wikipedia.org/wiki/Ornstein%E2%80%93Uhlenbeck_process "OrnsteinâUhlenbeck process").
- [Gaussian q-distribution](https://en.wikipedia.org/wiki/Gaussian_q-distribution "Gaussian q-distribution") is an abstract mathematical construction that represents a [q-analogue](https://en.wikipedia.org/wiki/Q-analogue "Q-analogue") of the normal distribution.
- The [q-Gaussian](https://en.wikipedia.org/wiki/Q-Gaussian "Q-Gaussian") is an analogue of the Gaussian distribution, in the sense that it maximises the [Tsallis entropy](https://en.wikipedia.org/wiki/Tsallis_entropy "Tsallis entropy"), and is one type of [Tsallis distribution](https://en.wikipedia.org/wiki/Tsallis_distribution "Tsallis distribution"). This distribution is different from the [Gaussian q-distribution](https://en.wikipedia.org/wiki/Gaussian_q-distribution "Gaussian q-distribution") above.
- The [Kaniadakis Îș\-Gaussian distribution](https://en.wikipedia.org/wiki/Kaniadakis_Gaussian_distribution "Kaniadakis Gaussian distribution") is a generalization of the Gaussian distribution which arises from the [Kaniadakis statistics](https://en.wikipedia.org/wiki/Kaniadakis_statistics "Kaniadakis statistics"), being one of the [Kaniadakis distributions](https://en.wikipedia.org/wiki/Kaniadakis_distribution "Kaniadakis distribution").
A random variable *X* has a two-piece normal distribution if it has a distribution {\\displaystyle f\_{X}(x)={\\begin{cases}N(\\mu ,\\sigma \_{1}^{2}),&{\\text{ if }}x\\leq \\mu \\\\N(\\mu ,\\sigma \_{2}^{2}),&{\\text{ if }}x\\geq \\mu \\end{cases}}} where *ÎŒ* is the mean and {\\textstyle \\sigma \_{1}^{2}} and {\\textstyle \\sigma \_{2}^{2}} are the variances of the distribution to the left and right of the mean respectively.
The mean E(*X*), variance V(*X*), and third central moment T(*X*) of this distribution have been determined:[\[55\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-John-1982-55) {\\displaystyle {\\begin{aligned}\\operatorname {E} (X)&=\\mu +{\\sqrt {\\frac {2}{\\pi }}}(\\sigma \_{2}-\\sigma \_{1}),\\\\\\operatorname {V} (X)&=\\left(1-{\\frac {2}{\\pi }}\\right)(\\sigma \_{2}-\\sigma \_{1})^{2}+\\sigma \_{1}\\sigma \_{2},\\\\\\operatorname {T} (X)&={\\sqrt {\\frac {2}{\\pi }}}(\\sigma \_{2}-\\sigma \_{1})\\left\[\\left({\\frac {4}{\\pi }}-1\\right)(\\sigma \_{2}-\\sigma \_{1})^{2}+\\sigma \_{1}\\sigma \_{2}\\right\].\\end{aligned}}}
One of the main practical uses of the Gaussian law is to model the empirical distributions of many different random variables encountered in practice. In such cases a possible extension would be a richer family of distributions, having more than two parameters and therefore being able to fit the empirical distribution more accurately. Examples of such extensions are:
- [Pearson distribution](https://en.wikipedia.org/wiki/Pearson_distribution "Pearson distribution") – a four-parameter family of probability distributions that extend the normal law to include different skewness and kurtosis values.
- The [generalized normal distribution](https://en.wikipedia.org/wiki/Generalized_normal_distribution "Generalized normal distribution"), also known as the exponential power distribution, allows for distribution tails with thicker or thinner asymptotic behaviors.
## Statistical inference
### Estimation of parameters
See also: [Maximum likelihood § Continuous distribution, continuous parameter space](https://en.wikipedia.org/wiki/Maximum_likelihood#Continuous_distribution,_continuous_parameter_space "Maximum likelihood"); and [Gaussian function § Estimation of parameters](https://en.wikipedia.org/wiki/Gaussian_function#Estimation_of_parameters "Gaussian function")
It is often the case that we do not know the parameters of the normal distribution, but instead want to [estimate](https://en.wikipedia.org/wiki/Estimation_theory "Estimation theory") them. That is, having a sample {\\textstyle (x\_{1},\\ldots ,x\_{n})} from a normal {\\textstyle {\\mathcal {N}}(\\mu ,\\sigma ^{2})} population we would like to learn the approximate values of the parameters {\\displaystyle \\mu } and {\\textstyle \\sigma ^{2}}. The standard approach to this problem is the [maximum likelihood](https://en.wikipedia.org/wiki/Maximum_likelihood "Maximum likelihood") method, which requires maximization of the *[log-likelihood function](https://en.wikipedia.org/wiki/Log-likelihood_function "Log-likelihood function")*: {\\displaystyle \\ln {\\mathcal {L}}(\\mu ,\\sigma ^{2})=\\sum \_{i=1}^{n}\\ln f(x\_{i}\\mid \\mu ,\\sigma ^{2})=-{\\frac {n}{2}}\\ln(2\\pi )-{\\frac {n}{2}}\\ln \\sigma ^{2}-{\\frac {1}{2\\sigma ^{2}}}\\sum \_{i=1}^{n}(x\_{i}-\\mu )^{2}.} Taking derivatives with respect to {\\displaystyle \\mu } and {\\textstyle \\sigma ^{2}} and solving the resulting system of first order conditions yields the *maximum likelihood estimates*: {\\displaystyle {\\hat {\\mu }}={\\overline {x}}\\equiv {\\frac {1}{n}}\\sum \_{i=1}^{n}x\_{i},\\qquad {\\hat {\\sigma }}^{2}={\\frac {1}{n}}\\sum \_{i=1}^{n}(x\_{i}-{\\overline {x}})^{2}.}
The maximized log-likelihood {\\textstyle \\ln {\\mathcal {L}}({\\hat {\\mu }},{\\hat {\\sigma }}^{2})} is then {\\displaystyle \\ln {\\mathcal {L}}({\\hat {\\mu }},{\\hat {\\sigma }}^{2})=(-n/2)\[\\ln(2\\pi {\\hat {\\sigma }}^{2})+1\]}
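In code, the two estimates and the maximized log-likelihood are one-liners (a sketch; note that NumPy's default `ddof=0` in `var` is exactly the 1/*n* estimator produced by the derivation):

```python
# Maximum likelihood estimates for a normal sample, plus the maximized
# log-likelihood (-n/2) * (ln(2*pi*sigma_hat^2) + 1).
import numpy as np

rng = np.random.default_rng(8)
x = rng.normal(5.0, 2.0, size=1_000)
n = len(x)

mu_hat = x.mean()
sigma2_hat = x.var()          # ddof=0: divides by n, the MLE
loglik = -(n / 2) * (np.log(2 * np.pi * sigma2_hat) + 1)

print(mu_hat, sigma2_hat, loglik)
```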
#### Sample mean
See also: [Standard error of the mean](https://en.wikipedia.org/wiki/Standard_error_of_the_mean "Standard error of the mean")
The estimator {\\textstyle {\\hat {\\mu }}} is called the *[sample mean](https://en.wikipedia.org/wiki/Sample_mean "Sample mean")*, since it is the arithmetic mean of all observations. The statistic {\\textstyle {\\overline {x}}} is [complete](https://en.wikipedia.org/wiki/Complete_statistic "Complete statistic") and [sufficient](https://en.wikipedia.org/wiki/Sufficient_statistic "Sufficient statistic") for {\\displaystyle \\mu }, and therefore by the [Lehmann–ScheffĂ© theorem](https://en.wikipedia.org/wiki/Lehmann%E2%80%93Scheff%C3%A9_theorem "Lehmann–ScheffĂ© theorem"), {\\textstyle {\\hat {\\mu }}} is the [uniformly minimum variance unbiased](https://en.wikipedia.org/wiki/Uniformly_minimum_variance_unbiased "Uniformly minimum variance unbiased") (UMVU) estimator.[\[56\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Krishnamoorthy-56) In finite samples it is distributed normally: {\\displaystyle {\\hat {\\mu }}\\sim {\\mathcal {N}}(\\mu ,\\sigma ^{2}/n).} The variance of this estimator is equal to the *ΌΌ*\-element of the inverse [Fisher information matrix](https://en.wikipedia.org/wiki/Fisher_information_matrix "Fisher information matrix") {\\textstyle {\\mathcal {I}}^{-1}}. This implies that the estimator is [finite-sample efficient](https://en.wikipedia.org/wiki/Efficient_estimator "Efficient estimator"). Of practical importance is the fact that the [standard error](https://en.wikipedia.org/wiki/Standard_error "Standard error") of {\\textstyle {\\hat {\\mu }}} is proportional to {\\textstyle 1/{\\sqrt {n}}}; that is, if one wishes to decrease the standard error by a factor of 10, one must increase the number of points in the sample by a factor of 100. This fact is widely used in determining sample sizes for opinion polls and the number of trials in [Monte Carlo simulations](https://en.wikipedia.org/wiki/Monte_Carlo_simulation "Monte Carlo simulation").
From the standpoint of the [asymptotic theory](https://en.wikipedia.org/wiki/Asymptotic_theory_\(statistics\) "Asymptotic theory (statistics)"), {\\textstyle {\\hat {\\mu }}} is [consistent](https://en.wikipedia.org/wiki/Consistent_estimator "Consistent estimator"), that is, it [converges in probability](https://en.wikipedia.org/wiki/Converges_in_probability "Converges in probability") to {\\displaystyle \\mu } as {\\textstyle n\\rightarrow \\infty }. The estimator is also [asymptotically normal](https://en.wikipedia.org/wiki/Asymptotic_normality "Asymptotic normality"), which is a simple corollary of the fact that it is normal in finite samples: {\\displaystyle {\\sqrt {n}}({\\hat {\\mu }}-\\mu )\\,\\xrightarrow {d} \\,{\\mathcal {N}}(0,\\sigma ^{2}).}
#### Sample variance
See also: [Standard deviation § Estimation](https://en.wikipedia.org/wiki/Standard_deviation#Estimation "Standard deviation"), and [Variance § Estimation](https://en.wikipedia.org/wiki/Variance#Estimation "Variance")
The estimator {\\textstyle {\\hat {\\sigma }}^{2}} is called the *[sample variance](https://en.wikipedia.org/wiki/Sample_variance "Sample variance")*, since it is the variance of the sample {\\textstyle (x\_{1},\\ldots ,x\_{n})}. In practice, another estimator is often used instead of {\\textstyle {\\hat {\\sigma }}^{2}}. This other estimator is denoted {\\textstyle s^{2}}, and is also called the *sample variance*, which represents a certain ambiguity in terminology; its square root {\\displaystyle s} is called the *sample standard deviation*. The estimator {\\textstyle s^{2}} differs from {\\textstyle {\\hat {\\sigma }}^{2}} by having (*n* − 1) instead of *n* in the denominator (the so-called [Bessel's correction](https://en.wikipedia.org/wiki/Bessel%27s_correction "Bessel's correction")): {\\displaystyle s^{2}={\\frac {n}{n-1}}{\\hat {\\sigma }}^{2}={\\frac {1}{n-1}}\\sum \_{i=1}^{n}(x\_{i}-{\\overline {x}})^{2}.} The difference between {\\textstyle s^{2}} and {\\textstyle {\\hat {\\sigma }}^{2}} becomes negligibly small for large *n*. In finite samples, however, the motivation behind the use of {\\textstyle s^{2}} is that it is an [unbiased estimator](https://en.wikipedia.org/wiki/Unbiased_estimator "Unbiased estimator") of the underlying parameter {\\textstyle \\sigma ^{2}}, whereas {\\textstyle {\\hat {\\sigma }}^{2}} is biased. Also, by the Lehmann–ScheffĂ© theorem the estimator {\\textstyle s^{2}} is uniformly minimum variance unbiased ([UMVU](https://en.wikipedia.org/wiki/UMVU "UMVU")),[\[56\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Krishnamoorthy-56) which makes it the "best" estimator among all unbiased ones. However it can be shown that the biased estimator {\\textstyle {\\hat {\\sigma }}^{2}} is better than {\\textstyle s^{2}} in terms of the [mean squared error](https://en.wikipedia.org/wiki/Mean_squared_error "Mean squared error") (MSE) criterion. In finite samples both {\\textstyle s^{2}} and {\\textstyle {\\hat {\\sigma }}^{2}} have a scaled [chi-squared distribution](https://en.wikipedia.org/wiki/Chi-squared_distribution "Chi-squared distribution") with (*n* − 1) degrees of freedom: {\\displaystyle s^{2}\\sim {\\frac {\\sigma ^{2}}{n-1}}\\cdot \\chi \_{n-1}^{2},\\qquad {\\hat {\\sigma }}^{2}\\sim {\\frac {\\sigma ^{2}}{n}}\\cdot \\chi \_{n-1}^{2}.} The first of these expressions shows that the variance of {\\textstyle s^{2}} is equal to {\\textstyle 2\\sigma ^{4}/(n-1)}, which is slightly greater than the *ÏÏ*\-element of the inverse Fisher information matrix {\\textstyle {\\mathcal {I}}^{-1}}, which is {\\textstyle 2\\sigma ^{4}/n}. Thus, {\\textstyle s^{2}} is not an efficient estimator for {\\textstyle \\sigma ^{2}}, and moreover, since {\\textstyle s^{2}} is UMVU, we can conclude that the finite-sample efficient estimator for {\\textstyle \\sigma ^{2}} does not exist.
Applying the asymptotic theory, both estimators {\\textstyle s^{2}} and {\\textstyle {\\hat {\\sigma }}^{2}} are consistent, that is, they converge in probability to {\\textstyle \\sigma ^{2}} as the sample size {\\textstyle n\\rightarrow \\infty }. The two estimators are also both asymptotically normal: {\\displaystyle {\\sqrt {n}}({\\hat {\\sigma }}^{2}-\\sigma ^{2})\\simeq {\\sqrt {n}}(s^{2}-\\sigma ^{2})\\,\\xrightarrow {d} \\,{\\mathcal {N}}(0,2\\sigma ^{4}).} In particular, both estimators are asymptotically efficient for {\\textstyle \\sigma ^{2}}.
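The bias/MSE trade-off between the two estimators is easy to see in a small experiment (ours; the parameter values are arbitrary):

```python
# s^2 (ddof=1) is unbiased; the MLE sigma_hat^2 (ddof=0) is biased low
# but has the smaller mean squared error.
import numpy as np

rng = np.random.default_rng(9)
sigma2, n, reps = 4.0, 10, 200_000
x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))

s2 = x.var(axis=1, ddof=1)
mle = x.var(axis=1, ddof=0)

print(s2.mean(), mle.mean())    # ~4.0 (unbiased) vs ~3.6 (biased)
print(((s2 - sigma2) ** 2).mean(), ((mle - sigma2) ** 2).mean())  # MSE
```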
### Confidence intervals
See also: [Studentization](https://en.wikipedia.org/wiki/Studentization "Studentization") and [3-sigma rule](https://en.wikipedia.org/wiki/3-sigma_rule "3-sigma rule")
By [Cochran's theorem](https://en.wikipedia.org/wiki/Cochran%27s_theorem "Cochran's theorem"), for normal distributions the sample mean {\\textstyle {\\hat {\\mu }}} and the sample variance *s*2 are [independent](https://en.wikipedia.org/wiki/Independence_\(probability_theory\) "Independence (probability theory)"), which means there can be no gain in considering their [joint distribution](https://en.wikipedia.org/wiki/Joint_distribution "Joint distribution"). There is also a converse theorem: if in a sample the sample mean and sample variance are independent, then the sample must have come from the normal distribution. The independence between {\\textstyle {\\hat {\\mu }}} and *s* can be employed to construct the so-called *t-statistic*: {\\displaystyle t={\\frac {{\\hat {\\mu }}-\\mu }{s/{\\sqrt {n}}}}={\\frac {{\\overline {x}}-\\mu }{\\sqrt {{\\frac {1}{n(n-1)}}\\sum (x\_{i}-{\\overline {x}})^{2}}}}\\sim t\_{n-1}} This quantity *t* has the [Student's t-distribution](https://en.wikipedia.org/wiki/Student%27s_t-distribution "Student's t-distribution") with (*n* − 1) degrees of freedom, and it is an [ancillary statistic](https://en.wikipedia.org/wiki/Ancillary_statistic "Ancillary statistic") (independent of the value of the parameters). Inverting the distribution of this *t*\-statistic allows us to construct the [confidence interval](https://en.wikipedia.org/wiki/Confidence_interval "Confidence interval") for *ÎŒ*;[\[57\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-57) similarly, inverting the *χ*2 distribution of the statistic *s*2 will give us the confidence interval for *σ*2:[\[58\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-58) {\\displaystyle \\mu \\in \\left\[{\\hat {\\mu }}-t\_{n-1,1-\\alpha /2}{\\frac {s}{\\sqrt {n}}},\\,{\\hat {\\mu }}+t\_{n-1,1-\\alpha /2}{\\frac {s}{\\sqrt {n}}}\\right\]} {\\displaystyle \\sigma ^{2}\\in \\left\[{\\frac {n-1}{\\chi \_{n-1,1-\\alpha /2}^{2}}}s^{2},\\,{\\frac {n-1}{\\chi \_{n-1,\\alpha /2}^{2}}}s^{2}\\right\]} where *t**k*,*p* and *χ*2*k*,*p* are the *p*th [quantiles](https://en.wikipedia.org/wiki/Quantile "Quantile") of the *t*\- and *χ*2\-distributions respectively. These confidence intervals are of the *[confidence level](https://en.wikipedia.org/wiki/Confidence_level "Confidence level")* 1 − *α*, meaning that the true values *ÎŒ* and *σ*2 fall outside of these intervals with probability (or [significance level](https://en.wikipedia.org/wiki/Significance_level "Significance level")) *α*. In practice people usually take *α* = 5%, resulting in 95% confidence intervals. The confidence interval for *σ* can be found by taking the square root of the interval bounds for *σ*2.
Approximate formulas can be derived from the asymptotic distributions of {\\textstyle {\\hat {\\mu }}} and *s*2: {\\displaystyle \\mu \\in \\left\[{\\hat {\\mu }}-{\\frac {\|z\_{\\alpha /2}\|}{\\sqrt {n}}}s,\\,{\\hat {\\mu }}+{\\frac {\|z\_{\\alpha /2}\|}{\\sqrt {n}}}s\\right\]} {\\displaystyle \\sigma ^{2}\\in \\left\[s^{2}-{\\sqrt {2}}{\\frac {\|z\_{\\alpha /2}\|}{\\sqrt {n}}}s^{2},\\,s^{2}+{\\sqrt {2}}{\\frac {\|z\_{\\alpha /2}\|}{\\sqrt {n}}}s^{2}\\right\]} The approximate formulas become valid for large values of *n*, and are more convenient for manual calculation since the standard normal quantiles *z**α*/2 do not depend on *n*. In particular, the most popular value *α* = 5% results in \|*z*0.025\| = [1.96](https://en.wikipedia.org/wiki/1.96 "1.96").
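The exact intervals translate directly into code (a sketch assuming SciPy; data and confidence level are illustrative):

```python
# 95% confidence intervals for mu (t-based) and sigma^2 (chi-squared-based).
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
x = rng.normal(5.0, 2.0, size=30)
n, alpha = len(x), 0.05

xbar, s2 = x.mean(), x.var(ddof=1)
tq = stats.t.ppf(1 - alpha / 2, df=n - 1)
mu_ci = (xbar - tq * np.sqrt(s2 / n), xbar + tq * np.sqrt(s2 / n))

chi_lo = stats.chi2.ppf(alpha / 2, df=n - 1)
chi_hi = stats.chi2.ppf(1 - alpha / 2, df=n - 1)
var_ci = ((n - 1) * s2 / chi_hi, (n - 1) * s2 / chi_lo)

print(mu_ci, var_ci)
```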
### Normality tests
Main article: [Normality tests](https://en.wikipedia.org/wiki/Normality_tests "Normality tests")
Normality tests assess the likelihood that the given data set {*x*1, ..., *x**n*} comes from a normal distribution. Typically the [null hypothesis](https://en.wikipedia.org/wiki/Null_hypothesis "Null hypothesis") *H*0 is that the observations are distributed normally with unspecified mean *ÎŒ* and variance *σ*2, versus the alternative *H**a* that the distribution is arbitrary. Many tests (over 40) have been devised for this problem. The more prominent of them are outlined below:
**Diagnostic plots** are more intuitively appealing but subjective at the same time, as they rely on informal human judgement to accept or reject the null hypothesis.
- [Q–Q plot](https://en.wikipedia.org/wiki/Q%E2%80%93Q_plot "Q–Q plot"), also known as [normal probability plot](https://en.wikipedia.org/wiki/Normal_probability_plot "Normal probability plot") or [rankit](https://en.wikipedia.org/wiki/Rankit "Rankit") plot – a plot of the sorted values from the data set against the expected values of the corresponding quantiles from the standard normal distribution. That is, it is a plot of points of the form (*Ί*−1(*p**k*), *x*(*k*)), where the plotting points *p**k* are equal to *p**k* = (*k* − *α*)/(*n* + 1 − 2*α*) and *α* is an adjustment constant, which can be anything between 0 and 1. If the null hypothesis is true, the plotted points should approximately lie on a straight line (a code sketch follows this list).
- [P–P plot](https://en.wikipedia.org/wiki/P%E2%80%93P_plot "P–P plot") – similar to the Q–Q plot, but used much less frequently. This method consists of plotting the points (*Ί*(*z*(*k*)), *p**k*), where {\\textstyle z\_{(k)}=(x\_{(k)}-{\\hat {\\mu }})/{\\hat {\\sigma }}}. For normally distributed data this plot should lie on a straight line between (0, 0) and (1, 1).
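The Q–Q construction needs no plotting library to be useful; the points can be computed directly (a sketch assuming SciPy, using the common Blom choice *α* = 0.375 for the adjustment constant):

```python
# Q-Q points for a normality check: (Phi^{-1}(p_k), x_(k)) with
# p_k = (k - 0.375) / (n + 0.25). For normal data they lie near a line
# with slope sigma and intercept mu.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
x = np.sort(rng.normal(10.0, 3.0, size=200))
n = len(x)

k = np.arange(1, n + 1)
p = (k - 0.375) / (n + 0.25)
theoretical = stats.norm.ppf(p)

slope, intercept = np.polyfit(theoretical, x, 1)
print(slope, intercept)   # ~3 and ~10, the true sigma and mu
```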
**Goodness-of-fit tests**:
*Moment-based tests*:
- [D'Agostino's K-squared test](https://en.wikipedia.org/wiki/D%27Agostino%27s_K-squared_test "D'Agostino's K-squared test")
- [Jarque–Bera test](https://en.wikipedia.org/wiki/Jarque%E2%80%93Bera_test "Jarque–Bera test")
- [Shapiro–Wilk test](https://en.wikipedia.org/wiki/Shapiro%E2%80%93Wilk_test "Shapiro–Wilk test"): This is based on the line in the Q–Q plot having the slope of *σ*. The test compares the least squares estimate of that slope with the value of the sample variance, and rejects the null hypothesis if these two quantities differ significantly.
*Tests based on the empirical distribution function*:
- [Anderson–Darling test](https://en.wikipedia.org/wiki/Anderson%E2%80%93Darling_test "Anderson–Darling test")
- [Lilliefors test](https://en.wikipedia.org/wiki/Lilliefors_test "Lilliefors test") (an adaptation of the [Kolmogorov–Smirnov test](https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test "Kolmogorov–Smirnov test"))
### Bayesian analysis of the normal distribution
Bayesian analysis of normally distributed data is complicated by the many different possibilities that may be considered:
- Either the mean, or the variance, or neither, may be considered a fixed quantity.
- When the variance is unknown, analysis may be done directly in terms of the variance, or in terms of the [precision](https://en.wikipedia.org/wiki/Precision_\(statistics\) "Precision (statistics)"), the reciprocal of the variance. The reason for expressing the formulas in terms of precision is that it simplifies the analysis of most cases.
- Both univariate and [multivariate](https://en.wikipedia.org/wiki/Multivariate_normal_distribution "Multivariate normal distribution") cases need to be considered.
- Either [conjugate](https://en.wikipedia.org/wiki/Conjugate_prior "Conjugate prior") or [improper](https://en.wikipedia.org/wiki/Improper_prior "Improper prior") [prior distributions](https://en.wikipedia.org/wiki/Prior_distribution "Prior distribution") may be placed on the unknown variables.
- An additional set of cases occurs in [Bayesian linear regression](https://en.wikipedia.org/wiki/Bayesian_linear_regression "Bayesian linear regression"), where in the basic model the data is assumed to be normally distributed, and normal priors are placed on the [regression coefficients](https://en.wikipedia.org/wiki/Regression_coefficient "Regression coefficient"). The resulting analysis is similar to the basic cases of [independent identically distributed](https://en.wikipedia.org/wiki/Independent_identically_distributed "Independent identically distributed") data.
The formulas for the non-linear-regression cases are summarized in the [conjugate prior](https://en.wikipedia.org/wiki/Conjugate_prior "Conjugate prior") article.
#### Sum of two quadratics
##### Scalar form
The following auxiliary formula is useful for simplifying the [posterior](https://en.wikipedia.org/wiki/Posterior_distribution "Posterior distribution") update equations, which otherwise become fairly tedious.
{\\displaystyle a(x-y)^{2}+b(x-z)^{2}=(a+b)\\left(x-{\\frac {ay+bz}{a+b}}\\right)^{2}+{\\frac {ab}{a+b}}(y-z)^{2}}
This equation rewrites the sum of two quadratics in x by expanding the squares, grouping the terms in x, and [completing the square](https://en.wikipedia.org/wiki/Completing_the_square "Completing the square"). Note the following about the complex constant factors attached to some of the terms:
1. The factor {\\textstyle {\\frac {ay+bz}{a+b}}} has the form of a [weighted average](https://en.wikipedia.org/wiki/Weighted_average "Weighted average") of y and z.
2. {\\textstyle {\\frac {ab}{a+b}}={\\frac {1}{{\\frac {1}{a}}+{\\frac {1}{b}}}}=(a^{-1}+b^{-1})^{-1}.} This shows that this factor can be thought of as resulting from a situation where the [reciprocals](https://en.wikipedia.org/wiki/Multiplicative_inverse "Multiplicative inverse") of quantities a and b add directly, so to combine a and b themselves, it is necessary to reciprocate, add, and reciprocate the result again to get back into the original units. This is exactly the sort of operation performed by the [harmonic mean](https://en.wikipedia.org/wiki/Harmonic_mean "Harmonic mean"), so it is not surprising that {\\textstyle {\\frac {ab}{a+b}}} is one-half the [harmonic mean](https://en.wikipedia.org/wiki/Harmonic_mean "Harmonic mean") of a and b.
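The identity is easy to spot-check numerically; the following short Python snippet verifies it for randomly chosen inputs:

```python
import random

# Spot-check: a(x-y)^2 + b(x-z)^2
#   = (a+b)(x - (ay+bz)/(a+b))^2 + (ab/(a+b))(y-z)^2
for _ in range(1000):
    a, b = random.uniform(0.1, 10.0), random.uniform(0.1, 10.0)
    x, y, z = (random.uniform(-5.0, 5.0) for _ in range(3))
    lhs = a * (x - y) ** 2 + b * (x - z) ** 2
    rhs = (a + b) * (x - (a * y + b * z) / (a + b)) ** 2 \
        + (a * b / (a + b)) * (y - z) ** 2
    assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))
```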
##### Vector form
A similar formula can be written for the sum of two vector quadratics: If **x**, **y**, **z** are vectors of length k, and **A** and **B** are [symmetric](https://en.wikipedia.org/wiki/Symmetric_matrix "Symmetric matrix"), [invertible matrices](https://en.wikipedia.org/wiki/Invertible_matrices "Invertible matrices") of size {\\textstyle k\\times k}, then
{\\displaystyle {\\begin{aligned}&(\\mathbf {y} -\\mathbf {x} )'\\mathbf {A} (\\mathbf {y} -\\mathbf {x} )+(\\mathbf {x} -\\mathbf {z} )'\\mathbf {B} (\\mathbf {x} -\\mathbf {z} )\\\\={}&(\\mathbf {x} -\\mathbf {c} )'(\\mathbf {A} +\\mathbf {B} )(\\mathbf {x} -\\mathbf {c} )+(\\mathbf {y} -\\mathbf {z} )'(\\mathbf {A} ^{-1}+\\mathbf {B} ^{-1})^{-1}(\\mathbf {y} -\\mathbf {z} )\\end{aligned}}} where {\\displaystyle \\mathbf {c} =(\\mathbf {A} +\\mathbf {B} )^{-1}(\\mathbf {A} \\mathbf {y} +\\mathbf {B} \\mathbf {z} )}
The form **x**′**A****x** is called a [quadratic form](https://en.wikipedia.org/wiki/Quadratic_form "Quadratic form") and is a [scalar](https://en.wikipedia.org/wiki/Scalar_\(mathematics\) "Scalar (mathematics)"): {\\displaystyle \\mathbf {x} '\\mathbf {A} \\mathbf {x} =\\sum \_{i,j}a\_{ij}x\_{i}x\_{j}} In other words, it sums up all possible combinations of products of pairs of elements from **x**, with a separate coefficient for each. In addition, since {\\textstyle x\_{i}x\_{j}=x\_{j}x\_{i}}, only the sum {\\textstyle a\_{ij}+a\_{ji}} matters for any off-diagonal elements of **A**, and there is no loss of generality in assuming that **A** is [symmetric](https://en.wikipedia.org/wiki/Symmetric_matrix "Symmetric matrix"). Furthermore, if **A** is symmetric, then the form {\\textstyle \\mathbf {x} '\\mathbf {A} \\mathbf {y} =\\mathbf {y} '\\mathbf {A} \\mathbf {x} .}
#### Sum of differences from the mean
Another useful formula is as follows: {\\displaystyle \\sum \_{i=1}^{n}(x\_{i}-\\mu )^{2}=\\sum \_{i=1}^{n}(x\_{i}-{\\bar {x}})^{2}+n({\\bar {x}}-\\mu )^{2}} where {\\textstyle {\\bar {x}}={\\frac {1}{n}}\\sum \_{i=1}^{n}x\_{i}.}
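This decomposition can be spot-checked the same way; a short Python snippet:

```python
import random

# Spot-check: sum_i (x_i - mu)^2 == sum_i (x_i - xbar)^2 + n*(xbar - mu)^2
mu = 3.0
xs = [random.gauss(mu, 2.0) for _ in range(100)]
xbar = sum(xs) / len(xs)
lhs = sum((x - mu) ** 2 for x in xs)
rhs = sum((x - xbar) ** 2 for x in xs) + len(xs) * (xbar - mu) ** 2
assert abs(lhs - rhs) < 1e-8 * lhs
```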
### With known variance
For a set of [i.i.d.](https://en.wikipedia.org/wiki/I.i.d. "I.i.d.") normally distributed data points **X** of size n where each individual point x follows {\\textstyle x\\sim {\\mathcal {N}}(\\mu ,\\sigma ^{2})} with known [variance](https://en.wikipedia.org/wiki/Variance "Variance") *σ*², the [conjugate prior](https://en.wikipedia.org/wiki/Conjugate_prior "Conjugate prior") distribution is also normally distributed.
This can be shown more easily by rewriting the variance as the [precision](https://en.wikipedia.org/wiki/Precision_\(statistics\) "Precision (statistics)"), i.e. using *τ* = 1/*σ*². Then if {\\textstyle x\\sim {\\mathcal {N}}(\\mu ,1/\\tau )} and {\\textstyle \\mu \\sim {\\mathcal {N}}(\\mu \_{0},1/\\tau \_{0}),} we proceed as follows.
First, the [likelihood function](https://en.wikipedia.org/wiki/Likelihood_function "Likelihood function") is (using the formula above for the sum of differences from the mean): {\\displaystyle {\\begin{aligned}p(\\mathbf {X} \\mid \\mu ,\\tau )&=\\prod \_{i=1}^{n}{\\sqrt {\\frac {\\tau }{2\\pi }}}\\exp \\left(-{\\frac {1}{2}}\\tau (x\_{i}-\\mu )^{2}\\right)\\\\&=\\left({\\frac {\\tau }{2\\pi }}\\right)^{n/2}\\exp \\left(-{\\frac {1}{2}}\\tau \\sum \_{i=1}^{n}(x\_{i}-\\mu )^{2}\\right)\\\\&=\\left({\\frac {\\tau }{2\\pi }}\\right)^{n/2}\\exp \\left\[-{\\frac {1}{2}}\\tau \\left(\\sum \_{i=1}^{n}(x\_{i}-{\\bar {x}})^{2}+n({\\bar {x}}-\\mu )^{2}\\right)\\right\].\\end{aligned}}}
Then, we proceed as follows: {\\displaystyle {\\begin{aligned}p(\\mu \\mid \\mathbf {X} )&\\propto p(\\mathbf {X} \\mid \\mu )p(\\mu )\\\\&=\\left({\\frac {\\tau }{2\\pi }}\\right)^{n/2}\\exp \\left\[-{\\frac {1}{2}}\\tau \\left(\\sum \_{i=1}^{n}(x\_{i}-{\\bar {x}})^{2}+n({\\bar {x}}-\\mu )^{2}\\right)\\right\]{\\sqrt {\\frac {\\tau \_{0}}{2\\pi }}}\\exp \\left(-{\\frac {1}{2}}\\tau \_{0}(\\mu -\\mu \_{0})^{2}\\right)\\\\&\\propto \\exp \\left(-{\\frac {1}{2}}\\left(\\tau \\left(\\sum \_{i=1}^{n}(x\_{i}-{\\bar {x}})^{2}+n({\\bar {x}}-\\mu )^{2}\\right)+\\tau \_{0}(\\mu -\\mu \_{0})^{2}\\right)\\right)\\\\&\\propto \\exp \\left(-{\\frac {1}{2}}\\left(n\\tau ({\\bar {x}}-\\mu )^{2}+\\tau \_{0}(\\mu -\\mu \_{0})^{2}\\right)\\right)\\\\&=\\exp \\left(-{\\frac {1}{2}}(n\\tau +\\tau \_{0})\\left(\\mu -{\\dfrac {n\\tau {\\bar {x}}+\\tau \_{0}\\mu \_{0}}{n\\tau +\\tau \_{0}}}\\right)^{2}+{\\frac {n\\tau \\tau \_{0}}{n\\tau +\\tau \_{0}}}({\\bar {x}}-\\mu \_{0})^{2}\\right)\\\\&\\propto \\exp \\left(-{\\frac {1}{2}}(n\\tau +\\tau \_{0})\\left(\\mu -{\\dfrac {n\\tau {\\bar {x}}+\\tau \_{0}\\mu \_{0}}{n\\tau +\\tau \_{0}}}\\right)^{2}\\right)\\end{aligned}}}
In the above derivation, we used the formula above for the sum of two quadratics and eliminated all constant factors not involving μ. The result is the [kernel](https://en.wikipedia.org/wiki/Kernel_\(statistics\) "Kernel (statistics)") of a normal distribution, with mean {\\textstyle {\\frac {n\\tau {\\bar {x}}+\\tau \_{0}\\mu \_{0}}{n\\tau +\\tau \_{0}}}} and precision {\\textstyle n\\tau +\\tau \_{0}}, i.e. {\\displaystyle p(\\mu \\mid \\mathbf {X} )\\sim {\\mathcal {N}}\\left({\\frac {n\\tau {\\bar {x}}+\\tau \_{0}\\mu \_{0}}{n\\tau +\\tau \_{0}}},{\\frac {1}{n\\tau +\\tau \_{0}}}\\right)}
This can be written as a set of Bayesian update equations for the posterior parameters in terms of the prior parameters: {\\displaystyle {\\begin{aligned}\\tau \_{0}'&=\\tau \_{0}+n\\tau \\\\\[5pt\]\\mu \_{0}'&={\\frac {n\\tau {\\bar {x}}+\\tau \_{0}\\mu \_{0}}{n\\tau +\\tau \_{0}}}\\\\\[5pt\]{\\bar {x}}&={\\frac {1}{n}}\\sum \_{i=1}^{n}x\_{i}\\end{aligned}}}
That is, to combine n data points with total precision of *nτ* (or equivalently, total variance of *σ*²/*n*) and mean of values {\\textstyle {\\bar {x}}}, derive a new total precision simply by adding the total precision of the data to the prior total precision, and form a new mean through a *precision-weighted average*, i.e. a [weighted average](https://en.wikipedia.org/wiki/Weighted_average "Weighted average") of the data mean and the prior mean, each weighted by the associated total precision. This makes logical sense if the precision is thought of as indicating the certainty of the observations: In the distribution of the posterior mean, each of the input components is weighted by its certainty, and the certainty of this distribution is the sum of the individual certainties. (For the intuition of this, compare the expression "the whole is (or is not) greater than the sum of its parts". In addition, consider that the knowledge of the posterior comes from a combination of the knowledge of the prior and likelihood, so it makes sense that we are more certain of it than of either of its components.)
The above formula reveals why it is more convenient to do [Bayesian analysis](https://en.wikipedia.org/wiki/Bayesian_analysis "Bayesian analysis") of [conjugate priors](https://en.wikipedia.org/wiki/Conjugate_prior "Conjugate prior") for the normal distribution in terms of the precision. The posterior precision is simply the sum of the prior and likelihood precisions, and the posterior mean is computed through a precision-weighted average, as described above. The same formulas can be written in terms of variance by reciprocating all the precisions, yielding the uglier formulas {\\displaystyle {\\begin{aligned}{\\sigma \_{0}^{2}}'&={\\frac {1}{{\\frac {n}{\\sigma ^{2}}}+{\\frac {1}{\\sigma \_{0}^{2}}}}}\\\\\[5pt\]\\mu \_{0}'&={\\frac {{\\frac {n{\\bar {x}}}{\\sigma ^{2}}}+{\\frac {\\mu \_{0}}{\\sigma \_{0}^{2}}}}{{\\frac {n}{\\sigma ^{2}}}+{\\frac {1}{\\sigma \_{0}^{2}}}}}\\\\\[5pt\]{\\bar {x}}&={\\frac {1}{n}}\\sum \_{i=1}^{n}x\_{i}\\end{aligned}}}
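For illustration, these update equations translate directly into a short Python function (the function name and example values are ours, not from any library):

```python
import random

def update_mean_known_precision(xs, tau, mu0, tau0):
    """Conjugate update for the mean of a normal with known precision tau.

    Given data xs and a N(mu0, 1/tau0) prior on mu, returns the posterior
    mean and posterior precision, per the update equations above.
    """
    n = len(xs)
    xbar = sum(xs) / n
    tau_post = tau0 + n * tau                            # precisions simply add
    mu_post = (n * tau * xbar + tau0 * mu0) / tau_post   # precision-weighted average
    return mu_post, tau_post

# Illustration: data drawn from N(2, 1), i.e. tau = 1, with a vague prior.
data = [random.gauss(2.0, 1.0) for _ in range(50)]
print(update_mean_known_precision(data, tau=1.0, mu0=0.0, tau0=0.1))
```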
#### With known mean
For a set of [i.i.d.](https://en.wikipedia.org/wiki/I.i.d. "I.i.d.") normally distributed data points **X** of size n where each individual point x follows {\\textstyle x\\sim {\\mathcal {N}}(\\mu ,\\sigma ^{2})} with known mean μ, the [conjugate prior](https://en.wikipedia.org/wiki/Conjugate_prior "Conjugate prior") of the [variance](https://en.wikipedia.org/wiki/Variance "Variance") is an [inverse gamma distribution](https://en.wikipedia.org/wiki/Inverse_gamma_distribution "Inverse gamma distribution") or, equivalently, a [scaled inverse chi-squared distribution](https://en.wikipedia.org/wiki/Scaled_inverse_chi-squared_distribution "Scaled inverse chi-squared distribution"). The two are the same distribution under different [parameterizations](https://en.wikipedia.org/wiki/Parameter "Parameter"). Although the inverse gamma is more commonly used, we use the scaled inverse chi-squared for the sake of convenience. The prior for *σ*² is as follows: {\\displaystyle p(\\sigma ^{2}\\mid \\nu \_{0},\\sigma \_{0}^{2})={\\frac {(\\sigma \_{0}^{2}{\\frac {\\nu \_{0}}{2}})^{\\nu \_{0}/2}}{\\Gamma \\left({\\frac {\\nu \_{0}}{2}}\\right)}}~{\\frac {\\exp \\left\[{\\frac {-\\nu \_{0}\\sigma \_{0}^{2}}{2\\sigma ^{2}}}\\right\]}{(\\sigma ^{2})^{1+{\\frac {\\nu \_{0}}{2}}}}}\\propto {\\frac {\\exp \\left\[{\\frac {-\\nu \_{0}\\sigma \_{0}^{2}}{2\\sigma ^{2}}}\\right\]}{(\\sigma ^{2})^{1+{\\frac {\\nu \_{0}}{2}}}}}}
The [likelihood function](https://en.wikipedia.org/wiki/Likelihood_function "Likelihood function") from above, written in terms of the variance, is: {\\displaystyle {\\begin{aligned}p(\\mathbf {X} \\mid \\mu ,\\sigma ^{2})&=\\left({\\frac {1}{2\\pi \\sigma ^{2}}}\\right)^{n/2}\\exp \\left\[-{\\frac {1}{2\\sigma ^{2}}}\\sum \_{i=1}^{n}(x\_{i}-\\mu )^{2}\\right\]\\\\&=\\left({\\frac {1}{2\\pi \\sigma ^{2}}}\\right)^{n/2}\\exp \\left\[-{\\frac {S}{2\\sigma ^{2}}}\\right\]\\end{aligned}}} where {\\displaystyle S=\\sum \_{i=1}^{n}(x\_{i}-\\mu )^{2}.}
Then: {\\displaystyle {\\begin{aligned}p(\\sigma ^{2}\\mid \\mathbf {X} )&\\propto p(\\mathbf {X} \\mid \\sigma ^{2})p(\\sigma ^{2})\\\\&=\\left({\\frac {1}{2\\pi \\sigma ^{2}}}\\right)^{n/2}\\exp \\left\[-{\\frac {S}{2\\sigma ^{2}}}\\right\]{\\frac {(\\sigma \_{0}^{2}{\\frac {\\nu \_{0}}{2}})^{\\frac {\\nu \_{0}}{2}}}{\\Gamma \\left({\\frac {\\nu \_{0}}{2}}\\right)}}~{\\frac {\\exp \\left\[{\\frac {-\\nu \_{0}\\sigma \_{0}^{2}}{2\\sigma ^{2}}}\\right\]}{(\\sigma ^{2})^{1+{\\frac {\\nu \_{0}}{2}}}}}\\\\&\\propto \\left({\\frac {1}{\\sigma ^{2}}}\\right)^{n/2}{\\frac {1}{(\\sigma ^{2})^{1+{\\frac {\\nu \_{0}}{2}}}}}\\exp \\left\[-{\\frac {S}{2\\sigma ^{2}}}+{\\frac {-\\nu \_{0}\\sigma \_{0}^{2}}{2\\sigma ^{2}}}\\right\]\\\\&={\\frac {1}{(\\sigma ^{2})^{1+{\\frac {\\nu \_{0}+n}{2}}}}}\\exp \\left\[-{\\frac {\\nu \_{0}\\sigma \_{0}^{2}+S}{2\\sigma ^{2}}}\\right\]\\end{aligned}}}
The above is also a scaled inverse chi-squared distribution, where {\\displaystyle {\\begin{aligned}\\nu \_{0}'&=\\nu \_{0}+n\\\\\\nu \_{0}'{\\sigma \_{0}^{2}}'&=\\nu \_{0}\\sigma \_{0}^{2}+\\sum \_{i=1}^{n}(x\_{i}-\\mu )^{2}\\end{aligned}}} or equivalently {\\displaystyle {\\begin{aligned}\\nu \_{0}'&=\\nu \_{0}+n\\\\{\\sigma \_{0}^{2}}'&={\\frac {\\nu \_{0}\\sigma \_{0}^{2}+\\sum \_{i=1}^{n}(x\_{i}-\\mu )^{2}}{\\nu \_{0}+n}}\\end{aligned}}}
Reparameterizing in terms of an [inverse gamma distribution](https://en.wikipedia.org/wiki/Inverse_gamma_distribution "Inverse gamma distribution"), the result is: {\\displaystyle {\\begin{aligned}\\alpha '&=\\alpha +{\\frac {n}{2}}\\\\\\beta '&=\\beta +{\\frac {\\sum \_{i=1}^{n}(x\_{i}-\\mu )^{2}}{2}}\\end{aligned}}}
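In the inverse-gamma parameterization the update is a two-liner; a minimal Python sketch (the function name is ours):

```python
def update_variance_known_mean(xs, mu, alpha, beta):
    """Conjugate inverse-gamma update for sigma^2 with known mean mu.

    Prior: sigma^2 ~ InvGamma(alpha, beta).  Returns the posterior
    (alpha', beta') per the update equations above.
    """
    s = sum((x - mu) ** 2 for x in xs)   # sum of squared deviations from mu
    return alpha + len(xs) / 2, beta + s / 2
```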
#### With unknown mean and unknown variance
For a set of [i.i.d.](https://en.wikipedia.org/wiki/I.i.d. "I.i.d.") normally distributed data points **X** of size n where each individual point x follows {\\textstyle x\\sim {\\mathcal {N}}(\\mu ,\\sigma ^{2})} with unknown mean μ and unknown [variance](https://en.wikipedia.org/wiki/Variance "Variance") *σ*², a combined (multivariate) [conjugate prior](https://en.wikipedia.org/wiki/Conjugate_prior "Conjugate prior") is placed over the mean and variance, consisting of a [normal-inverse-gamma distribution](https://en.wikipedia.org/wiki/Normal-inverse-gamma_distribution "Normal-inverse-gamma distribution"). Logically, this originates as follows:
1. From the analysis of the case with unknown mean but known variance, we see that the update equations involve [sufficient statistics](https://en.wikipedia.org/wiki/Sufficient_statistic "Sufficient statistic") computed from the data consisting of the mean of the data points and the total variance of the data points, computed in turn from the known variance divided by the number of data points.
2. From the analysis of the case with unknown variance but known mean, we see that the update equations involve sufficient statistics over the data consisting of the number of data points and [sum of squared deviations](https://en.wikipedia.org/wiki/Sum_of_squared_deviations "Sum of squared deviations").
3. Keep in mind that the posterior update values serve as the prior distribution when further data is handled. Thus, we should logically think of our priors in terms of the sufficient statistics just described, with the same semantics kept in mind as much as possible.
4. To handle the case where both mean and variance are unknown, we could place independent priors over the mean and variance, with fixed estimates of the average mean, total variance, number of data points used to compute the variance prior, and sum of squared deviations. Note however that in reality, the total variance of the mean depends on the unknown variance, and the sum of squared deviations that goes into the variance prior (appears to) depend on the unknown mean. In practice, the latter dependence is relatively unimportant: Shifting the actual mean shifts the generated points by an equal amount, and on average the squared deviations will remain the same. This is not the case, however, with the total variance of the mean: As the unknown variance increases, the total variance of the mean will increase proportionately, and we would like to capture this dependence.
5. This suggests that we create a *conditional prior* of the mean on the unknown variance, with a hyperparameter specifying the mean of the [pseudo-observations](https://en.wikipedia.org/wiki/Pseudo-observation "Pseudo-observation") associated with the prior, and another parameter specifying the number of pseudo-observations. This number serves as a scaling parameter on the variance, making it possible to control the overall variance of the mean relative to the actual variance parameter. The prior for the variance also has two hyperparameters, one specifying the sum of squared deviations of the pseudo-observations associated with the prior, and another specifying once again the number of pseudo-observations. Each of the priors has a hyperparameter specifying the number of pseudo-observations, and in each case this controls the relative variance of that prior. These are given as two separate hyperparameters so that the variance (aka the confidence) of the two priors can be controlled separately.
6. This leads immediately to the [normal-inverse-gamma distribution](https://en.wikipedia.org/wiki/Normal-inverse-gamma_distribution "Normal-inverse-gamma distribution"), which is the product of the two distributions just defined, with [conjugate priors](https://en.wikipedia.org/wiki/Conjugate_prior "Conjugate prior") used (an [inverse gamma distribution](https://en.wikipedia.org/wiki/Inverse_gamma_distribution "Inverse gamma distribution") over the variance, and a normal distribution over the mean, *conditional* on the variance) and with the same four parameters just defined.
The priors are normally defined as follows: {\\displaystyle {\\begin{aligned}p(\\mu \\mid \\sigma ^{2};\\mu \_{0},n\_{0})&\\sim {\\mathcal {N}}(\\mu \_{0},\\sigma ^{2}/n\_{0})\\\\p(\\sigma ^{2};\\nu \_{0},\\sigma \_{0}^{2})&\\sim I\\chi ^{2}(\\nu \_{0},\\sigma \_{0}^{2})=IG(\\nu \_{0}/2,\\nu \_{0}\\sigma \_{0}^{2}/2)\\end{aligned}}}
The update equations can be derived, and look as follows: {\\displaystyle {\\begin{aligned}{\\bar {x}}&={\\frac {1}{n}}\\sum \_{i=1}^{n}x\_{i}\\\\\\mu \_{0}'&={\\frac {n\_{0}\\mu \_{0}+n{\\bar {x}}}{n\_{0}+n}}\\\\n\_{0}'&=n\_{0}+n\\\\\\nu \_{0}'&=\\nu \_{0}+n\\\\\\nu \_{0}'{\\sigma \_{0}^{2}}'&=\\nu \_{0}\\sigma \_{0}^{2}+\\sum \_{i=1}^{n}(x\_{i}-{\\bar {x}})^{2}+{\\frac {n\_{0}n}{n\_{0}+n}}(\\mu \_{0}-{\\bar {x}})^{2}\\end{aligned}}} The respective numbers of pseudo-observations add the number of actual observations to them. The new mean hyperparameter is once again a weighted average, this time weighted by the relative numbers of observations. Finally, the update for {\\textstyle \\nu \_{0}'{\\sigma \_{0}^{2}}'} is similar to the case with known mean, but in this case the sum of squared deviations is taken with respect to the observed data mean rather than the true mean, and as a result a new interaction term needs to be added to take care of the additional error source stemming from the deviation between the prior mean and the data mean.
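A minimal Python sketch of these update equations (the function name is ours):

```python
def update_normal_inverse_gamma(xs, mu0, n0, nu0, s0_sq):
    """Normal-inverse-gamma conjugate update for unknown mean and variance.

    Hyperparameters: prior mean mu0 backed by n0 pseudo-observations, and a
    scaled-inverse-chi-squared variance prior with nu0 degrees of freedom and
    scale s0_sq.  Returns the updated (mu0', n0', nu0', s0_sq').
    """
    n = len(xs)
    xbar = sum(xs) / n
    ss = sum((x - xbar) ** 2 for x in xs)
    n_new = n0 + n
    nu_new = nu0 + n
    mu_new = (n0 * mu0 + n * xbar) / n_new        # weighted by observation counts
    # The last term is the interaction between the prior mean and the data mean.
    s_sq_new = (nu0 * s0_sq + ss
                + n0 * n / (n0 + n) * (mu0 - xbar) ** 2) / nu_new
    return mu_new, n_new, nu_new, s_sq_new
```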
## Occurrence and applications
The occurrence of the normal distribution in practical problems can be loosely classified into four categories:
1. Exactly normal distributions;
2. Approximately normal laws, for example when such approximation is justified by the [central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem "Central limit theorem");
3. Distributions modeled as normal – the normal distribution being the distribution with [maximum entropy](https://en.wikipedia.org/wiki/Principle_of_maximum_entropy "Principle of maximum entropy") for a given mean and variance; and
4. Regression problems – the normal distribution being found after systematic effects have been modeled sufficiently well.
### Exact normality
The ground state of a [quantum harmonic oscillator](https://en.wikipedia.org/wiki/Quantum_harmonic_oscillator "Quantum harmonic oscillator") has the Gaussian distribution.
A normal distribution occurs in some [physical theories](https://en.wikipedia.org/wiki/Physical_theory "Physical theory"):
- The [velocity distribution](https://en.wikipedia.org/wiki/Maxwell%E2%80%93Boltzmann_distribution#Distribution_for_the_velocity_vector "Maxwell–Boltzmann distribution") of independently moving and perfectly elastic spheres, which is a consequence of [Maxwell's Dynamical Theory of Gases, Part I (1860)](https://en.wikipedia.org/wiki/Maxwell%27s_theorem "Maxwell's theorem").[\[59\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-FOOTNOTEMaxwell186023-59)[\[60\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-FOOTNOTEBryc19951-60)
- The [ground state](https://en.wikipedia.org/wiki/Ground_state "Ground state") [wave function](https://en.wikipedia.org/wiki/Wave_function "Wave function") in [position space](https://en.wikipedia.org/wiki/Position_and_momentum_spaces#Quantum_mechanics "Position and momentum spaces") of the [quantum harmonic oscillator](https://en.wikipedia.org/wiki/Quantum_harmonic_oscillator "Quantum harmonic oscillator").[\[61\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-61)
- The position of a particle that experiences [diffusion](https://en.wikipedia.org/wiki/Diffusion "Diffusion"). If initially the particle is located at a specific point (that is, its probability distribution is the [Dirac delta function](https://en.wikipedia.org/wiki/Dirac_delta_function "Dirac delta function")), then after time t its location is described by a normal distribution with variance t, which satisfies the [diffusion equation](https://en.wikipedia.org/wiki/Diffusion_equation "Diffusion equation") {\\textstyle {\\frac {\\partial }{\\partial t}}f(x,t)={\\frac {1}{2}}{\\frac {\\partial ^{2}}{\\partial x^{2}}}f(x,t)}. If the initial location is given by a certain density function {\\textstyle g(x)}, then the density at time t is the [convolution](https://en.wikipedia.org/wiki/Convolution "Convolution") of g and the normal probability density function.
### Approximate normality
*Approximately* normal distributions occur in many situations, as explained by the [central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem "Central limit theorem"). When the outcome is produced by many small effects acting *additively and independently*, its distribution will be close to normal. The normal approximation will not be valid if the effects act multiplicatively (instead of additively), or if there is a single external influence that has a considerably larger magnitude than the rest of the effects.
- In counting problems, where the central limit theorem includes a discrete-to-continuum approximation and where [infinitely divisible](https://en.wikipedia.org/wiki/Infinitely_divisible "Infinitely divisible") and [decomposable](https://en.wikipedia.org/wiki/Indecomposable_distribution "Indecomposable distribution") distributions are involved, such as
- [Binomial random variables](https://en.wikipedia.org/wiki/Binomial_distribution "Binomial distribution"), associated with binary response variables;
- [Poisson random variables](https://en.wikipedia.org/wiki/Poisson_random_variables "Poisson random variables"), associated with rare events;
- [Thermal radiation](https://en.wikipedia.org/wiki/Thermal_radiation "Thermal radiation") has a [Bose–Einstein](https://en.wikipedia.org/wiki/Bose%E2%80%93Einstein_statistics "Bose–Einstein statistics") distribution on very short time scales, and a normal distribution on longer time scales due to the central limit theorem.
### Assumed normality
Histogram of sepal widths for *Iris versicolor* from Fisher's [Iris flower data set](https://en.wikipedia.org/wiki/Iris_flower_data_set "Iris flower data set"), with superimposed best-fitting normal distribution
> I can only recognize the occurrence of the normal curve – the Laplacian curve of errors – as a very abnormal phenomenon. It is roughly approximated to in certain distributions; for this reason, and on account of its beautiful simplicity, we may, perhaps, use it as a first approximation, particularly in theoretical investigations.
– [Pearson (1901)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFPearson1901)
There are statistical methods to empirically test that assumption; see the above [Normality tests](https://en.wikipedia.org/wiki/Normal_distribution#Normality_tests) section.
- In [biology](https://en.wikipedia.org/wiki/Biology "Biology"), the *logarithms* of various variables tend to have a normal distribution, that is, the variables tend to have a [log-normal distribution](https://en.wikipedia.org/wiki/Log-normal_distribution "Log-normal distribution") (after separation into male/female subpopulations), with examples including:
- Measures of size of living tissue (length, height, skin area, weight);[\[62\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-62)
- The *length* of *inert* appendages (hair, claws, nails, teeth) of biological specimens, *in the direction of growth*; presumably the thickness of tree bark also falls under this category;
- Certain physiological measurements, such as blood pressure of adult humans.
- In finance, in particular the [Black–Scholes model](https://en.wikipedia.org/wiki/Black%E2%80%93Scholes_model "Black–Scholes model"), changes in the *logarithm* of exchange rates, price indices, and stock market indices are assumed normal (these variables behave like [compound interest](https://en.wikipedia.org/wiki/Compound_interest "Compound interest"), not like simple interest, and so are multiplicative). Some mathematicians such as [Benoit Mandelbrot](https://en.wikipedia.org/wiki/Benoit_Mandelbrot "Benoit Mandelbrot") have argued that [log-Lévy distributions](https://en.wikipedia.org/wiki/Levy_skew_alpha-stable_distribution "Levy skew alpha-stable distribution"), which possess [heavy tails](https://en.wikipedia.org/wiki/Heavy_tails "Heavy tails"), would be a more appropriate model, in particular for the analysis of [stock market crashes](https://en.wikipedia.org/wiki/Stock_market_crash "Stock market crash"). The use of the assumption of a normal distribution in financial models has also been criticized by [Nassim Nicholas Taleb](https://en.wikipedia.org/wiki/Nassim_Nicholas_Taleb "Nassim Nicholas Taleb") in his works.
- [Measurement errors](https://en.wikipedia.org/wiki/Propagation_of_uncertainty "Propagation of uncertainty") in physical experiments are often modeled by a normal distribution. This use of a normal distribution does not imply that one is assuming the measurement errors are normally distributed; rather, using the normal distribution produces the most conservative predictions possible given only knowledge of the mean and variance of the errors.[\[63\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-63)
- In [standardized testing](https://en.wikipedia.org/wiki/Standardized_testing_\(statistics\) "Standardized testing (statistics)"), results can be made to have a normal distribution by either selecting the number and difficulty of questions (as in the [IQ test](https://en.wikipedia.org/wiki/Intelligence_quotient "Intelligence quotient")) or transforming the raw test scores into output scores by fitting them to the normal distribution. For example, the [SAT](https://en.wikipedia.org/wiki/SAT "SAT")'s traditional range of 200–800 is based on a normal distribution with a mean of 500 and a standard deviation of 100.
Fitted cumulative normal distribution to October rainfalls, see [distribution fitting](https://en.wikipedia.org/wiki/Distribution_fitting "Distribution fitting")
- Many scores are derived from the normal distribution, including [percentile ranks](https://en.wikipedia.org/wiki/Percentile_rank "Percentile rank") (percentiles or quantiles), [normal curve equivalents](https://en.wikipedia.org/wiki/Normal_curve_equivalent "Normal curve equivalent"), [stanines](https://en.wikipedia.org/wiki/Stanine "Stanine"), [z-scores](https://en.wikipedia.org/wiki/Z-scores "Z-scores"), and T-scores. Additionally, some [behavioral statistical](https://en.wikipedia.org/wiki/Psychological_statistics "Psychological statistics") procedures assume that scores are normally distributed; for example, [t-tests](https://en.wikipedia.org/wiki/T-tests "T-tests") and [ANOVAs](https://en.wikipedia.org/wiki/Analysis_of_variance "Analysis of variance"). [Bell curve grading](https://en.wikipedia.org/wiki/Bell_curve_grading "Bell curve grading") assigns relative grades based on a normal distribution of scores.
- In [hydrology](https://en.wikipedia.org/wiki/Hydrology "Hydrology") the distribution of long-duration river discharge or rainfall, e.g. monthly and yearly totals, is often thought to be practically normal according to the [central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem "Central limit theorem").[\[64\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-64) The plot above illustrates an example of fitting the normal distribution to ranked October rainfalls, showing the 90% [confidence belt](https://en.wikipedia.org/wiki/Confidence_belt "Confidence belt") based on the [binomial distribution](https://en.wikipedia.org/wiki/Binomial_distribution "Binomial distribution"). The rainfall data are represented by [plotting positions](https://en.wikipedia.org/wiki/Plotting_position "Plotting position") as part of the [cumulative frequency analysis](https://en.wikipedia.org/wiki/Cumulative_frequency_analysis "Cumulative frequency analysis").
### Methodological problems and peer review
[John Ioannidis](https://en.wikipedia.org/wiki/John_Ioannidis "John Ioannidis") [argued](https://en.wikipedia.org/wiki/Why_Most_Published_Research_Findings_Are_False "Why Most Published Research Findings Are False") that using normally distributed standard deviations as standards for validating research findings leaves [falsifiable predictions](https://en.wikipedia.org/wiki/Falsifiability "Falsifiability") about phenomena that are not normally distributed untested. This includes, for example, phenomena that only appear when all necessary conditions are present and one cannot be a substitute for another in an addition-like way, and phenomena that are not randomly distributed. Ioannidis argues that standard-deviation-centered validation gives a false appearance of validity to hypotheses and theories where some but not all falsifiable predictions are normally distributed, since the portion of falsifiable predictions against which there is evidence may be, and in some cases is, in the non-normally distributed parts of the range of falsifiable predictions; it also baselessly dismisses hypotheses for which none of the falsifiable predictions are normally distributed, as if they were unfalsifiable, when in fact they do make falsifiable predictions. Ioannidis argues that many cases of mutually exclusive theories being accepted as validated by research journals are caused by the journals' failure to take in empirical falsifications of non-normally distributed predictions, not because the mutually exclusive theories are true, which they cannot be, although two mutually exclusive theories can both be wrong and a third one correct.[\[65\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-65)
## Computational methods
### Generating values from normal distribution
The [bean machine](https://en.wikipedia.org/wiki/Bean_machine "Bean machine"), a device invented by [Francis Galton](https://en.wikipedia.org/wiki/Francis_Galton "Francis Galton"), can be called the first generator of normal random variables. This machine consists of a vertical board with interleaved rows of pins. Small balls are dropped from the top and then bounce randomly left or right as they hit the pins. The balls are collected into bins at the bottom and settle down into a pattern resembling the Gaussian curve.
In computer simulations, especially in applications of the [Monte-Carlo method](https://en.wikipedia.org/wiki/Monte-Carlo_method "Monte-Carlo method"), it is often desirable to generate values that are normally distributed. The algorithms listed below all generate standard normal deviates, since an *N*(*μ*, *σ*²) variate can be generated as *X* = *μ* + *σZ*, where Z is standard normal. All these algorithms rely on the availability of a [random number generator](https://en.wikipedia.org/wiki/Random_number_generator "Random number generator") U capable of producing [uniform](https://en.wikipedia.org/wiki/Uniform_distribution_\(continuous\) "Uniform distribution (continuous)") random variates.
- The most straightforward method is based on the [probability integral transform](https://en.wikipedia.org/wiki/Probability_integral_transform "Probability integral transform") property: if U is distributed uniformly on (0,1), then Φ⁻¹(U) will have the standard normal distribution. The drawback of this method is that it relies on calculation of the [probit function](https://en.wikipedia.org/wiki/Probit_function "Probit function") Φ⁻¹, which cannot be done analytically. Some approximate methods are described in [Hart (1968)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFHart1968) and in the [erf](https://en.wikipedia.org/wiki/Error_function "Error function") article. Wichura gives a fast algorithm for computing this function to 16 decimal places,[\[66\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-66) which is used by [R](https://en.wikipedia.org/wiki/R_programming_language "R programming language") to compute random variates of the normal distribution.
- [An easy-to-program approximate approach](https://en.wikipedia.org/wiki/Irwin%E2%80%93Hall_distribution#Approximating_a_Normal_distribution "Irwin–Hall distribution") that relies on the [central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem "Central limit theorem") is as follows: generate 12 uniform *U*(0,1) deviates, add them all up, and subtract 6 – the resulting random variable will have approximately standard normal distribution. In truth, the distribution will be [Irwin–Hall](https://en.wikipedia.org/wiki/Irwin%E2%80%93Hall_distribution "Irwin–Hall distribution"), which is a 12-section eleventh-order polynomial approximation to the normal distribution. This random deviate will have a limited range of (−6, 6).[\[67\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-67) Note that in a true normal distribution, only 0.00034% of all samples will fall outside ±6σ. (This approach, together with several of the exact methods below, is illustrated in a short sketch after this list.)
- The [Box–Muller method](https://en.wikipedia.org/wiki/Box%E2%80%93Muller_method "Box–Muller method") uses two independent random numbers U and V distributed [uniformly](https://en.wikipedia.org/wiki/Uniform_distribution_\(continuous\) "Uniform distribution (continuous)") on (0,1). Then the two random variables X and Y {\\displaystyle X={\\sqrt {-2\\ln U}}\\,\\cos(2\\pi V),\\qquad Y={\\sqrt {-2\\ln U}}\\,\\sin(2\\pi V).} will both have the standard normal distribution, and will be [independent](https://en.wikipedia.org/wiki/Independence_\(probability_theory\) "Independence (probability theory)"). This formulation arises because for a [bivariate normal](https://en.wikipedia.org/wiki/Bivariate_normal "Bivariate normal") random vector (*X*, *Y*) the squared norm *X*² + *Y*² will have the [chi-squared distribution](https://en.wikipedia.org/wiki/Chi-squared_distribution "Chi-squared distribution") with two degrees of freedom, which is an easily generated [exponential random variable](https://en.wikipedia.org/wiki/Exponential_random_variable "Exponential random variable") corresponding to the quantity −2 ln(*U*) in these equations; and the angle is distributed uniformly around the circle, chosen by the random variable V.
- The [Marsaglia polar method](https://en.wikipedia.org/wiki/Marsaglia_polar_method "Marsaglia polar method") is a modification of the Box–Muller method which does not require computation of the sine and cosine functions. In this method, U and V are drawn from the uniform (−1,1) distribution, and then *S* = *U*² + *V*² is computed. If S is greater than or equal to 1, then the method starts over; otherwise the two quantities {\\displaystyle X=U{\\sqrt {\\frac {-2\\ln S}{S}}},\\qquad Y=V{\\sqrt {\\frac {-2\\ln S}{S}}}} are returned. Again, X and Y are independent, standard normal random variables.
- The Ratio method[\[68\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-68) is a rejection method. The algorithm proceeds as follows:
  1. Generate two independent uniform deviates U and V;
  2. Compute *X* = √(8/*e*) (*V* − 0.5)/*U*;
  3. Optional: if *X*² ≤ 5 − 4*e*^(1/4)*U* then accept X and terminate the algorithm;
  4. Optional: if *X*² ≥ 4*e*^(−1.35)/*U* + 1.4 then reject X and start over from step 1;
  5. If *X*² ≤ −4 ln *U* then accept X; otherwise start the algorithm over.
The two optional steps allow the evaluation of the logarithm in the last step to be avoided in most cases. These steps can be greatly improved[\[69\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-69) so that the logarithm is rarely evaluated.
- The [ziggurat algorithm](https://en.wikipedia.org/wiki/Ziggurat_algorithm "Ziggurat algorithm")[\[70\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-70) is faster than the Box–Muller transform and still exact. In about 97% of all cases it uses only two random numbers, one random integer and one random uniform, one multiplication and an if-test. Only in 3% of the cases, where the combination of those two falls outside the "core of the ziggurat" (a kind of rejection sampling using logarithms), do exponentials and more uniform random numbers have to be employed.
- Integer arithmetic can be used to sample from the standard normal distribution.[\[71\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-71)[\[72\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-72) This method is exact in the sense that it satisfies the conditions of *ideal approximation*;[\[73\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-73) i.e., it is equivalent to sampling a real number from the standard normal distribution and rounding this to the nearest representable floating point number.
- There is also some investigation[\[74\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-74) into the connection between the fast [Hadamard transform](https://en.wikipedia.org/wiki/Hadamard_transform "Hadamard transform") and the normal distribution, since the transform employs just addition and subtraction, and by the central limit theorem random numbers from almost any distribution will be transformed into the normal distribution. In this regard, a series of Hadamard transforms can be combined with random permutations to turn arbitrary data sets into normally distributed data.
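For illustration, here is a minimal Python sketch (standard library only; the function names are ours, not from any library) of the 12-uniform approximation, the Box–Muller transform, the Marsaglia polar method, and the ratio method with its quick accept/reject tests. Exact inverse-CDF sampling, by contrast, is typically delegated to a library routine such as scipy.stats.norm.ppf.

```python
import math
import random

def clt12():
    """Approximate N(0,1): sum of 12 U(0,1) deviates minus 6 (Irwin-Hall)."""
    return sum(random.random() for _ in range(12)) - 6.0

def box_muller():
    """Box-Muller transform: returns two independent N(0,1) deviates."""
    u = 1.0 - random.random()            # in (0, 1]; avoids log(0)
    v = random.random()
    r = math.sqrt(-2.0 * math.log(u))
    return r * math.cos(2.0 * math.pi * v), r * math.sin(2.0 * math.pi * v)

def marsaglia_polar():
    """Marsaglia polar method: Box-Muller variant avoiding sin/cos."""
    while True:
        u = random.uniform(-1.0, 1.0)
        v = random.uniform(-1.0, 1.0)
        s = u * u + v * v
        if 0.0 < s < 1.0:
            f = math.sqrt(-2.0 * math.log(s) / s)
            return u * f, v * f

def ratio_method():
    """Ratio (rejection) method with the two optional fast tests."""
    while True:
        u = 1.0 - random.random()        # in (0, 1]
        v = random.random()
        x = math.sqrt(8.0 / math.e) * (v - 0.5) / u
        x2 = x * x
        if x2 <= 5.0 - 4.0 * math.exp(0.25) * u:     # quick accept
            return x
        if x2 >= 4.0 * math.exp(-1.35) / u + 1.4:    # quick reject
            continue
        if x2 <= -4.0 * math.log(u):                 # exact test
            return x

# An N(mu, sigma^2) deviate is mu + sigma * Z for a standard normal Z.
z, _ = box_muller()
print(5.0 + 2.0 * z)
```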
### Numerical approximations for the normal cumulative distribution function and normal quantile function
The standard normal [cumulative distribution function](https://en.wikipedia.org/wiki/Cumulative_distribution_function "Cumulative distribution function") is widely used in scientific and statistical computing.
The values *Ί*(*x*) may be approximated very accurately by a variety of methods, such as [numerical integration](https://en.wikipedia.org/wiki/Numerical_integration "Numerical integration"), [Taylor series](https://en.wikipedia.org/wiki/Taylor_series "Taylor series"), [asymptotic series](https://en.wikipedia.org/wiki/Asymptotic_series "Asymptotic series") and [continued fractions](https://en.wikipedia.org/wiki/Gauss%27s_continued_fraction#Of_Kummer's_confluent_hypergeometric_function "Gauss's continued fraction"). Different approximations are used depending on the desired level of accuracy.
- [Zelen & Severo (1964)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFZelenSevero1964) give the approximation for *Φ*(*x*) for *x* > 0 with the absolute error |*ε*(*x*)| < 7.5·10⁻⁸ (algorithm [26.2.17](https://secure.math.ubc.ca/~cbm/aands/page_932.htm); implemented in the sketch following this list): {\\displaystyle \\Phi (x)=1-\\varphi (x)\\left(b\_{1}t+b\_{2}t^{2}+b\_{3}t^{3}+b\_{4}t^{4}+b\_{5}t^{5}\\right)+\\varepsilon (x),\\qquad t={\\frac {1}{1+b\_{0}x}},} where *φ*(*x*) is the standard normal probability density function, and *b*0 = 0.2316419, *b*1 = 0.319381530, *b*2 = −0.356563782, *b*3 = 1.781477937, *b*4 = −1.821255978, *b*5 = 1.330274429.
- [Hart (1968)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFHart1968) lists dozens of approximations by means of rational functions, with or without exponentials, for the `erfc()` function, where erfc(x) = 1 - erf(x). His algorithms vary in the degree of complexity and the resulting precision, with a maximum absolute precision of 24 digits. An algorithm by [West (2009)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFWest2009) combines Hart's algorithm 5666 with a [continued fraction](https://en.wikipedia.org/wiki/Continued_fraction "Continued fraction") approximation in the tail to provide a fast computation algorithm with 16-digit precision.
- [Cody (1969)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFCody1969), after recalling that the Hart (1968) solution is not suited for erf, gave a solution for both erf and erfc, with a maximal relative error bound, via [rational Chebyshev approximation](https://en.wikipedia.org/wiki/Rational_function "Rational function").
- [Marsaglia (2004)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFMarsaglia2004) suggested a simple algorithm[\[note 1\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-75) based on the Taylor series expansion {\\displaystyle \\Phi (x)={\\frac {1}{2}}+\\varphi (x)\\left(x+{\\frac {x^{3}}{3}}+{\\frac {x^{5}}{3\\cdot 5}}+{\\frac {x^{7}}{3\\cdot 5\\cdot 7}}+{\\frac {x^{9}}{3\\cdot 5\\cdot 7\\cdot 9}}+\\cdots \\right)} for calculating *Φ*(*x*) with arbitrary precision. The drawback of this algorithm is its comparatively slow calculation time (for example, it takes over 300 iterations to calculate the function with 16 digits of precision when *x* = 10); it too is included in the sketch following this list.
- The [GNU Scientific Library](https://en.wikipedia.org/wiki/GNU_Scientific_Library "GNU Scientific Library") calculates values of the standard normal cumulative distribution function using Hart's algorithms and approximations with [Chebyshev polynomials](https://en.wikipedia.org/wiki/Chebyshev_polynomial "Chebyshev polynomial").
- [Dia (2023)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFDia2023) proposes the following approximation of {\\textstyle 1-\\Phi } with a maximum relative error less than {\\textstyle 2^{-53}} {\\textstyle \\left(\\approx 1.1\\times 10^{-16}\\right)} in absolute value: for {\\textstyle x\\geq 0}, {\\textstyle {\\begin{aligned}1-\\Phi \\left(x\\right)&=\\left({\\frac {0.39894228040143268}{x+2.92678600515804815}}\\right)\\left({\\frac {x^{2}+8.42742300458043240x+18.38871225773938487}{x^{2}+5.81582518933527391x+8.97280659046817350}}\\right)\\\\&\\left({\\frac {x^{2}+7.30756258553673541x+18.25323235347346525}{x^{2}+5.70347935898051437x+10.27157061171363079}}\\right)\\left({\\frac {x^{2}+5.66479518878470765x+18.61193318971775795}{x^{2}+5.51862483025707963x+12.72323261907760928}}\\right)\\\\&\\left({\\frac {x^{2}+4.91396098895240075x+24.14804072812762821}{x^{2}+5.26184239579604207x+16.88639562007936908}}\\right)\\left({\\frac {x^{2}+3.83362947800146179x+11.61511226260603247}{x^{2}+4.92081346632882033x+24.12333774572479110}}\\right)e^{-{\\frac {x^{2}}{2}}}\\end{aligned}}} and for {\\textstyle x\<0}, {\\displaystyle 1-\\Phi \\left(x\\right)=1-\\left(1-\\Phi \\left(-x\\right)\\right)}
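As an illustration of the trade-off between speed and precision among the methods above, here is a minimal Python sketch (the function names are ours) of the Zelen & Severo formula 26.2.17 and of Marsaglia's Taylor series; the first is fast with fixed accuracy, the second slow but arbitrarily precise:

```python
import math

# Zelen & Severo (1964) coefficients b0..b5, as listed above.
B = (0.2316419, 0.319381530, -0.356563782,
     1.781477937, -1.821255978, 1.330274429)

def phi(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def norm_cdf_zelen_severo(x):
    """Formula 26.2.17: |error| < 7.5e-8; symmetry handles x < 0."""
    if x < 0.0:
        return 1.0 - norm_cdf_zelen_severo(-x)
    t = 1.0 / (1.0 + B[0] * x)
    poly = t * (B[1] + t * (B[2] + t * (B[3] + t * (B[4] + t * B[5]))))
    return 1.0 - phi(x) * poly

def norm_cdf_marsaglia(x, tol=1e-16):
    """Marsaglia (2004) Taylor series: Phi(x) = 1/2 + phi(x) * sum."""
    term = x        # first term of x + x^3/3 + x^5/(3*5) + ...
    total = x
    n = 0
    while abs(term) > tol * abs(total):
        n += 1
        term *= x * x / (2 * n + 1)   # next odd-denominator term
        total += term
    return 0.5 + phi(x) * total

print(norm_cdf_zelen_severo(1.96), norm_cdf_marsaglia(1.96))
```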
Shore (1982) introduced simple approximations that may be incorporated in stochastic optimization models of engineering and operations research, like reliability engineering and inventory analysis. Denoting *p* = *Φ*(*z*), the simplest approximation for the quantile function is: {\\displaystyle z=\\Phi ^{-1}(p)=5.5556\\left\[1-\\left({\\frac {1-p}{p}}\\right)^{0.1186}\\right\],\\qquad p\\geq 1/2}
This approximation delivers for z a maximum absolute error of 0.026 (for 0.5 ≤ *p* ≤ 0.9999, corresponding to 0 ≤ *z* ≤ 3.719). For *p* < 1/2, replace p by 1 − *p* and change sign. Another approximation, somewhat less accurate, is the single-parameter approximation: {\\displaystyle z=-0.4115\\left\\{{\\frac {1-p}{p}}+\\log \\left\[{\\frac {1-p}{p}}\\right\]-1\\right\\},\\qquad p\\geq 1/2}
The latter had served to derive a simple approximation for the loss integral of the normal distribution, defined by {\\displaystyle {\\begin{aligned}L(z)&=\\int \_{z}^{\\infty }(u-z)\\varphi (u)\\,du=\\int \_{z}^{\\infty }\[1-\\Phi (u)\]\\,du\\\\\[5pt\]L(z)&\\approx {\\begin{cases}0.4115\\left({\\dfrac {p}{1-p}}\\right)-z,\&p\<1/2,\\\\\\\\0.4115\\left({\\dfrac {1-p}{p}}\\right),\&p\\geq 1/2.\\end{cases}}\\\\\[5pt\]{\\text{or, equivalently,}}\\\\L(z)&\\approx {\\begin{cases}0.4115\\left\\{1-\\log \\left\[{\\frac {p}{1-p}}\\right\]\\right\\},\&p\<1/2,\\\\\\\\0.4115{\\dfrac {1-p}{p}},\&p\\geq 1/2.\\end{cases}}\\end{aligned}}}
This approximation is particularly accurate for the far right tail (maximum error of 10⁻³ for *z* ≥ 1.4). Highly accurate approximations for the cumulative distribution function, based on [Response Modeling Methodology](https://en.wikipedia.org/wiki/Response_Modeling_Methodology "Response Modeling Methodology") (RMM; Shore, 2011, 2012), are shown in Shore (2005).
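A minimal Python sketch of the simplest of these quantile approximations (the function name is ours):

```python
def shore_quantile(p):
    """Shore (1982) simple approximation to z = Phi^{-1}(p).

    Maximum absolute error about 0.026 for 0.5 <= p <= 0.9999; for p < 1/2
    the symmetry rule (replace p by 1 - p and change sign) is applied.
    """
    if p < 0.5:
        return -shore_quantile(1.0 - p)
    return 5.5556 * (1.0 - ((1.0 - p) / p) ** 0.1186)

print(shore_quantile(0.975))   # about 1.96
```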
Some more approximations can be found at [Error function § Approximation with elementary functions](https://en.wikipedia.org/wiki/Error_function#Approximation_with_elementary_functions "Error function"). In particular, a small *relative* error on the whole domain for both the cumulative distribution function {\\displaystyle \\Phi } and the quantile function {\\textstyle \\Phi ^{-1}} is achieved via an explicitly invertible formula by Sergei Winitzki in 2008.
## History
### Development
Some authors[\[75\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-76)[\[76\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-77) attribute the discovery of the normal distribution to [de Moivre](https://en.wikipedia.org/wiki/De_Moivre "De Moivre"), who in 1738[\[note 2\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-78) published in the second edition of his *[The Doctrine of Chances](https://en.wikipedia.org/wiki/The_Doctrine_of_Chances "The Doctrine of Chances")* the study of the coefficients in the [binomial expansion](https://en.wikipedia.org/wiki/Binomial_expansion "Binomial expansion") of (*a* + *b*)ⁿ. De Moivre proved that the middle term in this expansion has the approximate magnitude of $2^n/\sqrt{2\pi n}$, and that "If m or ½n be a Quantity infinitely great, then the Logarithm of the Ratio, which a Term distant from the middle by the Interval ℓ, has to the middle Term, is $-\frac{2\ell\ell}{n}$."[\[77\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-79) Although this theorem can be interpreted as the first obscure expression for the normal probability law, [Stigler](https://en.wikipedia.org/wiki/Stephen_Stigler "Stephen Stigler") points out that de Moivre himself did not interpret his results as anything more than the approximate rule for the binomial coefficients, and in particular de Moivre lacked the concept of the probability density function.[\[78\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-80)
In 1809, [Carl Friedrich Gauss](https://en.wikipedia.org/wiki/Carl_Friedrich_Gauss "Carl Friedrich Gauss") showed that the normal distribution provides a way to rationalize the [method of least squares](https://en.wikipedia.org/wiki/Method_of_least_squares "Method of least squares").
In 1823 [Gauss](https://en.wikipedia.org/wiki/Gauss "Gauss") published his monograph *Theoria combinationis observationum erroribus minimis obnoxiae*, where among other things he introduces several important statistical concepts, such as the [method of least squares](https://en.wikipedia.org/wiki/Method_of_least_squares "Method of least squares"), the [method of maximum likelihood](https://en.wikipedia.org/wiki/Method_of_maximum_likelihood "Method of maximum likelihood"), and the *normal distribution*. Gauss used *M*, *M*′, *M*″, ... to denote the measurements of some unknown quantity *V*, and sought the most probable estimator of that quantity: the one that maximizes the probability *φ*(*M* − *V*) · *φ*(*M*′ − *V*) · *φ*(*M*″ − *V*) · ... of obtaining the observed experimental results. In his notation, *φ*Δ is the probability density function of the measurement errors of magnitude Δ. Not knowing what the function *φ* is, Gauss requires that his method should reduce to the well-known answer: the arithmetic mean of the measured values.[\[note 3\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-81) Starting from these principles, Gauss demonstrates that the only law that rationalizes the choice of arithmetic mean as an estimator of the location parameter is the normal law of errors:[\[79\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-82)

$$\varphi\mathit{\Delta} = \frac{h}{\surd\pi}\, e^{-hh\Delta\Delta},$$

where *h* is "the measure of the precision of the observations". Using this normal law as a generic model for errors in the experiments, Gauss formulates what is now known as the [non-linear](https://en.wikipedia.org/wiki/Non-linear_least_squares "Non-linear least squares") [weighted least squares](https://en.wikipedia.org/wiki/Weighted_least_squares "Weighted least squares") method.[\[80\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-83)
[Pierre-Simon Laplace](https://en.wikipedia.org/wiki/Pierre-Simon_Laplace "Pierre-Simon Laplace") proved the [central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem "Central limit theorem") in 1810, consolidating the importance of the normal distribution in statistics.
Although Gauss was the first to suggest the normal distribution law, [Laplace](https://en.wikipedia.org/wiki/Laplace "Laplace") made significant contributions.[\[note 4\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-84) It was Laplace who first posed the problem of aggregating several observations in 1774,[\[81\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-85) although his own solution led to the [Laplacian distribution](https://en.wikipedia.org/wiki/Laplacian_distribution "Laplacian distribution"). It was Laplace who first calculated the value of the [integral ∫ *e*^(−*t*²) *dt* = √π](https://en.wikipedia.org/wiki/Gaussian_integral "Gaussian integral") in 1782, providing the normalization constant for the normal distribution.[\[82\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-86) For this accomplishment, Gauss acknowledged the priority of Laplace.[\[83\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-87) Finally, it was Laplace who in 1810 proved and presented to the academy the fundamental [central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem "Central limit theorem"), which emphasized the theoretical importance of the normal distribution.[\[84\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-88)
It is of interest to note that in 1809 the Irish-American mathematician [Robert Adrain](https://en.wikipedia.org/wiki/Robert_Adrain "Robert Adrain") published two insightful but flawed derivations of the normal probability law, simultaneously with and independently of Gauss.[\[85\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-89) His works remained largely unnoticed by the scientific community until 1871, when they were exhumed by [Abbe](https://en.wikipedia.org/wiki/Cleveland_Abbe "Cleveland Abbe").[\[86\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-90)
In the middle of the 19th century [Maxwell](https://en.wikipedia.org/wiki/James_Clerk_Maxwell "James Clerk Maxwell") demonstrated that the normal distribution is not just a convenient mathematical tool, but may also occur in natural phenomena:[\[59\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-FOOTNOTEMaxwell186023-59) The number of particles whose velocity, resolved in a certain direction, lies between *x* and *x* + *dx* is

$$\operatorname{N} \frac{1}{\alpha\sqrt{\pi}}\, e^{-\frac{x^2}{\alpha^2}}\, dx.$$
### Naming
Today, the concept is usually known in English as the **normal distribution** or **Gaussian distribution**. Other, less common names include Gauss distribution, Laplace–Gauss distribution, the law of error, the law of facility of errors, Laplace's second law, and Gaussian law.
Gauss himself apparently coined the term with reference to the "normal equations" involved in its applications, with *normal* having its technical meaning of orthogonal rather than usual.[\[87\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-91) However, by the end of the 19th century some authors[\[note 5\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-92) had started using the name *normal distribution*, where the word "normal" was used as an adjective, the term now being seen as a reflection of the fact that this distribution was seen as typical, common, and thus normal. [Peirce](https://en.wikipedia.org/wiki/Charles_Sanders_Peirce "Charles Sanders Peirce") (one of those authors) once defined "normal" thus: "... the 'normal' is not the average (or any other kind of mean) of what actually occurs, but of what *would*, in the long run, occur under certain circumstances."[\[88\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-93) Around the turn of the 20th century [Pearson](https://en.wikipedia.org/wiki/Karl_Pearson "Karl Pearson") popularized the term *normal* as a designation for this distribution.[\[89\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-94)
> Many years ago I called the Laplace–Gaussian curve the *normal* curve, which name, while it avoids an international question of priority, has the disadvantage of leading people to believe that all other distributions of frequency are in one sense or another 'abnormal'.

– [Pearson (1920)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFPearson1920)
Also, it was Pearson who first wrote the distribution in terms of the standard deviation *σ* as in modern notation. Soon after this, in 1915, [Fisher](https://en.wikipedia.org/wiki/Ronald_Fisher "Ronald Fisher") added the location parameter to the formula for the normal distribution, expressing it in the way it is written nowadays:

$$df = \frac{1}{\sqrt{2\sigma^2\pi}}\, e^{-(x-m)^2/(2\sigma^2)}\, dx.$$
The term *standard normal distribution*, which denotes the normal distribution with zero mean and unit variance, came into general use around the 1950s, appearing in the popular textbooks by P. G. Hoel (1947), *Introduction to Mathematical Statistics*, and [Alexander M. Mood](https://en.wikipedia.org/wiki/Alexander_M._Mood "Alexander M. Mood") (1950), *Introduction to the Theory of Statistics*.[\[90\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-95)[\[91\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-96)[\[92\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-97)
## See also
- [Mathematics portal](https://en.wikipedia.org/wiki/Portal:Mathematics "Portal:Mathematics")
- [Bates distribution](https://en.wikipedia.org/wiki/Bates_distribution "Bates distribution") – similar to the Irwin–Hall distribution, but rescaled back into the 0 to 1 range
- [Behrens–Fisher problem](https://en.wikipedia.org/wiki/Behrens%E2%80%93Fisher_problem "Behrens–Fisher problem") – the long-standing problem of testing whether two normal samples with different variances have the same means
- [Bhattacharyya distance](https://en.wikipedia.org/wiki/Bhattacharyya_distance "Bhattacharyya distance") – method used to separate mixtures of normal distributions
- [Erdős–Kac theorem](https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Kac_theorem "Erdős–Kac theorem") – on the occurrence of the normal distribution in [number theory](https://en.wikipedia.org/wiki/Number_theory "Number theory")
- [Full width at half maximum](https://en.wikipedia.org/wiki/Full_width_at_half_maximum "Full width at half maximum")
- [Gaussian blur](https://en.wikipedia.org/wiki/Gaussian_blur "Gaussian blur") – [convolution](https://en.wikipedia.org/wiki/Convolution "Convolution"), which uses the normal distribution as a kernel
- [Gaussian function](https://en.wikipedia.org/wiki/Gaussian_function "Gaussian function")
- [Modified half-normal distribution](https://en.wikipedia.org/wiki/Modified_half-normal_distribution "Modified half-normal distribution")[\[93\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Sun-2021-98) with the pdf on $(0, \infty)$ given as $f(x) = \frac{2\beta^{\alpha/2} x^{\alpha-1} \exp(-\beta x^2 + \gamma x)}{\Psi\left(\frac{\alpha}{2}, \frac{\gamma}{\sqrt{\beta}}\right)}$, where $\Psi(\alpha, z) = {}_{1}\Psi_{1}\left(\begin{smallmatrix}(\alpha, \frac{1}{2})\\(1, 0)\end{smallmatrix}; z\right)$ denotes the [Fox–Wright Psi function](https://en.wikipedia.org/wiki/Fox%E2%80%93Wright_Psi_function "Fox–Wright Psi function")
- [Normally distributed and uncorrelated does not imply independent](https://en.wikipedia.org/wiki/Normally_distributed_and_uncorrelated_does_not_imply_independent "Normally distributed and uncorrelated does not imply independent")
- [Ratio normal distribution](https://en.wikipedia.org/wiki/Ratio_normal_distribution "Ratio normal distribution")
- [Reciprocal normal distribution](https://en.wikipedia.org/wiki/Reciprocal_normal_distribution "Reciprocal normal distribution")
- [Standard normal table](https://en.wikipedia.org/wiki/Standard_normal_table "Standard normal table")
- [Stein's lemma](https://en.wikipedia.org/wiki/Stein%27s_lemma "Stein's lemma")
- [Sub-Gaussian distribution](https://en.wikipedia.org/wiki/Sub-Gaussian_distribution "Sub-Gaussian distribution")
- [Sum of normally distributed random variables](https://en.wikipedia.org/wiki/Sum_of_normally_distributed_random_variables "Sum of normally distributed random variables")
- [Tweedie distribution](https://en.wikipedia.org/wiki/Tweedie_distribution "Tweedie distribution") – the normal distribution is a member of the family of Tweedie [exponential dispersion models](https://en.wikipedia.org/wiki/Exponential_dispersion_model "Exponential dispersion model")
- [Wrapped normal distribution](https://en.wikipedia.org/wiki/Wrapped_normal_distribution "Wrapped normal distribution") – the normal distribution applied to a circular domain
- [Z-test](https://en.wikipedia.org/wiki/Z-test "Z-test") – using the normal distribution
## Notes
1. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-75)** For example, this algorithm is given in the article [Bc programming language](https://en.wikipedia.org/wiki/Bc_programming_language#A_translated_C_function "Bc programming language").
2. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-78)** De Moivre first published his findings in 1733, in a pamphlet *Approximatio ad Summam Terminorum Binomii (a + b)ⁿ in Seriem Expansi* that was designated for private circulation only. But it was not until the year 1738 that he made his results publicly available. The original pamphlet was reprinted several times, see for example [Walker (1985)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFWalker1985).
3. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-81)** "It has been customary certainly to regard as an axiom the hypothesis that if any quantity has been determined by several direct observations, made under the same circumstances and with equal care, the arithmetical mean of the observed values affords the most probable value, if not rigorously, yet very nearly at least, so that it is always most safe to adhere to it." â [Gauss (1809](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFGauss1809), section 177)
4. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-84)** "My custom of terming the curve the Gauss–Laplacian or *normal* curve saves us from proportioning the merit of discovery between the two great astronomer mathematicians." quote from [Pearson (1905](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFPearson1905), p. 189)
5. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-92)** Besides those specifically referenced here, such use is encountered in the works of [Peirce](https://en.wikipedia.org/wiki/Charles_Sanders_Peirce "Charles Sanders Peirce"), [Galton](https://en.wikipedia.org/wiki/Galton "Galton") ([Galton (1889](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFGalton1889), chapter V)) and [Lexis](https://en.wikipedia.org/wiki/Wilhelm_Lexis "Wilhelm Lexis") ([Lexis (1878)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFLexis1878), [Rohrbasser & Véron (2003)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFRohrbasserV%C3%A9ron2003)) c. 1875.\[*[citation needed](https://en.wikipedia.org/wiki/Wikipedia:Citation_needed "Wikipedia:Citation needed")*\]
## References
### Citations
1. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Norton-2019_1-0)**
Norton, Matthew; Khokhlov, Valentyn; Uryasev, Stan (2019). ["Calculating CVaR and bPOE for common probability distributions with application to portfolio optimization and density estimation"](https://web.archive.org/web/20230331230821/http://uryasev.ams.stonybrook.edu/wp-content/uploads/2019/10/Norton2019_CVaR_bPOE.pdf) (PDF). *Annals of Operations Research*. **299** (1–2). Springer: 1281–1315. [arXiv](https://en.wikipedia.org/wiki/ArXiv_\(identifier\) "ArXiv (identifier)"):[1811.11301](https://arxiv.org/abs/1811.11301). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1007/s10479-019-03373-1](https://doi.org/10.1007%2Fs10479-019-03373-1). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [254231768](https://api.semanticscholar.org/CorpusID:254231768). Archived from [the original](http://uryasev.ams.stonybrook.edu/wp-content/uploads/2019/10/Norton2019_CVaR_bPOE.pdf) (PDF) on March 31, 2023. Retrieved February 27, 2023.
2. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-The_Joy_of_Finite_Mathematics_2-0)**
Tsokos, Chris; Wooten, Rebecca (January 1, 2016). Tsokos, Chris; Wooten, Rebecca (eds.). [*The Joy of Finite Mathematics*](https://linkinghub.elsevier.com/retrieve/pii/B9780128029671000073). Boston: Academic Press. pp. 231–263. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1016/b978-0-12-802967-1.00007-3](https://doi.org/10.1016%2Fb978-0-12-802967-1.00007-3). [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-12-802967-1](https://en.wikipedia.org/wiki/Special:BookSources/978-0-12-802967-1 "Special:BookSources/978-0-12-802967-1").
3. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Mathematics_for_Physical_Science_and_Engineering_3-0)**
Harris, Frank E. (January 1, 2014). Harris, Frank E. (ed.). [*Mathematics for Physical Science and Engineering*](https://linkinghub.elsevier.com/retrieve/pii/B9780128010006000183). Boston: Academic Press. pp. 663–709. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1016/b978-0-12-801000-6.00018-3](https://doi.org/10.1016%2Fb978-0-12-801000-6.00018-3). [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-12-801000-6](https://en.wikipedia.org/wiki/Special:BookSources/978-0-12-801000-6 "Special:BookSources/978-0-12-801000-6").
4. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-4)** [Hoel (1947](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFHoel1947), [p. 31](https://archive.org/details/in.ernet.dli.2015.263186/page/n39/mode/2up?q=%22normal+distribution%22)) and [Mood (1950](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFMood1950), [p. 109](https://archive.org/details/introductiontoth0000alex/page/108/mode/2up?q=%22normal+distribution%22)) give this definition with slightly different notation.
5. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-5)** [*Normal Distribution*](http://www.encyclopedia.com/topic/Normal_Distribution.aspx#3), Gale Encyclopedia of Psychology
6. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-6)** [Casella & Berger (2001](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFCasellaBerger2001), p. 102)
7. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-7)** Lyon, A. (2014). [Why are Normal Distributions Normal?](https://aidanlyon.com/normal_distributions.pdf), The British Journal for the Philosophy of Science.
8. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-8)**
Nocedal, Jorge; Wright, Stephen J. (2006). *Numerical Optimization* (2nd ed.). Springer. p. 249. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0387-30303-1](https://en.wikipedia.org/wiki/Special:BookSources/978-0387-30303-1 "Special:BookSources/978-0387-30303-1").
9. ^ [***a***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-www.mathsisfun.com_9-0) [***b***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-www.mathsisfun.com_9-1)
["Normal Distribution"](https://www.mathsisfun.com/data/standard-normal-distribution.html). *www.mathsisfun.com*. Retrieved August 15, 2020.
10. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-10)**
["bell curve"](https://www.merriam-webster.com/dictionary/bell%20curve). *Merriam-Webster.com Dictionary*. Retrieved May 25, 2025.
11. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-11)** [Mood (1950](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFMood1950), [p. 112](https://archive.org/details/introductiontoth0000alex/page/112/mode/2up?q=%22standard+normal+distribution%22)) explicitly defines the *standard normal distribution*. In contrast, [Hoel (1947)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFHoel1947) explicitly defines the *standard normal curve* [(p. 33)](https://archive.org/details/in.ernet.dli.2015.263186/page/n41/mode/2up?q=%22standard+normal+curve%22) and introduces the term *standard normal distribution* [(p. 69)](https://archive.org/details/in.ernet.dli.2015.263186/page/n77/mode/2up?q=%22standard+normal+distribution%22).
12. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-12)** [Stigler (1982)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFStigler1982)
13. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-13)** [Halperin, Hartley & Hoel (1965](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFHalperinHartleyHoel1965), item 7)
14. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-14)** [McPherson (1990](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFMcPherson1990), p. 110)
15. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-15)** [Bernardo & Smith (2000](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFBernardoSmith2000), p. 121)
16. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-KunIlPark_16-0)**
Park, Kun Il (2018). *Fundamentals of Probability and Stochastic Processes with Applications to Communications*. Springer. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-3-319-68074-3](https://en.wikipedia.org/wiki/Special:BookSources/978-3-319-68074-3 "Special:BookSources/978-3-319-68074-3").
17. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-17)**
Scott, Clayton; Nowak, Robert (August 7, 2003). ["The Q-function"](http://cnx.org/content/m11537/1.2/). *Connexions*.
18. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-18)**
Barak, Ohad (April 6, 2006). ["Q Function and Error Function"](https://web.archive.org/web/20090325160012/http://www.eng.tau.ac.il/~jo/academic/Q.pdf) (PDF). Tel Aviv University. Archived from [the original](http://www.eng.tau.ac.il/~jo/academic/Q.pdf) (PDF) on March 25, 2009.
19. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-19)**
[Weisstein, Eric W.](https://en.wikipedia.org/wiki/Eric_W._Weisstein "Eric W. Weisstein") ["Normal Distribution Function"](https://mathworld.wolfram.com/NormalDistributionFunction.html). *[MathWorld](https://en.wikipedia.org/wiki/MathWorld "MathWorld")*.
20. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-20)**
[Abramowitz, Milton](https://en.wikipedia.org/wiki/Milton_Abramowitz "Milton Abramowitz"); [Stegun, Irene Ann](https://en.wikipedia.org/wiki/Irene_Stegun "Irene Stegun"), eds. (1983) \[June 1964\]. ["Chapter 26, eqn 26.2.12"](http://www.math.ubc.ca/~cbm/aands/page_932.htm). [*Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables*](https://en.wikipedia.org/wiki/Abramowitz_and_Stegun "Abramowitz and Stegun"). Applied Mathematics Series. Vol. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. p. 932. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-486-61272-0](https://en.wikipedia.org/wiki/Special:BookSources/978-0-486-61272-0 "Special:BookSources/978-0-486-61272-0"). [LCCN](https://en.wikipedia.org/wiki/LCCN_\(identifier\) "LCCN (identifier)") [64-60036](https://lccn.loc.gov/64-60036). [MR](https://en.wikipedia.org/wiki/MR_\(identifier\) "MR (identifier)") [0167642](https://mathscinet.ams.org/mathscinet-getitem?mr=0167642). [LCCN](https://en.wikipedia.org/wiki/LCCN_\(identifier\) "LCCN (identifier)") [65-12253](https://www.loc.gov/item/65012253).
21. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-duff_21-0)**
Duff, Michael (2003). "Normal Distribution Algorithms". *The Mathematical Gazette*. **87** (509): 331–336. [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [3621062](https://www.jstor.org/stable/3621062).
22. ^ [***a***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-kendall_22-0) [***b***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-kendall_22-1)
Stuart, Alan; Ord, J. Keith (1987). ["The normal d.f."](https://archive.org/details/kendallsadvanced0001kend/page/183/mode/1up). *Kendall's Advanced Theory of Statistics*. Vol. 1: Distribution Theory. originally by [Maurice Kendall](https://en.wikipedia.org/wiki/Maurice_Kendall "Maurice Kendall") (5th ed.). Charles Griffin & Co. § 5.37, pp. 183–185. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [0-85264-285-7](https://en.wikipedia.org/wiki/Special:BookSources/0-85264-285-7 "Special:BookSources/0-85264-285-7").
23. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-23)**
Vaart, A. W. van der (October 13, 1998). [*Asymptotic Statistics*](https://dx.doi.org/10.1017/cbo9780511802256). Cambridge University Press. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1017/cbo9780511802256](https://doi.org/10.1017%2Fcbo9780511802256). [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-511-80225-6](https://en.wikipedia.org/wiki/Special:BookSources/978-0-511-80225-6 "Special:BookSources/978-0-511-80225-6").
24. ^ [***a***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-FOOTNOTECoverThomas2006254_24-0) [***b***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-FOOTNOTECoverThomas2006254_24-1) [Cover & Thomas (2006)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFCoverThomas2006), p. 254.
25. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-25)**
Park, Sung Y.; Bera, Anil K. (2009). ["Maximum Entropy Autoregressive Conditional Heteroskedasticity Model"](https://web.archive.org/web/20160307144515/http://wise.xmu.edu.cn/uploadfiles/paper-masterdownload/2009519932327055475115776.pdf) (PDF). *Journal of Econometrics*. **150** (2): 219–230. [Bibcode](https://en.wikipedia.org/wiki/Bibcode_\(identifier\) "Bibcode (identifier)"):[2009JEcon.150..219P](https://ui.adsabs.harvard.edu/abs/2009JEcon.150..219P). [CiteSeerX](https://en.wikipedia.org/wiki/CiteSeerX_\(identifier\) "CiteSeerX (identifier)") [10.1.1.511.9750](https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.511.9750). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1016/j.jeconom.2008.12.014](https://doi.org/10.1016%2Fj.jeconom.2008.12.014). Archived from [the original](http://www.wise.xmu.edu.cn/Master/Download/..%5C..%5CUploadFiles%5Cpaper-masterdownload%5C2009519932327055475115776.pdf) (PDF) on March 7, 2016. Retrieved June 2, 2011.
26. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Geary_RC_26-0)** Geary, R. C. (1936). "The distribution of 'Student's' ratio for non-normal samples". *Supplement to the Journal of the Royal Statistical Society*. **3** (2): 178–184.
27. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-27)**
[Lukacs, Eugene](https://en.wikipedia.org/wiki/Eugene_Lukacs "Eugene Lukacs") (March 1942). ["A Characterization of the Normal Distribution"](https://archive.org/details/dli.ernet.4125/page/91). *[Annals of Mathematical Statistics](https://en.wikipedia.org/wiki/Annals_of_Mathematical_Statistics "Annals of Mathematical Statistics")*. **13** (1): 91–93. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1214/AOMS/1177731647](https://doi.org/10.1214%2FAOMS%2F1177731647). [ISSN](https://en.wikipedia.org/wiki/ISSN_\(identifier\) "ISSN (identifier)") [0003-4851](https://search.worldcat.org/issn/0003-4851). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2236166](https://www.jstor.org/stable/2236166). [MR](https://en.wikipedia.org/wiki/MR_\(identifier\) "MR (identifier)") [0006626](https://mathscinet.ams.org/mathscinet-getitem?mr=0006626). [Zbl](https://en.wikipedia.org/wiki/Zbl_\(identifier\) "Zbl (identifier)") [0060.28509](https://zbmath.org/?format=complete&q=an:0060.28509). [Wikidata](https://en.wikipedia.org/wiki/WDQ_\(identifier\) "WDQ (identifier)") [Q55897617](https://www.wikidata.org/wiki/Q55897617 "d:Q55897617").
28. ^ [***a***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Patel_28-0) [***b***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Patel_28-1) [***c***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Patel_28-2) [Patel & Read (1996](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFPatelRead1996), \[2.1.4\])
29. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-29)** [Fan (1991](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFFan1991), p. 1258)
30. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-30)** [Patel & Read (1996](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFPatelRead1996), \[2.1.8\])
31. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-31)**
Papoulis, Athanasios. *Probability, Random Variables and Stochastic Processes* (4th ed.). p. 148.
32. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-32)**
Winkelbauer, Andreas (2012). "Moments and Absolute Moments of the Normal Distribution". [arXiv](https://en.wikipedia.org/wiki/ArXiv_\(identifier\) "ArXiv (identifier)"):[1209\.4340](https://arxiv.org/abs/1209.4340) \[[math.ST](https://arxiv.org/archive/math.ST)\].
33. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-33)** [Bryc (1995](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFBryc1995), p. 23)
34. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-34)** [Bryc (1995](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFBryc1995), p. 24)
35. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-35)**
Williams, David (2001). [*Weighing the odds: a course in probability and statistics*](https://archive.org/details/weighingoddscour00will) (Reprinted ed.). Cambridge \[u.a.\]: Cambridge Univ. Press. pp. [197](https://archive.org/details/weighingoddscour00will/page/n219)–199. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-521-00618-7](https://en.wikipedia.org/wiki/Special:BookSources/978-0-521-00618-7 "Special:BookSources/978-0-521-00618-7").
36. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-36)**
José M. Bernardo; Adrian F. M. Smith (2000). [*Bayesian theory*](https://archive.org/details/bayesiantheory00bern_963) (Reprint ed.). Chichester \[u.a.\]: Wiley. pp. [209](https://archive.org/details/bayesiantheory00bern_963/page/n224), 366. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-471-49464-5](https://en.wikipedia.org/wiki/Special:BookSources/978-0-471-49464-5 "Special:BookSources/978-0-471-49464-5").
37. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-37)**
O'Hagan, A. (1994). *Kendall's Advanced Theory of Statistics, Vol 2B, Bayesian Inference*. Edward Arnold. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [0-340-52922-9](https://en.wikipedia.org/wiki/Special:BookSources/0-340-52922-9 "Special:BookSources/0-340-52922-9") (Section 5.40)
38. ^ [***a***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Bryc_1995_35_38-0) [***b***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Bryc_1995_35_38-1) [Bryc (1995](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFBryc1995), p. 35)
39. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-39)** [UIUC, Lecture 21. *The Multivariate Normal Distribution*](http://www.math.uiuc.edu/~r-ash/Stat/StatLec21-25.pdf), 21.6:"Individually Gaussian Versus Jointly Gaussian".
40. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-40)** Edward L. Melnick and Aaron Tenenbein, "Misspecifications of the Normal Distribution", *[The American Statistician](https://en.wikipedia.org/wiki/The_American_Statistician "The American Statistician")*, volume 36, number 4, November 1982, pages 372–373
41. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-41)**
["Kullback Leibler (KL) Distance of Two Normal (Gaussian) Probability Distributions"](http://www.allisons.org/ll/MML/KL/Normal/). *Allisons.org*. December 5, 2007. Retrieved March 3, 2017.
42. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-42)**
Jordan, Michael I. (February 8, 2010). ["Stat260: Bayesian Modeling and Inference: The Conjugate Prior for the Normal Distribution"](http://www.cs.berkeley.edu/~jordan/courses/260-spring10/lectures/lecture5.pdf) (PDF).
43. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-43)** [Amari & Nagaoka (2000)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFAmariNagaoka2000)
44. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-44)**
["Expectation of the maximum of gaussian random variables"](https://math.stackexchange.com/a/89147). *Mathematics Stack Exchange*. Retrieved April 7, 2024.
45. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-45)**
["Normal Approximation to Poisson Distribution"](http://www.stat.ucla.edu/~dinov/courses_students.dir/Applets.dir/NormalApprox2PoissonApplet.html). *Stat.ucla.edu*. Retrieved March 3, 2017.
46. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-46)** [Bryc (1995](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFBryc1995), p. 27)
47. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-47)**
Weisstein, Eric W. ["Normal Product Distribution"](http://mathworld.wolfram.com/NormalProductDistribution.html). *MathWorld*. wolfram.com.
48. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-48)**
Lukacs, Eugene (1942). ["A Characterization of the Normal Distribution"](https://doi.org/10.1214%2Faoms%2F1177731647). *[The Annals of Mathematical Statistics](https://en.wikipedia.org/wiki/The_Annals_of_Mathematical_Statistics "The Annals of Mathematical Statistics")*. **13** (1): 91–3. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1214/aoms/1177731647](https://doi.org/10.1214%2Faoms%2F1177731647). [ISSN](https://en.wikipedia.org/wiki/ISSN_\(identifier\) "ISSN (identifier)") [0003-4851](https://search.worldcat.org/issn/0003-4851). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2236166](https://www.jstor.org/stable/2236166).
49. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-49)**
Basu, D.; Laha, R. G. (1954). "On Some Characterizations of the Normal Distribution". *[Sankhyā](https://en.wikipedia.org/wiki/Sankhy%C4%81_\(journal\) "Sankhyā (journal)")*. **13** (4): 359–62. [ISSN](https://en.wikipedia.org/wiki/ISSN_\(identifier\) "ISSN (identifier)") [0036-4452](https://search.worldcat.org/issn/0036-4452). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [25048183](https://www.jstor.org/stable/25048183).
50. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-50)**
Lehmann, E. L. (1997). *Testing Statistical Hypotheses* (2nd ed.). Springer. p. 199. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-387-94919-2](https://en.wikipedia.org/wiki/Special:BookSources/978-0-387-94919-2 "Special:BookSources/978-0-387-94919-2").
51. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-51)** [Patel & Read (1996](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFPatelRead1996), \[2.3.6\])
52. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-52)** [Galambos & Simonelli (2004](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFGalambosSimonelli2004), Theorem 3.5)
53. ^ [***a***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Lukacs_53-0) [***b***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Lukacs_53-1) [Lukacs & King (1954)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFLukacsKing1954)
54. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-54)**
Quine, M. P. (1993). ["On three characterisations of the normal distribution"](http://www.math.uni.wroc.pl/~pms/publicationsArticle.php?nr=14.2&nrA=8&ppB=257&ppE=263). *Probability and Mathematical Statistics*. **14** (2): 257–263.
55. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-John-1982_55-0)**
John, S. (1982). "The three parameter two-piece normal family of distributions and its fitting". *Communications in Statistics – Theory and Methods*. **11** (8): 879–885. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1080/03610928208828279](https://doi.org/10.1080%2F03610928208828279).
56. ^ [***a***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Krishnamoorthy_56-0) [***b***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Krishnamoorthy_56-1) [Krishnamoorthy (2006](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFKrishnamoorthy2006), p. 127)
57. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-57)** [Krishnamoorthy (2006](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFKrishnamoorthy2006), p. 130)
58. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-58)** [Krishnamoorthy (2006](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFKrishnamoorthy2006), p. 133)
59. ^ [***a***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-FOOTNOTEMaxwell186023_59-0) [***b***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-FOOTNOTEMaxwell186023_59-1) [Maxwell (1860)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFMaxwell1860), p. 23.
60. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-FOOTNOTEBryc19951_60-0)** [Bryc (1995)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFBryc1995), p. 1.
61. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-61)**
Larkoski, Andrew J. (2023). [*Quantum Mechanics: A Mathematical Introduction*](https://books.google.com/books?id=iKmnEAAAQBAJ&dq=normal%20distribution&pg=PA120). United Kingdom: Cambridge University Press. pp. 120–121. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-1-009-12222-1](https://en.wikipedia.org/wiki/Special:BookSources/978-1-009-12222-1 "Special:BookSources/978-1-009-12222-1"). Retrieved May 30, 2025.
62. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-62)** [Huxley (1932)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFHuxley1932)
63. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-63)**
Jaynes, Edwin T. (2003). [*Probability Theory: The Logic of Science*](https://books.google.com/books?id=tTN4HuUNXjgC&pg=PA592). Cambridge University Press. pp. 592–593. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [9780521592710](https://en.wikipedia.org/wiki/Special:BookSources/9780521592710 "Special:BookSources/9780521592710").
64. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-64)**
Oosterbaan, Roland J. (1994). ["Chapter 6: Frequency and Regression Analysis of Hydrologic Data"](http://www.waterlog.info/pdf/freqtxt.pdf) (PDF). In Ritzema, Henk P. (ed.). *Drainage Principles and Applications, Publication 16* (second revised ed.). Wageningen, The Netherlands: International Institute for Land Reclamation and Improvement (ILRI). pp. 175–224. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-90-70754-33-4](https://en.wikipedia.org/wiki/Special:BookSources/978-90-70754-33-4 "Special:BookSources/978-90-70754-33-4").
65. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-65)** Ioannidis, John P. A. (2005). "Why Most Published Research Findings Are False".
66. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-66)**
Wichura, Michael J. (1988). "Algorithm AS241: The Percentage Points of the Normal Distribution". *Applied Statistics*. **37** (3): 477–84. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.2307/2347330](https://doi.org/10.2307%2F2347330). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2347330](https://www.jstor.org/stable/2347330).
67. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-67)** [Johnson, Kotz & Balakrishnan (1995](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFJohnsonKotzBalakrishnan1995), Equation (26.48))
68. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-68)** [Kinderman & Monahan (1977)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFKindermanMonahan1977)
69. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-69)** [Leva (1992)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFLeva1992)
70. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-70)** [Marsaglia & Tsang (2000)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFMarsagliaTsang2000)
71. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-71)** [Karney (2016)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFKarney2016)
72. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-72)** [Du, Fan & Wei (2022)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFDuFanWei2022)
73. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-73)** [Monahan (1985](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFMonahan1985), section 2)
74. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-74)** [Wallace (1996)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFWallace1996)
75. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-76)** [Johnson, Kotz & Balakrishnan (1994](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFJohnsonKotzBalakrishnan1994), p. 85)
76. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-77)** [Le Cam & Lo Yang (2000](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFLe_CamLo_Yang2000), p. 74)
77. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-79)** De Moivre, Abraham (1733), Corollary I â see [Walker (1985](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFWalker1985), p. 77)
78. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-80)** [Stigler (1986](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFStigler1986), [p. 76](https://archive.org/details/historyofstatist00stig/page/76/mode/2up?q=%22de+moivre%22))
79. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-82)** [Gauss (1809](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFGauss1809), section 177)
80. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-83)** [Gauss (1809](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFGauss1809), section 179)
81. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-85)** [Laplace (1774](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFLaplace1774), Problem III)
82. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-86)** [Pearson (1905](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFPearson1905), p. 189)
83. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-87)** [Gauss (1809](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFGauss1809), section 177)
84. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-88)** [Stigler (1986](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFStigler1986), p. 144)
85. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-89)** [Stigler (1978](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFStigler1978), p. 243)
86. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-90)** [Stigler (1978](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFStigler1978), p. 244)
87. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-91)** Jaynes, Edwin T.; *Probability Theory: The Logic of Science*, [Ch. 7](http://www-biba.inrialpes.fr/Jaynes/cc07s.pdf).
88. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-93)** Peirce, Charles S. (c. 1909 MS), *[Collected Papers](https://en.wikipedia.org/wiki/Charles_Sanders_Peirce_bibliography#CP "Charles Sanders Peirce bibliography")* v. 6, paragraph 327.
89. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-94)** [Kruskal & Stigler (1997)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFKruskalStigler1997).
90. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-95)**
["Earliest Uses... (Entry Standard Normal Curve)"](http://jeff560.tripod.com/s.html).
91. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-96)** [Hoel (1947)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFHoel1947) introduces the terms *standard normal curve* [(p. 33)](https://archive.org/details/in.ernet.dli.2015.263186/page/n41/mode/2up?q=%22standard+normal+curve%22) and *standard normal distribution* [(p. 69)](https://archive.org/details/in.ernet.dli.2015.263186/page/n77/mode/2up?q=%22standard+normal+distribution%22).
92. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-97)** [Mood (1950)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFMood1950) explicitly defines the *standard normal distribution* [(p. 112)](https://archive.org/details/introductiontoth0000alex/page/112/mode/2up?q=%22standard+normal+distribution%22).
93. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Sun-2021_98-0)**
Sun, Jingchao; Kong, Maiying; Pal, Subhadip (June 22, 2021). ["The Modified-Half-Normal distribution: Properties and an efficient sampling scheme"](https://www.tandfonline.com/doi/abs/10.1080/03610926.2021.1934700?journalCode=lsta20). *Communications in Statistics – Theory and Methods*. **52** (5): 1591–1613. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1080/03610926.2021.1934700](https://doi.org/10.1080%2F03610926.2021.1934700). [ISSN](https://en.wikipedia.org/wiki/ISSN_\(identifier\) "ISSN (identifier)") [0361-0926](https://search.worldcat.org/issn/0361-0926). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [237919587](https://api.semanticscholar.org/CorpusID:237919587).
### Sources
- Aldrich, John; Miller, Jeff. ["Earliest Uses of Symbols in Probability and Statistics"](http://jeff560.tripod.com/stat.html).
- Aldrich, John; Miller, Jeff. ["Earliest Known Uses of Some of the Words of Mathematics"](http://jeff560.tripod.com/mathword.html).
In particular, the entries for ["bell-shaped and bell curve"](http://jeff560.tripod.com/b.html), ["normal (distribution)"](http://jeff560.tripod.com/n.html), ["Gaussian"](http://jeff560.tripod.com/g.html), and ["Error, law of error, theory of errors, etc."](http://jeff560.tripod.com/e.html).
- [Amari, Shun'ichi](https://en.wikipedia.org/wiki/Shun%27ichi_Amari "Shun'ichi Amari"); Nagaoka, Hiroshi (2000). *Methods of Information Geometry*. Oxford University Press. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-8218-0531-2](https://en.wikipedia.org/wiki/Special:BookSources/978-0-8218-0531-2 "Special:BookSources/978-0-8218-0531-2").
- [Bernardo, José M.](https://en.wikipedia.org/wiki/Jos%C3%A9-Miguel_Bernardo "José-Miguel Bernardo"); [Smith, Adrian F. M.](https://en.wikipedia.org/wiki/Adrian_Smith_\(statistician\) "Adrian Smith (statistician)") (2000). *Bayesian Theory*. Wiley. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-471-49464-5](https://en.wikipedia.org/wiki/Special:BookSources/978-0-471-49464-5 "Special:BookSources/978-0-471-49464-5").
- Bryc, Wlodzimierz (1995). [*The Normal Distribution: Characterizations with Applications*](https://books.google.com/books?id=tyXjBwAAQBAJ). Springer-Verlag. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-387-97990-8](https://en.wikipedia.org/wiki/Special:BookSources/978-0-387-97990-8 "Special:BookSources/978-0-387-97990-8").
- [Casella, George](https://en.wikipedia.org/wiki/George_Casella "George Casella"); [Berger, Roger L.](https://en.wikipedia.org/wiki/Roger_Lee_Berger "Roger Lee Berger") (2001). *Statistical Inference* (2nd ed.). Duxbury. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-534-24312-8](https://en.wikipedia.org/wiki/Special:BookSources/978-0-534-24312-8 "Special:BookSources/978-0-534-24312-8").
- Cody, William J. (1969). ["Rational Chebyshev Approximations for the Error Function"](https://en.wikipedia.org/wiki/Error_function#cite_note-5 "Error function"). *Mathematics of Computation*. **23** (107): 631–638. [Bibcode](https://en.wikipedia.org/wiki/Bibcode_\(identifier\) "Bibcode (identifier)"):[1969MaCom..23..631C](https://ui.adsabs.harvard.edu/abs/1969MaCom..23..631C). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1090/S0025-5718-1969-0247736-4](https://doi.org/10.1090%2FS0025-5718-1969-0247736-4).
- [Cover, Thomas M.](https://en.wikipedia.org/wiki/Thomas_M._Cover "Thomas M. Cover"); [Thomas, Joy A.](https://en.wikipedia.org/wiki/Joy_A._Thomas "Joy A. Thomas") (2006). [*Elements of Information Theory*](https://books.google.com/books?id=VWq5GG6ycxMC). John Wiley and Sons. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [9780471241959](https://en.wikipedia.org/wiki/Special:BookSources/9780471241959 "Special:BookSources/9780471241959").
- Dia, Yaya D. (2023). ["Approximate Incomplete Integrals, Application to Complementary Error Function"](https://ssrn.com/abstract=4487559). *SSRN*. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.2139/ssrn.4487559](https://doi.org/10.2139%2Fssrn.4487559). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [259689086](https://api.semanticscholar.org/CorpusID:259689086).
- [de Moivre, Abraham](https://en.wikipedia.org/wiki/Abraham_de_Moivre "Abraham de Moivre") (2000) \[First published 1738\]. [*The Doctrine of Chances*](https://en.wikipedia.org/wiki/The_Doctrine_of_Chances "The Doctrine of Chances"). American Mathematical Society. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-8218-2103-9](https://en.wikipedia.org/wiki/Special:BookSources/978-0-8218-2103-9 "Special:BookSources/978-0-8218-2103-9").
- Du, Y.; Fan, B.; Wei, B. (2022). "An improved exact sampling algorithm for the standard normal distribution". *Computational Statistics*. **37** (2): 721–737. [arXiv](https://en.wikipedia.org/wiki/ArXiv_\(identifier\) "ArXiv (identifier)"):[2008.03855](https://arxiv.org/abs/2008.03855). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1007/s00180-021-01136-w](https://doi.org/10.1007%2Fs00180-021-01136-w).
- Fan, Jianqing (1991). ["On the optimal rates of convergence for nonparametric deconvolution problems"](https://doi.org/10.1214%2Faos%2F1176348248). *The Annals of Statistics*. **19** (3): 1257–1272. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1214/aos/1176348248](https://doi.org/10.1214%2Faos%2F1176348248). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2241949](https://www.jstor.org/stable/2241949).
- [Galton, Francis](https://en.wikipedia.org/wiki/Francis_Galton "Francis Galton") (1889). [*Natural Inheritance*](http://galton.org/books/natural-inheritance/pdf/galton-nat-inh-1up-clean.pdf) (PDF). London, UK: Richard Clay and Sons.
- [Galambos, Janos](https://en.wikipedia.org/wiki/Janos_Galambos "Janos Galambos"); Simonelli, Italo (2004). [*Products of Random Variables: Applications to Problems of Physics and to Arithmetical Functions*](https://archive.org/details/productsofrandom00gala). Marcel Dekker, Inc. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-8247-5402-0](https://en.wikipedia.org/wiki/Special:BookSources/978-0-8247-5402-0 "Special:BookSources/978-0-8247-5402-0").
- [Gauss, Carolo Friderico](https://en.wikipedia.org/wiki/Carl_Friedrich_Gauss "Carl Friedrich Gauss") (1809). [*Theoria motvs corporvm coelestivm in sectionibvs conicis Solem ambientivm*](https://archive.org/details/theoriamotuscor00gausgoog) \[*Theory of the Motion of the Heavenly Bodies Moving about the Sun in Conic Sections*\] (in Latin). Hambvrgi, Svmtibvs F. Perthes et I. H. Besser. [English translation](https://books.google.com/books?id=1TIAAAAAQAAJ).
- [Gould, Stephen Jay](https://en.wikipedia.org/wiki/Stephen_Jay_Gould "Stephen Jay Gould") (1981). [*The Mismeasure of Man*](https://en.wikipedia.org/wiki/The_Mismeasure_of_Man "The Mismeasure of Man") (first ed.). W. W. Norton. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-393-01489-1](https://en.wikipedia.org/wiki/Special:BookSources/978-0-393-01489-1 "Special:BookSources/978-0-393-01489-1").
- Halperin, Max; Hartley, Herman O.; Hoel, Paul G. (1965). "Recommended Standards for Statistical Symbols and Notation. COPSS Committee on Symbols and Notation". *The American Statistician*. **19** (3): 12–14. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.2307/2681417](https://doi.org/10.2307%2F2681417). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2681417](https://www.jstor.org/stable/2681417).
- Hart, John F.; et al. (1968). *Computer Approximations*. New York, NY: John Wiley & Sons, Inc. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-88275-642-4](https://en.wikipedia.org/wiki/Special:BookSources/978-0-88275-642-4 "Special:BookSources/978-0-88275-642-4").
- ["Normal Distribution"](https://www.encyclopediaofmath.org/index.php?title=Normal_Distribution), *[Encyclopedia of Mathematics](https://en.wikipedia.org/wiki/Encyclopedia_of_Mathematics "Encyclopedia of Mathematics")*, [EMS Press](https://en.wikipedia.org/wiki/European_Mathematical_Society "European Mathematical Society"), 2001 \[1994\]
- [Herrnstein, Richard J.](https://en.wikipedia.org/wiki/Richard_J._Herrnstein "Richard J. Herrnstein"); [Murray, Charles](https://en.wikipedia.org/wiki/Charles_Murray_\(political_scientist\) "Charles Murray (political scientist)") (1994). [*The Bell Curve: Intelligence and Class Structure in American Life*](https://en.wikipedia.org/wiki/The_Bell_Curve "The Bell Curve"). [Free Press](https://en.wikipedia.org/wiki/Free_Press_\(publisher\) "Free Press (publisher)"). [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)")
[978-0-02-914673-6](https://en.wikipedia.org/wiki/Special:BookSources/978-0-02-914673-6 "Special:BookSources/978-0-02-914673-6")
.
- Hoel, Paul G. (1947). [*Introduction To Mathematical Statistics*](https://archive.org/details/in.ernet.dli.2015.263186/page/n1/mode/2up). New York: Wiley.
- [Huxley, Julian S.](https://en.wikipedia.org/wiki/Julian_S._Huxley "Julian S. Huxley") (1972) \[First published 1932\]. *Problems of Relative Growth*. London. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-486-61114-3](https://en.wikipedia.org/wiki/Special:BookSources/978-0-486-61114-3 "Special:BookSources/978-0-486-61114-3"). [OCLC](https://en.wikipedia.org/wiki/OCLC_\(identifier\) "OCLC (identifier)") [476909537](https://search.worldcat.org/oclc/476909537).
- [Johnson, Norman L.](https://en.wikipedia.org/wiki/Norman_Lloyd_Johnson "Norman Lloyd Johnson"); [Kotz, Samuel](https://en.wikipedia.org/wiki/Samuel_Kotz "Samuel Kotz"); Balakrishnan, Narayanaswamy (1994). *Continuous Univariate Distributions, Volume 1*. Wiley. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-471-58495-7](https://en.wikipedia.org/wiki/Special:BookSources/978-0-471-58495-7 "Special:BookSources/978-0-471-58495-7").
- Johnson, Norman L.; Kotz, Samuel; Balakrishnan, Narayanaswamy (1995). *Continuous Univariate Distributions, Volume 2*. Wiley. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-471-58494-0](https://en.wikipedia.org/wiki/Special:BookSources/978-0-471-58494-0 "Special:BookSources/978-0-471-58494-0").
- Karney, C. F. F. (2016). ["Sampling exactly from the normal distribution"](https://doi.org/10.1145%2F2710016). *ACM Transactions on Mathematical Software*. **42** (1): 3:1–14. [arXiv](https://en.wikipedia.org/wiki/ArXiv_\(identifier\) "ArXiv (identifier)"):[1303.6257](https://arxiv.org/abs/1303.6257). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1145/2710016](https://doi.org/10.1145%2F2710016). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [14252035](https://api.semanticscholar.org/CorpusID:14252035).
- Kinderman, Albert J.; Monahan, John F. (1977). ["Computer Generation of Random Variables Using the Ratio of Uniform Deviates"](https://doi.org/10.1145%2F355744.355750). *ACM Transactions on Mathematical Software*. **3** (3): 257–260. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1145/355744.355750](https://doi.org/10.1145%2F355744.355750). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [12884505](https://api.semanticscholar.org/CorpusID:12884505).
- Krishnamoorthy, Kalimuthu (2006). *Handbook of Statistical Distributions with Applications*. Chapman & Hall/CRC. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-1-58488-635-8](https://en.wikipedia.org/wiki/Special:BookSources/978-1-58488-635-8 "Special:BookSources/978-1-58488-635-8").
- [Kruskal, William H.](https://en.wikipedia.org/wiki/William_H._Kruskal "William H. Kruskal"); Stigler, Stephen M. (1997). Spencer, Bruce D. (ed.). *Normative Terminology: 'Normal' in Statistics and Elsewhere*. Statistics and Public Policy. Oxford University Press. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-19-852341-3](https://en.wikipedia.org/wiki/Special:BookSources/978-0-19-852341-3 "Special:BookSources/978-0-19-852341-3").
- [Laplace, Pierre-Simon de](https://en.wikipedia.org/wiki/Pierre-Simon_Laplace "Pierre-Simon Laplace") (1774). ["Mémoire sur la probabilité des causes par les événements"](http://gallica.bnf.fr/ark:/12148/bpt6k77596b/f32) \[Memoir on the probability of the causes of events\] (in French). *Mémoires de l'Académie Royale des Sciences de Paris (Savants étrangers), Tome 6*: 621–656. Translated by Stephen M. Stigler in *Statistical Science* **1** (3), 1986: [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2245476](https://www.jstor.org/stable/2245476).
- Laplace, Pierre-Simon (1812). [*Théorie analytique des probabilités*](https://archive.org/details/thorieanalytiqu00laplgoog) \[*[Analytical theory of probabilities](https://en.wikipedia.org/wiki/Analytical_theory_of_probabilities "Analytical theory of probabilities")*\]. Paris, Ve. Courcier.
- [Le Cam, Lucien](https://en.wikipedia.org/wiki/Lucien_Le_Cam "Lucien Le Cam"); [Lo Yang, Grace](https://en.wikipedia.org/wiki/Grace_Yang "Grace Yang") (2000). *Asymptotics in Statistics: Some Basic Concepts* (second ed.). Springer. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-387-95036-5](https://en.wikipedia.org/wiki/Special:BookSources/978-0-387-95036-5 "Special:BookSources/978-0-387-95036-5").
- Leva, Joseph L. (1992). ["A fast normal random number generator"](https://web.archive.org/web/20100716035328/http://saluc.engr.uconn.edu/refs/crypto/rng/leva92afast.pdf) (PDF). *ACM Transactions on Mathematical Software*. **18** (4): 449–453. [CiteSeerX](https://en.wikipedia.org/wiki/CiteSeerX_\(identifier\) "CiteSeerX (identifier)") [10.1.1.544.5806](https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.544.5806). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1145/138351.138364](https://doi.org/10.1145%2F138351.138364). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [15802663](https://api.semanticscholar.org/CorpusID:15802663). Archived from [the original](http://saluc.engr.uconn.edu/refs/crypto/rng/leva92afast.pdf) (PDF) on July 16, 2010.
- [Lexis, Wilhelm](https://en.wikipedia.org/wiki/Wilhelm_Lexis "Wilhelm Lexis") (1878). "Sur la durée normale de la vie humaine et sur la théorie de la stabilité des rapports statistiques" \[On the normal duration of human life and on the theory of the stability of statistical ratios\] (in French). *Annales de Démographie Internationale*. **II**. Paris: 447–462.
- Lukacs, Eugene; King, Edgar P. (1954). ["A Property of Normal Distribution"](https://doi.org/10.1214%2Faoms%2F1177728796). *The Annals of Mathematical Statistics*. **25** (2): 389–394. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1214/aoms/1177728796](https://doi.org/10.1214%2Faoms%2F1177728796). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2236741](https://www.jstor.org/stable/2236741).
- McPherson, Glen (1990). [*Statistics in Scientific Investigation: Its Basis, Application and Interpretation*](https://archive.org/details/statisticsinscie0000mcph). Springer-Verlag. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-387-97137-7](https://en.wikipedia.org/wiki/Special:BookSources/978-0-387-97137-7 "Special:BookSources/978-0-387-97137-7").
- [Marsaglia, George](https://en.wikipedia.org/wiki/George_Marsaglia "George Marsaglia"); Tsang, Wai Wan (2000). ["The Ziggurat Method for Generating Random Variables"](https://doi.org/10.18637%2Fjss.v005.i08). *Journal of Statistical Software*. **5** (8). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.18637/jss.v005.i08](https://doi.org/10.18637%2Fjss.v005.i08).
- Marsaglia, George (2004). ["Evaluating the Normal Distribution"](https://doi.org/10.18637%2Fjss.v011.i04). *Journal of Statistical Software*. **11** (4). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.18637/jss.v011.i04](https://doi.org/10.18637%2Fjss.v011.i04).
- [Maxwell, James Clerk](https://en.wikipedia.org/wiki/James_Clerk_Maxwell "James Clerk Maxwell") (1860). ["V. Illustrations of the dynamical theory of gases. – Part I: On the motions and collisions of perfectly elastic spheres"](https://books.google.com/books?id=-YU7AQAAMAAJ&pg=PA19). *Philosophical Magazine*. Series 4. **19** (124): 19–32. [Bibcode](https://en.wikipedia.org/wiki/Bibcode_\(identifier\) "Bibcode (identifier)"):[1860LEDPM..19...19M](https://ui.adsabs.harvard.edu/abs/1860LEDPM..19...19M). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1080/14786446008642818](https://doi.org/10.1080%2F14786446008642818).
- Monahan, J. F. (1985). ["Accuracy in random number generation"](https://doi.org/10.1090%2FS0025-5718-1985-0804945-X). *Mathematics of Computation*. **45** (172): 559–568. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1090/S0025-5718-1985-0804945-X](https://doi.org/10.1090%2FS0025-5718-1985-0804945-X).
- [Mood, Alexander McFarlane](https://en.wikipedia.org/wiki/Alexander_M._Mood "Alexander M. Mood") (1950). [*Introduction to the Theory of Statistics*](https://archive.org/details/introductiontoth0000alex/page/n5/mode/2up). New York: McGraw-Hill.
- Patel, Jagdish K.; Read, Campbell B. (1996). *Handbook of the Normal Distribution* (2nd ed.). CRC Press. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-8247-9342-5](https://en.wikipedia.org/wiki/Special:BookSources/978-0-8247-9342-5 "Special:BookSources/978-0-8247-9342-5").
- [Pearson, Karl](https://en.wikipedia.org/wiki/Karl_Pearson "Karl Pearson") (1901). ["On Lines and Planes of Closest Fit to Systems of Points in Space"](http://stat.smmu.edu.cn/history/pearson1901.pdf) (PDF). *[Philosophical Magazine](https://en.wikipedia.org/wiki/Philosophical_Magazine "Philosophical Magazine")*. 6. **2** (11): 559–572. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1080/14786440109462720](https://doi.org/10.1080%2F14786440109462720). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [125037489](https://api.semanticscholar.org/CorpusID:125037489).
- [Pearson, Karl](https://en.wikipedia.org/wiki/Karl_Pearson "Karl Pearson") (1905). ["'Das Fehlergesetz und seine Verallgemeinerungen durch Fechner und Pearson'. A rejoinder"](https://zenodo.org/record/1449456). *Biometrika*. **4** (1): 169–212. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.2307/2331536](https://doi.org/10.2307%2F2331536). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2331536](https://www.jstor.org/stable/2331536).
- Pearson, Karl (1920). ["Notes on the History of Correlation"](https://zenodo.org/record/1431597). *Biometrika*. **13** (1): 25–45. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1093/biomet/13.1.25](https://doi.org/10.1093%2Fbiomet%2F13.1.25). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2331722](https://www.jstor.org/stable/2331722).
- Rohrbasser, Jean-Marc; Véron, Jacques (2003). ["Wilhelm Lexis: The Normal Length of Life as an Expression of the "Nature of Things""](http://www.persee.fr/web/revues/home/prescript/article/pop_1634-2941_2003_num_58_3_18444). *Population*. **58** (3): 303–322. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.3917/pope.303.0303](https://doi.org/10.3917%2Fpope.303.0303).
- Shore, H (1982). "Simple Approximations for the Inverse Cumulative Function, the Density Function and the Loss Integral of the Normal Distribution". *Journal of the Royal Statistical Society. Series C (Applied Statistics)*. **31** (2): 108–114. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.2307/2347972](https://doi.org/10.2307%2F2347972). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2347972](https://www.jstor.org/stable/2347972).
- Shore, H (2005). "Accurate RMM-Based Approximations for the CDF of the Normal Distribution". *Communications in Statistics – Theory and Methods*. **34** (3): 507–513. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1081/sta-200052102](https://doi.org/10.1081%2Fsta-200052102). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [122148043](https://api.semanticscholar.org/CorpusID:122148043).
- Shore, H (2011). "Response Modeling Methodology". *WIREs Comput Stat*. **3** (4): 357–372. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1002/wics.151](https://doi.org/10.1002%2Fwics.151). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [62021374](https://api.semanticscholar.org/CorpusID:62021374).
- Shore, H (2012). "Estimating Response Modeling Methodology Models". *WIREs Comput Stat*. **4** (3): 323–333. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1002/wics.1199](https://doi.org/10.1002%2Fwics.1199). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [122366147](https://api.semanticscholar.org/CorpusID:122366147).
- [Stigler, Stephen M.](https://en.wikipedia.org/wiki/Stephen_Stigler "Stephen Stigler") (1978). ["Mathematical Statistics in the Early States"](https://doi.org/10.1214%2Faos%2F1176344123). *The Annals of Statistics*. **6** (2): 239–265. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1214/aos/1176344123](https://doi.org/10.1214%2Faos%2F1176344123). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2958876](https://www.jstor.org/stable/2958876).
- Stigler, Stephen M. (1982). "A Modest Proposal: A New Standard for the Normal". *The American Statistician*. **36** (2): 137–138. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.2307/2684031](https://doi.org/10.2307%2F2684031). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2684031](https://www.jstor.org/stable/2684031).
- Stigler, Stephen M. (1986). [*The History of Statistics: The Measurement of Uncertainty before 1900*](https://archive.org/details/historyofstatist00stig). Harvard University Press. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-674-40340-6](https://en.wikipedia.org/wiki/Special:BookSources/978-0-674-40340-6 "Special:BookSources/978-0-674-40340-6").
- Stigler, Stephen M. (1999). *Statistics on the Table*. Harvard University Press. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-674-83601-3](https://en.wikipedia.org/wiki/Special:BookSources/978-0-674-83601-3 "Special:BookSources/978-0-674-83601-3").
- Walker, Helen M. (1985). ["De Moivre on the Law of Normal Probability"](http://www.york.ac.uk/depts/maths/histstat/demoivre.pdf) (PDF). In Smith, David Eugene (ed.). *A Source Book in Mathematics*. Dover. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-486-64690-9](https://en.wikipedia.org/wiki/Special:BookSources/978-0-486-64690-9 "Special:BookSources/978-0-486-64690-9").
- [Wallace, C. S.](https://en.wikipedia.org/wiki/Chris_Wallace_\(computer_scientist\) "Chris Wallace (computer scientist)") (1996). ["Fast pseudo-random generators for normal and exponential variates"](https://doi.org/10.1145%2F225545.225554). *ACM Transactions on Mathematical Software*. **22** (1): 119–127. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1145/225545.225554](https://doi.org/10.1145%2F225545.225554). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [18514848](https://api.semanticscholar.org/CorpusID:18514848).
- [Weisstein, Eric W.](https://en.wikipedia.org/wiki/Eric_W._Weisstein "Eric W. Weisstein") ["Normal Distribution"](http://mathworld.wolfram.com/NormalDistribution.html). [MathWorld](https://en.wikipedia.org/wiki/MathWorld "MathWorld").
- West, Graeme (2009). ["Better Approximations to Cumulative Normal Functions"](https://web.archive.org/web/20120229202051/https://wilmott.com/pdfs/090721_west.pdf) (PDF). *Wilmott Magazine*: 70–76. Archived from [the original](https://wilmott.com/pdfs/090721_west.pdf) (PDF) on February 29, 2012.
- Zelen, Marvin; Severo, Norman C. (1972) \[First published 1964\]. [*Probability Functions (chapter 26)*](http://www.math.sfu.ca/~cbm/aands/page_931.htm). *[Handbook of mathematical functions with formulas, graphs, and mathematical tables](https://en.wikipedia.org/wiki/Abramowitz_and_Stegun "Abramowitz and Stegun")*, by [Abramowitz, M.](https://en.wikipedia.org/wiki/Milton_Abramowitz "Milton Abramowitz"); and [Stegun, I. A.](https://en.wikipedia.org/wiki/Irene_A._Stegun "Irene A. Stegun"): National Bureau of Standards. New York, NY: Dover. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-486-61272-0](https://en.wikipedia.org/wiki/Special:BookSources/978-0-486-61272-0 "Special:BookSources/978-0-486-61272-0").
## External links
Wikimedia Commons has media related to [Normal distribution](https://commons.wikimedia.org/wiki/Category:Normal_distribution "commons:Category:Normal distribution").
- ["Normal distribution"](https://www.encyclopediaofmath.org/index.php?title=Normal_distribution), *[Encyclopedia of Mathematics](https://en.wikipedia.org/wiki/Encyclopedia_of_Mathematics "Encyclopedia of Mathematics")*, [EMS Press](https://en.wikipedia.org/wiki/European_Mathematical_Society "European Mathematical Society"), 2001 \[1994\]
- [Normal distribution calculator](https://www.hackmath.net/en/calculator/normal-distribution)
Readable Markdown:

Normal distribution

*Infobox figures: the probability density function (the red curve is the [standard normal distribution](https://en.wikipedia.org/wiki/Normal_distribution#Standard_normal_distribution)) and the cumulative distribution function.*
In [probability theory](https://en.wikipedia.org/wiki/Probability_theory "Probability theory") and [statistics](https://en.wikipedia.org/wiki/Statistics "Statistics"), a **normal distribution** or **Gaussian distribution** is a type of [continuous probability distribution](https://en.wikipedia.org/wiki/Continuous_probability_distribution "Continuous probability distribution") for a [real-valued](https://en.wikipedia.org/wiki/Real_number "Real number") [random variable](https://en.wikipedia.org/wiki/Random_variable "Random variable"). The general form of its [probability density function](https://en.wikipedia.org/wiki/Probability_density_function "Probability density function") is[\[2\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-The_Joy_of_Finite_Mathematics-2)[\[3\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Mathematics_for_Physical_Science_and_Engineering-3)[\[4\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-4) $$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}.$$ The parameter $\mu$ is the [mean](https://en.wikipedia.org/wiki/Mean#Mean_of_a_probability_distribution "Mean") or [expectation](https://en.wikipedia.org/wiki/Expected_value "Expected value") of the distribution (and also its [median](https://en.wikipedia.org/wiki/Median "Median") and [mode](https://en.wikipedia.org/wiki/Mode_\(statistics\) "Mode (statistics)")), while the parameter $\sigma^2$ is the [variance](https://en.wikipedia.org/wiki/Variance "Variance"). The [standard deviation](https://en.wikipedia.org/wiki/Standard_deviation "Standard deviation") of the distribution is the positive value $\sigma$ (sigma). A random variable with a Gaussian distribution is said to be **normally distributed** and is called a **normal deviate**.
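This formula translates directly into code. The following is a minimal Python sketch (the helper name `normal_pdf` is illustrative, not from the article) that evaluates the density and cross-checks it against the standard library's `statistics.NormalDist`:

```python
import math
from statistics import NormalDist

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density f(x) = exp(-(x - mu)^2 / (2 sigma^2)) / (sigma * sqrt(2 pi))."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

# Cross-check against the standard library's NormalDist at a few points.
ref = NormalDist(mu=1.0, sigma=2.0)
for x in (-1.0, 0.0, 2.5):
    assert math.isclose(normal_pdf(x, mu=1.0, sigma=2.0), ref.pdf(x))
```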
Normal distributions are important in [statistics](https://en.wikipedia.org/wiki/Statistics "Statistics") and are often used in the [natural](https://en.wikipedia.org/wiki/Natural_science "Natural science") and [social sciences](https://en.wikipedia.org/wiki/Social_science "Social science") to represent real-valued [random variables](https://en.wikipedia.org/wiki/Random_variable "Random variable") whose distributions are not known.[\[5\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-5)[\[6\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-6) Their importance is partly due to the [central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem "Central limit theorem"). It states that the average of many [statistically independent](https://en.wikipedia.org/wiki/Statistically_independent "Statistically independent") samples (observations) of a random variable with finite mean and variance is itself a random variable whose distribution [converges](https://en.wikipedia.org/wiki/Convergence_in_distribution "Convergence in distribution") to a normal distribution as the number of samples increases. Therefore, physical quantities that are expected to be the sum of many independent processes, such as [measurement errors](https://en.wikipedia.org/wiki/Measurement_error "Measurement error"), often have distributions that are nearly normal.[\[7\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-7)
Moreover, Gaussian distributions have some unique properties that are valuable in analytic studies. For instance, any [linear combination](https://en.wikipedia.org/wiki/Linear_combination "Linear combination") of a fixed collection of independent normal deviates is a normal deviate. Many results and methods, such as [propagation of uncertainty](https://en.wikipedia.org/wiki/Propagation_of_uncertainty "Propagation of uncertainty") and [least squares](https://en.wikipedia.org/wiki/Least_squares "Least squares")[\[8\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-8) parameter fitting, can be derived analytically in explicit form when the relevant variables are normally distributed.
A normal distribution is sometimes informally called a **bell curve**.[\[9\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-www.mathsisfun.com-9)[\[10\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-10) However, many other distributions are [bell-shaped](https://en.wikipedia.org/wiki/Bell-shaped_function "Bell-shaped function") (such as the [Cauchy](https://en.wikipedia.org/wiki/Cauchy_distribution "Cauchy distribution"), [Student's t](https://en.wikipedia.org/wiki/Student%27s_t-distribution "Student's t-distribution"), and [logistic](https://en.wikipedia.org/wiki/Logistic_distribution "Logistic distribution") distributions). (For other names, see *[Naming](https://en.wikipedia.org/wiki/Normal_distribution#Naming)*.)
The [univariate probability distribution](https://en.wikipedia.org/wiki/Univariate_distribution "Univariate distribution") is generalized for [vectors](https://en.wikipedia.org/wiki/Vector_\(mathematics_and_physics\) "Vector (mathematics and physics)") in the [multivariate normal distribution](https://en.wikipedia.org/wiki/Multivariate_normal_distribution "Multivariate normal distribution") and for matrices in the [matrix normal distribution](https://en.wikipedia.org/wiki/Matrix_normal_distribution "Matrix normal distribution").
### Standard normal distribution
The simplest case of a normal distribution is known as the **standard normal distribution** or **unit normal distribution**. This is a special case when $\mu = 0$ and $\sigma^2 = 1$, and it is described by this [probability density function](https://en.wikipedia.org/wiki/Probability_density_function "Probability density function") (or density):[\[11\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-11) $$\varphi(z) = \frac{e^{-z^2/2}}{\sqrt{2\pi}}.$$ The variable $z$ has a mean of 0 and a variance and standard deviation of 1. The density $\varphi(z)$ has its peak value $1/\sqrt{2\pi}$ at $z = 0$ and [inflection points](https://en.wikipedia.org/wiki/Inflection_point "Inflection point") at $z = +1$ and $z = -1$.
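The stated peak value and inflection points can be verified numerically; a short Python sketch (helper names are illustrative) using a central finite difference for the second derivative:

```python
import math

def phi(z):
    """Standard normal density (mu = 0, sigma = 1)."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

# Peak value at z = 0 is 1 / sqrt(2 pi), approximately 0.3989.
print(phi(0.0), 1.0 / math.sqrt(2.0 * math.pi))

# The second derivative (estimated by central differences) changes sign
# across z = 1, confirming an inflection point there; by symmetry, z = -1 too.
h = 1e-4
def second_diff(z):
    return (phi(z - h) - 2.0 * phi(z) + phi(z + h)) / (h * h)

print(second_diff(0.9), second_diff(1.1))  # negative, then positive
```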
Although the density above is most commonly known as the *standard normal*, a few authors have used that term to describe other versions of the normal distribution. [Carl Friedrich Gauss](https://en.wikipedia.org/wiki/Carl_Friedrich_Gauss "Carl Friedrich Gauss"), for example, once defined the standard normal as $\varphi(z) = e^{-z^2}/\sqrt{\pi}$, which has a variance of $1/2$, and [Stephen Stigler](https://en.wikipedia.org/wiki/Stephen_Stigler "Stephen Stigler") once defined the standard normal as $\varphi(z) = e^{-\pi z^2}$, which has a simple functional form and a variance of $\sigma^2 = 1/(2\pi)$.[\[12\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-12)
### General normal distribution
If $Z$ is a [standard normal deviate](https://en.wikipedia.org/wiki/Standard_normal_deviate "Standard normal deviate"), then $X = \sigma Z + \mu$ will have a normal distribution with expected value $\mu$ and standard deviation $\sigma$. This is equivalent to saying that the standard normal distribution $Z$ can be scaled/stretched by a factor of $\sigma$ and shifted by $\mu$ to yield a different normal distribution, called $X$.
Conversely, if $X$ is a normal deviate with parameters $\mu$ and $\sigma^2$, then this $X$ distribution can be re-scaled and shifted via the formula $Z = (X - \mu)/\sigma$ to convert it to the standard normal distribution. This variate is also called the standardized form of $X$.
In particular, the probability density function for $X$ can be written in terms of the standard normal density $\varphi$ (with zero mean and unit variance): $$f(x \mid \mu, \sigma^2) = \frac{1}{\sigma}\,\varphi\!\left(\frac{x - \mu}{\sigma}\right).$$ The probability density must be scaled by $1/\sigma$ so that the [integral](https://en.wikipedia.org/wiki/Integral "Integral") is still 1.
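The scale-and-shift relation can be sanity-checked by simulation; a brief Python sketch (illustrative, using the standard library's `random.gauss` for standard normal draws):

```python
import random
import statistics

random.seed(42)
mu, sigma = 10.0, 3.0

# X = sigma * Z + mu: standard normal draws become N(mu, sigma^2) draws.
zs = [random.gauss(0.0, 1.0) for _ in range(100_000)]
xs = [sigma * z + mu for z in zs]
print(statistics.fmean(xs), statistics.stdev(xs))   # approximately 10 and 3

# Standardizing with Z = (X - mu) / sigma recovers mean ~0 and sd ~1.
std = [(x - mu) / sigma for x in xs]
print(statistics.fmean(std), statistics.stdev(std))
```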
The probability density of the standard Gaussian distribution (standard normal distribution, with zero mean and unit variance) is often denoted with the Greek letter $\phi$ ([phi](https://en.wikipedia.org/wiki/Phi "Phi")).[\[13\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-13) The variant form of the Greek letter phi, $\varphi$, is also used quite often.
The normal distribution is often referred to as $N(\mu, \sigma^2)$ or $\mathcal{N}(\mu, \sigma^2)$.[\[14\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-14) Thus when a random variable $X$ is normally distributed with mean $\mu$ and standard deviation $\sigma$, one may write $$X \sim \mathcal{N}(\mu, \sigma^2).$$
### Alternative parameterizations
Some authors advocate using the [precision](https://en.wikipedia.org/wiki/Precision_\(statistics\) "Precision (statistics)") $\tau$ as the parameter defining the width of the distribution, instead of the standard deviation $\sigma$ or the variance $\sigma^2$. The precision is normally defined as the reciprocal of the variance, $\tau = 1/\sigma^2$.[\[15\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-15) The formula for the distribution then becomes $$f(x) = \sqrt{\frac{\tau}{2\pi}}\, e^{-\tau(x-\mu)^2/2}.$$
This choice is claimed to have advantages in numerical computations when $\sigma$ is very close to zero, and simplifies formulas in some contexts, such as in the [Bayesian inference](https://en.wikipedia.org/wiki/Bayesian_statistics "Bayesian statistics") of variables with [multivariate normal distribution](https://en.wikipedia.org/wiki/Multivariate_normal_distribution "Multivariate normal distribution").
Alternatively, the reciprocal of the standard deviation, $\tau' = 1/\sigma$, might be defined as the *precision*, in which case the expression of the normal distribution becomes $$f(x) = \frac{\tau'}{\sqrt{2\pi}}\, e^{-(\tau')^2 (x-\mu)^2/2}.$$
According to Stigler, this formulation is advantageous because of a much simpler and easier-to-remember formula, and simple approximate formulas for the [quantiles](https://en.wikipedia.org/wiki/Quantile "Quantile") of the distribution.
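All three parameterizations describe the same density; a small Python sketch (function names are illustrative) confirming that the $\tau$ and $\tau'$ forms agree with the $\sigma$ form:

```python
import math

def pdf_sigma(x, mu, sigma):
    """Normal density parameterized by the standard deviation sigma."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def pdf_tau(x, mu, tau):
    """Same density with precision tau = 1 / sigma^2."""
    return math.sqrt(tau / (2 * math.pi)) * math.exp(-tau * (x - mu) ** 2 / 2)

def pdf_tau_prime(x, mu, tau_p):
    """Same density with precision tau' = 1 / sigma."""
    return tau_p / math.sqrt(2 * math.pi) * math.exp(-(tau_p ** 2) * (x - mu) ** 2 / 2)

mu, sigma, x = 1.0, 2.0, 0.5
assert math.isclose(pdf_sigma(x, mu, sigma), pdf_tau(x, mu, 1 / sigma ** 2))
assert math.isclose(pdf_sigma(x, mu, sigma), pdf_tau_prime(x, mu, 1 / sigma))
```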
Normal distributions form an [exponential family](https://en.wikipedia.org/wiki/Exponential_family "Exponential family") with [natural parameters](https://en.wikipedia.org/wiki/Natural_parameter "Natural parameter") $\theta_1 = \frac{\mu}{\sigma^2}$ and $\theta_2 = -\frac{1}{2\sigma^2}$, and natural statistics $x$ and $x^2$. The dual expectation parameters for the normal distribution are $\eta_1 = \mu$ and $\eta_2 = \mu^2 + \sigma^2$.
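The expectation parameters are simply the means of the natural statistics, which a quick Monte Carlo sketch in Python (illustrative, standard library only) makes concrete:

```python
import random
import statistics

random.seed(0)
mu, sigma = 2.0, 1.5
xs = [random.gauss(mu, sigma) for _ in range(200_000)]

# The expectation parameters are the means of the natural statistics x and x^2:
eta1 = statistics.fmean(xs)                   # ~ mu = 2.0
eta2 = statistics.fmean(x * x for x in xs)    # ~ mu^2 + sigma^2 = 6.25
print(eta1, eta2)
```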
### Cumulative distribution function
The [cumulative distribution function](https://en.wikipedia.org/wiki/Cumulative_distribution_function "Cumulative distribution function") (CDF) of the standard normal distribution, usually denoted with the capital Greek letter $\Phi$, is the integral $$\Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-t^2/2}\, dt.$$
The related [error function](https://en.wikipedia.org/wiki/Error_function "Error function") $\operatorname{erf}(x)$ gives the probability of a random variable, with normal distribution of mean 0 and variance 1/2, falling in the range $[-x, x]$. That is: $$\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^2}\, dt.$$
These integrals cannot be expressed in terms of elementary functions, and are often said to be [special functions](https://en.wikipedia.org/wiki/Special_function "Special function"). However, many numerical approximations are known; see [below](https://en.wikipedia.org/wiki/Normal_distribution#Numerical_approximations_for_the_normal_cumulative_distribution_function_and_normal_quantile_function) for more.
The two functions are closely related, namely $$\Phi(x) = \frac{1}{2}\left[1 + \operatorname{erf}\left(\frac{x}{\sqrt{2}}\right)\right].$$
For a generic normal distribution with density $f$, mean $\mu$ and variance $\sigma^2$, the cumulative distribution function is $$F(x) = \Phi\left(\frac{x-\mu}{\sigma}\right) = \frac{1}{2}\left[1 + \operatorname{erf}\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)\right].$$
The probability that $x$ lies between $a$ and $b$ with $a < b$ is therefore[\[16\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-KunIlPark-16): 84 $$\operatorname{P}(a < x \leq b) = \frac{1}{2}\left[\operatorname{erf}\left(\frac{b-\mu}{\sigma\sqrt{2}}\right) - \operatorname{erf}\left(\frac{a-\mu}{\sigma\sqrt{2}}\right)\right].$$
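Since both $\Phi$ and the interval probability reduce to the error function, they can be computed directly with the Python standard library; the sketch below (function names are illustrative) mirrors the two identities above:

```python
import math

def std_normal_cdf(x):
    """Phi(x) = (1/2) * (1 + erf(x / sqrt(2)))."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def normal_interval_prob(a, b, mu, sigma):
    """P(a < X <= b) for X ~ N(mu, sigma^2), via the erf identity above."""
    za = (a - mu) / (sigma * math.sqrt(2))
    zb = (b - mu) / (sigma * math.sqrt(2))
    return 0.5 * (math.erf(zb) - math.erf(za))

print(std_normal_cdf(1.96))                      # ~0.975
print(normal_interval_prob(-1.96, 1.96, 0, 1))   # ~0.95
```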
The complement of the standard normal cumulative distribution function, $Q(x) = 1 - \Phi(x)$, is often called the [Q-function](https://en.wikipedia.org/wiki/Q-function "Q-function"), especially in engineering texts.[\[17\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-17)[\[18\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-18) It gives the probability that the value of a standard normal random variable $X$ will exceed $x$: $Q(x) = P(X > x)$. Other definitions of the $Q$-function, all of which are simple transformations of $\Phi$, are also used occasionally.[\[19\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-19)
The [graph](https://en.wikipedia.org/wiki/Graph_of_a_function "Graph of a function") of the standard normal cumulative distribution function $\Phi$ has 2-fold [rotational symmetry](https://en.wikipedia.org/wiki/Rotational_symmetry "Rotational symmetry") around the point $(0, 1/2)$; that is, $\Phi(-x) = 1 - \Phi(x)$. Its [antiderivative](https://en.wikipedia.org/wiki/Antiderivative "Antiderivative") (indefinite integral) can be expressed as follows: $$\int \Phi(x)\, dx = x\,\Phi(x) + \varphi(x) + C.$$
An [asymptotic expansion](https://en.wikipedia.org/wiki/Asymptotic_expansion "Asymptotic expansion") of the cumulative distribution function for large $x$ can be derived using [integration by parts](https://en.wikipedia.org/wiki/Integration_by_parts "Integration by parts"): $$1 - \Phi(x) = \frac{\varphi(x)}{x}\left[1 + \sum_{n=1}^{N} (-1)^n \frac{(2n-1)!!}{x^{2n}}\right] + R_N(x),$$ where $n!!$ denotes the [double factorial](https://en.wikipedia.org/wiki/Double_factorial "Double factorial") and the remainder $R_N(x)$ is of smaller order than the last retained term. For more, see [Error function § Asymptotic expansion](https://en.wikipedia.org/wiki/Error_function#Asymptotic_expansion "Error function").[\[20\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-20)
#### Taylor series representation
The [Taylor series](https://en.wikipedia.org/wiki/Taylor_series "Taylor series") for the normal distribution density $\varphi(x)$ can be derived by substituting $-x^2/2$ into the [Taylor series for the exponential function](https://en.wikipedia.org/wiki/Exponential_function#Power_series "Exponential function"):[\[21\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-duff-21)
$$\varphi(x) = \frac{1}{\sqrt{2\pi}} \sum_{k=0}^{\infty} \frac{(-1)^k\, x^{2k}}{2^k\, k!}\,.$$
This series can be integrated term by term to obtain the Taylor series for the cumulative distribution function:[\[22\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-kendall-22) $$\Phi(x) = \frac{1}{2} + \frac{1}{\sqrt{2\pi}} \sum_{k=0}^{\infty} \frac{(-1)^k\, x^{2k+1}}{2^k\, k!\, (2k+1)}\,.$$ However, this series is ineffective for calculation due to slow convergence, except when $x$ is small.[\[22\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-kendall-22)
Both of these series describe [entire functions](https://en.wikipedia.org/wiki/Entire_function "Entire function"), which converge for all real and complex values of $x$.
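As an illustration of the slow convergence, a small Python sketch (truncation order is an arbitrary choice) can compare the truncated Taylor sum for $\Phi$ with the erf-based closed form:

```python
import math

def std_normal_cdf_taylor(x, terms=60):
    """Phi(x) from the term-by-term integrated Taylor series; accurate for moderate |x| only."""
    total = 0.0
    for k in range(terms):
        total += (-1) ** k * x ** (2 * k + 1) / (2 ** k * math.factorial(k) * (2 * k + 1))
    return 0.5 + total / math.sqrt(2 * math.pi)

# Agrees with the erf form near the origin, degrades for large |x|:
exact = 0.5 * (1 + math.erf(1.0 / math.sqrt(2)))
print(std_normal_cdf_taylor(1.0), exact)
```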
#### Recursive computation with Taylor series
The recurrence relation for [Hermite polynomials](https://en.wikipedia.org/wiki/Hermite_polynomials "Hermite polynomials") $\mathrm{He}_n(x)$ may be used to efficiently construct the [Taylor series](https://en.wikipedia.org/wiki/Taylor_series "Taylor series") expansion about any point $x_0$: $$\Phi(x) = \sum_{n=0}^{\infty} \frac{\Phi^{(n)}(x_0)}{n!}\,(x - x_0)^n,$$ where: $$\Phi^{(0)}(x_0) = \Phi(x_0), \qquad \Phi^{(1)}(x_0) = \varphi(x_0), \qquad \Phi^{(n)}(x_0) = -\left(x_0\,\Phi^{(n-1)}(x_0) + (n-2)\,\Phi^{(n-2)}(x_0)\right), \quad n \geq 2.$$
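A minimal Python sketch of this recursion (illustrative names; the expansion is seeded with erf-based values of $\Phi(x_0)$ and $\varphi(x_0)$) might read:

```python
import math

def phi(x):
    """Standard normal density."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    """Standard normal CDF via erf, used only to seed the expansion."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def cdf_taylor_about(x, x0, order=30):
    """Evaluate Phi(x) from a Taylor expansion about x0 using the derivative recurrence."""
    d = [Phi(x0), phi(x0)]                       # Phi^(0), Phi^(1) at x0
    for n in range(2, order + 1):
        d.append(-(x0 * d[n - 1] + (n - 2) * d[n - 2]))
    h, total, hpow = x - x0, 0.0, 1.0
    for n in range(order + 1):
        total += d[n] * hpow / math.factorial(n)
        hpow *= h
    return total

print(cdf_taylor_about(1.3, 1.0), Phi(1.3))      # should agree closely
```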
#### Standard deviation and coverage
*Figure ([file](https://en.wikipedia.org/wiki/File:Standard_deviation_diagram.svg)): For the normal distribution, the values within one standard deviation of the mean account for 68.27% of the set; within two standard deviations, 95.45%; and within three standard deviations, 99.73%.*
About 68% of values drawn from a normal distribution are within one standard deviation σ from the mean; about 95% of the values lie within two standard deviations; and about 99.7% are within three standard deviations.[\[9\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-www.mathsisfun.com-9) This is known as the [68–95–99.7 (empirical) rule](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule "68–95–99.7 rule"), or the *3-sigma rule*.
More precisely, the probability that a normal deviate lies in the range between $\mu - n\sigma$ and $\mu + n\sigma$ is given by $$F(\mu + n\sigma) - F(\mu - n\sigma) = \Phi(n) - \Phi(-n) = \operatorname{erf}\left(\frac{n}{\sqrt{2}}\right).$$ To 12 significant digits, the values for $n = 1, 2, \ldots, 6$ are:
| $n$ | $p = F(\mu+n\sigma) - F(\mu-n\sigma)$ | $1-p$ | or 1 in $(1-p)$ | [OEIS](https://en.wikipedia.org/wiki/On-Line_Encyclopedia_of_Integer_Sequences "On-Line Encyclopedia of Integer Sequences") |
|---|---|---|---|---|
| 1 | 0.682689492137 | 0.317310507863 | 3.15148718753 | [A178647](https://oeis.org/A178647 "oeis:A178647") |
| 2 | 0.954499736104 | 0.045500263896 | 21.9778945080 | [A110894](https://oeis.org/A110894 "oeis:A110894") |
| 3 | 0.997300203937 | 0.002699796063 | 370.398347345 | [A270712](https://oeis.org/A270712 "oeis:A270712") |
| 4 | 0.999936657516 | 0.000063342484 | 15787.1927673 | |
| 5 | 0.999999426697 | 0.000000573303 | 1744277.89362 | |
| 6 | 0.999999998027 | 0.000000001973 | 506797345.897 | |
For large $n$, one can use the approximation $$1 - p \approx \frac{e^{-n^2/2}}{n\sqrt{\pi/2}}\,.$$
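The coverage values in the table, and the quality of this tail approximation, can be reproduced in a few lines of Python (a sketch using the standard library's erf):

```python
import math

for n in range(1, 7):
    p = math.erf(n / math.sqrt(2))   # P(mu - n*sigma < X < mu + n*sigma)
    tail = 1 - p
    approx = math.exp(-n * n / 2) / (n * math.sqrt(math.pi / 2))
    print(n, p, tail, approx)        # approx tracks the tail closely as n grows
```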
#### Quantile function
The [quantile function](https://en.wikipedia.org/wiki/Quantile_function "Quantile function") of a distribution is the inverse of the cumulative distribution function. The quantile function of the standard normal distribution is called the [probit function](https://en.wikipedia.org/wiki/Probit_function "Probit function"), and can be expressed in terms of the inverse [error function](https://en.wikipedia.org/wiki/Error_function "Error function"): $$\Phi^{-1}(p) = \sqrt{2}\,\operatorname{erf}^{-1}(2p - 1), \qquad p \in (0, 1).$$ For a normal random variable with mean $\mu$ and variance $\sigma^2$, the quantile function is $$F^{-1}(p) = \mu + \sigma\,\Phi^{-1}(p) = \mu + \sigma\sqrt{2}\,\operatorname{erf}^{-1}(2p - 1), \qquad p \in (0, 1).$$ The [quantile](https://en.wikipedia.org/wiki/Quantile "Quantile") $\Phi^{-1}(p)$ of the standard normal distribution is commonly denoted as $z_p$. These values are used in [hypothesis testing](https://en.wikipedia.org/wiki/Hypothesis_testing "Hypothesis testing"), construction of [confidence intervals](https://en.wikipedia.org/wiki/Confidence_interval "Confidence interval") and [Q–Q plots](https://en.wikipedia.org/wiki/Q%E2%80%93Q_plot "Q–Q plot"). A normal random variable $X$ will exceed $\mu + z_p\sigma$ with probability $1 - p$, and will lie outside the interval $\mu \pm z_p\sigma$ with probability $2(1 - p)$. In particular, the quantile $z_{0.975}$ is [1.96](https://en.wikipedia.org/wiki/1.96 "1.96"); therefore a normal random variable will lie outside the interval $\mu \pm 1.96\sigma$ in only 5% of cases.
The following table gives the quantile $z_p$ such that $X$ will lie in the range $\mu \pm z_p\sigma$ with a specified probability $p$. These values are useful to determine [tolerance intervals](https://en.wikipedia.org/wiki/Tolerance_interval "Tolerance interval") for [sample averages](https://en.wikipedia.org/wiki/Sample_mean_and_sample_covariance#Sample_mean "Sample mean and sample covariance") and other statistical [estimators](https://en.wikipedia.org/wiki/Estimator "Estimator") with normal (or [asymptotically](https://en.wikipedia.org/wiki/Asymptotic "Asymptotic") normal) distributions.[\[23\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-23) The following table shows $\sqrt{2}\,\operatorname{erf}^{-1}(p) = \Phi^{-1}\left(\frac{p+1}{2}\right)$, not $\Phi^{-1}(p)$ as defined above.
| $p$ | $z_p$ |
|---|---|
| 0.80 | 1.281551565545 |
| 0.90 | 1.644853626951 |
| 0.95 | 1.959963984540 |
| 0.98 | 2.326347874041 |
| 0.99 | 2.575829303549 |
| 0.995 | 2.807033768344 |
| 0.998 | 3.090232306168 |
| 0.999 | 3.290526731492 |
| 0.9999 | 3.890591886413 |
| 0.99999 | 4.417173413469 |
For small $p$, the quantile function has the useful [asymptotic expansion](https://en.wikipedia.org/wiki/Asymptotic_expansion "Asymptotic expansion") $$\Phi^{-1}(p) = -\sqrt{\ln\frac{1}{p^2} - \ln\ln\frac{1}{p^2} - \ln(2\pi)} + o(1).$$ \[*[citation needed](https://en.wikipedia.org/wiki/Wikipedia:Citation_needed "Wikipedia:Citation needed")*\]
#### Using root finding to compute the quantile function
Any of the described approaches for computing the cumulative distribution function $F$ can be used with [Newton's method](https://en.wikipedia.org/wiki/Newton%27s_method "Newton's method") (or another [root-finding algorithm](https://en.wikipedia.org/wiki/Root-finding_algorithm "Root-finding algorithm") such as [Halley's method](https://en.wikipedia.org/wiki/Halley%27s_method "Halley's method")) to find the value of $x$ for which $F(x) = p$ for some desired quantile $p$. For example, starting with an initial, approximately correct guess $x_0$, increasingly better approximations $x_1$, $x_2$, ... can be calculated iteratively using Newton's method with $$x_{n+1} = x_n - \frac{F(x_n) - p}{f(x_n)}\,,$$ where the density $f$ is the derivative of $F$.
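A minimal sketch of this iteration in Python, using the erf-based CDF and the density as its derivative (function names are illustrative):

```python
import math

def cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def pdf(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def std_normal_quantile(p, x0=0.0, tol=1e-12, max_iter=50):
    """Solve cdf(x) = p by Newton's method; the derivative of the CDF is the density."""
    x = x0
    for _ in range(max_iter):
        step = (cdf(x) - p) / pdf(x)
        x -= step
        if abs(step) < tol:
            break
    return x

print(std_normal_quantile(0.975))   # ~1.959963984540
```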
## Properties
The normal distribution is the only distribution whose [cumulants](https://en.wikipedia.org/wiki/Cumulant "Cumulant") beyond the first two (i.e., other than the mean and [variance](https://en.wikipedia.org/wiki/Variance "Variance")) are zero. It is also the continuous distribution with the [maximum entropy](https://en.wikipedia.org/wiki/Maximum_entropy_probability_distribution "Maximum entropy probability distribution") for a specified mean and variance.[\[24\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-FOOTNOTECoverThomas2006254-24)[\[25\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-25) Geary has shown, assuming that the mean and variance are finite, that the normal distribution is the only distribution where the mean and variance calculated from a set of independent draws are independent of each other.[\[26\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Geary_RC-26)[\[27\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-27)
The normal distribution is a subclass of the [elliptical distributions](https://en.wikipedia.org/wiki/Elliptical_distribution "Elliptical distribution"). The normal distribution is [symmetric](https://en.wikipedia.org/wiki/Symmetric_distribution "Symmetric distribution") about its mean, and is non-zero over the entire real line. As such it may not be a suitable model for variables that are inherently positive or strongly skewed, such as the [weight](https://en.wikipedia.org/wiki/Weight "Weight") of a person or the price of a [share of stock](https://en.wikipedia.org/wiki/Share_\(finance\) "Share (finance)"). Such variables may be better described by other distributions, such as the [log-normal distribution](https://en.wikipedia.org/wiki/Log-normal_distribution "Log-normal distribution") or the [Pareto distribution](https://en.wikipedia.org/wiki/Pareto_distribution "Pareto distribution").
The value of the normal density is practically zero when the value â â lies more than a few [standard deviations](https://en.wikipedia.org/wiki/Standard_deviation "Standard deviation") away from the mean (e.g., a spread of three standard deviations covers all but 0.27% of the total distribution). Therefore, it may not be an appropriate model when one expects a significant fraction of [outliers](https://en.wikipedia.org/wiki/Outlier "Outlier")âvalues that lie many standard deviations away from the meanâand least squares and other [statistical inference](https://en.wikipedia.org/wiki/Statistical_inference "Statistical inference") methods that are optimal for normally distributed variables often become highly unreliable when applied to such data. In those cases, a more [heavy-tailed](https://en.wikipedia.org/wiki/Heavy-tailed "Heavy-tailed") distribution should be assumed and appropriate [robust statistical inference](https://en.wikipedia.org/wiki/Robust_statistics "Robust statistics") methods applied.
The Gaussian distribution belongs to the family of [stable distributions](https://en.wikipedia.org/wiki/Stable_distribution "Stable distribution"), which are the attractors of sums of [independent, identically distributed](https://en.wikipedia.org/wiki/Independent,_identically_distributed "Independent, identically distributed") random variables whether or not the mean or variance is finite. Except for the Gaussian, which is a limiting case, all stable distributions have heavy tails and infinite variance. It is one of the few distributions that are stable and that have probability density functions that can be expressed analytically, the others being the [Cauchy distribution](https://en.wikipedia.org/wiki/Cauchy_distribution "Cauchy distribution") and the [Lévy distribution](https://en.wikipedia.org/wiki/L%C3%A9vy_distribution "Lévy distribution").
### Symmetries and derivatives
The normal distribution with density $f(x)$ (mean $\mu$ and variance $\sigma^2 > 0$) has the following properties:
- It is symmetric around the point $x = \mu$, which is at the same time the mode, the median and the mean of the distribution.
- It is unimodal: its first derivative is positive for $x < \mu$, negative for $x > \mu$, and zero only at $x = \mu$.
- Its density has two inflection points (where the second derivative of $f$ is zero and changes sign), located one standard deviation away from the mean, namely at $x = \mu - \sigma$ and $x = \mu + \sigma$.
- Its density is [log-concave](https://en.wikipedia.org/wiki/Logarithmically_concave_function "Logarithmically concave function").
- Its density is infinitely [differentiable](https://en.wikipedia.org/wiki/Differentiable_function "Differentiable function").

Furthermore, the density $\varphi$ of the standard normal distribution (i.e. $\mu = 0$ and $\sigma = 1$) also has the following properties:
- Its first derivative is $\varphi'(x) = -x\varphi(x)$.
- Its second derivative is $\varphi''(x) = (x^2 - 1)\varphi(x)$.
- More generally, its $n$th derivative is $\varphi^{(n)}(x) = (-1)^n \mathrm{He}_n(x)\varphi(x)$, where $\mathrm{He}_n(x)$ is the $n$th [Hermite polynomial](https://en.wikipedia.org/wiki/Hermite_polynomials "Hermite polynomials").
### Moments
The plain and absolute [moments](https://en.wikipedia.org/wiki/Moment_\(mathematics\) "Moment (mathematics)") of a variable $X$ are the expected values of $X^p$ and $|X|^p$, respectively. If the expected value $\mu$ of $X$ is zero, these parameters are called *central moments;* otherwise, these parameters are called *non-central moments.* Usually we are interested only in moments with integer order $p$.
If $X$ has a normal distribution, the non-central moments exist and are finite for any $p$ whose real part is greater than −1. For any non-negative integer $p$, the plain central moments are:[\[31\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-31) $$\operatorname{E}\left[(X-\mu)^p\right] = \begin{cases} 0 & \text{if } p \text{ is odd,} \\ \sigma^p (p-1)!! & \text{if } p \text{ is even.} \end{cases}$$ Here $n!!$ denotes the [double factorial](https://en.wikipedia.org/wiki/Double_factorial "Double factorial"), that is, the product of all numbers from $n$ to 1 that have the same parity as $n$.
The central absolute moments coincide with plain moments for all even orders, but are nonzero for odd orders. For any non-negative integer $p$, $$\begin{aligned}\operatorname{E}\left[|X-\mu|^p\right] &= \sigma^p (p-1)!! \cdot \begin{cases}\sqrt{\frac{2}{\pi}} & \text{if } p \text{ is odd} \\ 1 & \text{if } p \text{ is even}\end{cases} \\ &= \sigma^p \cdot \frac{2^{p/2}\,\Gamma\left(\frac{p+1}{2}\right)}{\sqrt{\pi}}\,.\end{aligned}$$ The last formula is valid also for any non-integer $p > -1$. When the mean $\mu \neq 0$, the plain and absolute moments can be expressed in terms of [confluent hypergeometric functions](https://en.wikipedia.org/wiki/Confluent_hypergeometric_function "Confluent hypergeometric function") ${}_1F_1$ and $U$:[\[32\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-32) $$\begin{aligned}\operatorname{E}\left[X^p\right] &= \sigma^p \cdot \left(-i\sqrt{2}\right)^p\, U\!\left(-\frac{p}{2}, \frac{1}{2}, -\frac{\mu^2}{2\sigma^2}\right), \\ \operatorname{E}\left[|X|^p\right] &= \sigma^p \cdot 2^{p/2}\, \frac{\Gamma\left(\frac{1+p}{2}\right)}{\sqrt{\pi}}\, {}_1F_1\!\left(-\frac{p}{2}, \frac{1}{2}, -\frac{\mu^2}{2\sigma^2}\right).\end{aligned}$$
These expressions remain valid even when $p$ is not an integer. See also [generalized Hermite polynomials](https://en.wikipedia.org/wiki/Hermite_polynomials#"Negative_variance" "Hermite polynomials").
| Order | Non-central moment, $\operatorname{E}\left[X^p\right]$ |
|---|---|
| 1 | $\mu$ |
| 2 | $\mu^2 + \sigma^2$ |
| 3 | $\mu^3 + 3\mu\sigma^2$ |
| 4 | $\mu^4 + 6\mu^2\sigma^2 + 3\sigma^4$ |
| 5 | $\mu^5 + 10\mu^3\sigma^2 + 15\mu\sigma^4$ |
| 6 | $\mu^6 + 15\mu^4\sigma^2 + 45\mu^2\sigma^4 + 15\sigma^6$ |
| 7 | $\mu^7 + 21\mu^5\sigma^2 + 105\mu^3\sigma^4 + 105\mu\sigma^6$ |
| 8 | $\mu^8 + 28\mu^6\sigma^2 + 210\mu^4\sigma^4 + 420\mu^2\sigma^6 + 105\sigma^8$ |
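As a quick check of the absolute-moment formula, one can compare the closed form against a Monte Carlo estimate (a Python sketch; sample size and seed are arbitrary choices):

```python
import math
import random

def central_abs_moment(p, sigma):
    """Closed form E|X - mu|^p = sigma^p * 2^(p/2) * Gamma((p+1)/2) / sqrt(pi)."""
    return sigma ** p * 2 ** (p / 2) * math.gamma((p + 1) / 2) / math.sqrt(math.pi)

random.seed(0)
mu, sigma, n = 3.0, 2.0, 200_000
samples = [random.gauss(mu, sigma) for _ in range(n)]
for p in (1, 2, 3, 4):
    mc = sum(abs(x - mu) ** p for x in samples) / n
    print(p, central_abs_moment(p, sigma), mc)   # closed form vs. Monte Carlo
```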
The expectation of $X$ conditioned on the event that $X$ lies in an interval $[a, b]$ is given by $$\operatorname{E}\left[X \mid a < X < b\right] = \mu - \sigma^2\, \frac{f(b) - f(a)}{F(b) - F(a)}\,,$$ where $f$ and $F$ respectively are the density and the cumulative distribution function of $X$. For $b = \infty$ this is known as the [inverse Mills ratio](https://en.wikipedia.org/wiki/Inverse_Mills_ratio "Inverse Mills ratio"). Note that above, the density $f$ of $X$ is used instead of the standard normal density as in the inverse Mills ratio, so here we have $\sigma^2$ instead of $\sigma$.
### Fourier transform and characteristic function
The [Fourier transform](https://en.wikipedia.org/wiki/Fourier_transform "Fourier transform") of a normal density $f$ with mean $\mu$ and variance $\sigma^2$ is[\[33\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-33)
$$\hat{f}(t) = \int_{-\infty}^{\infty} f(x)\, e^{-itx}\, dx = e^{-i\mu t}\, e^{-\frac{1}{2}(\sigma t)^2}\,,$$
where $i$ is the [imaginary unit](https://en.wikipedia.org/wiki/Imaginary_unit "Imaginary unit"). If the mean $\mu = 0$, the first factor is 1, and the Fourier transform is, apart from a constant factor, a normal density on the [frequency domain](https://en.wikipedia.org/wiki/Frequency_domain "Frequency domain"), with mean 0 and variance $1/\sigma^2$. In particular, the standard normal distribution $\varphi$ is an [eigenfunction](https://en.wikipedia.org/wiki/Fourier_transform#Eigenfunctions "Fourier transform") of the Fourier transform.
In probability theory, the Fourier transform of the probability distribution of a real-valued random variable $X$ is closely connected to the [characteristic function](https://en.wikipedia.org/wiki/Characteristic_function_\(probability_theory\) "Characteristic function (probability theory)") $\varphi_X(t)$ of that variable, which is defined as the [expected value](https://en.wikipedia.org/wiki/Expected_value "Expected value") of $e^{itX}$, as a function of the real variable $t$ (the [frequency](https://en.wikipedia.org/wiki/Frequency "Frequency") parameter of the Fourier transform). This definition can be analytically extended to a complex-value variable $t$.[\[34\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-34) The relation between both is: $$\varphi_X(t) = \hat{f}(-t)\,.$$
The real and imaginary parts of $\hat{f}(t) = \operatorname{E}[e^{-itX}] = e^{-i\mu t} e^{-\frac{1}{2}\sigma^2 t^2}$ give: $$\operatorname{E}[\cos(tX)] = \cos(\mu t)\, e^{-\frac{1}{2}\sigma^2 t^2} \qquad \text{and} \qquad \operatorname{E}[\sin(tX)] = \sin(\mu t)\, e^{-\frac{1}{2}\sigma^2 t^2}.$$
Similarly, $$\operatorname{E}[\cosh(tX)] = \cosh(\mu t)\, e^{\frac{1}{2}\sigma^2 t^2} \qquad \text{and} \qquad \operatorname{E}[\sinh(tX)] = \sinh(\mu t)\, e^{\frac{1}{2}\sigma^2 t^2}.$$
These formulas evaluated at $t = 1$ give the expected value of these basic trigonometric and hyperbolic functions over a Gaussian random variable $X$, which can also be seen as consequences of [Isserlis's theorem](https://en.wikipedia.org/wiki/Isserlis%27s_theorem "Isserlis's theorem").
### Moment- and cumulant-generating functions
The [moment generating function](https://en.wikipedia.org/wiki/Moment_generating_function "Moment generating function") of a real random variable $X$ is the expected value of $e^{tX}$, as a function of the real parameter $t$. For a normal distribution with density $f$, mean $\mu$ and variance $\sigma^2$, the moment generating function exists and is equal to
$$M(t) = \operatorname{E}\left[e^{tX}\right] = \hat{f}(it) = e^{\mu t}\, e^{\sigma^2 t^2/2}\,.$$ For any $k$, the coefficient of $t^k/k!$ in the moment generating function (expressed as an [exponential power series](https://en.wikipedia.org/wiki/Generating_function#Exponential_generating_function_\(EGF\) "Generating function") in $t$) is the normal distribution's expected value $\operatorname{E}[X^k]$.
The [cumulant generating function](https://en.wikipedia.org/wiki/Cumulant_generating_function "Cumulant generating function") is the logarithm of the moment generating function, namely $$g(t) = \ln M(t) = \mu t + \tfrac{1}{2}\sigma^2 t^2\,.$$
The coefficients of this exponential power series define the cumulants, but because this is a quadratic polynomial in $t$, only the first two [cumulants](https://en.wikipedia.org/wiki/Cumulant "Cumulant") are nonzero, namely the mean $\mu$ and the variance $\sigma^2$.
Some authors prefer to instead work with the [characteristic function](https://en.wikipedia.org/wiki/Characteristic_function_\(probability_theory\) "Characteristic function (probability theory)") $\operatorname{E}[e^{itX}] = e^{i\mu t - \frac{1}{2}\sigma^2 t^2}$ and $\ln \operatorname{E}[e^{itX}] = i\mu t - \tfrac{1}{2}\sigma^2 t^2$.
### Stein operator and class
Within [Stein's method](https://en.wikipedia.org/wiki/Stein%27s_method "Stein's method") the Stein operator and class of a random variable $X \sim \mathcal{N}(\mu, \sigma^2)$ are $\mathcal{A}f(x) = \sigma^2 f'(x) - (x - \mu)f(x)$ and $\mathcal{F}$ the class of all absolutely continuous functions $f : \mathbb{R} \to \mathbb{R}$ such that $\operatorname{E}[|f'(X)|] < \infty$.
### Zero-variance limit
In the [limit](https://en.wikipedia.org/wiki/Limit_\(mathematics\) "Limit (mathematics)") when $\sigma^2$ approaches zero, the probability density $f(x)$ approaches zero everywhere except at $x = \mu$, where it approaches infinity, while its integral remains equal to 1. An extension of the normal distribution to the case with zero variance can be defined using the [Dirac delta measure](https://en.wikipedia.org/wiki/Dirac_measure "Dirac measure") $\delta_\mu$ concentrated at the mean, although the resulting random variables are not [absolutely continuous](https://en.wikipedia.org/wiki/Absolutely_continuous_random_variable "Absolutely continuous random variable") and thus do not have [probability density functions](https://en.wikipedia.org/wiki/Probability_density_function "Probability density function"). The cumulative distribution function of such a random variable is then the [Heaviside step function](https://en.wikipedia.org/wiki/Heaviside_step_function "Heaviside step function") translated by the mean $\mu$, namely $$F(x) = \begin{cases} 0 & \text{if } x < \mu, \\ 1 & \text{if } x \geq \mu. \end{cases}$$
### Maximum entropy
Of all probability distributions over the reals with a specified finite mean $\mu$ and finite variance $\sigma^2$, the normal distribution $\mathcal{N}(\mu, \sigma^2)$ is the one with [maximum entropy](https://en.wikipedia.org/wiki/Maximum_entropy_probability_distribution "Maximum entropy probability distribution").[\[24\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-FOOTNOTECoverThomas2006254-24) To see this, let $X$ be a [continuous random variable](https://en.wikipedia.org/wiki/Continuous_random_variable "Continuous random variable") with [probability density](https://en.wikipedia.org/wiki/Probability_density "Probability density") $f(x)$. The entropy of $X$ is defined as[\[35\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-35)[\[36\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-36)[\[37\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-37) $$H(X) = -\int_{-\infty}^{\infty} f(x)\ln f(x)\, dx\,,$$ where $f(x)\ln f(x)$ is understood to be zero whenever $f(x) = 0$. This functional can be maximized, subject to the constraints that the distribution is properly normalized and has a specified mean and variance, by using [variational calculus](https://en.wikipedia.org/wiki/Variational_calculus "Variational calculus"). A function with three [Lagrange multipliers](https://en.wikipedia.org/wiki/Lagrange_multipliers "Lagrange multipliers") is defined: $$L = -\int_{-\infty}^{\infty} f(x)\ln f(x)\, dx - \lambda_0\left(1 - \int_{-\infty}^{\infty} f(x)\, dx\right) - \lambda_1\left(\mu - \int_{-\infty}^{\infty} x f(x)\, dx\right) - \lambda_2\left(\sigma^2 - \int_{-\infty}^{\infty} (x - \mu)^2 f(x)\, dx\right).$$
At maximum entropy, a small variation $\delta f(x)$ about $f(x)$ will produce a variation $\delta L$ about $L$ which is equal to 0: $$\delta L = \int_{-\infty}^{\infty} \delta f(x)\left(-\ln f(x) - 1 + \lambda_0 + \lambda_1 x + \lambda_2 (x - \mu)^2\right) dx = 0.$$
Since this must hold for any small $\delta f(x)$, the factor multiplying $\delta f(x)$ must be zero, and solving for $f(x)$ yields: $$f(x) = \exp\left(-1 + \lambda_0 + \lambda_1 x + \lambda_2 (x - \mu)^2\right).$$
The Lagrange constraints that $f(x)$ is properly normalized and has the specified mean and variance are satisfied if and only if $\lambda_0$, $\lambda_1$, and $\lambda_2$ are chosen so that $$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}\,.$$ The entropy of a normal distribution $X \sim \mathcal{N}(\mu, \sigma^2)$ is equal to $$H(X) = \tfrac{1}{2}\left(1 + \ln\left(2\pi\sigma^2\right)\right),$$ which is independent of the mean $\mu$.
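The closed-form entropy can be checked numerically; the sketch below (integration span and step count are arbitrary choices) approximates $-\int f \ln f$ by a midpoint Riemann sum:

```python
import math

def entropy_closed_form(sigma):
    """H = (1/2) * (1 + ln(2 * pi * sigma^2)), independent of mu."""
    return 0.5 * (1 + math.log(2 * math.pi * sigma ** 2))

def entropy_numeric(mu, sigma, span=12.0, steps=100_000):
    """Midpoint Riemann-sum approximation of -integral f ln f over mu +/- span*sigma."""
    a, b = mu - span * sigma, mu + span * sigma
    h = (b - a) / steps
    total = 0.0
    for i in range(steps):
        x = a + (i + 0.5) * h
        f = math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
        total -= f * math.log(f) * h
    return total

print(entropy_closed_form(1.7), entropy_numeric(5.0, 1.7))   # should agree closely
```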
### Other properties
1. If the characteristic function $\phi_X$ of some random variable $X$ is of the form $\phi_X(t) = \exp(Q(t))$ in a neighborhood of zero, where $Q(t)$ is a [polynomial](https://en.wikipedia.org/wiki/Polynomial "Polynomial"), then the **Marcinkiewicz theorem** (named after [JĂłzef Marcinkiewicz](https://en.wikipedia.org/wiki/J%C3%B3zef_Marcinkiewicz "JĂłzef Marcinkiewicz")) asserts that $Q$ can be at most a quadratic polynomial, and therefore $X$ is a normal random variable.[\[38\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Bryc_1995_35-38) The consequence of this result is that the normal distribution is the only distribution with a finite number (two) of non-zero [cumulants](https://en.wikipedia.org/wiki/Cumulant "Cumulant").
2. If $X$ and $Y$ are [jointly normal](https://en.wikipedia.org/wiki/Jointly_normal "Jointly normal") and [uncorrelated](https://en.wikipedia.org/wiki/Uncorrelated "Uncorrelated"), then they are [independent](https://en.wikipedia.org/wiki/Independence_\(probability_theory\) "Independence (probability theory)"). The requirement that $X$ and $Y$ should be *jointly* normal is essential; without it the property does not hold.[\[39\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-39)[\[40\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-40)[\[proof\]](https://en.wikipedia.org/wiki/Normally_distributed_and_uncorrelated_does_not_imply_independent "Normally distributed and uncorrelated does not imply independent") For non-normal random variables uncorrelatedness does not imply independence.
3. The [Kullback–Leibler divergence](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence "Kullback–Leibler divergence") of one normal distribution $X_1 \sim \mathcal{N}(\mu_1, \sigma_1^2)$ from another $X_2 \sim \mathcal{N}(\mu_2, \sigma_2^2)$ is given by:[\[41\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-41) $$D_\mathrm{KL}(X_1 \parallel X_2) = \frac{(\mu_1 - \mu_2)^2}{2\sigma_2^2} + \frac{1}{2}\left(\frac{\sigma_1^2}{\sigma_2^2} - 1 - \ln\frac{\sigma_1^2}{\sigma_2^2}\right)$$ (a numerical sketch of this formula appears after this list). The [Hellinger distance](https://en.wikipedia.org/wiki/Hellinger_distance "Hellinger distance") between the same distributions is equal to $$H^2(P_1, P_2) = 1 - \sqrt{\frac{2\sigma_1\sigma_2}{\sigma_1^2 + \sigma_2^2}}\; e^{-\frac{(\mu_1 - \mu_2)^2}{4(\sigma_1^2 + \sigma_2^2)}}\,.$$
4. The [Fisher information matrix](https://en.wikipedia.org/wiki/Fisher_information_matrix "Fisher information matrix") for a normal distribution w.r.t. $\mu$ and $\sigma^2$ is diagonal and takes the form $$\mathcal{I}(\mu, \sigma^2) = \begin{pmatrix} \frac{1}{\sigma^2} & 0 \\ 0 & \frac{1}{2\sigma^4} \end{pmatrix}.$$
5. The [conjugate prior](https://en.wikipedia.org/wiki/Conjugate_prior "Conjugate prior") of the mean of a normal distribution is another normal distribution.[\[42\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-42) Specifically, if $x_1, \ldots, x_n$ are iid $\mathcal{N}(\mu, \sigma^2)$ and the prior is $\mu \sim \mathcal{N}(\mu_0, \sigma_0^2)$, then the posterior distribution for the estimator of $\mu$ will be $$\mu \mid x_1, \ldots, x_n \sim \mathcal{N}\!\left(\frac{\frac{\sigma^2}{n}\mu_0 + \sigma_0^2\bar{x}}{\frac{\sigma^2}{n} + \sigma_0^2},\ \left(\frac{n}{\sigma^2} + \frac{1}{\sigma_0^2}\right)^{-1}\right).$$
6. The family of normal distributions not only forms an [exponential family](https://en.wikipedia.org/wiki/Exponential_family "Exponential family") (EF), but in fact forms a [natural exponential family](https://en.wikipedia.org/wiki/Natural_exponential_family "Natural exponential family") (NEF) with quadratic [variance function](https://en.wikipedia.org/wiki/Variance_function "Variance function") ([NEF-QVF](https://en.wikipedia.org/wiki/NEF-QVF "NEF-QVF")). Many properties of normal distributions generalize to properties of NEF-QVF distributions, NEF distributions, or EF distributions generally. NEF-QVF distributions comprise six families, including the Poisson, gamma, binomial, and negative binomial distributions, while many of the common families studied in probability and statistics are NEF or EF.
7. In [information geometry](https://en.wikipedia.org/wiki/Information_geometry "Information geometry"), the family of normal distributions forms a [statistical manifold](https://en.wikipedia.org/wiki/Statistical_manifold "Statistical manifold") with [constant curvature](https://en.wikipedia.org/wiki/Constant_curvature "Constant curvature") $-1/2$. The same family is [flat](https://en.wikipedia.org/wiki/Flat_manifold "Flat manifold") with respect to the (±1)-connections $\nabla^{(e)}$ and $\nabla^{(m)}$.[\[43\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-43)
8. If $X_1, X_2, \ldots, X_n$ are distributed according to $\mathcal{N}(0, \sigma^2)$, then $\operatorname{E}[\max_i X_i] \leq \sigma\sqrt{2\ln n}$. Note that there is no assumption of independence.[\[44\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-44)
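As referenced in item 3 above, the Kullback–Leibler divergence between two normals is available in closed form; a small Python sketch (the function name is illustrative):

```python
import math

def kl_normal(mu1, s1, mu2, s2):
    """KL(N(mu1, s1^2) || N(mu2, s2^2)) in closed form."""
    return ((mu1 - mu2) ** 2) / (2 * s2 ** 2) \
        + 0.5 * (s1 ** 2 / s2 ** 2 - 1 - math.log(s1 ** 2 / s2 ** 2))

print(kl_normal(0, 1, 0, 1))   # 0: identical distributions
print(kl_normal(1, 1, 0, 2))   # positive, and asymmetric in its arguments
```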
### Central limit theorem
*Figure ([file](https://en.wikipedia.org/wiki/File:De_moivre-laplace.gif)): As the number of discrete events increases, the function begins to resemble a normal distribution.*
*Figure ([file](https://en.wikipedia.org/wiki/File:Dice_sum_central_limit_theorem.svg)): Comparison of probability density functions $p(k)$ for the sum of $n$ fair 6-sided dice, showing their convergence to a normal distribution with increasing $n$, in accordance with the central limit theorem. In the bottom-right graph, smoothed profiles of the previous graphs are rescaled, superimposed and compared with a normal distribution (black curve).*
The central limit theorem states that under certain (fairly common) conditions, the sum of many random variables will have an approximately normal distribution. More specifically, where $X_1, \ldots, X_n$ are [independent and identically distributed](https://en.wikipedia.org/wiki/Independent_and_identically_distributed "Independent and identically distributed") random variables with the same arbitrary distribution, zero mean, and variance $\sigma^2$, and $Z$ is their mean scaled by $\sqrt{n}$, $$Z = \sqrt{n}\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right),$$ then, as $n$ increases, the probability distribution of $Z$ will tend to the normal distribution with zero mean and variance $\sigma^2$.
The theorem can be extended to variables $X_1, \ldots, X_n$ that are not independent and/or not identically distributed if certain constraints are placed on the degree of dependence and the moments of the distributions.
Many [test statistics](https://en.wikipedia.org/wiki/Test_statistic "Test statistic"), [scores](https://en.wikipedia.org/wiki/Score_\(statistics\) "Score (statistics)"), and [estimators](https://en.wikipedia.org/wiki/Estimator "Estimator") encountered in practice contain sums of certain random variables in them, and even more estimators can be represented as sums of random variables through the use of [influence functions](https://en.wikipedia.org/wiki/Influence_function_\(statistics\) "Influence function (statistics)"). The central limit theorem implies that those statistical parameters will have asymptotically normal distributions.
The central limit theorem also implies that certain distributions can be approximated by the normal distribution, for example:
- The [binomial distribution](https://en.wikipedia.org/wiki/Binomial_distribution "Binomial distribution") $B(n, p)$ is approximately normal with mean $np$ and variance $np(1-p)$ for large $n$ and for $p$ not too close to 0 or 1.
- The [Poisson distribution](https://en.wikipedia.org/wiki/Poisson_distribution "Poisson distribution") with parameter $\lambda$ is approximately normal with mean $\lambda$ and variance $\lambda$, for large values of $\lambda$.
- The [chi-squared distribution](https://en.wikipedia.org/wiki/Chi-squared_distribution "Chi-squared distribution") $\chi^2(k)$ is approximately normal with mean $k$ and variance $2k$, for large $k$.
- The [Student's t-distribution](https://en.wikipedia.org/wiki/Student%27s_t-distribution "Student's t-distribution") $t(\nu)$ is approximately normal with mean 0 and variance 1 when $\nu$ is large.
Whether these approximations are sufficiently accurate depends on the purpose for which they are needed, and the rate of convergence to the normal distribution. It is typically the case that such approximations are less accurate in the tails of the distribution.
A general upper bound for the approximation error in the central limit theorem is given by the [Berry–Esseen theorem](https://en.wikipedia.org/wiki/Berry%E2%80%93Esseen_theorem "Berry–Esseen theorem"); improvements of the approximation are given by the [Edgeworth expansions](https://en.wikipedia.org/wiki/Edgeworth_expansion "Edgeworth expansion").
This theorem can also be used to justify modeling the sum of many uniform noise sources as [Gaussian noise](https://en.wikipedia.org/wiki/Gaussian_noise "Gaussian noise"). See [AWGN](https://en.wikipedia.org/wiki/AWGN "AWGN").
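A simple simulation makes the theorem tangible: standardized sums of uniform random variables already track the normal CDF closely for moderate $n$ (a Python sketch; the choice $n = 12$ and the trial count are arbitrary):

```python
import math
import random

def std_normal_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

random.seed(1)
n, trials = 12, 100_000
# Standardized sum of n uniform(0,1) variables: mean n/2, variance n/12.
zs = [(sum(random.random() for _ in range(n)) - n / 2) / math.sqrt(n / 12)
      for _ in range(trials)]
for t in (-2.0, -1.0, 0.0, 1.0, 2.0):
    empirical = sum(z <= t for z in zs) / trials
    print(t, empirical, std_normal_cdf(t))   # empirical CDF vs. normal CDF
```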
### Operations and functions of normal variables
#### Operations on a single normal variable
If $X$ is distributed normally with mean $\mu$ and variance $\sigma^2$, then:
- The linear transformation $aX + b$, for any real numbers $a$ and $b$ with $a \neq 0$, is also normally distributed, with mean $a\mu + b$ and variance $a^2\sigma^2$. In particular, the standardized variable $Z = (X - \mu)/\sigma$ has the standard normal distribution.
- The exponential $e^X$ is distributed [log-normally](https://en.wikipedia.org/wiki/Log-normal_distribution "Log-normal distribution").
- The absolute value $|X|$ has a [folded normal distribution](https://en.wikipedia.org/wiki/Folded_normal_distribution "Folded normal distribution"); if $\mu = 0$ this is known as the [half-normal distribution](https://en.wikipedia.org/wiki/Half-normal_distribution "Half-normal distribution").
##### Operations on two independent normal variables
- If $X_1$ and $X_2$ are two [independent](https://en.wikipedia.org/wiki/Independence_\(probability_theory\) "Independence (probability theory)") normal random variables, with means $\mu_1$, $\mu_2$ and variances $\sigma_1^2$, $\sigma_2^2$, then their sum $X_1 + X_2$ will also be normally distributed,[\[proof\]](https://en.wikipedia.org/wiki/Sum_of_normally_distributed_random_variables "Sum of normally distributed random variables") with mean $\mu_1 + \mu_2$ and variance $\sigma_1^2 + \sigma_2^2$ (a numerical check appears in the sketch after this list).
- In particular, if $X$ and $Y$ are independent normal deviates with zero mean and variance $\sigma^2$, then $X + Y$ and $X - Y$ are also independent and normally distributed, with zero mean and variance $2\sigma^2$. This is a special case of the [polarization identity](https://en.wikipedia.org/wiki/Polarization_identity "Polarization identity").[\[46\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-46)
- If $X_1$, $X_2$ are two independent normal deviates with mean $\mu$ and variance $\sigma^2$, and $a$, $b$ are arbitrary real numbers, then the variable $$X_3 = \frac{aX_1 + bX_2 - (a+b)\mu}{\sqrt{a^2 + b^2}} + \mu$$ is also normally distributed with mean $\mu$ and variance $\sigma^2$. It follows that the normal distribution is [stable](https://en.wikipedia.org/wiki/Stable_distribution "Stable distribution") (with exponent $\alpha = 2$).
- If $X_1 \sim \mathcal{N}(\mu_1, \sigma_1^2)$, $X_2 \sim \mathcal{N}(\mu_2, \sigma_2^2)$ are normal distributions, then the normalized [geometric mean](https://en.wikipedia.org/wiki/Geometric_mean "Geometric mean") of their density functions is a normal density $\mathcal{N}(\mu_Z, \sigma_Z^2)$ with $\mu_Z = \frac{\mu_1\sigma_2^2 + \mu_2\sigma_1^2}{\sigma_1^2 + \sigma_2^2}$ and $\sigma_Z^2 = \frac{2\sigma_1^2\sigma_2^2}{\sigma_1^2 + \sigma_2^2}$.
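As mentioned in the first item of this list, sums of independent normals add their means and variances; a Monte Carlo sketch in Python (sample size and parameters are arbitrary) checks this:

```python
import random
import statistics

random.seed(2)
n = 200_000
x1 = [random.gauss(1.0, 2.0) for _ in range(n)]    # N(1, 4)
x2 = [random.gauss(-3.0, 1.5) for _ in range(n)]   # N(-3, 2.25)
s = [a + b for a, b in zip(x1, x2)]
print(statistics.fmean(s))       # ~ mu1 + mu2 = -2
print(statistics.pvariance(s))   # ~ sigma1^2 + sigma2^2 = 6.25
```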
##### Operations on two independent standard normal variables
If $X$ and $Y$ are two independent standard normal random variables with mean 0 and variance 1, then:
- Their sum and difference are distributed normally with mean zero and variance two: $X \pm Y \sim \mathcal{N}(0, 2)$.
- Their product $Z = XY$ follows the [product distribution](https://en.wikipedia.org/wiki/Product_distribution "Product distribution") with density function $f_Z(z) = \pi^{-1}K_0(|z|)$, where $K_0$ is a modified [Bessel function](https://en.wikipedia.org/wiki/Bessel_function "Bessel function") of the second kind.
- Their ratio follows the standard [Cauchy distribution](https://en.wikipedia.org/wiki/Cauchy_distribution "Cauchy distribution"): $X/Y \sim \operatorname{Cauchy}(0, 1)$.
- Their Euclidean norm $\sqrt{X^2 + Y^2}$ has the [Rayleigh distribution](https://en.wikipedia.org/wiki/Rayleigh_distribution "Rayleigh distribution").
#### Operations on multiple independent normal variables
- A [quadratic form](https://en.wikipedia.org/wiki/Quadratic_form "Quadratic form") of a normal vector, i.e. a quadratic function $q(x_1, \ldots, x_n)$ of multiple independent or correlated normal variables, is a [generalized chi-square](https://en.wikipedia.org/wiki/Generalized_chi-square_distribution "Generalized chi-square distribution") variable.
### Operations on the density function
The [split normal distribution](https://en.wikipedia.org/wiki/Split_normal_distribution "Split normal distribution") is most directly defined in terms of joining scaled sections of the density functions of different normal distributions and rescaling the density to integrate to one. The [truncated normal distribution](https://en.wikipedia.org/wiki/Truncated_normal_distribution "Truncated normal distribution") results from rescaling a section of a single density function.
### Infinite divisibility and Cramér's theorem
For any positive integer $n$, any normal distribution with mean $\mu$ and variance $\sigma^2$ is the distribution of the sum of $n$ independent normal deviates, each with mean $\frac{\mu}{n}$ and variance $\frac{\sigma^2}{n}$. This property is called [infinite divisibility](https://en.wikipedia.org/wiki/Infinite_divisibility_\(probability\) "Infinite divisibility (probability)").[\[51\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-51)
Conversely, if $X_1$ and $X_2$ are independent random variables and their sum $X_1 + X_2$ has a normal distribution, then both $X_1$ and $X_2$ must be normal deviates.[\[52\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-52)
This result is known as [Cramér's decomposition theorem](https://en.wikipedia.org/wiki/Cram%C3%A9r%27s_decomposition_theorem "Cramér's decomposition theorem"), and is equivalent to saying that the [convolution](https://en.wikipedia.org/wiki/Convolution "Convolution") of two distributions is normal if and only if both are normal. Cramér's theorem implies that a linear combination of independent non-Gaussian variables will never have an exactly normal distribution, although it may approach it arbitrarily closely.[\[38\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Bryc_1995_35-38)
### The KacâBernstein theorem
The [Kac–Bernstein theorem](https://en.wikipedia.org/wiki/Kac%E2%80%93Bernstein_theorem "Kac–Bernstein theorem") states that if $X$ and $Y$ are independent and $X + Y$ and $X - Y$ are also independent, then both $X$ and $Y$ must necessarily have normal distributions.[\[53\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Lukacs-53)[\[54\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-54)
More generally, if $X_1, \ldots, X_n$ are independent random variables, then two distinct linear combinations $\sum a_k X_k$ and $\sum b_k X_k$ will be independent if and only if all $X_k$ are normal and $\sum a_k b_k \sigma_k^2 = 0$, where $\sigma_k^2$ denotes the variance of $X_k$.[\[53\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Lukacs-53)
### Extensions
The notion of a normal distribution, being one of the most important distributions in probability theory, has been extended far beyond the standard framework of the univariate (that is, one-dimensional) case. All these extensions are also called *normal* or *Gaussian* laws, so a certain ambiguity in names exists.
- The [multivariate normal distribution](https://en.wikipedia.org/wiki/Multivariate_normal_distribution "Multivariate normal distribution") describes the Gaussian law in the $k$-dimensional [Euclidean space](https://en.wikipedia.org/wiki/Euclidean_space "Euclidean space"). A vector $X \in \mathbb{R}^k$ is multivariate-normally distributed if any linear combination of its components $\sum_{j=1}^{k} a_j X_j$ has a (univariate) normal distribution. The variance of $X$ is a $k \times k$ symmetric positive-definite matrix $V$. The multivariate normal distribution is a special case of the [elliptical distributions](https://en.wikipedia.org/wiki/Elliptical_distribution "Elliptical distribution"). As such, its iso-density loci in the $k = 2$ case are [ellipses](https://en.wikipedia.org/wiki/Ellipse "Ellipse") and in the case of arbitrary $k$ are [ellipsoids](https://en.wikipedia.org/wiki/Ellipsoid "Ellipsoid").
- The [rectified Gaussian distribution](https://en.wikipedia.org/wiki/Rectified_Gaussian_distribution "Rectified Gaussian distribution") is a rectified version of the normal distribution with all the negative elements reset to 0.
- The [complex normal distribution](https://en.wikipedia.org/wiki/Complex_normal_distribution "Complex normal distribution") deals with the complex normal vectors. A complex vector $X \in \mathbb{C}^k$ is said to be normal if both its real and imaginary components jointly possess a $2k$-dimensional multivariate normal distribution. The variance-covariance structure of $X$ is described by two matrices: the *variance* matrix $\Gamma$, and the *relation* matrix $C$.
- [Matrix normal distribution](https://en.wikipedia.org/wiki/Matrix_normal_distribution "Matrix normal distribution") describes the case of normally distributed matrices.
- [Gaussian processes](https://en.wikipedia.org/wiki/Gaussian_process "Gaussian process") are the normally distributed [stochastic processes](https://en.wikipedia.org/wiki/Stochastic_process "Stochastic process"). These can be viewed as elements of some infinite-dimensional [Hilbert space](https://en.wikipedia.org/wiki/Hilbert_space "Hilbert space") $H$, and thus are the analogues of multivariate normal vectors for the case $k = \infty$. A random element $h \in H$ is said to be normal if for any constant $a \in H$ the [scalar product](https://en.wikipedia.org/wiki/Scalar_product "Scalar product") $(a, h)$ has a (univariate) normal distribution. The variance structure of such a Gaussian random element can be described in terms of the linear *covariance operator* $K : H \to H$. Several Gaussian processes became popular enough to have their own names:
- [Brownian motion](https://en.wikipedia.org/wiki/Wiener_process "Wiener process");
- [Brownian bridge](https://en.wikipedia.org/wiki/Brownian_bridge "Brownian bridge"); and
- [Ornstein–Uhlenbeck process](https://en.wikipedia.org/wiki/Ornstein%E2%80%93Uhlenbeck_process "Ornstein–Uhlenbeck process").
- [Gaussian q-distribution](https://en.wikipedia.org/wiki/Gaussian_q-distribution "Gaussian q-distribution") is an abstract mathematical construction that represents a [q-analogue](https://en.wikipedia.org/wiki/Q-analogue "Q-analogue") of the normal distribution.
- The [q-Gaussian](https://en.wikipedia.org/wiki/Q-Gaussian "Q-Gaussian") is an analogue of the Gaussian distribution, in the sense that it maximises the [Tsallis entropy](https://en.wikipedia.org/wiki/Tsallis_entropy "Tsallis entropy"), and is one type of [Tsallis distribution](https://en.wikipedia.org/wiki/Tsallis_distribution "Tsallis distribution"). This distribution is different from the [Gaussian q-distribution](https://en.wikipedia.org/wiki/Gaussian_q-distribution "Gaussian q-distribution") above.
- The [Kaniadakis Îș-Gaussian distribution](https://en.wikipedia.org/wiki/Kaniadakis_Gaussian_distribution "Kaniadakis Gaussian distribution") is a generalization of the Gaussian distribution which arises from the [Kaniadakis statistics](https://en.wikipedia.org/wiki/Kaniadakis_statistics "Kaniadakis statistics"), being one of the [Kaniadakis distributions](https://en.wikipedia.org/wiki/Kaniadakis_distribution "Kaniadakis distribution").
A random variable $X$ has a two-piece normal distribution if it has a distribution $$f_X(x) = \begin{cases} N(\mu, \sigma_1^2) & \text{if } x \leq \mu, \\ N(\mu, \sigma_2^2) & \text{if } x \geq \mu, \end{cases}$$ where $\mu$ is the mean and $\sigma_1^2$ and $\sigma_2^2$ are the variances of the distribution to the left and right of the mean respectively.
The mean $\operatorname{E}(X)$, variance $\operatorname{V}(X)$, and third central moment $\operatorname{T}(X)$ of this distribution have been determined:[\[55\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-John-1982-55) $$\begin{aligned}\operatorname{E}(X) &= \mu + \sqrt{\frac{2}{\pi}}(\sigma_2 - \sigma_1), \\ \operatorname{V}(X) &= \left(1 - \frac{2}{\pi}\right)(\sigma_2 - \sigma_1)^2 + \sigma_1\sigma_2, \\ \operatorname{T}(X) &= \sqrt{\frac{2}{\pi}}(\sigma_2 - \sigma_1)\left[\left(\frac{4}{\pi} - 1\right)(\sigma_2 - \sigma_1)^2 + \sigma_1\sigma_2\right].\end{aligned}$$
One of the main practical uses of the Gaussian law is to model the empirical distributions of many different random variables encountered in practice. In such cases, a possible extension would be a richer family of distributions, having more than two parameters and therefore being able to fit the empirical distribution more accurately. Examples of such extensions are:
- [Pearson distribution](https://en.wikipedia.org/wiki/Pearson_distribution "Pearson distribution") – a four-parameter family of probability distributions that extend the normal law to include different skewness and kurtosis values.
- The [generalized normal distribution](https://en.wikipedia.org/wiki/Generalized_normal_distribution "Generalized normal distribution"), also known as the exponential power distribution, allows for distribution tails with thicker or thinner asymptotic behaviors.
## Statistical inference
### Estimation of parameters
It is often the case that we do not know the parameters of the normal distribution, but instead want to [estimate](https://en.wikipedia.org/wiki/Estimation_theory "Estimation theory") them. That is, having a sample $(x_1, \ldots, x_n)$ from a normal $\mathcal{N}(\mu, \sigma^2)$ population we would like to learn the approximate values of parameters $\mu$ and $\sigma^2$. The standard approach to this problem is the [maximum likelihood](https://en.wikipedia.org/wiki/Maximum_likelihood "Maximum likelihood") method, which requires maximization of the *[log-likelihood function](https://en.wikipedia.org/wiki/Log-likelihood_function "Log-likelihood function")*: $$\ln\mathcal{L}(\mu, \sigma^2) = \sum_{i=1}^{n} \ln f(x_i \mid \mu, \sigma^2) = -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln\sigma^2 - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i - \mu)^2.$$ Taking derivatives with respect to $\mu$ and $\sigma^2$ and solving the resulting system of first order conditions yields the *maximum likelihood estimates*: $$\hat{\mu} = \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad \hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2.$$
Then $\ln\mathcal{L}(\hat{\mu}, \hat{\sigma}^2)$ is as follows: $$\ln\mathcal{L}(\hat{\mu}, \hat{\sigma}^2) = \left(-\frac{n}{2}\right)\left[\ln(2\pi\hat{\sigma}^2) + 1\right].$$
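These estimates are straightforward to compute; a minimal Python sketch (illustrative names) implements the closed-form MLE and the maximized log-likelihood above:

```python
import math
import random

def normal_mle(xs):
    """Maximum likelihood estimates: sample mean and the 1/n-denominator variance."""
    n = len(xs)
    mu_hat = sum(xs) / n
    var_hat = sum((x - mu_hat) ** 2 for x in xs) / n
    return mu_hat, var_hat

def max_log_likelihood(xs):
    """ln L at the optimum: (-n/2) * [ln(2*pi*var_hat) + 1]."""
    n = len(xs)
    _, var_hat = normal_mle(xs)
    return (-n / 2) * (math.log(2 * math.pi * var_hat) + 1)

random.seed(3)
data = [random.gauss(10.0, 3.0) for _ in range(10_000)]
print(normal_mle(data))            # ~ (10, 9)
print(max_log_likelihood(data))
```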
#### Sample mean
Estimator $\hat{\mu}$ is called the *[sample mean](https://en.wikipedia.org/wiki/Sample_mean "Sample mean")*, since it is the arithmetic mean of all observations. The statistic $\bar{x}$ is [complete](https://en.wikipedia.org/wiki/Complete_statistic "Complete statistic") and [sufficient](https://en.wikipedia.org/wiki/Sufficient_statistic "Sufficient statistic") for $\mu$, and therefore by the [Lehmann–ScheffĂ© theorem](https://en.wikipedia.org/wiki/Lehmann%E2%80%93Scheff%C3%A9_theorem "Lehmann–ScheffĂ© theorem"), $\hat{\mu}$ is the [uniformly minimum variance unbiased](https://en.wikipedia.org/wiki/Uniformly_minimum_variance_unbiased "Uniformly minimum variance unbiased") (UMVU) estimator.[\[56\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Krishnamoorthy-56) In finite samples it is distributed normally: $$\hat{\mu} \sim \mathcal{N}\left(\mu, \frac{\sigma^2}{n}\right).$$ The variance of this estimator is equal to the ΌΌ-element of the inverse [Fisher information matrix](https://en.wikipedia.org/wiki/Fisher_information_matrix "Fisher information matrix") $\mathcal{I}^{-1}$. This implies that the estimator is [finite-sample efficient](https://en.wikipedia.org/wiki/Efficient_estimator "Efficient estimator"). Of practical importance is the fact that the [standard error](https://en.wikipedia.org/wiki/Standard_error "Standard error") of $\hat{\mu}$ is proportional to $1/\sqrt{n}$, that is, if one wishes to decrease the standard error by a factor of 10, one must increase the number of points in the sample by a factor of 100. This fact is widely used in determining sample sizes for opinion polls and the number of trials in [Monte Carlo simulations](https://en.wikipedia.org/wiki/Monte_Carlo_simulation "Monte Carlo simulation").
From the standpoint of the [asymptotic theory](https://en.wikipedia.org/wiki/Asymptotic_theory_\(statistics\) "Asymptotic theory (statistics)"), $\hat{\mu}$ is [consistent](https://en.wikipedia.org/wiki/Consistent_estimator "Consistent estimator"), that is, it [converges in probability](https://en.wikipedia.org/wiki/Converges_in_probability "Converges in probability") to $\mu$ as $n \to \infty$. The estimator is also [asymptotically normal](https://en.wikipedia.org/wiki/Asymptotic_normality "Asymptotic normality"), which is a simple corollary of the fact that it is normal in finite samples: $$\sqrt{n}(\hat{\mu} - \mu) \,\xrightarrow{d}\, \mathcal{N}(0, \sigma^2).$$
#### Sample variance
The estimator $\hat{\sigma}^2$ is called the *[sample variance](https://en.wikipedia.org/wiki/Sample_variance "Sample variance")*, since it is the variance of the sample $(x_1, \ldots, x_n)$. In practice, another estimator is often used instead of the $\hat{\sigma}^2$. This other estimator is denoted $s^2$, and is also called the *sample variance*, which represents a certain ambiguity in terminology; its square root $s$ is called the *sample standard deviation*. The estimator $s^2$ differs from $\hat{\sigma}^2$ by having $(n-1)$ instead of $n$ in the denominator (the so-called [Bessel's correction](https://en.wikipedia.org/wiki/Bessel%27s_correction "Bessel's correction")): $$s^2 = \frac{n}{n-1}\hat{\sigma}^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2.$$ The difference between $s^2$ and $\hat{\sigma}^2$ becomes negligibly small for large $n$'s. In finite samples however, the motivation behind the use of $s^2$ is that it is an [unbiased estimator](https://en.wikipedia.org/wiki/Unbiased_estimator "Unbiased estimator") of the underlying parameter $\sigma^2$, whereas $\hat{\sigma}^2$ is biased. Also, by the Lehmann–ScheffĂ© theorem the estimator $s^2$ is uniformly minimum variance unbiased ([UMVU](https://en.wikipedia.org/wiki/UMVU "UMVU")),[\[56\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Krishnamoorthy-56) which makes it the "best" estimator among all unbiased ones. However it can be shown that the biased estimator $\hat{\sigma}^2$ is better than $s^2$ in terms of the [mean squared error](https://en.wikipedia.org/wiki/Mean_squared_error "Mean squared error") (MSE) criterion. In finite samples both $s^2$ and $\hat{\sigma}^2$ have scaled [chi-squared distribution](https://en.wikipedia.org/wiki/Chi-squared_distribution "Chi-squared distribution") with $(n-1)$ degrees of freedom: $$s^2 \sim \frac{\sigma^2}{n-1}\chi^2_{n-1}, \qquad \hat{\sigma}^2 \sim \frac{\sigma^2}{n}\chi^2_{n-1}.$$ The first of these expressions shows that the variance of $s^2$ is equal to $\frac{2\sigma^4}{n-1}$, which is slightly greater than the σσ-element of the inverse Fisher information matrix $\mathcal{I}^{-1}$, which is $\frac{2\sigma^4}{n}$. Thus, $s^2$ is not an efficient estimator for $\sigma^2$, and moreover, since $s^2$ is UMVU, we can conclude that the finite-sample efficient estimator for $\sigma^2$ does not exist.
Applying the asymptotic theory, both estimators $s^2$ and $\hat{\sigma}^2$ are consistent, that is they converge in probability to $\sigma^2$ as the sample size $n \to \infty$. The two estimators are also both asymptotically normal: $$\sqrt{n}(s^2 - \sigma^2) \,\xrightarrow{d}\, \mathcal{N}(0, 2\sigma^4), \qquad \sqrt{n}(\hat{\sigma}^2 - \sigma^2) \,\xrightarrow{d}\, \mathcal{N}(0, 2\sigma^4).$$ In particular, both estimators are asymptotically efficient for $\sigma^2$.
### Confidence intervals
By [Cochran's theorem](https://en.wikipedia.org/wiki/Cochran%27s_theorem "Cochran's theorem"), for normal distributions the sample mean $\hat{\mu}$ and the sample variance $s^2$ are [independent](https://en.wikipedia.org/wiki/Independence_\(probability_theory\) "Independence (probability theory)"), which means there can be no gain in considering their [joint distribution](https://en.wikipedia.org/wiki/Joint_distribution "Joint distribution"). There is also a converse theorem: if in a sample the sample mean and sample variance are independent, then the sample must have come from the normal distribution. The independence between $\hat{\mu}$ and $s$ can be employed to construct the so-called *t-statistic*: $$t = \frac{\hat{\mu} - \mu}{s/\sqrt{n}}\,.$$ This quantity $t$ has the [Student's t-distribution](https://en.wikipedia.org/wiki/Student%27s_t-distribution "Student's t-distribution") with $(n-1)$ degrees of freedom, and it is an [ancillary statistic](https://en.wikipedia.org/wiki/Ancillary_statistic "Ancillary statistic") (independent of the value of the parameters). Inverting the distribution of this t-statistic will allow us to construct the [confidence interval](https://en.wikipedia.org/wiki/Confidence_interval "Confidence interval") for ÎŒ;[\[57\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-57) similarly, inverting the χÂČ distribution of the statistic sÂČ will give us the confidence interval for σÂČ:[\[58\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-58) $$\mu \in \left[\hat{\mu} - t_{n-1,1-\alpha/2}\frac{s}{\sqrt{n}},\; \hat{\mu} + t_{n-1,1-\alpha/2}\frac{s}{\sqrt{n}}\right], \qquad \sigma^2 \in \left[\frac{(n-1)s^2}{\chi^2_{n-1,1-\alpha/2}},\; \frac{(n-1)s^2}{\chi^2_{n-1,\alpha/2}}\right],$$ where $t_{k,p}$ and $\chi^2_{k,p}$ are the $p$th [quantiles](https://en.wikipedia.org/wiki/Quantile "Quantile") of the $t$- and χÂČ-distributions respectively. These confidence intervals are of the *[confidence level](https://en.wikipedia.org/wiki/Confidence_level "Confidence level")* $1 - \alpha$, meaning that the true values ÎŒ and σÂČ fall outside of these intervals with probability (or [significance level](https://en.wikipedia.org/wiki/Significance_level "Significance level")) α. In practice people usually take $\alpha = 5\%$, resulting in the 95% confidence intervals. The confidence interval for σ can be found by taking the square root of the interval bounds for σÂČ.
Approximate formulas can be derived from the asymptotic distributions of $\hat{\mu}$ and $s^2$: $$\mu \in \left[\hat{\mu} - \frac{|z_{\alpha/2}|}{\sqrt{n}}s,\; \hat{\mu} + \frac{|z_{\alpha/2}|}{\sqrt{n}}s\right], \qquad \sigma^2 \in \left[s^2 - \sqrt{2}\frac{|z_{\alpha/2}|}{\sqrt{n}}s^2,\; s^2 + \sqrt{2}\frac{|z_{\alpha/2}|}{\sqrt{n}}s^2\right].$$ The approximate formulas become valid for large values of $n$, and are more convenient for manual calculation since the standard normal quantiles $z_{\alpha/2}$ do not depend on $n$. In particular, the most popular value $\alpha = 5\%$ results in $|z_{0.025}| =$ [1.96](https://en.wikipedia.org/wiki/1.96 "1.96").
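The approximate intervals are easy to compute without t- or χÂČ-tables; a Python sketch (illustrative names, standard library only) implements the two formulas above:

```python
import math
import random
import statistics

def approx_confidence_intervals(xs, alpha=0.05):
    """Large-sample intervals for mu and sigma^2 using the normal quantile |z_{alpha/2}|."""
    n = len(xs)
    mu_hat = statistics.fmean(xs)
    s2 = statistics.variance(xs)                         # unbiased sample variance (n-1)
    z = statistics.NormalDist().inv_cdf(1 - alpha / 2)   # e.g. ~1.96 for alpha = 5%
    s = math.sqrt(s2)
    mu_ci = (mu_hat - z * s / math.sqrt(n), mu_hat + z * s / math.sqrt(n))
    var_ci = (s2 - math.sqrt(2) * z * s2 / math.sqrt(n),
              s2 + math.sqrt(2) * z * s2 / math.sqrt(n))
    return mu_ci, var_ci

random.seed(4)
data = [random.gauss(5.0, 2.0) for _ in range(2_000)]
print(approx_confidence_intervals(data))
```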
### Normality tests
Normality tests assess the likelihood that the given data set $\{x_1, \ldots, x_n\}$ comes from a normal distribution. Typically the [null hypothesis](https://en.wikipedia.org/wiki/Null_hypothesis "Null hypothesis") $H_0$ is that the observations are distributed normally with unspecified mean ÎŒ and variance σÂČ, versus the alternative $H_a$ that the distribution is arbitrary. Many tests (over 40) have been devised for this problem. The more prominent of them are outlined below:
**Diagnostic plots** are more intuitively appealing but subjective at the same time, as they rely on informal human judgement to accept or reject the null hypothesis.
- [Q–Q plot](https://en.wikipedia.org/wiki/Q%E2%80%93Q_plot "Q–Q plot"), also known as [normal probability plot](https://en.wikipedia.org/wiki/Normal_probability_plot "Normal probability plot") or [rankit](https://en.wikipedia.org/wiki/Rankit "Rankit") plot, is a plot of the sorted values from the data set against the expected values of the corresponding quantiles from the standard normal distribution. That is, it is a plot of points of the form $(\Phi^{-1}(p_k), x_{(k)})$, where the plotting points $p_k$ are equal to $p_k = (k - \alpha)/(n + 1 - 2\alpha)$ and α is an adjustment constant, which can be anything between 0 and 1. If the null hypothesis is true, the plotted points should approximately lie on a straight line.
- [P–P plot](https://en.wikipedia.org/wiki/P%E2%80%93P_plot "P–P plot") – similar to the Q–Q plot, but used much less frequently. This method consists of plotting the points $(\Phi(z_{(k)}), p_k)$, where $z_{(k)} = (x_{(k)} - \hat{\mu})/\hat{\sigma}$. For normally distributed data this plot should lie on a straight line between (0, 0) and (1, 1).
**Goodness-of-fit tests**:
*Moment-based tests*:
- [D'Agostino's K-squared test](https://en.wikipedia.org/wiki/D%27Agostino%27s_K-squared_test "D'Agostino's K-squared test")
- [Jarque–Bera test](https://en.wikipedia.org/wiki/Jarque%E2%80%93Bera_test "Jarque–Bera test")
- [Shapiro–Wilk test](https://en.wikipedia.org/wiki/Shapiro%E2%80%93Wilk_test "Shapiro–Wilk test"): This is based on the fact that the line in the Q–Q plot has slope σ. The test compares the least squares estimate of that slope with the value of the sample variance, and rejects the null hypothesis if these two quantities differ significantly.
*Tests based on the empirical distribution function*:
- [Anderson–Darling test](https://en.wikipedia.org/wiki/Anderson%E2%80%93Darling_test "Anderson–Darling test")
- [Lilliefors test](https://en.wikipedia.org/wiki/Lilliefors_test "Lilliefors test") (an adaptation of the [Kolmogorov–Smirnov test](https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test "Kolmogorov–Smirnov test"))
### Bayesian analysis of the normal distribution
Bayesian analysis of normally distributed data is complicated by the many different possibilities that may be considered:
- Either the mean, or the variance, or neither, may be considered a fixed quantity.
- When the variance is unknown, analysis may be done directly in terms of the variance, or in terms of the [precision](https://en.wikipedia.org/wiki/Precision_\(statistics\) "Precision (statistics)"), the reciprocal of the variance. The formulas are usually expressed in terms of precision because this simplifies the analysis in most cases.
- Both univariate and [multivariate](https://en.wikipedia.org/wiki/Multivariate_normal_distribution "Multivariate normal distribution") cases need to be considered.
- Either [conjugate](https://en.wikipedia.org/wiki/Conjugate_prior "Conjugate prior") or [improper](https://en.wikipedia.org/wiki/Improper_prior "Improper prior") [prior distributions](https://en.wikipedia.org/wiki/Prior_distribution "Prior distribution") may be placed on the unknown variables.
- An additional set of cases occurs in [Bayesian linear regression](https://en.wikipedia.org/wiki/Bayesian_linear_regression "Bayesian linear regression"), where in the basic model the data is assumed to be normally distributed, and normal priors are placed on the [regression coefficients](https://en.wikipedia.org/wiki/Regression_coefficient "Regression coefficient"). The resulting analysis is similar to the basic cases of [independent identically distributed](https://en.wikipedia.org/wiki/Independent_identically_distributed "Independent identically distributed") data.
The formulas for the non-linear-regression cases are summarized in the [conjugate prior](https://en.wikipedia.org/wiki/Conjugate_prior "Conjugate prior") article.
#### Sum of two quadratics
The following auxiliary formula is useful for simplifying the [posterior](https://en.wikipedia.org/wiki/Posterior_distribution "Posterior distribution") update equations, which otherwise become fairly tedious.
$$a(x-y)^{2}+b(x-z)^{2}=(a+b)\left(x-{\frac {ay+bz}{a+b}}\right)^{2}+{\frac {ab}{a+b}}(y-z)^{2}$$
This equation rewrites the sum of two quadratics in *x* by expanding the squares, grouping the terms in *x*, and [completing the square](https://en.wikipedia.org/wiki/Completing_the_square "Completing the square"). Note the following about the constant factors attached to some of the terms:
1. The factor (*ay* + *bz*)/(*a* + *b*) has the form of a [weighted average](https://en.wikipedia.org/wiki/Weighted_average "Weighted average") of *y* and *z*.
2. The factor *ab*/(*a* + *b*) = 1/(1/*a* + 1/*b*) can be thought of as resulting from a situation where the [reciprocals](https://en.wikipedia.org/wiki/Multiplicative_inverse "Multiplicative inverse") of quantities *a* and *b* add directly, so to combine *a* and *b* themselves, it is necessary to reciprocate, add, and reciprocate the result again to get back into the original units. This is exactly the sort of operation performed by the [harmonic mean](https://en.wikipedia.org/wiki/Harmonic_mean "Harmonic mean"), so it is not surprising that *ab*/(*a* + *b*) is one-half the harmonic mean of *a* and *b*.
A similar formula can be written for the sum of two vector quadratics: If **x**, **y**, **z** are vectors of length *k*, and **A** and **B** are [symmetric](https://en.wikipedia.org/wiki/Symmetric_matrix "Symmetric matrix"), [invertible matrices](https://en.wikipedia.org/wiki/Invertible_matrices "Invertible matrices") of size *k* × *k*, then
$$(\mathbf {y} -\mathbf {x} )'\mathbf {A} (\mathbf {y} -\mathbf {x} )+(\mathbf {x} -\mathbf {z} )'\mathbf {B} (\mathbf {x} -\mathbf {z} )=(\mathbf {x} -\mathbf {c} )'(\mathbf {A} +\mathbf {B} )(\mathbf {x} -\mathbf {c} )+(\mathbf {y} -\mathbf {z} )'(\mathbf {A} ^{-1}+\mathbf {B} ^{-1})^{-1}(\mathbf {y} -\mathbf {z} )$$
where $\mathbf {c} =(\mathbf {A} +\mathbf {B} )^{-1}(\mathbf {A} \mathbf {y} +\mathbf {B} \mathbf {z} ).$
The form **x**′**A****x** is called a [quadratic form](https://en.wikipedia.org/wiki/Quadratic_form "Quadratic form") and is a [scalar](https://en.wikipedia.org/wiki/Scalar_\(mathematics\) "Scalar (mathematics)"): $\mathbf {x} '\mathbf {A} \mathbf {x} =\sum _{i,j}a_{ij}x_{i}x_{j}.$ In other words, it sums up all possible combinations of products of pairs of elements from **x**, with a separate coefficient for each. In addition, since $x_{i}x_{j}=x_{j}x_{i}$, only the sum $a_{ij}+a_{ji}$ matters for any off-diagonal elements of **A**, and there is no loss of generality in assuming that **A** is [symmetric](https://en.wikipedia.org/wiki/Symmetric_matrix "Symmetric matrix"). Furthermore, if **A** is symmetric, then the form $\mathbf {x} '\mathbf {A} \mathbf {y} =\mathbf {y} '\mathbf {A} \mathbf {x} .$
#### Sum of differences from the mean
Another useful formula is as follows: $$\sum _{i=1}^{n}(x_{i}-\mu )^{2}=\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}+n({\bar {x}}-\mu )^{2}$$ where ${\bar {x}}={\frac {1}{n}}\sum _{i=1}^{n}x_{i}.$
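Both auxiliary identities are easy to verify numerically; the following sketch (Python with NumPy, all values arbitrary) checks the scalar sum-of-two-quadratics formula and the decomposition of the sum of squared deviations:

```python
import numpy as np

# Sum of two quadratics: a(x-y)^2 + b(x-z)^2.
a, b, y, z, x = 2.0, 5.0, 1.3, -0.7, 0.42
lhs = a * (x - y) ** 2 + b * (x - z) ** 2
c = (a * y + b * z) / (a + b)                 # weighted average of y and z
rhs = (a + b) * (x - c) ** 2 + (a * b / (a + b)) * (y - z) ** 2
assert np.isclose(lhs, rhs)

# Sum of differences from the mean.
xs = np.random.default_rng(2).normal(size=20)
mu = 0.9
lhs2 = np.sum((xs - mu) ** 2)
rhs2 = np.sum((xs - xs.mean()) ** 2) + xs.size * (xs.mean() - mu) ** 2
assert np.isclose(lhs2, rhs2)
```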
#### With known variance
For a set of [i.i.d.](https://en.wikipedia.org/wiki/I.i.d. "I.i.d.") normally distributed data points **X** of size *n* where each individual point *x* follows *x* ∌ N(*μ*, *σ*2) with known [variance](https://en.wikipedia.org/wiki/Variance "Variance") *σ*2, the [conjugate prior](https://en.wikipedia.org/wiki/Conjugate_prior "Conjugate prior") distribution is also normally distributed.
This can be shown more easily by rewriting the variance as the [precision](https://en.wikipedia.org/wiki/Precision_\(statistics\) "Precision (statistics)"), i.e. using *τ* = 1/*σ*2. Then if the likelihood is *x* ∌ N(*μ*, 1/*τ*) and the prior is *μ* ∌ N(*μ*0, 1/*τ*0), we proceed as follows.
First, the [likelihood function](https://en.wikipedia.org/wiki/Likelihood_function "Likelihood function") is (using the formula above for the sum of differences from the mean): ![{\\displaystyle {\\begin{aligned}p(\\mathbf {X} \\mid \\mu ,\\tau )&=\\prod \_{i=1}^{n}{\\sqrt {\\frac {\\tau }{2\\pi }}}\\exp \\left(-{\\frac {1}{2}}\\tau (x\_{i}-\\mu )^{2}\\right)\\\\&=\\left({\\frac {\\tau }{2\\pi }}\\right)^{n/2}\\exp \\left(-{\\frac {1}{2}}\\tau \\sum \_{i=1}^{n}(x\_{i}-\\mu )^{2}\\right)\\\\&=\\left({\\frac {\\tau }{2\\pi }}\\right)^{n/2}\\exp \\left\[-{\\frac {1}{2}}\\tau \\left(\\sum \_{i=1}^{n}(x\_{i}-{\\bar {x}})^{2}+n({\\bar {x}}-\\mu )^{2}\\right)\\right\].\\end{aligned}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/c2bcd1c34520a24e29b758a0f7427e79e9d8a414)
Then, we proceed as follows: ![{\\displaystyle {\\begin{aligned}p(\\mu \\mid \\mathbf {X} )&\\propto p(\\mathbf {X} \\mid \\mu )p(\\mu )\\\\&=\\left({\\frac {\\tau }{2\\pi }}\\right)^{n/2}\\exp \\left\[-{\\frac {1}{2}}\\tau \\left(\\sum \_{i=1}^{n}(x\_{i}-{\\bar {x}})^{2}+n({\\bar {x}}-\\mu )^{2}\\right)\\right\]{\\sqrt {\\frac {\\tau \_{0}}{2\\pi }}}\\exp \\left(-{\\frac {1}{2}}\\tau \_{0}(\\mu -\\mu \_{0})^{2}\\right)\\\\&\\propto \\exp \\left(-{\\frac {1}{2}}\\left(\\tau \\left(\\sum \_{i=1}^{n}(x\_{i}-{\\bar {x}})^{2}+n({\\bar {x}}-\\mu )^{2}\\right)+\\tau \_{0}(\\mu -\\mu \_{0})^{2}\\right)\\right)\\\\&\\propto \\exp \\left(-{\\frac {1}{2}}\\left(n\\tau ({\\bar {x}}-\\mu )^{2}+\\tau \_{0}(\\mu -\\mu \_{0})^{2}\\right)\\right)\\\\&=\\exp \\left(-{\\frac {1}{2}}(n\\tau +\\tau \_{0})\\left(\\mu -{\\dfrac {n\\tau {\\bar {x}}+\\tau \_{0}\\mu \_{0}}{n\\tau +\\tau \_{0}}}\\right)^{2}+{\\frac {n\\tau \\tau \_{0}}{n\\tau +\\tau \_{0}}}({\\bar {x}}-\\mu \_{0})^{2}\\right)\\\\&\\propto \\exp \\left(-{\\frac {1}{2}}(n\\tau +\\tau \_{0})\\left(\\mu -{\\dfrac {n\\tau {\\bar {x}}+\\tau \_{0}\\mu \_{0}}{n\\tau +\\tau \_{0}}}\\right)^{2}\\right)\\end{aligned}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/96e309ead00fbc8603eced5342aa5df534522d6a)
In the above derivation, we used the formula above for the sum of two quadratics and eliminated all constant factors not involving *μ*. The result is the [kernel](https://en.wikipedia.org/wiki/Kernel_\(statistics\) "Kernel (statistics)") of a normal distribution, with mean (*nτ**x̄* + *τ*0*μ*0)/(*nτ* + *τ*0) and precision *nτ* + *τ*0, i.e. $$p(\mu \mid \mathbf {X} )\sim {\mathcal {N}}\left({\frac {n\tau {\bar {x}}+\tau _{0}\mu _{0}}{n\tau +\tau _{0}}},\ {\frac {1}{n\tau +\tau _{0}}}\right).$$
This can be written as a set of Bayesian update equations for the posterior parameters in terms of the prior parameters: ![{\\displaystyle {\\begin{aligned}\\tau \_{0}'&=\\tau \_{0}+n\\tau \\\\\[5pt\]\\mu \_{0}'&={\\frac {n\\tau {\\bar {x}}+\\tau \_{0}\\mu \_{0}}{n\\tau +\\tau \_{0}}}\\\\\[5pt\]{\\bar {x}}&={\\frac {1}{n}}\\sum \_{i=1}^{n}x\_{i}\\end{aligned}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/a6cfbdf504b1a9ce4cbe79561b4ae983fdf7271d)
That is, to combine *n* data points with total precision *nτ* (or equivalently, total variance *σ*2/*n*) and mean of values *x̄*, derive a new total precision simply by adding the total precision of the data to the prior total precision, and form a new mean through a *precision-weighted average*, i.e. a [weighted average](https://en.wikipedia.org/wiki/Weighted_average "Weighted average") of the data mean and the prior mean, each weighted by the associated total precision. This makes logical sense if the precision is thought of as indicating the certainty of the observations: In the distribution of the posterior mean, each of the input components is weighted by its certainty, and the certainty of this distribution is the sum of the individual certainties. (For the intuition of this, compare the expression "the whole is (or is not) greater than the sum of its parts". In addition, consider that the knowledge of the posterior comes from a combination of the knowledge of the prior and of the likelihood, so it makes sense that we are more certain of it than of either of its components.)
The above formula reveals why it is more convenient to do [Bayesian analysis](https://en.wikipedia.org/wiki/Bayesian_analysis "Bayesian analysis") of [conjugate priors](https://en.wikipedia.org/wiki/Conjugate_prior "Conjugate prior") for the normal distribution in terms of the precision. The posterior precision is simply the sum of the prior and likelihood precisions, and the posterior mean is computed through a precision-weighted average, as described above. The same formulas can be written in terms of variance by reciprocating all the precisions, yielding the less elegant formulas ![{\\displaystyle {\\begin{aligned}{\\sigma \_{0}^{2}}'&={\\frac {1}{{\\frac {n}{\\sigma ^{2}}}+{\\frac {1}{\\sigma \_{0}^{2}}}}}\\\\\[5pt\]\\mu \_{0}'&={\\frac {{\\frac {n{\\bar {x}}}{\\sigma ^{2}}}+{\\frac {\\mu \_{0}}{\\sigma \_{0}^{2}}}}{{\\frac {n}{\\sigma ^{2}}}+{\\frac {1}{\\sigma \_{0}^{2}}}}}\\\\\[5pt\]{\\bar {x}}&={\\frac {1}{n}}\\sum \_{i=1}^{n}x\_{i}\\end{aligned}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/ea454c8840683777ce8192d9ae63068c63962858)
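In code, the precision-form update is only a few lines; the following is a sketch (Python with NumPy; all names are illustrative), not a fixed API:

```python
import numpy as np

def posterior_mu(x, tau, mu0, tau0):
    """Posterior (mean, precision) of mu, with known data precision tau."""
    n, xbar = x.size, x.mean()
    tau_post = tau0 + n * tau                      # precisions simply add
    mu_post = (n * tau * xbar + tau0 * mu0) / (n * tau + tau0)
    return mu_post, tau_post

x = np.random.default_rng(3).normal(loc=4.0, scale=2.0, size=30)
mu_post, tau_post = posterior_mu(x, tau=1 / 2.0**2, mu0=0.0, tau0=0.1)
```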
#### With known mean

For a set of [i.i.d.](https://en.wikipedia.org/wiki/I.i.d. "I.i.d.") normally distributed data points **X** of size *n* where each individual point *x* follows *x* ∌ N(*μ*, *σ*2) with known mean *μ*, the [conjugate prior](https://en.wikipedia.org/wiki/Conjugate_prior "Conjugate prior") of the [variance](https://en.wikipedia.org/wiki/Variance "Variance") is an [inverse gamma distribution](https://en.wikipedia.org/wiki/Inverse_gamma_distribution "Inverse gamma distribution") or, equivalently, a [scaled inverse chi-squared distribution](https://en.wikipedia.org/wiki/Scaled_inverse_chi-squared_distribution "Scaled inverse chi-squared distribution"). The two are equivalent except for having different [parameterizations](https://en.wikipedia.org/wiki/Parameter "Parameter"). Although the inverse gamma is more commonly used, we use the scaled inverse chi-squared for the sake of convenience. The prior for *σ*2 is as follows: ![{\\displaystyle p(\\sigma ^{2}\\mid \\nu \_{0},\\sigma \_{0}^{2})={\\frac {(\\sigma \_{0}^{2}{\\frac {\\nu \_{0}}{2}})^{\\nu \_{0}/2}}{\\Gamma \\left({\\frac {\\nu \_{0}}{2}}\\right)}}~{\\frac {\\exp \\left\[{\\frac {-\\nu \_{0}\\sigma \_{0}^{2}}{2\\sigma ^{2}}}\\right\]}{(\\sigma ^{2})^{1+{\\frac {\\nu \_{0}}{2}}}}}\\propto {\\frac {\\exp \\left\[{\\frac {-\\nu \_{0}\\sigma \_{0}^{2}}{2\\sigma ^{2}}}\\right\]}{(\\sigma ^{2})^{1+{\\frac {\\nu \_{0}}{2}}}}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/ef2528fe4774a93087d4adae570ef9ab84707f52)
The [likelihood function](https://en.wikipedia.org/wiki/Likelihood_function "Likelihood function") from above, written in terms of the variance, is: ![{\\displaystyle {\\begin{aligned}p(\\mathbf {X} \\mid \\mu ,\\sigma ^{2})&=\\left({\\frac {1}{2\\pi \\sigma ^{2}}}\\right)^{n/2}\\exp \\left\[-{\\frac {1}{2\\sigma ^{2}}}\\sum \_{i=1}^{n}(x\_{i}-\\mu )^{2}\\right\]\\\\&=\\left({\\frac {1}{2\\pi \\sigma ^{2}}}\\right)^{n/2}\\exp \\left\[-{\\frac {S}{2\\sigma ^{2}}}\\right\]\\end{aligned}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/cc06aa31588bba03e4748f8f345f0638a75dc156) where $S=\sum _{i=1}^{n}(x_{i}-\mu )^{2}.$
Then: ![{\\displaystyle {\\begin{aligned}p(\\sigma ^{2}\\mid \\mathbf {X} )&\\propto p(\\mathbf {X} \\mid \\sigma ^{2})p(\\sigma ^{2})\\\\&=\\left({\\frac {1}{2\\pi \\sigma ^{2}}}\\right)^{n/2}\\exp \\left\[-{\\frac {S}{2\\sigma ^{2}}}\\right\]{\\frac {(\\sigma \_{0}^{2}{\\frac {\\nu \_{0}}{2}})^{\\frac {\\nu \_{0}}{2}}}{\\Gamma \\left({\\frac {\\nu \_{0}}{2}}\\right)}}~{\\frac {\\exp \\left\[{\\frac {-\\nu \_{0}\\sigma \_{0}^{2}}{2\\sigma ^{2}}}\\right\]}{(\\sigma ^{2})^{1+{\\frac {\\nu \_{0}}{2}}}}}\\\\&\\propto \\left({\\frac {1}{\\sigma ^{2}}}\\right)^{n/2}{\\frac {1}{(\\sigma ^{2})^{1+{\\frac {\\nu \_{0}}{2}}}}}\\exp \\left\[-{\\frac {S}{2\\sigma ^{2}}}+{\\frac {-\\nu \_{0}\\sigma \_{0}^{2}}{2\\sigma ^{2}}}\\right\]\\\\&={\\frac {1}{(\\sigma ^{2})^{1+{\\frac {\\nu \_{0}+n}{2}}}}}\\exp \\left\[-{\\frac {\\nu \_{0}\\sigma \_{0}^{2}+S}{2\\sigma ^{2}}}\\right\]\\end{aligned}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/381c1b93f6dc76e2cdca9f3f1f77132dd51dc55f)
The above is also a scaled inverse chi-squared distribution, where $$\nu _{0}'=\nu _{0}+n,\qquad \nu _{0}'{\sigma _{0}^{2}}'=\nu _{0}\sigma _{0}^{2}+S$$ or equivalently $${\sigma _{0}^{2}}'={\frac {\nu _{0}\sigma _{0}^{2}+S}{\nu _{0}+n}}.$$
Reparameterizing in terms of an [inverse gamma distribution](https://en.wikipedia.org/wiki/Inverse_gamma_distribution "Inverse gamma distribution"), writing the prior as Inv-Gamma(*α*, *ÎČ*), the result is: $$\alpha '=\alpha +{\frac {n}{2}},\qquad \beta '=\beta +{\frac {S}{2}}.$$
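A corresponding sketch of the scaled-inverse-chi-squared update (Python with NumPy; the function name and return convention are illustrative):

```python
import numpy as np

def posterior_variance_params(x, mu, nu0, sigma0_sq):
    """Posterior hyperparameters (nu', sigma0^2') with known mean mu."""
    S = np.sum((x - mu) ** 2)                      # sum of squared deviations
    nu_post = nu0 + x.size
    sigma_sq_post = (nu0 * sigma0_sq + S) / nu_post
    return nu_post, sigma_sq_post

x = np.random.default_rng(4).normal(loc=1.0, scale=3.0, size=25)
nu_post, sigma_sq_post = posterior_variance_params(x, mu=1.0, nu0=2.0, sigma0_sq=1.0)
```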
#### With unknown mean and unknown variance
For a set of [i.i.d.](https://en.wikipedia.org/wiki/I.i.d. "I.i.d.") normally distributed data points **X** of size *n* where each individual point *x* follows *x* ∌ N(*μ*, *σ*2) with unknown mean *μ* and unknown [variance](https://en.wikipedia.org/wiki/Variance "Variance") *σ*2, a combined (multivariate) [conjugate prior](https://en.wikipedia.org/wiki/Conjugate_prior "Conjugate prior") is placed over the mean and variance, consisting of a [normal-inverse-gamma distribution](https://en.wikipedia.org/wiki/Normal-inverse-gamma_distribution "Normal-inverse-gamma distribution"). Logically, this originates as follows:
1. From the analysis of the case with unknown mean but known variance, we see that the update equations involve [sufficient statistics](https://en.wikipedia.org/wiki/Sufficient_statistic "Sufficient statistic") computed from the data consisting of the mean of the data points and the total variance of the data points, computed in turn from the known variance divided by the number of data points.
2. From the analysis of the case with unknown variance but known mean, we see that the update equations involve sufficient statistics over the data consisting of the number of data points and [sum of squared deviations](https://en.wikipedia.org/wiki/Sum_of_squared_deviations "Sum of squared deviations").
3. Keep in mind that the posterior update values serve as the prior distribution when further data is handled. Thus, we should logically think of our priors in terms of the sufficient statistics just described, with the same semantics kept in mind as much as possible.
4. To handle the case where both mean and variance are unknown, we could place independent priors over the mean and variance, with fixed estimates of the average mean, total variance, number of data points used to compute the variance prior, and sum of squared deviations. Note however that in reality, the total variance of the mean depends on the unknown variance, and the sum of squared deviations that goes into the variance prior (appears to) depend on the unknown mean. In practice, the latter dependence is relatively unimportant: Shifting the actual mean shifts the generated points by an equal amount, and on average the squared deviations will remain the same. This is not the case, however, with the total variance of the mean: As the unknown variance increases, the total variance of the mean will increase proportionately, and we would like to capture this dependence.
5. This suggests that we create a *conditional prior* of the mean on the unknown variance, with a hyperparameter specifying the mean of the [pseudo-observations](https://en.wikipedia.org/wiki/Pseudo-observation "Pseudo-observation") associated with the prior, and another parameter specifying the number of pseudo-observations. This number serves as a scaling parameter on the variance, making it possible to control the overall variance of the mean relative to the actual variance parameter. The prior for the variance also has two hyperparameters, one specifying the sum of squared deviations of the pseudo-observations associated with the prior, and another specifying once again the number of pseudo-observations. Each of the priors has a hyperparameter specifying the number of pseudo-observations, and in each case this controls the relative variance of that prior. These are given as two separate hyperparameters so that the variance (aka the confidence) of the two priors can be controlled separately.
6. This leads immediately to the [normal-inverse-gamma distribution](https://en.wikipedia.org/wiki/Normal-inverse-gamma_distribution "Normal-inverse-gamma distribution"), which is the product of the two distributions just defined, with [conjugate priors](https://en.wikipedia.org/wiki/Conjugate_prior "Conjugate prior") used (an [inverse gamma distribution](https://en.wikipedia.org/wiki/Inverse_gamma_distribution "Inverse gamma distribution") over the variance, and a normal distribution over the mean, *conditional* on the variance) and with the same four parameters just defined.
The priors are normally defined as follows: $$p(\mu \mid \sigma ^{2};\mu _{0},n_{0})\sim {\mathcal {N}}(\mu _{0},\sigma ^{2}/n_{0})$$ $$p(\sigma ^{2};\nu _{0},\sigma _{0}^{2})\sim \chi ^{-2}(\nu _{0},\sigma _{0}^{2})=\operatorname {IG} (\nu _{0}/2,\nu _{0}\sigma _{0}^{2}/2)$$
The update equations can be derived, and look as follows: $${\begin{aligned}{\bar {x}}&={\frac {1}{n}}\sum _{i=1}^{n}x_{i}\\\mu _{0}'&={\frac {n_{0}\mu _{0}+n{\bar {x}}}{n_{0}+n}}\\n_{0}'&=n_{0}+n\\\nu _{0}'&=\nu _{0}+n\\\nu _{0}'{\sigma _{0}^{2}}'&=\nu _{0}\sigma _{0}^{2}+\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}+{\frac {n_{0}n}{n_{0}+n}}({\bar {x}}-\mu _{0})^{2}\end{aligned}}$$ The respective numbers of pseudo-observations add the number of actual observations to them. The new mean hyperparameter is once again a weighted average, this time weighted by the relative numbers of observations. Finally, the update for $\nu _{0}'{\sigma _{0}^{2}}'$ is similar to the case with known mean, but in this case the sum of squared deviations is taken with respect to the observed data mean rather than the true mean, and as a result a new interaction term needs to be added to take care of the additional error source stemming from the deviation between prior and data mean.
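The following sketch implements these update equations (Python with NumPy; the hyperparameter names mirror the notation above, but the function itself is only an illustration):

```python
import numpy as np

def normal_inv_chi2_update(x, mu0, n0, nu0, sigma0_sq):
    """Update (mu0, n0, nu0, sigma0^2) given data x; mean and variance unknown."""
    n, xbar = x.size, x.mean()
    n0_post = n0 + n                               # pseudo-observations add
    nu_post = nu0 + n
    mu_post = (n0 * mu0 + n * xbar) / (n0 + n)     # weighted average of means
    ss = np.sum((x - xbar) ** 2)                   # deviations from the data mean
    # The last term is the interaction between the prior mean and the data mean.
    sigma_sq_post = (nu0 * sigma0_sq + ss
                     + (n0 * n / (n0 + n)) * (xbar - mu0) ** 2) / nu_post
    return mu_post, n0_post, nu_post, sigma_sq_post
```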
## Occurrence and applications
The occurrence of normal distribution in practical problems can be loosely classified into four categories:
1. Exactly normal distributions;
2. Approximately normal laws, for example when such approximation is justified by the [central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem "Central limit theorem");
3. Distributions modeled as normal (the normal distribution being the distribution with [maximum entropy](https://en.wikipedia.org/wiki/Principle_of_maximum_entropy "Principle of maximum entropy") for a given mean and variance); and
4. Regression problems â the normal distribution being found after systematic effects have been modeled sufficiently well.
### Exact normality

[](https://en.wikipedia.org/wiki/File:QHarmonicOscillator.png)
The ground state of a [quantum harmonic oscillator](https://en.wikipedia.org/wiki/Quantum_harmonic_oscillator "Quantum harmonic oscillator") has the Gaussian distribution.
A normal distribution occurs exactly in some [physical theories](https://en.wikipedia.org/wiki/Physical_theory "Physical theory"), for example in the velocity distribution of molecules in an ideal gas and in the ground state of the quantum harmonic oscillator pictured above.
### Approximate normality
*Approximately* normal distributions occur in many situations, as explained by the [central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem "Central limit theorem"). When the outcome is produced by many small effects acting *additively and independently*, its distribution will be close to normal. The normal approximation will not be valid if the effects act multiplicatively (instead of additively), or if there is a single external influence that has a considerably larger magnitude than the rest of the effects.
- In counting problems, where the central limit theorem includes a discrete-to-continuum approximation and where [infinitely divisible](https://en.wikipedia.org/wiki/Infinitely_divisible "Infinitely divisible") and [decomposable](https://en.wikipedia.org/wiki/Indecomposable_distribution "Indecomposable distribution") distributions are involved, such as
- [Binomial random variables](https://en.wikipedia.org/wiki/Binomial_distribution "Binomial distribution"), associated with binary response variables;
- [Poisson random variables](https://en.wikipedia.org/wiki/Poisson_random_variables "Poisson random variables"), associated with rare events;
- [Thermal radiation](https://en.wikipedia.org/wiki/Thermal_radiation "Thermal radiation") has a [Bose–Einstein](https://en.wikipedia.org/wiki/Bose%E2%80%93Einstein_statistics "Bose–Einstein statistics") distribution on very short time scales, and a normal distribution on longer time scales due to the central limit theorem.
[](https://en.wikipedia.org/wiki/File:Fisher_iris_versicolor_sepalwidth.svg)
Histogram of sepal widths for *Iris versicolor* from Fisher's [Iris flower data set](https://en.wikipedia.org/wiki/Iris_flower_data_set "Iris flower data set"), with superimposed best-fitting normal distribution
> I can only recognize the occurrence of the normal curve – the Laplacian curve of errors – as a very abnormal phenomenon. It is roughly approximated to in certain distributions; for this reason, and on account of its beautiful simplicity, we may, perhaps, use it as a first approximation, particularly in theoretical investigations.
There are statistical methods to empirically test that assumption; see the above [Normality tests](https://en.wikipedia.org/wiki/Normal_distribution#Normality_tests) section.
- In [biology](https://en.wikipedia.org/wiki/Biology "Biology"), the *logarithms* of various variables tend to have a normal distribution, that is, the variables tend to have a [log-normal distribution](https://en.wikipedia.org/wiki/Log-normal_distribution "Log-normal distribution") (after separation on male/female subpopulations), with examples including:
- Measures of size of living tissue (length, height, skin area, weight);[\[62\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-62)
- The *length* of *inert* appendages (hair, claws, nails, teeth) of biological specimens, *in the direction of growth*; presumably the thickness of tree bark also falls under this category;
- Certain physiological measurements, such as blood pressure of adult humans.
- In finance, in particular the [Black–Scholes model](https://en.wikipedia.org/wiki/Black%E2%80%93Scholes_model "Black–Scholes model"), changes in the *logarithm* of exchange rates, price indices, and stock market indices are assumed normal (these variables behave like [compound interest](https://en.wikipedia.org/wiki/Compound_interest "Compound interest"), not like simple interest, and so are multiplicative). Some mathematicians such as [Benoit Mandelbrot](https://en.wikipedia.org/wiki/Benoit_Mandelbrot "Benoit Mandelbrot") have argued that [log-Levy distributions](https://en.wikipedia.org/wiki/Levy_skew_alpha-stable_distribution "Levy skew alpha-stable distribution"), which possess [heavy tails](https://en.wikipedia.org/wiki/Heavy_tails "Heavy tails"), would be a more appropriate model, in particular for the analysis of [stock market crashes](https://en.wikipedia.org/wiki/Stock_market_crash "Stock market crash"). The assumption of normality in financial models has also been criticized by [Nassim Nicholas Taleb](https://en.wikipedia.org/wiki/Nassim_Nicholas_Taleb "Nassim Nicholas Taleb") in his works.
- [Measurement errors](https://en.wikipedia.org/wiki/Propagation_of_uncertainty "Propagation of uncertainty") in physical experiments are often modeled by a normal distribution. This does not imply that the measurement errors are assumed to be normally distributed; rather, using the normal distribution produces the most conservative predictions possible given only knowledge of the mean and variance of the errors.[\[63\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-63)
- In [standardized testing](https://en.wikipedia.org/wiki/Standardized_testing_\(statistics\) "Standardized testing (statistics)"), results can be made to have a normal distribution by either selecting the number and difficulty of questions (as in the [IQ test](https://en.wikipedia.org/wiki/Intelligence_quotient "Intelligence quotient")) or transforming the raw test scores into output scores by fitting them to the normal distribution. For example, the [SAT](https://en.wikipedia.org/wiki/SAT "SAT")'s traditional range of 200–800 is based on a normal distribution with a mean of 500 and a standard deviation of 100.
[](https://en.wikipedia.org/wiki/File:FitNormDistr.tif)
Fitted cumulative normal distribution to October rainfalls, see [distribution fitting](https://en.wikipedia.org/wiki/Distribution_fitting "Distribution fitting")
- Many scores are derived from the normal distribution, including [percentile ranks](https://en.wikipedia.org/wiki/Percentile_rank "Percentile rank") (percentiles or quantiles), [normal curve equivalents](https://en.wikipedia.org/wiki/Normal_curve_equivalent "Normal curve equivalent"), [stanines](https://en.wikipedia.org/wiki/Stanine "Stanine"), [z-scores](https://en.wikipedia.org/wiki/Z-scores "Z-scores"), and T-scores. Additionally, some [behavioral statistical](https://en.wikipedia.org/wiki/Psychological_statistics "Psychological statistics") procedures assume that scores are normally distributed; for example, [t-tests](https://en.wikipedia.org/wiki/T-tests "T-tests") and [ANOVAs](https://en.wikipedia.org/wiki/Analysis_of_variance "Analysis of variance"). [Bell curve grading](https://en.wikipedia.org/wiki/Bell_curve_grading "Bell curve grading") assigns relative grades based on a normal distribution of scores.
- In [hydrology](https://en.wikipedia.org/wiki/Hydrology "Hydrology") the distribution of long duration river discharge or rainfall, e.g. monthly and yearly totals, is often thought to be practically normal according to the [central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem "Central limit theorem").[\[64\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-64) The plot on the right illustrates an example of fitting the normal distribution to ranked October rainfalls showing the 90% [confidence belt](https://en.wikipedia.org/wiki/Confidence_belt "Confidence belt") based on the [binomial distribution](https://en.wikipedia.org/wiki/Binomial_distribution "Binomial distribution"). The rainfall data are represented by [plotting positions](https://en.wikipedia.org/wiki/Plotting_position "Plotting position") as part of the [cumulative frequency analysis](https://en.wikipedia.org/wiki/Cumulative_frequency_analysis "Cumulative frequency analysis").
### Methodological problems and peer review
[John Ioannidis](https://en.wikipedia.org/wiki/John_Ioannidis "John Ioannidis") [argued](https://en.wikipedia.org/wiki/Why_Most_Published_Research_Findings_Are_False "Why Most Published Research Findings Are False") that using normally distributed standard deviations as standards for validating research findings leaves [falsifiable predictions](https://en.wikipedia.org/wiki/Falsifiability "Falsifiability") about phenomena that are not normally distributed untested. These include, for example, phenomena that appear only when all necessary conditions are present and none can substitute for another in an addition-like way, as well as phenomena that are not randomly distributed. Ioannidis argues that standard-deviation-centered validation gives a false appearance of validity to hypotheses and theories for which only some falsifiable predictions are normally distributed, since the portion of falsifiable predictions against which there is evidence may lie, and in some cases does lie, in the non-normally distributed parts of the range of falsifiable predictions. It can also baselessly dismiss, as if unfalsifiable, hypotheses none of whose falsifiable predictions are normally distributed, when in fact they do make falsifiable predictions. Ioannidis argues that many cases of mutually exclusive theories being accepted as validated by research journals are caused by the journals' failure to take in empirical falsifications of non-normally distributed predictions, not because the mutually exclusive theories are true, which they cannot be, although two mutually exclusive theories can both be wrong and a third one correct.[\[65\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-65)
## Computational methods
### Generating values from normal distribution
[](https://en.wikipedia.org/wiki/File:Planche_de_Galton.jpg)
The [bean machine](https://en.wikipedia.org/wiki/Bean_machine "Bean machine"), a device invented by [Francis Galton](https://en.wikipedia.org/wiki/Francis_Galton "Francis Galton"), can be called the first generator of normal random variables. This machine consists of a vertical board with interleaved rows of pins. Small balls are dropped from the top and then bounce randomly left or right as they hit the pins. The balls are collected into bins at the bottom and settle down into a pattern resembling the Gaussian curve.
In computer simulations, especially in applications of the [Monte Carlo method](https://en.wikipedia.org/wiki/Monte-Carlo_method "Monte-Carlo method"), it is often desirable to generate values that are normally distributed. The algorithms listed below all generate standard normal deviates, since an *N*(*μ*, *σ*2) variate can be generated as *X* = *μ* + *σZ*, where *Z* is standard normal. All these algorithms rely on the availability of a [random number generator](https://en.wikipedia.org/wiki/Random_number_generator "Random number generator") *U* capable of producing [uniform](https://en.wikipedia.org/wiki/Uniform_distribution_\(continuous\) "Uniform distribution (continuous)") random variates.
- The most straightforward method is based on the [probability integral transform](https://en.wikipedia.org/wiki/Probability_integral_transform "Probability integral transform") property: if *U* is distributed uniformly on (0,1), then *Φ*−1(*U*) will have the standard normal distribution. The drawback of this method is that it relies on calculation of the [probit function](https://en.wikipedia.org/wiki/Probit_function "Probit function") *Φ*−1, which cannot be done analytically. Some approximate methods are described in [Hart (1968)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFHart1968) and in the [erf](https://en.wikipedia.org/wiki/Error_function "Error function") article. Wichura gives a fast algorithm for computing this function to 16 decimal places,[\[66\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-66) which is used by [R](https://en.wikipedia.org/wiki/R_programming_language "R programming language") to compute random variates of the normal distribution.
- [An easy-to-program approximate approach](https://en.wikipedia.org/wiki/Irwin%E2%80%93Hall_distribution#Approximating_a_Normal_distribution "Irwin–Hall distribution") that relies on the [central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem "Central limit theorem") is as follows: generate 12 uniform *U*(0,1) deviates, add them all up, and subtract 6; the resulting random variable will have approximately standard normal distribution. In truth, the distribution will be [Irwin–Hall](https://en.wikipedia.org/wiki/Irwin%E2%80%93Hall_distribution "Irwin–Hall distribution"), which is a 12-section eleventh-order polynomial approximation to the normal distribution. This random deviate will have a limited range of (−6, 6).[\[67\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-67) Note that in a true normal distribution, only 0.00034% of all samples will fall outside ±6*σ*. (A code sketch of this and the next two methods is given after this list.)
- The [Box–Muller method](https://en.wikipedia.org/wiki/Box%E2%80%93Muller_method "Box–Muller method") uses two independent random numbers *U* and *V* distributed [uniformly](https://en.wikipedia.org/wiki/Uniform_distribution_\(continuous\) "Uniform distribution (continuous)") on (0,1). Then the two random variables *X* = √(−2 ln *U*) cos(2π*V*) and *Y* = √(−2 ln *U*) sin(2π*V*) will both have the standard normal distribution, and will be [independent](https://en.wikipedia.org/wiki/Independence_\(probability_theory\) "Independence (probability theory)"). This formulation arises because for a [bivariate normal](https://en.wikipedia.org/wiki/Bivariate_normal "Bivariate normal") random vector (*X*, *Y*) the squared norm *X*2 + *Y*2 will have the [chi-squared distribution](https://en.wikipedia.org/wiki/Chi-squared_distribution "Chi-squared distribution") with two degrees of freedom, which is an easily generated [exponential random variable](https://en.wikipedia.org/wiki/Exponential_random_variable "Exponential random variable") corresponding to the quantity −2 ln(*U*) in these equations; and the angle is distributed uniformly around the circle, chosen by the random variable *V*.
- The [Marsaglia polar method](https://en.wikipedia.org/wiki/Marsaglia_polar_method "Marsaglia polar method") is a modification of the Box–Muller method which does not require computation of the sine and cosine functions. In this method, *U* and *V* are drawn from the uniform (−1,1) distribution, and then *S* = *U*2 + *V*2 is computed. If *S* is greater than or equal to 1, the method starts over; otherwise the two quantities *X* = *U*√(−2 ln *S*/*S*) and *Y* = *V*√(−2 ln *S*/*S*) are returned. Again, *X* and *Y* are independent, standard normal random variables.
- The Ratio method[\[68\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-68) is a rejection method. The algorithm proceeds as follows:
- Generate two independent uniform deviates U and V;
- Compute *X* = √(8/*e*) (*V* − 0.5)/*U*;
- Optional: if *X*2 ≀ 5 − 4*e*1/4*U* then accept *X* and terminate the algorithm;
- Optional: if *X*2 ≄ 4*e*−1.35/*U* + 1.4 then reject *X* and start over from step 1;
- If *X*2 ≀ −4 ln *U* then accept *X*; otherwise start the algorithm over.
The two optional steps allow the evaluation of the logarithm in the last step to be avoided in most cases. These steps can be greatly improved[\[69\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-69) so that the logarithm is rarely evaluated.
- The [ziggurat algorithm](https://en.wikipedia.org/wiki/Ziggurat_algorithm "Ziggurat algorithm")[\[70\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-70) is faster than the Box–Muller transform and still exact. In about 97% of all cases it uses only two random numbers, one random integer and one random uniform, one multiplication and an if-test. Only in 3% of the cases, where the combination of those two falls outside the "core of the ziggurat" (a kind of rejection sampling using logarithms), do exponentials and more uniform random numbers have to be employed.
- Integer arithmetic can be used to sample from the standard normal distribution.[\[71\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-71)[\[72\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-72) This method is exact in the sense that it satisfies the conditions of *ideal approximation*;[\[73\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-73) i.e., it is equivalent to sampling a real number from the standard normal distribution and rounding this to the nearest representable floating point number.
- There is also some investigation[\[74\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-74) into the connection between the fast [Hadamard transform](https://en.wikipedia.org/wiki/Hadamard_transform "Hadamard transform") and the normal distribution, since the transform employs just addition and subtraction and by the central limit theorem random numbers from almost any distribution will be transformed into the normal distribution. In this regard a series of Hadamard transforms can be combined with random permutations to turn arbitrary data sets into normally distributed data.
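Here is the sketch referred to above: minimal Python implementations of the 12-uniform central-limit approximation, the Box–Muller transform, and the Marsaglia polar method. Function names are illustrative, and only a uniform generator is assumed.

```python
import math
import random

def clt_approx():
    """Irwin-Hall approximation: sum of 12 uniforms minus 6, range (-6, 6)."""
    return sum(random.random() for _ in range(12)) - 6.0

def box_muller():
    """Two independent standard normal deviates from two uniforms."""
    u = 1.0 - random.random()                 # avoid log(0)
    v = random.random()
    r = math.sqrt(-2.0 * math.log(u))         # radius: sqrt of an exponential
    return r * math.cos(2 * math.pi * v), r * math.sin(2 * math.pi * v)

def marsaglia_polar():
    """Box-Muller variant avoiding sin/cos via rejection sampling."""
    while True:
        u = 2.0 * random.random() - 1.0       # uniform on (-1, 1)
        v = 2.0 * random.random() - 1.0
        s = u * u + v * v
        if 0.0 < s < 1.0:                     # keep points inside the unit disk
            f = math.sqrt(-2.0 * math.log(s) / s)
            return u * f, v * f
```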
### Numerical approximations for the normal cumulative distribution function and normal quantile function
The standard normal [cumulative distribution function](https://en.wikipedia.org/wiki/Cumulative_distribution_function "Cumulative distribution function") is widely used in scientific and statistical computing.
The values *Ί*(*x*) may be approximated very accurately by a variety of methods, such as [numerical integration](https://en.wikipedia.org/wiki/Numerical_integration "Numerical integration"), [Taylor series](https://en.wikipedia.org/wiki/Taylor_series "Taylor series"), [asymptotic series](https://en.wikipedia.org/wiki/Asymptotic_series "Asymptotic series") and [continued fractions](https://en.wikipedia.org/wiki/Gauss%27s_continued_fraction#Of_Kummer's_confluent_hypergeometric_function "Gauss's continued fraction"). Different approximations are used depending on the desired level of accuracy.
- [Zelen & Severo (1964)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFZelenSevero1964) give the approximation for *Φ*(*x*) for *x* > 0 with the absolute error \|*Δ*(*x*)\| < 7.5·10−8 (algorithm [26.2.17](https://secure.math.ubc.ca/~cbm/aands/page_932.htm)): $$\Phi (x)=1-\varphi (x)\left(b_{1}t+b_{2}t^{2}+b_{3}t^{3}+b_{4}t^{4}+b_{5}t^{5}\right),\qquad t={\frac {1}{1+b_{0}x}},$$ where *φ*(*x*) is the standard normal probability density function, and *b*0 = 0.2316419, *b*1 = 0.319381530, *b*2 = −0.356563782, *b*3 = 1.781477937, *b*4 = −1.821255978, *b*5 = 1.330274429. (A code sketch follows this list.)
- [Hart (1968)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFHart1968) lists dozens of approximations by means of rational functions, with or without exponentials, for the erfc() function, where erfc(x) = 1 - erf(x). His algorithms vary in the degree of complexity and the resulting precision, with a maximum absolute precision of 24 digits. An algorithm by [West (2009)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFWest2009) combines Hart's algorithm 5666 with a [continued fraction](https://en.wikipedia.org/wiki/Continued_fraction "Continued fraction") approximation in the tail to provide a fast computation algorithm with 16-digit precision.
- [Cody (1969)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFCody1969), after noting that the Hart (1968) solution is not suited for erf, gave a solution for both erf and erfc, with a maximal relative error bound, via [Rational Chebyshev Approximation](https://en.wikipedia.org/wiki/Rational_function "Rational function").
- [Marsaglia (2004)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFMarsaglia2004) suggested a simple algorithm[\[note 1\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-75) based on the Taylor series expansion $$\Phi (x)={\frac {1}{2}}+\varphi (x)\left(x+{\frac {x^{3}}{3}}+{\frac {x^{5}}{3\cdot 5}}+{\frac {x^{7}}{3\cdot 5\cdot 7}}+\cdots \right)$$ for calculating *Φ*(*x*) with arbitrary precision. The drawback of this algorithm is comparatively slow calculation time (for example it takes over 300 iterations to calculate the function with 16 digits of precision when *x* = 10).
- The [GNU Scientific Library](https://en.wikipedia.org/wiki/GNU_Scientific_Library "GNU Scientific Library") calculates values of the standard normal cumulative distribution function using Hart's algorithms and approximations with [Chebyshev polynomials](https://en.wikipedia.org/wiki/Chebyshev_polynomial "Chebyshev polynomial").
- [Dia (2023)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFDia2023) proposes an approximation of the normal tail probability with a bounded maximum relative error, given separately for two ranges of the argument.
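The following is the sketch promised above: the Zelen & Severo formula with the quoted coefficients, written in Python (valid for *x* > 0 and extended to *x* < 0 by symmetry):

```python
import math

B0, B1, B2, B3, B4, B5 = (0.2316419, 0.319381530, -0.356563782,
                          1.781477937, -1.821255978, 1.330274429)

def norm_cdf_approx(x):
    """Phi(x) via Zelen & Severo (1964); absolute error below 7.5e-8."""
    if x < 0.0:
        return 1.0 - norm_cdf_approx(-x)      # symmetry of the normal CDF
    t = 1.0 / (1.0 + B0 * x)
    poly = t * (B1 + t * (B2 + t * (B3 + t * (B4 + t * B5))))
    pdf = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    return 1.0 - pdf * poly

print(norm_cdf_approx(1.96))                  # ~0.9750021
```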

Shore (1982) introduced simple approximations that may be incorporated in stochastic optimization models of engineering and operations research, like reliability engineering and inventory analysis. Denoting *p* = *Φ*(*z*), the simplest approximation for the quantile function is: ![{\\displaystyle z=\\Phi ^{-1}(p)=5.5556\\left\[1-\\left({\\frac {1-p}{p}}\\right)^{0.1186}\\right\],\\qquad p\\geq 1/2}](https://wikimedia.org/api/rest_v1/media/math/render/svg/5f2df7f1427d0c90d075faef38f4f5ab7acce5c9)
This approximation delivers for *z* a maximum absolute error of 0.026 (for 0.5 ≀ *p* ≀ 0.9999, corresponding to 0 ≀ *z* ≀ 3.719). For *p* < 1/2 replace *p* by 1 − *p* and change sign. Another approximation, somewhat less accurate, is the single-parameter approximation: ![{\\displaystyle z=-0.4115\\left\\{{\\frac {1-p}{p}}+\\log \\left\[{\\frac {1-p}{p}}\\right\]-1\\right\\},\\qquad p\\geq 1/2}](https://wikimedia.org/api/rest_v1/media/math/render/svg/e1edea9f990058f741db6735799c8b40999b833b)
The latter had served to derive a simple approximation for the loss integral of the normal distribution, defined by ![{\\displaystyle {\\begin{aligned}L(z)&=\\int \_{z}^{\\infty }(u-z)\\varphi (u)\\,du=\\int \_{z}^{\\infty }\[1-\\Phi (u)\]\\,du\\\\\[5pt\]L(z)&\\approx {\\begin{cases}0.4115\\left({\\dfrac {p}{1-p}}\\right)-z,\&p\<1/2,\\\\\\\\0.4115\\left({\\dfrac {1-p}{p}}\\right),\&p\\geq 1/2.\\end{cases}}\\\\\[5pt\]{\\text{or, equivalently,}}\\\\L(z)&\\approx {\\begin{cases}0.4115\\left\\{1-\\log \\left\[{\\frac {p}{1-p}}\\right\]\\right\\},\&p\<1/2,\\\\\\\\0.4115{\\dfrac {1-p}{p}},\&p\\geq 1/2.\\end{cases}}\\end{aligned}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/e4b69fa586cffdfbbd40a94c65629726e4ae78bf)
This approximation is particularly accurate for the far right tail (maximum error of 10−3 for *z* ≄ 1.4). Highly accurate approximations for the cumulative distribution function, based on [Response Modeling Methodology](https://en.wikipedia.org/wiki/Response_Modeling_Methodology "Response Modeling Methodology") (RMM, Shore, 2011, 2012), are shown in Shore (2005).
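A sketch of Shore's simplest quantile approximation from above, together with the reflection rule for *p* < 1/2 (Python; illustrative only):

```python
def shore_quantile(p):
    """Approximate z = Phi^{-1}(p); maximum absolute error about 0.026."""
    if p < 0.5:
        return -shore_quantile(1.0 - p)       # reflection for the lower half
    return 5.5556 * (1.0 - ((1.0 - p) / p) ** 0.1186)

print(shore_quantile(0.975))                  # roughly 1.96
```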
Some more approximations can be found at: [Error function#Approximation with elementary functions](https://en.wikipedia.org/wiki/Error_function#Approximation_with_elementary_functions "Error function"). In particular, small *relative* error on the whole domain for the cumulative distribution function *Φ*, and for the quantile function *Φ*−1 as well, is achieved via an explicitly invertible formula by Sergei Winitzki in 2008.
## History

### Development

Some authors[\[75\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-76)[\[76\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-77) attribute the discovery of the normal distribution to [de Moivre](https://en.wikipedia.org/wiki/De_Moivre "De Moivre"), who in 1738[\[note 2\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-78) published in the second edition of his *[The Doctrine of Chances](https://en.wikipedia.org/wiki/The_Doctrine_of_Chances "The Doctrine of Chances")* the study of the coefficients in the [binomial expansion](https://en.wikipedia.org/wiki/Binomial_expansion "Binomial expansion") of (*a* + *b*)*n*. De Moivre proved that the middle term in this expansion has the approximate magnitude of $2^{n}/{\sqrt {2\pi n}}$, and that "If *m* or œ*n* be a Quantity infinitely great, then the Logarithm of the Ratio, which a Term distant from the middle by the Interval *ℓ*, has to the middle Term, is $-{\frac {2\ell \ell }{n}}$."[\[77\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-79) Although this theorem can be interpreted as the first obscure expression for the normal probability law, [Stigler](https://en.wikipedia.org/wiki/Stephen_Stigler "Stephen Stigler") points out that de Moivre himself did not interpret his results as anything more than the approximate rule for the binomial coefficients, and in particular de Moivre lacked the concept of the probability density function.[\[78\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-80)
[](https://en.wikipedia.org/wiki/File:Carl_Friedrich_Gauss.jpg)
In 1809, [Carl Friedrich Gauss](https://en.wikipedia.org/wiki/Carl_Friedrich_Gauss "Carl Friedrich Gauss") showed that the normal distribution provides a way to rationalize the [method of least squares](https://en.wikipedia.org/wiki/Method_of_least_squares "Method of least squares").
In 1823 [Gauss](https://en.wikipedia.org/wiki/Gauss "Gauss") published his monograph "*Theoria combinationis observationum erroribus minimis obnoxiae*", where among other things he introduces several important statistical concepts, such as the [method of least squares](https://en.wikipedia.org/wiki/Method_of_least_squares "Method of least squares"), the [method of maximum likelihood](https://en.wikipedia.org/wiki/Method_of_maximum_likelihood "Method of maximum likelihood"), and the *normal distribution*. Gauss used *M*, *M*′, *M*″, ... to denote the measurements of some unknown quantity *V*, and sought the most probable estimator of that quantity: the one that maximizes the probability *φ*(*M* − *V*) · *φ*(*M*′ − *V*) · *φ*(*M*″ − *V*) · ... of obtaining the observed experimental results. In his notation *φ*Δ is the probability density function of the measurement errors of magnitude Δ. Not knowing what the function *φ* is, Gauss requires that his method should reduce to the well-known answer: the arithmetic mean of the measured values.[\[note 3\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-81) Starting from these principles, Gauss demonstrates that the only law that rationalizes the choice of arithmetic mean as an estimator of the location parameter is the normal law of errors:[\[79\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-82) $$\varphi \Delta ={\frac {h}{\sqrt {\pi }}}\,e^{-h^{2}\Delta ^{2}},$$ where *h* is "the measure of the precision of the observations". Using this normal law as a generic model for errors in the experiments, Gauss formulates what is now known as the [non-linear](https://en.wikipedia.org/wiki/Non-linear_least_squares "Non-linear least squares") [weighted least squares](https://en.wikipedia.org/wiki/Weighted_least_squares "Weighted least squares") method.[\[80\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-83)
[](https://en.wikipedia.org/wiki/File:Pierre-Simon_Laplace.jpg)
[Pierre-Simon Laplace](https://en.wikipedia.org/wiki/Pierre-Simon_Laplace "Pierre-Simon Laplace") proved the [central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem "Central limit theorem") in 1810, consolidating the importance of the normal distribution in statistics.
Although Gauss was the first to suggest the normal distribution law, [Laplace](https://en.wikipedia.org/wiki/Laplace "Laplace") made significant contributions.[\[note 4\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-84) It was Laplace who first posed the problem of aggregating several observations in 1774,[\[81\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-85) although his own solution led to the [Laplacian distribution](https://en.wikipedia.org/wiki/Laplacian_distribution "Laplacian distribution"). It was Laplace who first calculated the value of the [integral ∫ *e*−*t*2 *dt* = √π](https://en.wikipedia.org/wiki/Gaussian_integral "Gaussian integral") in 1782, providing the normalization constant for the normal distribution.[\[82\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-86) For this accomplishment, Gauss acknowledged the priority of Laplace.[\[83\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-87) Finally, it was Laplace who in 1810 proved and presented to the academy the fundamental [central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem "Central limit theorem"), which emphasized the theoretical importance of the normal distribution.[\[84\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-88)
It is of interest to note that in 1809 an Irish-American mathematician [Robert Adrain](https://en.wikipedia.org/wiki/Robert_Adrain "Robert Adrain") published two insightful but flawed derivations of the normal probability law, simultaneously and independently from Gauss.[\[85\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-89) His works remained largely unnoticed by the scientific community, until in 1871 they were exhumed by [Abbe](https://en.wikipedia.org/wiki/Cleveland_Abbe "Cleveland Abbe").[\[86\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-90)
In the middle of the 19th century [Maxwell](https://en.wikipedia.org/wiki/James_Clerk_Maxwell "James Clerk Maxwell") demonstrated that the normal distribution is not just a convenient mathematical tool, but may also occur in natural phenomena:[\[59\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-FOOTNOTEMaxwell186023-59) the number of particles whose velocity, resolved in a certain direction, lies between *x* and *x* + *dx* is $$\operatorname {N} \,{\frac {1}{\alpha {\sqrt {\pi }}}}\,e^{-{\frac {x^{2}}{\alpha ^{2}}}}\,dx.$$
### Naming

Today, the concept is usually known in English as the **normal distribution** or **Gaussian distribution**. Other less common names include Gauss distribution, Laplace–Gauss distribution, the law of error, the law of facility of errors, Laplace's second law, and Gaussian law.
Gauss himself apparently coined the term with reference to the "normal equations" involved in its applications, with *normal* having its technical meaning of orthogonal rather than usual.[\[87\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-91) However, by the end of the 19th century some authors[\[note 5\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-92) had started using the name *normal distribution*, where the word "normal" was used as an adjective, the term now being seen as a reflection of this distribution being seen as typical, common, and thus normal. [Peirce](https://en.wikipedia.org/wiki/Charles_Sanders_Peirce "Charles Sanders Peirce") (one of those authors) once defined "normal" thus: "... the 'normal' is not the average (or any other kind of mean) of what actually occurs, but of what *would*, in the long run, occur under certain circumstances."[\[88\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-93) Around the turn of the 20th century [Pearson](https://en.wikipedia.org/wiki/Karl_Pearson "Karl Pearson") popularized the term *normal* as a designation for this distribution.[\[89\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-94)
> Many years ago I called the Laplace–Gaussian curve the *normal* curve, which name, while it avoids an international question of priority, has the disadvantage of leading people to believe that all other distributions of frequency are in one sense or another 'abnormal'.
Also, it was Pearson who first wrote the distribution in terms of the standard deviation *σ* as in modern notation. Soon after this, in 1915, [Fisher](https://en.wikipedia.org/wiki/Ronald_Fisher "Ronald Fisher") added the location parameter to the formula for the normal distribution, expressing it in the way it is written nowadays: $$df={\frac {1}{\sqrt {2\sigma ^{2}\pi }}}e^{-{\frac {(x-m)^{2}}{2\sigma ^{2}}}}\,dx.$$
The term *standard normal distribution*, which denotes the normal distribution with zero mean and unit variance, came into general use around the 1950s, appearing in the popular textbooks by P. G. Hoel (1947), *Introduction to Mathematical Statistics*, and [Alexander M. Mood](https://en.wikipedia.org/wiki/Alexander_M._Mood "Alexander M. Mood") (1950), *Introduction to the Theory of Statistics*.[\[90\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-95)[\[91\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-96)[\[92\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-97)
## Notes

1. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-75)** For example, this algorithm is given in the article [Bc programming language](https://en.wikipedia.org/wiki/Bc_programming_language#A_translated_C_function "Bc programming language").
2. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-78)** De Moivre first published his findings in 1733, in a pamphlet *Approximatio ad Summam Terminorum Binomii* (*a* + *b*)*n* *in Seriem Expansi* that was designated for private circulation only. But it was not until the year 1738 that he made his results publicly available. The original pamphlet was reprinted several times, see for example [Walker (1985)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFWalker1985).
3. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-81)** "It has been customary certainly to regard as an axiom the hypothesis that if any quantity has been determined by several direct observations, made under the same circumstances and with equal care, the arithmetical mean of the observed values affords the most probable value, if not rigorously, yet very nearly at least, so that it is always most safe to adhere to it." â [Gauss (1809](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFGauss1809), section 177)
4. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-84)** "My custom of terming the curve the GaussâLaplacian or *normal* curve saves us from proportioning the merit of discovery between the two great astronomer mathematicians." quote from [Pearson (1905](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFPearson1905), p. 189)
5. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-92)** Besides those specifically referenced here, such use is encountered in the works of [Peirce](https://en.wikipedia.org/wiki/Charles_Sanders_Peirce "Charles Sanders Peirce"), [Galton](https://en.wikipedia.org/wiki/Galton "Galton") ([Galton (1889](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFGalton1889), chapter V)) and [Lexis](https://en.wikipedia.org/wiki/Wilhelm_Lexis "Wilhelm Lexis") ([Lexis (1878)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFLexis1878), [Rohrbasser & Véron (2003)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFRohrbasserV%C3%A9ron2003)) c. 1875.\[*[citation needed](https://en.wikipedia.org/wiki/Wikipedia:Citation_needed "Wikipedia:Citation needed")*\]
## References

1. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Norton-2019_1-0)** Norton, Matthew; Khokhlov, Valentyn; Uryasev, Stan (2019). ["Calculating CVaR and bPOE for common probability distributions with application to portfolio optimization and density estimation"](https://web.archive.org/web/20230331230821/http://uryasev.ams.stonybrook.edu/wp-content/uploads/2019/10/Norton2019_CVaR_bPOE.pdf) (PDF). *Annals of Operations Research*. **299** (1–2). Springer: 1281–1315. [arXiv](https://en.wikipedia.org/wiki/ArXiv_\(identifier\) "ArXiv (identifier)"):[1811.11301](https://arxiv.org/abs/1811.11301). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1007/s10479-019-03373-1](https://doi.org/10.1007%2Fs10479-019-03373-1). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [254231768](https://api.semanticscholar.org/CorpusID:254231768). Archived from [the original](http://uryasev.ams.stonybrook.edu/wp-content/uploads/2019/10/Norton2019_CVaR_bPOE.pdf) (PDF) on March 31, 2023. Retrieved February 27, 2023.
2. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-The_Joy_of_Finite_Mathematics_2-0)** Tsokos, Chris; Wooten, Rebecca (January 1, 2016). Tsokos, Chris; Wooten, Rebecca (eds.). [*The Joy of Finite Mathematics*](https://linkinghub.elsevier.com/retrieve/pii/B9780128029671000073). Boston: Academic Press. pp. 231–263. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1016/b978-0-12-802967-1.00007-3](https://doi.org/10.1016%2Fb978-0-12-802967-1.00007-3). [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-12-802967-1](https://en.wikipedia.org/wiki/Special:BookSources/978-0-12-802967-1 "Special:BookSources/978-0-12-802967-1").
3. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Mathematics_for_Physical_Science_and_Engineering_3-0)** Harris, Frank E. (January 1, 2014). Harris, Frank E. (ed.). [*Mathematics for Physical Science and Engineering*](https://linkinghub.elsevier.com/retrieve/pii/B9780128010006000183). Boston: Academic Press. pp. 663–709. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1016/b978-0-12-801000-6.00018-3](https://doi.org/10.1016%2Fb978-0-12-801000-6.00018-3). [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-12-801000-6](https://en.wikipedia.org/wiki/Special:BookSources/978-0-12-801000-6 "Special:BookSources/978-0-12-801000-6").
4. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-4)** [Hoel (1947](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFHoel1947), [p. 31](https://archive.org/details/in.ernet.dli.2015.263186/page/n39/mode/2up?q=%22normal+distribution%22)) and [Mood (1950](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFMood1950), [p. 109](https://archive.org/details/introductiontoth0000alex/page/108/mode/2up?q=%22normal+distribution%22)) give this definition with slightly different notation.
5. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-5)** [*Normal Distribution*](http://www.encyclopedia.com/topic/Normal_Distribution.aspx#3), *Gale Encyclopedia of Psychology*.
6. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-6)** [Casella & Berger (2001](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFCasellaBerger2001), p. 102)
7. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-7)** Lyon, A. (2014). ["Why are Normal Distributions Normal?"](https://aidanlyon.com/normal_distributions.pdf). *The British Journal for the Philosophy of Science*.
8. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-8)** Nocedal, Jorge; Wright, Stephen J. (2006). *Numerical Optimization* (2nd ed.). Springer. p. 249. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0387-30303-1](https://en.wikipedia.org/wiki/Special:BookSources/978-0387-30303-1 "Special:BookSources/978-0387-30303-1").
9. ^ [***a***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-www.mathsisfun.com_9-0) [***b***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-www.mathsisfun.com_9-1) ["Normal Distribution"](https://www.mathsisfun.com/data/standard-normal-distribution.html). *www.mathsisfun.com*. Retrieved August 15, 2020.
10. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-10)** ["bell curve"](https://www.merriam-webster.com/dictionary/bell%20curve). *Merriam-Webster.com Dictionary*. Retrieved May 25, 2025.
11. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-11)** [Mood (1950](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFMood1950), [p. 112](https://archive.org/details/introductiontoth0000alex/page/112/mode/2up?q=%22standard+normal+distribution%22)) explicitly defines the *standard normal distribution*. In contrast, [Hoel (1947)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFHoel1947) explicitly defines the *standard normal curve* [(p. 33)](https://archive.org/details/in.ernet.dli.2015.263186/page/n41/mode/2up?q=%22standard+normal+curve%22) and introduces the term *standard normal distribution* [(p. 69)](https://archive.org/details/in.ernet.dli.2015.263186/page/n77/mode/2up?q=%22standard+normal+distribution%22).
12. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-12)** [Stigler (1982)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFStigler1982)
13. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-13)** [Halperin, Hartley & Hoel (1965](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFHalperinHartleyHoel1965), item 7)
14. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-14)** [McPherson (1990](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFMcPherson1990), p. 110)
15. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-15)** [Bernardo & Smith (2000](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFBernardoSmith2000), p. 121)
16. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-KunIlPark_16-0)** Park, Kun Il (2018). *Fundamentals of Probability and Stochastic Processes with Applications to Communications*. Springer. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-3-319-68074-3](https://en.wikipedia.org/wiki/Special:BookSources/978-3-319-68074-3 "Special:BookSources/978-3-319-68074-3").
17. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-17)** Scott, Clayton; Nowak, Robert (August 7, 2003). ["The Q-function"](http://cnx.org/content/m11537/1.2/). *Connexions*.
18. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-18)** Barak, Ohad (April 6, 2006). ["Q Function and Error Function"](https://web.archive.org/web/20090325160012/http://www.eng.tau.ac.il/~jo/academic/Q.pdf) (PDF). Tel Aviv University. Archived from [the original](http://www.eng.tau.ac.il/~jo/academic/Q.pdf) (PDF) on March 25, 2009.
19. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-19)** [Weisstein, Eric W.](https://en.wikipedia.org/wiki/Eric_W._Weisstein "Eric W. Weisstein") ["Normal Distribution Function"](https://mathworld.wolfram.com/NormalDistributionFunction.html). *[MathWorld](https://en.wikipedia.org/wiki/MathWorld "MathWorld")*.
20. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-20)** [Abramowitz, Milton](https://en.wikipedia.org/wiki/Milton_Abramowitz "Milton Abramowitz"); [Stegun, Irene Ann](https://en.wikipedia.org/wiki/Irene_Stegun "Irene Stegun"), eds. (1983) \[June 1964\]. ["Chapter 26, eqn 26.2.12"](http://www.math.ubc.ca/~cbm/aands/page_932.htm). [*Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables*](https://en.wikipedia.org/wiki/Abramowitz_and_Stegun "Abramowitz and Stegun"). Applied Mathematics Series. Vol. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. p. 932. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-486-61272-0](https://en.wikipedia.org/wiki/Special:BookSources/978-0-486-61272-0 "Special:BookSources/978-0-486-61272-0"). [LCCN](https://en.wikipedia.org/wiki/LCCN_\(identifier\) "LCCN (identifier)") [64-60036](https://lccn.loc.gov/64-60036). [MR](https://en.wikipedia.org/wiki/MR_\(identifier\) "MR (identifier)") [0167642](https://mathscinet.ams.org/mathscinet-getitem?mr=0167642). [LCCN](https://en.wikipedia.org/wiki/LCCN_\(identifier\) "LCCN (identifier)") [65-12253](https://www.loc.gov/item/65012253).
21. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-duff_21-0)** Duff, Michael (2003). "Normal Distribution Algorithms". *The Mathematical Gazette*. **87** (509): 331–336. [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [3621062](https://www.jstor.org/stable/3621062).
22. ^ [***a***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-kendall_22-0) [***b***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-kendall_22-1) Stuart, Alan; Ord, J. Keith (1987). ["The normal d.f."](https://archive.org/details/kendallsadvanced0001kend/page/183/mode/1up). *Kendall's Advanced Theory of Statistics*. Vol. 1: Distribution Theory. Originally by [Maurice Kendall](https://en.wikipedia.org/wiki/Maurice_Kendall "Maurice Kendall") (5th ed.). Charles Griffin & Co. § 5.37, pp. 183–185. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [0-85264-285-7](https://en.wikipedia.org/wiki/Special:BookSources/0-85264-285-7 "Special:BookSources/0-85264-285-7").
23. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-23)** Vaart, A. W. van der (October 13, 1998). [*Asymptotic Statistics*](https://dx.doi.org/10.1017/cbo9780511802256). Cambridge University Press. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1017/cbo9780511802256](https://doi.org/10.1017%2Fcbo9780511802256). [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-511-80225-6](https://en.wikipedia.org/wiki/Special:BookSources/978-0-511-80225-6 "Special:BookSources/978-0-511-80225-6").
24. ^ [***a***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-FOOTNOTECoverThomas2006254_24-0) [***b***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-FOOTNOTECoverThomas2006254_24-1) [Cover & Thomas (2006)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFCoverThomas2006), p. 254.
25. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-25)** Park, Sung Y.; Bera, Anil K. (2009). ["Maximum Entropy Autoregressive Conditional Heteroskedasticity Model"](https://web.archive.org/web/20160307144515/http://wise.xmu.edu.cn/uploadfiles/paper-masterdownload/2009519932327055475115776.pdf) (PDF). *Journal of Econometrics*. **150** (2): 219–230. [Bibcode](https://en.wikipedia.org/wiki/Bibcode_\(identifier\) "Bibcode (identifier)"):[2009JEcon.150..219P](https://ui.adsabs.harvard.edu/abs/2009JEcon.150..219P). [CiteSeerX](https://en.wikipedia.org/wiki/CiteSeerX_\(identifier\) "CiteSeerX (identifier)") [10.1.1.511.9750](https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.511.9750). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1016/j.jeconom.2008.12.014](https://doi.org/10.1016%2Fj.jeconom.2008.12.014). Archived from [the original](http://www.wise.xmu.edu.cn/Master/Download/..%5C..%5CUploadFiles%5Cpaper-masterdownload%5C2009519932327055475115776.pdf) (PDF) on March 7, 2016. Retrieved June 2, 2011.
26. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Geary_RC_26-0)** Geary, R. C. (1936). "The distribution of 'Student's' ratio for non-normal samples". *Supplement to the Journal of the Royal Statistical Society*. **3** (2): 178–184.
27. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-27)** [Lukacs, Eugene](https://en.wikipedia.org/wiki/Eugene_Lukacs "Eugene Lukacs") (March 1942). ["A Characterization of the Normal Distribution"](https://archive.org/details/dli.ernet.4125/page/91). *[Annals of Mathematical Statistics](https://en.wikipedia.org/wiki/Annals_of_Mathematical_Statistics "Annals of Mathematical Statistics")*. **13** (1): 91–93. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1214/AOMS/1177731647](https://doi.org/10.1214%2FAOMS%2F1177731647). [ISSN](https://en.wikipedia.org/wiki/ISSN_\(identifier\) "ISSN (identifier)") [0003-4851](https://search.worldcat.org/issn/0003-4851). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2236166](https://www.jstor.org/stable/2236166). [MR](https://en.wikipedia.org/wiki/MR_\(identifier\) "MR (identifier)") [0006626](https://mathscinet.ams.org/mathscinet-getitem?mr=0006626). [Zbl](https://en.wikipedia.org/wiki/Zbl_\(identifier\) "Zbl (identifier)") [0060.28509](https://zbmath.org/?format=complete&q=an:0060.28509). [Wikidata](https://en.wikipedia.org/wiki/WDQ_\(identifier\) "WDQ (identifier)") [Q55897617](https://www.wikidata.org/wiki/Q55897617 "d:Q55897617").
28. ^ [***a***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Patel_28-0) [***b***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Patel_28-1) [***c***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Patel_28-2) [Patel & Read (1996](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFPatelRead1996), \[2.1.4\])
29. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-29)** [Fan (1991](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFFan1991), p. 1258)
30. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-30)** [Patel & Read (1996](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFPatelRead1996), \[2.1.8\])
31. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-31)** Papoulis, Athanasios. *Probability, Random Variables and Stochastic Processes* (4th ed.). p. 148.
32. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-32)** Winkelbauer, Andreas (2012). "Moments and Absolute Moments of the Normal Distribution". [arXiv](https://en.wikipedia.org/wiki/ArXiv_\(identifier\) "ArXiv (identifier)"):[1209.4340](https://arxiv.org/abs/1209.4340) \[[math.ST](https://arxiv.org/archive/math.ST)\].
33. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-33)** [Bryc (1995](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFBryc1995), p. 23)
34. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-34)** [Bryc (1995](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFBryc1995), p. 24)
35. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-35)** Williams, David (2001). [*Weighing the Odds: A Course in Probability and Statistics*](https://archive.org/details/weighingoddscour00will) (Reprinted ed.). Cambridge: Cambridge University Press. pp. [197](https://archive.org/details/weighingoddscour00will/page/n219)–199. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-521-00618-7](https://en.wikipedia.org/wiki/Special:BookSources/978-0-521-00618-7 "Special:BookSources/978-0-521-00618-7").
36. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-36)** Bernardo, José M.; Smith, Adrian F. M. (2000). [*Bayesian Theory*](https://archive.org/details/bayesiantheory00bern_963) (Reprint ed.). Chichester: Wiley. pp. [209](https://archive.org/details/bayesiantheory00bern_963/page/n224), 366. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-471-49464-5](https://en.wikipedia.org/wiki/Special:BookSources/978-0-471-49464-5 "Special:BookSources/978-0-471-49464-5").
37. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-37)** O'Hagan, A. (1994). *Kendall's Advanced Theory of Statistics, Vol. 2B: Bayesian Inference*. Edward Arnold. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [0-340-52922-9](https://en.wikipedia.org/wiki/Special:BookSources/0-340-52922-9 "Special:BookSources/0-340-52922-9") (Section 5.40).
38. ^ [***a***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Bryc_1995_35_38-0) [***b***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Bryc_1995_35_38-1) [Bryc (1995](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFBryc1995), p. 35)
39. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-39)** [UIUC, Lecture 21. *The Multivariate Normal Distribution*](http://www.math.uiuc.edu/~r-ash/Stat/StatLec21-25.pdf), 21.6: "Individually Gaussian Versus Jointly Gaussian".
40. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-40)** Melnick, Edward L.; Tenenbein, Aaron (November 1982). "Misspecifications of the Normal Distribution". *[The American Statistician](https://en.wikipedia.org/wiki/The_American_Statistician "The American Statistician")*. **36** (4): 372–373.
41. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-41)** ["Kullback Leibler (KL) Distance of Two Normal (Gaussian) Probability Distributions"](http://www.allisons.org/ll/MML/KL/Normal/). *Allisons.org*. December 5, 2007. Retrieved March 3, 2017.
42. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-42)** Jordan, Michael I. (February 8, 2010). ["Stat260: Bayesian Modeling and Inference: The Conjugate Prior for the Normal Distribution"](http://www.cs.berkeley.edu/~jordan/courses/260-spring10/lectures/lecture5.pdf) (PDF).
43. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-43)** [Amari & Nagaoka (2000)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFAmariNagaoka2000)
44. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-44)** ["Expectation of the maximum of gaussian random variables"](https://math.stackexchange.com/a/89147). *Mathematics Stack Exchange*. Retrieved April 7, 2024.
45. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-45)** ["Normal Approximation to Poisson Distribution"](http://www.stat.ucla.edu/~dinov/courses_students.dir/Applets.dir/NormalApprox2PoissonApplet.html). *Stat.ucla.edu*. Retrieved March 3, 2017.
46. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-46)** [Bryc (1995](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFBryc1995), p. 27)
47. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-47)** Weisstein, Eric W. ["Normal Product Distribution"](http://mathworld.wolfram.com/NormalProductDistribution.html). *MathWorld*. wolfram.com.
48. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-48)** Lukacs, Eugene (1942). ["A Characterization of the Normal Distribution"](https://doi.org/10.1214%2Faoms%2F1177731647). *[The Annals of Mathematical Statistics](https://en.wikipedia.org/wiki/The_Annals_of_Mathematical_Statistics "The Annals of Mathematical Statistics")*. **13** (1): 91–93. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1214/aoms/1177731647](https://doi.org/10.1214%2Faoms%2F1177731647). [ISSN](https://en.wikipedia.org/wiki/ISSN_\(identifier\) "ISSN (identifier)") [0003-4851](https://search.worldcat.org/issn/0003-4851). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2236166](https://www.jstor.org/stable/2236166).
49. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-49)** Basu, D.; Laha, R. G. (1954). "On Some Characterizations of the Normal Distribution". *[Sankhyā](https://en.wikipedia.org/wiki/Sankhy%C4%81_\(journal\) "Sankhyā (journal)")*. **13** (4): 359–362. [ISSN](https://en.wikipedia.org/wiki/ISSN_\(identifier\) "ISSN (identifier)") [0036-4452](https://search.worldcat.org/issn/0036-4452). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [25048183](https://www.jstor.org/stable/25048183).
50. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-50)** Lehmann, E. L. (1997). *Testing Statistical Hypotheses* (2nd ed.). Springer. p. 199. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-387-94919-2](https://en.wikipedia.org/wiki/Special:BookSources/978-0-387-94919-2 "Special:BookSources/978-0-387-94919-2").
51. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-51)** [Patel & Read (1996](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFPatelRead1996), \[2.3.6\])
52. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-52)** [Galambos & Simonelli (2004](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFGalambosSimonelli2004), Theorem 3.5)
53. ^ [***a***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Lukacs_53-0) [***b***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Lukacs_53-1) [Lukacs & King (1954)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFLukacsKing1954)
54. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-54)** Quine, M. P. (1993). ["On three characterisations of the normal distribution"](http://www.math.uni.wroc.pl/~pms/publicationsArticle.php?nr=14.2&nrA=8&ppB=257&ppE=263). *Probability and Mathematical Statistics*. **14** (2): 257–263.
55. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-John-1982_55-0)** John, S. (1982). "The three parameter two-piece normal family of distributions and its fitting". *Communications in Statistics – Theory and Methods*. **11** (8): 879–885. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1080/03610928208828279](https://doi.org/10.1080%2F03610928208828279).
56. ^ [***a***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Krishnamoorthy_56-0) [***b***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Krishnamoorthy_56-1) [Krishnamoorthy (2006](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFKrishnamoorthy2006), p. 127)
57. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-57)** [Krishnamoorthy (2006](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFKrishnamoorthy2006), p. 130)
58. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-58)** [Krishnamoorthy (2006](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFKrishnamoorthy2006), p. 133)
59. ^ [***a***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-FOOTNOTEMaxwell186023_59-0) [***b***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-FOOTNOTEMaxwell186023_59-1) [Maxwell (1860)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFMaxwell1860), p. 23.
60. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-FOOTNOTEBryc19951_60-0)** [Bryc (1995)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFBryc1995), p. 1.
61. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-61)** Larkoski, Andrew J. (2023). [*Quantum Mechanics: A Mathematical Introduction*](https://books.google.com/books?id=iKmnEAAAQBAJ&dq=normal%20distribution&pg=PA120). United Kingdom: Cambridge University Press. pp. 120–121. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-1-009-12222-1](https://en.wikipedia.org/wiki/Special:BookSources/978-1-009-12222-1 "Special:BookSources/978-1-009-12222-1"). Retrieved May 30, 2025.
62. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-62)** [Huxley (1932)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFHuxley1932)
63. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-63)** Jaynes, Edwin T. (2003). [*Probability Theory: The Logic of Science*](https://books.google.com/books?id=tTN4HuUNXjgC&pg=PA592). Cambridge University Press. pp. 592–593. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [9780521592710](https://en.wikipedia.org/wiki/Special:BookSources/9780521592710 "Special:BookSources/9780521592710").
64. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-64)** Oosterbaan, Roland J. (1994). ["Chapter 6: Frequency and Regression Analysis of Hydrologic Data"](http://www.waterlog.info/pdf/freqtxt.pdf) (PDF). In Ritzema, Henk P. (ed.). *Drainage Principles and Applications, Publication 16* (second revised ed.). Wageningen, The Netherlands: International Institute for Land Reclamation and Improvement (ILRI). pp. 175–224. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-90-70754-33-4](https://en.wikipedia.org/wiki/Special:BookSources/978-90-70754-33-4 "Special:BookSources/978-90-70754-33-4").
65. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-65)** Ioannidis, John P. A. (2005). "Why Most Published Research Findings Are False". *PLoS Medicine*. **2** (8): e124.
66. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-66)** Wichura, Michael J. (1988). "Algorithm AS241: The Percentage Points of the Normal Distribution". *Applied Statistics*. **37** (3): 477–484. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.2307/2347330](https://doi.org/10.2307%2F2347330). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2347330](https://www.jstor.org/stable/2347330).
67. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-67)** [Johnson, Kotz & Balakrishnan (1995](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFJohnsonKotzBalakrishnan1995), Equation (26.48))
68. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-68)** [Kinderman & Monahan (1977)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFKindermanMonahan1977)
69. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-69)** [Leva (1992)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFLeva1992)
70. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-70)** [Marsaglia & Tsang (2000)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFMarsagliaTsang2000)
71. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-71)** [Karney (2016)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFKarney2016)
72. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-72)** [Du, Fan & Wei (2022)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFDuFanWei2022)
73. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-73)** [Monahan (1985](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFMonahan1985), section 2)
74. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-74)** [Wallace (1996)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFWallace1996)
75. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-76)** [Johnson, Kotz & Balakrishnan (1994](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFJohnsonKotzBalakrishnan1994), p. 85)
76. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-77)** [Le Cam & Lo Yang (2000](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFLe_CamLo_Yang2000), p. 74)
77. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-79)** De Moivre, Abraham (1733), Corollary I; see [Walker (1985](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFWalker1985), p. 77)
78. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-80)** [Stigler (1986](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFStigler1986), [p. 76](https://archive.org/details/historyofstatist00stig/page/76/mode/2up?q=%22de+moivre%22))
79. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-82)** [Gauss (1809](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFGauss1809), section 177)
80. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-83)** [Gauss (1809](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFGauss1809), section 179)
81. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-85)** [Laplace (1774](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFLaplace1774), Problem III)
82. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-86)** [Pearson (1905](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFPearson1905), p. 189)
83. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-87)** [Gauss (1809](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFGauss1809), section 177)
84. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-88)** [Stigler (1986](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFStigler1986), p. 144)
85. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-89)** [Stigler (1978](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFStigler1978), p. 243)
86. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-90)** [Stigler (1978](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFStigler1978), p. 244)
87. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-91)** Jaynes, Edwin T.; *Probability Theory: The Logic of Science*, [Ch. 7](http://www-biba.inrialpes.fr/Jaynes/cc07s.pdf).
88. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-93)** Peirce, Charles S. (c. 1909 MS), *[Collected Papers](https://en.wikipedia.org/wiki/Charles_Sanders_Peirce_bibliography#CP "Charles Sanders Peirce bibliography")* v. 6, paragraph 327.
89. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-94)** [Kruskal & Stigler (1997)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFKruskalStigler1997).
90. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-95)** ["Earliest Uses... (Entry Standard Normal Curve)"](http://jeff560.tripod.com/s.html).
91. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-96)** [Hoel (1947)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFHoel1947) introduces the terms *standard normal curve* [(p. 33)](https://archive.org/details/in.ernet.dli.2015.263186/page/n41/mode/2up?q=%22standard+normal+curve%22) and *standard normal distribution* [(p. 69)](https://archive.org/details/in.ernet.dli.2015.263186/page/n77/mode/2up?q=%22standard+normal+distribution%22).
92. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-97)** [Mood (1950)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFMood1950) explicitly defines the *standard normal distribution* [(p. 112)](https://archive.org/details/introductiontoth0000alex/page/112/mode/2up?q=%22standard+normal+distribution%22).
93. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Sun-2021_98-0)** Sun, Jingchao; Kong, Maiying; Pal, Subhadip (June 22, 2021). ["The Modified-Half-Normal distribution: Properties and an efficient sampling scheme"](https://www.tandfonline.com/doi/abs/10.1080/03610926.2021.1934700?journalCode=lsta20). *Communications in Statistics – Theory and Methods*. **52** (5): 1591–1613. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1080/03610926.2021.1934700](https://doi.org/10.1080%2F03610926.2021.1934700). [ISSN](https://en.wikipedia.org/wiki/ISSN_\(identifier\) "ISSN (identifier)") [0361-0926](https://search.worldcat.org/issn/0361-0926). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [237919587](https://api.semanticscholar.org/CorpusID:237919587).
- Aldrich, John; Miller, Jeff. ["Earliest Uses of Symbols in Probability and Statistics"](http://jeff560.tripod.com/stat.html).
- Aldrich, John; Miller, Jeff. ["Earliest Known Uses of Some of the Words of Mathematics"](http://jeff560.tripod.com/mathword.html). In particular, the entries for ["bell-shaped and bell curve"](http://jeff560.tripod.com/b.html), ["normal (distribution)"](http://jeff560.tripod.com/n.html), ["Gaussian"](http://jeff560.tripod.com/g.html), and ["Error, law of error, theory of errors, etc."](http://jeff560.tripod.com/e.html).
- [Amari, Shun'ichi](https://en.wikipedia.org/wiki/Shun%27ichi_Amari "Shun'ichi Amari"); Nagaoka, Hiroshi (2000). *Methods of Information Geometry*. Oxford University Press. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-8218-0531-2](https://en.wikipedia.org/wiki/Special:BookSources/978-0-8218-0531-2 "Special:BookSources/978-0-8218-0531-2").
- [Bernardo, José M.](https://en.wikipedia.org/wiki/Jos%C3%A9-Miguel_Bernardo "José-Miguel Bernardo"); [Smith, Adrian F. M.](https://en.wikipedia.org/wiki/Adrian_Smith_\(statistician\) "Adrian Smith (statistician)") (2000). *Bayesian Theory*. Wiley. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-471-49464-5](https://en.wikipedia.org/wiki/Special:BookSources/978-0-471-49464-5 "Special:BookSources/978-0-471-49464-5").
- Bryc, Wlodzimierz (1995). [*The Normal Distribution: Characterizations with Applications*](https://books.google.com/books?id=tyXjBwAAQBAJ). Springer-Verlag. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-387-97990-8](https://en.wikipedia.org/wiki/Special:BookSources/978-0-387-97990-8 "Special:BookSources/978-0-387-97990-8").
- [Casella, George](https://en.wikipedia.org/wiki/George_Casella "George Casella"); [Berger, Roger L.](https://en.wikipedia.org/wiki/Roger_Lee_Berger "Roger Lee Berger") (2001). *Statistical Inference* (2nd ed.). Duxbury. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-534-24312-8](https://en.wikipedia.org/wiki/Special:BookSources/978-0-534-24312-8 "Special:BookSources/978-0-534-24312-8").
- Cody, William J. (1969). ["Rational Chebyshev Approximations for the Error Function"](https://en.wikipedia.org/wiki/Error_function#cite_note-5 "Error function"). *Mathematics of Computation*. **23** (107): 631–638. [Bibcode](https://en.wikipedia.org/wiki/Bibcode_\(identifier\) "Bibcode (identifier)"):[1969MaCom..23..631C](https://ui.adsabs.harvard.edu/abs/1969MaCom..23..631C). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1090/S0025-5718-1969-0247736-4](https://doi.org/10.1090%2FS0025-5718-1969-0247736-4).
- [Cover, Thomas M.](https://en.wikipedia.org/wiki/Thomas_M._Cover "Thomas M. Cover"); [Thomas, Joy A.](https://en.wikipedia.org/wiki/Joy_A._Thomas "Joy A. Thomas") (2006). [*Elements of Information Theory*](https://books.google.com/books?id=VWq5GG6ycxMC). John Wiley and Sons. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [9780471241959](https://en.wikipedia.org/wiki/Special:BookSources/9780471241959 "Special:BookSources/9780471241959").
- Dia, Yaya D. (2023). ["Approximate Incomplete Integrals, Application to Complementary Error Function"](https://ssrn.com/abstract=4487559). *SSRN*. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.2139/ssrn.4487559](https://doi.org/10.2139%2Fssrn.4487559). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [259689086](https://api.semanticscholar.org/CorpusID:259689086).
- [de Moivre, Abraham](https://en.wikipedia.org/wiki/Abraham_de_Moivre "Abraham de Moivre") (2000) \[First published 1738\]. [*The Doctrine of Chances*](https://en.wikipedia.org/wiki/The_Doctrine_of_Chances "The Doctrine of Chances"). American Mathematical Society. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-8218-2103-9](https://en.wikipedia.org/wiki/Special:BookSources/978-0-8218-2103-9 "Special:BookSources/978-0-8218-2103-9").
- Du, Y.; Fan, B.; Wei, B. (2022). "An improved exact sampling algorithm for the standard normal distribution". *Computational Statistics*. **37** (2): 721–737. [arXiv](https://en.wikipedia.org/wiki/ArXiv_\(identifier\) "ArXiv (identifier)"):[2008.03855](https://arxiv.org/abs/2008.03855). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1007/s00180-021-01136-w](https://doi.org/10.1007%2Fs00180-021-01136-w).
- Fan, Jianqing (1991). ["On the optimal rates of convergence for nonparametric deconvolution problems"](https://doi.org/10.1214%2Faos%2F1176348248). *The Annals of Statistics*. **19** (3): 1257–1272. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1214/aos/1176348248](https://doi.org/10.1214%2Faos%2F1176348248). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2241949](https://www.jstor.org/stable/2241949).
- [Galton, Francis](https://en.wikipedia.org/wiki/Francis_Galton "Francis Galton") (1889). [*Natural Inheritance*](http://galton.org/books/natural-inheritance/pdf/galton-nat-inh-1up-clean.pdf) (PDF). London, UK: Richard Clay and Sons.
- [Galambos, Janos](https://en.wikipedia.org/wiki/Janos_Galambos "Janos Galambos"); Simonelli, Italo (2004). [*Products of Random Variables: Applications to Problems of Physics and to Arithmetical Functions*](https://archive.org/details/productsofrandom00gala). Marcel Dekker, Inc. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-8247-5402-0](https://en.wikipedia.org/wiki/Special:BookSources/978-0-8247-5402-0 "Special:BookSources/978-0-8247-5402-0").
- [Gauss, Carolo Friderico](https://en.wikipedia.org/wiki/Carl_Friedrich_Gauss "Carl Friedrich Gauss") (1809). [*Theoria motvs corporvm coelestivm in sectionibvs conicis Solem ambientivm*](https://archive.org/details/theoriamotuscor00gausgoog) \[*Theory of the Motion of the Heavenly Bodies Moving about the Sun in Conic Sections*\] (in Latin). Hambvrgi, Svmtibvs F. Perthes et I. H. Besser. [English translation](https://books.google.com/books?id=1TIAAAAAQAAJ).
- [Gould, Stephen Jay](https://en.wikipedia.org/wiki/Stephen_Jay_Gould "Stephen Jay Gould") (1981). [*The Mismeasure of Man*](https://en.wikipedia.org/wiki/The_Mismeasure_of_Man "The Mismeasure of Man") (first ed.). W. W. Norton. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-393-01489-1](https://en.wikipedia.org/wiki/Special:BookSources/978-0-393-01489-1 "Special:BookSources/978-0-393-01489-1").
- Halperin, Max; Hartley, Herman O.; Hoel, Paul G. (1965). "Recommended Standards for Statistical Symbols and Notation. COPSS Committee on Symbols and Notation". *The American Statistician*. **19** (3): 12–14. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.2307/2681417](https://doi.org/10.2307%2F2681417). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2681417](https://www.jstor.org/stable/2681417).
- Hart, John F.; et al. (1968). *Computer Approximations*. New York, NY: John Wiley & Sons, Inc. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-88275-642-4](https://en.wikipedia.org/wiki/Special:BookSources/978-0-88275-642-4 "Special:BookSources/978-0-88275-642-4").
- ["Normal Distribution"](https://www.encyclopediaofmath.org/index.php?title=Normal_Distribution), *[Encyclopedia of Mathematics](https://en.wikipedia.org/wiki/Encyclopedia_of_Mathematics "Encyclopedia of Mathematics")*, [EMS Press](https://en.wikipedia.org/wiki/European_Mathematical_Society "European Mathematical Society"), 2001 \[1994\]
- [Herrnstein, Richard J.](https://en.wikipedia.org/wiki/Richard_J._Herrnstein "Richard J. Herrnstein"); [Murray, Charles](https://en.wikipedia.org/wiki/Charles_Murray_\(political_scientist\) "Charles Murray (political scientist)") (1994). [*The Bell Curve: Intelligence and Class Structure in American Life*](https://en.wikipedia.org/wiki/The_Bell_Curve "The Bell Curve"). [Free Press](https://en.wikipedia.org/wiki/Free_Press_\(publisher\) "Free Press (publisher)"). [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-02-914673-6](https://en.wikipedia.org/wiki/Special:BookSources/978-0-02-914673-6 "Special:BookSources/978-0-02-914673-6").
- Hoel, Paul G. (1947). [*Introduction To Mathematical Statistics*](https://archive.org/details/in.ernet.dli.2015.263186/page/n1/mode/2up). New York: Wiley.
- [Huxley, Julian S.](https://en.wikipedia.org/wiki/Julian_S._Huxley "Julian S. Huxley") (1972) \[First published 1932\]. *Problems of Relative Growth*. London. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-486-61114-3](https://en.wikipedia.org/wiki/Special:BookSources/978-0-486-61114-3 "Special:BookSources/978-0-486-61114-3"). [OCLC](https://en.wikipedia.org/wiki/OCLC_\(identifier\) "OCLC (identifier)") [476909537](https://search.worldcat.org/oclc/476909537).
- [Johnson, Norman L.](https://en.wikipedia.org/wiki/Norman_Lloyd_Johnson "Norman Lloyd Johnson"); [Kotz, Samuel](https://en.wikipedia.org/wiki/Samuel_Kotz "Samuel Kotz"); Balakrishnan, Narayanaswamy (1994). *Continuous Univariate Distributions, Volume 1*. Wiley. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-471-58495-7](https://en.wikipedia.org/wiki/Special:BookSources/978-0-471-58495-7 "Special:BookSources/978-0-471-58495-7").
- Johnson, Norman L.; Kotz, Samuel; Balakrishnan, Narayanaswamy (1995). *Continuous Univariate Distributions, Volume 2*. Wiley. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-471-58494-0](https://en.wikipedia.org/wiki/Special:BookSources/978-0-471-58494-0 "Special:BookSources/978-0-471-58494-0").
- Karney, C. F. F. (2016). ["Sampling exactly from the normal distribution"](https://doi.org/10.1145%2F2710016). *ACM Transactions on Mathematical Software*. **42** (1): 3:1–14. [arXiv](https://en.wikipedia.org/wiki/ArXiv_\(identifier\) "ArXiv (identifier)"):[1303.6257](https://arxiv.org/abs/1303.6257). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1145/2710016](https://doi.org/10.1145%2F2710016). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [14252035](https://api.semanticscholar.org/CorpusID:14252035).
- Kinderman, Albert J.; Monahan, John F. (1977). ["Computer Generation of Random Variables Using the Ratio of Uniform Deviates"](https://doi.org/10.1145%2F355744.355750). *ACM Transactions on Mathematical Software*. **3** (3): 257–260. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1145/355744.355750](https://doi.org/10.1145%2F355744.355750). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [12884505](https://api.semanticscholar.org/CorpusID:12884505).
- Krishnamoorthy, Kalimuthu (2006). *Handbook of Statistical Distributions with Applications*. Chapman & Hall/CRC. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-1-58488-635-8](https://en.wikipedia.org/wiki/Special:BookSources/978-1-58488-635-8 "Special:BookSources/978-1-58488-635-8").
- [Kruskal, William H.](https://en.wikipedia.org/wiki/William_H._Kruskal "William H. Kruskal"); Stigler, Stephen M. (1997). Spencer, Bruce D. (ed.). *Normative Terminology: 'Normal' in Statistics and Elsewhere*. Statistics and Public Policy. Oxford University Press. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-19-852341-3](https://en.wikipedia.org/wiki/Special:BookSources/978-0-19-852341-3 "Special:BookSources/978-0-19-852341-3").
- [Laplace, Pierre-Simon de](https://en.wikipedia.org/wiki/Pierre-Simon_Laplace "Pierre-Simon Laplace") (1774). ["Mémoire sur la probabilité des causes par les événements"](http://gallica.bnf.fr/ark:/12148/bpt6k77596b/f32) \[Memoir on the probability of the causes of events\]. *Mémoires de l'Académie Royale des Sciences de Paris (Savants étrangers), Tome 6*: 621–656. Translated by Stephen M. Stigler in *Statistical Science* **1** (3), 1986: [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2245476](https://www.jstor.org/stable/2245476).
- Laplace, Pierre-Simon (1812). [*Théorie analytique des probabilités*](https://archive.org/details/thorieanalytiqu00laplgoog) \[*[Analytical theory of probabilities](https://en.wikipedia.org/wiki/Analytical_theory_of_probabilities "Analytical theory of probabilities")*\]. Paris, Ve. Courcier.
- [Le Cam, Lucien](https://en.wikipedia.org/wiki/Lucien_Le_Cam "Lucien Le Cam"); [Lo Yang, Grace](https://en.wikipedia.org/wiki/Grace_Yang "Grace Yang") (2000). *Asymptotics in Statistics: Some Basic Concepts* (second ed.). Springer. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-387-95036-5](https://en.wikipedia.org/wiki/Special:BookSources/978-0-387-95036-5 "Special:BookSources/978-0-387-95036-5").
- Leva, Joseph L. (1992). ["A fast normal random number generator"](https://web.archive.org/web/20100716035328/http://saluc.engr.uconn.edu/refs/crypto/rng/leva92afast.pdf) (PDF). *ACM Transactions on Mathematical Software*. **18** (4): 449–453. [CiteSeerX](https://en.wikipedia.org/wiki/CiteSeerX_\(identifier\) "CiteSeerX (identifier)") [10.1.1.544.5806](https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.544.5806). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1145/138351.138364](https://doi.org/10.1145%2F138351.138364). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [15802663](https://api.semanticscholar.org/CorpusID:15802663). Archived from [the original](http://saluc.engr.uconn.edu/refs/crypto/rng/leva92afast.pdf) (PDF) on July 16, 2010.
- [Lexis, Wilhelm](https://en.wikipedia.org/wiki/Wilhelm_Lexis "Wilhelm Lexis") (1878). "Sur la durée normale de la vie humaine et sur la théorie de la stabilité des rapports statistiques" \[On the normal duration of human life and on the theory of the stability of statistical ratios\]. *Annales de Démographie Internationale*. **II**. Paris: 447–462.
- Lukacs, Eugene; King, Edgar P. (1954). ["A Property of the Normal Distribution"](https://doi.org/10.1214%2Faoms%2F1177728796). *The Annals of Mathematical Statistics*. **25** (2): 389–394. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1214/aoms/1177728796](https://doi.org/10.1214%2Faoms%2F1177728796). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2236741](https://www.jstor.org/stable/2236741).
- McPherson, Glen (1990). [*Statistics in Scientific Investigation: Its Basis, Application and Interpretation*](https://archive.org/details/statisticsinscie0000mcph). Springer-Verlag. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-387-97137-7](https://en.wikipedia.org/wiki/Special:BookSources/978-0-387-97137-7 "Special:BookSources/978-0-387-97137-7").
- [Marsaglia, George](https://en.wikipedia.org/wiki/George_Marsaglia "George Marsaglia"); Tsang, Wai Wan (2000). ["The Ziggurat Method for Generating Random Variables"](https://doi.org/10.18637%2Fjss.v005.i08). *Journal of Statistical Software*. **5** (8). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.18637/jss.v005.i08](https://doi.org/10.18637%2Fjss.v005.i08).
- Marsaglia, George (2004). ["Evaluating the Normal Distribution"](https://doi.org/10.18637%2Fjss.v011.i04). *Journal of Statistical Software*. **11** (4). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.18637/jss.v011.i04](https://doi.org/10.18637%2Fjss.v011.i04).
- [Maxwell, James Clerk](https://en.wikipedia.org/wiki/James_Clerk_Maxwell "James Clerk Maxwell") (1860). ["V. Illustrations of the dynamical theory of gases. – Part I: On the motions and collisions of perfectly elastic spheres"](https://books.google.com/books?id=-YU7AQAAMAAJ&pg=PA19). *Philosophical Magazine*. Series 4. **19** (124): 19–32. [Bibcode](https://en.wikipedia.org/wiki/Bibcode_\(identifier\) "Bibcode (identifier)"):[1860LEDPM..19...19M](https://ui.adsabs.harvard.edu/abs/1860LEDPM..19...19M). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1080/14786446008642818](https://doi.org/10.1080%2F14786446008642818).
- Monahan, J. F. (1985). ["Accuracy in random number generation"](https://doi.org/10.1090%2FS0025-5718-1985-0804945-X). *Mathematics of Computation*. **45** (172): 559–568. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1090/S0025-5718-1985-0804945-X](https://doi.org/10.1090%2FS0025-5718-1985-0804945-X).
- [Mood, Alexander McFarlane](https://en.wikipedia.org/wiki/Alexander_M._Mood "Alexander M. Mood") (1950). [*Introduction to the Theory of Statistics*](https://archive.org/details/introductiontoth0000alex/page/n5/mode/2up). New York: McGraw-Hill.
- Patel, Jagdish K.; Read, Campbell B. (1996). *Handbook of the Normal Distribution* (2nd ed.). CRC Press. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-8247-9342-5](https://en.wikipedia.org/wiki/Special:BookSources/978-0-8247-9342-5 "Special:BookSources/978-0-8247-9342-5").
- [Pearson, Karl](https://en.wikipedia.org/wiki/Karl_Pearson "Karl Pearson") (1901). ["On Lines and Planes of Closest Fit to Systems of Points in Space"](http://stat.smmu.edu.cn/history/pearson1901.pdf) (PDF). *[Philosophical Magazine](https://en.wikipedia.org/wiki/Philosophical_Magazine "Philosophical Magazine")*. 6. **2** (11): 559–572. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1080/14786440109462720](https://doi.org/10.1080%2F14786440109462720). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [125037489](https://api.semanticscholar.org/CorpusID:125037489).
- [Pearson, Karl](https://en.wikipedia.org/wiki/Karl_Pearson "Karl Pearson") (1905). ["'Das Fehlergesetz und seine Verallgemeinerungen durch Fechner und Pearson'. A rejoinder"](https://zenodo.org/record/1449456) \['The law of error and its generalizations by Fechner and Pearson'. A rejoinder\]. *Biometrika*. **4** (1): 169–212. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.2307/2331536](https://doi.org/10.2307%2F2331536). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2331536](https://www.jstor.org/stable/2331536).
- Pearson, Karl (1920). ["Notes on the History of Correlation"](https://zenodo.org/record/1431597). *Biometrika*. **13** (1): 25–45. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1093/biomet/13.1.25](https://doi.org/10.1093%2Fbiomet%2F13.1.25). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2331722](https://www.jstor.org/stable/2331722).
- Rohrbasser, Jean-Marc; Véron, Jacques (2003). ["Wilhelm Lexis: The Normal Length of Life as an Expression of the 'Nature of Things'"](http://www.persee.fr/web/revues/home/prescript/article/pop_1634-2941_2003_num_58_3_18444). *Population*. **58** (3): 303–322. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.3917/pope.303.0303](https://doi.org/10.3917%2Fpope.303.0303).
- Shore, H. (1982). "Simple Approximations for the Inverse Cumulative Function, the Density Function and the Loss Integral of the Normal Distribution". *Journal of the Royal Statistical Society. Series C (Applied Statistics)*. **31** (2): 108–114. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.2307/2347972](https://doi.org/10.2307%2F2347972). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2347972](https://www.jstor.org/stable/2347972).
- Shore, H. (2005). "Accurate RMM-Based Approximations for the CDF of the Normal Distribution". *Communications in Statistics – Theory and Methods*. **34** (3): 507–513. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1081/sta-200052102](https://doi.org/10.1081%2Fsta-200052102). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [122148043](https://api.semanticscholar.org/CorpusID:122148043).
- Shore, H. (2011). "Response Modeling Methodology". *WIREs Comput Stat*. **3** (4): 357–372. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1002/wics.151](https://doi.org/10.1002%2Fwics.151). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [62021374](https://api.semanticscholar.org/CorpusID:62021374).
- Shore, H. (2012). "Estimating Response Modeling Methodology Models". *WIREs Comput Stat*. **4** (3): 323–333. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1002/wics.1199](https://doi.org/10.1002%2Fwics.1199). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [122366147](https://api.semanticscholar.org/CorpusID:122366147).
- [Stigler, Stephen M.](https://en.wikipedia.org/wiki/Stephen_Stigler "Stephen Stigler") (1978). ["Mathematical Statistics in the Early States"](https://doi.org/10.1214%2Faos%2F1176344123). *The Annals of Statistics*. **6** (2): 239–265. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1214/aos/1176344123](https://doi.org/10.1214%2Faos%2F1176344123). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2958876](https://www.jstor.org/stable/2958876).
- Stigler, Stephen M. (1982). "A Modest Proposal: A New Standard for the Normal". *The American Statistician*. **36** (2): 137–138. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.2307/2684031](https://doi.org/10.2307%2F2684031). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2684031](https://www.jstor.org/stable/2684031).
- Stigler, Stephen M. (1986). [*The History of Statistics: The Measurement of Uncertainty before 1900*](https://archive.org/details/historyofstatist00stig). Harvard University Press. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-674-40340-6](https://en.wikipedia.org/wiki/Special:BookSources/978-0-674-40340-6 "Special:BookSources/978-0-674-40340-6").
- Stigler, Stephen M. (1999). *Statistics on the Table*. Harvard University Press. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-674-83601-3](https://en.wikipedia.org/wiki/Special:BookSources/978-0-674-83601-3 "Special:BookSources/978-0-674-83601-3").
- Walker, Helen M. (1985). ["De Moivre on the Law of Normal Probability"](http://www.york.ac.uk/depts/maths/histstat/demoivre.pdf) (PDF). In Smith, David Eugene (ed.). *A Source Book in Mathematics*. Dover. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-486-64690-9](https://en.wikipedia.org/wiki/Special:BookSources/978-0-486-64690-9 "Special:BookSources/978-0-486-64690-9").
- [Wallace, C. S.](https://en.wikipedia.org/wiki/Chris_Wallace_\(computer_scientist\) "Chris Wallace (computer scientist)") (1996). ["Fast pseudo-random generators for normal and exponential variates"](https://doi.org/10.1145%2F225545.225554). *ACM Transactions on Mathematical Software*. **22** (1): 119–127. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10.1145/225545.225554](https://doi.org/10.1145%2F225545.225554). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [18514848](https://api.semanticscholar.org/CorpusID:18514848).
- [Weisstein, Eric W.](https://en.wikipedia.org/wiki/Eric_W._Weisstein "Eric W. Weisstein") ["Normal Distribution"](http://mathworld.wolfram.com/NormalDistribution.html). [MathWorld](https://en.wikipedia.org/wiki/MathWorld "MathWorld").
- West, Graeme (2009). ["Better Approximations to Cumulative Normal Functions"](https://web.archive.org/web/20120229202051/https://wilmott.com/pdfs/090721_west.pdf) (PDF). *Wilmott Magazine*: 70–76. Archived from [the original](https://wilmott.com/pdfs/090721_west.pdf) (PDF) on February 29, 2012.
- Zelen, Marvin; Severo, Norman C. (1972) \[First published 1964\]. [*Probability Functions (chapter 26)*](http://www.math.sfu.ca/~cbm/aands/page_931.htm). *[Handbook of mathematical functions with formulas, graphs, and mathematical tables](https://en.wikipedia.org/wiki/Abramowitz_and_Stegun "Abramowitz and Stegun")*, by [Abramowitz, M.](https://en.wikipedia.org/wiki/Milton_Abramowitz "Milton Abramowitz"); and [Stegun, I. A.](https://en.wikipedia.org/wiki/Irene_A._Stegun "Irene A. Stegun"): National Bureau of Standards. New York, NY: Dover. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-486-61272-0](https://en.wikipedia.org/wiki/Special:BookSources/978-0-486-61272-0 "Special:BookSources/978-0-486-61272-0").
- ["Normal distribution"](https://www.encyclopediaofmath.org/index.php?title=Normal_distribution), *[Encyclopedia of Mathematics](https://en.wikipedia.org/wiki/Encyclopedia_of_Mathematics "Encyclopedia of Mathematics")*, [EMS Press](https://en.wikipedia.org/wiki/European_Mathematical_Society "European Mathematical Society"), 2001 \[1994\]
- [Normal distribution calculator](https://www.hackmath.net/en/calculator/normal-distribution)