đŸ•·ïž Crawler Inspector

URL Lookup

Direct Parameter Lookup

Raw Queries and Responses

1. Shard Calculation

Query:
Response:
Calculated Shard: 152 (from laksa138)
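The raw query and response bodies are collapsed in this export. As a minimal sketch only: shard assignment is presumably a hash of the URL's host taken modulo the shard count. The hash function and the shard count below are illustrative guesses, not values confirmed by this report, so the result need not reproduce 152:

    -- Illustrative sketch: derive a shard ID from the host.
    -- cityHash64 is a standard ClickHouse hash; the shard count 512 is a guess.
    SELECT cityHash64('en.wikipedia.org') % 512 AS shard;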

2. Crawled Status Check

Query:
Response:
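The query body is not expanded here. A hypothetical lookup, reusing the column names that appear in the Page Info Filters table below; the per-shard table name pages_shard_152 is an assumption:

    -- Hypothetical: does the shard hold a stored download for this URL?
    SELECT download_stamp, download_http_code
    FROM pages_shard_152
    WHERE src_unparsed = 'https://en.wikipedia.org/wiki/Normal_distribution';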

3. Robots.txt Check

Query:
Response:
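Likewise collapsed; a sketch of what the robots lookup could look like, assuming a per-host table of cached robots.txt records (robots_rules and its columns are hypothetical names):

    -- Hypothetical: fetch the cached robots.txt record for the host.
    SELECT rules, fetched_at
    FROM robots_rules
    WHERE host = 'en.wikipedia.org';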

4. Spam/Ban Check

Query:
Response:
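A sketch built from the two spam columns this report itself references (fh_dont_index, ml_spam_score); the table name is the same assumed per-shard table:

    -- Hypothetical: the page passes if neither force-excluded nor ML-flagged.
    SELECT (fh_dont_index != 1) AND (ml_spam_score = 0) AS spam_ok
    FROM pages_shard_152
    WHERE src_unparsed = 'https://en.wikipedia.org/wiki/Normal_distribution';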

5. Seen Status Check

â„č Skipped - page is already crawled

📄 INDEXABLE · ✅ CRAWLED (2 days ago) · đŸ€– ROBOTS ALLOWED

Page Info Filters

| Filter | Status | Condition | Details |
| --- | --- | --- | --- |
| HTTP status | PASS | download_http_code = 200 | HTTP 200 |
| Age cutoff | PASS | download_stamp > now() - 6 MONTH | 0.1 months ago (distributed domain, exempt) |
| History drop | PASS | isNull(history_drop_reason) | No drop reason |
| Spam/ban | PASS | fh_dont_index != 1 AND ml_spam_score = 0 | ml_spam_score = 0 |
| Canonical | PASS | meta_canonical IS NULL OR = '' OR = src_unparsed | Not set |
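Read together, the five filters form one indexability predicate. A sketch that folds the table's conditions into a single WHERE clause (the table name is assumed; per this report the domain is exempt from the age cutoff, which a real query would have to special-case):

    -- Sketch: all PASS conditions from the filter table, combined.
    SELECT count() > 0 AS indexable
    FROM pages_shard_152
    WHERE src_unparsed = 'https://en.wikipedia.org/wiki/Normal_distribution'
      AND download_http_code = 200
      AND download_stamp > now() - INTERVAL 6 MONTH  -- distributed domains exempt
      AND isNull(history_drop_reason)
      AND fh_dont_index != 1
      AND ml_spam_score = 0
      AND (meta_canonical IS NULL OR meta_canonical = '' OR meta_canonical = src_unparsed);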

Page Details

| Property | Value |
| --- | --- |
| URL | https://en.wikipedia.org/wiki/Normal_distribution |
| Last Crawled | 2026-04-09 01:27:17 (2 days ago) |
| First Indexed | 2013-08-08 16:24:53 (12 years ago) |
| HTTP Status Code | 200 |
| Meta Title | Normal distribution - Wikipedia |
| Meta Description | null |
| Meta Canonical | null |
Boilerpipe Text
Normal distribution Probability density function The red curve is the standard normal distribution . Cumulative distribution function Notation Parameters = mean ( location ) = variance (squared scale ) Support PDF CDF Quantile Mean Median Mode Variance MAD AAD Skewness Excess kurtosis Entropy MGF CF Fisher information Kullback–Leibler divergence In probability theory and statistics , a normal distribution or Gaussian distribution is a type of continuous probability distribution for a real-valued random variable . The general form of its probability density function is [ 2 ] [ 3 ] [ 4 ] The parameter ⁠ ⁠ is the mean or expectation of the distribution (and also its median and mode ), while the parameter is the variance . The standard deviation of the distribution is the positive value ⁠ ⁠ (sigma). A random variable with a Gaussian distribution is said to be normally distributed and is called a normal deviate . Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known. [ 5 ] [ 6 ] Their importance is partly due to the central limit theorem . It states that the average of many statistically independent samples (observations) of a random variable with finite mean and variance is itself a random variable—whose distribution converges to a normal distribution as the number of samples increases. Therefore, physical quantities that are expected to be the sum of many independent processes, such as measurement errors , often have distributions that are nearly normal. [ 7 ] Moreover, Gaussian distributions have some unique properties that are valuable in analytic studies. For instance, any linear combination of a fixed collection of independent normal deviates is a normal deviate. Many results and methods, such as propagation of uncertainty and least squares [ 8 ] parameter fitting, can be derived analytically in explicit form when the relevant variables are normally distributed. A normal distribution is sometimes informally called a bell curve . [ 9 ] [ 10 ] However, many other distributions are bell-shaped (such as the Cauchy , Student's t , and logistic distributions). (For other names, see Naming .) The univariate probability distribution is generalized for vectors in the multivariate normal distribution and for matrices in the matrix normal distribution . Standard normal distribution [ edit ] The simplest case of a normal distribution is known as the standard normal distribution or unit normal distribution . This is a special case when and , and it is described by this probability density function (or density): [ 11 ] The variable ⁠ ⁠ has a mean of 0 and a variance and standard deviation of 1. The density has its peak value at and inflection points at and ⁠ ⁠ . Although the density above is most commonly known as the standard normal, a few authors have used that term to describe other versions of the normal distribution. Carl Friedrich Gauss , for example, once defined the standard normal as which has a variance of ⁠ ⁠ , and Stephen Stigler once defined the standard normal as which has a simple functional form and a variance of [ 12 ] General normal distribution [ edit ] If ⁠ ⁠ is a standard normal deviate , then will have a normal distribution with expected value ⁠ ⁠ and standard deviation ⁠ ⁠ . 
This is equivalent to saying that the standard normal distribution ⁠ ⁠ can be scaled/stretched by a factor of ⁠ ⁠ and shifted by ⁠ ⁠ to yield a different normal distribution, called ⁠ ⁠ . Conversely, if ⁠ ⁠ is a normal deviate with parameters ⁠ ⁠ and , then this ⁠ ⁠ distribution can be re-scaled and shifted via the formula to convert it to the standard normal distribution. This variate is also called the standardized form of ⁠ ⁠ . In particular, the probability density function for ⁠ ⁠ can be written in terms of the standard normal distribution ⁠ ⁠ (with zero mean and unit variance): The probability density must be scaled by so that the integral is still 1. The probability density of the standard Gaussian distribution (standard normal distribution, with zero mean and unit variance) is often denoted with the Greek letter ⁠ ⁠ ( phi ). [ 13 ] The variant form of the Greek letter phi, ⁠ ⁠ , is also used quite often. The normal distribution is often referred to as or ⁠ ⁠ . [ 14 ] Thus when a random variable ⁠ ⁠ is normally distributed with mean ⁠ ⁠ and standard deviation ⁠ ⁠ , one may write Alternative parameterizations [ edit ] Some authors advocate using the precision ⁠ ⁠ as the parameter defining the width of the distribution, instead of the standard deviation ⁠ ⁠ or the variance ⁠ ⁠ . The precision is normally defined as the reciprocal of the variance, ⁠ ⁠ . [ 15 ] The formula for the distribution then becomes This choice is claimed to have advantages in numerical computations when ⁠ ⁠ is very close to zero, and simplifies formulas in some contexts, such as in the Bayesian inference of variables with multivariate normal distribution . Alternatively, the reciprocal of the standard deviation might be defined as the precision , in which case the expression of the normal distribution becomes According to Stigler, this formulation is advantageous because of a much simpler and easier-to-remember formula, and simple approximate formulas for the quantiles of the distribution. Normal distributions form an exponential family with natural parameters and , and natural statistics x and x 2 . The dual expectation parameters for normal distribution are η 1 = ÎŒ and η 2 = ÎŒ 2 + σ 2 . Cumulative distribution function [ edit ] The cumulative distribution function (CDF) of the standard normal distribution, usually denoted with the capital Greek letter ⁠ ⁠ , is the integral The related error function gives the probability of a random variable, with normal distribution of mean 0 and variance 1/2, falling in the range ⁠ ⁠ . That is: These integrals cannot be expressed in terms of elementary functions, and are often said to be special functions . However, many numerical approximations are known; see below for more. The two functions are closely related, namely For a generic normal distribution with density ⁠ ⁠ , mean ⁠ ⁠ and variance , the cumulative distribution function is The probability that x lies between a and b with a < b is therefore [ 16 ] : 84  The complement of the standard normal cumulative distribution function, , is often called the Q-function , especially in engineering texts. [ 17 ] [ 18 ] It gives the probability that the value of a standard normal random variable ⁠ ⁠ will exceed ⁠ ⁠ : ⁠ ⁠ . Other definitions of the ⁠ ⁠ -function, all of which are simple transformations of ⁠ ⁠ , are also used occasionally. [ 19 ] The graph of the standard normal cumulative distribution function ⁠ ⁠ has 2-fold rotational symmetry around the point (0,1/2); that is, ⁠ ⁠ . 
Its antiderivative (indefinite integral) can be expressed as follows: An asymptotic expansion of the cumulative distribution function for large x can be derived using integration by parts : where denotes the double factorial . For more, see Error function § Asymptotic expansion . [ 20 ] Taylor series representation [ edit ] The Taylor series for the normal distribution ⁠ ⁠ can be derived by substituting ⁠ ⁠ into the Taylor series for the exponential function : [ 21 ] This series can be integrated term by term to obtain the Taylor series for the cumulative distribution function: [ 22 ] However, this series is ineffective for calculation due to slow convergence, except when ⁠ ⁠ is small. [ 22 ] Both of these series describe entire functions , which converge for all real and complex values of ⁠ ⁠ . Recursive computation with Taylor series [ edit ] The recurrence relation for Hermite polynomials He n ( x ) may be used to efficiently construct the Taylor series expansion about any point x 0 : where: Standard deviation and coverage [ edit ] For the normal distribution, the values less than one standard deviation from the mean account for 68.27% of the set; while two standard deviations from the mean account for 95.45%; and three standard deviations account for 99.73%. About 68% of values drawn from a normal distribution are within one standard deviation σ from the mean; about 95% of the values lie within two standard deviations; and about 99.7% are within three standard deviations. [ 9 ] This is known as the 68–95–99.7 (empirical) rule , or the 3-sigma rule . More precisely, the probability that a normal deviate lies in the range between and is given by To 12 significant digits, the values for are: ⁠ ⁠ OEIS 1 0.682 689 492 137 0.317 310 507 863 3 .151 487 187 53 OEIS :  A178647 2 0.954 499 736 104 0.045 500 263 896 21 .977 894 5080 OEIS :  A110894 3 0.997 300 203 937 0.002 699 796 063 370 .398 347 345 OEIS :  A270712 4 0.999 936 657 516 0.000 063 342 484 15 787 .192 7673 5 0.999 999 426 697 0.000 000 573 303 1 744 277 .893 62 6 0.999 999 998 027 0.000 000 001 973 506 797 345 .897 For large ⁠ ⁠ , one can use the approximation The quantile function of a distribution is the inverse of the cumulative distribution function. The quantile function of the standard normal distribution is called the probit function , and can be expressed in terms of the inverse error function : For a normal random variable with mean ⁠ ⁠ and variance , the quantile function is The quantile of the standard normal distribution is commonly denoted as ⁠ ⁠ . These values are used in hypothesis testing , construction of confidence intervals and Q–Q plots . A normal random variable ⁠ ⁠ will exceed with probability , and will lie outside the interval with probability ⁠ ⁠ . In particular, the quantile is 1.96 ; therefore a normal random variable will lie outside the interval in only 5% of cases. The following table gives the quantile such that ⁠ ⁠ will lie in the range with a specified probability ⁠ ⁠ . These values are useful to determine tolerance interval for sample averages and other statistical estimators with normal (or asymptotically normal) distributions. [ 23 ] The following table shows , not as defined above. 
⁠ ⁠   ⁠ ⁠ 0.80 1.281 551 565 545 0.999 3.290 526 731 492 0.90 1.644 853 626 951 0.9999 3.890 591 886 413 0.95 1.959 963 984 540 0.99999 4.417 173 413 469 0.98 2.326 347 874 041 0.999999 4.891 638 475 699 0.99 2.575 829 303 549 0.9999999 5.326 723 886 384 0.995 2.807 033 768 344 0.99999999 5.730 728 868 236 0.998 3.090 232 306 168 0.999999999 6.109 410 204 869 For small ⁠ ⁠ , the quantile function has the useful asymptotic expansion [ citation needed ] Using root finding to compute the quantile function [ edit ] Any of the described approaches for computing the cumulative distribution function can be used with Newton's method (or another root-finding algorithm such as Halley's method ) to find the value of ⁠ ⁠ for which ⁠ ⁠ for some desired quantile ⁠ ⁠ . For example, starting with an initial, approximately correct guess ⁠ ⁠ , increasingly better approximations ⁠ ⁠ , ⁠ ⁠ , ... can be calculated iteratively using Newton's method with The normal distribution is the only distribution whose cumulants beyond the first two (i.e., other than the mean and variance ) are zero. It is also the continuous distribution with the maximum entropy for a specified mean and variance. [ 24 ] [ 25 ] Geary has shown, assuming that the mean and variance are finite, that the normal distribution is the only distribution where the mean and variance calculated from a set of independent draws are independent of each other. [ 26 ] [ 27 ] The normal distribution is a subclass of the elliptical distributions . The normal distribution is symmetric about its mean, and is non-zero over the entire real line. As such it may not be a suitable model for variables that are inherently positive or strongly skewed, such as the weight of a person or the price of a share of stock . Such variables may be better described by other distributions, such as the log-normal distribution or the Pareto distribution . The value of the normal density is practically zero when the value ⁠ ⁠ lies more than a few standard deviations away from the mean (e.g., a spread of three standard deviations covers all but 0.27% of the total distribution). Therefore, it may not be an appropriate model when one expects a significant fraction of outliers —values that lie many standard deviations away from the mean—and least squares and other statistical inference methods that are optimal for normally distributed variables often become highly unreliable when applied to such data. In those cases, a more heavy-tailed distribution should be assumed and appropriate robust statistical inference methods applied. The Gaussian distribution belongs to the family of stable distributions which are the attractors of sums of independent, identically distributed distributions whether or not the mean or variance is finite. Except for the Gaussian which is a limiting case, all stable distributions have heavy tails and infinite variance. It is one of the few distributions that are stable and that have probability density functions that can be expressed analytically, the others being the Cauchy distribution and the LĂ©vy distribution . Symmetries and derivatives [ edit ] The normal distribution with density (mean ⁠ ⁠ and variance ) has the following properties: Furthermore, the density ⁠ ⁠ of the standard normal distribution (i.e. and ) also has the following properties: The plain and absolute moments of a variable ⁠ ⁠ are the expected values of and , respectively. 
If the expected value ⁠ ⁠ of ⁠ ⁠ is zero, these parameters are called central moments; otherwise, these parameters are called non-central moments. Usually we are interested only in moments with integer order ⁠ ⁠ . If ⁠ ⁠ has a normal distribution, the non-central moments exist and are finite for any ⁠ ⁠ whose real part is greater than −1. For any non-negative integer ⁠ ⁠ , the plain central moments are: [ 31 ] Here denotes the double factorial , that is, the product of all numbers from ⁠ ⁠ to 1 that have the same parity as The central absolute moments coincide with plain moments for all even orders, but are nonzero for odd orders. For any non-negative integer The last formula is valid also for any non-integer When the mean the plain and absolute moments can be expressed in terms of confluent hypergeometric functions and [ 32 ] These expressions remain valid even when ⁠ ⁠ is not an integer. See also generalized Hermite polynomials . Order Non-central moment, Central moment, 0 ⁠ ⁠ ⁠ ⁠ 1 ⁠ ⁠ ⁠ ⁠ 2 3 ⁠ ⁠ 4 5 ⁠ ⁠ 6 7 ⁠ ⁠ 8 The expectation of ⁠ ⁠ conditioned on the event that ⁠ ⁠ lies in an interval is given by where ⁠ ⁠ and ⁠ ⁠ respectively are the density and the cumulative distribution function of ⁠ ⁠ . For this is known as the inverse Mills ratio . Note that above, density ⁠ ⁠ of ⁠ ⁠ is used instead of standard normal density as in inverse Mills ratio, so here we have instead of ⁠ ⁠ . Fourier transform and characteristic function [ edit ] The Fourier transform of a normal density ⁠ ⁠ with mean ⁠ ⁠ and variance is [ 33 ] where ⁠ ⁠ is the imaginary unit . If the mean , the first factor is 1, and the Fourier transform is, apart from a constant factor, a normal density on the frequency domain , with mean 0 and variance ⁠ ⁠ . In particular, the standard normal distribution ⁠ ⁠ is an eigenfunction of the Fourier transform. In probability theory, the Fourier transform of the probability distribution of a real-valued random variable ⁠ ⁠ is closely connected to the characteristic function of that variable, which is defined as the expected value of , as a function of the real variable ⁠ ⁠ (the frequency parameter of the Fourier transform). This definition can be analytically extended to a complex-value variable ⁠ ⁠ . [ 34 ] The relation between both is: The real and imaginary parts of give: and Similarly, and These formulas evaluated at give the expected value of these basic trigonometric and hyperbolic functions over a Gaussian random variable , which also could be seen as consequences of the Isserlis's theorem . Moment- and cumulant-generating functions [ edit ] The moment generating function of a real random variable ⁠ ⁠ is the expected value of , as a function of the real parameter ⁠ ⁠ . For a normal distribution with density ⁠ ⁠ , mean ⁠ ⁠ and variance , the moment generating function exists and is equal to For any ⁠ ⁠ , the coefficient of ⁠ ⁠ in the moment generating function (expressed as an exponential power series in ⁠ ⁠ ) is the normal distribution's expected value ⁠ ⁠ . The cumulant generating function is the logarithm of the moment generating function, namely The coefficients of this exponential power series define the cumulants, but because this is a quadratic polynomial in ⁠ ⁠ , only the first two cumulants are nonzero, namely the mean  ⁠ ⁠ and the variance  ⁠ ⁠ . Some authors prefer to instead work with the characteristic function E[ e itX ] = e iÎŒt − σ 2 t 2 /2 and ln E[ e itX ] = iÎŒt − ⁠ 1 / 2 ⁠ σ 2 t 2 . 
Stein operator and class [ edit ] Within Stein's method the Stein operator and class of a random variable are and the class of all absolutely continuous functions ⁠ ⁠ such that ⁠ ⁠ . Zero-variance limit [ edit ] In the limit when approaches zero, the probability density approaches zero everywhere except at , where it approaches , while its integral remains equal to 1. An extension of the normal distribution to the case with zero variance can be defined using the Dirac delta measure , although the resulting random variables are not absolutely continuous and thus do not have probability density functions . The cumulative distribution function of such a random variable is then the Heaviside step function translated by the mean , namely Of all probability distributions over the reals with a specified finite mean ⁠ ⁠ and finite variance ⁠ ⁠ , the normal distribution is the one with maximum entropy . [ 24 ] To see this, let ⁠ ⁠ be a continuous random variable with probability density ⁠ ⁠ . The entropy of ⁠ ⁠ is defined as [ 35 ] [ 36 ] [ 37 ] where is understood to be zero whenever ⁠ ⁠ . This functional can be maximized, subject to the constraints that the distribution is properly normalized and has a specified mean and variance, by using variational calculus . A function with three Lagrange multipliers is defined: At maximum entropy, a small variation about will produce a variation about ⁠ ⁠ which is equal to 0: Since this must hold for any small ⁠ ⁠ , the factor multiplying ⁠ ⁠ must be zero, and solving for ⁠ ⁠ yields: The Lagrange constraints that ⁠ ⁠ is properly normalized and has the specified mean and variance are satisfied if and only if ⁠ ⁠ , ⁠ ⁠ , and ⁠ ⁠ are chosen so that The entropy of a normal distribution is equal to which is independent of the mean ⁠ ⁠ . If the characteristic function of some random variable ⁠ ⁠ is of the form in a neighborhood of zero, where is a polynomial , then the Marcinkiewicz theorem (named after JĂłzef Marcinkiewicz ) asserts that ⁠ ⁠ can be at most a quadratic polynomial, and therefore ⁠ ⁠ is a normal random variable. [ 38 ] The consequence of this result is that the normal distribution is the only distribution with a finite number (two) of non-zero cumulants . If ⁠ ⁠ and ⁠ ⁠ are jointly normal and uncorrelated , then they are independent . The requirement that ⁠ ⁠ and ⁠ ⁠ should be jointly normal is essential; without it the property does not hold. [ 39 ] [ 40 ] [proof] For non-normal random variables uncorrelatedness does not imply independence. The Kullback–Leibler divergence of one normal distribution from another is given by: [ 41 ] The Hellinger distance between the same distributions is equal to The Fisher information matrix for a normal distribution w.r.t. ⁠ ⁠ and is diagonal and takes the form The conjugate prior of the mean of a normal distribution is another normal distribution. [ 42 ] Specifically, if are iid and the prior is , then the posterior distribution for the estimator of ⁠ ⁠ will be The family of normal distributions not only forms an exponential family (EF), but in fact forms a natural exponential family (NEF) with quadratic variance function ( NEF-QVF ). Many properties of normal distributions generalize to properties of NEF-QVF distributions, NEF distributions, or EF distributions generally. NEF-QVF distributions comprises 6 families, including Poisson, Gamma, binomial, and negative binomial distributions, while many of the common families studied in probability and statistics are NEF or EF. 
In information geometry , the family of normal distributions forms a statistical manifold with constant curvature ⁠ ⁠ . The same family is flat with respect to the (±1)-connections and . [ 43 ] If are distributed according to , then . Note that there is no assumption of independence. [ 44 ] Central limit theorem [ edit ] As the number of discrete events increases, the function begins to resemble a normal distribution. Comparison of probability density functions, p ( k ) for the sum of n fair 6-sided dice to show their convergence to a normal distribution with increasing na , in accordance to the central limit theorem. In the bottom-right graph, smoothed profiles of the previous graphs are rescaled, superimposed and compared with a normal distribution (black curve). The central limit theorem states that under certain (fairly common) conditions, the sum of many random variables will have an approximately normal distribution. More specifically, where are independent and identically distributed random variables with the same arbitrary distribution, zero mean, and variance and ⁠ ⁠ is their mean scaled by Then, as ⁠ ⁠ increases, the probability distribution of ⁠ ⁠ will tend to the normal distribution with zero mean and variance ⁠ ⁠ . The theorem can be extended to variables that are not independent and/or not identically distributed if certain constraints are placed on the degree of dependence and the moments of the distributions. Many test statistics , scores , and estimators encountered in practice contain sums of certain random variables in them, and even more estimators can be represented as sums of random variables through the use of influence functions . The central limit theorem implies that those statistical parameters will have asymptotically normal distributions. The central limit theorem also implies that certain distributions can be approximated by the normal distribution, for example: Whether these approximations are sufficiently accurate depends on the purpose for which they are needed, and the rate of convergence to the normal distribution. It is typically the case that such approximations are less accurate in the tails of the distribution. A general upper bound for the approximation error in the central limit theorem is given by the Berry–Esseen theorem , improvements of the approximation are given by the Edgeworth expansions . This theorem can also be used to justify modeling the sum of many uniform noise sources as Gaussian noise . See AWGN . Operations and functions of normal variables [ edit ] Operations on a single normal variable [ edit ] If ⁠ ⁠ is distributed normally with mean ⁠ ⁠ and variance , then Operations on two independent normal variables [ edit ] If and are two independent normal random variables, with means , and variances , , then their sum will also be normally distributed, [proof] with mean and variance . In particular, if ⁠ ⁠ and ⁠ ⁠ are independent normal deviates with zero mean and variance , then and are also independent and normally distributed, with zero mean and variance . This is a special case of the polarization identity . [ 46 ] If , are two independent normal deviates with mean ⁠ ⁠ and variance , and ⁠ ⁠ , ⁠ ⁠ are arbitrary real numbers, then the variable is also normally distributed with mean ⁠ ⁠ and variance . It follows that the normal distribution is stable (with exponent ). If , are normal distributions, then their normalized geometric mean is a normal distribution with and . 
Operations on two independent standard normal variables [ edit ] If and are two independent standard normal random variables with mean 0 and variance 1, then Operations on multiple independent normal variables [ edit ] A quadratic form of a normal vector, i.e. a quadratic function of multiple independent or correlated normal variables, is a generalized chi-square variable. Operations on the density function [ edit ] The split normal distribution is most directly defined in terms of joining scaled sections of the density functions of different normal distributions and rescaling the density to integrate to one. The truncated normal distribution results from rescaling a section of a single density function. Infinite divisibility and CramĂ©r's theorem [ edit ] For any positive integer n , any normal distribution with mean ⁠ ⁠ and variance is the distribution of the sum of n independent normal deviates, each with mean and variance . This property is called infinite divisibility . [ 51 ] Conversely, if and are independent random variables and their sum has a normal distribution, then both and must be normal deviates. [ 52 ] This result is known as CramĂ©r's decomposition theorem , and is equivalent to saying that the convolution of two distributions is normal if and only if both are normal. CramĂ©r's theorem implies that a linear combination of independent non-Gaussian variables will never have an exactly normal distribution, although it may approach it arbitrarily closely. [ 38 ] The Kac–Bernstein theorem [ edit ] The Kac–Bernstein theorem states that if and ⁠ ⁠ are independent and and are also independent, then both X and Y must necessarily have normal distributions. [ 53 ] [ 54 ] More generally, if are independent random variables, then two distinct linear combinations and will be independent if and only if all are normal and , where denotes the variance of . [ 53 ] The notion of normal distribution, being one of the most important distributions in probability theory, has been extended far beyond the standard framework of the univariate (that is one-dimensional) case (Case 1). All these extensions are also called normal or Gaussian laws, so a certain ambiguity in names exists. The multivariate normal distribution describes the Gaussian law in the k -dimensional Euclidean space . A vector X ∈ R k is multivariate-normally distributed if any linear combination of its components ÎŁ k j =1 a j X j has a (univariate) normal distribution. The variance of X is a k  ×  k symmetric positive-definite matrix V . The multivariate normal distribution is a special case of the elliptical distributions . As such, its iso-density loci in the k = 2 case are ellipses and in the case of arbitrary k are ellipsoids . Rectified Gaussian distribution a rectified version of normal distribution with all the negative elements reset to 0. Complex normal distribution deals with the complex normal vectors. A complex vector X ∈ C k is said to be normal if both its real and imaginary components jointly possess a 2 k -dimensional multivariate normal distribution. The variance-covariance structure of X is described by two matrices: the variance matrix Γ , and the relation matrix C . Matrix normal distribution describes the case of normally distributed matrices. Gaussian processes are the normally distributed stochastic processes . These can be viewed as elements of some infinite-dimensional Hilbert space H , and thus are the analogues of multivariate normal vectors for the case k = ∞ . 
A random element h ∈ H is said to be normal if for any constant a ∈ H the scalar product ( a , h ) has a (univariate) normal distribution. The variance structure of such Gaussian random element can be described in terms of the linear covariance operator K : H → H . Several Gaussian processes became popular enough to have their own names: Brownian motion ; Brownian bridge ; and Ornstein–Uhlenbeck process . Gaussian q-distribution is an abstract mathematical construction that represents a q-analogue of the normal distribution. the q-Gaussian is an analogue of the Gaussian distribution, in the sense that it maximises the Tsallis entropy , and is one type of Tsallis distribution . This distribution is different from the Gaussian q-distribution above. The Kaniadakis Îș -Gaussian distribution is a generalization of the Gaussian distribution which arises from the Kaniadakis statistics , being one of the Kaniadakis distributions . A random variable X has a two-piece normal distribution if it has a distribution where ÎŒ is the mean and σ 2 1   and σ 2 2   are the variances of the distribution to the left and right of the mean respectively. The mean E( X ) , variance V( X ) , and third central moment T( X ) of this distribution have been determined [ 55 ] One of the main practical uses of the Gaussian law is to model the empirical distributions of many different random variables encountered in practice. In such case a possible extension would be a richer family of distributions, having more than two parameters and therefore being able to fit the empirical distribution more accurately. The examples of such extensions are: Pearson distribution — a four-parameter family of probability distributions that extend the normal law to include different skewness and kurtosis values. The generalized normal distribution , also known as the exponential power distribution, allows for distribution tails with thicker or thinner asymptotic behaviors. Statistical inference [ edit ] Estimation of parameters [ edit ] It is often the case that we do not know the parameters of the normal distribution, but instead want to estimate them. That is, having a sample from a normal population we would like to learn the approximate values of parameters ⁠ ⁠ and . The standard approach to this problem is the maximum likelihood method, which requires maximization of the log-likelihood function : Taking derivatives with respect to ⁠ ⁠ and and solving the resulting system of first order conditions yields the maximum likelihood estimates : Then is as follows: Estimator is called the sample mean , since it is the arithmetic mean of all observations. The statistic is complete and sufficient for ⁠ ⁠ , and therefore by the Lehmann–ScheffĂ© theorem , is the uniformly minimum variance unbiased (UMVU) estimator. [ 56 ] In finite samples it is distributed normally: The variance of this estimator is equal to the ΌΌ -element of the inverse Fisher information matrix . This implies that the estimator is finite-sample efficient . Of practical importance is the standard error of being proportional to , that is, if one wishes to decrease the standard error by a factor of 10, one must increase the number of points in the sample by a factor of 100. This fact is widely used in determining sample sizes for opinion polls and the number of trials in Monte Carlo simulations . From the standpoint of the asymptotic theory , is consistent , that is, it converges in probability to ⁠ ⁠ as . 
The estimator is also asymptotically normal , which is a simple corollary of it being normal in finite samples: The estimator is called the sample variance , since it is the variance of the sample ( ). In practice, another estimator is often used instead of the . This other estimator is denoted , and is also called the sample variance , which represents a certain ambiguity in terminology; its square root ⁠ ⁠ is called the sample standard deviation . The estimator differs from by having ( n − 1) instead of  n in the denominator (the so-called Bessel's correction ): The difference between and becomes negligibly small for large n ' s. In finite samples however, the motivation behind the use of is that it is an unbiased estimator of the underlying parameter , whereas is biased. Also, by the Lehmann–ScheffĂ© theorem the estimator is uniformly minimum variance unbiased ( UMVU ), [ 56 ] which makes it the "best" estimator among all unbiased ones. However it can be shown that the biased estimator is better than the in terms of the mean squared error (MSE) criterion. In finite samples both and have scaled chi-squared distribution with ( n − 1) degrees of freedom: The first of these expressions shows that the variance of is equal to , which is slightly greater than the σσ -element of the inverse Fisher information matrix , which is . Thus, is not an efficient estimator for , and moreover, since is UMVU, we can conclude that the finite-sample efficient estimator for does not exist. Applying the asymptotic theory, both estimators and are consistent, that is they converge in probability to as the sample size . The two estimators are also both asymptotically normal: In particular, both estimators are asymptotically efficient for . Confidence intervals [ edit ] By Cochran's theorem , for normal distributions the sample mean and the sample variance s 2 are independent , which means there can be no gain in considering their joint distribution . There is also a converse theorem: if in a sample the sample mean and sample variance are independent, then the sample must have come from the normal distribution. The independence between and s can be employed to construct the so-called t-statistic : This quantity t has the Student's t-distribution with ( n − 1) degrees of freedom, and it is an ancillary statistic (independent of the value of the parameters). Inverting the distribution of this t -statistics will allow us to construct the confidence interval for ÎŒ ; [ 57 ] similarly, inverting the χ 2 distribution of the statistic s 2 will give us the confidence interval for σ 2 : [ 58 ] where t k , p and χ   2 k,p   are the p th quantiles of the t - and χ 2 -distributions respectively. These confidence intervals are of the confidence level 1 − α , meaning that the true values ÎŒ and σ 2 fall outside of these intervals with probability (or significance level ) α . In practice people usually take α = 5% , resulting in the 95% confidence intervals. The confidence interval for σ can be found by taking the square root of the interval bounds for σ 2 . Approximate formulas can be derived from the asymptotic distributions of and s 2 : The approximate formulas become valid for large values of n , and are more convenient for the manual calculation since the standard normal quantiles z α /2 do not depend on n . In particular, the most popular value of α = 5% , results in | z 0.025 | = 1.96 . Normality tests assess the likelihood that the given data set { x 1 , ..., x n } comes from a normal distribution. 
Typically the null hypothesis H 0 is that the observations are distributed normally with unspecified mean ÎŒ and variance σ 2 , versus the alternative H a that the distribution is arbitrary. Many tests (over 40) have been devised for this problem. The more prominent of them are outlined below: Diagnostic plots are more intuitively appealing but subjective at the same time, as they rely on informal human judgement to accept or reject the null hypothesis. Q–Q plot , also known as normal probability plot or rankit plot—is a plot of the sorted values from the data set against the expected values of the corresponding quantiles from the standard normal distribution. That is, it is a plot of point of the form ( Ί −1 ( p k ), x ( k ) ) , where plotting points p k are equal to p k = ( k − α )/( n + 1 − 2 α ) and α is an adjustment constant, which can be anything between 0 and 1. If the null hypothesis is true, the plotted points should approximately lie on a straight line. P–P plot – similar to the Q–Q plot, but used much less frequently. This method consists of plotting the points ( Ί ( z ( k ) ), p k ) , where . For normally distributed data this plot should lie on a straight line between (0, 0) and  (1, 1) . Goodness-of-fit tests : Moment-based tests : D'Agostino's K-squared test Jarque–Bera test Shapiro–Wilk test : This is based on the line in the Q–Q plot having the slope of σ . The test compares the least squares estimate of that slope with the value of the sample variance, and rejects the null hypothesis if these two quantities differ significantly. Tests based on the empirical distribution function : Anderson–Darling test Lilliefors test (an adaptation of the Kolmogorov–Smirnov test ) Bayesian analysis of the normal distribution [ edit ] Bayesian analysis of normally distributed data is complicated by the many different possibilities that may be considered: Either the mean, or the variance, or neither, may be considered a fixed quantity. When the variance is unknown, analysis may be done directly in terms of the variance, or in terms of the precision , the reciprocal of the variance. The reason for expressing the formulas in terms of precision is that the analysis of most cases is simplified. Both univariate and multivariate cases need to be considered. Either conjugate or improper prior distributions may be placed on the unknown variables. An additional set of cases occurs in Bayesian linear regression , where in the basic model the data is assumed to be normally distributed, and normal priors are placed on the regression coefficients . The resulting analysis is similar to the basic cases of independent identically distributed data. The formulas for the non-linear-regression cases are summarized in the conjugate prior article. Sum of two quadratics [ edit ] The following auxiliary formula is useful for simplifying the posterior update equations, which otherwise become fairly tedious. This equation rewrites the sum of two quadratics in x by expanding the squares, grouping the terms in x , and completing the square . Note the following about the complex constant factors attached to some of the terms: The factor has the form of a weighted average of y and z . This shows that this factor can be thought of as resulting from a situation where the reciprocals of quantities a and b add directly, so to combine a and b themselves, it is necessary to reciprocate, add, and reciprocate the result again to get back into the original units. 
This is exactly the sort of operation performed by the harmonic mean , so it is not surprising that is one-half the harmonic mean of a and b . A similar formula can be written for the sum of two vector quadratics: If x , y , z are vectors of length k , and A and B are symmetric , invertible matrices of size , then where The form x â€Č A x is called a quadratic form and is a scalar : In other words, it sums up all possible combinations of products of pairs of elements from x , with a separate coefficient for each. In addition, since , only the sum matters for any off-diagonal elements of A , and there is no loss of generality in assuming that A is symmetric . Furthermore, if A is symmetric, then the form Sum of differences from the mean [ edit ] Another useful formula is as follows: where With known variance [ edit ] For a set of i.i.d. normally distributed data points X of size n where each individual point x follows with known variance σ 2 , the conjugate prior distribution is also normally distributed. This can be shown more easily by rewriting the variance as the precision , i.e. using τ = 1/ σ 2 . Then if and we proceed as follows. First, the likelihood function is (using the formula above for the sum of differences from the mean): Then, we proceed as follows: In the above derivation, we used the formula above for the sum of two quadratics and eliminated all constant factors not involving  ÎŒ . The result is the kernel of a normal distribution, with mean and precision , i.e. This can be written as a set of Bayesian update equations for the posterior parameters in terms of the prior parameters: That is, to combine n data points with total precision of nτ (or equivalently, total variance of n / σ 2 ) and mean of values , derive a new total precision simply by adding the total precision of the data to the prior total precision, and form a new mean through a precision-weighted average , i.e. a weighted average of the data mean and the prior mean, each weighted by the associated total precision. This makes logical sense if the precision is thought of as indicating the certainty of the observations: In the distribution of the posterior mean, each of the input components is weighted by its certainty, and the certainty of this distribution is the sum of the individual certainties. (For the intuition of this, compare the expression "the whole is (or is not) greater than the sum of its parts". In addition, consider that the knowledge of the posterior comes from a combination of the knowledge of the prior and likelihood, so it makes sense that we are more certain of it than of either of its components.) The above formula reveals why it is more convenient to do Bayesian analysis of conjugate priors for the normal distribution in terms of the precision. The posterior precision is simply the sum of the prior and likelihood precisions, and the posterior mean is computed through a precision-weighted average, as described above. The same formulas can be written in terms of variance by reciprocating all the precisions, yielding the more ugly formulas For a set of i.i.d. normally distributed data points X of size n where each individual point x follows with known mean ÎŒ , the conjugate prior of the variance has an inverse gamma distribution or a scaled inverse chi-squared distribution . The two are equivalent except for having different parameterizations . Although the inverse gamma is more commonly used, we use the scaled inverse chi-squared for the sake of convenience. 
The prior for σ 2 is as follows: The likelihood function from above, written in terms of the variance, is: where Then: The above is also a scaled inverse chi-squared distribution where or equivalently Reparameterizing in terms of an inverse gamma distribution , the result is: With unknown mean and unknown variance [ edit ] For a set of i.i.d. normally distributed data points X of size n where each individual point x follows with unknown mean ÎŒ and unknown variance σ 2 , a combined (multivariate) conjugate prior is placed over the mean and variance, consisting of a normal-inverse-gamma distribution . Logically, this originates as follows: From the analysis of the case with unknown mean but known variance, we see that the update equations involve sufficient statistics computed from the data consisting of the mean of the data points and the total variance of the data points, computed in turn from the known variance divided by the number of data points. From the analysis of the case with unknown variance but known mean, we see that the update equations involve sufficient statistics over the data consisting of the number of data points and sum of squared deviations . Keep in mind that the posterior update values serve as the prior distribution when further data is handled. Thus, we should logically think of our priors in terms of the sufficient statistics just described, with the same semantics kept in mind as much as possible. To handle the case where both mean and variance are unknown, we could place independent priors over the mean and variance, with fixed estimates of the average mean, total variance, number of data points used to compute the variance prior, and sum of squared deviations. Note however that in reality, the total variance of the mean depends on the unknown variance, and the sum of squared deviations that goes into the variance prior (appears to) depend on the unknown mean. In practice, the latter dependence is relatively unimportant: Shifting the actual mean shifts the generated points by an equal amount, and on average the squared deviations will remain the same. This is not the case, however, with the total variance of the mean: As the unknown variance increases, the total variance of the mean will increase proportionately, and we would like to capture this dependence. This suggests that we create a conditional prior of the mean on the unknown variance, with a hyperparameter specifying the mean of the pseudo-observations associated with the prior, and another parameter specifying the number of pseudo-observations. This number serves as a scaling parameter on the variance, making it possible to control the overall variance of the mean relative to the actual variance parameter. The prior for the variance also has two hyperparameters, one specifying the sum of squared deviations of the pseudo-observations associated with the prior, and another specifying once again the number of pseudo-observations. Each of the priors has a hyperparameter specifying the number of pseudo-observations, and in each case this controls the relative variance of that prior. These are given as two separate hyperparameters so that the variance (aka the confidence) of the two priors can be controlled separately. 
This leads immediately to the normal-inverse-gamma distribution , which is the product of the two distributions just defined, with conjugate priors used (an inverse gamma distribution over the variance, and a normal distribution over the mean, conditional on the variance) and with the same four parameters just defined. The priors are normally defined as follows: The update equations can be derived, and look as follows: The respective numbers of pseudo-observations add the number of actual observations to them. The new mean hyperparameter is once again a weighted average, this time weighted by the relative numbers of observations. Finally, the update for is similar to the case with known mean, but in this case the sum of squared deviations is taken with respect to the observed data mean rather than the true mean, and as a result a new interaction term needs to be added to take care of the additional error source stemming from the deviation between prior and data mean. Occurrence and applications [ edit ] The occurrence of normal distribution in practical problems can be loosely classified into four categories: Exactly normal distributions; Approximately normal laws, for example when such approximation is justified by the central limit theorem ; and Distributions modeled as normal – the normal distribution being the distribution with maximum entropy for a given mean and variance. Regression problems – the normal distribution being found after systematic effects have been modeled sufficiently well. The ground state of a quantum harmonic oscillator has the Gaussian distribution. A normal distribution occurs in some physical theories : Approximate normality [ edit ] Approximately normal distributions occur in many situations, as explained by the central limit theorem . When the outcome is produced by many small effects acting additively and independently , its distribution will be close to normal. The normal approximation will not be valid if the effects act multiplicatively (instead of additively), or if there is a single external influence that has a considerably larger magnitude than the rest of the effects. In counting problems, where the central limit theorem includes a discrete-to-continuum approximation and where infinitely divisible and decomposable distributions are involved, such as Binomial random variables , associated with binary response variables; Poisson random variables , associated with rare events; Thermal radiation has a Bose–Einstein distribution on very short time scales, and a normal distribution on longer timescales due to the central limit theorem. Histogram of sepal widths for Iris versicolor from Fisher's Iris flower data set , with superimposed best-fitting normal distribution I can only recognize the occurrence of the normal curve – the Laplacian curve of errors – as a very abnormal phenomenon. It is roughly approximated to in certain distributions; for this reason, and on account for its beautiful simplicity, we may, perhaps, use it as a first approximation, particularly in theoretical investigations. There are statistical methods to empirically test that assumption; see the above Normality tests section. 
In biology , the logarithm of various variables tend to have a normal distribution, that is, they tend to have a log-normal distribution (after separation on male/female subpopulations), with examples including: Measures of size of living tissue (length, height, skin area, weight); [ 62 ] The length of inert appendages (hair, claws, nails, teeth) of biological specimens, in the direction of growth ; presumably the thickness of tree bark also falls under this category; Certain physiological measurements, such as blood pressure of adult humans. In finance, in particular the Black–Scholes model , changes in the logarithm of exchange rates, price indices, and stock market indices are assumed normal (these variables behave like compound interest , not like simple interest, and so are multiplicative). Some mathematicians such as Benoit Mandelbrot have argued that log-Levy distributions , which possess heavy tails , would be a more appropriate model, in particular for the analysis for stock market crashes . The use of the assumption of normal distribution occurring in financial models has also been criticized by Nassim Nicholas Taleb in his works. Measurement errors in physical experiments are often modeled by a normal distribution. This use of a normal distribution does not imply that one is assuming the measurement errors are normally distributed, rather using the normal distribution produces the most conservative predictions possible given only knowledge about the mean and variance of the errors. [ 63 ] In standardized testing , results can be made to have a normal distribution by either selecting the number and difficulty of questions (as in the IQ test ) or transforming the raw test scores into output scores by fitting them to the normal distribution. For example, the SAT 's traditional range of 200–800 is based on a normal distribution with a mean of 500 and a standard deviation of 100. Fitted cumulative normal distribution to October rainfalls, see distribution fitting Many scores are derived from the normal distribution, including percentile ranks (percentiles or quantiles), normal curve equivalents , stanines , z-scores , and T-scores. Additionally, some behavioral statistical procedures assume that scores are normally distributed; for example, t-tests and ANOVAs . Bell curve grading assigns relative grades based on a normal distribution of scores. In hydrology the distribution of long duration river discharge or rainfall, e.g. monthly and yearly totals, is often thought to be practically normal according to the central limit theorem . [ 64 ] The plot on the right illustrates an example of fitting the normal distribution to ranked October rainfalls showing the 90% confidence belt based on the binomial distribution . The rainfall data are represented by plotting positions as part of the cumulative frequency analysis . Methodological problems and peer review [ edit ] John Ioannidis argued that using normally distributed standard deviations as standards for validating research findings leave falsifiable predictions about phenomena that are not normally distributed untested. This includes, for example, phenomena that only appear when all necessary conditions are present and one cannot be a substitute for another in an addition-like way and phenomena that are not randomly distributed. 
Ioannidis argues that standard deviation-centered validation gives a false appearance of validity to hypotheses and theories where some but not all falsifiable predictions are normally distributed since the portion of falsifiable predictions that there is evidence against may and in some cases are in the non-normally distributed parts of the range of falsifiable predictions, as well as baselessly dismissing hypotheses for which none of the falsifiable predictions are normally distributed as if they were unfalsifiable when in fact they do make falsifiable predictions. It is argued by Ioannidis that many cases of mutually exclusive theories being accepted as validated by research journals are caused by failure of the journals to take in empirical falsifications of non-normally distributed predictions, and not because mutually exclusive theories are true, which they cannot be, although two mutually exclusive theories can both be wrong and a third one correct. [ 65 ] Computational methods [ edit ] Generating values from normal distribution [ edit ] The bean machine , a device invented by Francis Galton , can be called the first generator of normal random variables. This machine consists of a vertical board with interleaved rows of pins. Small balls are dropped from the top and then bounce randomly left or right as they hit the pins. The balls are collected into bins at the bottom and settle down into a pattern resembling the Gaussian curve. In computer simulations, especially in applications of the Monte-Carlo method , it is often desirable to generate values that are normally distributed. The algorithms listed below all generate the standard normal deviates, since a N ( ÎŒ , σ 2 ) can be generated as X = ÎŒ + σZ , where Z is standard normal. All these algorithms rely on the availability of a random number generator U capable of producing uniform random variates. The most straightforward method is based on the probability integral transform property: if U is distributed uniformly on (0,1), then Ί −1 ( U ) will have the standard normal distribution. The drawback of this method is that it relies on calculation of the probit function Ί −1 , which cannot be done analytically. Some approximate methods are described in Hart (1968) and in the erf article. Wichura gives a fast algorithm for computing this function to 16 decimal places, [ 66 ] which is used by R to compute random variates of the normal distribution. An easy-to-program approximate approach that relies on the central limit theorem is as follows: generate 12 uniform U (0,1) deviates, add them all up, and subtract 6 – the resulting random variable will have approximately standard normal distribution. In truth, the distribution will be Irwin–Hall , which is a 12-section eleventh-order polynomial approximation to the normal distribution. This random deviate will have a limited range of (−6, 6) . [ 67 ] Note that in a true normal distribution, only 0.00034% of all samples will fall outside  ±6 σ . The Box–Muller method uses two independent random numbers U and V distributed uniformly on (0,1). Then the two random variables X and Y will both have the standard normal distribution, and will be independent . 
This formulation arises because for a bivariate normal random vector ( X , Y ) the squared norm X 2 + Y 2 will have the chi-squared distribution with two degrees of freedom, which is an easily generated exponential random variable corresponding to the quantity −2 ln( U ) in these equations; and the angle is distributed uniformly around the circle, chosen by the random variable V . The Marsaglia polar method is a modification of the Box–Muller method which does not require computation of the sine and cosine functions. In this method, U and V are drawn from the uniform (−1,1) distribution, and then S = U 2 + V 2 is computed. If S is greater or equal to 1, then the method starts over, otherwise the two quantities are returned. Again, X and Y are independent, standard normal random variables. The Ratio method [ 68 ] is a rejection method. The algorithm proceeds as follows: Generate two independent uniform deviates U and V ; Compute X = √ 8/ e ( V − 0.5)/ U ; Optional: if X 2 ≀ 5 − 4 e 1/4 U then accept X and terminate algorithm; Optional: if X 2 ≄ 4 e −1.35 / U + 1.4 then reject X and start over from step 1; If X 2 ≀ −4 ln U then accept X , otherwise start over the algorithm. The two optional steps allow the evaluation of the logarithm in the last step to be avoided in most cases. These steps can be greatly improved [ 69 ] so that the logarithm is rarely evaluated. The ziggurat algorithm [ 70 ] is faster than the Box–Muller transform and still exact. In about 97% of all cases it uses only two random numbers, one random integer and one random uniform, one multiplication and an if-test. Only in 3% of the cases, where the combination of those two falls outside the "core of the ziggurat" (a kind of rejection sampling using logarithms), do exponentials and more uniform random numbers have to be employed. Integer arithmetic can be used to sample from the standard normal distribution. [ 71 ] [ 72 ] This method is exact in the sense that it satisfies the conditions of ideal approximation ; [ 73 ] i.e., it is equivalent to sampling a real number from the standard normal distribution and rounding this to the nearest representable floating point number. There is also some investigation [ 74 ] into the connection between the fast Hadamard transform and the normal distribution, since the transform employs just addition and subtraction and by the central limit theorem random numbers from almost any distribution will be transformed into the normal distribution. In this regard a series of Hadamard transforms can be combined with random permutations to turn arbitrary data sets into a normally distributed data. Numerical approximations for the normal cumulative distribution function and normal quantile function [ edit ] The standard normal cumulative distribution function is widely used in scientific and statistical computing. The values Ί ( x ) may be approximated very accurately by a variety of methods, such as numerical integration , Taylor series , asymptotic series and continued fractions . Different approximations are used depending on the desired level of accuracy. Zelen & Severo (1964) give the approximation for Ί ( x ) for x > 0 with the absolute error | Δ ( x ) | < 7.5·10 −8 (algorithm 26.2.17 ): where ϕ ( x ) is the standard normal probability density function, and b 0 = 0.2316419 , b 1 = 0.319381530 , b 2 = −0.356563782 , b 3 = 1.781477937 , b 4 = −1.821255978 , b 5 = 1.330274429 . 
Hart (1968) lists dozens of approximations, by means of rational functions with or without exponentials, for the erfc() function, where erfc(x) = 1 − erf(x). His algorithms vary in the degree of complexity and the resulting precision, with a maximum absolute precision of 24 digits. An algorithm by West (2009) combines Hart's algorithm 5666 with a continued-fraction approximation in the tail to provide a fast computation algorithm with 16-digit precision. Cody (1969), after noting that the Hart (1968) solution is not suited for erf, gave a solution for both erf and erfc, with a maximal relative error bound, via rational Chebyshev approximation.

Marsaglia (2004) suggested a simple algorithm [note 1] based on the Taylor series expansion for calculating Ί(x) with arbitrary precision (see the sketch at the end of this subsection). The drawback of this algorithm is comparatively slow calculation time (for example, it takes over 300 iterations to calculate the function with 16 digits of precision when x = 10).

The GNU Scientific Library calculates values of the standard normal cumulative distribution function using Hart's algorithms and approximations with Chebyshev polynomials. Dia (2023) proposes an approximation of Ί(x), split into two ranges of the argument, with a small maximum relative error in absolute value.

Shore (1982) introduced simple approximations that may be incorporated in stochastic optimization models of engineering and operations research, like reliability engineering and inventory analysis. Denoting p = Ί(z), the simplest approximation for the quantile function is

z = Ω⁻Âč(p) ≈ 5.5556[1 − ((1 − p)/p)^0.1186], p ≄ 1/2.

This approximation delivers for z a maximum absolute error of 0.026 (for 0.5 ≀ p ≀ 0.9999, corresponding to 0 ≀ z ≀ 3.719). For p < 1/2 replace p by 1 − p and change sign. Another approximation, somewhat less accurate, is the single-parameter approximation

z ≈ −0.4115{(1 − p)/p + ln[(1 − p)/p] − 1}, p ≄ 1/2.

The latter had served to derive a simple approximation for the loss integral of the normal distribution, defined by

L(z) = ∫_z^∞ (u − z)φ(u) du = φ(z) − z[1 − Ί(z)].

The resulting approximation is particularly accurate for the right far-tail (maximum error of 10⁻³ for z ≄ 1.4).

Highly accurate approximations for the cumulative distribution function, based on Response Modeling Methodology (RMM; Shore, 2011, 2012), are shown in Shore (2005). Some more approximations can be found at Error function § Approximation with elementary functions. In particular, an explicitly invertible formula by Sergei Winitzki (2008) achieves small relative error on the whole domain for both the cumulative distribution function Ί and the quantile function.
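As an illustration of the series-based approach mentioned above for Marsaglia (2004), here is a Python sketch computing Ί(x) from the Taylor expansion Ί(x) = 1/2 + φ(x)(x + xÂł/3 + x⁔/(3·5) + x⁷/(3·5·7) + ...); the function name phi_taylor and the simple term-size stopping rule are illustrative choices, not from the source:

```python
import math

def phi_taylor(x: float, eps: float = 1e-16) -> float:
    """Approximate the standard normal CDF Phi(x) via the series
    Phi(x) = 1/2 + phi(x) * sum_{n>=0} x^(2n+1) / (1*3*5*...*(2n+1)),
    summing until the terms fall below eps."""
    term = x        # n = 0 term: x / 1
    total = x
    n = 0
    while abs(term) > eps:
        n += 1
        term *= x * x / (2 * n + 1)   # next term: multiply by x^2/(2n+1)
        total += term
    pdf = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    return 0.5 + pdf * total

print(phi_taylor(1.0))   # ~0.8413447460...
print(phi_taylor(10.0))  # converges, but only after several hundred terms
```

All terms of the series have the same sign, so there is no cancellation, but for large x the terms first grow enormously before shrinking, which is why so many iterations are needed there.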
History

Development

Some authors [75] [76] attribute the discovery of the normal distribution to de Moivre, who in 1738 [note 2] published in the second edition of his The Doctrine of Chances the study of the coefficients in the binomial expansion of (a + b)ⁿ. De Moivre proved that the middle term in this expansion has the approximate magnitude of 2/√(2πn), and that "If m or Âœn be a Quantity infinitely great, then the Logarithm of the Ratio, which a Term distant from the middle by the Interval ℓ, has to the middle Term, is −2ℓℓ/n." [77] Although this theorem can be interpreted as the first obscure expression for the normal probability law, Stigler points out that de Moivre himself did not interpret his results as anything more than the approximate rule for the binomial coefficients, and in particular de Moivre lacked the concept of the probability density function. [78]

In 1809, Carl Friedrich Gauss showed that the normal distribution provides a way to rationalize the method of least squares. That year Gauss published his monograph "Theoria motus corporum coelestium in sectionibus conicis Solem ambientium", where among other things he introduces several important statistical concepts, such as the method of least squares, the method of maximum likelihood, and the normal distribution. Gauss used M, Mâ€Č, M″, ... to denote the measurements of some unknown quantity V, and sought the most probable estimator of that quantity: the one that maximizes the probability φ(M − V) · φ(Mâ€Č − V) · φ(M″ − V) · ... of obtaining the observed experimental results. In his notation, φΔ is the probability density function of the measurement errors of magnitude Δ. Not knowing what the function φ is, Gauss requires that his method should reduce to the well-known answer: the arithmetic mean of the measured values. [note 3] Starting from these principles, Gauss demonstrates that the only law that rationalizes the choice of arithmetic mean as an estimator of the location parameter is the normal law of errors: [79]

φΔ = (h/√π) e^(−hÂČΔÂČ),

where h is "the measure of the precision of the observations". Using this normal law as a generic model for errors in the experiments, Gauss formulates what is now known as the non-linear weighted least squares method. [80]

Pierre-Simon Laplace proved the central limit theorem in 1810, consolidating the importance of the normal distribution in statistics. Although Gauss was the first to suggest the normal distribution law, Laplace made significant contributions. [note 4] It was Laplace who first posed the problem of aggregating several observations in 1774, [81] although his own solution led to the Laplacian distribution. It was Laplace who first calculated the value of the integral ∫ e^(−tÂČ) dt = √π in 1782, providing the normalization constant for the normal distribution. [82] For this accomplishment, Gauss acknowledged the priority of Laplace. [83] Finally, it was Laplace who in 1810 proved and presented to the academy the fundamental central limit theorem, which emphasized the theoretical importance of the normal distribution. [84]

It is of interest to note that in 1809 the Irish-American mathematician Robert Adrain published two insightful but flawed derivations of the normal probability law, simultaneously and independently from Gauss. [85] His works remained largely unnoticed by the scientific community until 1871, when they were exhumed by Abbe. [86]

In the middle of the 19th century Maxwell demonstrated that the normal distribution is not just a convenient mathematical tool, but may also occur in natural phenomena: [59] "The number of particles whose velocity, resolved in a certain direction, lies between x and x + dx is N (1/(α√π)) e^(−xÂČ/αÂČ) dx."

Naming

Today, the concept is usually known in English as the normal distribution or Gaussian distribution. Other, less common names include Gauss distribution, Laplace–Gauss distribution, the law of error, the law of facility of errors, Laplace's second law, and Gaussian law.

Gauss himself apparently coined the term with reference to the "normal equations" involved in its applications, with normal having its technical meaning of orthogonal rather than usual. [87] However, by the end of the 19th century some authors [note 5] had started using the name normal distribution, where the word "normal" was used as an adjective, the term reflecting the fact that this distribution was seen as typical and common, and thus normal. Peirce (one of those authors) once defined "normal" thus: "...
the 'normal' is not the average (or any other kind of mean) of what actually occurs, but of what would, in the long run, occur under certain circumstances." [88]

Around the turn of the 20th century Pearson popularized the term normal as a designation for this distribution: [89]

"Many years ago I called the Laplace–Gaussian curve the normal curve, which name, while it avoids an international question of priority, has the disadvantage of leading people to believe that all other distributions of frequency are in one sense or another 'abnormal'."

Also, it was Pearson who first wrote the distribution in terms of the standard deviation σ, as in modern notation. Soon after this, in 1915, Fisher added the location parameter to the formula for the normal distribution, expressing it in the way it is written nowadays:

df = (1/(σ√(2π))) e^(−(x − m)ÂČ/(2σÂČ)) dx.

The term standard normal distribution, which denotes the normal distribution with zero mean and unit variance, came into general use around the 1950s, appearing in the popular textbooks by P. G. Hoel (1947), Introduction to Mathematical Statistics, and Alexander M. Mood (1950), Introduction to the Theory of Statistics. [90] [91] [92]

Notes

1. For example, this algorithm is given in the article Bc programming language.
2. De Moivre first published his findings in 1733, in a pamphlet Approximatio ad Summam Terminorum Binomii (a + b)ⁿ in Seriem Expansi that was designated for private circulation only. But it was not until the year 1738 that he made his results publicly available. The original pamphlet was reprinted several times, see for example Walker (1985).
3. "It has been customary certainly to regard as an axiom the hypothesis that if any quantity has been determined by several direct observations, made under the same circumstances and with equal care, the arithmetical mean of the observed values affords the most probable value, if not rigorously, yet very nearly at least, so that it is always most safe to adhere to it." – Gauss (1809, section 177)
4. "My custom of terming the curve the Gauss–Laplacian or normal curve saves us from proportioning the merit of discovery between the two great astronomer mathematicians." Quote from Pearson (1905, p. 189).
5. Besides those specifically referenced here, such use is encountered in the works of Peirce, Galton (Galton (1889, chapter V)) and Lexis (Lexis (1878), Rohrbasser & VĂ©ron (2003)) c. 1875. [citation needed]

References

Citations

Norton, Matthew; Khokhlov, Valentyn; Uryasev, Stan (2019). "Calculating CVaR and bPOE for common probability distributions with application to portfolio optimization and density estimation" (PDF). Annals of Operations Research. 299 (1–2). Springer: 1281–1315. arXiv:1811.11301. doi:10.1007/s10479-019-03373-1. S2CID 254231768. Archived from the original (PDF) on March 31, 2023. Retrieved February 27, 2023.
Tsokos, Chris; Wooten, Rebecca (January 1, 2016). The Joy of Finite Mathematics. Boston: Academic Press. pp. 231–263. doi:10.1016/b978-0-12-802967-1.00007-3. ISBN 978-0-12-802967-1.
Harris, Frank E. (January 1, 2014). Mathematics for Physical Science and Engineering. Boston: Academic Press. pp. 663–709. doi:10.1016/b978-0-12-801000-6.00018-3. ISBN 978-0-12-801000-6.
Hoel (1947, p. 31) and Mood (1950, p. 109) give this definition with slightly different notation.
"Normal Distribution", Gale Encyclopedia of Psychology.
Casella & Berger (2001, p. 102).
Lyon, A. (2014). "Why are Normal Distributions Normal?". The British Journal for the Philosophy of Science.
Nocedal, Jorge; Wright, Stephen J. (2006). Numerical Optimization (2nd ed.). Springer. p. 249. ISBN 978-0-387-30303-1.
"Normal Distribution". www.mathsisfun.com. Retrieved August 15, 2020.
"bell curve". Merriam-Webster.com Dictionary. Retrieved May 25, 2025.
Mood (1950, p. 112) explicitly defines the standard normal distribution. In contrast, Hoel (1947) explicitly defines the standard normal curve (p. 33) and introduces the term standard normal distribution (p. 69).
Stigler (1982).
Halperin, Hartley & Hoel (1965, item 7).
McPherson (1990, p. 110).
Bernardo & Smith (2000, p. 121).
Park, Kun Il (2018). Fundamentals of Probability and Stochastic Processes with Applications to Communications. Springer. ISBN 978-3-319-68074-3.
Scott, Clayton; Nowak, Robert (August 7, 2003). "The Q-function". Connexions.
Barak, Ohad (April 6, 2006). "Q Function and Error Function" (PDF). Tel Aviv University. Archived from the original (PDF) on March 25, 2009.
Weisstein, Eric W. "Normal Distribution Function". MathWorld.
Abramowitz, Milton; Stegun, Irene Ann, eds. (1983) [June 1964]. "Chapter 26, eqn 26.2.12". Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. Vol. 55 (ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. p. 932. ISBN 978-0-486-61272-0. LCCN 64-60036. MR 0167642. LCCN 65-12253.
Duff, Michael (2003). "Normal Distribution Algorithms". The Mathematical Gazette. 87 (509): 331–336. JSTOR 3621062.
Stuart, Alan; Ord, J. Keith (1987). "The normal d.f.". Kendall's Advanced Theory of Statistics. Vol. 1: Distribution Theory. Originally by Maurice Kendall (5th ed.). Charles Griffin & Co. § 5.37, pp. 183–185. ISBN 0-85264-285-7.
Vaart, A. W. van der (October 13, 1998). Asymptotic Statistics. Cambridge University Press. doi:10.1017/cbo9780511802256. ISBN 978-0-511-80225-6.
Cover & Thomas (2006), p. 254.
Park, Sung Y.; Bera, Anil K. (2009). "Maximum Entropy Autoregressive Conditional Heteroskedasticity Model" (PDF). Journal of Econometrics. 150 (2): 219–230. Bibcode:2009JEcon.150..219P. CiteSeerX 10.1.1.511.9750. doi:10.1016/j.jeconom.2008.12.014. Archived from the original (PDF) on March 7, 2016. Retrieved June 2, 2011.
Geary, R. C. (1936). "The distribution of 'Student's' ratio for the non-normal samples". Supplement to the Journal of the Royal Statistical Society. 3 (2): 178–184.
Lukacs, Eugene (March 1942). "A Characterization of the Normal Distribution". Annals of Mathematical Statistics. 13 (1): 91–93. doi:10.1214/AOMS/1177731647. ISSN 0003-4851. JSTOR 2236166. MR 0006626. Zbl 0060.28509. Wikidata Q55897617.
Patel & Read (1996, [2.1.4]).
Fan (1991, p. 1258).
Patel & Read (1996, [2.1.8]).
Papoulis, Athanasios. Probability, Random Variables and Stochastic Processes (4th ed.). p. 148.
Winkelbauer, Andreas (2012). "Moments and Absolute Moments of the Normal Distribution". arXiv:1209.4340 [math.ST].
Bryc (1995, p. 23).
Bryc (1995, p. 24).
Williams, David (2001). Weighing the Odds: A Course in Probability and Statistics. Cambridge: Cambridge University Press. pp. 197–199. ISBN 978-0-521-00618-7.
Bernardo, José M.; Smith, Adrian F. M. (2000). Bayesian Theory (Reprint ed.). Chichester: Wiley. pp. 209, 366. ISBN 978-0-471-49464-5.
O'Hagan, A. (1994). Kendall's Advanced Theory of Statistics, Vol. 2B: Bayesian Inference. Edward Arnold. ISBN 0-340-52922-9. (Section 5.40)
Bryc (1995, p. 35).
UIUC, Lecture 21. The Multivariate Normal Distribution, 21.6: "Individually Gaussian Versus Jointly Gaussian".
Melnick, Edward L.; Tenenbein, Aaron (November 1982). "Misspecifications of the Normal Distribution". The American Statistician. 36 (4): 372–373.
"Kullback Leibler (KL) Distance of Two Normal (Gaussian) Probability Distributions". Allisons.org. December 5, 2007. Retrieved March 3, 2017.
Jordan, Michael I. (February 8, 2010). "Stat260: Bayesian Modeling and Inference: The Conjugate Prior for the Normal Distribution" (PDF).
Amari & Nagaoka (2000).
"Expectation of the maximum of gaussian random variables". Mathematics Stack Exchange. Retrieved April 7, 2024.
"Normal Approximation to Poisson Distribution". Stat.ucla.edu. Retrieved March 3, 2017.
Bryc (1995, p. 27).
Weisstein, Eric W. "Normal Product Distribution". MathWorld. wolfram.com.
Lukacs, Eugene (1942). "A Characterization of the Normal Distribution". The Annals of Mathematical Statistics. 13 (1): 91–93. doi:10.1214/aoms/1177731647. ISSN 0003-4851. JSTOR 2236166.
Basu, D.; Laha, R. G. (1954). "On Some Characterizations of the Normal Distribution". Sankhyā. 13 (4): 359–362. ISSN 0036-4452. JSTOR 25048183.
Lehmann, E. L. (1997). Testing Statistical Hypotheses (2nd ed.). Springer. p. 199. ISBN 978-0-387-94919-2.
Patel & Read (1996, [2.3.6]).
Galambos & Simonelli (2004, Theorem 3.5).
Lukacs & King (1954).
Quine, M. P. (1993). "On three characterisations of the normal distribution". Probability and Mathematical Statistics. 14 (2): 257–263.
John, S. (1982). "The three parameter two-piece normal family of distributions and its fitting". Communications in Statistics – Theory and Methods. 11 (8): 879–885. doi:10.1080/03610928208828279.
Krishnamoorthy (2006, p. 127).
Krishnamoorthy (2006, p. 130).
Krishnamoorthy (2006, p. 133).
Maxwell (1860), p. 23.
Bryc (1995), p. 1.
Larkoski, Andrew J. (2023). Quantum Mechanics: A Mathematical Introduction. United Kingdom: Cambridge University Press. pp. 120–121. ISBN 978-1-009-12222-1. Retrieved May 30, 2025.
Huxley (1932).
Jaynes, Edwin T. (2003). Probability Theory: The Logic of Science. Cambridge University Press. pp. 592–593. ISBN 9780521592710.
Oosterbaan, Roland J. (1994). "Chapter 6: Frequency and Regression Analysis of Hydrologic Data" (PDF). In Ritzema, Henk P. (ed.). Drainage Principles and Applications, Publication 16 (second revised ed.). Wageningen, The Netherlands: International Institute for Land Reclamation and Improvement (ILRI). pp. 175–224. ISBN 978-90-70754-33-4.
Ioannidis, John P. A. (2005). "Why Most Published Research Findings Are False".
Wichura, Michael J. (1988). "Algorithm AS241: The Percentage Points of the Normal Distribution". Applied Statistics. 37 (3): 477–484. doi:10.2307/2347330. JSTOR 2347330.
Johnson, Kotz & Balakrishnan (1995, Equation (26.48)).
Kinderman & Monahan (1977).
Leva (1992).
Marsaglia & Tsang (2000).
Karney (2016).
Du, Fan & Wei (2022).
Monahan (1985, section 2).
Wallace (1996).
Johnson, Kotz & Balakrishnan (1994, p. 85).
Le Cam & Lo Yang (2000, p. 74).
De Moivre, Abraham (1733), Corollary I – see Walker (1985, p. 77).
Stigler (1986, p. 76).
Gauss (1809, section 177).
Gauss (1809, section 179).
Laplace (1774, Problem III).
Pearson (1905, p. 189).
Gauss (1809, section 177).
Stigler (1986, p. 144).
Stigler (1978, p. 243).
Stigler (1978, p. 244).
Jaynes, Edwin T. Probability Theory: The Logic of Science, Ch. 7.
Peirce, Charles S. (c. 1909 MS), Collected Papers v. 6, paragraph 327.
Kruskal & Stigler (1997).
"Earliest Uses... (Entry Standard Normal Curve)".
Hoel (1947) introduces the terms standard normal curve (p. 33) and standard normal distribution (p. 69).
Mood (1950) explicitly defines the standard normal distribution (p. 112).
Sun, Jingchao; Kong, Maiying; Pal, Subhadip (June 22, 2021). "The Modified-Half-Normal distribution: Properties and an efficient sampling scheme". Communications in Statistics – Theory and Methods. 52 (5): 1591–1613. doi:10.1080/03610926.2021.1934700. ISSN 0361-0926. S2CID 237919587.

Sources

Aldrich, John; Miller, Jeff. "Earliest Uses of Symbols in Probability and Statistics".
Aldrich, John; Miller, Jeff. "Earliest Known Uses of Some of the Words of Mathematics". In particular, the entries for "bell-shaped and bell curve", "normal (distribution)", "Gaussian", and "Error, law of error, theory of errors, etc.".
Amari, Shun'ichi; Nagaoka, Hiroshi (2000). Methods of Information Geometry. Oxford University Press. ISBN 978-0-8218-0531-2.
Bernardo, José M.; Smith, Adrian F. M. (2000). Bayesian Theory. Wiley. ISBN 978-0-471-49464-5.
Bryc, Wlodzimierz (1995). The Normal Distribution: Characterizations with Applications. Springer-Verlag. ISBN 978-0-387-97990-8.
Casella, George; Berger, Roger L. (2001). Statistical Inference (2nd ed.). Duxbury. ISBN 978-0-534-24312-8.
Cody, William J. (1969). "Rational Chebyshev Approximations for the Error Function". Mathematics of Computation. 23 (107): 631–638. Bibcode:1969MaCom..23..631C. doi:10.1090/S0025-5718-1969-0247736-4.
Cover, Thomas M.; Thomas, Joy A. (2006). Elements of Information Theory. John Wiley and Sons. ISBN 9780471241959.
Dia, Yaya D. (2023). "Approximate Incomplete Integrals, Application to Complementary Error Function". SSRN. doi:10.2139/ssrn.4487559. S2CID 259689086.
de Moivre, Abraham (2000) [First published 1738]. The Doctrine of Chances. American Mathematical Society. ISBN 978-0-8218-2103-9.
Du, Y.; Fan, B.; Wei, B. (2022). "An improved exact sampling algorithm for the standard normal distribution". Computational Statistics. 37 (2): 721–737. arXiv:2008.03855. doi:10.1007/s00180-021-01136-w.
Fan, Jianqing (1991). "On the optimal rates of convergence for nonparametric deconvolution problems". The Annals of Statistics. 19 (3): 1257–1272. doi:10.1214/aos/1176348248. JSTOR 2241949.
Galton, Francis (1889). Natural Inheritance (PDF). London, UK: Richard Clay and Sons.
Galambos, Janos; Simonelli, Italo (2004). Products of Random Variables: Applications to Problems of Physics and to Arithmetical Functions. Marcel Dekker, Inc. ISBN 978-0-8247-5402-0.
Gauss, Carolo Friderico (1809). Theoria motvs corporvm coelestivm in sectionibvs conicis Solem ambientivm [Theory of the Motion of the Heavenly Bodies Moving about the Sun in Conic Sections] (in Latin). Hambvrgi, Svmtibvs F. Perthes et I. H. Besser. English translation.
Gould, Stephen Jay (1981). The Mismeasure of Man (first ed.). W. W. Norton. ISBN 978-0-393-01489-1.
Halperin, Max; Hartley, Herman O.; Hoel, Paul G. (1965). "Recommended Standards for Statistical Symbols and Notation. COPSS Committee on Symbols and Notation". The American Statistician. 19 (3): 12–14. doi:10.2307/2681417. JSTOR 2681417.
"Recommended Standards for Statistical Symbols and Notation. COPSS Committee on Symbols and Notation". The American Statistician . 19 (3): 12– 14. doi : 10.2307/2681417 . JSTOR   2681417 . Hart, John F.; et al. (1968). Computer Approximations . New York, NY: John Wiley & Sons, Inc. ISBN   978-0-88275-642-4 . "Normal Distribution" , Encyclopedia of Mathematics , EMS Press , 2001 [1994] Herrnstein, Richard J. ; Murray, Charles (1994). The Bell Curve: Intelligence and Class Structure in American Life . Free Press . ISBN   978-0-02-914673-6 . Hoel, Paul G. (1947). Introduction To Mathematical Statistics . New York: Wiley. Huxley, Julian S. (1972) [First published 1932]. Problems of Relative Growth . London. ISBN   978-0-486-61114-3 . OCLC   476909537 . Johnson, Norman L. ; Kotz, Samuel ; Balakrishnan, Narayanaswamy (1994). Continuous Univariate Distributions, Volume 1 . Wiley. ISBN   978-0-471-58495-7 . Johnson, Norman L.; Kotz, Samuel; Balakrishnan, Narayanaswamy (1995). Continuous Univariate Distributions, Volume 2 . Wiley. ISBN   978-0-471-58494-0 . Karney, C. F. F. (2016). "Sampling exactly from the normal distribution" . ACM Transactions on Mathematical Software . 42 (1): 3:1–14. arXiv : 1303.6257 . doi : 10.1145/2710016 . S2CID   14252035 . Kinderman, Albert J.; Monahan, John F. (1977). "Computer Generation of Random Variables Using the Ratio of Uniform Deviates" . ACM Transactions on Mathematical Software . 3 (3): 257– 260. doi : 10.1145/355744.355750 . S2CID   12884505 . Krishnamoorthy, Kalimuthu (2006). Handbook of Statistical Distributions with Applications . Chapman & Hall/CRC. ISBN   978-1-58488-635-8 . Kruskal, William H. ; Stigler, Stephen M. (1997). Spencer, Bruce D. (ed.). Normative Terminology: 'Normal' in Statistics and Elsewhere . Statistics and Public Policy. Oxford University Press. ISBN   978-0-19-852341-3 . Laplace, Pierre-Simon de (1774). "MĂ©moire sur la probabilitĂ© des causes par les Ă©vĂ©nements" . MĂ©moires de l'AcadĂ©mie Royale des Sciences de Paris (Savants Ă©trangers), Tome 6 : 621– 656. Translated by Stephen M. Stigler in Statistical Science 1 (3), 1986: JSTOR   2245476 . Laplace, Pierre-Simon (1812). ThĂ©orie analytique des probabilitĂ©s [ Analytical theory of probabilities ]. Paris, Ve. Courcier. Le Cam, Lucien ; Lo Yang, Grace (2000). Asymptotics in Statistics: Some Basic Concepts (second ed.). Springer. ISBN   978-0-387-95036-5 . Leva, Joseph L. (1992). "A fast normal random number generator" (PDF) . ACM Transactions on Mathematical Software . 18 (4): 449– 453. CiteSeerX   10.1.1.544.5806 . doi : 10.1145/138351.138364 . S2CID   15802663 . Archived from the original (PDF) on July 16, 2010. Lexis, Wilhelm (1878). "Sur la durĂ©e normale de la vie humaine et sur la thĂ©orie de la stabilitĂ© des rapports statistiques". Annales de DĂ©mographie Internationale . II . Paris: 447– 462. Lukacs, Eugene; King, Edgar P. (1954). "A Property of Normal Distribution" . The Annals of Mathematical Statistics . 25 (2): 389– 394. doi : 10.1214/aoms/1177728796 . JSTOR   2236741 . McPherson, Glen (1990). Statistics in Scientific Investigation: Its Basis, Application and Interpretation . Springer-Verlag. ISBN   978-0-387-97137-7 . Marsaglia, George ; Tsang, Wai Wan (2000). "The Ziggurat Method for Generating Random Variables" . Journal of Statistical Software . 5 (8). doi : 10.18637/jss.v005.i08 . Marsaglia, George (2004). "Evaluating the Normal Distribution" . Journal of Statistical Software . 11 (4). doi : 10.18637/jss.v011.i04 . Maxwell, James Clerk (1860). "V. 
Monahan, J. F. (1985). "Accuracy in random number generation". Mathematics of Computation. 45 (172): 559–568. doi:10.1090/S0025-5718-1985-0804945-X.
Mood, Alexander McFarlane (1950). Introduction to the Theory of Statistics. New York: McGraw-Hill.
Patel, Jagdish K.; Read, Campbell B. (1996). Handbook of the Normal Distribution (2nd ed.). CRC Press. ISBN 978-0-8247-9342-5.
Pearson, Karl (1901). "On Lines and Planes of Closest Fit to Systems of Points in Space" (PDF). Philosophical Magazine. 6. 2 (11): 559–572. doi:10.1080/14786440109462720. S2CID 125037489.
Pearson, Karl (1905). "'Das Fehlergesetz und seine Verallgemeinerungen durch Fechner und Pearson'. A rejoinder". Biometrika. 4 (1): 169–212. doi:10.2307/2331536. JSTOR 2331536.
Pearson, Karl (1920). "Notes on the History of Correlation". Biometrika. 13 (1): 25–45. doi:10.1093/biomet/13.1.25. JSTOR 2331722.
Rohrbasser, Jean-Marc; VĂ©ron, Jacques (2003). "Wilhelm Lexis: The Normal Length of Life as an Expression of the 'Nature of Things'". Population. 58 (3): 303–322. doi:10.3917/pope.303.0303.
Shore, H. (1982). "Simple Approximations for the Inverse Cumulative Function, the Density Function and the Loss Integral of the Normal Distribution". Journal of the Royal Statistical Society. Series C (Applied Statistics). 31 (2): 108–114. doi:10.2307/2347972. JSTOR 2347972.
Shore, H. (2005). "Accurate RMM-Based Approximations for the CDF of the Normal Distribution". Communications in Statistics – Theory and Methods. 34 (3): 507–513. doi:10.1081/sta-200052102. S2CID 122148043.
Shore, H. (2011). "Response Modeling Methodology". WIREs Comput Stat. 3 (4): 357–372. doi:10.1002/wics.151. S2CID 62021374.
Shore, H. (2012). "Estimating Response Modeling Methodology Models". WIREs Comput Stat. 4 (3): 323–333. doi:10.1002/wics.1199. S2CID 122366147.
Stigler, Stephen M. (1978). "Mathematical Statistics in the Early States". The Annals of Statistics. 6 (2): 239–265. doi:10.1214/aos/1176344123. JSTOR 2958876.
Stigler, Stephen M. (1982). "A Modest Proposal: A New Standard for the Normal". The American Statistician. 36 (2): 137–138. doi:10.2307/2684031. JSTOR 2684031.
Stigler, Stephen M. (1986). The History of Statistics: The Measurement of Uncertainty before 1900. Harvard University Press. ISBN 978-0-674-40340-6.
Stigler, Stephen M. (1999). Statistics on the Table. Harvard University Press. ISBN 978-0-674-83601-3.
Walker, Helen M. (1985). "De Moivre on the Law of Normal Probability" (PDF). In Smith, David Eugene (ed.). A Source Book in Mathematics. Dover. ISBN 978-0-486-64690-9.
Wallace, C. S. (1996). "Fast pseudo-random generators for normal and exponential variates". ACM Transactions on Mathematical Software. 22 (1): 119–127. doi:10.1145/225545.225554. S2CID 18514848.
Weisstein, Eric W. "Normal Distribution". MathWorld.
West, Graeme (2009). "Better Approximations to Cumulative Normal Functions" (PDF). Wilmott Magazine: 70–76. Archived from the original (PDF) on February 29, 2012.
Zelen, Marvin; Severo, Norman C. (1972) [First published 1964]. "Probability Functions (chapter 26)". In Abramowitz, M.; Stegun, I. A. (eds.). Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. National Bureau of Standards. New York, NY: Dover. ISBN 978-0-486-61272-0.
External links

"Normal distribution", Encyclopedia of Mathematics, EMS Press, 2001 [1994].
Normal distribution calculator.
Markdown
[Jump to content](https://en.wikipedia.org/wiki/Normal_distribution#bodyContent) Main menu Main menu move to sidebar hide Navigation - [Main page](https://en.wikipedia.org/wiki/Main_Page "Visit the main page [z]") - [Contents](https://en.wikipedia.org/wiki/Wikipedia:Contents "Guides to browsing Wikipedia") - [Current events](https://en.wikipedia.org/wiki/Portal:Current_events "Articles related to current events") - [Random article](https://en.wikipedia.org/wiki/Special:Random "Visit a randomly selected article [x]") - [About Wikipedia](https://en.wikipedia.org/wiki/Wikipedia:About "Learn about Wikipedia and how it works") - [Contact us](https://en.wikipedia.org/wiki/Wikipedia:Contact_us "How to contact Wikipedia") Contribute - [Help](https://en.wikipedia.org/wiki/Help:Contents "Guidance on how to use and edit Wikipedia") - [Learn to edit](https://en.wikipedia.org/wiki/Help:Introduction "Learn how to edit Wikipedia") - [Community portal](https://en.wikipedia.org/wiki/Wikipedia:Community_portal "The hub for editors") - [Recent changes](https://en.wikipedia.org/wiki/Special:RecentChanges "A list of recent changes to Wikipedia [r]") - [Upload file](https://en.wikipedia.org/wiki/Wikipedia:File_upload_wizard "Add images or other media for use on Wikipedia") - [Special pages](https://en.wikipedia.org/wiki/Special:SpecialPages "A list of all special pages [q]") [![](https://en.wikipedia.org/static/images/icons/enwiki-25.svg) ![Wikipedia](https://en.wikipedia.org/static/images/mobile/copyright/wikipedia-wordmark-en-25.svg) ![The Free Encyclopedia](https://en.wikipedia.org/static/images/mobile/copyright/wikipedia-tagline-en-25.svg)](https://en.wikipedia.org/wiki/Main_Page) [Search](https://en.wikipedia.org/wiki/Special:Search "Search Wikipedia [f]") Appearance - [Donate](https://donate.wikimedia.org/?wmf_source=donate&wmf_medium=sidebar&wmf_campaign=en.wikipedia.org&uselang=en) - [Create account](https://en.wikipedia.org/w/index.php?title=Special:CreateAccount&returnto=Normal+distribution "You are encouraged to create an account and log in; however, it is not mandatory") - [Log in](https://en.wikipedia.org/w/index.php?title=Special:UserLogin&returnto=Normal+distribution "You're encouraged to log in; however, it's not mandatory. [o]") Personal tools - [Donate](https://donate.wikimedia.org/?wmf_source=donate&wmf_medium=sidebar&wmf_campaign=en.wikipedia.org&uselang=en) - [Create account](https://en.wikipedia.org/w/index.php?title=Special:CreateAccount&returnto=Normal+distribution "You are encouraged to create an account and log in; however, it is not mandatory") - [Log in](https://en.wikipedia.org/w/index.php?title=Special:UserLogin&returnto=Normal+distribution "You're encouraged to log in; however, it's not mandatory. 
[o]") ## Contents move to sidebar hide - [(Top)](https://en.wikipedia.org/wiki/Normal_distribution) - [1 Definitions](https://en.wikipedia.org/wiki/Normal_distribution#Definitions) Toggle Definitions subsection - [1\.1 Standard normal distribution](https://en.wikipedia.org/wiki/Normal_distribution#Standard_normal_distribution) - [1\.2 General normal distribution](https://en.wikipedia.org/wiki/Normal_distribution#General_normal_distribution) - [1\.3 Notation](https://en.wikipedia.org/wiki/Normal_distribution#Notation) - [1\.4 Alternative parameterizations](https://en.wikipedia.org/wiki/Normal_distribution#Alternative_parameterizations) - [1\.5 Cumulative distribution function](https://en.wikipedia.org/wiki/Normal_distribution#Cumulative_distribution_function) - [1\.5.1 Taylor series representation](https://en.wikipedia.org/wiki/Normal_distribution#Taylor_series_representation) - [1\.5.2 Recursive computation with Taylor series](https://en.wikipedia.org/wiki/Normal_distribution#Recursive_computation_with_Taylor_series) - [1\.5.3 Standard deviation and coverage](https://en.wikipedia.org/wiki/Normal_distribution#Standard_deviation_and_coverage) - [1\.5.4 Quantile function](https://en.wikipedia.org/wiki/Normal_distribution#Quantile_function) - [1\.5.5 Using root finding to compute the quantile function](https://en.wikipedia.org/wiki/Normal_distribution#Using_root_finding_to_compute_the_quantile_function) - [2 Properties](https://en.wikipedia.org/wiki/Normal_distribution#Properties) Toggle Properties subsection - [2\.1 Symmetries and derivatives](https://en.wikipedia.org/wiki/Normal_distribution#Symmetries_and_derivatives) - [2\.2 Moments](https://en.wikipedia.org/wiki/Normal_distribution#Moments) - [2\.3 Fourier transform and characteristic function](https://en.wikipedia.org/wiki/Normal_distribution#Fourier_transform_and_characteristic_function) - [2\.4 Moment- and cumulant-generating functions](https://en.wikipedia.org/wiki/Normal_distribution#Moment-_and_cumulant-generating_functions) - [2\.5 Stein operator and class](https://en.wikipedia.org/wiki/Normal_distribution#Stein_operator_and_class) - [2\.6 Zero-variance limit](https://en.wikipedia.org/wiki/Normal_distribution#Zero-variance_limit) - [2\.7 Maximum entropy](https://en.wikipedia.org/wiki/Normal_distribution#Maximum_entropy) - [2\.8 Other properties](https://en.wikipedia.org/wiki/Normal_distribution#Other_properties) - [3 Related distributions](https://en.wikipedia.org/wiki/Normal_distribution#Related_distributions) Toggle Related distributions subsection - [3\.1 Central limit theorem](https://en.wikipedia.org/wiki/Normal_distribution#Central_limit_theorem) - [3\.2 Operations and functions of normal variables](https://en.wikipedia.org/wiki/Normal_distribution#Operations_and_functions_of_normal_variables) - [3\.2.1 Operations on a single normal variable](https://en.wikipedia.org/wiki/Normal_distribution#Operations_on_a_single_normal_variable) - [3\.2.1.1 Operations on two independent normal variables](https://en.wikipedia.org/wiki/Normal_distribution#Operations_on_two_independent_normal_variables) - [3\.2.1.2 Operations on two independent standard normal variables](https://en.wikipedia.org/wiki/Normal_distribution#Operations_on_two_independent_standard_normal_variables) - [3\.2.2 Operations on multiple independent normal variables](https://en.wikipedia.org/wiki/Normal_distribution#Operations_on_multiple_independent_normal_variables) - [3\.2.3 Operations on multiple correlated normal 
variables](https://en.wikipedia.org/wiki/Normal_distribution#Operations_on_multiple_correlated_normal_variables) - [3\.3 Operations on the density function](https://en.wikipedia.org/wiki/Normal_distribution#Operations_on_the_density_function) - [3\.4 Infinite divisibility and CramĂ©r's theorem](https://en.wikipedia.org/wiki/Normal_distribution#Infinite_divisibility_and_Cram%C3%A9r's_theorem) - [3\.5 The Kac–Bernstein theorem](https://en.wikipedia.org/wiki/Normal_distribution#The_Kac%E2%80%93Bernstein_theorem) - [3\.6 Extensions](https://en.wikipedia.org/wiki/Normal_distribution#Extensions) - [4 Statistical inference](https://en.wikipedia.org/wiki/Normal_distribution#Statistical_inference) Toggle Statistical inference subsection - [4\.1 Estimation of parameters](https://en.wikipedia.org/wiki/Normal_distribution#Estimation_of_parameters) - [4\.1.1 Sample mean](https://en.wikipedia.org/wiki/Normal_distribution#Sample_mean) - [4\.1.2 Sample variance](https://en.wikipedia.org/wiki/Normal_distribution#Sample_variance) - [4\.2 Confidence intervals](https://en.wikipedia.org/wiki/Normal_distribution#Confidence_intervals) - [4\.3 Normality tests](https://en.wikipedia.org/wiki/Normal_distribution#Normality_tests) - [4\.4 Bayesian analysis of the normal distribution](https://en.wikipedia.org/wiki/Normal_distribution#Bayesian_analysis_of_the_normal_distribution) - [4\.4.1 Sum of two quadratics](https://en.wikipedia.org/wiki/Normal_distribution#Sum_of_two_quadratics) - [4\.4.1.1 Scalar form](https://en.wikipedia.org/wiki/Normal_distribution#Scalar_form) - [4\.4.1.2 Vector form](https://en.wikipedia.org/wiki/Normal_distribution#Vector_form) - [4\.4.2 Sum of differences from the mean](https://en.wikipedia.org/wiki/Normal_distribution#Sum_of_differences_from_the_mean) - [4\.5 With known variance](https://en.wikipedia.org/wiki/Normal_distribution#With_known_variance) - [4\.5.1 With known mean](https://en.wikipedia.org/wiki/Normal_distribution#With_known_mean) - [4\.5.2 With unknown mean and unknown variance](https://en.wikipedia.org/wiki/Normal_distribution#With_unknown_mean_and_unknown_variance) - [5 Occurrence and applications](https://en.wikipedia.org/wiki/Normal_distribution#Occurrence_and_applications) Toggle Occurrence and applications subsection - [5\.1 Exact normality](https://en.wikipedia.org/wiki/Normal_distribution#Exact_normality) - [5\.2 Approximate normality](https://en.wikipedia.org/wiki/Normal_distribution#Approximate_normality) - [5\.3 Assumed normality](https://en.wikipedia.org/wiki/Normal_distribution#Assumed_normality) - [5\.4 Methodological problems and peer review](https://en.wikipedia.org/wiki/Normal_distribution#Methodological_problems_and_peer_review) - [6 Computational methods](https://en.wikipedia.org/wiki/Normal_distribution#Computational_methods) Toggle Computational methods subsection - [6\.1 Generating values from normal distribution](https://en.wikipedia.org/wiki/Normal_distribution#Generating_values_from_normal_distribution) - [6\.2 Numerical approximations for the normal cumulative distribution function and normal quantile function](https://en.wikipedia.org/wiki/Normal_distribution#Numerical_approximations_for_the_normal_cumulative_distribution_function_and_normal_quantile_function) - [7 History](https://en.wikipedia.org/wiki/Normal_distribution#History) Toggle History subsection - [7\.1 Development](https://en.wikipedia.org/wiki/Normal_distribution#Development) - [7\.2 Naming](https://en.wikipedia.org/wiki/Normal_distribution#Naming) - [8 See 
also](https://en.wikipedia.org/wiki/Normal_distribution#See_also) - [9 Notes](https://en.wikipedia.org/wiki/Normal_distribution#Notes) - [10 References](https://en.wikipedia.org/wiki/Normal_distribution#References) Toggle References subsection - [10\.1 Citations](https://en.wikipedia.org/wiki/Normal_distribution#Citations) - [10\.2 Sources](https://en.wikipedia.org/wiki/Normal_distribution#Sources) - [11 External links](https://en.wikipedia.org/wiki/Normal_distribution#External_links) Toggle the table of contents # Normal distribution 73 languages - [Alemannisch](https://als.wikipedia.org/wiki/Normalverteilung "Normalverteilung – Alemannic") - [Ű§Ù„ŰčŰ±ŰšÙŠŰ©](https://ar.wikipedia.org/wiki/%D8%AA%D9%88%D8%B2%D9%8A%D8%B9_%D8%A7%D8%AD%D8%AA%D9%85%D8%A7%D9%84%D9%8A_%D8%B7%D8%A8%D9%8A%D8%B9%D9%8A "ŰȘوŰČيŰč ۭۧŰȘÙ…Ű§Ù„ÙŠ Ű·ŰšÙŠŰčي – Arabic") - [Asturianu](https://ast.wikipedia.org/wiki/Distribuci%C3%B3n_normal "DistribuciĂłn normal – Asturian") - [Azərbaycanca](https://az.wikipedia.org/wiki/Normal_paylanma "Normal paylanma – Azerbaijani") - [ŰȘÛ†Ű±Ú©ŰŹÙ‡](https://azb.wikipedia.org/wiki/%D9%86%D9%88%D8%B1%D9%85%D8%A7%D9%84_%D8%AF%D8%A7%D8%BA%DB%8C%D9%84%DB%8C%D9%85 "Ù†ÙˆŰ±Ù…Ű§Ù„ ۯۧŰșیلیم – South Azerbaijani") - [Đ‘Đ”Đ»Đ°Ń€ŃƒŃĐșая](https://be.wikipedia.org/wiki/%D0%9D%D0%B0%D1%80%D0%BC%D0%B0%D0%BB%D1%8C%D0%BD%D0%B0%D0%B5_%D1%80%D0%B0%D0%B7%D0%BC%D0%B5%D1%80%D0%BA%D0%B0%D0%B2%D0%B0%D0%BD%D0%BD%D0%B5 "ĐĐ°Ń€ĐŒĐ°Đ»ŃŒĐœĐ°Đ” Ń€Đ°Đ·ĐŒĐ”Ń€ĐșаĐČĐ°ĐœĐœĐ” – Belarusian") - [БългарсĐșĐž](https://bg.wikipedia.org/wiki/%D0%9D%D0%BE%D1%80%D0%BC%D0%B0%D0%BB%D0%BD%D0%BE_%D1%80%D0%B0%D0%B7%D0%BF%D1%80%D0%B5%D0%B4%D0%B5%D0%BB%D0%B5%D0%BD%D0%B8%D0%B5 "ĐĐŸŃ€ĐŒĐ°Đ»ĐœĐŸ Ń€Đ°Đ·ĐżŃ€Đ”ĐŽĐ”Đ»Đ”ĐœĐžĐ” – Bulgarian") - [Bosanski](https://bs.wikipedia.org/wiki/Normalna_raspodjela "Normalna raspodjela – Bosnian") - [CatalĂ ](https://ca.wikipedia.org/wiki/Distribuci%C3%B3_normal "DistribuciĂł normal – Catalan") - [ČeĆĄtina](https://cs.wikipedia.org/wiki/Norm%C3%A1ln%C3%AD_rozd%C4%9Blen%C3%AD "NormĂĄlnĂ­ rozdělenĂ­ – Czech") - [ЧӑĐČашла](https://cv.wikipedia.org/wiki/%D0%93%D0%B0%D1%83%D1%81%D1%81_%D0%B2%D0%B0%D0%BB%D0%B5%C3%A7%C4%95%D0%B2%C4%95 "Гаусс ĐČалДçĕĐČĕ – Chuvash") - [Cymraeg](https://cy.wikipedia.org/wiki/Dosraniad_normal "Dosraniad normal – Welsh") - [Dansk](https://da.wikipedia.org/wiki/Normalfordeling "Normalfordeling – Danish") - [Deutsch](https://de.wikipedia.org/wiki/Normalverteilung "Normalverteilung – German") - [ΕλληΜÎčÎșÎŹ](https://el.wikipedia.org/wiki/%CE%9A%CE%B1%CE%BD%CE%BF%CE%BD%CE%B9%CE%BA%CE%AE_%CE%BA%CE%B1%CF%84%CE%B1%CE%BD%CE%BF%CE%BC%CE%AE "ÎšÎ±ÎœÎżÎœÎčÎșÎź ÎșÎ±Ï„Î±ÎœÎżÎŒÎź – Greek") - [Esperanto](https://eo.wikipedia.org/wiki/Normala_distribuo "Normala distribuo – Esperanto") - [Español](https://es.wikipedia.org/wiki/Distribuci%C3%B3n_normal "DistribuciĂłn normal – Spanish") - [Eesti](https://et.wikipedia.org/wiki/Normaaljaotus "Normaaljaotus – Estonian") - [Euskara](https://eu.wikipedia.org/wiki/Banaketa_normal "Banaketa normal – Basque") - [ÙŰ§Ű±ŰłÛŒ](https://fa.wikipedia.org/wiki/%D8%AA%D9%88%D8%B2%DB%8C%D8%B9_%D9%86%D8%B1%D9%85%D8%A7%D9%84 "ŰȘوŰČیŰč Ù†Ű±Ù…Ű§Ù„ – Persian") - [Suomi](https://fi.wikipedia.org/wiki/Normaalijakauma "Normaalijakauma – Finnish") - [Français](https://fr.wikipedia.org/wiki/Loi_normale "Loi normale – French") - [Nordfriisk](https://frr.wikipedia.org/wiki/Normoolferdialang "Normoolferdialang – Northern Frisian") - [Gaeilge](https://ga.wikipedia.org/wiki/D%C3%A1ileadh_normalach "DĂĄileadh normalach – Irish") - 
[Galego](https://gl.wikipedia.org/wiki/Distribuci%C3%B3n_normal "DistribuciĂłn normal – Galician") - [ŚąŚ‘ŚšŚ™ŚȘ](https://he.wikipedia.org/wiki/%D7%94%D7%AA%D7%A4%D7%9C%D7%92%D7%95%D7%AA_%D7%A0%D7%95%D7%A8%D7%9E%D7%9C%D7%99%D7%AA "Ś”ŚȘŚ€ŚœŚ’Ś•ŚȘ Ś Ś•ŚšŚžŚœŚ™ŚȘ – Hebrew") - [à€čà€żà€šà„à€Šà„€](https://hi.wikipedia.org/wiki/%E0%A4%AA%E0%A5%8D%E0%A4%B0%E0%A4%B8%E0%A4%BE%E0%A4%AE%E0%A4%BE%E0%A4%A8%E0%A5%8D%E0%A4%AF_%E0%A4%AC%E0%A4%82%E0%A4%9F%E0%A4%A8 "à€Șà„à€°à€žà€Ÿà€źà€Ÿà€šà„à€Ż à€Źà€‚à€Ÿà€š – Hindi") - [Hrvatski](https://hr.wikipedia.org/wiki/Normalna_raspodjela "Normalna raspodjela – Croatian") - [Magyar](https://hu.wikipedia.org/wiki/Norm%C3%A1lis_eloszl%C3%A1s "NormĂĄlis eloszlĂĄs – Hungarian") - [Ő€ŐĄŐ”Ő„Ö€Ő„Ő¶](https://hy.wikipedia.org/wiki/%D5%86%D5%B8%D6%80%D5%B4%D5%A1%D5%AC_%D5%A2%D5%A1%D5%B7%D5%AD%D5%B8%D6%82%D5%B4 "Նվրծալ ŐąŐĄŐ·Ő­ŐžÖ‚ŐŽ – Armenian") - [Bahasa Indonesia](https://id.wikipedia.org/wiki/Distribusi_normal "Distribusi normal – Indonesian") - [Íslenska](https://is.wikipedia.org/wiki/Normaldreifing "Normaldreifing – Icelandic") - [Italiano](https://it.wikipedia.org/wiki/Distribuzione_normale "Distribuzione normale – Italian") - [æ—„æœŹèȘž](https://ja.wikipedia.org/wiki/%E6%AD%A3%E8%A6%8F%E5%88%86%E5%B8%83 "æ­ŁèŠćˆ†ćžƒ – Japanese") - [áƒ„áƒáƒ áƒ—áƒŁáƒšáƒ˜](https://ka.wikipedia.org/wiki/%E1%83%9C%E1%83%9D%E1%83%A0%E1%83%9B%E1%83%90%E1%83%9A%E1%83%A3%E1%83%A0%E1%83%98_%E1%83%92%E1%83%90%E1%83%9C%E1%83%90%E1%83%AC%E1%83%98%E1%83%9A%E1%83%94%E1%83%91%E1%83%90 "ნორმალური განაწილება – Georgian") - [ÒšĐ°Đ·Đ°Ò›ŃˆĐ°](https://kk.wikipedia.org/wiki/%D2%9A%D0%B0%D0%BB%D1%8B%D0%BF%D1%82%D1%8B_%D0%B4%D0%B8%D1%81%D0%BF%D0%B5%D1%80%D1%81%D0%B8%D1%8F "ÒšĐ°Đ»Ń‹ĐżŃ‚Ń‹ ĐŽĐžŃĐżĐ”Ń€ŃĐžŃ – Kazakh") - [한ꔭ얎](https://ko.wikipedia.org/wiki/%EC%A0%95%EA%B7%9C_%EB%B6%84%ED%8F%AC "정규 ë¶„íŹ – Korean") - [Latina](https://la.wikipedia.org/wiki/Distributio_normalis "Distributio normalis – Latin") - [Lombard](https://lmo.wikipedia.org/wiki/Distribuzzion_normala "Distribuzzion normala – Lombard") - [LietuviĆł](https://lt.wikipedia.org/wiki/Normalusis_skirstinys "Normalusis skirstinys – Lithuanian") - [LatvieĆĄu](https://lv.wikipedia.org/wiki/Norm%C4%81lais_sadal%C4%ABjums "Normālais sadalÄ«jums – Latvian") - [МаĐșĐ”ĐŽĐŸĐœŃĐșĐž](https://mk.wikipedia.org/wiki/%D0%9D%D0%BE%D1%80%D0%BC%D0%B0%D0%BB%D0%BD%D0%B0_%D1%80%D0%B0%D1%81%D0%BF%D1%80%D0%B5%D0%B4%D0%B5%D0%BB%D0%B1%D0%B0 "ĐĐŸŃ€ĐŒĐ°Đ»ĐœĐ° распрДЎДлба – Macedonian") - [à€źà€°à€Ÿà€ à„€](https://mr.wikipedia.org/wiki/%E0%A4%B8%E0%A4%BE%E0%A4%AE%E0%A4%BE%E0%A4%A8%E0%A5%8D%E0%A4%AF_%E0%A4%B5%E0%A4%BF%E0%A4%A4%E0%A4%B0%E0%A4%A3 "à€žà€Ÿà€źà€Ÿà€šà„à€Ż à€”à€żà€€à€°à€Ł – Marathi") - [Bahasa Melayu](https://ms.wikipedia.org/wiki/Taburan_normal "Taburan normal – Malay") - [Nederlands](https://nl.wikipedia.org/wiki/Normale_verdeling "Normale verdeling – Dutch") - [Norsk nynorsk](https://nn.wikipedia.org/wiki/Normalfordeling "Normalfordeling – Norwegian Nynorsk") - [Norsk bokmĂ„l](https://no.wikipedia.org/wiki/Normalfordeling "Normalfordeling – Norwegian BokmĂ„l") - [Polski](https://pl.wikipedia.org/wiki/Rozk%C5%82ad_normalny "RozkƂad normalny – Polish") - [PiemontĂšis](https://pms.wikipedia.org/wiki/Distribussion_%C3%ABd_Gauss "Distribussion Ă«d Gauss – Piedmontese") - [PortuguĂȘs](https://pt.wikipedia.org/wiki/Distribui%C3%A7%C3%A3o_normal "Distribuição normal – Portuguese") - [RomĂąnă](https://ro.wikipedia.org/wiki/Distribu%C8%9Bia_Gauss "Distribuția Gauss – Romanian") - 
[РуссĐșĐžĐč](https://ru.wikipedia.org/wiki/%D0%9D%D0%BE%D1%80%D0%BC%D0%B0%D0%BB%D1%8C%D0%BD%D0%BE%D0%B5_%D1%80%D0%B0%D1%81%D0%BF%D1%80%D0%B5%D0%B4%D0%B5%D0%BB%D0%B5%D0%BD%D0%B8%D0%B5 "ĐĐŸŃ€ĐŒĐ°Đ»ŃŒĐœĐŸĐ” Ń€Đ°ŃĐżŃ€Đ”ĐŽĐ”Đ»Đ”ĐœĐžĐ” – Russian") - [Srpskohrvatski / српсĐșĐŸŃ…Ń€ĐČатсĐșĐž](https://sh.wikipedia.org/wiki/Normalna_raspodjela "Normalna raspodjela – Serbo-Croatian") - [Simple English](https://simple.wikipedia.org/wiki/Normal_distribution "Normal distribution – Simple English") - [Slovenčina](https://sk.wikipedia.org/wiki/Norm%C3%A1lne_rozdelenie "NormĂĄlne rozdelenie – Slovak") - [Slovenơčina](https://sl.wikipedia.org/wiki/Normalna_porazdelitev "Normalna porazdelitev – Slovenian") - [Shqip](https://sq.wikipedia.org/wiki/Shp%C3%ABrndarja_normale "ShpĂ«rndarja normale – Albanian") - [СрпсĐșĐž / srpski](https://sr.wikipedia.org/wiki/%D0%9D%D0%BE%D1%80%D0%BC%D0%B0%D0%BB%D0%BD%D0%B0_%D1%80%D0%B0%D1%81%D0%BF%D0%BE%D0%B4%D0%B5%D0%BB%D0%B0 "ĐĐŸŃ€ĐŒĐ°Đ»ĐœĐ° Ń€Đ°ŃĐżĐŸĐŽĐ”Đ»Đ° – Serbian") - [Sunda](https://su.wikipedia.org/wiki/Sebaran_normal "Sebaran normal – Sundanese") - [Svenska](https://sv.wikipedia.org/wiki/Normalf%C3%B6rdelning "Normalfördelning – Swedish") - [àź€àźźàźżàźŽàŻ](https://ta.wikipedia.org/wiki/%E0%AE%87%E0%AE%AF%E0%AE%B2%E0%AF%8D%E0%AE%A8%E0%AE%BF%E0%AE%B2%E0%AF%88%E0%AE%AA%E0%AF%8D_%E0%AE%AA%E0%AE%B0%E0%AE%B5%E0%AE%B2%E0%AF%8D "àź‡àźŻàźČàŻàźšàźżàźČàŻˆàźȘàŻ àźȘàź°àź”àźČàŻ – Tamil") - [àč„àž—àžą](https://th.wikipedia.org/wiki/%E0%B8%81%E0%B8%B2%E0%B8%A3%E0%B9%81%E0%B8%88%E0%B8%81%E0%B9%81%E0%B8%88%E0%B8%87%E0%B8%9B%E0%B8%A3%E0%B8%81%E0%B8%95%E0%B8%B4 "àžàžČàžŁàčàžˆàžàčàžˆàž‡àž›àžŁàžàž•àžŽ – Thai") - [Tagalog](https://tl.wikipedia.org/wiki/Distribusyong_normal "Distribusyong normal – Tagalog") - [TĂŒrkçe](https://tr.wikipedia.org/wiki/Normal_da%C4%9F%C4%B1l%C4%B1m "Normal dağılım – Turkish") - [батарча / tatarça](https://tt.wikipedia.org/wiki/%D0%93%D0%B0%D1%83%D1%81%D1%81_%D0%B1%D2%AF%D0%BB%D0%B5%D0%BD%D0%B5%D1%88%D0%B5 "Гаусс Đ±ÒŻĐ»Đ”ĐœĐ”ŃˆĐ” – Tatar") - [ĐŁĐșŃ€Đ°Ń—ĐœŃŃŒĐșа](https://uk.wikipedia.org/wiki/%D0%9D%D0%BE%D1%80%D0%BC%D0%B0%D0%BB%D1%8C%D0%BD%D0%B8%D0%B9_%D1%80%D0%BE%D0%B7%D0%BF%D0%BE%D0%B4%D1%96%D0%BB "ĐĐŸŃ€ĐŒĐ°Đ»ŃŒĐœĐžĐč Ń€ĐŸĐ·ĐżĐŸĐŽŃ–Đ» – Ukrainian") - [Ű§Ű±ŰŻÙˆ](https://ur.wikipedia.org/wiki/%D9%86%D8%A7%D8%B1%D9%85%D9%84_%D8%AA%D9%82%D8%B3%DB%8C%D9%85 "Ù†Ű§Ű±Ù…Ù„ ŰȘÙ‚ŰłÛŒÙ… – Urdu") - [Tiáșżng Việt](https://vi.wikipedia.org/wiki/Ph%C3%A2n_ph%E1%BB%91i_chu%E1%BA%A9n "PhĂąn phối chuáș©n – Vietnamese") - [ćŽèŻ­](https://wuu.wikipedia.org/wiki/%E6%AD%A3%E6%80%81%E5%88%86%E5%B8%83 "æ­Łæ€ćˆ†ćžƒ – Wu") - [Ś™Ś™ÖŽŚ“Ś™Ś©](https://yi.wikipedia.org/wiki/%D7%A0%D7%90%D7%A8%D7%9E%D7%90%D7%9C%D7%A2_%D7%A4%D7%90%D7%A8%D7%98%D7%99%D7%99%D7%9C%D7%95%D7%A0%D7%92 "Ś ŚŚšŚžŚŚœŚą Ś€ŚŚšŚ˜Ś™Ś™ŚœŚ•Ś Ś’ – Yiddish") - [é–©ć—èȘž / BĂąn-lĂąm-gĂ­](https://zh-min-nan.wikipedia.org/wiki/Si%C3%B4ng-th%C3%A0i_hun-p%C3%B2%CD%98 "SiĂŽng-thĂ i hun-pĂČ͘ – Minnan") - [çČ”èȘž](https://zh-yue.wikipedia.org/wiki/%E5%B8%B8%E6%85%8B%E5%88%86%E4%BD%88 "ćžžæ…‹ćˆ†äœˆ – Cantonese") - [äž­æ–‡](https://zh.wikipedia.org/wiki/%E6%AD%A3%E6%80%81%E5%88%86%E5%B8%83 "æ­Łæ€ćˆ†ćžƒ – Chinese") [Edit links](https://www.wikidata.org/wiki/Special:EntityPage/Q133871#sitelinks-wikipedia "Edit interlanguage links") - [Article](https://en.wikipedia.org/wiki/Normal_distribution "View the content page [c]") - [Talk](https://en.wikipedia.org/wiki/Talk:Normal_distribution "Discuss improvements to the content page [t]") English - [Read](https://en.wikipedia.org/wiki/Normal_distribution) - 
[Edit](https://en.wikipedia.org/w/index.php?title=Normal_distribution&action=edit "Edit this page [e]") - [View history](https://en.wikipedia.org/w/index.php?title=Normal_distribution&action=history "Past revisions of this page [h]") Tools Tools move to sidebar hide Actions - [Read](https://en.wikipedia.org/wiki/Normal_distribution) - [Edit](https://en.wikipedia.org/w/index.php?title=Normal_distribution&action=edit "Edit this page [e]") - [View history](https://en.wikipedia.org/w/index.php?title=Normal_distribution&action=history) General - [What links here](https://en.wikipedia.org/wiki/Special:WhatLinksHere/Normal_distribution "List of all English Wikipedia pages containing links to this page [j]") - [Related changes](https://en.wikipedia.org/wiki/Special:RecentChangesLinked/Normal_distribution "Recent changes in pages linked from this page [k]") - [Upload file](https://en.wikipedia.org/wiki/Wikipedia:File_Upload_Wizard "Upload files [u]") - [Permanent link](https://en.wikipedia.org/w/index.php?title=Normal_distribution&oldid=1344852379 "Permanent link to this revision of this page") - [Page information](https://en.wikipedia.org/w/index.php?title=Normal_distribution&action=info "More information about this page") - [Cite this page](https://en.wikipedia.org/w/index.php?title=Special:CiteThisPage&page=Normal_distribution&id=1344852379&wpFormIdentifier=titleform "Information on how to cite this page") - [Get shortened URL](https://en.wikipedia.org/w/index.php?title=Special:UrlShortener&url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FNormal_distribution) Print/export - [Download as PDF](https://en.wikipedia.org/w/index.php?title=Special:DownloadAsPdf&page=Normal_distribution&action=show-download-screen "Download this page as a PDF file") - [Printable version](https://en.wikipedia.org/w/index.php?title=Normal_distribution&printable=yes "Printable version of this page [p]") In other projects - [Wikimedia Commons](https://commons.wikimedia.org/wiki/Category:Normal_distribution) - [Wikidata item](https://www.wikidata.org/wiki/Special:EntityPage/Q133871 "Structured data on this page hosted by Wikidata [g]") Appearance move to sidebar hide From Wikipedia, the free encyclopedia Probability distribution "Bell curve" redirects here. For other uses, see [Bell curve (disambiguation)](https://en.wikipedia.org/wiki/Bell_curve_\(disambiguation\) "Bell curve (disambiguation)"). | Normal distribution | | |---|---| | Probability density function[![](https://upload.wikimedia.org/wikipedia/commons/thumb/7/74/Normal_Distribution_PDF.svg/500px-Normal_Distribution_PDF.svg.png)](https://en.wikipedia.org/wiki/File:Normal_Distribution_PDF.svg)The red curve is the [*standard normal distribution*](https://en.wikipedia.org/wiki/Normal_distribution#Standard_normal_distribution). 
| | | Cumulative distribution function[![](https://upload.wikimedia.org/wikipedia/commons/thumb/c/ca/Normal_Distribution_CDF.svg/500px-Normal_Distribution_CDF.svg.png)](https://en.wikipedia.org/wiki/File:Normal_Distribution_CDF.svg) | | | Notation | N ( ÎŒ , σ 2 ) {\\displaystyle {\\mathcal {N}}(\\mu ,\\sigma ^{2})} ![{\\displaystyle {\\mathcal {N}}(\\mu ,\\sigma ^{2})}](https://wikimedia.org/api/rest_v1/media/math/render/svg/863304aaa42a945f2f07d79facc3d2eebc845ce7) | | | |---| | Part of a series on [statistics](https://en.wikipedia.org/wiki/Statistics "Statistics") | | [Probability theory](https://en.wikipedia.org/wiki/Probability_theory "Probability theory") | | [![](https://upload.wikimedia.org/wikipedia/commons/thumb/3/3a/Standard_deviation_diagram_micro.svg/250px-Standard_deviation_diagram_micro.svg.png)](https://en.wikipedia.org/wiki/File:Standard_deviation_diagram_micro.svg) | | [Probability](https://en.wikipedia.org/wiki/Probability "Probability") [Axioms](https://en.wikipedia.org/wiki/Probability_axioms "Probability axioms") [Determinism](https://en.wikipedia.org/wiki/Determinism "Determinism") [System](https://en.wikipedia.org/wiki/Deterministic_system "Deterministic system") [Indeterminism](https://en.wikipedia.org/wiki/Indeterminism "Indeterminism") [Randomness](https://en.wikipedia.org/wiki/Randomness "Randomness") | | [Probability space](https://en.wikipedia.org/wiki/Probability_space "Probability space") [Sample space](https://en.wikipedia.org/wiki/Sample_space "Sample space") [Event](https://en.wikipedia.org/wiki/Event_\(probability_theory\) "Event (probability theory)") [Collectively exhaustive events](https://en.wikipedia.org/wiki/Collectively_exhaustive_events "Collectively exhaustive events") [Elementary event](https://en.wikipedia.org/wiki/Elementary_event "Elementary event") [Mutual exclusivity](https://en.wikipedia.org/wiki/Mutual_exclusivity "Mutual exclusivity") [Outcome](https://en.wikipedia.org/wiki/Outcome_\(probability\) "Outcome (probability)") [Singleton](https://en.wikipedia.org/wiki/Singleton_\(mathematics\) "Singleton (mathematics)") [Experiment](https://en.wikipedia.org/wiki/Experiment_\(probability_theory\) "Experiment (probability theory)") [Bernoulli trial](https://en.wikipedia.org/wiki/Bernoulli_trial "Bernoulli trial") [Probability distribution](https://en.wikipedia.org/wiki/Probability_distribution "Probability distribution") [Bernoulli distribution](https://en.wikipedia.org/wiki/Bernoulli_distribution "Bernoulli distribution") [Binomial distribution](https://en.wikipedia.org/wiki/Binomial_distribution "Binomial distribution") [Exponential distribution](https://en.wikipedia.org/wiki/Exponential_distribution "Exponential distribution") [Normal distribution]() [Pareto distribution](https://en.wikipedia.org/wiki/Pareto_distribution "Pareto distribution") [Poisson distribution](https://en.wikipedia.org/wiki/Poisson_distribution "Poisson distribution") [Probability measure](https://en.wikipedia.org/wiki/Probability_measure "Probability measure") [Random variable](https://en.wikipedia.org/wiki/Random_variable "Random variable") [Bernoulli process](https://en.wikipedia.org/wiki/Bernoulli_process "Bernoulli process") [Continuous or discrete](https://en.wikipedia.org/wiki/Continuous_or_discrete_variable "Continuous or discrete variable") [Expected value](https://en.wikipedia.org/wiki/Expected_value "Expected value") [Variance](https://en.wikipedia.org/wiki/Variance "Variance") [Markov chain](https://en.wikipedia.org/wiki/Markov_chain "Markov chain") 
[Observed value](https://en.wikipedia.org/wiki/Realization_\(probability\) "Realization (probability)") [Random walk](https://en.wikipedia.org/wiki/Random_walk "Random walk") [Stochastic process](https://en.wikipedia.org/wiki/Stochastic_process "Stochastic process") | | [Complementary event](https://en.wikipedia.org/wiki/Complementary_event "Complementary event") [Joint probability](https://en.wikipedia.org/wiki/Joint_probability_distribution "Joint probability distribution") [Marginal probability](https://en.wikipedia.org/wiki/Marginal_distribution "Marginal distribution") [Conditional probability](https://en.wikipedia.org/wiki/Conditional_probability "Conditional probability") | | [Independence](https://en.wikipedia.org/wiki/Independence_\(probability_theory\) "Independence (probability theory)") [Conditional independence](https://en.wikipedia.org/wiki/Conditional_independence "Conditional independence") [Law of total probability](https://en.wikipedia.org/wiki/Law_of_total_probability "Law of total probability") [Law of large numbers](https://en.wikipedia.org/wiki/Law_of_large_numbers "Law of large numbers") [Bayes' theorem](https://en.wikipedia.org/wiki/Bayes%27_theorem "Bayes' theorem") [Boole's inequality](https://en.wikipedia.org/wiki/Boole%27s_inequality "Boole's inequality") | | [Venn diagram](https://en.wikipedia.org/wiki/Venn_diagram "Venn diagram") [Tree diagram](https://en.wikipedia.org/wiki/Tree_diagram_\(probability_theory\) "Tree diagram (probability theory)") | | [v](https://en.wikipedia.org/wiki/Template:Probability_fundamentals "Template:Probability fundamentals") [t](https://en.wikipedia.org/wiki/Template_talk:Probability_fundamentals "Template talk:Probability fundamentals") [e](https://en.wikipedia.org/wiki/Special:EditPage/Template:Probability_fundamentals "Special:EditPage/Template:Probability fundamentals") | In [probability theory](https://en.wikipedia.org/wiki/Probability_theory "Probability theory") and [statistics](https://en.wikipedia.org/wiki/Statistics "Statistics"), a **normal distribution** or **Gaussian distribution** is a type of [continuous probability distribution](https://en.wikipedia.org/wiki/Continuous_probability_distribution "Continuous probability distribution") for a [real-valued](https://en.wikipedia.org/wiki/Real_number "Real number") [random variable](https://en.wikipedia.org/wiki/Random_variable "Random variable"). The general form of its [probability density function](https://en.wikipedia.org/wiki/Probability_density_function "Probability density function") is[\[2\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-The_Joy_of_Finite_Mathematics-2)[\[3\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Mathematics_for_Physical_Science_and_Engineering-3)[\[4\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-4) f ( x ) \= 1 2 π σ 2 exp ⁥ ( − ( x − ÎŒ ) 2 2 σ 2 ) . 
$$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right).$$

The parameter $\mu$ is the [mean](https://en.wikipedia.org/wiki/Mean#Mean_of_a_probability_distribution) or [expectation](https://en.wikipedia.org/wiki/Expected_value) of the distribution (and also its [median](https://en.wikipedia.org/wiki/Median) and [mode](https://en.wikipedia.org/wiki/Mode_(statistics))), while the parameter $\sigma^2$ is the [variance](https://en.wikipedia.org/wiki/Variance). The [standard deviation](https://en.wikipedia.org/wiki/Standard_deviation) of the distribution is the positive value $\sigma$ (sigma). A random variable with a Gaussian distribution is said to be **normally distributed** and is called a **normal deviate**.

Normal distributions are important in statistics and are often used in the [natural](https://en.wikipedia.org/wiki/Natural_science) and [social sciences](https://en.wikipedia.org/wiki/Social_science) to represent real-valued random variables whose distributions are not known.[\[5\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-5)[\[6\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-6) Their importance is partly due to the [central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem). It states that the average of many [statistically independent](https://en.wikipedia.org/wiki/Statistically_independent) samples (observations) of a random variable with finite mean and variance is itself a random variable whose distribution [converges](https://en.wikipedia.org/wiki/Convergence_in_distribution) to a normal distribution as the number of samples increases. Therefore, physical quantities that are expected to be the sum of many independent processes, such as [measurement errors](https://en.wikipedia.org/wiki/Measurement_error), often have distributions that are nearly normal.[\[7\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-7)

Moreover, Gaussian distributions have some unique properties that are valuable in analytic studies. For instance, any [linear combination](https://en.wikipedia.org/wiki/Linear_combination) of a fixed collection of independent normal deviates is a normal deviate.
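As a quick illustration of the central limit theorem described above, here is a minimal sketch (plain Python; the sample sizes and helper names are our own choices) that averages many independent uniform draws and checks that the averages behave like a normal distribution:

```python
import random
import statistics

# Average n independent uniform(0, 1) draws; by the central limit theorem
# the averages are approximately N(mu, sigma^2/n) with mu = 0.5 and
# sigma^2 = 1/12 (mean and variance of a single uniform draw).
def sample_mean(n: int) -> float:
    return sum(random.random() for _ in range(n)) / n

n, trials = 50, 10_000
means = [sample_mean(n) for _ in range(trials)]

print(statistics.mean(means))   # ~0.5
print(statistics.stdev(means))  # ~sqrt((1/12)/50) ~= 0.0408

# Empirical one-sigma coverage: should be close to the normal value ~0.68.
mu, sd = 0.5, (1 / 12 / n) ** 0.5
within = sum(abs(m - mu) <= sd for m in means) / trials
print(within)
```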
Many results and methods, such as [propagation of uncertainty](https://en.wikipedia.org/wiki/Propagation_of_uncertainty) and [least squares](https://en.wikipedia.org/wiki/Least_squares)[\[8\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-8) parameter fitting, can be derived analytically in explicit form when the relevant variables are normally distributed.

A normal distribution is sometimes informally called a **bell curve**.[\[9\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-www.mathsisfun.com-9)[\[10\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-10) However, many other distributions are [bell-shaped](https://en.wikipedia.org/wiki/Bell-shaped_function) (such as the [Cauchy](https://en.wikipedia.org/wiki/Cauchy_distribution), [Student's t](https://en.wikipedia.org/wiki/Student%27s_t-distribution), and [logistic](https://en.wikipedia.org/wiki/Logistic_distribution) distributions). (For other names, see *[Naming](https://en.wikipedia.org/wiki/Normal_distribution#Naming)*.)

The [univariate probability distribution](https://en.wikipedia.org/wiki/Univariate_distribution) is generalized for [vectors](https://en.wikipedia.org/wiki/Vector_(mathematics_and_physics)) in the [multivariate normal distribution](https://en.wikipedia.org/wiki/Multivariate_normal_distribution) and for matrices in the [matrix normal distribution](https://en.wikipedia.org/wiki/Matrix_normal_distribution).

## Definitions

### Standard normal distribution

The simplest case of a normal distribution is known as the **standard normal distribution** or **unit normal distribution**. This is a special case when $\mu = 0$ and $\sigma^2 = 1$, and it is described by this probability density function (or density):[\[11\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-11)

$$\varphi(z) = \frac{e^{-z^2/2}}{\sqrt{2\pi}}.$$

The variable $z$ has a mean of 0 and a variance and standard deviation of 1.
The density $\varphi(z)$ has its peak value $\frac{1}{\sqrt{2\pi}}$ at $z = 0$ and [inflection points](https://en.wikipedia.org/wiki/Inflection_point) at $z = +1$ and $z = -1$.

Although the density above is most commonly known as the *standard normal*, a few authors have used that term to describe other versions of the normal distribution. [Carl Friedrich Gauss](https://en.wikipedia.org/wiki/Carl_Friedrich_Gauss), for example, once defined the standard normal as $\varphi(z) = \frac{1}{\sqrt{\pi}} e^{-z^2},$ which has a variance of $\tfrac{1}{2}$, and [Stephen Stigler](https://en.wikipedia.org/wiki/Stephen_Stigler) once defined the standard normal as $\varphi(z) = e^{-\pi z^2},$ which has a simple functional form and a variance of $\sigma^2 = \frac{1}{2\pi}.$[\[12\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-12)

### General normal distribution

If $Z$ is a [standard normal deviate](https://en.wikipedia.org/wiki/Standard_normal_deviate), then $X = \sigma Z + \mu$ will have a normal distribution with expected value $\mu$ and standard deviation $\sigma$.
This is equivalent to saying that the standard normal distribution $Z$ can be scaled/stretched by a factor of $\sigma$ and shifted by $\mu$ to yield a different normal distribution, called $X$. Conversely, if $X$ is a normal deviate with parameters $\mu$ and $\sigma^2$, then this $X$ distribution can be re-scaled and shifted via the formula $Z = (X - \mu)/\sigma$ to convert it to the standard normal distribution. This variate is also called the standardized form of $X$.

In particular, the probability density function for $X$ can be written in terms of the standard normal density $\varphi$ (with zero mean and unit variance):

$$f(x \mid \mu, \sigma^2) = \frac{1}{\sigma} \varphi\left(\frac{x-\mu}{\sigma}\right).$$

The probability density must be scaled by $1/\sigma$ so that the [integral](https://en.wikipedia.org/wiki/Integral) is still 1.
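A minimal sketch of this scale-and-shift relationship, using Python's standard-library `statistics.NormalDist` (the variable names here are our own):

```python
from statistics import NormalDist

mu, sigma = 10.0, 2.0
X = NormalDist(mu, sigma)   # general normal distribution
Z = NormalDist(0.0, 1.0)    # standard normal distribution

x = 13.0
z = (x - mu) / sigma        # standardized form of x

# Densities agree after scaling by 1/sigma: f(x | mu, sigma^2) = phi(z)/sigma.
assert abs(X.pdf(x) - Z.pdf(z) / sigma) < 1e-12

# CDFs agree without any scaling: F(x) = Phi(z).
assert abs(X.cdf(x) - Z.cdf(z)) < 1e-12
```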
### Notation

The probability density of the standard Gaussian distribution (standard normal distribution, with zero mean and unit variance) is often denoted with the Greek letter $\phi$ ([phi](https://en.wikipedia.org/wiki/Phi)).[\[13\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-13) The variant form of the Greek letter phi, $\varphi$, is also used quite often.

The normal distribution is often referred to as $N(\mu, \sigma^2)$ or $\mathcal{N}(\mu, \sigma^2)$.[\[14\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-14) Thus when a random variable $X$ is normally distributed with mean $\mu$ and standard deviation $\sigma$, one may write

$$X \sim \mathcal{N}(\mu, \sigma^2).$$

### Alternative parameterizations

Some authors advocate using the [precision](https://en.wikipedia.org/wiki/Precision_(statistics)) $\tau$ as the parameter defining the width of the distribution, instead of the standard deviation $\sigma$ or the variance $\sigma^2$. The precision is normally defined as the reciprocal of the variance, $1/\sigma^2$.[\[15\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-15) The formula for the distribution then becomes
$$f(x) = \sqrt{\frac{\tau}{2\pi}}\, e^{-\tau(x-\mu)^2/2}.$$

This choice is claimed to have advantages in numerical computations when $\sigma$ is very close to zero, and it simplifies formulas in some contexts, such as in the [Bayesian inference](https://en.wikipedia.org/wiki/Bayesian_statistics) of variables with [multivariate normal distribution](https://en.wikipedia.org/wiki/Multivariate_normal_distribution).

Alternatively, the reciprocal of the standard deviation $\tau' = 1/\sigma$ might be defined as the *precision*, in which case the expression of the normal distribution becomes

$$f(x) = \frac{\tau'}{\sqrt{2\pi}}\, e^{-(\tau')^2(x-\mu)^2/2}.$$

According to Stigler, this formulation is advantageous because of a much simpler and easier-to-remember formula, and simple approximate formulas for the [quantiles](https://en.wikipedia.org/wiki/Quantile) of the distribution.
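The three parameterizations are easy to convert between. Here is a small sketch (the helper names are our own) that evaluates the same density through the variance $\sigma^2$, the precision $\tau = 1/\sigma^2$, and the reciprocal standard deviation $\tau' = 1/\sigma$:

```python
import math

def pdf_variance(x, mu, var):
    # Standard parameterization by the variance sigma^2.
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def pdf_precision(x, mu, tau):
    # Precision parameterization, tau = 1/sigma^2.
    return math.sqrt(tau / (2 * math.pi)) * math.exp(-tau * (x - mu) ** 2 / 2)

def pdf_reciprocal_sd(x, mu, tau_prime):
    # Reciprocal-standard-deviation parameterization, tau' = 1/sigma.
    return tau_prime / math.sqrt(2 * math.pi) * math.exp(-(tau_prime ** 2) * (x - mu) ** 2 / 2)

x, mu, sigma = 1.3, 0.5, 2.0
v = pdf_variance(x, mu, sigma ** 2)
assert abs(v - pdf_precision(x, mu, 1 / sigma ** 2)) < 1e-12
assert abs(v - pdf_reciprocal_sd(x, mu, 1 / sigma)) < 1e-12
```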
Normal distributions form an [exponential family](https://en.wikipedia.org/wiki/Exponential_family) with [natural parameters](https://en.wikipedia.org/wiki/Natural_parameter) $\theta_1 = \frac{\mu}{\sigma^2}$ and $\theta_2 = -\frac{1}{2\sigma^2}$, and natural statistics $x$ and $x^2$. The dual expectation parameters for the normal distribution are $\eta_1 = \mu$ and $\eta_2 = \mu^2 + \sigma^2$.

### Cumulative distribution function

The [cumulative distribution function](https://en.wikipedia.org/wiki/Cumulative_distribution_function) (CDF) of the standard normal distribution, usually denoted with the capital Greek letter $\Phi$, is the integral

$$\Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-t^2/2} \, dt.$$

The related [error function](https://en.wikipedia.org/wiki/Error_function) $\operatorname{erf}(x)$ gives the probability of a random variable, with normal distribution of mean 0 and variance 1/2, falling in the range $[-x, x]$. That is:

$$\operatorname{erf}(x) = \frac{1}{\sqrt{\pi}} \int_{-x}^{x} e^{-t^2} \, dt = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^2} \, dt.$$

These integrals cannot be expressed in terms of elementary functions, and are often said to be [special functions](https://en.wikipedia.org/wiki/Special_function). However, many numerical approximations are known; see [below](https://en.wikipedia.org/wiki/Normal_distribution#Numerical_approximations_for_the_normal_cumulative_distribution_function_and_normal_quantile_function) for more.

The two functions are closely related, namely

$$\Phi(x) = \frac{1}{2} \left[1 + \operatorname{erf}\left(\frac{x}{\sqrt{2}}\right)\right].$$

For a generic normal distribution with density $f$, mean $\mu$ and variance $\sigma^2$, the cumulative distribution function is
$$F(x) = \Phi\left(\frac{x-\mu}{\sigma}\right) = \frac{1}{2} \left[1 + \operatorname{erf}\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)\right].$$

The probability that $x$ lies between $a$ and $b$ with $a < b$ is therefore[\[16\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-KunIlPark-16): 84 

$$\operatorname{P}(a < x \leq b) = \frac{1}{2} \left[\operatorname{erf}\left(\frac{b-\mu}{\sigma\sqrt{2}}\right) - \operatorname{erf}\left(\frac{a-\mu}{\sigma\sqrt{2}}\right)\right].$$

The complement of the standard normal cumulative distribution function, $Q(x) = 1 - \Phi(x)$, is often called the [Q-function](https://en.wikipedia.org/wiki/Q-function), especially in engineering texts.[\[17\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-17)[\[18\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-18) It gives the probability that the value of a standard normal random variable $X$ will exceed $x$: $P(X > x)$. Other definitions of the $Q$-function, all of which are simple transformations of $\Phi$, are also used occasionally.[\[19\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-19)

The [graph](https://en.wikipedia.org/wiki/Graph_of_a_function) of the standard normal cumulative distribution function $\Phi$ has 2-fold [rotational symmetry](https://en.wikipedia.org/wiki/Rotational_symmetry) around the point $(0, 1/2)$; that is, $\Phi(-x) = 1 - \Phi(x)$.
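Since Python's `math` module ships `erf`, the relations above translate directly into code. A minimal sketch (the function names are our own):

```python
import math

def std_normal_cdf(x: float) -> float:
    # Phi(x) = (1/2) * (1 + erf(x / sqrt(2)))
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def normal_cdf(x: float, mu: float, sigma: float) -> float:
    # F(x) = Phi((x - mu) / sigma)
    return std_normal_cdf((x - mu) / sigma)

def prob_between(a: float, b: float, mu: float, sigma: float) -> float:
    # P(a < X <= b) = (1/2) * [erf((b-mu)/(sigma*sqrt(2))) - erf((a-mu)/(sigma*sqrt(2)))]
    s = sigma * math.sqrt(2.0)
    return 0.5 * (math.erf((b - mu) / s) - math.erf((a - mu) / s))

def q_function(x: float) -> float:
    # Q(x) = 1 - Phi(x), the upper-tail probability of a standard normal.
    return 1.0 - std_normal_cdf(x)

print(std_normal_cdf(1.96))            # ~0.975
print(prob_between(-1, 1, 0.0, 1.0))   # ~0.6827
print(q_function(0.0))                 # 0.5
```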
The [antiderivative](https://en.wikipedia.org/wiki/Antiderivative) (indefinite integral) of $\Phi$ can be expressed as follows:

$$\int \Phi(x) \, dx = x\Phi(x) + \varphi(x) + C.$$

An [asymptotic expansion](https://en.wikipedia.org/wiki/Asymptotic_expansion) of the cumulative distribution function for large $x$ can be derived using [integration by parts](https://en.wikipedia.org/wiki/Integration_by_parts):

$$\Phi(x) = \frac{1}{2} + \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2} \sum_{n=0}^{\infty} \frac{x^{2n+1}}{(2n+1)!!},$$

where $!!$ denotes the [double factorial](https://en.wikipedia.org/wiki/Double_factorial). For more, see [Error function § Asymptotic expansion](https://en.wikipedia.org/wiki/Error_function#Asymptotic_expansion).[\[20\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-20)

#### Taylor series representation

The [Taylor series](https://en.wikipedia.org/wiki/Taylor_series) for the normal density $\varphi$ can be derived by substituting $-\tfrac{1}{2}x^2$ into the [Taylor series for the exponential function](https://en.wikipedia.org/wiki/Exponential_function#Power_series):[\[21\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-duff-21)

$$\varphi(x) = \frac{1}{\sqrt{2\pi}} \sum_{n=0}^{\infty} \frac{(-1)^n}{n! \, 2^n} x^{2n}.$$

This series can be integrated term by term to obtain the Taylor series for the cumulative distribution function:[\[22\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-kendall-22)
$$\Phi(x) = \frac{1}{2} + \frac{1}{\sqrt{2\pi}} \sum_{n=0}^{\infty} \frac{(-1)^n}{n! \, 2^n (2n+1)} x^{2n+1}.$$

However, this series is ineffective for calculation due to slow convergence, except when $x$ is small.[\[22\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-kendall-22) Both of these series describe [entire functions](https://en.wikipedia.org/wiki/Entire_function), which converge for all real and complex values of $x$.
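A short sketch that sums the truncated Taylor series and compares it against the erf-based value; the truncation level `N` is an arbitrary choice of ours:

```python
import math

def phi_cdf_taylor(x: float, N: int = 40) -> float:
    # Partial sum of Phi(x) = 1/2 + (1/sqrt(2*pi)) * sum (-1)^n x^(2n+1) / (n! 2^n (2n+1))
    total = 0.0
    for n in range(N):
        total += (-1) ** n * x ** (2 * n + 1) / (math.factorial(n) * 2 ** n * (2 * n + 1))
    return 0.5 + total / math.sqrt(2 * math.pi)

exact = 0.5 * (1 + math.erf(1.0 / math.sqrt(2)))
print(phi_cdf_taylor(1.0), exact)   # agree to many digits for small x
print(phi_cdf_taylor(8.0, N=40))    # nonsense: the series converges too slowly for large x
```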
#### Recursive computation with Taylor series

The recurrence relation for [Hermite polynomials](https://en.wikipedia.org/wiki/Hermite_polynomials) $\operatorname{He}_n(x)$ may be used to efficiently construct the [Taylor series](https://en.wikipedia.org/wiki/Taylor_series) expansion about any point $x_0$:

$$\Phi(x) = \sum_{n=0}^{\infty} \frac{\Phi^{(n)}(x_0)}{n!} (x - x_0)^n,$$

where:

$$\begin{aligned}
\Phi^{(0)}(x_0) &= \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x_0} e^{-t^2/2} \, dt \\
\Phi^{(1)}(x_0) &= \frac{1}{\sqrt{2\pi}}\, e^{-x_0^2/2} \\
\Phi^{(n)}(x_0) &= -\left(x_0 \Phi^{(n-1)}(x_0) + (n-2) \Phi^{(n-2)}(x_0)\right), \quad n \geq 2.
\end{aligned}$$
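A sketch of that recurrence, seeded with `math.erf` for $\Phi^{(0)}$ (the helper name and number of terms are our own choices):

```python
import math

def phi_cdf_about(x: float, x0: float, terms: int = 30) -> float:
    # Taylor expansion of Phi about x0, with derivatives built from the recurrence
    # Phi^(n)(x0) = -(x0 * Phi^(n-1)(x0) + (n-2) * Phi^(n-2)(x0)), n >= 2.
    d = [0.0] * terms
    d[0] = 0.5 * (1 + math.erf(x0 / math.sqrt(2)))          # Phi(x0)
    d[1] = math.exp(-x0 * x0 / 2) / math.sqrt(2 * math.pi)  # phi(x0)
    for n in range(2, terms):
        d[n] = -(x0 * d[n - 1] + (n - 2) * d[n - 2])
    return sum(d[n] / math.factorial(n) * (x - x0) ** n for n in range(terms))

print(phi_cdf_about(1.2, x0=1.0))
print(0.5 * (1 + math.erf(1.2 / math.sqrt(2))))  # reference value
```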
#### Standard deviation and coverage

Further information: [Interval estimation](https://en.wikipedia.org/wiki/Interval_estimation) and [Coverage probability](https://en.wikipedia.org/wiki/Coverage_probability)

[![Standard deviation diagram](https://upload.wikimedia.org/wikipedia/commons/thumb/8/8c/Standard_deviation_diagram.svg/500px-Standard_deviation_diagram.svg.png)](https://en.wikipedia.org/wiki/File:Standard_deviation_diagram.svg)
*For the normal distribution, the values less than one standard deviation from the mean account for 68.27% of the set, two standard deviations account for 95.45%, and three standard deviations account for 99.73%.*

About 68% of values drawn from a normal distribution are within one standard deviation $\sigma$ from the mean; about 95% of the values lie within two standard deviations; and about 99.7% are within three standard deviations.[\[9\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-www.mathsisfun.com-9) This is known as the [68–95–99.7 (empirical) rule](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule), or the *3-sigma rule*.

More precisely, the probability that a normal deviate lies in the range between $\mu - n\sigma$ and $\mu + n\sigma$ is given by

$$F(\mu + n\sigma) - F(\mu - n\sigma) = \Phi(n) - \Phi(-n) = \operatorname{erf}\left(\frac{n}{\sqrt{2}}\right).$$

To 12 significant digits, the values for $n = 1, 2, \ldots, 6$ are:

| $n$ | $p = F(\mu+n\sigma) - F(\mu-n\sigma)$ | $1-p$ | or 1 in | [OEIS](https://en.wikipedia.org/wiki/On-Line_Encyclopedia_of_Integer_Sequences) |
|---|---|---|---|---|
| 1 | 0.682689492137 | 0.317310507863 | 3.15148718753 | [A178647](https://oeis.org/A178647) |
| 2 | 0.954499736104 | 0.045500263896 | 21.9778945080 | [A110894](https://oeis.org/A110894) |
| 3 | 0.997300203937 | 0.002699796063 | 370.398347345 | [A270712](https://oeis.org/A270712) |
| 4 | 0.999936657516 | 0.000063342484 | 15787.1927673 | |
| 5 | 0.999999426697 | 0.000000573303 | 1744277.89362 | |
| 6 | 0.999999998027 | 0.000000001973 | 506797345.897 | |

For large $n$, one can use the approximation

$$1 - p \approx \frac{\sqrt{2}}{n\sqrt{\pi e^{n^2}}}.$$
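These coverage probabilities are one-liners with `math.erf`; a quick sketch reproducing the table above:

```python
import math

for n in range(1, 7):
    p = math.erf(n / math.sqrt(2))   # P(mu - n*sigma < X < mu + n*sigma)
    print(f"{n}  p={p:.12f}  1-p={1 - p:.12e}  (1 in {1 / (1 - p):,.3f})")
```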
#### Quantile function

Further information: [Quantile function § Normal distribution](https://en.wikipedia.org/wiki/Quantile_function#Normal_distribution)

The [quantile function](https://en.wikipedia.org/wiki/Quantile_function) of a distribution is the inverse of the cumulative distribution function. The quantile function of the standard normal distribution is called the [probit function](https://en.wikipedia.org/wiki/Probit_function), and can be expressed in terms of the inverse [error function](https://en.wikipedia.org/wiki/Error_function):

$$\Phi^{-1}(p) = \sqrt{2}\, \operatorname{erf}^{-1}(2p - 1), \quad p \in (0, 1).$$

For a normal random variable with mean $\mu$ and variance $\sigma^2$, the quantile function is

$$F^{-1}(p) = \mu + \sigma \Phi^{-1}(p) = \mu + \sigma\sqrt{2}\, \operatorname{erf}^{-1}(2p - 1), \quad p \in (0, 1).$$

The [quantile](https://en.wikipedia.org/wiki/Quantile) $\Phi^{-1}(p)$ of the standard normal distribution is commonly denoted as $z_p$. These values are used in [hypothesis testing](https://en.wikipedia.org/wiki/Hypothesis_testing), construction of [confidence intervals](https://en.wikipedia.org/wiki/Confidence_interval) and [Q–Q plots](https://en.wikipedia.org/wiki/Q%E2%80%93Q_plot). A normal random variable $X$ will exceed $\mu + z_p\sigma$ with probability $1-p$, and will lie outside the interval $\mu \pm z_p\sigma$ with probability $2(1-p)$. In particular, the quantile $z_{0.975}$ is [1.96](https://en.wikipedia.org/wiki/1.96); therefore a normal random variable will lie outside the interval $\mu \pm 1.96\sigma$ in only 5% of cases.

The following table gives the quantile $z_p$ such that $X$ will lie in the range $\mu \pm z_p\sigma$ with a specified probability $p$.
These values are useful to determine [tolerance intervals](https://en.wikipedia.org/wiki/Tolerance_interval) for [sample averages](https://en.wikipedia.org/wiki/Sample_mean_and_sample_covariance#Sample_mean) and other statistical [estimators](https://en.wikipedia.org/wiki/Estimator) with normal (or [asymptotically](https://en.wikipedia.org/wiki/Asymptotic) normal) distributions.[\[23\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-23) Note that the table shows $\sqrt{2}\,\operatorname{erf}^{-1}(p) = \Phi^{-1}\left(\frac{p+1}{2}\right)$, not $\Phi^{-1}(p)$ as defined above.

| $p$ | $z_p$ |
|---|---|
| 0.80 | 1.281551565545 |
| 0.90 | 1.644853626951 |
| 0.95 | 1.959963984540 |
| 0.98 | 2.326347874041 |
| 0.99 | 2.575829303549 |
| 0.995 | 2.807033768344 |
| 0.998 | 3.090232306168 |
| 0.999 | 3.290526731492 |
| 0.9999 | 3.890591886413 |
| 0.99999 | 4.417173413469 |
| 0.999999 | 4.891638475699 |
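Python's `statistics.NormalDist` exposes the quantile function as `inv_cdf`, so the table above can be regenerated directly; a small sketch:

```python
from statistics import NormalDist

Z = NormalDist()  # standard normal

for p in (0.80, 0.90, 0.95, 0.98, 0.99, 0.995, 0.998, 0.999):
    # Two-sided quantile: z_p = Phi^{-1}((p + 1) / 2), so that
    # P(mu - z_p*sigma < X < mu + z_p*sigma) = p.
    z_p = Z.inv_cdf((p + 1) / 2)
    print(f"{p}: {z_p:.12f}")
```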
For small $p$, the quantile function has the useful [asymptotic expansion](https://en.wikipedia.org/wiki/Asymptotic_expansion)

$$\Phi^{-1}(p) = -\sqrt{\ln\frac{1}{p^2} - \ln\ln\frac{1}{p^2} - \ln(2\pi)} + o(1).$$ [citation needed]

#### Using root finding to compute the quantile function

Any of the described approaches for computing the cumulative distribution function $\Phi(x)$ can be used with [Newton's method](https://en.wikipedia.org/wiki/Newton%27s_method) (or another [root-finding algorithm](https://en.wikipedia.org/wiki/Root-finding_algorithm) such as [Halley's method](https://en.wikipedia.org/wiki/Halley%27s_method)) to find the value of $x$ for which $\Phi(x) = q$ for some desired quantile $q$. For example, starting with an initial, approximately correct guess $x_0$, increasingly better approximations $x_1$, $x_2$, ... can be calculated iteratively using Newton's method with

$$x_n = x_{n-1} - \frac{\Phi(x_{n-1}) - q}{\varphi(x_{n-1})}.$$
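A minimal sketch of that iteration (the starting guess, tolerance, and function names are our own choices):

```python
import math

def std_normal_cdf(x):   # Phi(x)
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def std_normal_pdf(x):   # phi(x), the derivative of Phi
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def probit_newton(q, x0=0.0, tol=1e-14, max_iter=100):
    # Solve Phi(x) = q by Newton's method: x <- x - (Phi(x) - q) / phi(x).
    x = x0
    for _ in range(max_iter):
        step = (std_normal_cdf(x) - q) / std_normal_pdf(x)
        x -= step
        if abs(step) < tol:
            break
    return x

print(probit_newton(0.975))  # ~1.959963984540
```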
## Properties

The normal distribution is the only distribution whose [cumulants](https://en.wikipedia.org/wiki/Cumulant) beyond the first two (i.e., other than the mean and [variance](https://en.wikipedia.org/wiki/Variance)) are zero. It is also the continuous distribution with the [maximum entropy](https://en.wikipedia.org/wiki/Maximum_entropy_probability_distribution) for a specified mean and variance.[\[24\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-FOOTNOTECoverThomas2006254-24)[\[25\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-25) Geary has shown, assuming that the mean and variance are finite, that the normal distribution is the only distribution where the mean and variance calculated from a set of independent draws are independent of each other.[\[26\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Geary_RC-26)[\[27\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-27)

The normal distribution is a subclass of the [elliptical distributions](https://en.wikipedia.org/wiki/Elliptical_distribution). The normal distribution is [symmetric](https://en.wikipedia.org/wiki/Symmetric_distribution) about its mean, and is non-zero over the entire real line. As such it may not be a suitable model for variables that are inherently positive or strongly skewed, such as the [weight](https://en.wikipedia.org/wiki/Weight) of a person or the price of a [share of stock](https://en.wikipedia.org/wiki/Share_(finance)). Such variables may be better described by other distributions, such as the [log-normal distribution](https://en.wikipedia.org/wiki/Log-normal_distribution) or the [Pareto distribution](https://en.wikipedia.org/wiki/Pareto_distribution).

The value of the normal density is practically zero when the value $x$ lies more than a few [standard deviations](https://en.wikipedia.org/wiki/Standard_deviation) away from the mean (e.g., a spread of three standard deviations covers all but 0.27% of the total distribution). Therefore, it may not be an appropriate model when one expects a significant fraction of [outliers](https://en.wikipedia.org/wiki/Outlier), values that lie many standard deviations away from the mean, and least squares and other [statistical inference](https://en.wikipedia.org/wiki/Statistical_inference) methods that are optimal for normally distributed variables often become highly unreliable when applied to such data. In those cases, a more [heavy-tailed](https://en.wikipedia.org/wiki/Heavy-tailed) distribution should be assumed and the appropriate [robust statistical inference](https://en.wikipedia.org/wiki/Robust_statistics) methods applied.

The Gaussian distribution belongs to the family of [stable distributions](https://en.wikipedia.org/wiki/Stable_distribution), which are the attractors of sums of [independent, identically distributed](https://en.wikipedia.org/wiki/Independent,_identically_distributed) distributions whether or not the mean or variance is finite. Except for the Gaussian, which is a limiting case, all stable distributions have heavy tails and infinite variance. The Gaussian is one of the few distributions that are stable and that have probability density functions that can be expressed analytically, the others being the [Cauchy distribution](https://en.wikipedia.org/wiki/Cauchy_distribution) and the [LĂ©vy distribution](https://en.wikipedia.org/wiki/L%C3%A9vy_distribution).

### Symmetries and derivatives

The normal distribution with density $f(x)$ (mean $\mu$ and variance $\sigma^2 > 0$) has the following properties:

- It is symmetric around the point $x = \mu,$ which is at the same time the [mode](https://en.wikipedia.org/wiki/Mode_(statistics)), the [median](https://en.wikipedia.org/wiki/Median) and the [mean](https://en.wikipedia.org/wiki/Mean) of the distribution.[\[28\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Patel-28)
- It is [unimodal](https://en.wikipedia.org/wiki/Unimodal): its first [derivative](https://en.wikipedia.org/wiki/Derivative) is positive for $x < \mu,$ negative for $x > \mu,$ and zero only at $x = \mu.$
- The area bounded by the curve and the $x$-axis is unity (i.e. equal to one).
- Its first derivative is $f'(x) = -\frac{x-\mu}{\sigma^2} f(x).$
- Its second derivative is $f''(x) = \frac{(x-\mu)^2 - \sigma^2}{\sigma^4} f(x).$
- Its density has two [inflection points](https://en.wikipedia.org/wiki/Inflection_point) (where the second derivative of $f$ is zero and changes sign), located one standard deviation away from the mean, namely at $x = \mu - \sigma$ and $x = \mu + \sigma.$[\[28\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Patel-28)
- Its density is [log-concave](https://en.wikipedia.org/wiki/Logarithmically_concave_function).[\[28\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Patel-28)
- Its density is infinitely [differentiable](https://en.wikipedia.org/wiki/Differentiable), indeed [supersmooth](https://en.wikipedia.org/wiki/Supersmooth) of order 2.[\[29\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-29)

Furthermore, the density $\varphi$ of the standard normal distribution (i.e. $\mu = 0$ and $\sigma = 1$) also has the following properties:

- Its first derivative is $\varphi'(x) = -x\varphi(x).$
- Its second derivative is $\varphi''(x) = (x^2 - 1)\varphi(x).$
- More generally, its $n$th derivative is $\varphi^{(n)}(x) = (-1)^n \operatorname{He}_n(x) \varphi(x),$ where $\operatorname{He}_n(x)$ is the $n$th (probabilist) [Hermite polynomial](https://en.wikipedia.org/wiki/Hermite_polynomial).[\[30\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-30)
- The probability that a normally distributed variable $X$ with known $\mu$ and $\sigma^2$ is in a particular set can be calculated given that the fraction $Z = (X - \mu)/\sigma$ has a standard normal distribution.
### Moments

See also: [List of integrals of Gaussian functions](https://en.wikipedia.org/wiki/List_of_integrals_of_Gaussian_functions "List of integrals of Gaussian functions")

The plain and absolute [moments](https://en.wikipedia.org/wiki/Moment_\(mathematics\) "Moment (mathematics)") of a variable $X$ are the expected values of $X^{p}$ and $|X|^{p}$, respectively. If the expected value $\mu$ of $X$ is zero, these parameters are called *central moments*; otherwise, these parameters are called *non-central moments*. Usually we are interested only in moments with integer order $p$.

If $X$ has a normal distribution, the non-central moments exist and are finite for any $p$ whose real part is greater than −1. For any non-negative integer $p$, the plain central moments are:[\[31\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-31)

$$\operatorname{E}\left[(X-\mu)^{p}\right]=\begin{cases}0&\text{if }p\text{ is odd,}\\\sigma^{p}(p-1)!!&\text{if }p\text{ is even.}\end{cases}$$

Here $n!!$ denotes the [double factorial](https://en.wikipedia.org/wiki/Double_factorial "Double factorial"), that is, the product of all numbers from $n$ to 1 that have the same parity as $n$.

The central absolute moments coincide with plain moments for all even orders, but are nonzero for odd orders. For any non-negative integer $p$,

$$\operatorname{E}\left[|X-\mu|^{p}\right]=\sigma^{p}(p-1)!!\cdot\begin{cases}\sqrt{\frac{2}{\pi}}&\text{if }p\text{ is odd}\\1&\text{if }p\text{ is even}\end{cases}=\sigma^{p}\cdot\frac{2^{p/2}\,\Gamma\left(\frac{p+1}{2}\right)}{\sqrt{\pi}}.$$
The last formula is valid also for any non-integer $p>-1$. When the mean $\mu\neq 0$, the plain and absolute moments can be expressed in terms of [confluent hypergeometric functions](https://en.wikipedia.org/wiki/Confluent_hypergeometric_function "Confluent hypergeometric function") ${}_{1}F_{1}$ and $U$:[\[32\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-32)

$$\begin{aligned}\operatorname{E}\left[X^{p}\right]&=\sigma^{p}\cdot\left(-i\sqrt{2}\right)^{p}U\left(-\frac{p}{2},\frac{1}{2},-\frac{\mu^{2}}{2\sigma^{2}}\right),\\\operatorname{E}\left[|X|^{p}\right]&=\sigma^{p}\cdot 2^{p/2}\frac{\Gamma\left(\frac{1+p}{2}\right)}{\sqrt{\pi}}\,{}_{1}F_{1}\left(-\frac{p}{2},\frac{1}{2},-\frac{\mu^{2}}{2\sigma^{2}}\right).\end{aligned}$$

These expressions remain valid even when $p>-1$ is not an integer. See also [generalized Hermite polynomials](https://en.wikipedia.org/wiki/Hermite_polynomials#"Negative_variance" "Hermite polynomials").
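As a quick sanity check of the central absolute moment formula, here is a sketch of our own (assuming NumPy/SciPy; the values of $\mu$, $\sigma$ and the orders $p$ are arbitrary choices, not from the article):

```python
# Sketch: compare E|X - mu|^p computed by numerical integration with the
# closed form sigma^p * 2^(p/2) * Gamma((p+1)/2) / sqrt(pi),
# including a non-integer order p > -1.
import numpy as np
from scipy import integrate, special, stats

mu, sigma = 1.5, 2.0
pdf = stats.norm(mu, sigma).pdf

for p in [1, 2, 3, 4, 2.7]:
    numeric, _ = integrate.quad(lambda t: abs(t - mu) ** p * pdf(t),
                                -np.inf, np.inf)
    closed = sigma**p * 2 ** (p / 2) * special.gamma((p + 1) / 2) / np.sqrt(np.pi)
    assert np.isclose(numeric, closed, rtol=1e-6), p
print("absolute central moments match the closed form")
```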
For reference, the first few non-central moments are:

| Order | Non-central moment, $\operatorname{E}\left[X^{p}\right]$ |
|---|---|
| 1 | $\mu$ |
| 2 | $\mu^{2}+\sigma^{2}$ |
| 3 | $\mu^{3}+3\mu\sigma^{2}$ |
| 4 | $\mu^{4}+6\mu^{2}\sigma^{2}+3\sigma^{4}$ |

The expectation of $X$ conditioned on the event that $X$ lies in an interval $[a,b]$ is given by

$$\operatorname{E}\left[X\mid a<X<b\right]=\mu-\sigma^{2}\frac{f(b)-f(a)}{F(b)-F(a)}\,,$$

where $f$ and $F$ respectively are the density and the cumulative distribution function of $X$. For $b=\infty$ this is known as the [inverse Mills ratio](https://en.wikipedia.org/wiki/Inverse_Mills_ratio "Inverse Mills ratio"). Note that here the density $f$ of $X$ is used instead of the standard normal density as in the inverse Mills ratio, which is why $\sigma^{2}$ appears instead of $\sigma$.
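The conditional-mean formula is straightforward to confirm by simulation; the following sketch (our illustration, with arbitrary parameter values) compares it against a Monte Carlo estimate.

```python
# Sketch: closed-form E[X | a < X < b] versus a Monte Carlo estimate.
import numpy as np
from scipy import stats

mu, sigma, a, b = 0.5, 2.0, -1.0, 3.0
dist = stats.norm(mu, sigma)

closed = mu - sigma**2 * (dist.pdf(b) - dist.pdf(a)) / (dist.cdf(b) - dist.cdf(a))

rng = np.random.default_rng(0)
x = rng.normal(mu, sigma, size=2_000_000)
mc = x[(x > a) & (x < b)].mean()   # mean of the samples that land in (a, b)

print(f"closed form: {closed:.4f}   Monte Carlo: {mc:.4f}")
```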
### Fourier transform and characteristic function

The [Fourier transform](https://en.wikipedia.org/wiki/Fourier_transform "Fourier transform") of a normal density $f$ with mean $\mu$ and variance $\sigma^{2}$ is[\[33\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-33)

$$\hat{f}(t)=\int_{-\infty}^{\infty}f(x)e^{-itx}\,dx=e^{-i\mu t}e^{-\frac{1}{2}\sigma^{2}t^{2}}\,,$$

where $i$ is the [imaginary unit](https://en.wikipedia.org/wiki/Imaginary_unit "Imaginary unit"). If the mean $\mu=0$, the first factor is 1, and the Fourier transform is, apart from a constant factor, a normal density on the [frequency domain](https://en.wikipedia.org/wiki/Frequency_domain "Frequency domain"), with mean 0 and variance $1/\sigma^{2}$. In particular, the standard normal distribution $\varphi$ is an [eigenfunction](https://en.wikipedia.org/wiki/Fourier_transform#Eigenfunctions "Fourier transform") of the Fourier transform.
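A small numerical sketch of this transform (ours; parameter values are arbitrary, assuming NumPy/SciPy): integrate $f(x)e^{-itx}$ directly and compare with the stated closed form.

```python
# Sketch: integrate f(x) e^{-itx} (real and imaginary parts separately)
# and compare with e^{-i mu t} e^{-sigma^2 t^2 / 2}.
import numpy as np
from scipy import integrate, stats

mu, sigma, t = 0.7, 1.3, 0.9
pdf = stats.norm(mu, sigma).pdf

re, _ = integrate.quad(lambda x: pdf(x) * np.cos(t * x), -np.inf, np.inf)
im, _ = integrate.quad(lambda x: -pdf(x) * np.sin(t * x), -np.inf, np.inf)

numeric = re + 1j * im                                # hat f(t)
closed = np.exp(-1j * mu * t - 0.5 * sigma**2 * t**2)
assert np.isclose(numeric, closed)
print(numeric, "≈", closed)
```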
In probability theory, the Fourier transform of the probability distribution of a real-valued random variable $X$ is closely connected to the [characteristic function](https://en.wikipedia.org/wiki/Characteristic_function_\(probability_theory\) "Characteristic function (probability theory)") $\varphi_{X}(t)$ of that variable, which is defined as the [expected value](https://en.wikipedia.org/wiki/Expected_value "Expected value") of $e^{itX}$, as a function of the real variable $t$ (the [frequency](https://en.wikipedia.org/wiki/Frequency "Frequency") parameter of the Fourier transform). This definition can be analytically extended to a complex-valued variable $t$.[\[34\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-34) The relation between the two is

$$\varphi_{X}(t)=\hat{f}(-t)\,.$$

The real and imaginary parts of $\hat{f}(t)=\operatorname{E}[e^{-itX}]=e^{-i\mu t}e^{-\frac{1}{2}\sigma^{2}t^{2}}$ give

$$\operatorname{E}[\cos(tX)]=\cos(\mu t)e^{-\frac{1}{2}\sigma^{2}t^{2}}\quad\text{and}\quad\operatorname{E}[\sin(tX)]=\sin(\mu t)e^{-\frac{1}{2}\sigma^{2}t^{2}}.$$

Similarly,

$$\operatorname{E}[\cosh(tX)]=\cosh(\mu t)e^{\frac{1}{2}\sigma^{2}t^{2}}\quad\text{and}\quad\operatorname{E}[\sinh(tX)]=\sinh(\mu t)e^{\frac{1}{2}\sigma^{2}t^{2}}.$$

These formulas evaluated at $t=1$ give the expected values of these basic trigonometric and hyperbolic functions over a Gaussian random variable $X\sim N(\mu,\sigma^{2})$, which can also be seen as consequences of [Isserlis's theorem](https://en.wikipedia.org/wiki/Isserlis%27s_theorem "Isserlis's theorem").
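A Monte Carlo sketch of the real-part identity above (ours; $\mu$, $\sigma$, $t$ are arbitrary):

```python
# Sketch: E[cos(tX)] = cos(mu t) exp(-sigma^2 t^2 / 2) by simulation.
import numpy as np

mu, sigma, t = 1.0, 0.8, 1.7
rng = np.random.default_rng(1)
x = rng.normal(mu, sigma, size=4_000_000)

mc = np.cos(t * x).mean()
closed = np.cos(mu * t) * np.exp(-0.5 * sigma**2 * t**2)
print(f"Monte Carlo: {mc:.5f}   closed form: {closed:.5f}")
```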
### Moment- and cumulant-generating functions

The [moment generating function](https://en.wikipedia.org/wiki/Moment_generating_function "Moment generating function") of a real random variable $X$ is the expected value of $e^{tX}$, as a function of the real parameter $t$. For a normal distribution with density $f$, mean $\mu$ and variance $\sigma^{2}$, the moment generating function exists and is equal to

$$M(t)=\operatorname{E}\left[e^{tX}\right]=\hat{f}(it)=e^{\mu t}e^{\sigma^{2}t^{2}/2}\,.$$
For any $k$, the coefficient of $t^{k}/k!$ in the moment generating function (expressed as an [exponential power series](https://en.wikipedia.org/wiki/Generating_function#Exponential_generating_function_\(EGF\) "Generating function") in $t$) is the normal distribution's expected value $\operatorname{E}[X^{k}]$.

The [cumulant generating function](https://en.wikipedia.org/wiki/Cumulant_generating_function "Cumulant generating function") is the logarithm of the moment generating function, namely

$$g(t)=\ln M(t)=\mu t+\tfrac{1}{2}\sigma^{2}t^{2}\,.$$

The coefficients of this exponential power series define the cumulants, but because this is a quadratic polynomial in $t$, only the first two [cumulants](https://en.wikipedia.org/wiki/Cumulant "Cumulant") are nonzero, namely the mean $\mu$ and the variance $\sigma^{2}$. Some authors prefer to instead work with the [characteristic function](https://en.wikipedia.org/wiki/Characteristic_function_\(probability_theory\) "Characteristic function (probability theory)") $\operatorname{E}[e^{itX}]=e^{i\mu t-\sigma^{2}t^{2}/2}$ and $\ln\operatorname{E}[e^{itX}]=i\mu t-\tfrac{1}{2}\sigma^{2}t^{2}$.
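That only the first two cumulants survive can also be seen in simulation: the sample k-statistics (unbiased cumulant estimators) of a large normal sample should approach $\mu$, $\sigma^{2}$, 0 and 0. A sketch of ours, assuming SciPy's `scipy.stats.kstat` is available:

```python
# Sketch: the first four sample cumulants of a large normal sample should
# be close to (mu, sigma^2, 0, 0).
import numpy as np
from scipy import stats

mu, sigma = 2.0, 1.5
rng = np.random.default_rng(2)
x = rng.normal(mu, sigma, size=1_000_000)

for n in range(1, 5):
    print(f"k_{n} ≈ {stats.kstat(x, n): .4f}")
# expected: k_1 ≈ 2.0, k_2 ≈ 2.25, k_3 ≈ 0, k_4 ≈ 0
```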
### Stein operator and class

Within [Stein's method](https://en.wikipedia.org/wiki/Stein%27s_method "Stein's method") the Stein operator and class of a random variable $X\sim\mathcal{N}(\mu,\sigma^{2})$ are $\mathcal{A}f(x)=\sigma^{2}f'(x)-(x-\mu)f(x)$ and $\mathcal{F}$, the class of all absolutely continuous functions $f:\mathbb{R}\to\mathbb{R}$ such that $\operatorname{E}[|f'(X)|]<\infty$.
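The defining property of this operator is that $\operatorname{E}[\mathcal{A}f(X)]=0$ for every $f$ in the class. A quick Monte Carlo sketch (our illustration; $f(x)=\sin x$ is an arbitrary smooth test function, and the parameter values are assumptions):

```python
# Sketch: E[sigma^2 f'(X) - (X - mu) f(X)] should vanish for X ~ N(mu, sigma^2).
import numpy as np

mu, sigma = 0.3, 1.2
rng = np.random.default_rng(3)
x = rng.normal(mu, sigma, size=4_000_000)

stein = sigma**2 * np.cos(x) - (x - mu) * np.sin(x)  # A f with f = sin
print(f"E[A f(X)] ≈ {stein.mean():.5f}  (should be near 0)")
```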
### Zero-variance limit

In the [limit](https://en.wikipedia.org/wiki/Limit_\(mathematics\) "Limit (mathematics)") when $\sigma^{2}$ approaches zero, the probability density $f$ approaches zero everywhere except at $\mu$, where it approaches $\infty$, while its integral remains equal to 1. An extension of the normal distribution to the case with zero variance can be defined using the [Dirac delta measure](https://en.wikipedia.org/wiki/Dirac_measure "Dirac measure") $\delta_{\mu}$, although the resulting random variables are not [absolutely continuous](https://en.wikipedia.org/wiki/Absolutely_continuous_random_variable "Absolutely continuous random variable") and thus do not have [probability density functions](https://en.wikipedia.org/wiki/Probability_density_function "Probability density function"). The cumulative distribution function of such a random variable is then the [Heaviside step function](https://en.wikipedia.org/wiki/Heaviside_step_function "Heaviside step function") translated by the mean $\mu$, namely

$$F(x)=\begin{cases}0&\text{if }x<\mu\\1&\text{if }x\geq\mu.\end{cases}$$
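A small sketch of this limit (ours; values arbitrary): as $\sigma$ shrinks, the normal CDF evaluated just to the right of $\mu$ climbs toward 1, approaching the translated step function.

```python
# Sketch: the normal CDF approaches a Heaviside step at mu as sigma -> 0.
from scipy import stats

mu, x = 1.0, 1.01   # evaluate slightly to the right of the mean
for sigma in [1.0, 0.1, 0.01, 0.001]:
    print(f"sigma = {sigma:<6}  F({x}) = {stats.norm(mu, sigma).cdf(x):.6f}")
```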
### Maximum entropy

Of all probability distributions over the reals with a specified finite mean $\mu$ and finite variance $\sigma^{2}$, the normal distribution $N(\mu,\sigma^{2})$ is the one with [maximum entropy](https://en.wikipedia.org/wiki/Maximum_entropy_probability_distribution "Maximum entropy probability distribution").[\[24\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-FOOTNOTECoverThomas2006254-24) To see this, let $X$ be a [continuous random variable](https://en.wikipedia.org/wiki/Continuous_random_variable "Continuous random variable") with [probability density](https://en.wikipedia.org/wiki/Probability_density "Probability density") $f(x)$. The entropy of $X$ is defined as[\[35\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-35)[\[36\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-36)[\[37\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-37)

$$H(X)=-\int_{-\infty}^{\infty}f(x)\ln f(x)\,dx\,,$$

where $f(x)\log f(x)$ is understood to be zero whenever $f(x)=0$. This functional can be maximized, subject to the constraints that the distribution is properly normalized and has a specified mean and variance, by using [variational calculus](https://en.wikipedia.org/wiki/Variational_calculus "Variational calculus"). A function with three [Lagrange multipliers](https://en.wikipedia.org/wiki/Lagrange_multipliers "Lagrange multipliers") is defined:

$$L=-\int_{-\infty}^{\infty}f(x)\ln f(x)\,dx-\lambda_{0}\left(1-\int_{-\infty}^{\infty}f(x)\,dx\right)-\lambda_{1}\left(\mu-\int_{-\infty}^{\infty}f(x)x\,dx\right)-\lambda_{2}\left(\sigma^{2}-\int_{-\infty}^{\infty}f(x)(x-\mu)^{2}\,dx\right)\,.$$

At maximum entropy, a small variation $\delta f(x)$ about $f(x)$ will produce a variation $\delta L$ about $L$ which is equal to 0:

$$0=\delta L=\int_{-\infty}^{\infty}\delta f(x)\left(-\ln f(x)-1+\lambda_{0}+\lambda_{1}x+\lambda_{2}(x-\mu)^{2}\right)\,dx\,.$$

Since this must hold for any small $\delta f(x)$, the factor multiplying $\delta f(x)$ must be zero, and solving for $f(x)$ yields:

$$f(x)=\exp\left(-1+\lambda_{0}+\lambda_{1}x+\lambda_{2}(x-\mu)^{2}\right)\,.$$
The Lagrange constraints that $f(x)$ is properly normalized and has the specified mean and variance are satisfied if and only if $\lambda_{0}$, $\lambda_{1}$, and $\lambda_{2}$ are chosen so that

$$f(x)=\frac{1}{\sqrt{2\pi\sigma^{2}}}e^{-\frac{(x-\mu)^{2}}{2\sigma^{2}}}\,.$$

The entropy of a normal distribution $X\sim N(\mu,\sigma^{2})$ is equal to

$$H(X)=\tfrac{1}{2}(1+\ln 2\sigma^{2}\pi)\,,$$

which is independent of the mean $\mu$.
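The closed form matches SciPy's differential entropy, and any other distribution with the same variance should come out lower. A sketch of ours (arbitrary $\sigma$), using a variance-matched uniform law for comparison:

```python
# Sketch: H(X) = (1/2)(1 + ln(2 pi sigma^2)) for X ~ N(mu, sigma^2), and a
# uniform distribution with the same variance has strictly smaller entropy.
import numpy as np
from scipy import stats

sigma = 1.7
closed = 0.5 * (1 + np.log(2 * np.pi * sigma**2))
print(closed, stats.norm(0, sigma).entropy())   # the two agree

a = sigma * np.sqrt(3)                    # uniform on [-a, a] has variance sigma^2
print(stats.uniform(-a, 2 * a).entropy()) # smaller, as maximum entropy predicts
```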
### Other properties

1. If the characteristic function $\phi_{X}$ of some random variable $X$ is of the form $\phi_{X}(t)=\exp Q(t)$ in a neighborhood of zero, where $Q(t)$ is a [polynomial](https://en.wikipedia.org/wiki/Polynomial "Polynomial"), then the **Marcinkiewicz theorem** (named after [JĂłzef Marcinkiewicz](https://en.wikipedia.org/wiki/J%C3%B3zef_Marcinkiewicz "JĂłzef Marcinkiewicz")) asserts that $Q$ can be at most a quadratic polynomial, and therefore $X$ is a normal random variable.[\[38\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Bryc_1995_35-38) The consequence of this result is that the normal distribution is the only distribution with a finite number (two) of non-zero [cumulants](https://en.wikipedia.org/wiki/Cumulant "Cumulant").
2. If $X$ and $Y$ are [jointly normal](https://en.wikipedia.org/wiki/Jointly_normal "Jointly normal") and [uncorrelated](https://en.wikipedia.org/wiki/Uncorrelated "Uncorrelated"), then they are [independent](https://en.wikipedia.org/wiki/Independence_\(probability_theory\) "Independence (probability theory)"). The requirement that $X$ and $Y$ should be *jointly* normal is essential; without it the property does not hold.[\[39\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-39)[\[40\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-40)[\[proof\]](https://en.wikipedia.org/wiki/Normally_distributed_and_uncorrelated_does_not_imply_independent "Normally distributed and uncorrelated does not imply independent") For non-normal random variables, uncorrelatedness does not imply independence.
3. The [Kullback–Leibler divergence](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence "Kullback–Leibler divergence") of one normal distribution $X_{1}\sim N(\mu_{1},\sigma_{1}^{2})$ from another $X_{2}\sim N(\mu_{2},\sigma_{2}^{2})$ is given by $D_{\mathrm{KL}}(X_{1}\parallel X_{2})=\frac{(\mu_{1}-\mu_{2})^{2}}{2\sigma_{2}^{2}}+\frac{1}{2}\left(\frac{\sigma_{1}^{2}}{\sigma_{2}^{2}}-1-\ln\frac{\sigma_{1}^{2}}{\sigma_{2}^{2}}\right)$ (verified numerically in the sketch after this list).[\[41\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-41) The [Hellinger distance](https://en.wikipedia.org/wiki/Hellinger_distance "Hellinger distance") between the same distributions is equal to $H^{2}(X_{1},X_{2})=1-\sqrt{\frac{2\sigma_{1}\sigma_{2}}{\sigma_{1}^{2}+\sigma_{2}^{2}}}\exp\left(-\frac{1}{4}\frac{(\mu_{1}-\mu_{2})^{2}}{\sigma_{1}^{2}+\sigma_{2}^{2}}\right)$.
4. The [Fisher information matrix](https://en.wikipedia.org/wiki/Fisher_information_matrix "Fisher information matrix") for a normal distribution with respect to $\mu$ and $\sigma^{2}$ is diagonal and takes the form $\mathcal{I}(\mu,\sigma^{2})=\begin{pmatrix}\frac{1}{\sigma^{2}}&0\\0&\frac{1}{2\sigma^{4}}\end{pmatrix}$.
5. The [conjugate prior](https://en.wikipedia.org/wiki/Conjugate_prior "Conjugate prior") of the mean of a normal distribution is another normal distribution.[\[42\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-42) Specifically, if $x_{1},\ldots,x_{n}$ are iid $\sim N(\mu,\sigma^{2})$ and the prior is $\mu\sim N(\mu_{0},\sigma_{0}^{2})$, then the posterior distribution for the estimator of $\mu$ will be $\mu\mid x_{1},\ldots,x_{n}\sim\mathcal{N}\left(\frac{\frac{\sigma^{2}}{n}\mu_{0}+\sigma_{0}^{2}\bar{x}}{\frac{\sigma^{2}}{n}+\sigma_{0}^{2}},\left(\frac{n}{\sigma^{2}}+\frac{1}{\sigma_{0}^{2}}\right)^{-1}\right)$.
6. The family of normal distributions not only forms an [exponential family](https://en.wikipedia.org/wiki/Exponential_family "Exponential family") (EF), but in fact forms a [natural exponential family](https://en.wikipedia.org/wiki/Natural_exponential_family "Natural exponential family") (NEF) with quadratic [variance function](https://en.wikipedia.org/wiki/Variance_function "Variance function") ([NEF-QVF](https://en.wikipedia.org/wiki/NEF-QVF "NEF-QVF")). Many properties of normal distributions generalize to properties of NEF-QVF distributions, NEF distributions, or EF distributions generally. NEF-QVF distributions comprise six families, including the Poisson, gamma, binomial, and negative binomial distributions, while many of the common families studied in probability and statistics are NEF or EF.
7. In [information geometry](https://en.wikipedia.org/wiki/Information_geometry "Information geometry"), the family of normal distributions forms a [statistical manifold](https://en.wikipedia.org/wiki/Statistical_manifold "Statistical manifold") with [constant curvature](https://en.wikipedia.org/wiki/Constant_curvature "Constant curvature") $-1$. The same family is [flat](https://en.wikipedia.org/wiki/Flat_manifold "Flat manifold") with respect to the (±1)-connections $\nabla^{(e)}$ and $\nabla^{(m)}$.[\[43\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-43)
8. If $X_{1},\dots,X_{n}$ are distributed according to $N(0,\sigma^{2})$, then $\operatorname{E}[\max_{i}X_{i}]\leq\sigma\sqrt{2\ln n}$. Note that there is no assumption of independence.[\[44\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-44)
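The following sketch (ours, assuming NumPy/SciPy and arbitrary parameters) checks the KL formula from item 3 by integrating $p\ln(p/q)$ directly.

```python
# Sketch: numerical KL divergence between two normals versus the closed form.
import numpy as np
from scipy import integrate, stats

mu1, s1, mu2, s2 = 0.0, 1.0, 1.0, 2.0
p, q = stats.norm(mu1, s1), stats.norm(mu2, s2)

numeric, _ = integrate.quad(lambda x: p.pdf(x) * np.log(p.pdf(x) / q.pdf(x)),
                            -np.inf, np.inf)
closed = ((mu1 - mu2) ** 2 / (2 * s2**2)
          + 0.5 * (s1**2 / s2**2 - 1 - np.log(s1**2 / s2**2)))
assert np.isclose(numeric, closed)
print(numeric, "≈", closed)
```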
## Related distributions

### Central limit theorem

[![](https://upload.wikimedia.org/wikipedia/commons/0/06/De_moivre-laplace.gif)](https://en.wikipedia.org/wiki/File:De_moivre-laplace.gif)
As the number of discrete events increases, the function begins to resemble a normal distribution.

[![](https://upload.wikimedia.org/wikipedia/commons/thumb/8/8c/Dice_sum_central_limit_theorem.svg/250px-Dice_sum_central_limit_theorem.svg.png)](https://en.wikipedia.org/wiki/File:Dice_sum_central_limit_theorem.svg)
Comparison of probability density functions $p(k)$ for the sum of $n$ fair 6-sided dice, showing their convergence to a normal distribution with increasing $n$, in accordance with the central limit theorem. In the bottom-right graph, smoothed profiles of the previous graphs are rescaled, superimposed and compared with a normal distribution (black curve).

Main article: [Central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem "Central limit theorem")

The central limit theorem states that under certain (fairly common) conditions, the sum of many random variables will have an approximately normal distribution. More specifically, suppose $X_{1},\ldots,X_{n}$ are [independent and identically distributed](https://en.wikipedia.org/wiki/Independent_and_identically_distributed "Independent and identically distributed") random variables with the same arbitrary distribution, zero mean, and variance $\sigma^{2}$, and $Z$ is their mean scaled by $\sqrt{n}$:

$$Z=\sqrt{n}\left(\frac{1}{n}\sum_{i=1}^{n}X_{i}\right)$$

Then, as $n$ increases, the probability distribution of $Z$ will tend to the normal distribution with zero mean and variance $\sigma^{2}$. The theorem can be extended to variables $(X_{i})$ that are not independent and/or not identically distributed if certain constraints are placed on the degree of dependence and the moments of the distributions.

Many [test statistics](https://en.wikipedia.org/wiki/Test_statistic "Test statistic"), [scores](https://en.wikipedia.org/wiki/Score_\(statistics\) "Score (statistics)"), and [estimators](https://en.wikipedia.org/wiki/Estimator "Estimator") encountered in practice contain sums of certain random variables in them, and even more estimators can be represented as sums of random variables through the use of [influence functions](https://en.wikipedia.org/wiki/Influence_function_\(statistics\) "Influence function (statistics)"). The central limit theorem implies that those statistical parameters will have asymptotically normal distributions.
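A simulation sketch of this construction (ours; centered exponential variables and arbitrary sample sizes): the scaled means should be consistent with $N(0,\sigma^{2})$.

```python
# Sketch: Z = sqrt(n) * mean(X_i) for iid centered exponentials (zero mean,
# unit variance) should be approximately N(0, 1) for large n.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n, reps = 1000, 5000
x = rng.exponential(1.0, size=(reps, n)) - 1.0   # zero mean, variance 1
z = np.sqrt(n) * x.mean(axis=1)

print("mean, var:", z.mean(), z.var())           # ≈ 0 and ≈ 1
print(stats.kstest(z, "norm"))                   # large p-value expected
```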
The central limit theorem also implies that certain distributions can be approximated by the normal distribution, for example:

- The [binomial distribution](https://en.wikipedia.org/wiki/Binomial_distribution "Binomial distribution") $B(n,p)$ is [approximately normal](https://en.wikipedia.org/wiki/De_Moivre%E2%80%93Laplace_theorem "De Moivre–Laplace theorem") with mean $np$ and variance $np(1-p)$ for large $n$ and for $p$ not too close to 0 or 1 (see the sketch after this list).
- The [Poisson distribution](https://en.wikipedia.org/wiki/Poisson_distribution "Poisson distribution") with parameter $\lambda$ is approximately normal with mean $\lambda$ and variance $\lambda$, for large values of $\lambda$.[\[45\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-45)
- The [chi-squared distribution](https://en.wikipedia.org/wiki/Chi-squared_distribution "Chi-squared distribution") $\chi^{2}(k)$ is approximately normal with mean $k$ and variance $2k$, for large $k$.
- The [Student's t-distribution](https://en.wikipedia.org/wiki/Student%27s_t-distribution "Student's t-distribution") $t(\nu)$ is approximately normal with mean 0 and variance 1 when $\nu$ is large.
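A sketch of the first approximation (ours; $n$, $p$, $k$ are arbitrary values), including the usual continuity correction:

```python
# Sketch: de Moivre–Laplace approximation of a binomial CDF by
# N(np, np(1-p)), with a +0.5 continuity correction.
from scipy import stats

n, p, k = 400, 0.3, 130
exact = stats.binom(n, p).cdf(k)
approx = stats.norm(n * p, (n * p * (1 - p)) ** 0.5).cdf(k + 0.5)
print(f"P(X <= {k}): exact {exact:.4f}, normal approximation {approx:.4f}")
```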
Whether these approximations are sufficiently accurate depends on the purpose for which they are needed and on the rate of convergence to the normal distribution. It is typically the case that such approximations are less accurate in the tails of the distribution. A general upper bound for the approximation error in the central limit theorem is given by the [Berry–Esseen theorem](https://en.wikipedia.org/wiki/Berry%E2%80%93Esseen_theorem "Berry–Esseen theorem"); improvements of the approximation are given by the [Edgeworth expansions](https://en.wikipedia.org/wiki/Edgeworth_expansion "Edgeworth expansion"). This theorem can also be used to justify modeling the sum of many uniform noise sources as [Gaussian noise](https://en.wikipedia.org/wiki/Gaussian_noise "Gaussian noise"). See [AWGN](https://en.wikipedia.org/wiki/AWGN "AWGN").

### Operations and functions of normal variables

#### Operations on a single normal variable

If $X$ is distributed normally with mean $\mu$ and variance $\sigma^{2}$, then:

- $aX+b$, for any real numbers $a$ and $b$, is also normally distributed, with mean $a\mu+b$ and variance $a^{2}\sigma^{2}$. That is, the family of normal distributions is closed under [linear transformations](https://en.wikipedia.org/wiki/Linear_transformations "Linear transformations").
- The exponential of $X$ is distributed [log-normally](https://en.wikipedia.org/wiki/Log-normal_distribution "Log-normal distribution"): $e^{X}\sim\ln(N(\mu,\sigma^{2}))$.
- The standard [sigmoid](https://en.wikipedia.org/wiki/Logistic_function "Logistic function") of $X$ is [logit-normally distributed](https://en.wikipedia.org/wiki/Logit-normal_distribution "Logit-normal distribution"): $\sigma(X)\sim P(\mathcal{N}(\mu,\sigma^{2}))$.
- The absolute value of $X$ has a [folded normal distribution](https://en.wikipedia.org/wiki/Folded_normal_distribution "Folded normal distribution"): $|X|\sim N_{f}(\mu,\sigma^{2})$. If $\mu=0$ this is known as the [half-normal distribution](https://en.wikipedia.org/wiki/Half-normal_distribution "Half-normal distribution").
- The absolute value of normalized residuals, $|X-\mu|/\sigma$, has a [chi distribution](https://en.wikipedia.org/wiki/Chi_distribution "Chi distribution") with one degree of freedom: $|X-\mu|/\sigma\sim\chi_{1}$.
- The square of $X/\sigma$ has the [noncentral chi-squared distribution](https://en.wikipedia.org/wiki/Noncentral_chi-squared_distribution "Noncentral chi-squared distribution") with one degree of freedom: $X^{2}/\sigma^{2}\sim\chi_{1}^{2}(\mu^{2}/\sigma^{2})$ (checked numerically in the sketch after this list). If $\mu=0$, the distribution is called simply [chi-squared](https://en.wikipedia.org/wiki/Chi-squared_distribution "Chi-squared distribution").
{\\displaystyle \\ln p(x)=-{\\frac {1}{2}}\\left({\\frac {x-\\mu }{\\sigma }}\\right)^{2}-\\ln \\left(\\sigma {\\sqrt {2\\pi }}\\right).} ![{\\displaystyle \\ln p(x)=-{\\frac {1}{2}}\\left({\\frac {x-\\mu }{\\sigma }}\\right)^{2}-\\ln \\left(\\sigma {\\sqrt {2\\pi }}\\right).}](https://wikimedia.org/api/rest_v1/media/math/render/svg/de1b937be2248646d42f58a296f05ec46eab9ded) Since this is a scaled and shifted square of a standard normal variable, it is distributed as a scaled and shifted [chi-squared](https://en.wikipedia.org/wiki/Chi-squared_distribution "Chi-squared distribution") variable. - The distribution of the variable ⁠ X {\\displaystyle X} ![{\\displaystyle X}](https://wikimedia.org/api/rest_v1/media/math/render/svg/68baa052181f707c662844a465bfeeb135e82bab) ⁠ restricted to an interval \[ a , b \] {\\textstyle \[a,b\]} ![{\\textstyle \[a,b\]}](https://wikimedia.org/api/rest_v1/media/math/render/svg/2c780cbaafb5b1d4a6912aa65d2b0b1982097108) is called the [truncated normal distribution](https://en.wikipedia.org/wiki/Truncated_normal_distribution "Truncated normal distribution"). - ( X − ÎŒ ) − 2 {\\textstyle (X-\\mu )^{-2}} ![{\\textstyle (X-\\mu )^{-2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/ba437038caf7d8aa28caf3bcc915dbca10aa7387) has a [LĂ©vy distribution](https://en.wikipedia.org/wiki/L%C3%A9vy_distribution "LĂ©vy distribution") with location 0 and scale σ − 2 {\\textstyle \\sigma ^{-2}} ![{\\textstyle \\sigma ^{-2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/4ad6c6a48f23684f563c6d8467744a80339c6bdc) . ##### Operations on two independent normal variables \[[edit](https://en.wikipedia.org/w/index.php?title=Normal_distribution&action=edit&section=25 "Edit section: Operations on two independent normal variables")\] - If X 1 {\\textstyle X\_{1}} ![{\\textstyle X\_{1}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/8988aef95eb5600d6730ece0631d654408f194d6) and X 2 {\\textstyle X\_{2}} ![{\\textstyle X\_{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/68f7c0d27a42ad32b39db3e8dc89c52aed9a09ae) are two [independent](https://en.wikipedia.org/wiki/Independence_\(probability_theory\) "Independence (probability theory)") normal random variables, with means ÎŒ 1 {\\textstyle \\mu \_{1}} ![{\\textstyle \\mu \_{1}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/0672d6a80563b72164d70c7c6a0f39f093207de3) , ÎŒ 2 {\\textstyle \\mu \_{2}} ![{\\textstyle \\mu \_{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/797d1f85ae11f23755ed4bf3d1a1c574911cff40) and variances σ 1 2 {\\textstyle \\sigma \_{1}^{2}} ![{\\textstyle \\sigma \_{1}^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/cba7a66cdd970ca6ecd6fbb92bc4f577a31f71a2) , σ 2 2 {\\textstyle \\sigma \_{2}^{2}} ![{\\textstyle \\sigma \_{2}^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/259e8a5c676195775647fe765266c0a74eeed92b) , then their sum X 1 \+ X 2 {\\textstyle X\_{1}+X\_{2}} ![{\\textstyle X\_{1}+X\_{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/a09bb18cee0b5940e34ab7c35a8f582cb3a9ce5f) will also be normally distributed,[\[proof\]](https://en.wikipedia.org/wiki/Sum_of_normally_distributed_random_variables "Sum of normally distributed random variables") with mean ÎŒ 1 \+ ÎŒ 2 {\\textstyle \\mu \_{1}+\\mu \_{2}} ![{\\textstyle \\mu \_{1}+\\mu \_{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/77ad5dae54aa9dd27f842a7d6dd199386e8b0a0d) and variance σ 1 2 \+ σ 2 2 {\\textstyle \\sigma \_{1}^{2}+\\sigma \_{2}^{2}} 
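A few of these identities are easy to sanity-check by simulation. Below is a minimal sketch (the parameter values, sample size, and seed are arbitrary choices) comparing sample moments of transformed draws against the closed-form values stated above:

```python
import numpy as np

rng = np.random.default_rng(1)           # arbitrary seed
mu, sigma, a, b = 1.5, 2.0, 3.0, -1.0    # arbitrary parameters
x = rng.normal(mu, sigma, size=1_000_000)

# aX + b ~ N(a*mu + b, a^2 * sigma^2)
y = a * x + b
print(y.mean(), a * mu + b)              # both ~ 3.5
print(y.var(), a**2 * sigma**2)          # both ~ 36

# exp(X) is log-normal, so taking logs recovers N(mu, sigma^2)
print(np.log(np.exp(x)).std(), sigma)

# X^2 / sigma^2 ~ noncentral chi-squared with 1 degree of freedom and
# noncentrality mu^2/sigma^2; its mean is 1 + mu^2/sigma^2
print((x**2 / sigma**2).mean(), 1 + mu**2 / sigma**2)
```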
##### Operations on two independent normal variables

- If $X_1$ and $X_2$ are two [independent](https://en.wikipedia.org/wiki/Independence_\(probability_theory\) "Independence (probability theory)") normal random variables, with means $\mu_1$, $\mu_2$ and variances $\sigma_1^2$, $\sigma_2^2$, then their sum $X_1 + X_2$ will also be normally distributed,[\[proof\]](https://en.wikipedia.org/wiki/Sum_of_normally_distributed_random_variables "Sum of normally distributed random variables") with mean $\mu_1 + \mu_2$ and variance $\sigma_1^2 + \sigma_2^2$ (a simulation check follows this list).
- In particular, if $X$ and $Y$ are independent normal deviates with zero mean and variance $\sigma^2$, then $X + Y$ and $X - Y$ are also independent and normally distributed, with zero mean and variance $2\sigma^2$. This is a special case of the [polarization identity](https://en.wikipedia.org/wiki/Polarization_identity "Polarization identity").[\[46\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-46)
- If $X_1$, $X_2$ are two independent normal deviates with mean $\mu$ and variance $\sigma^2$, and $a$, $b$ are arbitrary real numbers, then the variable $$X_3 = \frac{aX_1 + bX_2 - (a + b)\mu}{\sqrt{a^2 + b^2}} + \mu$$ is also normally distributed with mean $\mu$ and variance $\sigma^2$. It follows that the normal distribution is [stable](https://en.wikipedia.org/wiki/Stable_distribution "Stable distribution") (with exponent $\alpha = 2$).
- If $X_k \sim \mathcal{N}(m_k, \sigma_k^2)$, $k \in \{0, 1\}$ are normal distributions, then their normalized [geometric mean](https://en.wikipedia.org/wiki/Geometric_mean "Geometric mean") $$\frac{1}{\int_{\mathbb{R}^n} X_0^{\alpha}(x)\, X_1^{1-\alpha}(x)\,\mathrm{d}x}\, X_0^{\alpha} X_1^{1-\alpha}$$ is a normal distribution $\mathcal{N}(m_\alpha, \sigma_\alpha^2)$ with $$m_\alpha = \frac{\alpha m_0 \sigma_1^2 + (1 - \alpha) m_1 \sigma_0^2}{\alpha \sigma_1^2 + (1 - \alpha) \sigma_0^2} \qquad \text{and} \qquad \sigma_\alpha^2 = \frac{\sigma_0^2 \sigma_1^2}{\alpha \sigma_1^2 + (1 - \alpha) \sigma_0^2}.$$
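The first two items lend themselves to a quick simulation (a sketch; parameters, sample size, and seed are arbitrary). For jointly normal variables, zero covariance is equivalent to independence, so a near-zero sample covariance between $X + Y$ and $X - Y$ is the expected signature of the polarization identity:

```python
import numpy as np

rng = np.random.default_rng(2)
mu1, s1, mu2, s2 = 1.0, 2.0, -3.0, 0.5
x1 = rng.normal(mu1, s1, size=1_000_000)
x2 = rng.normal(mu2, s2, size=1_000_000)

# X1 + X2 ~ N(mu1 + mu2, s1^2 + s2^2)
print((x1 + x2).mean(), mu1 + mu2)       # ~ -2.0
print((x1 + x2).var(), s1**2 + s2**2)    # ~ 4.25

# Polarization: for zero-mean X, Y with equal variance, X + Y and
# X - Y are independent (zero covariance suffices here because
# the pair is jointly normal)
x = rng.normal(0.0, 1.0, size=1_000_000)
y = rng.normal(0.0, 1.0, size=1_000_000)
print(np.cov(x + y, x - y)[0, 1])        # ~ 0
```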
##### Operations on two independent standard normal variables

If $X_1$ and $X_2$ are two independent standard normal random variables with mean 0 and variance 1, then (see the distributional checks after this list)

- Their sum and difference are distributed normally with mean zero and variance two: $X_1 \pm X_2 \sim \mathcal{N}(0, 2)$.
- Their product $Z = X_1 X_2$ follows the [product distribution](https://en.wikipedia.org/wiki/Product_distribution#Independent_central-normal_distributions "Product distribution")[\[47\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-47) with density function $f_Z(z) = \pi^{-1} K_0(|z|)$, where $K_0$ is the [modified Bessel function of the second kind](https://en.wikipedia.org/wiki/Macdonald_function "Macdonald function"). This distribution is symmetric around zero, unbounded at $z = 0$, and has the [characteristic function](https://en.wikipedia.org/wiki/Characteristic_function_\(probability_theory\) "Characteristic function (probability theory)") $\phi_Z(t) = (1 + t^2)^{-1/2}$.
- Their ratio follows the standard [Cauchy distribution](https://en.wikipedia.org/wiki/Cauchy_distribution "Cauchy distribution"): $X_1/X_2 \sim \operatorname{Cauchy}(0, 1)$.
- Their Euclidean norm $\sqrt{X_1^2 + X_2^2}$ has the [Rayleigh distribution](https://en.wikipedia.org/wiki/Rayleigh_distribution "Rayleigh distribution").
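These claims can be spot-checked with a Kolmogorov–Smirnov test (a sketch assuming `scipy` is available; sample size and seed are arbitrary). Large p-values are consistent with the stated distributions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x1 = rng.standard_normal(100_000)
x2 = rng.standard_normal(100_000)

# Sum ~ N(0, 2), i.e. normal with scale sqrt(2)
print(stats.kstest(x1 + x2, "norm", args=(0, np.sqrt(2))))

# Ratio of two standard normals ~ standard Cauchy
print(stats.kstest(x1 / x2, "cauchy"))

# Euclidean norm ~ Rayleigh with unit scale
print(stats.kstest(np.hypot(x1, x2), "rayleigh"))
```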
#### Operations on multiple independent normal variables

- Any [linear combination](https://en.wikipedia.org/wiki/Linear_combination "Linear combination") of independent normal deviates is a normal deviate.
- If $X_1, X_2, \ldots, X_n$ are independent standard normal random variables, then the sum of their squares has the [chi-squared distribution](https://en.wikipedia.org/wiki/Chi-squared_distribution "Chi-squared distribution") with $n$ degrees of freedom: $$X_1^2 + \cdots + X_n^2 \sim \chi_n^2.$$
- If $X_1, X_2, \ldots, X_n$ are independent normally distributed random variables with means $\mu$ and variances $\sigma^2$, then their [sample mean](https://en.wikipedia.org/wiki/Sample_mean "Sample mean") is independent from the sample [standard deviation](https://en.wikipedia.org/wiki/Standard_deviation "Standard deviation"),[\[48\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-48) which can be demonstrated using [Basu's theorem](https://en.wikipedia.org/wiki/Basu%27s_theorem "Basu's theorem") or [Cochran's theorem](https://en.wikipedia.org/wiki/Cochran%27s_theorem "Cochran's theorem").[\[49\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-49) The ratio of these two quantities will have the [Student's t-distribution](https://en.wikipedia.org/wiki/Student%27s_t-distribution "Student's t-distribution") with $n - 1$ degrees of freedom (simulated after this list): $$t = \frac{\overline{X} - \mu}{S/\sqrt{n}} = \frac{\frac{1}{n}(X_1 + \cdots + X_n) - \mu}{\sqrt{\frac{1}{n(n-1)}\left[(X_1 - \overline{X})^2 + \cdots + (X_n - \overline{X})^2\right]}} \sim t_{n-1}.$$
- If $X_1, X_2, \ldots, X_n$, $Y_1, Y_2, \ldots, Y_m$ are independent standard normal random variables, then the ratio of their normalized sums of squares will have the [F-distribution](https://en.wikipedia.org/wiki/F-distribution "F-distribution") with (*n*, *m*) degrees of freedom:[\[50\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-50) $$F = \frac{\left(X_1^2 + X_2^2 + \cdots + X_n^2\right)/n}{\left(Y_1^2 + Y_2^2 + \cdots + Y_m^2\right)/m} \sim F_{n,m}.$$
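The $t$-statistic claim, for instance, can be simulated directly (a sketch; $n$, the replication count, and the seed are arbitrary): build $t$ from many independent samples and compare the result against $t_{n-1}$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n, reps, mu, sigma = 8, 200_000, 5.0, 3.0
x = rng.normal(mu, sigma, size=(reps, n))

xbar = x.mean(axis=1)
s = x.std(axis=1, ddof=1)                   # Bessel-corrected sample std
t = (xbar - mu) / (s / np.sqrt(n))

print(stats.kstest(t, "t", args=(n - 1,)))  # consistent with t_{n-1}
```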
#### Operations on multiple correlated normal variables

- A [quadratic form](https://en.wikipedia.org/wiki/Quadratic_form "Quadratic form") of a normal vector, i.e. a quadratic function $q = \sum x_i^2 + \sum x_j + c$ of multiple independent or correlated normal variables, is a [generalized chi-square](https://en.wikipedia.org/wiki/Generalized_chi-square_distribution "Generalized chi-square distribution") variable.

### Operations on the density function

The [split normal distribution](https://en.wikipedia.org/wiki/Split_normal_distribution "Split normal distribution") is most directly defined in terms of joining scaled sections of the density functions of different normal distributions and rescaling the density to integrate to one. The [truncated normal distribution](https://en.wikipedia.org/wiki/Truncated_normal_distribution "Truncated normal distribution") results from rescaling a section of a single density function.

### Infinite divisibility and CramĂ©r's theorem

For any positive integer $n$, any normal distribution with mean $\mu$ and variance $\sigma^2$ is the distribution of the sum of $n$ independent normal deviates, each with mean $\frac{\mu}{n}$ and variance $\frac{\sigma^2}{n}$. This property is called [infinite divisibility](https://en.wikipedia.org/wiki/Infinite_divisibility_\(probability\) "Infinite divisibility (probability)").[\[51\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-51)

Conversely, if $X_1$ and $X_2$ are independent random variables and their sum $X_1 + X_2$ has a normal distribution, then both $X_1$ and $X_2$ must be normal deviates.[\[52\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-52) This result is known as [CramĂ©r's decomposition theorem](https://en.wikipedia.org/wiki/Cram%C3%A9r%27s_decomposition_theorem "CramĂ©r's decomposition theorem"), and is equivalent to saying that the [convolution](https://en.wikipedia.org/wiki/Convolution "Convolution") of two distributions is normal if and only if both are normal. CramĂ©r's theorem implies that a linear combination of independent non-Gaussian variables will never have an exactly normal distribution, although it may approach it arbitrarily closely.[\[38\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Bryc_1995_35-38)
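Infinite divisibility is easy to illustrate numerically (a minimal sketch; $\mu$, $\sigma$, $n$, and the seed are arbitrary): summing $n$ independent $\mathcal{N}(\mu/n, \sigma^2/n)$ draws should be statistically indistinguishable from drawing $\mathcal{N}(\mu, \sigma^2)$ directly:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
mu, sigma, n, reps = 2.0, 1.5, 10, 100_000

# Sum of n independent N(mu/n, sigma^2/n) deviates
# (variance sigma^2/n means standard deviation sigma/sqrt(n))
parts = rng.normal(mu / n, sigma / np.sqrt(n), size=(reps, n))
total = parts.sum(axis=1)

print(stats.kstest(total, "norm", args=(mu, sigma)))  # large p-value expected
```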
### The Kac–Bernstein theorem

The [Kac–Bernstein theorem](https://en.wikipedia.org/wiki/Kac%E2%80%93Bernstein_theorem "Kac–Bernstein theorem") states that if $X$ and $Y$ are independent and $X + Y$ and $X - Y$ are also independent, then both $X$ and $Y$ must necessarily have normal distributions.[\[53\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Lukacs-53)[\[54\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-54) More generally, if $X_1, \ldots, X_n$ are independent random variables, then two distinct linear combinations $\sum a_k X_k$ and $\sum b_k X_k$ will be independent if and only if all $X_k$ are normal and $\sum a_k b_k \sigma_k^2 = 0$, where $\sigma_k^2$ denotes the variance of $X_k$.[\[53\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Lukacs-53)

### Extensions

The notion of normal distribution, being one of the most important distributions in probability theory, has been extended far beyond the standard framework of the univariate (that is, one-dimensional) case (Case 1). All these extensions are also called *normal* or *Gaussian* laws, so a certain ambiguity in names exists.

- The [multivariate normal distribution](https://en.wikipedia.org/wiki/Multivariate_normal_distribution "Multivariate normal distribution") describes the Gaussian law in the $k$-dimensional [Euclidean space](https://en.wikipedia.org/wiki/Euclidean_space "Euclidean space"). A vector $X \in \mathbb{R}^k$ is multivariate-normally distributed if any linear combination of its components $\sum_{j=1}^k a_j X_j$ has a (univariate) normal distribution. The variance of $X$ is a $k \times k$ symmetric positive-definite matrix $V$. The multivariate normal distribution is a special case of the [elliptical distributions](https://en.wikipedia.org/wiki/Elliptical_distribution "Elliptical distribution"). As such, its iso-density loci in the $k = 2$ case are [ellipses](https://en.wikipedia.org/wiki/Ellipse "Ellipse") and in the case of arbitrary $k$ are [ellipsoids](https://en.wikipedia.org/wiki/Ellipsoid "Ellipsoid").
- The [Rectified Gaussian distribution](https://en.wikipedia.org/wiki/Rectified_Gaussian_distribution "Rectified Gaussian distribution") is a rectified version of the normal distribution with all the negative elements reset to 0.
- The [Complex normal distribution](https://en.wikipedia.org/wiki/Complex_normal_distribution "Complex normal distribution") deals with complex normal vectors. A complex vector $X \in \mathbb{C}^k$ is said to be normal if both its real and imaginary components jointly possess a $2k$-dimensional multivariate normal distribution. The variance-covariance structure of $X$ is described by two matrices: the *variance* matrix $\Gamma$, and the *relation* matrix $C$.
- The [Matrix normal distribution](https://en.wikipedia.org/wiki/Matrix_normal_distribution "Matrix normal distribution") describes the case of normally distributed matrices.
- [Gaussian processes](https://en.wikipedia.org/wiki/Gaussian_process "Gaussian process") are the normally distributed [stochastic processes](https://en.wikipedia.org/wiki/Stochastic_process "Stochastic process"). These can be viewed as elements of some infinite-dimensional [Hilbert space](https://en.wikipedia.org/wiki/Hilbert_space "Hilbert space") $H$, and thus are the analogues of multivariate normal vectors for the case $k = \infty$. A random element $h \in H$ is said to be normal if for any constant $a \in H$ the [scalar product](https://en.wikipedia.org/wiki/Scalar_product "Scalar product") $(a, h)$ has a (univariate) normal distribution. The variance structure of such a Gaussian random element can be described in terms of the linear *covariance operator* $K: H \to H$. Several Gaussian processes became popular enough to have their own names:
  - [Brownian motion](https://en.wikipedia.org/wiki/Wiener_process "Wiener process"),
  - [Brownian bridge](https://en.wikipedia.org/wiki/Brownian_bridge "Brownian bridge"), and
  - [Ornstein–Uhlenbeck process](https://en.wikipedia.org/wiki/Ornstein%E2%80%93Uhlenbeck_process "Ornstein–Uhlenbeck process").
- The [Gaussian q-distribution](https://en.wikipedia.org/wiki/Gaussian_q-distribution "Gaussian q-distribution") is an abstract mathematical construction that represents a [q-analogue](https://en.wikipedia.org/wiki/Q-analogue "Q-analogue") of the normal distribution.
- The [q-Gaussian](https://en.wikipedia.org/wiki/Q-Gaussian "Q-Gaussian") is an analogue of the Gaussian distribution, in the sense that it maximises the [Tsallis entropy](https://en.wikipedia.org/wiki/Tsallis_entropy "Tsallis entropy"), and is one type of [Tsallis distribution](https://en.wikipedia.org/wiki/Tsallis_distribution "Tsallis distribution"). This distribution is different from the [Gaussian q-distribution](https://en.wikipedia.org/wiki/Gaussian_q-distribution "Gaussian q-distribution") above.
- The [Kaniadakis Îș-Gaussian distribution](https://en.wikipedia.org/wiki/Kaniadakis_Gaussian_distribution "Kaniadakis Gaussian distribution") is a generalization of the Gaussian distribution which arises from the [Kaniadakis statistics](https://en.wikipedia.org/wiki/Kaniadakis_statistics "Kaniadakis statistics"), being one of the [Kaniadakis distributions](https://en.wikipedia.org/wiki/Kaniadakis_distribution "Kaniadakis distribution").

A random variable $X$ has a two-piece normal distribution if it has a distribution $$f_X(x) = \begin{cases} N(\mu, \sigma_1^2) & \text{if } x \leq \mu \\ N(\mu, \sigma_2^2) & \text{if } x \geq \mu \end{cases}$$ where $\mu$ is the mean and $\sigma_1^2$ and $\sigma_2^2$ are the variances of the distribution to the left and right of the mean respectively.
The mean $\operatorname{E}(X)$, variance $\operatorname{V}(X)$, and third central moment $\operatorname{T}(X)$ of this distribution have been determined:[\[55\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-John-1982-55) $$\begin{aligned} \operatorname{E}(X) &= \mu + \sqrt{\frac{2}{\pi}}(\sigma_2 - \sigma_1), \\ \operatorname{V}(X) &= \left(1 - \frac{2}{\pi}\right)(\sigma_2 - \sigma_1)^2 + \sigma_1\sigma_2, \\ \operatorname{T}(X) &= \sqrt{\frac{2}{\pi}}(\sigma_2 - \sigma_1)\left[\left(\frac{4}{\pi} - 1\right)(\sigma_2 - \sigma_1)^2 + \sigma_1\sigma_2\right]. \end{aligned}$$
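These closed forms can be checked against a simulated two-piece normal. The sampler below uses the standard construction for this density (worth flagging as an assumption: the left branch is selected with probability $\sigma_1/(\sigma_1 + \sigma_2)$, which is what makes the density continuous at $\mu$); the parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(6)
mu, s1, s2, n = 0.5, 1.0, 2.5, 1_000_000

# Choose the left branch with probability s1/(s1+s2), then draw a
# half-normal deviate with that branch's scale
left = rng.random(n) < s1 / (s1 + s2)
scale = np.where(left, s1, s2)
sign = np.where(left, -1.0, 1.0)
x = mu + sign * np.abs(rng.standard_normal(n)) * scale

mean_formula = mu + np.sqrt(2 / np.pi) * (s2 - s1)
var_formula = (1 - 2 / np.pi) * (s2 - s1) ** 2 + s1 * s2
print(x.mean(), mean_formula)   # both ~ 1.70
print(x.var(), var_formula)     # both ~ 3.32
```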
One of the main practical uses of the Gaussian law is to model the empirical distributions of many different random variables encountered in practice. In such a case, a possible extension would be a richer family of distributions, having more than two parameters and therefore being able to fit the empirical distribution more accurately. Examples of such extensions are:

- The [Pearson distribution](https://en.wikipedia.org/wiki/Pearson_distribution "Pearson distribution") — a four-parameter family of probability distributions that extend the normal law to include different skewness and kurtosis values.
- The [generalized normal distribution](https://en.wikipedia.org/wiki/Generalized_normal_distribution "Generalized normal distribution"), also known as the exponential power distribution, allows for distribution tails with thicker or thinner asymptotic behaviors.

## Statistical inference

### Estimation of parameters

See also: [Maximum likelihood § Continuous distribution, continuous parameter space](https://en.wikipedia.org/wiki/Maximum_likelihood#Continuous_distribution,_continuous_parameter_space "Maximum likelihood"); and [Gaussian function § Estimation of parameters](https://en.wikipedia.org/wiki/Gaussian_function#Estimation_of_parameters "Gaussian function")

It is often the case that we do not know the parameters of the normal distribution, but instead want to [estimate](https://en.wikipedia.org/wiki/Estimation_theory "Estimation theory") them. That is, having a sample $(x_1, \ldots, x_n)$ from a normal $\mathcal{N}(\mu, \sigma^2)$ population, we would like to learn the approximate values of the parameters $\mu$ and $\sigma^2$. The standard approach to this problem is the [maximum likelihood](https://en.wikipedia.org/wiki/Maximum_likelihood "Maximum likelihood") method, which requires maximization of the *[log-likelihood function](https://en.wikipedia.org/wiki/Log-likelihood_function "Log-likelihood function")*: $$\ln \mathcal{L}(\mu, \sigma^2) = \sum_{i=1}^n \ln f(x_i \mid \mu, \sigma^2) = -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln \sigma^2 - \frac{1}{2\sigma^2}\sum_{i=1}^n (x_i - \mu)^2.$$ Taking derivatives with respect to $\mu$ and $\sigma^2$ and solving the resulting system of first-order conditions yields the *maximum likelihood estimates*:
$$\hat{\mu} = \overline{x} \equiv \frac{1}{n}\sum_{i=1}^n x_i, \qquad \hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^n (x_i - \overline{x})^2.$$

At these values, the maximized log-likelihood is $$\ln \mathcal{L}(\hat{\mu}, \hat{\sigma}^2) = -\frac{n}{2}\left[\ln(2\pi\hat{\sigma}^2) + 1\right].$$
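In code, these estimates are one line each, and `scipy.stats.norm.fit` returns the same maximum likelihood estimates (its `scale` is $\hat{\sigma}$, the square root of the biased $1/n$ variance estimate). A sketch with arbitrary synthetic data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.normal(10.0, 3.0, size=10_000)     # arbitrary sample
n = x.size

mu_hat = x.mean()                          # MLE of mu
sigma2_hat = ((x - mu_hat) ** 2).mean()    # MLE of sigma^2 (1/n, not 1/(n-1))

loc, scale = stats.norm.fit(x)             # scipy's MLE fit
print(mu_hat, loc)                         # agree
print(np.sqrt(sigma2_hat), scale)          # agree

# The maximized log-likelihood matches the closed form
print(stats.norm.logpdf(x, loc, scale).sum())
print(-(n / 2) * (np.log(2 * np.pi * sigma2_hat) + 1))
```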
#### Sample mean

See also: [Standard error of the mean](https://en.wikipedia.org/wiki/Standard_error_of_the_mean "Standard error of the mean")

The estimator $\hat{\mu}$ is called the *[sample mean](https://en.wikipedia.org/wiki/Sample_mean "Sample mean")*, since it is the arithmetic mean of all observations. The statistic $\overline{x}$ is [complete](https://en.wikipedia.org/wiki/Complete_statistic "Complete statistic") and [sufficient](https://en.wikipedia.org/wiki/Sufficient_statistic "Sufficient statistic") for $\mu$, and therefore by the [Lehmann–ScheffĂ© theorem](https://en.wikipedia.org/wiki/Lehmann%E2%80%93Scheff%C3%A9_theorem "Lehmann–ScheffĂ© theorem"), $\hat{\mu}$ is the [uniformly minimum variance unbiased](https://en.wikipedia.org/wiki/Uniformly_minimum_variance_unbiased "Uniformly minimum variance unbiased") (UMVU) estimator.[\[56\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Krishnamoorthy-56) In finite samples it is distributed normally: $$\hat{\mu} \sim \mathcal{N}(\mu, \sigma^2/n).$$ The variance of this estimator is equal to the $\mu\mu$-element of the inverse [Fisher information matrix](https://en.wikipedia.org/wiki/Fisher_information_matrix "Fisher information matrix") $\mathcal{I}^{-1}$. This implies that the estimator is [finite-sample efficient](https://en.wikipedia.org/wiki/Efficient_estimator "Efficient estimator"). Of practical importance is the fact that the [standard error](https://en.wikipedia.org/wiki/Standard_error "Standard error") of $\hat{\mu}$ is proportional to $1/\sqrt{n}$; that is, if one wishes to decrease the standard error by a factor of 10, one must increase the number of points in the sample by a factor of 100. This fact is widely used in determining sample sizes for opinion polls and the number of trials in [Monte Carlo simulations](https://en.wikipedia.org/wiki/Monte_Carlo_simulation "Monte Carlo simulation").

From the standpoint of the [asymptotic theory](https://en.wikipedia.org/wiki/Asymptotic_theory_\(statistics\) "Asymptotic theory (statistics)"), $\hat{\mu}$ is [consistent](https://en.wikipedia.org/wiki/Consistent_estimator "Consistent estimator"), that is, it [converges in probability](https://en.wikipedia.org/wiki/Converges_in_probability "Converges in probability") to $\mu$ as $n \to \infty$. The estimator is also [asymptotically normal](https://en.wikipedia.org/wiki/Asymptotic_normality "Asymptotic normality"), which is a simple corollary of the fact that it is normal in finite samples: $$\sqrt{n}(\hat{\mu} - \mu) \,\xrightarrow{d}\, \mathcal{N}(0, \sigma^2).$$
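The $1/\sqrt{n}$ scaling is easy to see empirically (a sketch; the sample sizes, replication count, and seed are arbitrary): multiplying the sample size by 100 shrinks the spread of the sample mean by a factor of about 10:

```python
import numpy as np

rng = np.random.default_rng(8)
mu, sigma, reps = 0.0, 1.0, 1_000

# Empirical standard error of the mean at two sample sizes
se_small = rng.normal(mu, sigma, size=(reps, 100)).mean(axis=1).std()
se_large = rng.normal(mu, sigma, size=(reps, 10_000)).mean(axis=1).std()

print(se_small, sigma / np.sqrt(100))      # ~ 0.1
print(se_large, sigma / np.sqrt(10_000))   # ~ 0.01
print(se_small / se_large)                 # ~ 10
```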
#### Sample variance

See also: [Standard deviation § Estimation](https://en.wikipedia.org/wiki/Standard_deviation#Estimation "Standard deviation"), and [Variance § Estimation](https://en.wikipedia.org/wiki/Variance#Estimation "Variance")

The estimator $\hat{\sigma}^2$ is called the *[sample variance](https://en.wikipedia.org/wiki/Sample_variance "Sample variance")*, since it is the variance of the sample $(x_1, \ldots, x_n)$. In practice, another estimator is often used instead of $\hat{\sigma}^2$. This other estimator is denoted $s^2$, and is also called the *sample variance*, which represents a certain ambiguity in terminology; its square root $s$ is called the *sample standard deviation*. The estimator $s^2$ differs from $\hat{\sigma}^2$ by having $(n - 1)$ instead of $n$ in the denominator (the so-called [Bessel's correction](https://en.wikipedia.org/wiki/Bessel%27s_correction "Bessel's correction")): $$s^2 = \frac{n}{n-1}\hat{\sigma}^2 = \frac{1}{n-1}\sum_{i=1}^n (x_i - \overline{x})^2.$$ The difference between $s^2$ and $\hat{\sigma}^2$ becomes negligibly small for large $n$. In finite samples, however, the motivation behind the use of $s^2$ is that it is an [unbiased estimator](https://en.wikipedia.org/wiki/Unbiased_estimator "Unbiased estimator") of the underlying parameter $\sigma^2$, whereas $\hat{\sigma}^2$ is biased. Also, by the Lehmann–ScheffĂ© theorem the estimator $s^2$ is uniformly minimum variance unbiased ([UMVU](https://en.wikipedia.org/wiki/UMVU "UMVU")),[\[56\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Krishnamoorthy-56) which makes it the "best" estimator among all unbiased ones. However, it can be shown that the biased estimator $\hat{\sigma}^2$ is better than $s^2$ in terms of the [mean squared error](https://en.wikipedia.org/wiki/Mean_squared_error "Mean squared error") (MSE) criterion. In finite samples both $s^2$ and $\hat{\sigma}^2$ have a scaled [chi-squared distribution](https://en.wikipedia.org/wiki/Chi-squared_distribution "Chi-squared distribution") with $(n - 1)$ degrees of freedom: $$s^2 \sim \frac{\sigma^2}{n-1}\cdot \chi_{n-1}^2, \qquad \hat{\sigma}^2 \sim \frac{\sigma^2}{n}\cdot \chi_{n-1}^2.$$ The first of these expressions shows that the variance of $s^2$ is equal to $2\sigma^4/(n-1)$, which is slightly greater than the $\sigma\sigma$-element of the inverse Fisher information matrix $\mathcal{I}^{-1}$, which is $2\sigma^4/n$. Thus, $s^2$ is not an efficient estimator for $\sigma^2$, and moreover, since $s^2$ is UMVU, we can conclude that a finite-sample efficient estimator for $\sigma^2$ does not exist.
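A simulation makes the bias/MSE trade-off concrete (a sketch; $n$, $\sigma^2$, the replication count, and the seed are arbitrary): $s^2$ is unbiased, yet $\hat{\sigma}^2$ attains the smaller mean squared error:

```python
import numpy as np

rng = np.random.default_rng(9)
n, sigma2, reps = 10, 4.0, 200_000
x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))

s2 = x.var(axis=1, ddof=1)     # Bessel-corrected, unbiased
sigma2_hat = x.var(axis=1)     # MLE, biased (1/n denominator)

print(s2.mean(), sigma2)                         # unbiased: ~ 4.0
print(sigma2_hat.mean(), (n - 1) / n * sigma2)   # biased:   ~ 3.6
print(((s2 - sigma2) ** 2).mean())               # MSE of s^2       (~ 3.6)
print(((sigma2_hat - sigma2) ** 2).mean())       # smaller MSE      (~ 3.0)
```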
Applying the asymptotic theory, both estimators $s^2$ and $\hat{\sigma}^2$ are consistent, that is, they converge in probability to $\sigma^2$ as the sample size $n \to \infty$. The two estimators are also both asymptotically normal: $$\sqrt{n}(\hat{\sigma}^2 - \sigma^2) \simeq \sqrt{n}(s^2 - \sigma^2) \,\xrightarrow{d}\, \mathcal{N}(0, 2\sigma^4).$$ In particular, both estimators are asymptotically efficient for $\sigma^2$.

### Confidence intervals

See also: [Studentization](https://en.wikipedia.org/wiki/Studentization "Studentization") and [3-sigma rule](https://en.wikipedia.org/wiki/3-sigma_rule "3-sigma rule")

By [Cochran's theorem](https://en.wikipedia.org/wiki/Cochran%27s_theorem "Cochran's theorem"), for normal distributions the sample mean $\hat{\mu}$ and the sample variance $s^2$ are [independent](https://en.wikipedia.org/wiki/Independence_\(probability_theory\) "Independence (probability theory)"), which means there can be no gain in considering their [joint distribution](https://en.wikipedia.org/wiki/Joint_distribution "Joint distribution"). There is also a converse theorem: if in a sample the sample mean and sample variance are independent, then the sample must have come from the normal distribution.
The independence between $\hat{\mu}$ and $s$ can be employed to construct the so-called *t-statistic*: $$t = \frac{\hat{\mu} - \mu}{s/\sqrt{n}} = \frac{\overline{x} - \mu}{\sqrt{\frac{1}{n(n-1)}\sum (x_i - \overline{x})^2}} \sim t_{n-1}.$$ This quantity $t$ has the [Student's t-distribution](https://en.wikipedia.org/wiki/Student%27s_t-distribution "Student's t-distribution") with $(n - 1)$ degrees of freedom, and it is an [ancillary statistic](https://en.wikipedia.org/wiki/Ancillary_statistic "Ancillary statistic") (independent of the value of the parameters). Inverting the distribution of this $t$-statistic will allow us to construct the [confidence interval](https://en.wikipedia.org/wiki/Confidence_interval "Confidence interval") for $\mu$;[\[57\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-57) similarly, inverting the $\chi^2$ distribution of the statistic $s^2$ will give us the confidence interval for $\sigma^2$:[\[58\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-58) $$\mu \in \left[\hat{\mu} - t_{n-1,1-\alpha/2}\frac{s}{\sqrt{n}},\; \hat{\mu} + t_{n-1,1-\alpha/2}\frac{s}{\sqrt{n}}\right]$$ $$\sigma^2 \in \left[\frac{n-1}{\chi_{n-1,1-\alpha/2}^2}s^2,\; \frac{n-1}{\chi_{n-1,\alpha/2}^2}s^2\right]$$ where $t_{k,p}$ and $\chi_{k,p}^2$ are the $p$th [quantiles](https://en.wikipedia.org/wiki/Quantile "Quantile") of the $t$- and $\chi^2$-distributions respectively. These confidence intervals are of the *[confidence level](https://en.wikipedia.org/wiki/Confidence_level "Confidence level")* $1 - \alpha$, meaning that the true values $\mu$ and $\sigma^2$ fall outside of these intervals with probability (or [significance level](https://en.wikipedia.org/wiki/Significance_level "Significance level")) $\alpha$. In practice people usually take $\alpha = 5\%$, resulting in 95% confidence intervals. The confidence interval for $\sigma$ can be found by taking the square root of the interval bounds for $\sigma^2$.
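These exact intervals translate directly into code via the `scipy.stats` quantile functions (a sketch; the data and $\alpha$ are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
x = rng.normal(10.0, 3.0, size=50)   # arbitrary sample
n, alpha = x.size, 0.05

mu_hat = x.mean()
s2 = x.var(ddof=1)
s = np.sqrt(s2)

# Exact t-interval for mu
tq = stats.t.ppf(1 - alpha / 2, df=n - 1)
print("mu:", mu_hat - tq * s / np.sqrt(n), mu_hat + tq * s / np.sqrt(n))

# Exact chi-squared interval for sigma^2
chi_lo = stats.chi2.ppf(alpha / 2, df=n - 1)
chi_hi = stats.chi2.ppf(1 - alpha / 2, df=n - 1)
print("sigma^2:", (n - 1) * s2 / chi_hi, (n - 1) * s2 / chi_lo)
```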
Approximate formulas can be derived from the asymptotic distributions of $\hat{\mu}$ and $s^2$: $$\mu \in \left[\hat{\mu} - \frac{|z_{\alpha/2}|}{\sqrt{n}}s,\; \hat{\mu} + \frac{|z_{\alpha/2}|}{\sqrt{n}}s\right]$$ $$\sigma^2 \in \left[s^2 - \sqrt{2}\frac{|z_{\alpha/2}|}{\sqrt{n}}s^2,\; s^2 + \sqrt{2}\frac{|z_{\alpha/2}|}{\sqrt{n}}s^2\right]$$ The approximate formulas become valid for large values of $n$, and are more convenient for manual calculation since the standard normal quantiles $z_{\alpha/2}$ do not depend on $n$. In particular, the most popular value $\alpha = 5\%$ results in $|z_{0.025}|$ = [1.96](https://en.wikipedia.org/wiki/1.96 "1.96").

### Normality tests

Main article: [Normality tests](https://en.wikipedia.org/wiki/Normality_tests "Normality tests")

Normality tests assess the likelihood that the given data set $\{x_1, \ldots, x_n\}$ comes from a normal distribution. Typically the [null hypothesis](https://en.wikipedia.org/wiki/Null_hypothesis "Null hypothesis") $H_0$ is that the observations are distributed normally with unspecified mean $\mu$ and variance $\sigma^2$, versus the alternative $H_a$ that the distribution is arbitrary. Many tests (over 40) have been devised for this problem. The more prominent of them are outlined below:

**Diagnostic plots** are more intuitively appealing but subjective at the same time, as they rely on informal human judgement to accept or reject the null hypothesis.

- [Q–Q plot](https://en.wikipedia.org/wiki/Q%E2%80%93Q_plot "Q–Q plot"), also known as a [normal probability plot](https://en.wikipedia.org/wiki/Normal_probability_plot "Normal probability plot") or [rankit](https://en.wikipedia.org/wiki/Rankit "Rankit") plot, is a plot of the sorted values from the data set against the expected values of the corresponding quantiles from the standard normal distribution. That is, it is a plot of points of the form $(\Phi^{-1}(p_k), x_{(k)})$, where the plotting points $p_k$ are equal to $p_k = (k - \alpha)/(n + 1 - 2\alpha)$ and $\alpha$ is an adjustment constant, which can be anything between 0 and 1. If the null hypothesis is true, the plotted points should approximately lie on a straight line. (A plotting-position sketch follows this list.)
- [P–P plot](https://en.wikipedia.org/wiki/P%E2%80%93P_plot "P–P plot") – similar to the Q–Q plot, but used much less frequently. This method consists of plotting the points $(\Phi(z_{(k)}), p_k)$, where $z_{(k)} = (x_{(k)} - \hat{\mu})/\hat{\sigma}$. For normally distributed data this plot should lie on a straight line between (0, 0) and (1, 1).
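The Q–Q construction amounts to a few lines (a sketch; the adjustment constant $\alpha = 3/8$ is one common choice within the allowed range, and the data are arbitrary): for normal data, the correlation between theoretical and sample quantiles should be very close to 1:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
x = np.sort(rng.normal(5.0, 2.0, size=500))   # sorted sample x_(k)
n, a = x.size, 3 / 8                          # plotting-position constant

k = np.arange(1, n + 1)
p = (k - a) / (n + 1 - 2 * a)                 # plotting points p_k
q = stats.norm.ppf(p)                         # theoretical quantiles Phi^{-1}(p_k)

# Near-perfect linearity of (q, x) is consistent with normality
print(np.corrcoef(q, x)[0, 1])                # ~ 0.999
```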
  This method consists of plotting the points $(\Phi(z_{(k)}), p_k)$, where $z_{(k)} = (x_{(k)} - \hat{\mu})/\hat{\sigma}$. For normally distributed data this plot should lie on a straight line between (0, 0) and (1, 1).

**Goodness-of-fit tests**:

*Moment-based tests*:

- [D'Agostino's K-squared test](https://en.wikipedia.org/wiki/D%27Agostino%27s_K-squared_test "D'Agostino's K-squared test")
- [Jarque–Bera test](https://en.wikipedia.org/wiki/Jarque%E2%80%93Bera_test "Jarque–Bera test")
- [Shapiro–Wilk test](https://en.wikipedia.org/wiki/Shapiro%E2%80%93Wilk_test "Shapiro–Wilk test"): This is based on the fact that the line in the Q–Q plot has slope σ. The test compares the least squares estimate of that slope with the value of the sample variance, and rejects the null hypothesis if these two quantities differ significantly.

*Tests based on the empirical distribution function*:

- [Anderson–Darling test](https://en.wikipedia.org/wiki/Anderson%E2%80%93Darling_test "Anderson–Darling test")
- [Lilliefors test](https://en.wikipedia.org/wiki/Lilliefors_test "Lilliefors test") (an adaptation of the [Kolmogorov–Smirnov test](https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test "Kolmogorov–Smirnov test"))
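Several of these tests are available off the shelf; a minimal sketch with `scipy.stats` (the simulated `data` is hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.normal(loc=0.0, scale=1.0, size=200)   # hypothetical sample

# Moment-based tests
k2_stat, k2_p = stats.normaltest(data)        # D'Agostino's K-squared test
jb_stat, jb_p = stats.jarque_bera(data)       # Jarque-Bera test
sw_stat, sw_p = stats.shapiro(data)           # Shapiro-Wilk test

# Test based on the empirical distribution function
ad_result = stats.anderson(data, dist='norm') # Anderson-Darling test

print(f"D'Agostino K^2: p = {k2_p:.3f}")
print(f"Jarque-Bera:    p = {jb_p:.3f}")
print(f"Shapiro-Wilk:   p = {sw_p:.3f}")
print("Anderson-Darling statistic:", ad_result.statistic)
```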
### Bayesian analysis of the normal distribution

Bayesian analysis of normally distributed data is complicated by the many different possibilities that may be considered:

- Either the mean, or the variance, or neither, may be considered a fixed quantity.
- When the variance is unknown, analysis may be done directly in terms of the variance, or in terms of the [precision](https://en.wikipedia.org/wiki/Precision_\(statistics\) "Precision (statistics)"), the reciprocal of the variance. The reason for expressing the formulas in terms of precision is that the analysis of most cases is simplified.
- Both univariate and [multivariate](https://en.wikipedia.org/wiki/Multivariate_normal_distribution "Multivariate normal distribution") cases need to be considered.
- Either [conjugate](https://en.wikipedia.org/wiki/Conjugate_prior "Conjugate prior") or [improper](https://en.wikipedia.org/wiki/Improper_prior "Improper prior") [prior distributions](https://en.wikipedia.org/wiki/Prior_distribution "Prior distribution") may be placed on the unknown variables.
- An additional set of cases occurs in [Bayesian linear regression](https://en.wikipedia.org/wiki/Bayesian_linear_regression "Bayesian linear regression"), where in the basic model the data is assumed to be normally distributed, and normal priors are placed on the [regression coefficients](https://en.wikipedia.org/wiki/Regression_coefficient "Regression coefficient"). The resulting analysis is similar to the basic cases of [independent identically distributed](https://en.wikipedia.org/wiki/Independent_identically_distributed "Independent identically distributed") data.

The formulas for the non-linear-regression cases are summarized in the [conjugate prior](https://en.wikipedia.org/wiki/Conjugate_prior "Conjugate prior") article.

#### Sum of two quadratics

##### Scalar form

The following auxiliary formula is useful for simplifying the [posterior](https://en.wikipedia.org/wiki/Posterior_distribution "Posterior distribution") update equations, which otherwise become fairly tedious.

$$a(x-y)^2 + b(x-z)^2 = (a+b)\left(x - \frac{ay+bz}{a+b}\right)^2 + \frac{ab}{a+b}(y-z)^2$$

This equation rewrites the sum of two quadratics in *x* by expanding the squares, grouping the terms in *x*, and [completing the square](https://en.wikipedia.org/wiki/Completing_the_square "Completing the square"). Note the following about the somewhat complicated constant factors attached to some of the terms:

1. The factor $\frac{ay+bz}{a+b}$ has the form of a [weighted average](https://en.wikipedia.org/wiki/Weighted_average "Weighted average") of *y* and *z*.
2. $\frac{ab}{a+b} = \frac{1}{\frac{1}{a}+\frac{1}{b}} = (a^{-1}+b^{-1})^{-1}$. This shows that this factor can be thought of as resulting from a situation where the [reciprocals](https://en.wikipedia.org/wiki/Multiplicative_inverse "Multiplicative inverse") of quantities *a* and *b* add directly, so to combine *a* and *b* themselves, it is necessary to reciprocate, add, and reciprocate the result again to get back into the original units. This is exactly the sort of operation performed by the [harmonic mean](https://en.wikipedia.org/wiki/Harmonic_mean "Harmonic mean"), so it is not surprising that $\frac{ab}{a+b}$ is one-half the [harmonic mean](https://en.wikipedia.org/wiki/Harmonic_mean "Harmonic mean") of *a* and *b*.
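The identity is easy to sanity-check numerically; a minimal sketch (all values arbitrary):

```python
# Numerical check of a(x-y)^2 + b(x-z)^2
#   = (a+b)(x - (ay+bz)/(a+b))^2 + ab/(a+b) * (y-z)^2
a, b, x, y, z = 2.0, 5.0, 1.3, -0.7, 4.2   # arbitrary test values

lhs = a * (x - y) ** 2 + b * (x - z) ** 2
c = (a * y + b * z) / (a + b)              # the weighted average of y and z
rhs = (a + b) * (x - c) ** 2 + (a * b / (a + b)) * (y - z) ** 2

assert abs(lhs - rhs) < 1e-12
print(lhs, rhs)
```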
##### Vector form

A similar formula can be written for the sum of two vector quadratics: If **x**, **y**, **z** are vectors of length *k*, and **A** and **B** are [symmetric](https://en.wikipedia.org/wiki/Symmetric_matrix "Symmetric matrix"), [invertible matrices](https://en.wikipedia.org/wiki/Invertible_matrices "Invertible matrices") of size $k \times k$, then

$$\begin{aligned} &(\mathbf{y}-\mathbf{x})'\mathbf{A}(\mathbf{y}-\mathbf{x}) + (\mathbf{x}-\mathbf{z})'\mathbf{B}(\mathbf{x}-\mathbf{z}) \\ ={}& (\mathbf{x}-\mathbf{c})'(\mathbf{A}+\mathbf{B})(\mathbf{x}-\mathbf{c}) + (\mathbf{y}-\mathbf{z})'(\mathbf{A}^{-1}+\mathbf{B}^{-1})^{-1}(\mathbf{y}-\mathbf{z}) \end{aligned}$$

where

$$\mathbf{c} = (\mathbf{A}+\mathbf{B})^{-1}(\mathbf{A}\mathbf{y}+\mathbf{B}\mathbf{z})$$

The form **x**â€Č **A** **x** is called a [quadratic form](https://en.wikipedia.org/wiki/Quadratic_form "Quadratic form") and is a [scalar](https://en.wikipedia.org/wiki/Scalar_\(mathematics\) "Scalar (mathematics)"):

$$\mathbf{x}'\mathbf{A}\mathbf{x} = \sum_{i,j} a_{ij} x_i x_j$$

In other words, it sums up all possible combinations of products of pairs of elements from **x**, with a separate coefficient for each. In addition, since $x_i x_j = x_j x_i$, only the sum $a_{ij} + a_{ji}$ matters for any off-diagonal elements of **A**, and there is no loss of generality in assuming that **A** is [symmetric](https://en.wikipedia.org/wiki/Symmetric_matrix "Symmetric matrix"). Furthermore, if **A** is symmetric, then the form $\mathbf{x}'\mathbf{A}\mathbf{y} = \mathbf{y}'\mathbf{A}\mathbf{x}$.
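As with the scalar case, the vector identity can be verified numerically; a minimal numpy sketch using arbitrary symmetric positive-definite **A** and **B** (construction is illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
k = 4
# Build arbitrary symmetric, invertible (positive-definite) matrices
M1, M2 = rng.normal(size=(k, k)), rng.normal(size=(k, k))
A = M1 @ M1.T + k * np.eye(k)
B = M2 @ M2.T + k * np.eye(k)
x, y, z = rng.normal(size=k), rng.normal(size=k), rng.normal(size=k)

lhs = (y - x) @ A @ (y - x) + (x - z) @ B @ (x - z)
c = np.linalg.solve(A + B, A @ y + B @ z)        # c = (A+B)^{-1}(Ay + Bz)
H = np.linalg.inv(np.linalg.inv(A) + np.linalg.inv(B))  # (A^-1 + B^-1)^-1
rhs = (x - c) @ (A + B) @ (x - c) + (y - z) @ H @ (y - z)

assert np.allclose(lhs, rhs)
print(lhs, rhs)
```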
#### Sum of differences from the mean

Another useful formula is as follows:

$$\sum_{i=1}^n (x_i - \mu)^2 = \sum_{i=1}^n (x_i - \bar{x})^2 + n(\bar{x} - \mu)^2$$

where $\bar{x} = \frac{1}{n}\sum_{i=1}^n x_i$.

#### With known variance

For a set of [i.i.d.](https://en.wikipedia.org/wiki/I.i.d. "I.i.d.") normally distributed data points **X** of size *n* where each individual point *x* follows $x \sim \mathcal{N}(\mu, \sigma^2)$ with known [variance](https://en.wikipedia.org/wiki/Variance "Variance") *σ*ÂČ, the [conjugate prior](https://en.wikipedia.org/wiki/Conjugate_prior "Conjugate prior") distribution is also normally distributed. This can be shown more easily by rewriting the variance as the [precision](https://en.wikipedia.org/wiki/Precision_\(statistics\) "Precision (statistics)"), i.e. using *τ* = 1/*σ*ÂČ. Then if $x \sim \mathcal{N}(\mu, 1/\tau)$ and $\mu \sim \mathcal{N}(\mu_0, 1/\tau_0)$, we proceed as follows. First, the [likelihood function](https://en.wikipedia.org/wiki/Likelihood_function "Likelihood function") is (using the formula above for the sum of differences from the mean):

$$\begin{aligned} p(\mathbf{X} \mid \mu, \tau) &= \prod_{i=1}^n \sqrt{\frac{\tau}{2\pi}} \exp\left(-\frac{1}{2}\tau(x_i-\mu)^2\right) \\ &= \left(\frac{\tau}{2\pi}\right)^{n/2} \exp\left(-\frac{1}{2}\tau \sum_{i=1}^n (x_i-\mu)^2\right) \\ &= \left(\frac{\tau}{2\pi}\right)^{n/2} \exp\left[-\frac{1}{2}\tau\left(\sum_{i=1}^n (x_i-\bar{x})^2 + n(\bar{x}-\mu)^2\right)\right]. \end{aligned}$$
Then, we proceed as follows:

$$\begin{aligned} p(\mu \mid \mathbf{X}) &\propto p(\mathbf{X} \mid \mu)\, p(\mu) \\ &= \left(\frac{\tau}{2\pi}\right)^{n/2} \exp\left[-\frac{1}{2}\tau\left(\sum_{i=1}^n (x_i-\bar{x})^2 + n(\bar{x}-\mu)^2\right)\right] \sqrt{\frac{\tau_0}{2\pi}} \exp\left(-\frac{1}{2}\tau_0(\mu-\mu_0)^2\right) \\ &\propto \exp\left(-\frac{1}{2}\left(\tau\left(\sum_{i=1}^n (x_i-\bar{x})^2 + n(\bar{x}-\mu)^2\right) + \tau_0(\mu-\mu_0)^2\right)\right) \\ &\propto \exp\left(-\frac{1}{2}\left(n\tau(\bar{x}-\mu)^2 + \tau_0(\mu-\mu_0)^2\right)\right) \\ &= \exp\left(-\frac{1}{2}\left((n\tau+\tau_0)\left(\mu - \frac{n\tau\bar{x}+\tau_0\mu_0}{n\tau+\tau_0}\right)^2 + \frac{n\tau\tau_0}{n\tau+\tau_0}(\bar{x}-\mu_0)^2\right)\right) \\ &\propto \exp\left(-\frac{1}{2}(n\tau+\tau_0)\left(\mu - \frac{n\tau\bar{x}+\tau_0\mu_0}{n\tau+\tau_0}\right)^2\right) \end{aligned}$$
In the above derivation, we used the formula above for the sum of two quadratics and eliminated all constant factors not involving ÎŒ. The result is the [kernel](https://en.wikipedia.org/wiki/Kernel_\(statistics\) "Kernel (statistics)") of a normal distribution, with mean $\frac{n\tau\bar{x}+\tau_0\mu_0}{n\tau+\tau_0}$ and precision $n\tau + \tau_0$, i.e.

$$p(\mu \mid \mathbf{X}) \sim \mathcal{N}\left(\frac{n\tau\bar{x}+\tau_0\mu_0}{n\tau+\tau_0},\ \frac{1}{n\tau+\tau_0}\right)$$

This can be written as a set of Bayesian update equations for the posterior parameters in terms of the prior parameters:

$$\begin{aligned} \tau_0' &= \tau_0 + n\tau \\ \mu_0' &= \frac{n\tau\bar{x}+\tau_0\mu_0}{n\tau+\tau_0} \\ \bar{x} &= \frac{1}{n}\sum_{i=1}^n x_i \end{aligned}$$

That is, to combine *n* data points with total precision of *nτ* (or equivalently, total variance of *σ*ÂČ/*n*) and mean of values $\bar{x}$, derive a new total precision simply by adding the total precision of the data to the prior total precision, and form a new mean through a *precision-weighted average*, i.e. a [weighted average](https://en.wikipedia.org/wiki/Weighted_average "Weighted average") of the data mean and the prior mean, each weighted by the associated total precision. This makes logical sense if the precision is thought of as indicating the certainty of the observations: In the distribution of the posterior mean, each of the input components is weighted by its certainty, and the certainty of this distribution is the sum of the individual certainties.
(For the intuition of this, compare the expression "the whole is (or is not) greater than the sum of its parts". In addition, consider that the knowledge of the posterior comes from a combination of the knowledge of the prior and likelihood, so it makes sense that we are more certain of it than of either of its components.)

The above formula reveals why it is more convenient to do [Bayesian analysis](https://en.wikipedia.org/wiki/Bayesian_analysis "Bayesian analysis") of [conjugate priors](https://en.wikipedia.org/wiki/Conjugate_prior "Conjugate prior") for the normal distribution in terms of the precision: the posterior precision is simply the sum of the prior and likelihood precisions, and the posterior mean is computed through a precision-weighted average, as described above. The same formulas can be written in terms of variance by reciprocating all the precisions, yielding the uglier formulas

$$\begin{aligned} {\sigma_0^2}' &= \frac{1}{\frac{n}{\sigma^2} + \frac{1}{\sigma_0^2}} \\ \mu_0' &= \frac{\frac{n\bar{x}}{\sigma^2} + \frac{\mu_0}{\sigma_0^2}}{\frac{n}{\sigma^2} + \frac{1}{\sigma_0^2}} \\ \bar{x} &= \frac{1}{n}\sum_{i=1}^n x_i \end{aligned}$$
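In code, the precision-form update is one line per parameter; a minimal sketch (function and variable names are illustrative only):

```python
import numpy as np

def update_mean_known_variance(data, tau, mu0, tau0):
    """Conjugate update for the mean of a normal with known precision tau.

    Prior: mu ~ N(mu0, 1/tau0). Returns the posterior (mean, precision).
    """
    x = np.asarray(data, dtype=float)
    n = x.size
    xbar = x.mean()
    tau_post = tau0 + n * tau                            # precisions add
    mu_post = (n * tau * xbar + tau0 * mu0) / tau_post   # precision-weighted average
    return mu_post, tau_post

rng = np.random.default_rng(3)
sigma = 2.0                                  # known standard deviation
data = rng.normal(loc=1.0, scale=sigma, size=30)
print(update_mean_known_variance(data, tau=1 / sigma**2, mu0=0.0, tau0=0.25))
```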
#### With known mean

For a set of [i.i.d.](https://en.wikipedia.org/wiki/I.i.d. "I.i.d.") normally distributed data points **X** of size *n* where each individual point *x* follows $x \sim \mathcal{N}(\mu, \sigma^2)$ with known mean ÎŒ, the [conjugate prior](https://en.wikipedia.org/wiki/Conjugate_prior "Conjugate prior") of the [variance](https://en.wikipedia.org/wiki/Variance "Variance") has an [inverse gamma distribution](https://en.wikipedia.org/wiki/Inverse_gamma_distribution "Inverse gamma distribution") or a [scaled inverse chi-squared distribution](https://en.wikipedia.org/wiki/Scaled_inverse_chi-squared_distribution "Scaled inverse chi-squared distribution"). The two are equivalent except for having different [parameterizations](https://en.wikipedia.org/wiki/Parameter "Parameter"). Although the inverse gamma is more commonly used, we use the scaled inverse chi-squared for the sake of convenience.

The prior for *σ*ÂČ is as follows:

$$p(\sigma^2 \mid \nu_0, \sigma_0^2) = \frac{\left(\sigma_0^2 \frac{\nu_0}{2}\right)^{\nu_0/2}}{\Gamma\left(\frac{\nu_0}{2}\right)} \, \frac{\exp\left[\frac{-\nu_0 \sigma_0^2}{2\sigma^2}\right]}{(\sigma^2)^{1+\frac{\nu_0}{2}}} \propto \frac{\exp\left[\frac{-\nu_0 \sigma_0^2}{2\sigma^2}\right]}{(\sigma^2)^{1+\frac{\nu_0}{2}}}$$

The [likelihood function](https://en.wikipedia.org/wiki/Likelihood_function "Likelihood function") from above, written in terms of the variance, is:

$$\begin{aligned} p(\mathbf{X} \mid \mu, \sigma^2) &= \left(\frac{1}{2\pi\sigma^2}\right)^{n/2} \exp\left[-\frac{1}{2\sigma^2} \sum_{i=1}^n (x_i-\mu)^2\right] \\ &= \left(\frac{1}{2\pi\sigma^2}\right)^{n/2} \exp\left[-\frac{S}{2\sigma^2}\right] \end{aligned}$$

where $S = \sum_{i=1}^n (x_i - \mu)^2$. Then:

$$\begin{aligned} p(\sigma^2 \mid \mathbf{X}) &\propto p(\mathbf{X} \mid \sigma^2)\, p(\sigma^2) \\ &= \left(\frac{1}{2\pi\sigma^2}\right)^{n/2} \exp\left[-\frac{S}{2\sigma^2}\right] \frac{\left(\sigma_0^2 \frac{\nu_0}{2}\right)^{\frac{\nu_0}{2}}}{\Gamma\left(\frac{\nu_0}{2}\right)} \, \frac{\exp\left[\frac{-\nu_0 \sigma_0^2}{2\sigma^2}\right]}{(\sigma^2)^{1+\frac{\nu_0}{2}}} \\ &\propto \left(\frac{1}{\sigma^2}\right)^{n/2} \frac{1}{(\sigma^2)^{1+\frac{\nu_0}{2}}} \exp\left[-\frac{S}{2\sigma^2} + \frac{-\nu_0\sigma_0^2}{2\sigma^2}\right] \\ &= \frac{1}{(\sigma^2)^{1+\frac{\nu_0+n}{2}}} \exp\left[-\frac{\nu_0\sigma_0^2 + S}{2\sigma^2}\right] \end{aligned}$$

The above is also a scaled inverse chi-squared distribution where

$$\begin{aligned} \nu_0' &= \nu_0 + n \\ \nu_0' {\sigma_0^2}' &= \nu_0 \sigma_0^2 + \sum_{i=1}^n (x_i - \mu)^2 \end{aligned}$$

or equivalently

$$\begin{aligned} \nu_0' &= \nu_0 + n \\ {\sigma_0^2}' &= \frac{\nu_0 \sigma_0^2 + \sum_{i=1}^n (x_i-\mu)^2}{\nu_0 + n} \end{aligned}$$
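The update again reduces to two lines; a minimal sketch in the scaled-inverse-chi-squared parameterization (names illustrative):

```python
import numpy as np

def update_variance_known_mean(data, mu, nu0, sigma0_sq):
    """Conjugate update for the variance of a normal with known mean mu.

    Prior: sigma^2 ~ Scaled-Inv-Chi^2(nu0, sigma0_sq).
    Returns the posterior hyperparameters (nu, sigma_sq).
    """
    x = np.asarray(data, dtype=float)
    n = x.size
    S = np.sum((x - mu) ** 2)              # sum of squared deviations from mu
    nu_post = nu0 + n
    sigma_sq_post = (nu0 * sigma0_sq + S) / nu_post
    return nu_post, sigma_sq_post

rng = np.random.default_rng(4)
data = rng.normal(loc=1.0, scale=2.0, size=30)
print(update_variance_known_mean(data, mu=1.0, nu0=2.0, sigma0_sq=1.0))
```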
Reparameterizing in terms of an [inverse gamma distribution](https://en.wikipedia.org/wiki/Inverse_gamma_distribution "Inverse gamma distribution"), the result is:

$$\begin{aligned} \alpha' &= \alpha + \frac{n}{2} \\ \beta' &= \beta + \frac{\sum_{i=1}^n (x_i-\mu)^2}{2} \end{aligned}$$

#### With unknown mean and unknown variance

For a set of [i.i.d.](https://en.wikipedia.org/wiki/I.i.d. "I.i.d.") normally distributed data points **X** of size *n* where each individual point *x* follows $x \sim \mathcal{N}(\mu, \sigma^2)$ with unknown mean ÎŒ and unknown [variance](https://en.wikipedia.org/wiki/Variance "Variance") *σ*ÂČ, a combined (multivariate) [conjugate prior](https://en.wikipedia.org/wiki/Conjugate_prior "Conjugate prior") is placed over the mean and variance, consisting of a [normal-inverse-gamma distribution](https://en.wikipedia.org/wiki/Normal-inverse-gamma_distribution "Normal-inverse-gamma distribution"). Logically, this originates as follows:

1. From the analysis of the case with unknown mean but known variance, we see that the update equations involve [sufficient statistics](https://en.wikipedia.org/wiki/Sufficient_statistic "Sufficient statistic") computed from the data consisting of the mean of the data points and the total variance of the data points, computed in turn from the known variance divided by the number of data points.
2. From the analysis of the case with unknown variance but known mean, we see that the update equations involve sufficient statistics over the data consisting of the number of data points and the [sum of squared deviations](https://en.wikipedia.org/wiki/Sum_of_squared_deviations "Sum of squared deviations").
3. Keep in mind that the posterior update values serve as the prior distribution when further data is handled. Thus, we should logically think of our priors in terms of the sufficient statistics just described, with the same semantics kept in mind as much as possible.
4. To handle the case where both mean and variance are unknown, we could place independent priors over the mean and variance, with fixed estimates of the average mean, total variance, number of data points used to compute the variance prior, and sum of squared deviations. Note however that in reality, the total variance of the mean depends on the unknown variance, and the sum of squared deviations that goes into the variance prior (appears to) depend on the unknown mean. In practice, the latter dependence is relatively unimportant: shifting the actual mean shifts the generated points by an equal amount, and on average the squared deviations will remain the same.
   This is not the case, however, with the total variance of the mean: as the unknown variance increases, the total variance of the mean will increase proportionately, and we would like to capture this dependence.
5. This suggests that we create a *conditional prior* of the mean on the unknown variance, with a hyperparameter specifying the mean of the [pseudo-observations](https://en.wikipedia.org/wiki/Pseudo-observation "Pseudo-observation") associated with the prior, and another parameter specifying the number of pseudo-observations. This number serves as a scaling parameter on the variance, making it possible to control the overall variance of the mean relative to the actual variance parameter. The prior for the variance also has two hyperparameters, one specifying the sum of squared deviations of the pseudo-observations associated with the prior, and another specifying once again the number of pseudo-observations. Each of the priors has a hyperparameter specifying the number of pseudo-observations, and in each case this controls the relative variance of that prior. These are given as two separate hyperparameters so that the variance (i.e. the confidence) of the two priors can be controlled separately.
6. This leads immediately to the [normal-inverse-gamma distribution](https://en.wikipedia.org/wiki/Normal-inverse-gamma_distribution "Normal-inverse-gamma distribution"), which is the product of the two distributions just defined, with [conjugate priors](https://en.wikipedia.org/wiki/Conjugate_prior "Conjugate prior") used (an [inverse gamma distribution](https://en.wikipedia.org/wiki/Inverse_gamma_distribution "Inverse gamma distribution") over the variance, and a normal distribution over the mean, *conditional* on the variance) and with the same four parameters just defined.
The priors are normally defined as follows:

$$\begin{aligned} p(\mu \mid \sigma^2;\ \mu_0, n_0) &\sim \mathcal{N}(\mu_0, \sigma^2/n_0) \\ p(\sigma^2;\ \nu_0, \sigma_0^2) &\sim I\chi^2(\nu_0, \sigma_0^2) = IG(\nu_0/2, \nu_0\sigma_0^2/2) \end{aligned}$$

The update equations can be derived, and look as follows:

$$\begin{aligned} \bar{x} &= \frac{1}{n}\sum_{i=1}^n x_i \\ \mu_0' &= \frac{n_0\mu_0 + n\bar{x}}{n_0 + n} \\ n_0' &= n_0 + n \\ \nu_0' &= \nu_0 + n \\ \nu_0'{\sigma_0^2}' &= \nu_0\sigma_0^2 + \sum_{i=1}^n (x_i - \bar{x})^2 + \frac{n_0 n}{n_0 + n}(\mu_0 - \bar{x})^2 \end{aligned}$$

The respective numbers of pseudo-observations simply have the number of actual observations added to them. The new mean hyperparameter is once again a weighted average, this time weighted by the relative numbers of observations. Finally, the update for $\nu_0'{\sigma_0^2}'$ is similar to the case with known mean, but in this case the sum of squared deviations is taken with respect to the observed data mean rather than the true mean, and as a result a new interaction term needs to be added to take care of the additional error source stemming from the deviation between prior and data mean.
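Putting the pieces together, the full update for the unknown-mean, unknown-variance case is a direct transcription of the equations above; a minimal sketch in the normal-scaled-inverse-chi-squared parameterization (names illustrative):

```python
import numpy as np

def update_normal_inv_chi2(data, mu0, n0, nu0, sigma0_sq):
    """Conjugate update when both mean and variance are unknown.

    Prior: mu | sigma^2 ~ N(mu0, sigma^2 / n0),
           sigma^2 ~ Scaled-Inv-Chi^2(nu0, sigma0_sq).
    Returns the posterior hyperparameters (mu, n, nu, sigma_sq).
    """
    x = np.asarray(data, dtype=float)
    n = x.size
    xbar = x.mean()
    mu_post = (n0 * mu0 + n * xbar) / (n0 + n)   # weighted average of means
    n_post = n0 + n
    nu_post = nu0 + n
    ss = np.sum((x - xbar) ** 2)                 # deviations from the data mean
    interaction = (n0 * n / (n0 + n)) * (mu0 - xbar) ** 2  # prior/data-mean gap
    sigma_sq_post = (nu0 * sigma0_sq + ss + interaction) / nu_post
    return mu_post, n_post, nu_post, sigma_sq_post

rng = np.random.default_rng(5)
data = rng.normal(loc=3.0, scale=1.5, size=40)
print(update_normal_inv_chi2(data, mu0=0.0, n0=1.0, nu0=1.0, sigma0_sq=1.0))
```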
## Occurrence and applications

The occurrence of the normal distribution in practical problems can be loosely classified into four categories:

1. Exactly normal distributions;
2. Approximately normal laws, for example when such approximation is justified by the [central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem "Central limit theorem");
3. Distributions modeled as normal – the normal distribution being the distribution with [maximum entropy](https://en.wikipedia.org/wiki/Principle_of_maximum_entropy "Principle of maximum entropy") for a given mean and variance; and
4. Regression problems – the normal distribution being found after systematic effects have been modeled sufficiently well.

### Exact normality

![Ground state of a quantum harmonic oscillator](https://upload.wikimedia.org/wikipedia/commons/b/bb/QHarmonicOscillator.png)

*The ground state of a [quantum harmonic oscillator](https://en.wikipedia.org/wiki/Quantum_harmonic_oscillator "Quantum harmonic oscillator") has the Gaussian distribution.*

A normal distribution occurs in some [physical theories](https://en.wikipedia.org/wiki/Physical_theory "Physical theory"):

- The [velocity distribution](https://en.wikipedia.org/wiki/Maxwell%E2%80%93Boltzmann_distribution#Distribution_for_the_velocity_vector "Maxwell–Boltzmann distribution") of independently moving and perfectly elastic spheres, which is a consequence of [Maxwell's Dynamical Theory of Gases, Part I (1860)](https://en.wikipedia.org/wiki/Maxwell%27s_theorem "Maxwell's theorem").[\[59\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-FOOTNOTEMaxwell186023-59)[\[60\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-FOOTNOTEBryc19951-60)
- The [ground state](https://en.wikipedia.org/wiki/Ground_state "Ground state") [wave function](https://en.wikipedia.org/wiki/Wave_function "Wave function") in [position space](https://en.wikipedia.org/wiki/Position_and_momentum_spaces#Quantum_mechanics "Position and momentum spaces") of the [quantum harmonic oscillator](https://en.wikipedia.org/wiki/Quantum_harmonic_oscillator "Quantum harmonic oscillator").[\[61\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-61)
- The position of a particle that experiences [diffusion](https://en.wikipedia.org/wiki/Diffusion "Diffusion").\[*[citation needed](https://en.wikipedia.org/wiki/Wikipedia:Citation_needed "Wikipedia:Citation needed")*\] If initially the particle is located at a specific point (that is, its probability distribution is the [Dirac delta function](https://en.wikipedia.org/wiki/Dirac_delta_function "Dirac delta function")), then after time *t* its location is described by a normal distribution with variance *t*, which satisfies the [diffusion equation](https://en.wikipedia.org/wiki/Diffusion_equation "Diffusion equation") $\frac{\partial}{\partial t} f(x,t) = \frac{1}{2}\frac{\partial^2}{\partial x^2} f(x,t)$. If the initial location is given by a certain density function $g(x)$, then the density at time *t* is the [convolution](https://en.wikipedia.org/wiki/Convolution "Convolution") of *g* and the normal probability density function.

### Approximate normality

*Approximately* normal distributions occur in many situations, as explained by the [central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem "Central limit theorem").
When the outcome is produced by many small effects acting *additively and independently*, its distribution will be close to normal. The normal approximation will not be valid if the effects act multiplicatively (instead of additively), or if there is a single external influence that has a considerably larger magnitude than the rest of the effects.

- In counting problems, where the central limit theorem includes a discrete-to-continuum approximation and where [infinitely divisible](https://en.wikipedia.org/wiki/Infinitely_divisible "Infinitely divisible") and [decomposable](https://en.wikipedia.org/wiki/Indecomposable_distribution "Indecomposable distribution") distributions are involved, such as
  - [Binomial random variables](https://en.wikipedia.org/wiki/Binomial_distribution "Binomial distribution"), associated with binary response variables;
  - [Poisson random variables](https://en.wikipedia.org/wiki/Poisson_random_variables "Poisson random variables"), associated with rare events.
- [Thermal radiation](https://en.wikipedia.org/wiki/Thermal_radiation "Thermal radiation") has a [Bose–Einstein](https://en.wikipedia.org/wiki/Bose%E2%80%93Einstein_statistics "Bose–Einstein statistics") distribution on very short time scales, and a normal distribution on longer time scales due to the central limit theorem.

### Assumed normality

![Histogram of sepal widths for Iris versicolor with fitted normal density](https://upload.wikimedia.org/wikipedia/commons/thumb/4/40/Fisher_iris_versicolor_sepalwidth.svg/250px-Fisher_iris_versicolor_sepalwidth.svg.png)

*Histogram of sepal widths for Iris versicolor from Fisher's [Iris flower data set](https://en.wikipedia.org/wiki/Iris_flower_data_set "Iris flower data set"), with superimposed best-fitting normal distribution.*

> I can only recognize the occurrence of the normal curve – the Laplacian curve of errors – as a very abnormal phenomenon. It is roughly approximated to in certain distributions; for this reason, and on account of its beautiful simplicity, we may, perhaps, use it as a first approximation, particularly in theoretical investigations.
>
> — [Pearson (1901)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFPearson1901)

There are statistical methods to empirically test that assumption; see the above [Normality tests](https://en.wikipedia.org/wiki/Normal_distribution#Normality_tests) section.

- In [biology](https://en.wikipedia.org/wiki/Biology "Biology"), the *logarithm* of various variables tends to have a normal distribution, that is, the variables tend to have a [log-normal distribution](https://en.wikipedia.org/wiki/Log-normal_distribution "Log-normal distribution") (after separation on male/female subpopulations), with examples including:
  - Measures of size of living tissue (length, height, skin area, weight);[\[62\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-62)
  - The *length* of *inert* appendages (hair, claws, nails, teeth) of biological specimens, *in the direction of growth*; presumably the thickness of tree bark also falls under this category;
  - Certain physiological measurements, such as the blood pressure of adult humans.
- In finance, in particular the [Black–Scholes model](https://en.wikipedia.org/wiki/Black%E2%80%93Scholes_model "Black–Scholes model"), changes in the *logarithm* of exchange rates, price indices, and stock market indices are assumed normal (these variables behave like [compound interest](https://en.wikipedia.org/wiki/Compound_interest "Compound interest"), not like simple interest, and so are multiplicative). Some mathematicians such as [Benoit Mandelbrot](https://en.wikipedia.org/wiki/Benoit_Mandelbrot "Benoit Mandelbrot") have argued that [log-Levy distributions](https://en.wikipedia.org/wiki/Levy_skew_alpha-stable_distribution "Levy skew alpha-stable distribution"), which possess [heavy tails](https://en.wikipedia.org/wiki/Heavy_tails "Heavy tails"), would be a more appropriate model, in particular for the analysis of [stock market crashes](https://en.wikipedia.org/wiki/Stock_market_crash "Stock market crash"). The use of the assumption of a normal distribution in financial models has also been criticized by [Nassim Nicholas Taleb](https://en.wikipedia.org/wiki/Nassim_Nicholas_Taleb "Nassim Nicholas Taleb") in his works.
- [Measurement errors](https://en.wikipedia.org/wiki/Propagation_of_uncertainty "Propagation of uncertainty") in physical experiments are often modeled by a normal distribution. This use of a normal distribution does not imply that one is assuming the measurement errors are normally distributed; rather, using the normal distribution produces the most conservative predictions possible given only knowledge about the mean and variance of the errors.[\[63\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-63)
- In [standardized testing](https://en.wikipedia.org/wiki/Standardized_testing_\(statistics\) "Standardized testing (statistics)"), results can be made to have a normal distribution by either selecting the number and difficulty of questions (as in the [IQ test](https://en.wikipedia.org/wiki/Intelligence_quotient "Intelligence quotient")) or transforming the raw test scores into output scores by fitting them to the normal distribution. For example, the [SAT](https://en.wikipedia.org/wiki/SAT "SAT")'s traditional range of 200–800 is based on a normal distribution with a mean of 500 and a standard deviation of 100.

![Cumulative normal distribution fitted to October rainfalls](https://upload.wikimedia.org/wikipedia/commons/thumb/d/d8/FitNormDistr.tif/lossless-page1-250px-FitNormDistr.tif.png)

*Fitted cumulative normal distribution to October rainfalls; see [distribution fitting](https://en.wikipedia.org/wiki/Distribution_fitting "Distribution fitting").*

- Many scores are derived from the normal distribution, including [percentile ranks](https://en.wikipedia.org/wiki/Percentile_rank "Percentile rank") (percentiles or quantiles), [normal curve equivalents](https://en.wikipedia.org/wiki/Normal_curve_equivalent "Normal curve equivalent"), [stanines](https://en.wikipedia.org/wiki/Stanine "Stanine"), [z-scores](https://en.wikipedia.org/wiki/Z-scores "Z-scores"), and T-scores. Additionally, some [behavioral statistical](https://en.wikipedia.org/wiki/Psychological_statistics "Psychological statistics") procedures assume that scores are normally distributed; for example, [t-tests](https://en.wikipedia.org/wiki/T-tests "T-tests") and [ANOVAs](https://en.wikipedia.org/wiki/Analysis_of_variance "Analysis of variance").
  [Bell curve grading](https://en.wikipedia.org/wiki/Bell_curve_grading "Bell curve grading") assigns relative grades based on a normal distribution of scores.
- In [hydrology](https://en.wikipedia.org/wiki/Hydrology "Hydrology") the distribution of long-duration river discharge or rainfall, e.g. monthly and yearly totals, is often thought to be practically normal according to the [central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem "Central limit theorem").[\[64\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-64) The plot on the right illustrates an example of fitting the normal distribution to ranked October rainfalls, showing the 90% [confidence belt](https://en.wikipedia.org/wiki/Confidence_belt "Confidence belt") based on the [binomial distribution](https://en.wikipedia.org/wiki/Binomial_distribution "Binomial distribution"). The rainfall data are represented by [plotting positions](https://en.wikipedia.org/wiki/Plotting_position "Plotting position") as part of the [cumulative frequency analysis](https://en.wikipedia.org/wiki/Cumulative_frequency_analysis "Cumulative frequency analysis").

### Methodological problems and peer review

[John Ioannidis](https://en.wikipedia.org/wiki/John_Ioannidis "John Ioannidis") [argued](https://en.wikipedia.org/wiki/Why_Most_Published_Research_Findings_Are_False "Why Most Published Research Findings Are False") that using normally distributed standard deviations as standards for validating research findings leaves [falsifiable predictions](https://en.wikipedia.org/wiki/Falsifiability "Falsifiability") about phenomena that are not normally distributed untested. This includes, for example, phenomena that only appear when all necessary conditions are present and one cannot be a substitute for another in an addition-like way, and phenomena that are not randomly distributed. Ioannidis argues that standard-deviation-centered validation gives a false appearance of validity to hypotheses and theories in which some but not all falsifiable predictions are normally distributed, since the portion of falsifiable predictions against which there is evidence may, in some cases, lie in the non-normally distributed parts of the range of falsifiable predictions; it may also baselessly dismiss hypotheses for which none of the falsifiable predictions are normally distributed, as if they were unfalsifiable, when in fact they do make falsifiable predictions.
It is argued by Ioannidis that many cases of mutually exclusive theories being accepted as validated by research journals are caused by the journals' failure to take in empirical falsifications of non-normally distributed predictions, not because the mutually exclusive theories are true (which they cannot be, although two mutually exclusive theories can both be wrong and a third one correct).[\[65\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-65)

## Computational methods

### Generating values from normal distribution

![Galton's bean machine](https://upload.wikimedia.org/wikipedia/commons/thumb/2/21/Planche_de_Galton.jpg/250px-Planche_de_Galton.jpg)

The [bean machine](https://en.wikipedia.org/wiki/Bean_machine "Bean machine"), a device invented by [Francis Galton](https://en.wikipedia.org/wiki/Francis_Galton "Francis Galton"), can be called the first generator of normal random variables. This machine consists of a vertical board with interleaved rows of pins. Small balls are dropped from the top and then bounce randomly left or right as they hit the pins. The balls are collected into bins at the bottom and settle down into a pattern resembling the Gaussian curve.

In computer simulations, especially in applications of the [Monte-Carlo method](https://en.wikipedia.org/wiki/Monte-Carlo_method "Monte-Carlo method"), it is often desirable to generate values that are normally distributed. The algorithms listed below all generate standard normal deviates, since a *N*(*ÎŒ*, *σ*ÂČ) variate can be generated as *X* = *ÎŒ* + *σZ*, where *Z* is standard normal. All these algorithms rely on the availability of a [random number generator](https://en.wikipedia.org/wiki/Random_number_generator "Random number generator") *U* capable of producing [uniform](https://en.wikipedia.org/wiki/Uniform_distribution_\(continuous\) "Uniform distribution (continuous)") random variates.

- The most straightforward method is based on the [probability integral transform](https://en.wikipedia.org/wiki/Probability_integral_transform "Probability integral transform") property: if *U* is distributed uniformly on (0,1), then *Ί*−1(*U*) will have the standard normal distribution. The drawback of this method is that it relies on calculation of the [probit function](https://en.wikipedia.org/wiki/Probit_function "Probit function") *Ί*−1, which cannot be done analytically. Some approximate methods are described in [Hart (1968)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFHart1968) and in the [erf](https://en.wikipedia.org/wiki/Error_function "Error function") article. Wichura gives a fast algorithm for computing this function to 16 decimal places,[\[66\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-66) which is used by [R](https://en.wikipedia.org/wiki/R_programming_language "R programming language") to compute random variates of the normal distribution.
- [An easy-to-program approximate approach](https://en.wikipedia.org/wiki/Irwin%E2%80%93Hall_distribution#Approximating_a_Normal_distribution "Irwin–Hall distribution") that relies on the [central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem "Central limit theorem") is as follows: generate 12 uniform *U*(0,1) deviates, add them all up, and subtract 6 – the resulting random variable will have an approximately standard normal distribution. In truth, the distribution will be [Irwin–Hall](https://en.wikipedia.org/wiki/Irwin%E2%80%93Hall_distribution "Irwin–Hall distribution"), which is a 12-section eleventh-order polynomial approximation to the normal distribution. This random deviate will have a limited range of (−6, 6).[\[67\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-67) Note that in a true normal distribution, only 0.00034% of all samples will fall outside ±6σ.
- The [Box–Muller method](https://en.wikipedia.org/wiki/Box%E2%80%93Muller_method "Box–Muller method") uses two independent random numbers *U* and *V* distributed [uniformly](https://en.wikipedia.org/wiki/Uniform_distribution_\(continuous\) "Uniform distribution (continuous)") on (0,1). Then the two random variables *X* and *Y*

  $$X = \sqrt{-2\ln U}\,\cos(2\pi V), \qquad Y = \sqrt{-2\ln U}\,\sin(2\pi V)$$

  will both have the standard normal distribution, and will be [independent](https://en.wikipedia.org/wiki/Independence_\(probability_theory\) "Independence (probability theory)"). This formulation arises because for a [bivariate normal](https://en.wikipedia.org/wiki/Bivariate_normal "Bivariate normal") random vector (*X*, *Y*) the squared norm *X*ÂČ + *Y*ÂČ will have the [chi-squared distribution](https://en.wikipedia.org/wiki/Chi-squared_distribution "Chi-squared distribution") with two degrees of freedom, which is an easily generated [exponential random variable](https://en.wikipedia.org/wiki/Exponential_random_variable "Exponential random variable") corresponding to the quantity −2 ln(*U*) in these equations; and the angle is distributed uniformly around the circle, chosen by the random variable *V*.
- The [Marsaglia polar method](https://en.wikipedia.org/wiki/Marsaglia_polar_method "Marsaglia polar method") is a modification of the Box–Muller method which does not require computation of the sine and cosine functions. In this method, *U* and *V* are drawn from the uniform (−1,1) distribution, and then *S* = *U*ÂČ + *V*ÂČ is computed. If *S* is greater than or equal to 1, then the method starts over; otherwise, the two quantities

  $$X = U\sqrt{\frac{-2\ln S}{S}}, \qquad Y = V\sqrt{\frac{-2\ln S}{S}}$$

  are returned. Again, *X* and *Y* are independent, standard normal random variables. (Both transforms are sketched in code after this list.)
- The Ratio method[\[68\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-68) is a rejection method. The algorithm proceeds as follows:
  1. Generate two independent uniform deviates *U* and *V*;
  2. Compute *X* = √(8/*e*) (*V* − 0.5)/*U*;
  3. Optional: if *X*ÂČ ≀ 5 − 4*e*1/4*U* then accept *X* and terminate the algorithm;
  4. Optional: if *X*ÂČ ≄ 4*e*−1.35/*U* + 1.4 then reject *X* and start over from step 1;
  5. If *X*ÂČ ≀ −4 ln *U* then accept *X*; otherwise start over the algorithm.

  The two optional steps allow the evaluation of the logarithm in the last step to be avoided in most cases. These steps can be greatly improved[\[69\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-69) so that the logarithm is rarely evaluated.
- The [ziggurat algorithm](https://en.wikipedia.org/wiki/Ziggurat_algorithm "Ziggurat algorithm")[\[70\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-70) is faster than the Box–Muller transform and still exact. In about 97% of all cases it uses only two random numbers, one random integer and one random uniform, one multiplication and an if-test. Only in 3% of the cases, where the combination of those two falls outside the "core of the ziggurat" (a kind of rejection sampling using logarithms), do exponentials and more uniform random numbers have to be employed.
- Integer arithmetic can be used to sample from the standard normal distribution.[\[71\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-71)[\[72\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-72) This method is exact in the sense that it satisfies the conditions of *ideal approximation*;[\[73\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-73) i.e., it is equivalent to sampling a real number from the standard normal distribution and rounding this to the nearest representable floating point number.
- There is also some investigation[\[74\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-74) into the connection between the fast [Hadamard transform](https://en.wikipedia.org/wiki/Hadamard_transform "Hadamard transform") and the normal distribution, since the transform employs just addition and subtraction and by the central limit theorem random numbers from almost any distribution will be transformed into the normal distribution. In this regard a series of Hadamard transforms can be combined with random permutations to turn arbitrary data sets into normally distributed data.
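A minimal sketch of the Box–Muller transform and the Marsaglia polar variant described above (pure Python; the function names are illustrative, and the `or 1e-300` guard against log(0) is an implementation detail, not part of the original description):

```python
import math
import random

def box_muller(rng=random):
    """Return two independent standard normal deviates (Box-Muller)."""
    u = rng.random() or 1e-300        # guard against log(0)
    v = rng.random()
    r = math.sqrt(-2.0 * math.log(u))
    return r * math.cos(2 * math.pi * v), r * math.sin(2 * math.pi * v)

def marsaglia_polar(rng=random):
    """Return two independent standard normal deviates (polar method)."""
    while True:
        u = 2.0 * rng.random() - 1.0   # uniform on (-1, 1)
        v = 2.0 * rng.random() - 1.0
        s = u * u + v * v
        if 0.0 < s < 1.0:              # reject points outside the unit disk
            factor = math.sqrt(-2.0 * math.log(s) / s)
            return u * factor, v * factor

print(box_muller())
print(marsaglia_polar())
```

The polar variant trades the sine and cosine calls for an acceptance loop over the unit disk, which accepts a candidate pair with probability π/4 ≈ 79%.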
### Numerical approximations for the normal cumulative distribution function and normal quantile function

The standard normal [cumulative distribution function](https://en.wikipedia.org/wiki/Cumulative_distribution_function "Cumulative distribution function") is widely used in scientific and statistical computing. The values *Ί*(*x*) may be approximated very accurately by a variety of methods, such as [numerical integration](https://en.wikipedia.org/wiki/Numerical_integration "Numerical integration"), [Taylor series](https://en.wikipedia.org/wiki/Taylor_series "Taylor series"), [asymptotic series](https://en.wikipedia.org/wiki/Asymptotic_series "Asymptotic series") and [continued fractions](https://en.wikipedia.org/wiki/Gauss%27s_continued_fraction#Of_Kummer's_confluent_hypergeometric_function "Gauss's continued fraction"). Different approximations are used depending on the desired level of accuracy.

- [Zelen & Severo (1964)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFZelenSevero1964) give the approximation for *Ί*(*x*) for *x* > 0 with the absolute error $|\varepsilon(x)| < 7.5 \times 10^{-8}$ (algorithm [26.2.17](https://secure.math.ubc.ca/~cbm/aands/page_932.htm); see the sketch after this list): $$\Phi(x) = 1 - \varphi(x)\left(b_1 t + b_2 t^2 + b_3 t^3 + b_4 t^4 + b_5 t^5\right) + \varepsilon(x), \qquad t = \frac{1}{1 + b_0 x},$$ where *φ*(*x*) is the standard normal probability density function, and *b*0 = 0.2316419, *b*1 = 0.319381530, *b*2 = −0.356563782, *b*3 = 1.781477937, *b*4 = −1.821255978, *b*5 = 1.330274429.
- [Hart (1968)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFHart1968) lists dozens of approximations, by means of rational functions with or without exponentials, for the `erfc()` function, where erfc(*x*) = 1 − erf(*x*). His algorithms vary in the degree of complexity and the resulting precision, with a maximum absolute precision of 24 digits. An algorithm by [West (2009)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFWest2009) combines Hart's algorithm 5666 with a [continued fraction](https://en.wikipedia.org/wiki/Continued_fraction "Continued fraction") approximation in the tail to provide a fast computation algorithm with 16-digit precision.
- [Cody (1969)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFCody1969), after noting that the Hart (1968) solution is not suited for erf, gave a solution for both erf and erfc, with a maximal relative error bound, via [rational Chebyshev approximation](https://en.wikipedia.org/wiki/Rational_function "Rational function").
- [Marsaglia (2004)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFMarsaglia2004) suggested a simple algorithm[\[note 1\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-75) based on the Taylor series expansion $$\Phi(x) = \frac{1}{2} + \varphi(x)\left(x + \frac{x^3}{3} + \frac{x^5}{3\cdot 5} + \frac{x^7}{3\cdot 5\cdot 7} + \frac{x^9}{3\cdot 5\cdot 7\cdot 9} + \cdots\right)$$ for calculating *Ί*(*x*) with arbitrary precision. Its drawback is a comparatively slow calculation time; for example, it takes over 300 iterations to calculate the function with 16 digits of precision when *x* = 10.
- The [GNU Scientific Library](https://en.wikipedia.org/wiki/GNU_Scientific_Library "GNU Scientific Library") calculates values of the standard normal cumulative distribution function using Hart's algorithms and approximations with [Chebyshev polynomials](https://en.wikipedia.org/wiki/Chebyshev_polynomial "Chebyshev polynomial").
- [Dia (2023)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFDia2023) proposes the following approximation of $1 - \Phi$ with a maximum relative error less than $2^{-53}$ ($\approx 1.1 \times 10^{-16}$) in absolute value: for $x \ge 0$, $$\begin{aligned}1-\Phi(x) &= \left(\frac{0.39894228040143268}{x+2.92678600515804815}\right)\left(\frac{x^{2}+8.42742300458043240x+18.38871225773938487}{x^{2}+5.81582518933527391x+8.97280659046817350}\right)\\&\quad\left(\frac{x^{2}+7.30756258553673541x+18.25323235347346525}{x^{2}+5.70347935898051437x+10.27157061171363079}\right)\left(\frac{x^{2}+5.66479518878470765x+18.61193318971775795}{x^{2}+5.51862483025707963x+12.72323261907760928}\right)\\&\quad\left(\frac{x^{2}+4.91396098895240075x+24.14804072812762821}{x^{2}+5.26184239579604207x+16.88639562007936908}\right)\left(\frac{x^{2}+3.83362947800146179x+11.61511226260603247}{x^{2}+4.92081346632882033x+24.12333774572479110}\right)e^{-\frac{x^{2}}{2}}\end{aligned}$$ and for $x < 0$, $1-\Phi(x) = 1-\left(1-\Phi(-x)\right)$.
- Shore (1982) introduced simple approximations that may be incorporated in stochastic optimization models of engineering and operations research, like reliability engineering and inventory analysis. Denoting *p* = *Ί*(*z*), the simplest approximation for the quantile function is $$z = \Phi^{-1}(p) = 5.5556\left[1 - \left(\frac{1-p}{p}\right)^{0.1186}\right], \qquad p \ge 1/2.$$ This approximation delivers for z a maximum absolute error of 0.026 (for 0.5 ≀ *p* ≀ 0.9999, corresponding to 0 ≀ *z* ≀ 3.719). For *p* < 1/2, replace p by 1 − *p* and change the sign. Another, somewhat less accurate, approximation is the single-parameter approximation $$z = -0.4115\left\{\frac{1-p}{p} + \log\left[\frac{1-p}{p}\right] - 1\right\}, \qquad p \ge 1/2.$$ The latter has served to derive a simple approximation for the loss integral of the normal distribution, defined by $$\begin{aligned}L(z) &= \int_z^\infty (u-z)\varphi(u)\,du = \int_z^\infty \left[1-\Phi(u)\right]du\\[5pt] L(z) &\approx \begin{cases}0.4115\left(\dfrac{p}{1-p}\right) - z, & p < 1/2,\\[4pt] 0.4115\left(\dfrac{1-p}{p}\right), & p \ge 1/2,\end{cases}\\[5pt] \text{or, equivalently,}\quad L(z) &\approx \begin{cases}0.4115\left\{1 - \log\left[\dfrac{p}{1-p}\right]\right\}, & p < 1/2,\\[4pt] 0.4115\,\dfrac{1-p}{p}, & p \ge 1/2.\end{cases}\end{aligned}$$ This approximation is particularly accurate for the right far-tail (maximum error of $10^{-3}$ for *z* ≄ 1.4). Highly accurate approximations for the cumulative distribution function, based on [Response Modeling Methodology](https://en.wikipedia.org/wiki/Response_Modeling_Methodology "Response Modeling Methodology") (RMM; Shore, 2011, 2012), are shown in Shore (2005).
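As a concreteness check, here is a minimal Python sketch of three of the approximations above (the Zelen & Severo polynomial, Marsaglia's Taylor series, and Shore's simple quantile formula); the function names are ours, and for production use a library routine such as `math.erf` (with $\Phi(x) = \tfrac{1}{2}[1 + \operatorname{erf}(x/\sqrt{2})]$) or `statistics.NormalDist` would be preferred.

```python
import math

def pdf(x: float) -> float:
    """Standard normal density phi(x)."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def cdf_zelen_severo(x: float) -> float:
    """Phi(x) via the Zelen & Severo (1964) polynomial 26.2.17.

    Stated absolute error below 7.5e-8 for x > 0; negative x is
    handled by the symmetry Phi(x) = 1 - Phi(-x)."""
    if x < 0.0:
        return 1.0 - cdf_zelen_severo(-x)
    b0, b1, b2, b3, b4, b5 = (0.2316419, 0.319381530, -0.356563782,
                              1.781477937, -1.821255978, 1.330274429)
    t = 1.0 / (1.0 + b0 * x)
    poly = t * (b1 + t * (b2 + t * (b3 + t * (b4 + t * b5))))  # Horner form
    return 1.0 - pdf(x) * poly

def cdf_marsaglia(x: float, rel_tol: float = 1e-16) -> float:
    """Phi(x) via Marsaglia's (2004) Taylor series.

    Arbitrary precision in principle (given arbitrary-precision
    arithmetic); in double precision the accuracy is capped at about
    16 digits, and convergence is slow for large |x|."""
    term = x
    total = x
    n = 0
    while abs(term) > rel_tol * abs(total):
        n += 1
        term *= x * x / (2 * n + 1)   # next odd-power term of the series
        total += term
    return 0.5 + pdf(x) * total

def quantile_shore(p: float) -> float:
    """Shore's (1982) simple quantile approximation.

    Maximum absolute error about 0.026 for 0.5 <= p <= 0.9999;
    the lower half uses the reflection z(p) = -z(1 - p)."""
    if p < 0.5:
        return -quantile_shore(1.0 - p)
    return 5.5556 * (1.0 - ((1.0 - p) / p) ** 0.1186)
```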
More approximations can be found at [Error function#Approximation with elementary functions](https://en.wikipedia.org/wiki/Error_function#Approximation_with_elementary_functions "Error function"). In particular, a small *relative* error on the whole domain, for both the cumulative distribution function $\Phi$ and the quantile function $\Phi^{-1}$, is achieved via an explicitly invertible formula by Sergei Winitzki in 2008.

## History

### Development

Some authors[\[75\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-76)[\[76\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-77) attribute the discovery of the normal distribution to [de Moivre](https://en.wikipedia.org/wiki/De_Moivre "De Moivre"), who in 1738[\[note 2\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-78) published in the second edition of his *[The Doctrine of Chances](https://en.wikipedia.org/wiki/The_Doctrine_of_Chances "The Doctrine of Chances")* the study of the coefficients in the [binomial expansion](https://en.wikipedia.org/wiki/Binomial_expansion "Binomial expansion") of $(a + b)^n$. De Moivre proved that the middle term in this expansion has the approximate magnitude of $2^n/\sqrt{2\pi n}$, and that "If m or 1/2 *n* be a Quantity infinitely great, then the Logarithm of the Ratio, which a Term distant from the middle by the Interval ℓ, has to the middle Term, is $-\frac{2\ell\ell}{n}$."[\[77\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-79) Although this theorem can be interpreted as the first obscure expression for the normal probability law, [Stigler](https://en.wikipedia.org/wiki/Stephen_Stigler "Stephen Stigler") points out that de Moivre himself did not interpret his results as anything more than an approximate rule for the binomial coefficients, and in particular de Moivre lacked the concept of the probability density function.[\[78\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-80)

[![](https://upload.wikimedia.org/wikipedia/commons/thumb/9/9b/Carl_Friedrich_Gauss.jpg/250px-Carl_Friedrich_Gauss.jpg)](https://en.wikipedia.org/wiki/File:Carl_Friedrich_Gauss.jpg)

In 1809, [Carl Friedrich Gauss](https://en.wikipedia.org/wiki/Carl_Friedrich_Gauss "Carl Friedrich Gauss") showed that the normal distribution provides a way to rationalize the [method of least squares](https://en.wikipedia.org/wiki/Method_of_least_squares "Method of least squares").
In 1809 [Gauss](https://en.wikipedia.org/wiki/Gauss "Gauss") published his monograph *Theoria motus corporum coelestium in sectionibus conicis solem ambientium*, where among other things he introduces several important statistical concepts, such as the [method of least squares](https://en.wikipedia.org/wiki/Method_of_least_squares "Method of least squares"), the [method of maximum likelihood](https://en.wikipedia.org/wiki/Method_of_maximum_likelihood "Method of maximum likelihood"), and the *normal distribution*. Gauss used *M*, *M*â€Č, *M*″, ... to denote the measurements of some unknown quantity *V*, and sought the most probable estimator of that quantity: the one that maximizes the probability *φ*(*M* − *V*) · *φ*(*M*â€Č − *V*) · *φ*(*M*″ − *V*) · ... of obtaining the observed experimental results. In his notation, *φΔ* is the probability density function of the measurement errors of magnitude Δ. Not knowing what the function *φ* is, Gauss requires that his method should reduce to the well-known answer: the arithmetic mean of the measured values.[\[note 3\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-81) Starting from these principles, Gauss demonstrates that the only law that rationalizes the choice of the arithmetic mean as an estimator of the location parameter is the normal law of errors:[\[79\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-82) $$\varphi\mathit{\Delta} = \frac{h}{\surd\pi}\, e^{-\mathrm{hh}\Delta\Delta},$$ where *h* is "the measure of the precision of the observations". Using this normal law as a generic model for errors in experiments, Gauss formulates what is now known as the [non-linear](https://en.wikipedia.org/wiki/Non-linear_least_squares "Non-linear least squares") [weighted least squares](https://en.wikipedia.org/wiki/Weighted_least_squares "Weighted least squares") method.[\[80\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-83)

[![](https://upload.wikimedia.org/wikipedia/commons/thumb/e/e3/Pierre-Simon_Laplace.jpg/250px-Pierre-Simon_Laplace.jpg)](https://en.wikipedia.org/wiki/File:Pierre-Simon_Laplace.jpg)

[Pierre-Simon Laplace](https://en.wikipedia.org/wiki/Pierre-Simon_Laplace "Pierre-Simon Laplace") proved the [central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem "Central limit theorem") in 1810, consolidating the importance of the normal distribution in statistics.

Although Gauss was the first to suggest the normal distribution law, [Laplace](https://en.wikipedia.org/wiki/Laplace "Laplace") made significant contributions.[\[note 4\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-84) It was Laplace who first posed the problem of aggregating several observations in 1774,[\[81\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-85) although his own solution led to the [Laplacian distribution](https://en.wikipedia.org/wiki/Laplacian_distribution "Laplacian distribution").
It was Laplace who first calculated the value of the [integral $\int e^{-t^2}\,dt = \sqrt{\pi}$](https://en.wikipedia.org/wiki/Gaussian_integral "Gaussian integral") in 1782, providing the normalization constant for the normal distribution.[\[82\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-86) For this accomplishment, Gauss acknowledged the priority of Laplace.[\[83\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-87) Finally, it was Laplace who in 1810 proved and presented to the academy the fundamental [central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem "Central limit theorem"), which emphasized the theoretical importance of the normal distribution.[\[84\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-88)

It is of interest to note that in 1809 the Irish-American mathematician [Robert Adrain](https://en.wikipedia.org/wiki/Robert_Adrain "Robert Adrain") published two insightful but flawed derivations of the normal probability law, simultaneously and independently of Gauss.[\[85\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-89) His works remained largely unnoticed by the scientific community until 1871, when they were exhumed by [Abbe](https://en.wikipedia.org/wiki/Cleveland_Abbe "Cleveland Abbe").[\[86\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-90)

In the middle of the 19th century [Maxwell](https://en.wikipedia.org/wiki/James_Clerk_Maxwell "James Clerk Maxwell") demonstrated that the normal distribution is not just a convenient mathematical tool, but may also occur in natural phenomena:[\[59\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-FOOTNOTEMaxwell186023-59)

> The number of particles whose velocity, resolved in a certain direction, lies between *x* and *x* + *dx* is $$N\,\frac{1}{\alpha\sqrt{\pi}}\; e^{-\frac{x^2}{\alpha^2}}\,dx.$$

### Naming

Today, the concept is usually known in English as the **normal distribution** or **Gaussian distribution**. Other, less common names include Gauss distribution, Laplace–Gauss distribution, the law of error, the law of facility of errors, Laplace's second law, and Gaussian law. Gauss himself apparently coined the term with reference to the "normal equations" involved in its applications, with normal having its technical meaning of orthogonal rather than usual.[\[87\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-91) However, by the end of the 19th century some authors[\[note 5\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-92) had started using the name *normal distribution*, where the word "normal" was used as an adjective – the term now being seen as a reflection of the fact that this distribution was seen as typical, common – and thus normal. [Peirce](https://en.wikipedia.org/wiki/Charles_Sanders_Peirce "Charles Sanders Peirce") (one of those authors) once defined "normal" thus: "...
the 'normal' is not the average (or any other kind of mean) of what actually occurs, but of what *would*, in the long run, occur under certain circumstances."[\[88\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-93) Around the turn of the 20th century [Pearson](https://en.wikipedia.org/wiki/Karl_Pearson "Karl Pearson") popularized the term *normal* as a designation for this distribution.[\[89\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-94)

> Many years ago I called the Laplace–Gaussian curve the *normal* curve, which name, while it avoids an international question of priority, has the disadvantage of leading people to believe that all other distributions of frequency are in one sense or another 'abnormal'.
>
> — [Pearson (1920)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFPearson1920)

Also, it was Pearson who first wrote the distribution in terms of the standard deviation *σ*, as in modern notation. Soon after this, in 1915, [Fisher](https://en.wikipedia.org/wiki/Ronald_Fisher "Ronald Fisher") added the location parameter to the formula for the normal distribution, expressing it in the way it is written nowadays: $$df = \frac{1}{\sqrt{2\sigma^2\pi}}\, e^{-(x-m)^2/(2\sigma^2)}\,dx.$$ The term *standard normal distribution*, which denotes the normal distribution with zero mean and unit variance, came into general use around the 1950s, appearing in the popular textbooks by P. G. Hoel (1947), *Introduction to Mathematical Statistics*, and [Alexander M. Mood](https://en.wikipedia.org/wiki/Alexander_M._Mood "Alexander M. Mood") (1950), *Introduction to the Theory of Statistics*.[\[90\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-95)[\[91\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-96)[\[92\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-97)
Mood") (1950) *Introduction to the Theory of Statistics*.[\[90\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-95)[\[91\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-96)[\[92\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-97) ## See also \[[edit](https://en.wikipedia.org/w/index.php?title=Normal_distribution&action=edit&section=58 "Edit section: See also")\] - [![icon](https://upload.wikimedia.org/wikipedia/commons/thumb/3/3e/Nuvola_apps_edu_mathematics_blue-p.svg/40px-Nuvola_apps_edu_mathematics_blue-p.svg.png)](https://en.wikipedia.org/wiki/File:Nuvola_apps_edu_mathematics_blue-p.svg)[Mathematics portal](https://en.wikipedia.org/wiki/Portal:Mathematics "Portal:Mathematics") - [Bates distribution](https://en.wikipedia.org/wiki/Bates_distribution "Bates distribution") – similar to the Irwin–Hall distribution, but rescaled back into the 0 to 1 range - [Behrens–Fisher problem](https://en.wikipedia.org/wiki/Behrens%E2%80%93Fisher_problem "Behrens–Fisher problem") – the long-standing problem of testing whether two normal samples with different variances have same means; - [Bhattacharyya distance](https://en.wikipedia.org/wiki/Bhattacharyya_distance "Bhattacharyya distance") – method used to separate mixtures of normal distributions - [ErdƑs–Kac theorem](https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Kac_theorem "ErdƑs–Kac theorem") – on the occurrence of the normal distribution in [number theory](https://en.wikipedia.org/wiki/Number_theory "Number theory") - [Full width at half maximum](https://en.wikipedia.org/wiki/Full_width_at_half_maximum "Full width at half maximum") - [Gaussian blur](https://en.wikipedia.org/wiki/Gaussian_blur "Gaussian blur") – [convolution](https://en.wikipedia.org/wiki/Convolution "Convolution"), which uses the normal distribution as a kernel - [Gaussian function](https://en.wikipedia.org/wiki/Gaussian_function "Gaussian function") - [Modified half-normal distribution](https://en.wikipedia.org/wiki/Modified_half-normal_distribution "Modified half-normal distribution")[\[93\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Sun-2021-98) with the pdf on ( 0 , ∞ ) {\\textstyle (0,\\infty )} ![{\\textstyle (0,\\infty )}](https://wikimedia.org/api/rest_v1/media/math/render/svg/ae3c3bf34fb8926a1f042fdbf486b618904a66d6) is given as f ( x ) \= 2 ÎČ Î± / 2 x α − 1 exp ⁥ ( − ÎČ x 2 \+ Îł x ) Κ ( α 2 , Îł ÎČ ) {\\textstyle f(x)={\\frac {2\\beta ^{\\alpha /2}x^{\\alpha -1}\\exp(-\\beta x^{2}+\\gamma x)}{\\Psi \\left({\\frac {\\alpha }{2}},{\\frac {\\gamma }{\\sqrt {\\beta }}}\\right)}}} ![{\\textstyle f(x)={\\frac {2\\beta ^{\\alpha /2}x^{\\alpha -1}\\exp(-\\beta x^{2}+\\gamma x)}{\\Psi \\left({\\frac {\\alpha }{2}},{\\frac {\\gamma }{\\sqrt {\\beta }}}\\right)}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/1dc6abbdf77999c8a4fd15f2ea35054b02d7324d) , where Κ ( α , z ) \= 1 Κ 1 ( ( α , 1 2 ) ( 1 , 0 ) ; z ) {\\textstyle \\Psi (\\alpha ,z)={}\_{1}\\Psi \_{1}\\left({\\begin{matrix}\\left(\\alpha ,{\\frac {1}{2}}\\right)\\\\(1,0)\\end{matrix}};z\\right)} ![{\\textstyle \\Psi (\\alpha ,z)={}\_{1}\\Psi \_{1}\\left({\\begin{matrix}\\left(\\alpha ,{\\frac {1}{2}}\\right)\\\\(1,0)\\end{matrix}};z\\right)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/3d1921f133045eb2313bee35c48ea65b07c4b5ad) denotes the [Fox–Wright Psi function](https://en.wikipedia.org/wiki/Fox%E2%80%93Wright_Psi_function "Fox–Wright Psi function"). 
- [Normally distributed and uncorrelated does not imply independent](https://en.wikipedia.org/wiki/Normally_distributed_and_uncorrelated_does_not_imply_independent "Normally distributed and uncorrelated does not imply independent") - [Ratio normal distribution](https://en.wikipedia.org/wiki/Ratio_normal_distribution "Ratio normal distribution") - [Reciprocal normal distribution](https://en.wikipedia.org/wiki/Reciprocal_normal_distribution "Reciprocal normal distribution") - [Standard normal table](https://en.wikipedia.org/wiki/Standard_normal_table "Standard normal table") - [Stein's lemma](https://en.wikipedia.org/wiki/Stein%27s_lemma "Stein's lemma") - [Sub-Gaussian distribution](https://en.wikipedia.org/wiki/Sub-Gaussian_distribution "Sub-Gaussian distribution") - [Sum of normally distributed random variables](https://en.wikipedia.org/wiki/Sum_of_normally_distributed_random_variables "Sum of normally distributed random variables") - [Tweedie distribution](https://en.wikipedia.org/wiki/Tweedie_distribution "Tweedie distribution") – The normal distribution is a member of the family of Tweedie [exponential dispersion models](https://en.wikipedia.org/wiki/Exponential_dispersion_model "Exponential dispersion model"). - [Wrapped normal distribution](https://en.wikipedia.org/wiki/Wrapped_normal_distribution "Wrapped normal distribution") – the normal distribution applied to a circular domain - [Z-test](https://en.wikipedia.org/wiki/Z-test "Z-test") – using the normal distribution ## Notes \[[edit](https://en.wikipedia.org/w/index.php?title=Normal_distribution&action=edit&section=59 "Edit section: Notes")\] 1. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-75)** For example, this algorithm is given in the article [Bc programming language](https://en.wikipedia.org/wiki/Bc_programming_language#A_translated_C_function "Bc programming language"). 2. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-78)** De Moivre first published his findings in 1733, in a pamphlet *Approximatio ad Summam Terminorum Binomii* (*a* + *b*)*n* *in Seriem Expansi* that was designated for private circulation only. But it was not until the year 1738 that he made his results publicly available. The original pamphlet was reprinted several times, see for example [Walker (1985)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFWalker1985). 3. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-81)** "It has been customary certainly to regard as an axiom the hypothesis that if any quantity has been determined by several direct observations, made under the same circumstances and with equal care, the arithmetical mean of the observed values affords the most probable value, if not rigorously, yet very nearly at least, so that it is always most safe to adhere to it." — [Gauss (1809](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFGauss1809), section 177) 4. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-84)** "My custom of terming the curve the Gauss–Laplacian or *normal* curve saves us from proportioning the merit of discovery between the two great astronomer mathematicians." quote from [Pearson (1905](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFPearson1905), p. 189) 5. 
**[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-92)** Besides those specifically referenced here, such use is encountered in the works of [Peirce](https://en.wikipedia.org/wiki/Charles_Sanders_Peirce "Charles Sanders Peirce"), [Galton](https://en.wikipedia.org/wiki/Galton "Galton") ([Galton (1889](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFGalton1889), chapter V)) and [Lexis](https://en.wikipedia.org/wiki/Wilhelm_Lexis "Wilhelm Lexis") ([Lexis (1878)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFLexis1878), [Rohrbasser & VĂ©ron (2003)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFRohrbasserV%C3%A9ron2003)) c. 1875.\[*[citation needed](https://en.wikipedia.org/wiki/Wikipedia:Citation_needed "Wikipedia:Citation needed")*\] ## References \[[edit](https://en.wikipedia.org/w/index.php?title=Normal_distribution&action=edit&section=60 "Edit section: References")\] ### Citations \[[edit](https://en.wikipedia.org/w/index.php?title=Normal_distribution&action=edit&section=61 "Edit section: Citations")\] 1. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Norton-2019_1-0)** Norton, Matthew; Khokhlov, Valentyn; Uryasev, Stan (2019). ["Calculating CVaR and bPOE for common probability distributions with application to portfolio optimization and density estimation"](https://web.archive.org/web/20230331230821/http://uryasev.ams.stonybrook.edu/wp-content/uploads/2019/10/Norton2019_CVaR_bPOE.pdf) (PDF). *Annals of Operations Research*. **299** (1–2\). Springer: 1281–1315\. [arXiv](https://en.wikipedia.org/wiki/ArXiv_\(identifier\) "ArXiv (identifier)"):[1811\.11301](https://arxiv.org/abs/1811.11301). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1007/s10479-019-03373-1](https://doi.org/10.1007%2Fs10479-019-03373-1). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [254231768](https://api.semanticscholar.org/CorpusID:254231768). Archived from [the original](http://uryasev.ams.stonybrook.edu/wp-content/uploads/2019/10/Norton2019_CVaR_bPOE.pdf) (PDF) on March 31, 2023. Retrieved February 27, 2023. 2. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-The_Joy_of_Finite_Mathematics_2-0)** Tsokos, Chris; Wooten, Rebecca (January 1, 2016). Tsokos, Chris; Wooten, Rebecca (eds.). [*The Joy of Finite Mathematics*](https://linkinghub.elsevier.com/retrieve/pii/B9780128029671000073). Boston: Academic Press. pp. 231–263\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1016/b978-0-12-802967-1.00007-3](https://doi.org/10.1016%2Fb978-0-12-802967-1.00007-3). [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-12-802967-1](https://en.wikipedia.org/wiki/Special:BookSources/978-0-12-802967-1 "Special:BookSources/978-0-12-802967-1") . 3. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Mathematics_for_Physical_Science_and_Engineering_3-0)** Harris, Frank E. (January 1, 2014). Harris, Frank E. (ed.). [*Mathematics for Physical Science and Engineering*](https://linkinghub.elsevier.com/retrieve/pii/B9780128010006000183). Boston: Academic Press. pp. 663–709\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1016/b978-0-12-801000-6.00018-3](https://doi.org/10.1016%2Fb978-0-12-801000-6.00018-3). 
[ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-12-801000-6](https://en.wikipedia.org/wiki/Special:BookSources/978-0-12-801000-6 "Special:BookSources/978-0-12-801000-6") . 4. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-4)** [Hoel (1947](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFHoel1947), [p. 31](https://archive.org/details/in.ernet.dli.2015.263186/page/n39/mode/2up?q=%22normal+distribution%22)) and [Mood (1950](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFMood1950), [p. 109](https://archive.org/details/introductiontoth0000alex/page/108/mode/2up?q=%22normal+distribution%22)) give this definition with slightly different notation. 5. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-5)** [*Normal Distribution*](http://www.encyclopedia.com/topic/Normal_Distribution.aspx#3), Gale Encyclopedia of Psychology 6. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-6)** [Casella & Berger (2001](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFCasellaBerger2001), p. 102) 7. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-7)** Lyon, A. (2014). [Why are Normal Distributions Normal?](https://aidanlyon.com/normal_distributions.pdf), The British Journal for the Philosophy of Science. 8. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-8)** Jorge, Nocedal; Stephan, J. Wright (2006). *Numerical Optimization* (2nd ed.). Springer. p. 249. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0387-30303-1](https://en.wikipedia.org/wiki/Special:BookSources/978-0387-30303-1 "Special:BookSources/978-0387-30303-1") . 9. ^ [***a***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-www.mathsisfun.com_9-0) [***b***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-www.mathsisfun.com_9-1) ["Normal Distribution"](https://www.mathsisfun.com/data/standard-normal-distribution.html). *www.mathsisfun.com*. Retrieved August 15, 2020. 10. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-10)** ["bell curve"](https://www.merriam-webster.com/dictionary/bell%20curve). *Merriam-Webster.com Dictionary*. Retrieved May 25, 2025. 11. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-11)** [Mood (1950](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFMood1950), [p. 112](https://archive.org/details/introductiontoth0000alex/page/112/mode/2up?q=%22standard+normal+distribution%22)) explicitly defines the *standard normal distribution*. In contrast, [Hoel (1947)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFHoel1947) explicitly defines the *standard normal curve* [(p. 33)](https://archive.org/details/in.ernet.dli.2015.263186/page/n41/mode/2up?q=%22standard+normal+curve%22) and introduces the term *standard normal distribution* [(p. 69)](https://archive.org/details/in.ernet.dli.2015.263186/page/n77/mode/2up?q=%22standard+normal+distribution%22). 12. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-12)** [Stigler (1982)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFStigler1982) 13. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-13)** [Halperin, Hartley & Hoel (1965](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFHalperinHartleyHoel1965), item 7) 14. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-14)** [McPherson (1990](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFMcPherson1990), p. 110) 15. 
**[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-15)** [Bernardo & Smith (2000](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFBernardoSmith2000), p. 121) 16. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-KunIlPark_16-0)** Park, Kun Il (2018). *Fundamentals of Probability and Stochastic Processes with Applications to Communications*. Springer. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-3-319-68074-3](https://en.wikipedia.org/wiki/Special:BookSources/978-3-319-68074-3 "Special:BookSources/978-3-319-68074-3") . 17. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-17)** Scott, Clayton; Nowak, Robert (August 7, 2003). ["The Q-function"](http://cnx.org/content/m11537/1.2/). *Connexions*. 18. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-18)** Barak, Ohad (April 6, 2006). ["Q Function and Error Function"](https://web.archive.org/web/20090325160012/http://www.eng.tau.ac.il/~jo/academic/Q.pdf) (PDF). Tel Aviv University. Archived from [the original](http://www.eng.tau.ac.il/~jo/academic/Q.pdf) (PDF) on March 25, 2009. 19. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-19)** [Weisstein, Eric W.](https://en.wikipedia.org/wiki/Eric_W._Weisstein "Eric W. Weisstein") ["Normal Distribution Function"](https://mathworld.wolfram.com/NormalDistributionFunction.html). *[MathWorld](https://en.wikipedia.org/wiki/MathWorld "MathWorld")*. 20. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-20)** [Abramowitz, Milton](https://en.wikipedia.org/wiki/Milton_Abramowitz "Milton Abramowitz"); [Stegun, Irene Ann](https://en.wikipedia.org/wiki/Irene_Stegun "Irene Stegun"), eds. (1983) \[June 1964\]. ["Chapter 26, eqn 26.2.12"](http://www.math.ubc.ca/~cbm/aands/page_932.htm). [*Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables*](https://en.wikipedia.org/wiki/Abramowitz_and_Stegun "Abramowitz and Stegun"). Applied Mathematics Series. Vol. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. p. 932. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-486-61272-0](https://en.wikipedia.org/wiki/Special:BookSources/978-0-486-61272-0 "Special:BookSources/978-0-486-61272-0") . [LCCN](https://en.wikipedia.org/wiki/LCCN_\(identifier\) "LCCN (identifier)") [64-60036](https://lccn.loc.gov/64-60036). [MR](https://en.wikipedia.org/wiki/MR_\(identifier\) "MR (identifier)") [0167642](https://mathscinet.ams.org/mathscinet-getitem?mr=0167642). [LCCN](https://en.wikipedia.org/wiki/LCCN_\(identifier\) "LCCN (identifier)") [65-12253](https://www.loc.gov/item/65012253). 21. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-duff_21-0)** Duff, Michael (2003). "Normal Distribution Algorithms". *The Mathematical Gazette*. **87** (509): 331–336\. [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [3621062](https://www.jstor.org/stable/3621062). 22. ^ [***a***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-kendall_22-0) [***b***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-kendall_22-1) Stuart, Alan; Ord, J. Keith (1987). ["The normal d.f."](https://archive.org/details/kendallsadvanced0001kend/page/183/mode/1up). *Kendall's Advanced Theory of Statistics*. Vol. 1: Distribution Theory. 
originally by [Maurice Kendall](https://en.wikipedia.org/wiki/Maurice_Kendall "Maurice Kendall") (5th ed.). Charles Griffin & Co. § 5\.37, pp. 183–185. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [0-85264-285-7](https://en.wikipedia.org/wiki/Special:BookSources/0-85264-285-7 "Special:BookSources/0-85264-285-7") . 23. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-23)** Vaart, A. W. van der (October 13, 1998). [*Asymptotic Statistics*](https://dx.doi.org/10.1017/cbo9780511802256). Cambridge University Press. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1017/cbo9780511802256](https://doi.org/10.1017%2Fcbo9780511802256). [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-511-80225-6](https://en.wikipedia.org/wiki/Special:BookSources/978-0-511-80225-6 "Special:BookSources/978-0-511-80225-6") . 24. ^ [***a***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-FOOTNOTECoverThomas2006254_24-0) [***b***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-FOOTNOTECoverThomas2006254_24-1) [Cover & Thomas (2006)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFCoverThomas2006), p. 254. 25. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-25)** Park, Sung Y.; Bera, Anil K. (2009). ["Maximum Entropy Autoregressive Conditional Heteroskedasticity Model"](https://web.archive.org/web/20160307144515/http://wise.xmu.edu.cn/uploadfiles/paper-masterdownload/2009519932327055475115776.pdf) (PDF). *Journal of Econometrics*. **150** (2): 219–230\. [Bibcode](https://en.wikipedia.org/wiki/Bibcode_\(identifier\) "Bibcode (identifier)"):[2009JEcon.150..219P](https://ui.adsabs.harvard.edu/abs/2009JEcon.150..219P). [CiteSeerX](https://en.wikipedia.org/wiki/CiteSeerX_\(identifier\) "CiteSeerX (identifier)") [10\.1.1.511.9750](https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.511.9750). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1016/j.jeconom.2008.12.014](https://doi.org/10.1016%2Fj.jeconom.2008.12.014). Archived from [the original](http://www.wise.xmu.edu.cn/Master/Download/..%5C..%5CUploadFiles%5Cpaper-masterdownload%5C2009519932327055475115776.pdf) (PDF) on March 7, 2016. Retrieved June 2, 2011. 26. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Geary_RC_26-0)** Geary RC(1936) The distribution of the "Student's ratio for the non-normal samples". Supplement to the Journal of the Royal Statistical Society 3 (2): 178–184 27. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-27)** [Lukacs, Eugene](https://en.wikipedia.org/wiki/Eugene_Lukacs "Eugene Lukacs") (March 1942). ["A Characterization of the Normal Distribution"](https://archive.org/details/dli.ernet.4125/page/91). *[Annals of Mathematical Statistics](https://en.wikipedia.org/wiki/Annals_of_Mathematical_Statistics "Annals of Mathematical Statistics")*. **13** (1): 91–93\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1214/AOMS/1177731647](https://doi.org/10.1214%2FAOMS%2F1177731647). [ISSN](https://en.wikipedia.org/wiki/ISSN_\(identifier\) "ISSN (identifier)") [0003-4851](https://search.worldcat.org/issn/0003-4851). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2236166](https://www.jstor.org/stable/2236166). [MR](https://en.wikipedia.org/wiki/MR_\(identifier\) "MR (identifier)") [0006626](https://mathscinet.ams.org/mathscinet-getitem?mr=0006626). 
[Zbl](https://en.wikipedia.org/wiki/Zbl_\(identifier\) "Zbl (identifier)") [0060\.28509](https://zbmath.org/?format=complete&q=an:0060.28509). [Wikidata](https://en.wikipedia.org/wiki/WDQ_\(identifier\) "WDQ (identifier)") [Q55897617](https://www.wikidata.org/wiki/Q55897617 "d:Q55897617"). 28. ^ [***a***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Patel_28-0) [***b***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Patel_28-1) [***c***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Patel_28-2) [Patel & Read (1996](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFPatelRead1996), \[2.1.4\]) 29. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-29)** [Fan (1991](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFFan1991), p. 1258) 30. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-30)** [Patel & Read (1996](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFPatelRead1996), \[2.1.8\]) 31. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-31)** Papoulis, Athanasios. *Probability, Random Variables and Stochastic Processes* (4th ed.). p. 148. 32. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-32)** Winkelbauer, Andreas (2012). "Moments and Absolute Moments of the Normal Distribution". [arXiv](https://en.wikipedia.org/wiki/ArXiv_\(identifier\) "ArXiv (identifier)"):[1209\.4340](https://arxiv.org/abs/1209.4340) \[[math.ST](https://arxiv.org/archive/math.ST)\]. 33. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-33)** [Bryc (1995](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFBryc1995), p. 23) 34. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-34)** [Bryc (1995](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFBryc1995), p. 24) 35. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-35)** Williams, David (2001). [*Weighing the odds : a course in probability and statistics*](https://archive.org/details/weighingoddscour00will) (Reprinted. ed.). Cambridge \[u.a.\]: Cambridge Univ. Press. pp. [197](https://archive.org/details/weighingoddscour00will/page/n219)–199. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-521-00618-7](https://en.wikipedia.org/wiki/Special:BookSources/978-0-521-00618-7 "Special:BookSources/978-0-521-00618-7") . 36. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-36)** JosĂ© M. Bernardo; Adrian F. M. Smith (2000). [*Bayesian theory*](https://archive.org/details/bayesiantheory00bern_963) (Reprint ed.). Chichester \[u.a.\]: Wiley. pp. [209](https://archive.org/details/bayesiantheory00bern_963/page/n224), 366. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-471-49464-5](https://en.wikipedia.org/wiki/Special:BookSources/978-0-471-49464-5 "Special:BookSources/978-0-471-49464-5") . 37. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-37)** O'Hagan, A. (1994) *Kendall's Advanced Theory of statistics, Vol 2B, Bayesian Inference*, Edward Arnold. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [0-340-52922-9](https://en.wikipedia.org/wiki/Special:BookSources/0-340-52922-9 "Special:BookSources/0-340-52922-9") (Section 5.40) 38. 
^ [***a***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Bryc_1995_35_38-0) [***b***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Bryc_1995_35_38-1) [Bryc (1995](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFBryc1995), p. 35) 39. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-39)** [UIUC, Lecture 21. *The Multivariate Normal Distribution*](http://www.math.uiuc.edu/~r-ash/Stat/StatLec21-25.pdf), 21.6:"Individually Gaussian Versus Jointly Gaussian". 40. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-40)** Edward L. Melnick and Aaron Tenenbein, "Misspecifications of the Normal Distribution", *[The American Statistician](https://en.wikipedia.org/wiki/The_American_Statistician "The American Statistician")*, volume 36, number 4 November 1982, pages 372–373 41. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-41)** ["Kullback Leibler (KL) Distance of Two Normal (Gaussian) Probability Distributions"](http://www.allisons.org/ll/MML/KL/Normal/). *Allisons.org*. December 5, 2007. Retrieved March 3, 2017. 42. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-42)** Jordan, Michael I. (February 8, 2010). ["Stat260: Bayesian Modeling and Inference: The Conjugate Prior for the Normal Distribution"](http://www.cs.berkeley.edu/~jordan/courses/260-spring10/lectures/lecture5.pdf) (PDF). 43. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-43)** [Amari & Nagaoka (2000)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFAmariNagaoka2000) 44. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-44)** ["Expectation of the maximum of gaussian random variables"](https://math.stackexchange.com/a/89147). *Mathematics Stack Exchange*. Retrieved April 7, 2024. 45. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-45)** ["Normal Approximation to Poisson Distribution"](http://www.stat.ucla.edu/~dinov/courses_students.dir/Applets.dir/NormalApprox2PoissonApplet.html). *Stat.ucla.edu*. Retrieved March 3, 2017. 46. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-46)** [Bryc (1995](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFBryc1995), p. 27) 47. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-47)** Weisstein, Eric W. ["Normal Product Distribution"](http://mathworld.wolfram.com/NormalProductDistribution.html). *MathWorld*. wolfram.com. 48. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-48)** Lukacs, Eugene (1942). ["A Characterization of the Normal Distribution"](https://doi.org/10.1214%2Faoms%2F1177731647). *[The Annals of Mathematical Statistics](https://en.wikipedia.org/wiki/The_Annals_of_Mathematical_Statistics "The Annals of Mathematical Statistics")*. **13** (1): 91–3\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1214/aoms/1177731647](https://doi.org/10.1214%2Faoms%2F1177731647). [ISSN](https://en.wikipedia.org/wiki/ISSN_\(identifier\) "ISSN (identifier)") [0003-4851](https://search.worldcat.org/issn/0003-4851). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2236166](https://www.jstor.org/stable/2236166). 49. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-49)** Basu, D.; Laha, R. G. (1954). "On Some Characterizations of the Normal Distribution". *[Sankhyā](https://en.wikipedia.org/wiki/Sankhy%C4%81_\(journal\) "Sankhyā (journal)")*. **13** (4): 359–62\. 
[ISSN](https://en.wikipedia.org/wiki/ISSN_\(identifier\) "ISSN (identifier)") [0036-4452](https://search.worldcat.org/issn/0036-4452). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [25048183](https://www.jstor.org/stable/25048183). 50. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-50)** Lehmann, E. L. (1997). *Testing Statistical Hypotheses* (2nd ed.). Springer. p. 199. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-387-94919-2](https://en.wikipedia.org/wiki/Special:BookSources/978-0-387-94919-2 "Special:BookSources/978-0-387-94919-2") . 51. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-51)** [Patel & Read (1996](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFPatelRead1996), \[2.3.6\]) 52. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-52)** [Galambos & Simonelli (2004](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFGalambosSimonelli2004), Theorem 3.5) 53. ^ [***a***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Lukacs_53-0) [***b***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Lukacs_53-1) [Lukacs & King (1954)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFLukacsKing1954) 54. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-54)** Quine, M.P. (1993). ["On three characterisations of the normal distribution"](http://www.math.uni.wroc.pl/~pms/publicationsArticle.php?nr=14.2&nrA=8&ppB=257&ppE=263). *Probability and Mathematical Statistics*. **14** (2): 257–263\. 55. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-John-1982_55-0)** John, S (1982). "The three parameter two-piece normal family of distributions and its fitting". *Communications in Statistics – Theory and Methods*. **11** (8): 879–885\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1080/03610928208828279](https://doi.org/10.1080%2F03610928208828279). 56. ^ [***a***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Krishnamoorthy_56-0) [***b***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Krishnamoorthy_56-1) [Krishnamoorthy (2006](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFKrishnamoorthy2006), p. 127) 57. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-57)** [Krishnamoorthy (2006](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFKrishnamoorthy2006), p. 130) 58. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-58)** [Krishnamoorthy (2006](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFKrishnamoorthy2006), p. 133) 59. ^ [***a***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-FOOTNOTEMaxwell186023_59-0) [***b***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-FOOTNOTEMaxwell186023_59-1) [Maxwell (1860)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFMaxwell1860), p. 23. 60. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-FOOTNOTEBryc19951_60-0)** [Bryc (1995)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFBryc1995), p. 1. 61. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-61)** Larkoski, Andrew J. (2023). [*Quantum Mechanics: A Mathematical Introduction*](https://books.google.com/books?id=iKmnEAAAQBAJ&dq=normal%20distribution&pg=PA120). United Kingdom: Cambridge University Press. pp. 120–121\. 
[ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-1-009-12222-1](https://en.wikipedia.org/wiki/Special:BookSources/978-1-009-12222-1 "Special:BookSources/978-1-009-12222-1") . Retrieved May 30, 2025. 62. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-62)** [Huxley (1932)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFHuxley1932) 63. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-63)** Jaynes, Edwin T. (2003). [*Probability Theory: The Logic of Science*](https://books.google.com/books?id=tTN4HuUNXjgC&pg=PA592). Cambridge University Press. pp. 592–593\. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [9780521592710](https://en.wikipedia.org/wiki/Special:BookSources/9780521592710 "Special:BookSources/9780521592710") . 64. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-64)** Oosterbaan, Roland J. (1994). ["Chapter 6: Frequency and Regression Analysis of Hydrologic Data"](http://www.waterlog.info/pdf/freqtxt.pdf) (PDF). In Ritzema, Henk P. (ed.). *Drainage Principles and Applications, Publication 16* (second revised ed.). Wageningen, The Netherlands: International Institute for Land Reclamation and Improvement (ILRI). pp. 175–224\. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-90-70754-33-4](https://en.wikipedia.org/wiki/Special:BookSources/978-90-70754-33-4 "Special:BookSources/978-90-70754-33-4") . 65. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-65)** Why Most Published Research Findings Are False, John P. A. Ioannidis, 2005 66. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-66)** Wichura, Michael J. (1988). "Algorithm AS241: The Percentage Points of the Normal Distribution". *Applied Statistics*. **37** (3): 477–84\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.2307/2347330](https://doi.org/10.2307%2F2347330). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2347330](https://www.jstor.org/stable/2347330). 67. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-67)** [Johnson, Kotz & Balakrishnan (1995](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFJohnsonKotzBalakrishnan1995), Equation (26.48)) 68. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-68)** [Kinderman & Monahan (1977)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFKindermanMonahan1977) 69. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-69)** [Leva (1992)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFLeva1992) 70. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-70)** [Marsaglia & Tsang (2000)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFMarsagliaTsang2000) 71. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-71)** [Karney (2016)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFKarney2016) 72. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-72)** [Du, Fan & Wei (2022)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFDuFanWei2022) 73. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-73)** [Monahan (1985](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFMonahan1985), section 2) 74. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-74)** [Wallace (1996)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFWallace1996) 75. 
**[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-76)** [Johnson, Kotz & Balakrishnan (1994](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFJohnsonKotzBalakrishnan1994), p. 85) 76. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-77)** [Le Cam & Lo Yang (2000](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFLe_CamLo_Yang2000), p. 74) 77. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-79)** De Moivre, Abraham (1733), Corollary I – see [Walker (1985](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFWalker1985), p. 77) 78. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-80)** [Stigler (1986](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFStigler1986), [p. 76](https://archive.org/details/historyofstatist00stig/page/76/mode/2up?q=%22de+moivre%22)) 79. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-82)** [Gauss (1809](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFGauss1809), section 177) 80. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-83)** [Gauss (1809](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFGauss1809), section 179) 81. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-85)** [Laplace (1774](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFLaplace1774), Problem III) 82. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-86)** [Pearson (1905](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFPearson1905), p. 189) 83. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-87)** [Gauss (1809](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFGauss1809), section 177) 84. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-88)** [Stigler (1986](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFStigler1986), p. 144) 85. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-89)** [Stigler (1978](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFStigler1978), p. 243) 86. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-90)** [Stigler (1978](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFStigler1978), p. 244) 87. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-91)** Jaynes, Edwin J.; *Probability Theory: The Logic of Science*, [Ch. 7](http://www-biba.inrialpes.fr/Jaynes/cc07s.pdf). 88. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-93)** Peirce, Charles S. (c. 1909 MS), *[Collected Papers](https://en.wikipedia.org/wiki/Charles_Sanders_Peirce_bibliography#CP "Charles Sanders Peirce bibliography")* v. 6, paragraph 327. 89. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-94)** [Kruskal & Stigler (1997)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFKruskalStigler1997). 90. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-95)** ["Earliest Uses... (Entry Standard Normal Curve)"](http://jeff560.tripod.com/s.html). 91. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-96)** [Hoel (1947)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFHoel1947) introduces the terms *standard normal curve* [(p. 33)](https://archive.org/details/in.ernet.dli.2015.263186/page/n41/mode/2up?q=%22standard+normal+curve%22) and *standard normal distribution* [(p. 69)](https://archive.org/details/in.ernet.dli.2015.263186/page/n77/mode/2up?q=%22standard+normal+distribution%22). 92. 
92. [Mood (1950)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFMood1950) explicitly defines the *standard normal distribution* [(p. 112)](https://archive.org/details/introductiontoth0000alex/page/112/mode/2up?q=%22standard+normal+distribution%22).
93. Sun, Jingchao; Kong, Maiying; Pal, Subhadip (June 22, 2021). ["The Modified-Half-Normal distribution: Properties and an efficient sampling scheme"](https://www.tandfonline.com/doi/abs/10.1080/03610926.2021.1934700?journalCode=lsta20). *Communications in Statistics – Theory and Methods*. **52** (5): 1591–1613. doi:[10.1080/03610926.2021.1934700](https://doi.org/10.1080%2F03610926.2021.1934700). ISSN [0361-0926](https://search.worldcat.org/issn/0361-0926). S2CID [237919587](https://api.semanticscholar.org/CorpusID:237919587).

### Sources

- Aldrich, John; Miller, Jeff. ["Earliest Uses of Symbols in Probability and Statistics"](http://jeff560.tripod.com/stat.html).
- Aldrich, John; Miller, Jeff. ["Earliest Known Uses of Some of the Words of Mathematics"](http://jeff560.tripod.com/mathword.html). In particular, the entries for ["bell-shaped and bell curve"](http://jeff560.tripod.com/b.html), ["normal (distribution)"](http://jeff560.tripod.com/n.html), ["Gaussian"](http://jeff560.tripod.com/g.html), and ["Error, law of error, theory of errors, etc."](http://jeff560.tripod.com/e.html).
- [Amari, Shun'ichi](https://en.wikipedia.org/wiki/Shun%27ichi_Amari); Nagaoka, Hiroshi (2000). *Methods of Information Geometry*. Oxford University Press. ISBN [978-0-8218-0531-2](https://en.wikipedia.org/wiki/Special:BookSources/978-0-8218-0531-2).
- [Bernardo, JosĂ© M.](https://en.wikipedia.org/wiki/Jos%C3%A9-Miguel_Bernardo); [Smith, Adrian F. M.](https://en.wikipedia.org/wiki/Adrian_Smith_\(statistician\)) (2000). *Bayesian Theory*. Wiley. ISBN [978-0-471-49464-5](https://en.wikipedia.org/wiki/Special:BookSources/978-0-471-49464-5).
- Bryc, Wlodzimierz (1995). [*The Normal Distribution: Characterizations with Applications*](https://books.google.com/books?id=tyXjBwAAQBAJ). Springer-Verlag. ISBN [978-0-387-97990-8](https://en.wikipedia.org/wiki/Special:BookSources/978-0-387-97990-8).
- [Casella, George](https://en.wikipedia.org/wiki/George_Casella); [Berger, Roger L.](https://en.wikipedia.org/wiki/Roger_Lee_Berger) (2001). *Statistical Inference* (2nd ed.). Duxbury. ISBN [978-0-534-24312-8](https://en.wikipedia.org/wiki/Special:BookSources/978-0-534-24312-8).
- Cody, William J. (1969). ["Rational Chebyshev Approximations for the Error Function"](https://en.wikipedia.org/wiki/Error_function#cite_note-5). *Mathematics of Computation*. **23** (107): 631–638. Bibcode:[1969MaCom..23..631C](https://ui.adsabs.harvard.edu/abs/1969MaCom..23..631C). doi:[10.1090/S0025-5718-1969-0247736-4](https://doi.org/10.1090%2FS0025-5718-1969-0247736-4).
- [Cover, Thomas M.](https://en.wikipedia.org/wiki/Thomas_M._Cover); [Thomas, Joy A.](https://en.wikipedia.org/wiki/Joy_A._Thomas) (2006). [*Elements of Information Theory*](https://books.google.com/books?id=VWq5GG6ycxMC). John Wiley and Sons. ISBN [9780471241959](https://en.wikipedia.org/wiki/Special:BookSources/9780471241959).
- Dia, Yaya D. (2023). ["Approximate Incomplete Integrals, Application to Complementary Error Function"](https://ssrn.com/abstract=4487559). *SSRN*. doi:[10.2139/ssrn.4487559](https://doi.org/10.2139%2Fssrn.4487559). S2CID [259689086](https://api.semanticscholar.org/CorpusID:259689086).
- [de Moivre, Abraham](https://en.wikipedia.org/wiki/Abraham_de_Moivre) (2000) [First published 1738]. [*The Doctrine of Chances*](https://en.wikipedia.org/wiki/The_Doctrine_of_Chances). American Mathematical Society. ISBN [978-0-8218-2103-9](https://en.wikipedia.org/wiki/Special:BookSources/978-0-8218-2103-9).
- Du, Y.; Fan, B.; Wei, B. (2022). "An improved exact sampling algorithm for the standard normal distribution". *Computational Statistics*. **37** (2): 721–737. arXiv:[2008.03855](https://arxiv.org/abs/2008.03855). doi:[10.1007/s00180-021-01136-w](https://doi.org/10.1007%2Fs00180-021-01136-w).
- Fan, Jianqing (1991). ["On the optimal rates of convergence for nonparametric deconvolution problems"](https://doi.org/10.1214%2Faos%2F1176348248). *The Annals of Statistics*. **19** (3): 1257–1272. doi:[10.1214/aos/1176348248](https://doi.org/10.1214%2Faos%2F1176348248). JSTOR [2241949](https://www.jstor.org/stable/2241949).
- [Galton, Francis](https://en.wikipedia.org/wiki/Francis_Galton) (1889). [*Natural Inheritance*](http://galton.org/books/natural-inheritance/pdf/galton-nat-inh-1up-clean.pdf) (PDF). London, UK: Richard Clay and Sons.
- [Galambos, Janos](https://en.wikipedia.org/wiki/Janos_Galambos); Simonelli, Italo (2004). [*Products of Random Variables: Applications to Problems of Physics and to Arithmetical Functions*](https://archive.org/details/productsofrandom00gala). Marcel Dekker, Inc. ISBN [978-0-8247-5402-0](https://en.wikipedia.org/wiki/Special:BookSources/978-0-8247-5402-0).
- [Gauss, Carolo Friderico](https://en.wikipedia.org/wiki/Carl_Friedrich_Gauss) (1809). [*Theoria motvs corporvm coelestivm in sectionibvs conicis Solem ambientivm*](https://archive.org/details/theoriamotuscor00gausgoog) [*Theory of the Motion of the Heavenly Bodies Moving about the Sun in Conic Sections*] (in Latin). Hambvrgi, Svmtibvs F. Perthes et I. H. Besser. [English translation](https://books.google.com/books?id=1TIAAAAAQAAJ).
- [Gould, Stephen Jay](https://en.wikipedia.org/wiki/Stephen_Jay_Gould) (1981). [*The Mismeasure of Man*](https://en.wikipedia.org/wiki/The_Mismeasure_of_Man) (first ed.). W. W. Norton. ISBN [978-0-393-01489-1](https://en.wikipedia.org/wiki/Special:BookSources/978-0-393-01489-1).
- Halperin, Max; Hartley, Herman O.; Hoel, Paul G. (1965). "Recommended Standards for Statistical Symbols and Notation. COPSS Committee on Symbols and Notation". *The American Statistician*. **19** (3): 12–14. doi:[10.2307/2681417](https://doi.org/10.2307%2F2681417). JSTOR [2681417](https://www.jstor.org/stable/2681417).
- Hart, John F.; et al. (1968). *Computer Approximations*. New York, NY: John Wiley & Sons, Inc. ISBN [978-0-88275-642-4](https://en.wikipedia.org/wiki/Special:BookSources/978-0-88275-642-4).
- ["Normal Distribution"](https://www.encyclopediaofmath.org/index.php?title=Normal_Distribution), *[Encyclopedia of Mathematics](https://en.wikipedia.org/wiki/Encyclopedia_of_Mathematics)*, [EMS Press](https://en.wikipedia.org/wiki/European_Mathematical_Society), 2001 [1994]
- [Herrnstein, Richard J.](https://en.wikipedia.org/wiki/Richard_J._Herrnstein); [Murray, Charles](https://en.wikipedia.org/wiki/Charles_Murray_\(political_scientist\)) (1994). [*The Bell Curve: Intelligence and Class Structure in American Life*](https://en.wikipedia.org/wiki/The_Bell_Curve). [Free Press](https://en.wikipedia.org/wiki/Free_Press_\(publisher\)). ISBN [978-0-02-914673-6](https://en.wikipedia.org/wiki/Special:BookSources/978-0-02-914673-6).
- Hoel, Paul G. (1947). [*Introduction To Mathematical Statistics*](https://archive.org/details/in.ernet.dli.2015.263186/page/n1/mode/2up). New York: Wiley.
- [Huxley, Julian S.](https://en.wikipedia.org/wiki/Julian_S._Huxley) (1972) [First published 1932]. *Problems of Relative Growth*. London. ISBN [978-0-486-61114-3](https://en.wikipedia.org/wiki/Special:BookSources/978-0-486-61114-3). OCLC [476909537](https://search.worldcat.org/oclc/476909537).
- [Johnson, Norman L.](https://en.wikipedia.org/wiki/Norman_Lloyd_Johnson); [Kotz, Samuel](https://en.wikipedia.org/wiki/Samuel_Kotz); Balakrishnan, Narayanaswamy (1994). *Continuous Univariate Distributions, Volume 1*. Wiley. ISBN [978-0-471-58495-7](https://en.wikipedia.org/wiki/Special:BookSources/978-0-471-58495-7).
- Johnson, Norman L.; Kotz, Samuel; Balakrishnan, Narayanaswamy (1995). *Continuous Univariate Distributions, Volume 2*. Wiley. ISBN [978-0-471-58494-0](https://en.wikipedia.org/wiki/Special:BookSources/978-0-471-58494-0).
- Karney, C. F. F. (2016). ["Sampling exactly from the normal distribution"](https://doi.org/10.1145%2F2710016). *ACM Transactions on Mathematical Software*. **42** (1): 3:1–14. arXiv:[1303.6257](https://arxiv.org/abs/1303.6257). doi:[10.1145/2710016](https://doi.org/10.1145%2F2710016). S2CID [14252035](https://api.semanticscholar.org/CorpusID:14252035).
- Kinderman, Albert J.; Monahan, John F. (1977). ["Computer Generation of Random Variables Using the Ratio of Uniform Deviates"](https://doi.org/10.1145%2F355744.355750). *ACM Transactions on Mathematical Software*. **3** (3): 257–260. doi:[10.1145/355744.355750](https://doi.org/10.1145%2F355744.355750). S2CID [12884505](https://api.semanticscholar.org/CorpusID:12884505).
- Krishnamoorthy, Kalimuthu (2006). *Handbook of Statistical Distributions with Applications*. Chapman & Hall/CRC. ISBN [978-1-58488-635-8](https://en.wikipedia.org/wiki/Special:BookSources/978-1-58488-635-8).
- [Kruskal, William H.](https://en.wikipedia.org/wiki/William_H._Kruskal); Stigler, Stephen M. (1997). Spencer, Bruce D. (ed.). *Normative Terminology: 'Normal' in Statistics and Elsewhere*. Statistics and Public Policy. Oxford University Press. ISBN [978-0-19-852341-3](https://en.wikipedia.org/wiki/Special:BookSources/978-0-19-852341-3).
- [Laplace, Pierre-Simon de](https://en.wikipedia.org/wiki/Pierre-Simon_Laplace) (1774). ["MĂ©moire sur la probabilitĂ© des causes par les Ă©vĂ©nements"](http://gallica.bnf.fr/ark:/12148/bpt6k77596b/f32) [Memoir on the probability of the causes of events]. *MĂ©moires de l'AcadĂ©mie Royale des Sciences de Paris (Savants Ă©trangers), Tome 6*: 621–656. Translated by Stephen M. Stigler in *Statistical Science* **1** (3), 1986: JSTOR [2245476](https://www.jstor.org/stable/2245476).
- Laplace, Pierre-Simon (1812). [*ThĂ©orie analytique des probabilitĂ©s*](https://archive.org/details/thorieanalytiqu00laplgoog) [*Analytical theory of probabilities*]. Paris, Ve. Courcier.
- [Le Cam, Lucien](https://en.wikipedia.org/wiki/Lucien_Le_Cam); [Lo Yang, Grace](https://en.wikipedia.org/wiki/Grace_Yang) (2000). *Asymptotics in Statistics: Some Basic Concepts* (second ed.). Springer. ISBN [978-0-387-95036-5](https://en.wikipedia.org/wiki/Special:BookSources/978-0-387-95036-5).
- Leva, Joseph L. (1992). ["A fast normal random number generator"](https://web.archive.org/web/20100716035328/http://saluc.engr.uconn.edu/refs/crypto/rng/leva92afast.pdf) (PDF). *ACM Transactions on Mathematical Software*. **18** (4): 449–453. CiteSeerX [10.1.1.544.5806](https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.544.5806). doi:[10.1145/138351.138364](https://doi.org/10.1145%2F138351.138364). S2CID [15802663](https://api.semanticscholar.org/CorpusID:15802663). Archived from [the original](http://saluc.engr.uconn.edu/refs/crypto/rng/leva92afast.pdf) (PDF) on July 16, 2010.
- [Lexis, Wilhelm](https://en.wikipedia.org/wiki/Wilhelm_Lexis) (1878). "Sur la durĂ©e normale de la vie humaine et sur la thĂ©orie de la stabilitĂ© des rapports statistiques" [On the normal duration of human life and on the theory of the stability of statistical ratios]. *Annales de DĂ©mographie Internationale*. **II**. Paris: 447–462.
- Lukacs, Eugene; King, Edgar P. (1954). ["A Property of Normal Distribution"](https://doi.org/10.1214%2Faoms%2F1177728796). *The Annals of Mathematical Statistics*. **25** (2): 389–394. doi:[10.1214/aoms/1177728796](https://doi.org/10.1214%2Faoms%2F1177728796). JSTOR [2236741](https://www.jstor.org/stable/2236741).
- McPherson, Glen (1990). [*Statistics in Scientific Investigation: Its Basis, Application and Interpretation*](https://archive.org/details/statisticsinscie0000mcph). Springer-Verlag. ISBN [978-0-387-97137-7](https://en.wikipedia.org/wiki/Special:BookSources/978-0-387-97137-7).
- [Marsaglia, George](https://en.wikipedia.org/wiki/George_Marsaglia); Tsang, Wai Wan (2000). ["The Ziggurat Method for Generating Random Variables"](https://doi.org/10.18637%2Fjss.v005.i08). *Journal of Statistical Software*. **5** (8). doi:[10.18637/jss.v005.i08](https://doi.org/10.18637%2Fjss.v005.i08).
- Marsaglia, George (2004). ["Evaluating the Normal Distribution"](https://doi.org/10.18637%2Fjss.v011.i04). *Journal of Statistical Software*. **11** (4). doi:[10.18637/jss.v011.i04](https://doi.org/10.18637%2Fjss.v011.i04).
- [Maxwell, James Clerk](https://en.wikipedia.org/wiki/James_Clerk_Maxwell) (1860). ["V. Illustrations of the dynamical theory of gases. — Part I: On the motions and collisions of perfectly elastic spheres"](https://books.google.com/books?id=-YU7AQAAMAAJ&pg=PA19). *Philosophical Magazine*. Series 4. **19** (124): 19–32. Bibcode:[1860LEDPM..19...19M](https://ui.adsabs.harvard.edu/abs/1860LEDPM..19...19M). doi:[10.1080/14786446008642818](https://doi.org/10.1080%2F14786446008642818).
- Monahan, J. F. (1985). ["Accuracy in random number generation"](https://doi.org/10.1090%2FS0025-5718-1985-0804945-X). *Mathematics of Computation*. **45** (172): 559–568. doi:[10.1090/S0025-5718-1985-0804945-X](https://doi.org/10.1090%2FS0025-5718-1985-0804945-X).
- [Mood, Alexander McFarlane](https://en.wikipedia.org/wiki/Alexander_M._Mood) (1950). [*Introduction to the Theory of Statistics*](https://archive.org/details/introductiontoth0000alex/page/n5/mode/2up). New York: McGraw-Hill.
- Patel, Jagdish K.; Read, Campbell B. (1996). *Handbook of the Normal Distribution* (2nd ed.). CRC Press. ISBN [978-0-8247-9342-5](https://en.wikipedia.org/wiki/Special:BookSources/978-0-8247-9342-5).
- [Pearson, Karl](https://en.wikipedia.org/wiki/Karl_Pearson) (1901). ["On Lines and Planes of Closest Fit to Systems of Points in Space"](http://stat.smmu.edu.cn/history/pearson1901.pdf) (PDF). *[Philosophical Magazine](https://en.wikipedia.org/wiki/Philosophical_Magazine)*. 6. **2** (11): 559–572. doi:[10.1080/14786440109462720](https://doi.org/10.1080%2F14786440109462720). S2CID [125037489](https://api.semanticscholar.org/CorpusID:125037489).
- [Pearson, Karl](https://en.wikipedia.org/wiki/Karl_Pearson) (1905). ["'Das Fehlergesetz und seine Verallgemeinerungen durch Fechner und Pearson'. A rejoinder"](https://zenodo.org/record/1449456) ['The law of error and its generalizations by Fechner and Pearson'. A rejoinder]. *Biometrika*. **4** (1): 169–212. doi:[10.2307/2331536](https://doi.org/10.2307%2F2331536). JSTOR [2331536](https://www.jstor.org/stable/2331536).
- Pearson, Karl (1920). ["Notes on the History of Correlation"](https://zenodo.org/record/1431597). *Biometrika*. **13** (1): 25–45. doi:[10.1093/biomet/13.1.25](https://doi.org/10.1093%2Fbiomet%2F13.1.25). JSTOR [2331722](https://www.jstor.org/stable/2331722).
- Rohrbasser, Jean-Marc; VĂ©ron, Jacques (2003). ["Wilhelm Lexis: The Normal Length of Life as an Expression of the "Nature of Things""](http://www.persee.fr/web/revues/home/prescript/article/pop_1634-2941_2003_num_58_3_18444). *Population*. **58** (3): 303–322. doi:[10.3917/pope.303.0303](https://doi.org/10.3917%2Fpope.303.0303).
- Shore, H. (1982). "Simple Approximations for the Inverse Cumulative Function, the Density Function and the Loss Integral of the Normal Distribution". *Journal of the Royal Statistical Society. Series C (Applied Statistics)*. **31** (2): 108–114. doi:[10.2307/2347972](https://doi.org/10.2307%2F2347972). JSTOR [2347972](https://www.jstor.org/stable/2347972).
- Shore, H. (2005). "Accurate RMM-Based Approximations for the CDF of the Normal Distribution". *Communications in Statistics – Theory and Methods*. **34** (3): 507–513. doi:[10.1081/sta-200052102](https://doi.org/10.1081%2Fsta-200052102). S2CID [122148043](https://api.semanticscholar.org/CorpusID:122148043).
- Shore, H. (2011). "Response Modeling Methodology". *WIREs Comput Stat*. **3** (4): 357–372. doi:[10.1002/wics.151](https://doi.org/10.1002%2Fwics.151). S2CID [62021374](https://api.semanticscholar.org/CorpusID:62021374).
- Shore, H. (2012). "Estimating Response Modeling Methodology Models". *WIREs Comput Stat*. **4** (3): 323–333. doi:[10.1002/wics.1199](https://doi.org/10.1002%2Fwics.1199). S2CID [122366147](https://api.semanticscholar.org/CorpusID:122366147).
- [Stigler, Stephen M.](https://en.wikipedia.org/wiki/Stephen_Stigler) (1978). ["Mathematical Statistics in the Early States"](https://doi.org/10.1214%2Faos%2F1176344123). *The Annals of Statistics*. **6** (2): 239–265. doi:[10.1214/aos/1176344123](https://doi.org/10.1214%2Faos%2F1176344123). JSTOR [2958876](https://www.jstor.org/stable/2958876).
- Stigler, Stephen M. (1982). "A Modest Proposal: A New Standard for the Normal". *The American Statistician*. **36** (2): 137–138. doi:[10.2307/2684031](https://doi.org/10.2307%2F2684031). JSTOR [2684031](https://www.jstor.org/stable/2684031).
- Stigler, Stephen M. (1986). [*The History of Statistics: The Measurement of Uncertainty before 1900*](https://archive.org/details/historyofstatist00stig). Harvard University Press. ISBN [978-0-674-40340-6](https://en.wikipedia.org/wiki/Special:BookSources/978-0-674-40340-6).
- Stigler, Stephen M. (1999). *Statistics on the Table*. Harvard University Press. ISBN [978-0-674-83601-3](https://en.wikipedia.org/wiki/Special:BookSources/978-0-674-83601-3).
- Walker, Helen M. (1985). ["De Moivre on the Law of Normal Probability"](http://www.york.ac.uk/depts/maths/histstat/demoivre.pdf) (PDF). In Smith, David Eugene (ed.). *A Source Book in Mathematics*. Dover. ISBN [978-0-486-64690-9](https://en.wikipedia.org/wiki/Special:BookSources/978-0-486-64690-9).
- [Wallace, C. S.](https://en.wikipedia.org/wiki/Chris_Wallace_\(computer_scientist\)) (1996). ["Fast pseudo-random generators for normal and exponential variates"](https://doi.org/10.1145%2F225545.225554). *ACM Transactions on Mathematical Software*. **22** (1): 119–127. doi:[10.1145/225545.225554](https://doi.org/10.1145%2F225545.225554). S2CID [18514848](https://api.semanticscholar.org/CorpusID:18514848).
- [Weisstein, Eric W.](https://en.wikipedia.org/wiki/Eric_W._Weisstein) ["Normal Distribution"](http://mathworld.wolfram.com/NormalDistribution.html). [MathWorld](https://en.wikipedia.org/wiki/MathWorld).
- West, Graeme (2009). ["Better Approximations to Cumulative Normal Functions"](https://web.archive.org/web/20120229202051/https://wilmott.com/pdfs/090721_west.pdf) (PDF). *Wilmott Magazine*: 70–76. Archived from [the original](https://wilmott.com/pdfs/090721_west.pdf) (PDF) on February 29, 2012.
- Zelen, Marvin; Severo, Norman C. (1972) [First published 1964]. [*Probability Functions (chapter 26)*](http://www.math.sfu.ca/~cbm/aands/page_931.htm). *[Handbook of mathematical functions with formulas, graphs, and mathematical tables](https://en.wikipedia.org/wiki/Abramowitz_and_Stegun)*, by [Abramowitz, M.](https://en.wikipedia.org/wiki/Milton_Abramowitz); and [Stegun, I. A.](https://en.wikipedia.org/wiki/Irene_A._Stegun): National Bureau of Standards. New York, NY: Dover. ISBN [978-0-486-61272-0](https://en.wikipedia.org/wiki/Special:BookSources/978-0-486-61272-0).

## External links

Wikimedia Commons has media related to [Normal distribution](https://commons.wikimedia.org/wiki/Category:Normal_distribution).
- ["Normal distribution"](https://www.encyclopediaofmath.org/index.php?title=Normal_distribution), *[Encyclopedia of Mathematics](https://en.wikipedia.org/wiki/Encyclopedia_of_Mathematics "Encyclopedia of Mathematics")*, [EMS Press](https://en.wikipedia.org/wiki/European_Mathematical_Society "European Mathematical Society"), 2001 \[1994\] - [Normal distribution calculator](https://www.hackmath.net/en/calculator/normal-distribution) | [v](https://en.wikipedia.org/wiki/Template:Probability_distributions "Template:Probability distributions") [t](https://en.wikipedia.org/wiki/Template_talk:Probability_distributions "Template talk:Probability distributions") [e](https://en.wikipedia.org/wiki/Special:EditPage/Template:Probability_distributions "Special:EditPage/Template:Probability distributions")[Probability distributions](https://en.wikipedia.org/wiki/Probability_distribution "Probability distribution") ([list](https://en.wikipedia.org/wiki/List_of_probability_distributions "List of probability distributions")) | | |---|---| | Discrete univariate | | | | | | with finite support | [Benford](https://en.wikipedia.org/wiki/Benford%27s_law "Benford's law") [Bernoulli](https://en.wikipedia.org/wiki/Bernoulli_distribution "Bernoulli distribution") [Beta-binomial](https://en.wikipedia.org/wiki/Beta-binomial_distribution "Beta-binomial distribution") [Binomial](https://en.wikipedia.org/wiki/Binomial_distribution "Binomial distribution") [Categorical](https://en.wikipedia.org/wiki/Categorical_distribution "Categorical distribution") [Hypergeometric](https://en.wikipedia.org/wiki/Hypergeometric_distribution "Hypergeometric distribution") [Negative](https://en.wikipedia.org/wiki/Negative_hypergeometric_distribution "Negative hypergeometric distribution") [Poisson binomial](https://en.wikipedia.org/wiki/Poisson_binomial_distribution "Poisson binomial distribution") [Rademacher](https://en.wikipedia.org/wiki/Rademacher_distribution "Rademacher distribution") [Soliton](https://en.wikipedia.org/wiki/Soliton_distribution "Soliton distribution") [Discrete uniform](https://en.wikipedia.org/wiki/Discrete_uniform_distribution "Discrete uniform distribution") [Zipf](https://en.wikipedia.org/wiki/Zipf%27s_law "Zipf's law") [Zipf–Mandelbrot](https://en.wikipedia.org/wiki/Zipf%E2%80%93Mandelbrot_law "Zipf–Mandelbrot law") | | with infinite support | [Beta negative binomial](https://en.wikipedia.org/wiki/Beta_negative_binomial_distribution "Beta negative binomial distribution") [Borel](https://en.wikipedia.org/wiki/Borel_distribution "Borel distribution") [Conway–Maxwell–Poisson](https://en.wikipedia.org/wiki/Conway%E2%80%93Maxwell%E2%80%93Poisson_distribution "Conway–Maxwell–Poisson distribution") [Discrete phase-type](https://en.wikipedia.org/wiki/Discrete_phase-type_distribution "Discrete phase-type distribution") [Delaporte](https://en.wikipedia.org/wiki/Delaporte_distribution "Delaporte distribution") [Extended negative binomial](https://en.wikipedia.org/wiki/Extended_negative_binomial_distribution "Extended negative binomial distribution") [Flory–Schulz](https://en.wikipedia.org/wiki/Flory%E2%80%93Schulz_distribution "Flory–Schulz distribution") [Gauss–Kuzmin](https://en.wikipedia.org/wiki/Gauss%E2%80%93Kuzmin_distribution "Gauss–Kuzmin distribution") [Geometric](https://en.wikipedia.org/wiki/Geometric_distribution "Geometric distribution") [Logarithmic](https://en.wikipedia.org/wiki/Logarithmic_distribution "Logarithmic distribution") [Mixed Poisson](https://en.wikipedia.org/wiki/Mixed_Poisson_distribution 
"Mixed Poisson distribution") [Negative binomial](https://en.wikipedia.org/wiki/Negative_binomial_distribution "Negative binomial distribution") [Panjer](https://en.wikipedia.org/wiki/\(a,b,0\)_class_of_distributions "(a,b,0) class of distributions") [Parabolic fractal](https://en.wikipedia.org/wiki/Parabolic_fractal_distribution "Parabolic fractal distribution") [Poisson](https://en.wikipedia.org/wiki/Poisson_distribution "Poisson distribution") [Skellam](https://en.wikipedia.org/wiki/Skellam_distribution "Skellam distribution") [Yule–Simon](https://en.wikipedia.org/wiki/Yule%E2%80%93Simon_distribution "Yule–Simon distribution") [Zeta](https://en.wikipedia.org/wiki/Zeta_distribution "Zeta distribution") | | Continuous univariate | | | | | | supported on a bounded interval | [Arcsine](https://en.wikipedia.org/wiki/Arcsine_distribution "Arcsine distribution") [ARGUS](https://en.wikipedia.org/wiki/ARGUS_distribution "ARGUS distribution") [Balding–Nichols](https://en.wikipedia.org/wiki/Balding%E2%80%93Nichols_model "Balding–Nichols model") [Bates](https://en.wikipedia.org/wiki/Bates_distribution "Bates distribution") [Beta](https://en.wikipedia.org/wiki/Beta_distribution "Beta distribution") [Generalized](https://en.wikipedia.org/wiki/Generalized_beta_distribution "Generalized beta distribution") [Beta rectangular](https://en.wikipedia.org/wiki/Beta_rectangular_distribution "Beta rectangular distribution") [Continuous Bernoulli](https://en.wikipedia.org/wiki/Continuous_Bernoulli_distribution "Continuous Bernoulli distribution") [Irwin–Hall](https://en.wikipedia.org/wiki/Irwin%E2%80%93Hall_distribution "Irwin–Hall distribution") [Kumaraswamy](https://en.wikipedia.org/wiki/Kumaraswamy_distribution "Kumaraswamy distribution") [Logit-normal](https://en.wikipedia.org/wiki/Logit-normal_distribution "Logit-normal distribution") [Noncentral beta](https://en.wikipedia.org/wiki/Noncentral_beta_distribution "Noncentral beta distribution") [PERT](https://en.wikipedia.org/wiki/PERT_distribution "PERT distribution") [Power function](https://en.wikipedia.org/w/index.php?title=Power_function_distribution&action=edit&redlink=1 "Power function distribution (page does not exist)") [Raised cosine](https://en.wikipedia.org/wiki/Raised_cosine_distribution "Raised cosine distribution") [Reciprocal](https://en.wikipedia.org/wiki/Reciprocal_distribution "Reciprocal distribution") [Triangular](https://en.wikipedia.org/wiki/Triangular_distribution "Triangular distribution") [U-quadratic](https://en.wikipedia.org/wiki/U-quadratic_distribution "U-quadratic distribution") [Uniform](https://en.wikipedia.org/wiki/Continuous_uniform_distribution "Continuous uniform distribution") [Wigner semicircle](https://en.wikipedia.org/wiki/Wigner_semicircle_distribution "Wigner semicircle distribution") | | supported on a semi-infinite interval | [Benini](https://en.wikipedia.org/wiki/Benini_distribution "Benini distribution") [Benktander 1st kind](https://en.wikipedia.org/wiki/Benktander_type_I_distribution "Benktander type I distribution") [Benktander 2nd kind](https://en.wikipedia.org/wiki/Benktander_type_II_distribution "Benktander type II distribution") [Beta prime](https://en.wikipedia.org/wiki/Beta_prime_distribution "Beta prime distribution") [Burr](https://en.wikipedia.org/wiki/Burr_distribution "Burr distribution") [Chi](https://en.wikipedia.org/wiki/Chi_distribution "Chi distribution") [Chi-squared](https://en.wikipedia.org/wiki/Chi-squared_distribution "Chi-squared distribution") 
[Noncentral](https://en.wikipedia.org/wiki/Noncentral_chi-squared_distribution "Noncentral chi-squared distribution") [Inverse](https://en.wikipedia.org/wiki/Inverse-chi-squared_distribution "Inverse-chi-squared distribution") [Scaled](https://en.wikipedia.org/wiki/Scaled_inverse_chi-squared_distribution "Scaled inverse chi-squared distribution") [Dagum](https://en.wikipedia.org/wiki/Dagum_distribution "Dagum distribution") [Davis](https://en.wikipedia.org/wiki/Davis_distribution "Davis distribution") [Erlang](https://en.wikipedia.org/wiki/Erlang_distribution "Erlang distribution") [Hyper](https://en.wikipedia.org/wiki/Hyper-Erlang_distribution "Hyper-Erlang distribution") [Exponential](https://en.wikipedia.org/wiki/Exponential_distribution "Exponential distribution") [Hyperexponential](https://en.wikipedia.org/wiki/Hyperexponential_distribution "Hyperexponential distribution") [Hypoexponential](https://en.wikipedia.org/wiki/Hypoexponential_distribution "Hypoexponential distribution") [Logarithmic](https://en.wikipedia.org/wiki/Exponential-logarithmic_distribution "Exponential-logarithmic distribution") [*F*](https://en.wikipedia.org/wiki/F-distribution "F-distribution") [Noncentral](https://en.wikipedia.org/wiki/Noncentral_F-distribution "Noncentral F-distribution") [Folded normal](https://en.wikipedia.org/wiki/Folded_normal_distribution "Folded normal distribution") [FrĂ©chet](https://en.wikipedia.org/wiki/Fr%C3%A9chet_distribution "FrĂ©chet distribution") [Gamma](https://en.wikipedia.org/wiki/Gamma_distribution "Gamma distribution") [Generalized](https://en.wikipedia.org/wiki/Generalized_gamma_distribution "Generalized gamma distribution") [Inverse](https://en.wikipedia.org/wiki/Inverse-gamma_distribution "Inverse-gamma distribution") [gamma/Gompertz](https://en.wikipedia.org/wiki/Gamma/Gompertz_distribution "Gamma/Gompertz distribution") [Gompertz](https://en.wikipedia.org/wiki/Gompertz_distribution "Gompertz distribution") [Shifted](https://en.wikipedia.org/wiki/Shifted_Gompertz_distribution "Shifted Gompertz distribution") [Half-logistic](https://en.wikipedia.org/wiki/Half-logistic_distribution "Half-logistic distribution") [Half-normal](https://en.wikipedia.org/wiki/Half-normal_distribution "Half-normal distribution") [Hotelling's *T*\-squared](https://en.wikipedia.org/wiki/Hotelling%27s_T-squared_distribution "Hotelling's T-squared distribution") [Hartman–Watson](https://en.wikipedia.org/wiki/Hartman%E2%80%93Watson_distribution "Hartman–Watson distribution") [Inverse Gaussian](https://en.wikipedia.org/wiki/Inverse_Gaussian_distribution "Inverse Gaussian distribution") [Generalized](https://en.wikipedia.org/wiki/Generalized_inverse_Gaussian_distribution "Generalized inverse Gaussian distribution") [Kolmogorov](https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test "Kolmogorov–Smirnov test") [LĂ©vy](https://en.wikipedia.org/wiki/L%C3%A9vy_distribution "LĂ©vy distribution") [Log-Cauchy](https://en.wikipedia.org/wiki/Log-Cauchy_distribution "Log-Cauchy distribution") [Log-Laplace](https://en.wikipedia.org/wiki/Log-Laplace_distribution "Log-Laplace distribution") [Log-logistic](https://en.wikipedia.org/wiki/Log-logistic_distribution "Log-logistic distribution") [Log-normal](https://en.wikipedia.org/wiki/Log-normal_distribution "Log-normal distribution") [Log-t](https://en.wikipedia.org/wiki/Log-t_distribution "Log-t distribution") [Lomax](https://en.wikipedia.org/wiki/Lomax_distribution "Lomax distribution") 
[Matrix-exponential](https://en.wikipedia.org/wiki/Matrix-exponential_distribution "Matrix-exponential distribution") [Maxwell–Boltzmann](https://en.wikipedia.org/wiki/Maxwell%E2%80%93Boltzmann_distribution "Maxwell–Boltzmann distribution") [Maxwell–JĂŒttner](https://en.wikipedia.org/wiki/Maxwell%E2%80%93J%C3%BCttner_distribution "Maxwell–JĂŒttner distribution") [Mittag-Leffler](https://en.wikipedia.org/wiki/Mittag-Leffler_distribution "Mittag-Leffler distribution") [Nakagami](https://en.wikipedia.org/wiki/Nakagami_distribution "Nakagami distribution") [Pareto](https://en.wikipedia.org/wiki/Pareto_distribution "Pareto distribution") [Phase-type](https://en.wikipedia.org/wiki/Phase-type_distribution "Phase-type distribution") [Poly-Weibull](https://en.wikipedia.org/wiki/Poly-Weibull_distribution "Poly-Weibull distribution") [Rayleigh](https://en.wikipedia.org/wiki/Rayleigh_distribution "Rayleigh distribution") [Relativistic Breit–Wigner](https://en.wikipedia.org/wiki/Relativistic_Breit%E2%80%93Wigner_distribution "Relativistic Breit–Wigner distribution") [Rice](https://en.wikipedia.org/wiki/Rice_distribution "Rice distribution") [Truncated normal](https://en.wikipedia.org/wiki/Truncated_normal_distribution "Truncated normal distribution") [type-2 Gumbel](https://en.wikipedia.org/wiki/Type-2_Gumbel_distribution "Type-2 Gumbel distribution") [Weibull](https://en.wikipedia.org/wiki/Weibull_distribution "Weibull distribution") [Discrete](https://en.wikipedia.org/wiki/Discrete_Weibull_distribution "Discrete Weibull distribution") [Wilks's lambda](https://en.wikipedia.org/wiki/Wilks%27s_lambda_distribution "Wilks's lambda distribution") | | supported on the whole real line | [Cauchy](https://en.wikipedia.org/wiki/Cauchy_distribution "Cauchy distribution") [Exponential power](https://en.wikipedia.org/wiki/Generalized_normal_distribution#Version_1 "Generalized normal distribution") [Fisher's *z*](https://en.wikipedia.org/wiki/Fisher%27s_z-distribution "Fisher's z-distribution") [Kaniadakis Îș-Gaussian](https://en.wikipedia.org/wiki/Kaniadakis_Gaussian_distribution "Kaniadakis Gaussian distribution") [Gaussian *q*](https://en.wikipedia.org/wiki/Gaussian_q-distribution "Gaussian q-distribution") [Generalized hyperbolic](https://en.wikipedia.org/wiki/Generalised_hyperbolic_distribution "Generalised hyperbolic distribution") [Generalized logistic (logistic-beta)](https://en.wikipedia.org/wiki/Generalized_logistic_distribution "Generalized logistic distribution") [Generalized normal](https://en.wikipedia.org/wiki/Generalized_normal_distribution "Generalized normal distribution") [Geometric stable](https://en.wikipedia.org/wiki/Geometric_stable_distribution "Geometric stable distribution") [Gumbel](https://en.wikipedia.org/wiki/Gumbel_distribution "Gumbel distribution") [Holtsmark](https://en.wikipedia.org/wiki/Holtsmark_distribution "Holtsmark distribution") [Hyperbolic secant](https://en.wikipedia.org/wiki/Hyperbolic_secant_distribution "Hyperbolic secant distribution") [Johnson's *SU*](https://en.wikipedia.org/wiki/Johnson%27s_SU-distribution "Johnson's SU-distribution") [Landau](https://en.wikipedia.org/wiki/Landau_distribution "Landau distribution") [Laplace](https://en.wikipedia.org/wiki/Laplace_distribution "Laplace distribution") [Asymmetric](https://en.wikipedia.org/wiki/Asymmetric_Laplace_distribution "Asymmetric Laplace distribution") [Logistic](https://en.wikipedia.org/wiki/Logistic_distribution "Logistic distribution") [Noncentral *t*](https://en.wikipedia.org/wiki/Noncentral_t-distribution 
"Noncentral t-distribution") [Normal (Gaussian)]() [Normal-inverse Gaussian](https://en.wikipedia.org/wiki/Normal-inverse_Gaussian_distribution "Normal-inverse Gaussian distribution") [Skew normal](https://en.wikipedia.org/wiki/Skew_normal_distribution "Skew normal distribution") [Slash](https://en.wikipedia.org/wiki/Slash_distribution "Slash distribution") [Stable](https://en.wikipedia.org/wiki/Stable_distribution "Stable distribution") [Student's *t*](https://en.wikipedia.org/wiki/Student%27s_t-distribution "Student's t-distribution") [Tracy–Widom](https://en.wikipedia.org/wiki/Tracy%E2%80%93Widom_distribution "Tracy–Widom distribution") [Variance-gamma](https://en.wikipedia.org/wiki/Variance-gamma_distribution "Variance-gamma distribution") [Voigt](https://en.wikipedia.org/wiki/Voigt_profile "Voigt profile") | | with support whose type varies | [Generalized chi-squared](https://en.wikipedia.org/wiki/Generalized_chi-squared_distribution "Generalized chi-squared distribution") [Generalized extreme value](https://en.wikipedia.org/wiki/Generalized_extreme_value_distribution "Generalized extreme value distribution") [Generalized Pareto](https://en.wikipedia.org/wiki/Generalized_Pareto_distribution "Generalized Pareto distribution") [Marchenko–Pastur](https://en.wikipedia.org/wiki/Marchenko%E2%80%93Pastur_distribution "Marchenko–Pastur distribution") [Kaniadakis *Îș*\-exponential](https://en.wikipedia.org/wiki/Kaniadakis_Exponential_distribution "Kaniadakis Exponential distribution") [Kaniadakis *Îș*\-Gamma](https://en.wikipedia.org/wiki/Kaniadakis_Gamma_distribution "Kaniadakis Gamma distribution") [Kaniadakis *Îș*\-Weibull](https://en.wikipedia.org/wiki/Kaniadakis_Weibull_distribution "Kaniadakis Weibull distribution") [Kaniadakis *Îș*\-Logistic](https://en.wikipedia.org/wiki/Kaniadakis_Logistic_distribution "Kaniadakis Logistic distribution") [Kaniadakis *Îș*\-Erlang](https://en.wikipedia.org/wiki/Kaniadakis_Erlang_distribution "Kaniadakis Erlang distribution") [*q*\-exponential](https://en.wikipedia.org/wiki/Q-exponential_distribution "Q-exponential distribution") [*q*\-Gaussian](https://en.wikipedia.org/wiki/Q-Gaussian_distribution "Q-Gaussian distribution") [*q*\-Weibull](https://en.wikipedia.org/wiki/Q-Weibull_distribution "Q-Weibull distribution") [Shifted log-logistic](https://en.wikipedia.org/wiki/Shifted_log-logistic_distribution "Shifted log-logistic distribution") [Tukey lambda](https://en.wikipedia.org/wiki/Tukey_lambda_distribution "Tukey lambda distribution") | | Mixed univariate | | | | | | continuous- discrete | [Rectified Gaussian](https://en.wikipedia.org/wiki/Rectified_Gaussian_distribution "Rectified Gaussian distribution") | | [Multivariate (joint)](https://en.wikipedia.org/wiki/Joint_probability_distribution "Joint probability distribution") | *Discrete:* [Ewens](https://en.wikipedia.org/wiki/Ewens%27s_sampling_formula "Ewens's sampling formula") [Multinomial](https://en.wikipedia.org/wiki/Multinomial_distribution "Multinomial distribution") [Dirichlet](https://en.wikipedia.org/wiki/Dirichlet-multinomial_distribution "Dirichlet-multinomial distribution") [Negative](https://en.wikipedia.org/wiki/Negative_multinomial_distribution "Negative multinomial distribution") *Continuous:* [Dirichlet](https://en.wikipedia.org/wiki/Dirichlet_distribution "Dirichlet distribution") [Generalized](https://en.wikipedia.org/wiki/Generalized_Dirichlet_distribution "Generalized Dirichlet distribution") [Multivariate Laplace](https://en.wikipedia.org/wiki/Multivariate_Laplace_distribution 
"Multivariate Laplace distribution") [Multivariate normal](https://en.wikipedia.org/wiki/Multivariate_normal_distribution "Multivariate normal distribution") [Multivariate stable](https://en.wikipedia.org/wiki/Multivariate_stable_distribution "Multivariate stable distribution") [Multivariate *t*](https://en.wikipedia.org/wiki/Multivariate_t-distribution "Multivariate t-distribution") [Normal-gamma](https://en.wikipedia.org/wiki/Normal-gamma_distribution "Normal-gamma distribution") [Inverse](https://en.wikipedia.org/wiki/Normal-inverse-gamma_distribution "Normal-inverse-gamma distribution") *[Matrix-valued:](https://en.wikipedia.org/wiki/Random_matrix "Random matrix")* [LKJ](https://en.wikipedia.org/wiki/Lewandowski-Kurowicka-Joe_distribution "Lewandowski-Kurowicka-Joe distribution") [Matrix beta](https://en.wikipedia.org/wiki/Matrix_variate_beta_distribution "Matrix variate beta distribution") [Matrix *F*](https://en.wikipedia.org/wiki/Matrix_F-distribution "Matrix F-distribution") [Matrix normal](https://en.wikipedia.org/wiki/Matrix_normal_distribution "Matrix normal distribution") [Matrix *t*](https://en.wikipedia.org/wiki/Matrix_t-distribution "Matrix t-distribution") [Matrix gamma](https://en.wikipedia.org/wiki/Matrix_gamma_distribution "Matrix gamma distribution") [Inverse](https://en.wikipedia.org/wiki/Inverse_matrix_gamma_distribution "Inverse matrix gamma distribution") [Wishart](https://en.wikipedia.org/wiki/Wishart_distribution "Wishart distribution") [Normal](https://en.wikipedia.org/wiki/Normal-Wishart_distribution "Normal-Wishart distribution") [Inverse](https://en.wikipedia.org/wiki/Inverse-Wishart_distribution "Inverse-Wishart distribution") [Normal-inverse](https://en.wikipedia.org/wiki/Normal-inverse-Wishart_distribution "Normal-inverse-Wishart distribution") [Complex](https://en.wikipedia.org/wiki/Complex_Wishart_distribution "Complex Wishart distribution") [Uniform distribution on a Stiefel manifold](https://en.wikipedia.org/wiki/Uniform_distribution_on_a_Stiefel_manifold "Uniform distribution on a Stiefel manifold") | | [Directional](https://en.wikipedia.org/wiki/Directional_statistics "Directional statistics") | *Univariate (circular) [directional](https://en.wikipedia.org/wiki/Directional_statistics "Directional statistics")* [Circular uniform](https://en.wikipedia.org/wiki/Circular_uniform_distribution "Circular uniform distribution") [Univariate von Mises](https://en.wikipedia.org/wiki/Von_Mises_distribution "Von Mises distribution") [Wrapped normal](https://en.wikipedia.org/wiki/Wrapped_normal_distribution "Wrapped normal distribution") [Wrapped Cauchy](https://en.wikipedia.org/wiki/Wrapped_Cauchy_distribution "Wrapped Cauchy distribution") [Wrapped exponential](https://en.wikipedia.org/wiki/Wrapped_exponential_distribution "Wrapped exponential distribution") [Wrapped asymmetric Laplace](https://en.wikipedia.org/wiki/Wrapped_asymmetric_Laplace_distribution "Wrapped asymmetric Laplace distribution") [Wrapped LĂ©vy](https://en.wikipedia.org/wiki/Wrapped_L%C3%A9vy_distribution "Wrapped LĂ©vy distribution") *Bivariate (spherical)* [Kent](https://en.wikipedia.org/wiki/Kent_distribution "Kent distribution") *Bivariate (toroidal)* [Bivariate von Mises](https://en.wikipedia.org/wiki/Bivariate_von_Mises_distribution "Bivariate von Mises distribution") *Multivariate* [von Mises–Fisher](https://en.wikipedia.org/wiki/Von_Mises%E2%80%93Fisher_distribution "Von Mises–Fisher distribution") [Bingham](https://en.wikipedia.org/wiki/Bingham_distribution "Bingham distribution") | | 
[Degenerate](https://en.wikipedia.org/wiki/Degenerate_distribution "Degenerate distribution") and [singular](https://en.wikipedia.org/wiki/Singular_distribution "Singular distribution") | *Degenerate* [Dirac delta function](https://en.wikipedia.org/wiki/Dirac_delta_function "Dirac delta function") *Singular* [Cantor](https://en.wikipedia.org/wiki/Cantor_distribution "Cantor distribution") | | Families | [Circular](https://en.wikipedia.org/wiki/Circular_distribution "Circular distribution") [Compound Poisson](https://en.wikipedia.org/wiki/Compound_Poisson_distribution "Compound Poisson distribution") [Elliptical](https://en.wikipedia.org/wiki/Elliptical_distribution "Elliptical distribution") [Exponential](https://en.wikipedia.org/wiki/Exponential_family "Exponential family") [Natural exponential](https://en.wikipedia.org/wiki/Natural_exponential_family "Natural exponential family") [Location–scale](https://en.wikipedia.org/wiki/Location%E2%80%93scale_family "Location–scale family") [Maximum entropy](https://en.wikipedia.org/wiki/Maximum_entropy_probability_distribution "Maximum entropy probability distribution") [Mixture](https://en.wikipedia.org/wiki/Mixture_distribution "Mixture distribution") [Pearson](https://en.wikipedia.org/wiki/Pearson_distribution "Pearson distribution") [Tweedie](https://en.wikipedia.org/wiki/Tweedie_distribution "Tweedie distribution") [Wrapped](https://en.wikipedia.org/wiki/Wrapped_distribution "Wrapped distribution") | | ![](https://upload.wikimedia.org/wikipedia/en/thumb/9/96/Symbol_category_class.svg/20px-Symbol_category_class.svg.png) [Category](https://en.wikipedia.org/wiki/Category:Probability_distributions "Category:Probability distributions") [![](https://upload.wikimedia.org/wikipedia/en/thumb/4/4a/Commons-logo.svg/20px-Commons-logo.svg.png)](https://en.wikipedia.org/wiki/File:Commons-logo.svg "Commons page") [Commons](https://commons.wikimedia.org/wiki/Category:Probability_distributions "commons:Category:Probability distributions") | | | [Authority control databases](https://en.wikipedia.org/wiki/Help:Authority_control "Help:Authority control") [![Edit this at Wikidata](https://upload.wikimedia.org/wikipedia/en/thumb/8/8a/OOjs_UI_icon_edit-ltr-progressive.svg/20px-OOjs_UI_icon_edit-ltr-progressive.svg.png)](https://www.wikidata.org/wiki/Q133871#identifiers "Edit this at Wikidata") | | |---|---| | International | [GND](https://d-nb.info/gnd/4075494-7) | | National | [United States](https://id.loc.gov/authorities/sh85053556) [France](https://catalogue.bnf.fr/ark:/12148/cb119421818) [BnF data](https://data.bnf.fr/ark:/12148/cb119421818) [Czech Republic](https://aleph.nkp.cz/F/?func=find-c&local_base=aut&ccl_term=ica=ph123321&CON_LNG=ENG) [Israel](https://www.nli.org.il/en/authorities/987007560462505171) | | Other | [Yale LUX](https://lux.collections.yale.edu/view/concept/d5b5f87b-74e6-4f34-996b-7308b1fe9b73) | ![](https://en.wikipedia.org/wiki/Special:CentralAutoLogin/start?useformat=desktop&type=1x1&usesul3=1) Retrieved from "<https://en.wikipedia.org/w/index.php?title=Normal_distribution&oldid=1344852379>" [Categories](https://en.wikipedia.org/wiki/Help:Category "Help:Category"): - [Normal distribution](https://en.wikipedia.org/wiki/Category:Normal_distribution "Category:Normal distribution") - [Continuous distributions](https://en.wikipedia.org/wiki/Category:Continuous_distributions "Category:Continuous distributions") - [Conjugate prior distributions](https://en.wikipedia.org/wiki/Category:Conjugate_prior_distributions "Category:Conjugate prior 
distributions") - [Exponential family distributions](https://en.wikipedia.org/wiki/Category:Exponential_family_distributions "Category:Exponential family distributions") - [Stable distributions](https://en.wikipedia.org/wiki/Category:Stable_distributions "Category:Stable distributions") - [Location-scale family probability distributions](https://en.wikipedia.org/wiki/Category:Location-scale_family_probability_distributions "Category:Location-scale family probability distributions") Hidden categories: - [All articles with unsourced statements](https://en.wikipedia.org/wiki/Category:All_articles_with_unsourced_statements "Category:All articles with unsourced statements") - [Articles with unsourced statements from June 2011](https://en.wikipedia.org/wiki/Category:Articles_with_unsourced_statements_from_June_2011 "Category:Articles with unsourced statements from June 2011") - [CS1: long volume value](https://en.wikipedia.org/wiki/Category:CS1:_long_volume_value "Category:CS1: long volume value") - [Articles with short description](https://en.wikipedia.org/wiki/Category:Articles_with_short_description "Category:Articles with short description") - [Short description matches Wikidata](https://en.wikipedia.org/wiki/Category:Short_description_matches_Wikidata "Category:Short description matches Wikidata") - [Use mdy dates from August 2012](https://en.wikipedia.org/wiki/Category:Use_mdy_dates_from_August_2012 "Category:Use mdy dates from August 2012") - [Pages using infobox probability distribution with unknown parameters](https://en.wikipedia.org/wiki/Category:Pages_using_infobox_probability_distribution_with_unknown_parameters "Category:Pages using infobox probability distribution with unknown parameters") - [Articles with unsourced statements from February 2023](https://en.wikipedia.org/wiki/Category:Articles_with_unsourced_statements_from_February_2023 "Category:Articles with unsourced statements from February 2023") - [Articles with unsourced statements from June 2025](https://en.wikipedia.org/wiki/Category:Articles_with_unsourced_statements_from_June_2025 "Category:Articles with unsourced statements from June 2025") - [CS1 Latin-language sources (la)](https://en.wikipedia.org/wiki/Category:CS1_Latin-language_sources_\(la\) "Category:CS1 Latin-language sources (la)") - [Commons category link is on Wikidata](https://en.wikipedia.org/wiki/Category:Commons_category_link_is_on_Wikidata "Category:Commons category link is on Wikidata") - This page was last edited on 22 March 2026, at 23:03 (UTC). - Text is available under the [Creative Commons Attribution-ShareAlike 4.0 License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_the_Creative_Commons_Attribution-ShareAlike_4.0_International_License "Wikipedia:Text of the Creative Commons Attribution-ShareAlike 4.0 International License"); additional terms may apply. By using this site, you agree to the [Terms of Use](https://foundation.wikimedia.org/wiki/Special:MyLanguage/Policy:Terms_of_Use "foundation:Special:MyLanguage/Policy:Terms of Use") and [Privacy Policy](https://foundation.wikimedia.org/wiki/Special:MyLanguage/Policy:Privacy_policy "foundation:Special:MyLanguage/Policy:Privacy policy"). WikipediaÂź is a registered trademark of the [Wikimedia Foundation, Inc.](https://wikimediafoundation.org/), a non-profit organization. 
- [Privacy policy](https://foundation.wikimedia.org/wiki/Special:MyLanguage/Policy:Privacy_policy) - [About Wikipedia](https://en.wikipedia.org/wiki/Wikipedia:About) - [Disclaimers](https://en.wikipedia.org/wiki/Wikipedia:General_disclaimer) - [Contact Wikipedia](https://en.wikipedia.org/wiki/Wikipedia:Contact_us) - [Legal & safety contacts](https://foundation.wikimedia.org/wiki/Special:MyLanguage/Legal:Wikimedia_Foundation_Legal_and_Safety_Contact_Information) - [Code of Conduct](https://foundation.wikimedia.org/wiki/Special:MyLanguage/Policy:Universal_Code_of_Conduct) - [Developers](https://developer.wikimedia.org/) - [Statistics](https://stats.wikimedia.org/#/en.wikipedia.org) - [Cookie statement](https://foundation.wikimedia.org/wiki/Special:MyLanguage/Policy:Cookie_statement) - [Mobile view](https://en.wikipedia.org/w/index.php?title=Normal_distribution&mobileaction=toggle_view_mobile) - [![Wikimedia Foundation](https://en.wikipedia.org/static/images/footer/wikimedia.svg)](https://www.wikimedia.org/) - [![Powered by MediaWiki](https://en.wikipedia.org/w/resources/assets/mediawiki_compact.svg)](https://www.mediawiki.org/) Search Toggle the table of contents Normal distribution 73 languages [Add topic](https://en.wikipedia.org/wiki/Normal_distribution)
Readable Markdown
| Normal distribution | |
|---|---|
| Probability density function | [![Probability density function](https://upload.wikimedia.org/wikipedia/commons/thumb/7/74/Normal_Distribution_PDF.svg/500px-Normal_Distribution_PDF.svg.png)](https://en.wikipedia.org/wiki/File:Normal_Distribution_PDF.svg) The red curve is the [*standard normal distribution*](https://en.wikipedia.org/wiki/Normal_distribution#Standard_normal_distribution). |
| Cumulative distribution function | [![Cumulative distribution function](https://upload.wikimedia.org/wikipedia/commons/thumb/c/ca/Normal_Distribution_CDF.svg/500px-Normal_Distribution_CDF.svg.png)](https://en.wikipedia.org/wiki/File:Normal_Distribution_CDF.svg) |
| Notation | $\mathcal{N}(\mu, \sigma^2)$ |

In [probability theory](https://en.wikipedia.org/wiki/Probability_theory) and [statistics](https://en.wikipedia.org/wiki/Statistics), a **normal distribution** or **Gaussian distribution** is a type of [continuous probability distribution](https://en.wikipedia.org/wiki/Continuous_probability_distribution) for a [real-valued](https://en.wikipedia.org/wiki/Real_number) [random variable](https://en.wikipedia.org/wiki/Random_variable). The general form of its [probability density function](https://en.wikipedia.org/wiki/Probability_density_function) is[\[2\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-The_Joy_of_Finite_Mathematics-2)[\[3\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Mathematics_for_Physical_Science_and_Engineering-3)[\[4\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-4)

$$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)\,.$$

The parameter $\mu$ is the [mean](https://en.wikipedia.org/wiki/Mean#Mean_of_a_probability_distribution) or [expectation](https://en.wikipedia.org/wiki/Expected_value) of the distribution (and also its [median](https://en.wikipedia.org/wiki/Median) and [mode](https://en.wikipedia.org/wiki/Mode_\(statistics\))), while the parameter $\sigma^2$ is the [variance](https://en.wikipedia.org/wiki/Variance). The [standard deviation](https://en.wikipedia.org/wiki/Standard_deviation) of the distribution is the positive value $\sigma$ (sigma). A random variable with a Gaussian distribution is said to be **normally distributed** and is called a **normal deviate**.
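To make the density formula concrete, here is a minimal sketch in Python (an illustration added to this inspector view, not part of the crawled article; NumPy is assumed, and `normal_pdf` is a hypothetical helper name) that evaluates $f(x)$ directly from the formula and sanity-checks that the density integrates to 1:

```python
import numpy as np

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of N(mu, sigma^2) at x, computed straight from the formula above."""
    coeff = 1.0 / np.sqrt(2.0 * np.pi * sigma**2)
    return coeff * np.exp(-((x - mu) ** 2) / (2.0 * sigma**2))

# Sanity check: a crude Riemann sum of the density should be ~1.
x = np.linspace(-20.0, 20.0, 400_001)
area = normal_pdf(x, mu=1.0, sigma=2.0).sum() * (x[1] - x[0])
print(f"area = {area:.6f}")  # close to 1.000000
```

Where SciPy is available, the same values can be cross-checked against `scipy.stats.norm(loc=1.0, scale=2.0).pdf(x)`.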
Normal distributions are important in [statistics](https://en.wikipedia.org/wiki/Statistics) and are often used in the [natural](https://en.wikipedia.org/wiki/Natural_science) and [social sciences](https://en.wikipedia.org/wiki/Social_science) to represent real-valued [random variables](https://en.wikipedia.org/wiki/Random_variable) whose distributions are not known.[\[5\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-5)[\[6\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-6) Their importance is partly due to the [central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem). It states that the average of many [statistically independent](https://en.wikipedia.org/wiki/Statistically_independent) samples (observations) of a random variable with finite mean and variance is itself a random variable whose distribution [converges](https://en.wikipedia.org/wiki/Convergence_in_distribution) to a normal distribution as the number of samples increases. Therefore, physical quantities that are expected to be the sum of many independent processes, such as [measurement errors](https://en.wikipedia.org/wiki/Measurement_error), often have distributions that are nearly normal.[\[7\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-7)

Moreover, Gaussian distributions have some unique properties that are valuable in analytic studies. For instance, any [linear combination](https://en.wikipedia.org/wiki/Linear_combination) of a fixed collection of independent normal deviates is a normal deviate. Many results and methods, such as [propagation of uncertainty](https://en.wikipedia.org/wiki/Propagation_of_uncertainty) and [least squares](https://en.wikipedia.org/wiki/Least_squares)[\[8\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-8) parameter fitting, can be derived analytically in explicit form when the relevant variables are normally distributed.

A normal distribution is sometimes informally called a **bell curve**.[\[9\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-www.mathsisfun.com-9)[\[10\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-10) However, many other distributions are [bell-shaped](https://en.wikipedia.org/wiki/Bell-shaped_function) (such as the [Cauchy](https://en.wikipedia.org/wiki/Cauchy_distribution), [Student's t](https://en.wikipedia.org/wiki/Student%27s_t-distribution), and [logistic](https://en.wikipedia.org/wiki/Logistic_distribution) distributions). (For other names, see *[Naming](https://en.wikipedia.org/wiki/Normal_distribution#Naming)*.)

The [univariate probability distribution](https://en.wikipedia.org/wiki/Univariate_distribution) is generalized for [vectors](https://en.wikipedia.org/wiki/Vector_\(mathematics_and_physics\)) in the [multivariate normal distribution](https://en.wikipedia.org/wiki/Multivariate_normal_distribution) and for matrices in the [matrix normal distribution](https://en.wikipedia.org/wiki/Matrix_normal_distribution).
### Standard normal distribution

The simplest case of a normal distribution is known as the **standard normal distribution** or **unit normal distribution**. This is a special case when $\mu =0$ and $\sigma ^{2}=1$, and it is described by this [probability density function](https://en.wikipedia.org/wiki/Probability_density_function) (or density):[\[11\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-11)

$$\varphi (z)={\frac {e^{-z^{2}/2}}{\sqrt {2\pi }}}\,.$$

The variable $z$ has a mean of 0 and a variance and standard deviation of 1. The density $\varphi (z)$ has its peak value ${\tfrac {1}{\sqrt {2\pi }}}$ at $z=0$ and [inflection points](https://en.wikipedia.org/wiki/Inflection_point) at $z=+1$ and $z=-1$.

Although the density above is most commonly known as the *standard normal*, a few authors have used that term to describe other versions of the normal distribution.
[Carl Friedrich Gauss](https://en.wikipedia.org/wiki/Carl_Friedrich_Gauss), for example, once defined the standard normal as

$$\varphi (z)={\frac {1}{\sqrt {\pi }}}e^{-z^{2}},$$

which has a variance of ${\tfrac {1}{2}}$, and [Stephen Stigler](https://en.wikipedia.org/wiki/Stephen_Stigler) once defined the standard normal as

$$\varphi (z)=e^{-\pi z^{2}},$$

which has a simple functional form and a variance of $\sigma ^{2}={\tfrac {1}{2\pi }}$.[\[12\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-12)

### General normal distribution

If $Z$ is a [standard normal deviate](https://en.wikipedia.org/wiki/Standard_normal_deviate), then $X=\sigma Z+\mu$ will have a normal distribution with expected value $\mu$ and standard deviation $\sigma$. This is equivalent to saying that the standard normal distribution $Z$ can be scaled/stretched by a factor of $\sigma$ and shifted by $\mu$ to yield a different normal distribution, called $X$. Conversely, if $X$ is a normal deviate with parameters $\mu$ and $\sigma ^{2}$, then this $X$ distribution can be re-scaled and shifted via the formula $Z=(X-\mu )/\sigma$ to convert it to the standard normal distribution.
This variate is also called the standardized form of $X$. In particular, the probability density function for $X$ can be written in terms of the standard normal density $\varphi$ (with zero mean and unit variance):

$$f(x\mid \mu ,\sigma ^{2})={\frac {1}{\sigma }}\varphi \left({\frac {x-\mu }{\sigma }}\right)\,.$$

The probability density must be scaled by $1/\sigma$ so that the [integral](https://en.wikipedia.org/wiki/Integral) is still 1.

The probability density of the standard Gaussian distribution (standard normal distribution, with zero mean and unit variance) is often denoted with the Greek letter $\phi$ ([phi](https://en.wikipedia.org/wiki/Phi)).[\[13\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-13) The variant form of the Greek letter phi, $\varphi$, is also used quite often.

The normal distribution is often referred to as $N(\mu ,\sigma ^{2})$ or ${\mathcal {N}}(\mu ,\sigma ^{2})$.[\[14\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-14) Thus when a random variable $X$ is normally distributed with mean $\mu$ and standard deviation $\sigma$, one may write

$$X\sim {\mathcal {N}}(\mu ,\sigma ^{2}).$$

### Alternative parameterizations

Some authors advocate using the [precision](https://en.wikipedia.org/wiki/Precision_\(statistics\)) $\tau$ as the parameter defining the width of the distribution, instead of the standard deviation $\sigma$ or the variance $\sigma ^{2}$.
The precision is normally defined as the reciprocal of the variance, $1/\sigma ^{2}$.[\[15\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-15) The formula for the distribution then becomes

$$f(x)={\sqrt {\frac {\tau }{2\pi }}}e^{-\tau (x-\mu )^{2}/2}.$$

This choice is claimed to have advantages in numerical computations when $\sigma$ is very close to zero, and simplifies formulas in some contexts, such as in the [Bayesian inference](https://en.wikipedia.org/wiki/Bayesian_statistics) of variables with [multivariate normal distribution](https://en.wikipedia.org/wiki/Multivariate_normal_distribution).

Alternatively, the reciprocal of the standard deviation $\tau '=1/\sigma$ might be defined as the *precision*, in which case the expression of the normal distribution becomes

$$f(x)={\frac {\tau '}{\sqrt {2\pi }}}e^{-(\tau ')^{2}(x-\mu )^{2}/2}.$$

According to Stigler, this formulation is advantageous because of a much simpler and easier-to-remember formula, and simple approximate formulas for the [quantiles](https://en.wikipedia.org/wiki/Quantile) of the distribution.

Normal distributions form an [exponential family](https://en.wikipedia.org/wiki/Exponential_family) with [natural parameters](https://en.wikipedia.org/wiki/Natural_parameter) $\theta _{1}={\frac {\mu }{\sigma ^{2}}}$ and $\theta _{2}=-{\frac {1}{2\sigma ^{2}}}$, and natural statistics $x$ and $x^{2}$. The dual expectation parameters for the normal distribution are $\eta _{1}=\mu$ and $\eta _{2}=\mu ^{2}+\sigma ^{2}$.
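A short sketch (mine, not the article's) checking that the variance, precision $\tau = 1/\sigma^2$, and reciprocal-standard-deviation $\tau' = 1/\sigma$ parameterizations above describe the same density; all function names are illustrative:

```python
import math

def pdf_variance(x, mu, sigma2):
    # f(x) parameterized by the variance sigma^2
    return math.exp(-(x - mu) ** 2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)

def pdf_precision(x, mu, tau):
    # f(x) = sqrt(tau / (2*pi)) * exp(-tau * (x - mu)^2 / 2)
    return math.sqrt(tau / (2 * math.pi)) * math.exp(-tau * (x - mu) ** 2 / 2)

def pdf_inv_sd(x, mu, tau_prime):
    # f(x) = (tau' / sqrt(2*pi)) * exp(-(tau')^2 * (x - mu)^2 / 2)
    return tau_prime / math.sqrt(2 * math.pi) * math.exp(-(tau_prime ** 2) * (x - mu) ** 2 / 2)

mu, sigma = 1.5, 0.7
for x in (-1.0, 0.0, 2.0):
    a = pdf_variance(x, mu, sigma ** 2)
    b = pdf_precision(x, mu, 1 / sigma ** 2)
    c = pdf_inv_sd(x, mu, 1 / sigma)
    assert abs(a - b) < 1e-12 and abs(a - c) < 1e-12  # same curve, three parameterizations
```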
### Cumulative distribution function

The [cumulative distribution function](https://en.wikipedia.org/wiki/Cumulative_distribution_function) (CDF) of the standard normal distribution, usually denoted with the capital Greek letter $\Phi$, is the integral

$$\Phi (x)={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{x}e^{-t^{2}/2}\,dt\,.$$

The related [error function](https://en.wikipedia.org/wiki/Error_function) $\operatorname {erf} (x)$ gives the probability of a random variable, with normal distribution of mean 0 and variance 1/2, falling in the range $[-x,x]$. That is:

$$\operatorname {erf} (x)={\frac {1}{\sqrt {\pi }}}\int _{-x}^{x}e^{-t^{2}}\,dt={\frac {2}{\sqrt {\pi }}}\int _{0}^{x}e^{-t^{2}}\,dt\,.$$

These integrals cannot be expressed in terms of elementary functions, and are often said to be [special functions](https://en.wikipedia.org/wiki/Special_function). However, many numerical approximations are known; see [below](https://en.wikipedia.org/wiki/Normal_distribution#Numerical_approximations_for_the_normal_cumulative_distribution_function_and_normal_quantile_function) for more.
The two functions are closely related, namely

$$\Phi (x)={\frac {1}{2}}\left[1+\operatorname {erf} \left({\frac {x}{\sqrt {2}}}\right)\right].$$

For a generic normal distribution with density $f$, mean $\mu$ and variance $\sigma ^{2}$, the cumulative distribution function is

$$F(x)=\Phi \left({\frac {x-\mu }{\sigma }}\right)={\frac {1}{2}}\left[1+\operatorname {erf} \left({\frac {x-\mu }{\sigma {\sqrt {2}}}}\right)\right].$$

The probability that $X$ lies between $a$ and $b$ with $a<b$ is therefore[\[16\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-KunIlPark-16): 84

$$\operatorname {P} (a<X\leq b)={\frac {1}{2}}\left[\operatorname {erf} \left({\frac {b-\mu }{\sigma {\sqrt {2}}}}\right)-\operatorname {erf} \left({\frac {a-\mu }{\sigma {\sqrt {2}}}}\right)\right].$$

The complement of the standard normal cumulative distribution function, $Q(x)=1-\Phi (x)$, is often called the [Q-function](https://en.wikipedia.org/wiki/Q-function), especially in engineering texts.[\[17\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-17)[\[18\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-18) It gives the probability that the value of a standard normal random variable $X$ will exceed $x$: $P(X>x)$. Other definitions of the $Q$-function, all of which are simple transformations of $\Phi$, are also used occasionally.[\[19\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-19)

The [graph](https://en.wikipedia.org/wiki/Graph_of_a_function) of the standard normal cumulative distribution function $\Phi$ has 2-fold [rotational symmetry](https://en.wikipedia.org/wiki/Rotational_symmetry) around the point $(0,1/2)$; that is, $\Phi (-x)=1-\Phi (x)$.
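A small sketch using the identity above, $\Phi(x) = \tfrac{1}{2}[1 + \operatorname{erf}(x/\sqrt{2})]$; `math.erf` is in the Python standard library, while `normal_interval_prob` is just an illustrative name:

```python
import math

def std_normal_cdf(x: float) -> float:
    # Phi(x) via the erf identity above
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def normal_interval_prob(a: float, b: float, mu: float, sigma: float) -> float:
    """P(a < X <= b) for X ~ N(mu, sigma^2), via the erf form of the CDF."""
    z = lambda t: (t - mu) / (sigma * math.sqrt(2.0))
    return 0.5 * (math.erf(z(b)) - math.erf(z(a)))

print(std_normal_cdf(1.96))                        # ~0.975
print(normal_interval_prob(-1.0, 1.0, 0.0, 1.0))   # ~0.6827, the one-sigma mass
```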
The [antiderivative](https://en.wikipedia.org/wiki/Antiderivative) (indefinite integral) of $\Phi$ can be expressed as follows:

$$\int \Phi (x)\,dx=x\Phi (x)+\varphi (x)+C.$$

An [asymptotic expansion](https://en.wikipedia.org/wiki/Asymptotic_expansion) of the cumulative distribution function for large $x$ can be derived using [integration by parts](https://en.wikipedia.org/wiki/Integration_by_parts):

$$\Phi (x)={\frac {1}{2}}+{\frac {1}{\sqrt {2\pi }}}e^{-x^{2}/2}\sum _{n=0}^{\infty }{\frac {x^{2n+1}}{(2n+1)!!}}\,,$$

where $!!$ denotes the [double factorial](https://en.wikipedia.org/wiki/Double_factorial). For more, see [Error function § Asymptotic expansion](https://en.wikipedia.org/wiki/Error_function#Asymptotic_expansion).[\[20\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-20)

#### Taylor series representation

The [Taylor series](https://en.wikipedia.org/wiki/Taylor_series) for the normal distribution $\varphi$ can be derived by substituting $-{\tfrac {1}{2}}x^{2}$ into the [Taylor series for the exponential function](https://en.wikipedia.org/wiki/Exponential_function#Power_series):[\[21\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-duff-21)

$$\varphi (x)={\frac {1}{\sqrt {2\pi }}}\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{n!\,2^{n}}}x^{2n}.$$

This series can be integrated term by term to obtain the Taylor series for the cumulative distribution function:[\[22\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-kendall-22)

$$\Phi (x)={\frac {1}{2}}+{\frac {1}{\sqrt {2\pi }}}\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{n!\,2^{n}(2n+1)}}x^{2n+1}.$$

However, this series is ineffective for calculation due to slow convergence, except when $x$ is small.[\[22\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-kendall-22) Both of these series describe [entire functions](https://en.wikipedia.org/wiki/Entire_function), which converge for all real and complex values of $x$.
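A sketch of the CDF Taylor series above; as the text notes, it is only practical for small $|x|$, where it matches the erf-based value closely:

```python
import math

def phi_taylor(x: float, terms: int = 40) -> float:
    # Phi(x) = 1/2 + (1/sqrt(2*pi)) * sum_{n>=0} (-1)^n x^(2n+1) / (n! 2^n (2n+1))
    total = 0.0
    for n in range(terms):
        total += (-1) ** n * x ** (2 * n + 1) / (math.factorial(n) * 2 ** n * (2 * n + 1))
    return 0.5 + total / math.sqrt(2.0 * math.pi)

exact = 0.5 * (1.0 + math.erf(1.0 / math.sqrt(2.0)))
print(phi_taylor(1.0), exact)   # both ~0.841344746, agreeing to machine precision here
```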
#### Recursive computation with Taylor series

The recurrence relation for [Hermite polynomials](https://en.wikipedia.org/wiki/Hermite_polynomials) $\operatorname {He} _{n}(x)$ may be used to efficiently construct the [Taylor series](https://en.wikipedia.org/wiki/Taylor_series) expansion about any point $x_{0}$:

$$\Phi (x)=\sum _{n=0}^{\infty }{\frac {\Phi ^{(n)}(x_{0})}{n!}}(x-x_{0})^{n}\,,$$

where:

$$\begin{aligned}\Phi ^{(0)}(x_{0})&={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{x_{0}}e^{-t^{2}/2}\,dt\\\Phi ^{(1)}(x_{0})&={\frac {1}{\sqrt {2\pi }}}e^{-x_{0}^{2}/2}\\\Phi ^{(n)}(x_{0})&=-\left(x_{0}\Phi ^{(n-1)}(x_{0})+(n-2)\Phi ^{(n-2)}(x_{0})\right),&n\geq 2\,.\end{aligned}$$

#### Standard deviation and coverage

[![Standard deviation diagram](https://upload.wikimedia.org/wikipedia/commons/thumb/8/8c/Standard_deviation_diagram.svg/500px-Standard_deviation_diagram.svg.png)](https://en.wikipedia.org/wiki/File:Standard_deviation_diagram.svg) For the normal distribution, the values less than one standard deviation from the mean account for 68.27% of the set; while two standard deviations from the mean account for 95.45%; and three standard deviations account for 99.73%.

About 68% of values drawn from a normal distribution are within one standard deviation $\sigma$ from the mean; about 95% of the values lie within two standard deviations; and about 99.7% are within three standard deviations.[\[9\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-www.mathsisfun.com-9) This is known as the [68–95–99.7 (empirical) rule](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule), or the *3-sigma rule*.
More precisely, the probability that a normal deviate lies in the range between $\mu -n\sigma$ and $\mu +n\sigma$ is given by

$$F(\mu +n\sigma )-F(\mu -n\sigma )=\Phi (n)-\Phi (-n)=\operatorname {erf} \left({\frac {n}{\sqrt {2}}}\right).$$

To 12 significant digits, the values for $n=1,2,\ldots ,6$ are:

| $n$ | $p=F(\mu +n\sigma )-F(\mu -n\sigma )$ | $1-p$ | or 1 in | [OEIS](https://en.wikipedia.org/wiki/On-Line_Encyclopedia_of_Integer_Sequences) |
|---|---|---|---|---|
| 1 | 0.682689492137 | 0.317310507863 | 3.15148718753 | [A178647](https://oeis.org/A178647) |
| 2 | 0.954499736104 | 0.045500263896 | 21.9778945080 | [A110894](https://oeis.org/A110894) |
| 3 | 0.997300203937 | 0.002699796063 | 370.398347345 | [A270712](https://oeis.org/A270712) |
| 4 | 0.999936657516 | 0.000063342484 | 15787.1927673 | |
| 5 | 0.999999426697 | 0.000000573303 | 1744277.89362 | |
| 6 | 0.999999998027 | 0.000000001973 | 506797345.897 | |

For large $n$, one can use the approximation

$$1-p\approx {\frac {\sqrt {2}}{n{\sqrt {\pi e^{n^{2}}}}}}\,.$$
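A sketch reproducing the coverage table above from the identity $p = \operatorname{erf}(n/\sqrt{2})$:

```python
import math

for n in range(1, 7):
    p = math.erf(n / math.sqrt(2.0))  # mass within n standard deviations of the mean
    print(f"n={n}  p={p:.12f}  1-p={1 - p:.12e}  or 1 in {1 / (1 - p):,.3f}")
```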
#### Quantile function

The [quantile function](https://en.wikipedia.org/wiki/Quantile_function) of a distribution is the inverse of the cumulative distribution function. The quantile function of the standard normal distribution is called the [probit function](https://en.wikipedia.org/wiki/Probit_function), and can be expressed in terms of the inverse [error function](https://en.wikipedia.org/wiki/Error_function):

$$\Phi ^{-1}(p)={\sqrt {2}}\operatorname {erf} ^{-1}(2p-1),\quad p\in (0,1).$$

For a normal random variable with mean $\mu$ and variance $\sigma ^{2}$, the quantile function is

$$F^{-1}(p)=\mu +\sigma \Phi ^{-1}(p)=\mu +\sigma {\sqrt {2}}\operatorname {erf} ^{-1}(2p-1),\quad p\in (0,1).$$

The [quantile](https://en.wikipedia.org/wiki/Quantile) $\Phi ^{-1}(p)$ of the standard normal distribution is commonly denoted as $z_{p}$. These values are used in [hypothesis testing](https://en.wikipedia.org/wiki/Hypothesis_testing), construction of [confidence intervals](https://en.wikipedia.org/wiki/Confidence_interval) and [Q–Q plots](https://en.wikipedia.org/wiki/Q%E2%80%93Q_plot). A normal random variable $X$ will exceed $\mu +z_{p}\sigma$ with probability $1-p$, and will lie outside the interval $\mu \pm z_{p}\sigma$ with probability $2(1-p)$. In particular, the quantile $z_{0.975}$ is [1.96](https://en.wikipedia.org/wiki/1.96); therefore a normal random variable will lie outside the interval $\mu \pm 1.96\sigma$ in only 5% of cases.

The following table gives the quantile $z_{p}$ such that $X$ will lie in the range $\mu \pm z_{p}\sigma$ with a specified probability $p$.
These values are useful to determine [tolerance intervals](https://en.wikipedia.org/wiki/Tolerance_interval) for [sample averages](https://en.wikipedia.org/wiki/Sample_mean_and_sample_covariance#Sample_mean) and other statistical [estimators](https://en.wikipedia.org/wiki/Estimator) with normal (or [asymptotically](https://en.wikipedia.org/wiki/Asymptotic) normal) distributions.[\[23\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-23) The following table shows ${\sqrt {2}}\operatorname {erf} ^{-1}(p)=\Phi ^{-1}\left({\frac {p+1}{2}}\right)$, not $\Phi ^{-1}(p)$ as defined above.

| $p$ | $z_{p}$ |
|---|---|
| 0.80 | 1.281551565545 |
| 0.90 | 1.644853626951 |
| 0.95 | 1.959963984540 |
| 0.98 | 2.326347874041 |
| 0.99 | 2.575829303549 |
| 0.995 | 2.807033768344 |
| 0.998 | 3.090232306168 |
| 0.999 | 3.290526731492 |

For small $p$, the quantile function has the useful [asymptotic expansion](https://en.wikipedia.org/wiki/Asymptotic_expansion)

$$\Phi ^{-1}(p)=-{\sqrt {\ln {\frac {1}{p^{2}}}-\ln \ln {\frac {1}{p^{2}}}-\ln(2\pi )}}+o(1).$$\[*[citation needed](https://en.wikipedia.org/wiki/Wikipedia:Citation_needed)*\]

#### Using root finding to compute the quantile function

Any of the described approaches for computing the cumulative distribution function $\Phi (x)$ can be used with [Newton's method](https://en.wikipedia.org/wiki/Newton%27s_method) (or another [root-finding algorithm](https://en.wikipedia.org/wiki/Root-finding_algorithm) such as [Halley's method](https://en.wikipedia.org/wiki/Halley%27s_method)) to find the value of $x$ for which $\Phi (x)=q$ for some desired quantile $q$. For example, starting with an initial, approximately correct guess $x_{0}$, increasingly better approximations $x_{1}$, $x_{2}$, ... can be calculated iteratively using Newton's method with

$$x_{n}=x_{n-1}-{\frac {\Phi (x_{n-1})-q}{\varphi (x_{n-1})}}\,.$$

## Properties

The normal distribution is the only distribution whose [cumulants](https://en.wikipedia.org/wiki/Cumulant) beyond the first two (i.e., other than the mean and [variance](https://en.wikipedia.org/wiki/Variance)) are zero. It is also the continuous distribution with the [maximum entropy](https://en.wikipedia.org/wiki/Maximum_entropy_probability_distribution) for a specified mean and variance.[\[24\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-FOOTNOTECoverThomas2006254-24)[\[25\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-25) Geary has shown, assuming that the mean and variance are finite, that the normal distribution is the only distribution where the mean and variance calculated from a set of independent draws are independent of each other.[\[26\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Geary_RC-26)[\[27\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-27)

The normal distribution is a subclass of the [elliptical distributions](https://en.wikipedia.org/wiki/Elliptical_distribution). The normal distribution is [symmetric](https://en.wikipedia.org/wiki/Symmetric_distribution) about its mean, and is non-zero over the entire real line. As such it may not be a suitable model for variables that are inherently positive or strongly skewed, such as the [weight](https://en.wikipedia.org/wiki/Weight) of a person or the price of a [share of stock](https://en.wikipedia.org/wiki/Share_\(finance\)). Such variables may be better described by other distributions, such as the [log-normal distribution](https://en.wikipedia.org/wiki/Log-normal_distribution) or the [Pareto distribution](https://en.wikipedia.org/wiki/Pareto_distribution).

The value of the normal density is practically zero when the value $x$ lies more than a few [standard deviations](https://en.wikipedia.org/wiki/Standard_deviation) away from the mean (e.g., a spread of three standard deviations covers all but 0.27% of the total distribution). Therefore, it may not be an appropriate model when one expects a significant fraction of [outliers](https://en.wikipedia.org/wiki/Outlier), that is, values that lie many standard deviations away from the mean; least squares and other [statistical inference](https://en.wikipedia.org/wiki/Statistical_inference) methods that are optimal for normally distributed variables often become highly unreliable when applied to such data. In those cases, a more [heavy-tailed](https://en.wikipedia.org/wiki/Heavy-tailed) distribution should be assumed and appropriate [robust statistical inference](https://en.wikipedia.org/wiki/Robust_statistics) methods applied.
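A sketch of this Newton iteration, using the erf-based CDF as $\Phi$ and the standard normal density as its derivative; `quantile_newton` is an illustrative name:

```python
import math

def std_normal_cdf(x: float) -> float:
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def std_normal_pdf(x: float) -> float:
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def quantile_newton(q: float, x0: float = 0.0, tol: float = 1e-12) -> float:
    """Solve Phi(x) = q by Newton's method: x <- x - (Phi(x) - q) / phi(x)."""
    x = x0
    for _ in range(100):
        step = (std_normal_cdf(x) - q) / std_normal_pdf(x)
        x -= step
        if abs(step) < tol:
            break
    return x

print(quantile_newton(0.975))   # ~1.959963984540, the familiar z_{0.975}
```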
The Gaussian distribution belongs to the family of [stable distributions](https://en.wikipedia.org/wiki/Stable_distribution), which are the attractors of sums of [independent, identically distributed](https://en.wikipedia.org/wiki/Independent,_identically_distributed) distributions whether or not the mean or variance is finite. Except for the Gaussian, which is a limiting case, all stable distributions have heavy tails and infinite variance. The Gaussian is one of the few distributions that are stable and that have probability density functions that can be expressed analytically, the others being the [Cauchy distribution](https://en.wikipedia.org/wiki/Cauchy_distribution) and the [LĂ©vy distribution](https://en.wikipedia.org/wiki/L%C3%A9vy_distribution).

### Symmetries and derivatives

The normal distribution with density $f(x)$ (mean $\mu$ and variance $\sigma ^{2}>0$) has the following properties:

Furthermore, the density $\varphi$ of the standard normal distribution (i.e. $\mu =0$ and $\sigma =1$) also has the following properties:

### Moments

The plain and absolute [moments](https://en.wikipedia.org/wiki/Moment_\(mathematics\)) of a variable $X$ are the expected values of $X^{p}$ and $|X|^{p}$, respectively. If the expected value $\mu$ of $X$ is zero, these parameters are called *central moments*; otherwise, these parameters are called *non-central moments*. Usually we are interested only in moments with integer order $p$.

If $X$ has a normal distribution, the non-central moments exist and are finite for any $p$ whose real part is greater than −1.
For any non-negative integer $p$, the plain central moments are:[\[31\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-31)

$$\operatorname {E} \left[(X-\mu )^{p}\right]={\begin{cases}0&{\text{if }}p{\text{ is odd,}}\\\sigma ^{p}(p-1)!!&{\text{if }}p{\text{ is even.}}\end{cases}}$$

Here $n!!$ denotes the [double factorial](https://en.wikipedia.org/wiki/Double_factorial), that is, the product of all numbers from $n$ to 1 that have the same parity as $n$.

The central absolute moments coincide with plain moments for all even orders, but are nonzero for odd orders. For any non-negative integer $p$,

$$\begin{aligned}\operatorname {E} \left[|X-\mu |^{p}\right]&=\sigma ^{p}(p-1)!!\cdot {\begin{cases}{\sqrt {\frac {2}{\pi }}}&{\text{if }}p{\text{ is odd}}\\1&{\text{if }}p{\text{ is even}}\end{cases}}\\[8pt]&=\sigma ^{p}\cdot {\frac {2^{p/2}\Gamma \left({\frac {p+1}{2}}\right)}{\sqrt {\pi }}}.\end{aligned}$$

The last formula is valid also for any non-integer $p>-1$. When the mean $\mu \neq 0$, the plain and absolute moments can be expressed in terms of [confluent hypergeometric functions](https://en.wikipedia.org/wiki/Confluent_hypergeometric_function) ${}_{1}F_{1}$ and $U$.[\[32\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-32)

$$\begin{aligned}\operatorname {E} \left[X^{p}\right]&=\sigma ^{p}\cdot {\left(-i{\sqrt {2}}\right)}^{p}\,U{\left(-{\frac {p}{2}},{\frac {1}{2}},-{\frac {\mu ^{2}}{2\sigma ^{2}}}\right)},\\\operatorname {E} \left[|X|^{p}\right]&=\sigma ^{p}\cdot 2^{p/2}{\frac {\Gamma {\left({\frac {1+p}{2}}\right)}}{\sqrt {\pi }}}\,{}_{1}F_{1}{\left(-{\frac {p}{2}},{\frac {1}{2}},-{\frac {\mu ^{2}}{2\sigma ^{2}}}\right)}.\end{aligned}$$

These expressions remain valid even when $p>-1$ is not an integer. See also [generalized Hermite polynomials](https://en.wikipedia.org/wiki/Hermite_polynomials#%22Negative_variance%22).
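A sketch checking the even-order central-moment formula above, $\operatorname{E}[(X-\mu)^p] = \sigma^p(p-1)!!$, against a crude Monte Carlo estimate (sample size and seed are arbitrary choices):

```python
import math
import random

def double_factorial(n: int) -> int:
    # product of all positive integers up to n with the same parity as n
    out = 1
    while n > 1:
        out *= n
        n -= 2
    return out

mu, sigma = 2.0, 1.5
random.seed(0)
samples = [random.gauss(mu, sigma) for _ in range(200_000)]

for p in (2, 4, 6):
    exact = sigma ** p * double_factorial(p - 1)
    estimate = sum((x - mu) ** p for x in samples) / len(samples)
    print(p, exact, round(estimate, 3))   # estimates hover around the exact values
```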
| Order | Non-central moment, $\operatorname {E} \left[X^{p}\right]$ |
|---|---|
| 1 | $\mu$ |
| 2 | $\mu ^{2}+\sigma ^{2}$ |
| 3 | $\mu ^{3}+3\mu \sigma ^{2}$ |
| 4 | $\mu ^{4}+6\mu ^{2}\sigma ^{2}+3\sigma ^{4}$ |

The expectation of $X$ conditioned on the event that $X$ lies in an interval $[a,b]$ is given by

$$\operatorname {E} \left[X\mid a<X<b\right]=\mu -\sigma ^{2}{\frac {f(b)-f(a)}{F(b)-F(a)}}\,,$$

where $f$ and $F$ respectively are the density and the cumulative distribution function of $X$. For $b=\infty$ this is known as the [inverse Mills ratio](https://en.wikipedia.org/wiki/Inverse_Mills_ratio). Note that above, the density $f$ of $X$ is used instead of the standard normal density as in the inverse Mills ratio, so here we have $\sigma ^{2}$ instead of $\sigma$.

### Fourier transform and characteristic function

The [Fourier transform](https://en.wikipedia.org/wiki/Fourier_transform) of a normal density $f$ with mean $\mu$ and variance $\sigma ^{2}$ is[\[33\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-33)

$${\hat {f}}(t)=\int _{-\infty }^{\infty }f(x)e^{-itx}\,dx=e^{-i\mu t}e^{-{\frac {1}{2}}\sigma ^{2}t^{2}}\,,$$

where $i$ is the [imaginary unit](https://en.wikipedia.org/wiki/Imaginary_unit).
If the mean $\mu =0$, the first factor is 1, and the Fourier transform is, apart from a constant factor, a normal density on the [frequency domain](https://en.wikipedia.org/wiki/Frequency_domain), with mean 0 and variance $1/\sigma ^{2}$. In particular, the standard normal distribution $\varphi$ is an [eigenfunction](https://en.wikipedia.org/wiki/Fourier_transform#Eigenfunctions) of the Fourier transform.

In probability theory, the Fourier transform of the probability distribution of a real-valued random variable $X$ is closely connected to the [characteristic function](https://en.wikipedia.org/wiki/Characteristic_function_\(probability_theory\)) $\varphi _{X}(t)$ of that variable, which is defined as the [expected value](https://en.wikipedia.org/wiki/Expected_value) of $e^{itX}$, as a function of the real variable $t$ (the [frequency](https://en.wikipedia.org/wiki/Frequency) parameter of the Fourier transform).
This definition can be analytically extended to a complex-valued variable $t$.[\[34\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-34) The relation between both is:

$$\varphi _{X}(t)={\hat {f}}(-t)\,.$$

The real and imaginary parts of ${\hat {f}}(t)=\operatorname {E} [e^{-itX}]=e^{-i\mu t}e^{-{\frac {1}{2}}\sigma ^{2}t^{2}}$ give:

$$\operatorname {E} [\cos(tX)]=\cos(\mu t)e^{-{\frac {1}{2}}\sigma ^{2}t^{2}}\quad {\text{and}}\quad \operatorname {E} [\sin(tX)]=\sin(\mu t)e^{-{\frac {1}{2}}\sigma ^{2}t^{2}}.$$

Similarly,

$$\operatorname {E} [\cosh(tX)]=\cosh(\mu t)e^{{\frac {1}{2}}\sigma ^{2}t^{2}}\quad {\text{and}}\quad \operatorname {E} [\sinh(tX)]=\sinh(\mu t)e^{{\frac {1}{2}}\sigma ^{2}t^{2}}.$$

These formulas evaluated at $t=1$ give the expected value of these basic trigonometric and hyperbolic functions over a Gaussian random variable $X\sim N(\mu ,\sigma ^{2})$, which also could be seen as consequences of [Isserlis's theorem](https://en.wikipedia.org/wiki/Isserlis%27s_theorem).

### Moment- and cumulant-generating functions

The [moment generating function](https://en.wikipedia.org/wiki/Moment_generating_function) of a real random variable $X$ is the expected value of $e^{tX}$, as a function of the real parameter $t$.
For a normal distribution with density $f$, mean $\mu$ and variance $\sigma ^{2}$, the moment generating function exists and is equal to

$$M(t)=\operatorname {E} \left[e^{tX}\right]={\hat {f}}(it)=e^{\mu t}e^{\sigma ^{2}t^{2}/2}\,.$$

For any $k$, the coefficient of $t^{k}/k!$ in the moment generating function (expressed as an [exponential power series](https://en.wikipedia.org/wiki/Generating_function#Exponential_generating_function_\(EGF\)) in $t$) is the normal distribution's expected value $\operatorname {E} [X^{k}]$.

The [cumulant generating function](https://en.wikipedia.org/wiki/Cumulant_generating_function) is the logarithm of the moment generating function, namely

$$g(t)=\ln M(t)=\mu t+{\tfrac {1}{2}}\sigma ^{2}t^{2}\,.$$

The coefficients of this exponential power series define the cumulants, but because this is a quadratic polynomial in $t$, only the first two [cumulants](https://en.wikipedia.org/wiki/Cumulant) are nonzero, namely the mean $\mu$ and the variance $\sigma ^{2}$.

Some authors prefer to instead work with the [characteristic function](https://en.wikipedia.org/wiki/Characteristic_function_\(probability_theory\)) $\operatorname {E} [e^{itX}]=e^{i\mu t-\sigma ^{2}t^{2}/2}$ and $\ln \operatorname {E} [e^{itX}]=i\mu t-{\tfrac {1}{2}}\sigma ^{2}t^{2}$.
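A sketch comparing the closed-form moment generating function above, $M(t) = e^{\mu t + \sigma^2 t^2/2}$, with a Monte Carlo average of $e^{tX}$ (sample size and seed are arbitrary choices):

```python
import math
import random

mu, sigma, t = 0.5, 1.2, 0.3
random.seed(1)
n = 200_000
mc = sum(math.exp(t * random.gauss(mu, sigma)) for _ in range(n)) / n
closed_form = math.exp(mu * t + 0.5 * sigma ** 2 * t ** 2)
print(mc, closed_form)   # the two should agree to a few decimal places
```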
### Stein operator and class

Within [Stein's method](https://en.wikipedia.org/wiki/Stein%27s_method) the Stein operator and class of a random variable $X\sim {\mathcal {N}}(\mu ,\sigma ^{2})$ are ${\mathcal {A}}f(x)=\sigma ^{2}f'(x)-(x-\mu )f(x)$ and ${\mathcal {F}}$ the class of all absolutely continuous functions $f:\mathbb {R} \to \mathbb {R}$ such that $\operatorname {E} [|f'(X)|]<\infty$.

### Zero-variance limit

In the [limit](https://en.wikipedia.org/wiki/Limit_\(mathematics\)) when $\sigma ^{2}$ approaches zero, the probability density $f$ approaches zero everywhere except at $\mu$, where it approaches $\infty$, while its integral remains equal to 1. An extension of the normal distribution to the case with zero variance can be defined using the [Dirac delta measure](https://en.wikipedia.org/wiki/Dirac_measure) $\delta _{\mu }$, although the resulting random variables are not [absolutely continuous](https://en.wikipedia.org/wiki/Absolutely_continuous_random_variable) and thus do not have [probability density functions](https://en.wikipedia.org/wiki/Probability_density_function).
The cumulative distribution function of such a random variable is then the [Heaviside step function](https://en.wikipedia.org/wiki/Heaviside_step_function "Heaviside step function") translated by the mean ![{\\textstyle \\mu }](https://wikimedia.org/api/rest_v1/media/math/render/svg/259577540a13444806174d5a1ae7662974f58085), namely ![{\\displaystyle F(x)={\\begin{cases}0&{\\text{if }}x\<\\mu \\\\1&{\\text{if }}x\\geq \\mu .\\end{cases}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/0c3302167eb5f749bb47fd73b74c732f425e3dea)

### Maximum entropy

Of all probability distributions over the reals with a specified finite mean ![{\\displaystyle \\mu }](https://wikimedia.org/api/rest_v1/media/math/render/svg/9fd47b2a39f7a7856952afec1f1db72c67af6161) and finite variance ![{\\displaystyle \\sigma ^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/53a5c55e536acf250c1d3e0f754be5692b843ef5), the normal distribution ![{\\textstyle N(\\mu ,\\sigma ^{2})}](https://wikimedia.org/api/rest_v1/media/math/render/svg/6c34ca495ee2609c49ba6b010c03c31e1968ae87) is the one with [maximum entropy](https://en.wikipedia.org/wiki/Maximum_entropy_probability_distribution "Maximum entropy probability distribution").[\[24\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-FOOTNOTECoverThomas2006254-24) To see this, let ![{\\displaystyle X}](https://wikimedia.org/api/rest_v1/media/math/render/svg/68baa052181f707c662844a465bfeeb135e82bab) be a [continuous random variable](https://en.wikipedia.org/wiki/Continuous_random_variable "Continuous random variable") with [probability density](https://en.wikipedia.org/wiki/Probability_density "Probability density") ![{\\displaystyle f(x)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/202945cce41ecebb6f643f31d119c514bec7a074). The entropy of ![{\\displaystyle X}](https://wikimedia.org/api/rest_v1/media/math/render/svg/68baa052181f707c662844a465bfeeb135e82bab) is defined as[\[35\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-35)[\[36\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-36)[\[37\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-37) ![{\\displaystyle H(X)=-\\int \_{-\\infty }^{\\infty }f(x)\\ln f(x)\\,dx\\,,}](https://wikimedia.org/api/rest_v1/media/math/render/svg/129fc3f44c225b79aa5515f97f7321102fd329e8) where ![{\\textstyle f(x)\\log f(x)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/6a92396d2452d9a8a76d9e66eadf0bd18e6f9600) is understood to be zero whenever ![{\\displaystyle f(x)=0}](https://wikimedia.org/api/rest_v1/media/math/render/svg/cf85883d74b75fe35ca8d3f2b44802df078e4fa1). This functional can be maximized, subject to the constraints that the distribution is properly normalized and has a specified mean and variance, by using [variational calculus](https://en.wikipedia.org/wiki/Variational_calculus "Variational calculus").
A function with three [Lagrange multipliers](https://en.wikipedia.org/wiki/Lagrange_multipliers "Lagrange multipliers") is defined: ![{\\displaystyle L=-\\int \_{-\\infty }^{\\infty }f(x)\\ln f(x)\\,dx-\\lambda \_{0}\\left(1-\\int \_{-\\infty }^{\\infty }f(x)\\,dx\\right)-\\lambda \_{1}\\left(\\mu -\\int \_{-\\infty }^{\\infty }f(x)x\\,dx\\right)-\\lambda \_{2}\\left(\\sigma ^{2}-\\int \_{-\\infty }^{\\infty }f(x)(x-\\mu )^{2}\\,dx\\right)\\,.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/856af00c1ea9a107166c797aeb451a2978d699c3) At maximum entropy, a small variation ![{\\textstyle \\delta f(x)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/83e87224bdd1053bd82900662d625caf94321aeb) about ![{\\textstyle f(x)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/e0a982c6635ab3b98d9e12d5f5a8533359bcb38a) will produce a variation ![{\\textstyle \\delta L}](https://wikimedia.org/api/rest_v1/media/math/render/svg/4dd7ec66a3e71534556db59909a49ee04982c706) about ![{\\displaystyle L}](https://wikimedia.org/api/rest_v1/media/math/render/svg/103168b86f781fe6e9a4a87b8ea1cebe0ad4ede8) which is equal to 0: ![{\\displaystyle 0=\\delta L=\\int \_{-\\infty }^{\\infty }\\delta f(x)\\left(-\\ln f(x)-1+\\lambda \_{0}+\\lambda \_{1}x+\\lambda \_{2}(x-\\mu )^{2}\\right)\\,dx\\,.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/91b0159db4e0722a4b5ededf3f98d97437605782) Since this must hold for any small ![{\\displaystyle \\delta f(x)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/6443aec0d016c556a8d440074b7bb5c4df23232b), the factor multiplying ![{\\displaystyle \\delta f(x)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/6443aec0d016c556a8d440074b7bb5c4df23232b) must be zero, and solving for ![{\\displaystyle f(x)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/202945cce41ecebb6f643f31d119c514bec7a074) yields: ![{\\displaystyle f(x)=\\exp \\left(-1+\\lambda \_{0}+\\lambda \_{1}x+\\lambda \_{2}(x-\\mu )^{2}\\right)\\,.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/9a80908345c8c165230ad4e818f29470b33064d5) The Lagrange constraints that ![{\\displaystyle f(x)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/202945cce41ecebb6f643f31d119c514bec7a074) is properly normalized and has the specified mean and variance are satisfied if and only if ![{\\displaystyle \\lambda \_{0}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/cfa5ad1eb6cdaf3d8dfd77991ee9ce7bdf169184), ![{\\displaystyle \\lambda \_{1}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/571a423bece8f29bcd1b48572f18dd4f6213dce2), and ![{\\displaystyle \\lambda \_{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/6b668a1bd1e8ab9452ca975b7497546e7c1ba187) are chosen so that ![{\\displaystyle f(x)={\\frac {1}{\\sqrt {2\\pi \\sigma ^{2}}}}e^{-{\\frac {(x-\\mu )^{2}}{2\\sigma ^{2}}}}\\,.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/4ac6c71a4a3df62eeaf7e052e27ce356793102f5) The entropy of a normal distribution ![{\\textstyle X\\sim N(\\mu ,\\sigma ^{2})}](https://wikimedia.org/api/rest_v1/media/math/render/svg/4b2b33217663b3d5e94daf88d51b5de6d3c9c8e6) is equal to ![{\\displaystyle H(X)={\\tfrac {1}{2}}(1+\\ln 2\\sigma ^{2}\\pi )\\,,}](https://wikimedia.org/api/rest_v1/media/math/render/svg/c2448beacbfebb0a9ccc54a4927aaa5dde946e77) which is independent of the mean ![{\\displaystyle \\mu }](https://wikimedia.org/api/rest_v1/media/math/render/svg/9fd47b2a39f7a7856952afec1f1db72c67af6161).
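The closed-form entropy invites a quick numerical sanity check. The sketch below (an illustration of the maximum-entropy claim, not code from the article) compares the formula with scipy's entropy for a normal distribution, and with a Laplace distribution of matched variance, whose entropy comes out strictly smaller:

```python
import numpy as np
from scipy import stats

sigma = 1.3
h_normal = 0.5 * (1 + np.log(2 * np.pi * sigma**2))   # closed form above
print(h_normal, stats.norm(scale=sigma).entropy())    # the two agree

# A Laplace distribution with scale b has variance 2*b**2; matching the
# normal's variance, its differential entropy is strictly smaller.
b = sigma / np.sqrt(2)
print(stats.laplace(scale=b).entropy())
```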
### Other properties

1. If the characteristic function ![{\\textstyle \\phi \_{X}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/c6dea199efa8c774e513875b77ba51bff03b054e) of some random variable ![{\\displaystyle X}](https://wikimedia.org/api/rest_v1/media/math/render/svg/68baa052181f707c662844a465bfeeb135e82bab) is of the form ![{\\textstyle \\phi \_{X}(t)=\\exp Q(t)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/354a1bdc69b6b27a0bf4cfaffb65219279017ff6) in a neighborhood of zero, where ![{\\textstyle Q(t)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/7d6236f45c1edf052e5a044de03973e4f149bda2) is a [polynomial](https://en.wikipedia.org/wiki/Polynomial "Polynomial"), then the **Marcinkiewicz theorem** (named after [JĂłzef Marcinkiewicz](https://en.wikipedia.org/wiki/J%C3%B3zef_Marcinkiewicz "JĂłzef Marcinkiewicz")) asserts that ![{\\displaystyle Q}](https://wikimedia.org/api/rest_v1/media/math/render/svg/8752c7023b4b3286800fe3238271bbca681219ed) can be at most a quadratic polynomial, and therefore ![{\\displaystyle X}](https://wikimedia.org/api/rest_v1/media/math/render/svg/68baa052181f707c662844a465bfeeb135e82bab) is a normal random variable.[\[38\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Bryc_1995_35-38) The consequence of this result is that the normal distribution is the only distribution with a finite number (two) of non-zero [cumulants](https://en.wikipedia.org/wiki/Cumulant "Cumulant").
2. If ![{\\displaystyle X}](https://wikimedia.org/api/rest_v1/media/math/render/svg/68baa052181f707c662844a465bfeeb135e82bab) and ![{\\displaystyle Y}](https://wikimedia.org/api/rest_v1/media/math/render/svg/961d67d6b454b4df2301ac571808a3538b3a6d3f) are [jointly normal](https://en.wikipedia.org/wiki/Jointly_normal "Jointly normal") and [uncorrelated](https://en.wikipedia.org/wiki/Uncorrelated "Uncorrelated"), then they are [independent](https://en.wikipedia.org/wiki/Independence_\(probability_theory\) "Independence (probability theory)"). The requirement that ![{\\displaystyle X}](https://wikimedia.org/api/rest_v1/media/math/render/svg/68baa052181f707c662844a465bfeeb135e82bab) and ![{\\displaystyle Y}](https://wikimedia.org/api/rest_v1/media/math/render/svg/961d67d6b454b4df2301ac571808a3538b3a6d3f) should be *jointly* normal is essential; without it the property does not hold.[\[39\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-39)[\[40\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-40)[\[proof\]](https://en.wikipedia.org/wiki/Normally_distributed_and_uncorrelated_does_not_imply_independent "Normally distributed and uncorrelated does not imply independent") For non-normal random variables uncorrelatedness does not imply independence.
3. The [Kullback–Leibler divergence](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence "Kullback–Leibler divergence") of one normal distribution ![{\\textstyle X\_{1}\\sim N(\\mu \_{1},\\sigma \_{1}^{2})}](https://wikimedia.org/api/rest_v1/media/math/render/svg/049151b05395320c984754f2ded5834c011e90d8) from another ![{\\textstyle X\_{2}\\sim N(\\mu \_{2},\\sigma \_{2}^{2})}](https://wikimedia.org/api/rest_v1/media/math/render/svg/d6c3719a5439a1563f5a624b1af2dc278963269a) is given by:[\[41\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-41) ![{\\displaystyle D\_{\\mathrm {KL} }(X\_{1}\\parallel X\_{2})={\\frac {(\\mu \_{1}-\\mu \_{2})^{2}}{2\\sigma \_{2}^{2}}}+{\\frac {1}{2}}\\left({\\frac {\\sigma \_{1}^{2}}{\\sigma \_{2}^{2}}}-1-\\ln {\\frac {\\sigma \_{1}^{2}}{\\sigma \_{2}^{2}}}\\right)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/89a5fe7e76e6b9fbe30e34710d2d76d3073d89a6) The [Hellinger distance](https://en.wikipedia.org/wiki/Hellinger_distance "Hellinger distance") between the same distributions is equal to ![{\\displaystyle H^{2}(X\_{1},X\_{2})=1-{\\sqrt {\\frac {2\\sigma \_{1}\\sigma \_{2}}{\\sigma \_{1}^{2}+\\sigma \_{2}^{2}}}}\\exp \\left(-{\\frac {1}{4}}{\\frac {(\\mu \_{1}-\\mu \_{2})^{2}}{\\sigma \_{1}^{2}+\\sigma \_{2}^{2}}}\\right)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/5b9f888884d0dcbcbb5ff916c32e1d2bad752517)
4. The [Fisher information matrix](https://en.wikipedia.org/wiki/Fisher_information_matrix "Fisher information matrix") for a normal distribution w.r.t. ![{\\displaystyle \\mu }](https://wikimedia.org/api/rest_v1/media/math/render/svg/9fd47b2a39f7a7856952afec1f1db72c67af6161) and ![{\\textstyle \\sigma ^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/a86f1d00f664920ef46109bcddc0778f4976b490) is diagonal and takes the form ![{\\displaystyle {\\mathcal {I}}(\\mu ,\\sigma ^{2})={\\begin{pmatrix}{\\frac {1}{\\sigma ^{2}}}&0\\\\0&{\\frac {1}{2\\sigma ^{4}}}\\end{pmatrix}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/e882ba24b6d046a40e779c6154532c352ce59f35)
5. The [conjugate prior](https://en.wikipedia.org/wiki/Conjugate_prior "Conjugate prior") of the mean of a normal distribution is another normal distribution.[\[42\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-42) Specifically, if ![{\\textstyle x\_{1},\\ldots ,x\_{n}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/3e9531d09966e9ceeb357705fc047d0c907d3841) are iid ![{\\textstyle \\sim N(\\mu ,\\sigma ^{2})}](https://wikimedia.org/api/rest_v1/media/math/render/svg/2d705c9cde9a1093fbde1ea4e506b8bca3edcbdf) and the prior is ![{\\textstyle \\mu \\sim N(\\mu \_{0},\\sigma \_{0}^{2})}](https://wikimedia.org/api/rest_v1/media/math/render/svg/fc7273573f401eac0cce42900e5f46f0fadd64a9), then the posterior distribution for the estimator of ![{\\displaystyle \\mu }](https://wikimedia.org/api/rest_v1/media/math/render/svg/9fd47b2a39f7a7856952afec1f1db72c67af6161) will be ![{\\displaystyle \\mu \\mid x\_{1},\\ldots ,x\_{n}\\sim {\\mathcal {N}}\\left({\\frac {{\\frac {\\sigma ^{2}}{n}}\\mu \_{0}+\\sigma \_{0}^{2}{\\bar {x}}}{{\\frac {\\sigma ^{2}}{n}}+\\sigma \_{0}^{2}}},\\left({\\frac {n}{\\sigma ^{2}}}+{\\frac {1}{\\sigma \_{0}^{2}}}\\right)^{-1}\\right)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/793f7cd8d5c23e2f8a92fed1b332d89375f9229b)
6. The family of normal distributions not only forms an [exponential family](https://en.wikipedia.org/wiki/Exponential_family "Exponential family") (EF), but in fact forms a [natural exponential family](https://en.wikipedia.org/wiki/Natural_exponential_family "Natural exponential family") (NEF) with quadratic [variance function](https://en.wikipedia.org/wiki/Variance_function "Variance function") ([NEF-QVF](https://en.wikipedia.org/wiki/NEF-QVF "NEF-QVF")). Many properties of normal distributions generalize to properties of NEF-QVF distributions, NEF distributions, or EF distributions generally. NEF-QVF distributions comprise six families, including the Poisson, gamma, binomial, and negative binomial distributions, while many of the common families studied in probability and statistics are NEF or EF.
7. In [information geometry](https://en.wikipedia.org/wiki/Information_geometry "Information geometry"), the family of normal distributions forms a [statistical manifold](https://en.wikipedia.org/wiki/Statistical_manifold "Statistical manifold") with [constant curvature](https://en.wikipedia.org/wiki/Constant_curvature "Constant curvature") ![{\\displaystyle -1}](https://wikimedia.org/api/rest_v1/media/math/render/svg/704fb0427140d054dd267925495e78164fee9aac). The same family is [flat](https://en.wikipedia.org/wiki/Flat_manifold "Flat manifold") with respect to the (±1)-connections ![{\\textstyle \\nabla ^{(e)}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/6bcf8f6e96ed7e5c44d6aa0800f7ff991bb4b36e) and ![{\\textstyle \\nabla ^{(m)}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/6eeb62878a2414145eaf24df9e0e4bd48d24c84e).[\[43\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-43)
8. If ![{\\textstyle X\_{1},\\dots ,X\_{n}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/0ae9b42f93ae184132d9ede84c94bebe02d83109) are distributed according to ![{\\textstyle N(0,\\sigma ^{2})}](https://wikimedia.org/api/rest_v1/media/math/render/svg/8ac432f4ef3005cdc1eb19d42a7939c8715052ec), then ![{\\textstyle E\[\\max \_{i}X\_{i}\]\\leq \\sigma {\\sqrt {2\\ln n}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/0dfb87c9b047ccf23ace2139d97810dff1ed6670). Note that there is no assumption of independence.[\[44\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-44)

### Central limit theorem

[![](https://upload.wikimedia.org/wikipedia/commons/0/06/De_moivre-laplace.gif)](https://en.wikipedia.org/wiki/File:De_moivre-laplace.gif) As the number of discrete events increases, the function begins to resemble a normal distribution.

[![](https://upload.wikimedia.org/wikipedia/commons/thumb/8/8c/Dice_sum_central_limit_theorem.svg/250px-Dice_sum_central_limit_theorem.svg.png)](https://en.wikipedia.org/wiki/File:Dice_sum_central_limit_theorem.svg) Comparison of probability density functions *p*(*k*) for the sum of *n* fair six-sided dice, showing their convergence to a normal distribution with increasing *n*, in accordance with the central limit theorem. In the bottom-right graph, smoothed profiles of the previous graphs are rescaled, superimposed and compared with a normal distribution (black curve).

The central limit theorem states that under certain (fairly common) conditions, the sum of many random variables will have an approximately normal distribution.
More specifically, suppose ![{\\textstyle X\_{1},\\ldots ,X\_{n}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/5b38285b8894295da7e871244e729fddff576d35) are [independent and identically distributed](https://en.wikipedia.org/wiki/Independent_and_identically_distributed "Independent and identically distributed") random variables with the same arbitrary distribution, zero mean, and variance ![{\\textstyle \\sigma ^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/a86f1d00f664920ef46109bcddc0778f4976b490), and let ![{\\displaystyle Z}](https://wikimedia.org/api/rest_v1/media/math/render/svg/1cc6b75e09a8aa3f04d8584b11db534f88fb56bd) be their mean scaled by ![{\\textstyle {\\sqrt {n}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/fe0c841da590bde3ce98d1cf05d497678712ada0): ![{\\displaystyle Z={\\sqrt {n}}{\\biggl (}{\\frac {1}{n}}\\sum \_{i=1}^{n}X\_{i}{\\biggr )}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/955771f557a34627b4cc5d6b4219e99e9fa0397d) Then, as ![{\\displaystyle n}](https://wikimedia.org/api/rest_v1/media/math/render/svg/a601995d55609f2d9f5e233e36fbe9ea26011b3b) increases, the probability distribution of ![{\\displaystyle Z}](https://wikimedia.org/api/rest_v1/media/math/render/svg/1cc6b75e09a8aa3f04d8584b11db534f88fb56bd) will tend to the normal distribution with zero mean and variance ![{\\displaystyle \\sigma ^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/53a5c55e536acf250c1d3e0f754be5692b843ef5). The theorem can be extended to variables ![{\\textstyle (X\_{i})}](https://wikimedia.org/api/rest_v1/media/math/render/svg/c96401b8a2d7a36bd58d49ae3f28143d08fe55cc) that are not independent and/or not identically distributed if certain constraints are placed on the degree of dependence and the moments of the distributions. Many [test statistics](https://en.wikipedia.org/wiki/Test_statistic "Test statistic"), [scores](https://en.wikipedia.org/wiki/Score_\(statistics\) "Score (statistics)"), and [estimators](https://en.wikipedia.org/wiki/Estimator "Estimator") encountered in practice contain sums of certain random variables in them, and even more estimators can be represented as sums of random variables through the use of [influence functions](https://en.wikipedia.org/wiki/Influence_function_\(statistics\) "Influence function (statistics)"). The central limit theorem implies that those statistical parameters will have asymptotically normal distributions. The central limit theorem also implies that certain distributions can be approximated by the normal distribution; for example, the binomial distribution with a large number of trials and the Poisson distribution with a large rate parameter are both approximately normal. Whether these approximations are sufficiently accurate depends on the purpose for which they are needed, and the rate of convergence to the normal distribution. It is typically the case that such approximations are less accurate in the tails of the distribution. A general upper bound for the approximation error in the central limit theorem is given by the [Berry–Esseen theorem](https://en.wikipedia.org/wiki/Berry%E2%80%93Esseen_theorem "Berry–Esseen theorem"); improvements of the approximation are given by the [Edgeworth expansions](https://en.wikipedia.org/wiki/Edgeworth_expansion "Edgeworth expansion"). This theorem can also be used to justify modeling the sum of many uniform noise sources as [Gaussian noise](https://en.wikipedia.org/wiki/Gaussian_noise "Gaussian noise"); see [AWGN](https://en.wikipedia.org/wiki/AWGN "AWGN").
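The dice example from the figure caption is easy to reproduce. This Monte Carlo sketch (my own code; all names are arbitrary) scales the centered mean of n die rolls by √n, as in the statement above, and compares the result against N(0, σÂČ):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, trials = 30, 100_000
rolls = rng.integers(1, 7, size=(trials, n))   # fair six-sided dice
sigma2 = (6**2 - 1) / 12                       # variance of one roll, 35/12

z = np.sqrt(n) * (rolls.mean(axis=1) - 3.5)    # centered, sqrt(n)-scaled means
print(z.mean(), z.var())                       # approx 0 and 35/12
# Kolmogorov-Smirnov statistic against N(0, sigma2): small for large n.
print(stats.kstest(z, stats.norm(scale=np.sqrt(sigma2)).cdf).statistic)
```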
### Operations and functions of normal variables

#### Operations on a single normal variable

If ![{\\displaystyle X}](https://wikimedia.org/api/rest_v1/media/math/render/svg/68baa052181f707c662844a465bfeeb135e82bab) is distributed normally with mean ![{\\displaystyle \\mu }](https://wikimedia.org/api/rest_v1/media/math/render/svg/9fd47b2a39f7a7856952afec1f1db72c67af6161) and variance ![{\\textstyle \\sigma ^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/a86f1d00f664920ef46109bcddc0778f4976b490), then several standard transformations remain in closed form: the affine transform *aX* + *b* is normally distributed with mean *aÎŒ* + *b* and variance *a*ÂČ*σ*ÂČ; the exponential *e*^*X* follows a [log-normal distribution](https://en.wikipedia.org/wiki/Log-normal_distribution "Log-normal distribution"); and the square of the standardized variable, ((*X* − *ÎŒ*)/*σ*)ÂČ, has a [chi-squared distribution](https://en.wikipedia.org/wiki/Chi-squared_distribution "Chi-squared distribution") with one degree of freedom.

##### Operations on two independent normal variables

- If ![{\\textstyle X\_{1}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/8988aef95eb5600d6730ece0631d654408f194d6) and ![{\\textstyle X\_{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/68f7c0d27a42ad32b39db3e8dc89c52aed9a09ae) are two [independent](https://en.wikipedia.org/wiki/Independence_\(probability_theory\) "Independence (probability theory)") normal random variables, with means ![{\\textstyle \\mu \_{1}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/0672d6a80563b72164d70c7c6a0f39f093207de3), ![{\\textstyle \\mu \_{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/797d1f85ae11f23755ed4bf3d1a1c574911cff40) and variances ![{\\textstyle \\sigma \_{1}^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/cba7a66cdd970ca6ecd6fbb92bc4f577a31f71a2), ![{\\textstyle \\sigma \_{2}^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/259e8a5c676195775647fe765266c0a74eeed92b), then their sum ![{\\textstyle X\_{1}+X\_{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/a09bb18cee0b5940e34ab7c35a8f582cb3a9ce5f) will also be normally distributed,[\[proof\]](https://en.wikipedia.org/wiki/Sum_of_normally_distributed_random_variables "Sum of normally distributed random variables") with mean ![{\\textstyle \\mu \_{1}+\\mu \_{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/77ad5dae54aa9dd27f842a7d6dd199386e8b0a0d) and variance ![{\\textstyle \\sigma \_{1}^{2}+\\sigma \_{2}^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/551b9e3d182ae052da11927e652710504685a917).
- In particular, if ![{\\displaystyle X}](https://wikimedia.org/api/rest_v1/media/math/render/svg/68baa052181f707c662844a465bfeeb135e82bab) and ![{\\displaystyle Y}](https://wikimedia.org/api/rest_v1/media/math/render/svg/961d67d6b454b4df2301ac571808a3538b3a6d3f) are independent normal deviates with zero mean and variance ![{\\textstyle \\sigma ^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/a86f1d00f664920ef46109bcddc0778f4976b490), then ![{\\textstyle X+Y}](https://wikimedia.org/api/rest_v1/media/math/render/svg/fd96b99d81cffac6dcdd636ede2372218fffac12) and ![{\\textstyle X-Y}](https://wikimedia.org/api/rest_v1/media/math/render/svg/797cd2ee57ec14065d7bdbcc702699d7954c14a0) are also independent and normally distributed, with zero mean and variance ![{\\textstyle 2\\sigma ^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/187e0ddb488bc4b83589c8c3b0853bf00b8d80eb).
This is a special case of the [polarization identity](https://en.wikipedia.org/wiki/Polarization_identity "Polarization identity").[\[46\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-46) - If ![{\\textstyle X\_{1}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/8988aef95eb5600d6730ece0631d654408f194d6), ![{\\textstyle X\_{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/68f7c0d27a42ad32b39db3e8dc89c52aed9a09ae) are two independent normal deviates with mean ⁠![{\\displaystyle \\mu }](https://wikimedia.org/api/rest_v1/media/math/render/svg/9fd47b2a39f7a7856952afec1f1db72c67af6161)⁠ and variance ![{\\textstyle \\sigma ^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/a86f1d00f664920ef46109bcddc0778f4976b490), and ⁠![{\\displaystyle a}](https://wikimedia.org/api/rest_v1/media/math/render/svg/ffd2487510aa438433a2579450ab2b3d557e5edc)⁠, ⁠![{\\displaystyle b}](https://wikimedia.org/api/rest_v1/media/math/render/svg/f11423fbb2e967f986e36804a8ae4271734917c3)⁠ are arbitrary real numbers, then the variable ![{\\displaystyle X\_{3}={\\frac {aX\_{1}+bX\_{2}-(a+b)\\mu }{\\sqrt {a^{2}+b^{2}}}}+\\mu }](https://wikimedia.org/api/rest_v1/media/math/render/svg/cecad53efc9fb1f034f57b6ca0dd5754f504c919) is also normally distributed with mean ⁠![{\\displaystyle \\mu }](https://wikimedia.org/api/rest_v1/media/math/render/svg/9fd47b2a39f7a7856952afec1f1db72c67af6161)⁠ and variance ![{\\textstyle \\sigma ^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/a86f1d00f664920ef46109bcddc0778f4976b490). It follows that the normal distribution is [stable](https://en.wikipedia.org/wiki/Stable_distribution "Stable distribution") (with exponent ![{\\textstyle \\alpha =2}](https://wikimedia.org/api/rest_v1/media/math/render/svg/873c52c80dce07adc3c2a81eadf14d54f50589bf)). - If ![{\\textstyle X\_{k}\\sim {\\mathcal {N}}(m\_{k},\\sigma \_{k}^{2})}](https://wikimedia.org/api/rest_v1/media/math/render/svg/e0afe402f1fe01bbec5615f009ecbb2a33001476), ![{\\textstyle k\\in \\{0,1\\}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/17da80cc83f4f41abbb5cd2318c8450e76b8efa8) are normal distributions, then their normalized [geometric mean](https://en.wikipedia.org/wiki/Geometric_mean "Geometric mean") ![{\\textstyle {\\frac {1}{\\int \_{\\mathbb {R} ^{n}}X\_{0}^{\\alpha }(x)X\_{1}^{1-\\alpha }(x)\\,{\\text{d}}x}}X\_{0}^{\\alpha }X\_{1}^{1-\\alpha }}](https://wikimedia.org/api/rest_v1/media/math/render/svg/1156cfcef03aabda6ebc0c9b38eb7a22f16c6b21) is a normal distribution ![{\\textstyle {\\mathcal {N}}(m\_{\\alpha },\\sigma \_{\\alpha }^{2})}](https://wikimedia.org/api/rest_v1/media/math/render/svg/c4ef0b16243d84fd268ff7749996b279bb6826b1) with ![{\\textstyle m\_{\\alpha }={\\frac {\\alpha m\_{0}\\sigma \_{1}^{2}+(1-\\alpha )m\_{1}\\sigma \_{0}^{2}}{\\alpha \\sigma \_{1}^{2}+(1-\\alpha )\\sigma \_{0}^{2}}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/0e5ed5af81505ab74dfef47c0d5d6cd2d0c7de64) and ![{\\textstyle \\sigma \_{\\alpha }^{2}={\\frac {\\sigma \_{0}^{2}\\sigma \_{1}^{2}}{\\alpha \\sigma \_{1}^{2}+(1-\\alpha )\\sigma \_{0}^{2}}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/ce5dc27898fde9a32d74f281cb31ecbcbcf4f53c). 
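The zero-mean special case above is straightforward to probe by simulation. This sketch (illustrative only; names are arbitrary) checks that X + Y and X − Y each have variance 2σÂČ and are uncorrelated, which for jointly normal variables implies independence:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 2.0
x, y = rng.normal(0.0, sigma, size=(2, 500_000))  # independent N(0, sigma^2)

s, d = x + y, x - y
print(s.var(), d.var())          # both approx 2 * sigma**2 = 8
print(np.corrcoef(s, d)[0, 1])   # approx 0: uncorrelated, hence independent here
```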
##### Operations on two independent standard normal variables

If ![{\\textstyle X\_{1}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/8988aef95eb5600d6730ece0631d654408f194d6) and ![{\\textstyle X\_{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/68f7c0d27a42ad32b39db3e8dc89c52aed9a09ae) are two independent standard normal random variables with mean 0 and variance 1, then, for example, their sum and difference, each divided by √2, are again standard normal; their ratio *X*₁/*X*₂ follows the standard [Cauchy distribution](https://en.wikipedia.org/wiki/Cauchy_distribution "Cauchy distribution"); and *X*₁ÂČ + *X*₂ÂČ has a [chi-squared distribution](https://en.wikipedia.org/wiki/Chi-squared_distribution "Chi-squared distribution") with two degrees of freedom.

#### Operations on multiple independent normal variables

- A [quadratic form](https://en.wikipedia.org/wiki/Quadratic_form "Quadratic form") of a normal vector, i.e. a quadratic function ![{\\textstyle q=\\sum x\_{i}^{2}+\\sum x\_{j}+c}](https://wikimedia.org/api/rest_v1/media/math/render/svg/3ed0ec175c34bf70d7336eb9f25dbc2d269ff701) of multiple independent or correlated normal variables, is a [generalized chi-square](https://en.wikipedia.org/wiki/Generalized_chi-square_distribution "Generalized chi-square distribution") variable.

### Operations on the density function

The [split normal distribution](https://en.wikipedia.org/wiki/Split_normal_distribution "Split normal distribution") is most directly defined in terms of joining scaled sections of the density functions of different normal distributions and rescaling the density to integrate to one. The [truncated normal distribution](https://en.wikipedia.org/wiki/Truncated_normal_distribution "Truncated normal distribution") results from rescaling a section of a single density function.

### Infinite divisibility and CramĂ©r's theorem

For any positive integer n, any normal distribution with mean ![{\\displaystyle \\mu }](https://wikimedia.org/api/rest_v1/media/math/render/svg/9fd47b2a39f7a7856952afec1f1db72c67af6161) and variance ![{\\textstyle \\sigma ^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/a86f1d00f664920ef46109bcddc0778f4976b490) is the distribution of the sum of n independent normal deviates, each with mean ![{\\textstyle {\\frac {\\mu }{n}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/c57b6786aa4c5bd9599caffb5ae4480a3286961e) and variance ![{\\textstyle {\\frac {\\sigma ^{2}}{n}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/5e51eb9306d3daff888093e26f3e17267ce1fe3d).
This property is called [infinite divisibility](https://en.wikipedia.org/wiki/Infinite_divisibility_\(probability\) "Infinite divisibility (probability)").[\[51\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-51) Conversely, if ![{\\textstyle X\_{1}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/8988aef95eb5600d6730ece0631d654408f194d6) and ![{\\textstyle X\_{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/68f7c0d27a42ad32b39db3e8dc89c52aed9a09ae) are independent random variables and their sum ![{\\textstyle X\_{1}+X\_{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/a09bb18cee0b5940e34ab7c35a8f582cb3a9ce5f) has a normal distribution, then both ![{\\textstyle X\_{1}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/8988aef95eb5600d6730ece0631d654408f194d6) and ![{\\textstyle X\_{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/68f7c0d27a42ad32b39db3e8dc89c52aed9a09ae) must be normal deviates.[\[52\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-52) This result is known as [CramĂ©r's decomposition theorem](https://en.wikipedia.org/wiki/Cram%C3%A9r%27s_decomposition_theorem "CramĂ©r's decomposition theorem"), and is equivalent to saying that the [convolution](https://en.wikipedia.org/wiki/Convolution "Convolution") of two distributions is normal if and only if both are normal. CramĂ©r's theorem implies that a linear combination of independent non-Gaussian variables will never have an exactly normal distribution, although it may approach it arbitrarily closely.[\[38\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Bryc_1995_35-38)

### The Kac–Bernstein theorem

The [Kac–Bernstein theorem](https://en.wikipedia.org/wiki/Kac%E2%80%93Bernstein_theorem "Kac–Bernstein theorem") states that if ![{\\textstyle X}](https://wikimedia.org/api/rest_v1/media/math/render/svg/8d80c41192705e1a6c6de1d65e16d7f70fbac391) and ![{\\displaystyle Y}](https://wikimedia.org/api/rest_v1/media/math/render/svg/961d67d6b454b4df2301ac571808a3538b3a6d3f) are independent and ![{\\textstyle X+Y}](https://wikimedia.org/api/rest_v1/media/math/render/svg/fd96b99d81cffac6dcdd636ede2372218fffac12) and ![{\\textstyle X-Y}](https://wikimedia.org/api/rest_v1/media/math/render/svg/797cd2ee57ec14065d7bdbcc702699d7954c14a0) are also independent, then both X and Y must necessarily have normal distributions.[\[53\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Lukacs-53)[\[54\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-54) More generally, if ![{\\textstyle X\_{1},\\ldots ,X\_{n}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/5b38285b8894295da7e871244e729fddff576d35) are independent random variables, then two distinct linear combinations ![{\\textstyle \\sum {a\_{k}X\_{k}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/d62b3dc666b11bdb962170ace940a297ff8d9c7f) and ![{\\textstyle \\sum {b\_{k}X\_{k}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/701fc598b855acd656fb94b99cfc1696dd881016) will be independent if and only if all ![{\\textstyle X\_{k}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/8782c28c517e623ccb7715a9e66a964f49446069) are normal and ![{\\textstyle \\sum {a\_{k}b\_{k}\\sigma \_{k}^{2}=0}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/4cca7bca2de00b5be911848276eca1c52c8ce765), where
![{\\textstyle \\sigma \_{k}^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/55cef67d203b726b4d716826adbb3c680d991da5) denotes the variance of ![{\\textstyle X\_{k}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/8782c28c517e623ccb7715a9e66a964f49446069).[\[53\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Lukacs-53)

### Extensions

The notion of normal distribution, being one of the most important distributions in probability theory, has been extended far beyond the standard framework of the univariate (that is, one-dimensional) case (case 1). All these extensions are also called *normal* or *Gaussian* laws, so a certain ambiguity in names exists.

- The [multivariate normal distribution](https://en.wikipedia.org/wiki/Multivariate_normal_distribution "Multivariate normal distribution") describes the Gaussian law in the *k*-dimensional [Euclidean space](https://en.wikipedia.org/wiki/Euclidean_space "Euclidean space"). A vector *X* ∈ **R**ᔏ is multivariate-normally distributed if any linear combination of its components, *a*₁*X*₁ + ⋯ + *a*ₖ*X*ₖ, has a (univariate) normal distribution. The variance of *X* is a *k* × *k* symmetric positive-definite matrix *V*. The multivariate normal distribution is a special case of the [elliptical distributions](https://en.wikipedia.org/wiki/Elliptical_distribution "Elliptical distribution"). As such, its iso-density loci in the *k* = 2 case are [ellipses](https://en.wikipedia.org/wiki/Ellipse "Ellipse") and in the case of arbitrary *k* are [ellipsoids](https://en.wikipedia.org/wiki/Ellipsoid "Ellipsoid").
- The [Rectified Gaussian distribution](https://en.wikipedia.org/wiki/Rectified_Gaussian_distribution "Rectified Gaussian distribution") is a rectified version of the normal distribution with all the negative elements reset to 0.
- The [Complex normal distribution](https://en.wikipedia.org/wiki/Complex_normal_distribution "Complex normal distribution") deals with complex normal vectors. A complex vector *X* ∈ **C**ᔏ is said to be normal if both its real and imaginary components jointly possess a 2*k*-dimensional multivariate normal distribution. The variance-covariance structure of *X* is described by two matrices: the *variance* matrix Γ, and the *relation* matrix C.
- The [Matrix normal distribution](https://en.wikipedia.org/wiki/Matrix_normal_distribution "Matrix normal distribution") describes the case of normally distributed matrices.
- [Gaussian processes](https://en.wikipedia.org/wiki/Gaussian_process "Gaussian process") are the normally distributed [stochastic processes](https://en.wikipedia.org/wiki/Stochastic_process "Stochastic process"). These can be viewed as elements of some infinite-dimensional [Hilbert space](https://en.wikipedia.org/wiki/Hilbert_space "Hilbert space") *H*, and thus are the analogues of multivariate normal vectors for the case *k* = ∞. A random element *h* ∈ *H* is said to be normal if for any constant *a* ∈ *H* the [scalar product](https://en.wikipedia.org/wiki/Scalar_product "Scalar product") (*a*, *h*) has a (univariate) normal distribution. The variance structure of such a Gaussian random element can be described in terms of the linear *covariance operator* *K*: *H* → *H*.
Several Gaussian processes became popular enough to have their own names: [Brownian motion](https://en.wikipedia.org/wiki/Wiener_process "Wiener process"), the [Brownian bridge](https://en.wikipedia.org/wiki/Brownian_bridge "Brownian bridge"), and the [Ornstein–Uhlenbeck process](https://en.wikipedia.org/wiki/Ornstein%E2%80%93Uhlenbeck_process "Ornstein–Uhlenbeck process").

- The [Gaussian q-distribution](https://en.wikipedia.org/wiki/Gaussian_q-distribution "Gaussian q-distribution") is an abstract mathematical construction that represents a [q-analogue](https://en.wikipedia.org/wiki/Q-analogue "Q-analogue") of the normal distribution.
- The [q-Gaussian](https://en.wikipedia.org/wiki/Q-Gaussian "Q-Gaussian") is an analogue of the Gaussian distribution, in the sense that it maximises the [Tsallis entropy](https://en.wikipedia.org/wiki/Tsallis_entropy "Tsallis entropy"), and is one type of [Tsallis distribution](https://en.wikipedia.org/wiki/Tsallis_distribution "Tsallis distribution"). This distribution is different from the [Gaussian q-distribution](https://en.wikipedia.org/wiki/Gaussian_q-distribution "Gaussian q-distribution") above.
- The [Kaniadakis Îș-Gaussian distribution](https://en.wikipedia.org/wiki/Kaniadakis_Gaussian_distribution "Kaniadakis Gaussian distribution") is a generalization of the Gaussian distribution which arises from the [Kaniadakis statistics](https://en.wikipedia.org/wiki/Kaniadakis_statistics "Kaniadakis statistics"), being one of the [Kaniadakis distributions](https://en.wikipedia.org/wiki/Kaniadakis_distribution "Kaniadakis distribution").

A random variable X has a two-piece normal distribution if it has a distribution ![{\\displaystyle f\_{X}(x)={\\begin{cases}N(\\mu ,\\sigma \_{1}^{2}),&{\\text{ if }}x\\leq \\mu \\\\N(\\mu ,\\sigma \_{2}^{2}),&{\\text{ if }}x\\geq \\mu \\end{cases}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/31bf9650298445fee0ca47aa25f62df2e8d66286) where ÎŒ is the mean and *σ*₁ÂČ and *σ*₂ÂČ are the variances of the distribution to the left and right of the mean respectively. The mean E(*X*), variance V(*X*), and third central moment T(*X*) of this distribution have been determined:[\[55\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-John-1982-55) ![{\\displaystyle {\\begin{aligned}\\operatorname {E} (X)&=\\mu +{\\sqrt {\\frac {2}{\\pi }}}(\\sigma \_{2}-\\sigma \_{1}),\\\\\\operatorname {V} (X)&=\\left(1-{\\frac {2}{\\pi }}\\right)(\\sigma \_{2}-\\sigma \_{1})^{2}+\\sigma \_{1}\\sigma \_{2},\\\\\\operatorname {T} (X)&={\\sqrt {\\frac {2}{\\pi }}}(\\sigma \_{2}-\\sigma \_{1})\\left\[\\left({\\frac {4}{\\pi }}-1\\right)(\\sigma \_{2}-\\sigma \_{1})^{2}+\\sigma \_{1}\\sigma \_{2}\\right\].\\end{aligned}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/97f32cc8147bff0b5cdc02123a520a1119854060)

One of the main practical uses of the Gaussian law is to model the empirical distributions of many different random variables encountered in practice. In such a case, a possible extension would be a richer family of distributions, having more than two parameters and therefore being able to fit the empirical distribution more accurately. Examples of such extensions are:

- [Pearson distribution](https://en.wikipedia.org/wiki/Pearson_distribution "Pearson distribution") — a four-parameter family of probability distributions that extend the normal law to include different skewness and kurtosis values.
- The [generalized normal distribution](https://en.wikipedia.org/wiki/Generalized_normal_distribution "Generalized normal distribution"), also known as the exponential power distribution, allows for distribution tails with thicker or thinner asymptotic behaviors.

## Statistical inference

### Estimation of parameters

It is often the case that we do not know the parameters of the normal distribution, but instead want to [estimate](https://en.wikipedia.org/wiki/Estimation_theory "Estimation theory") them. That is, having a sample ![{\\textstyle (x\_{1},\\ldots ,x\_{n})}](https://wikimedia.org/api/rest_v1/media/math/render/svg/55e09ee050ccb93f44cef510332f40a3d6bc651d) from a normal ![{\\textstyle {\\mathcal {N}}(\\mu ,\\sigma ^{2})}](https://wikimedia.org/api/rest_v1/media/math/render/svg/fa40efb531f5b9513c921fd804868e727dfc71c0) population, we would like to learn the approximate values of the parameters ![{\\displaystyle \\mu }](https://wikimedia.org/api/rest_v1/media/math/render/svg/9fd47b2a39f7a7856952afec1f1db72c67af6161) and ![{\\textstyle \\sigma ^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/a86f1d00f664920ef46109bcddc0778f4976b490). The standard approach to this problem is the [maximum likelihood](https://en.wikipedia.org/wiki/Maximum_likelihood "Maximum likelihood") method, which requires maximization of the *[log-likelihood function](https://en.wikipedia.org/wiki/Log-likelihood_function "Log-likelihood function")*: ![{\\displaystyle \\ln {\\mathcal {L}}(\\mu ,\\sigma ^{2})=\\sum \_{i=1}^{n}\\ln f(x\_{i}\\mid \\mu ,\\sigma ^{2})=-{\\frac {n}{2}}\\ln(2\\pi )-{\\frac {n}{2}}\\ln \\sigma ^{2}-{\\frac {1}{2\\sigma ^{2}}}\\sum \_{i=1}^{n}(x\_{i}-\\mu )^{2}.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/003faa08d27475dd2b029e9f7f0cebab17c0e147) Taking derivatives with respect to ![{\\displaystyle \\mu }](https://wikimedia.org/api/rest_v1/media/math/render/svg/9fd47b2a39f7a7856952afec1f1db72c67af6161) and ![{\\textstyle \\sigma ^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/a86f1d00f664920ef46109bcddc0778f4976b490) and solving the resulting system of first order conditions yields the *maximum likelihood estimates*: ![{\\displaystyle {\\hat {\\mu }}={\\overline {x}}\\equiv {\\frac {1}{n}}\\sum \_{i=1}^{n}x\_{i},\\qquad {\\hat {\\sigma }}^{2}={\\frac {1}{n}}\\sum \_{i=1}^{n}(x\_{i}-{\\overline {x}})^{2}.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/0269b28e095780b5f1f76c94505841fbe51aeec2) The maximized log-likelihood ![{\\textstyle \\ln {\\mathcal {L}}({\\hat {\\mu }},{\\hat {\\sigma }}^{2})}](https://wikimedia.org/api/rest_v1/media/math/render/svg/f2946f476cb2518422bed7d85a33eb8d8460d365) is then ![{\\displaystyle \\ln {\\mathcal {L}}({\\hat {\\mu }},{\\hat {\\sigma }}^{2})=(-n/2)\[\\ln(2\\pi {\\hat {\\sigma }}^{2})+1\]}](https://wikimedia.org/api/rest_v1/media/math/render/svg/561353e6bc80d226fddd9510be61d21bc67b3aee) The estimator ![{\\displaystyle \\textstyle {\\hat {\\mu }}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/adefcea6c129cd8f06e8fc941a5f760cb9c4d5b4) is called the *[sample mean](https://en.wikipedia.org/wiki/Sample_mean "Sample mean")*, since it is the arithmetic mean of all observations.
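The estimates just derived are one-liners to compute. The sketch below (my own code, with arbitrary names) evaluates them on simulated data and confirms the stated value of the maximized log-likelihood:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(loc=5.0, scale=2.0, size=1_000)
n = x.size

mu_hat = x.mean()                        # maximum likelihood estimate of mu
sigma2_hat = ((x - mu_hat) ** 2).mean()  # note: divides by n, not n - 1

# The attained log-likelihood equals (-n/2) * (ln(2*pi*sigma2_hat) + 1).
loglik = stats.norm(mu_hat, np.sqrt(sigma2_hat)).logpdf(x).sum()
print(loglik, -(n / 2) * (np.log(2 * np.pi * sigma2_hat) + 1))
```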
The statistic ![{\\displaystyle \\textstyle {\\overline {x}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/c74eb776989f75b04948837080faa9ebc08c8cd3) is [complete](https://en.wikipedia.org/wiki/Complete_statistic "Complete statistic") and [sufficient](https://en.wikipedia.org/wiki/Sufficient_statistic "Sufficient statistic") for ⁠![{\\displaystyle \\mu }](https://wikimedia.org/api/rest_v1/media/math/render/svg/9fd47b2a39f7a7856952afec1f1db72c67af6161)⁠, and therefore by the [Lehmann–ScheffĂ© theorem](https://en.wikipedia.org/wiki/Lehmann%E2%80%93Scheff%C3%A9_theorem "Lehmann–ScheffĂ© theorem"), ![{\\displaystyle \\textstyle {\\hat {\\mu }}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/adefcea6c129cd8f06e8fc941a5f760cb9c4d5b4) is the [uniformly minimum variance unbiased](https://en.wikipedia.org/wiki/Uniformly_minimum_variance_unbiased "Uniformly minimum variance unbiased") (UMVU) estimator.[\[56\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Krishnamoorthy-56) In finite samples it is distributed normally: ![{\\displaystyle {\\hat {\\mu }}\\sim {\\mathcal {N}}(\\mu ,\\sigma ^{2}/n).}](https://wikimedia.org/api/rest_v1/media/math/render/svg/f8f1fbb023c73b0f4010814107ac36419b16a226) The variance of this estimator is equal to the *ΌΌ*\-element of the inverse [Fisher information matrix](https://en.wikipedia.org/wiki/Fisher_information_matrix "Fisher information matrix") ![{\\displaystyle \\textstyle {\\mathcal {I}}^{-1}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/98cf99dd702e8c61031251ee2506b639a6eff98f). This implies that the estimator is [finite-sample efficient](https://en.wikipedia.org/wiki/Efficient_estimator "Efficient estimator"). Of practical importance is the [standard error](https://en.wikipedia.org/wiki/Standard_error "Standard error") of ![{\\displaystyle \\textstyle {\\hat {\\mu }}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/adefcea6c129cd8f06e8fc941a5f760cb9c4d5b4) being proportional to ![{\\displaystyle \\textstyle 1/{\\sqrt {n}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/2cd0024843448c587ee8246c08fe5af7fb03cc95), that is, if one wishes to decrease the standard error by a factor of 10, one must increase the number of points in the sample by a factor of 100. This fact is widely used in determining sample sizes for opinion polls and the number of trials in [Monte Carlo simulations](https://en.wikipedia.org/wiki/Monte_Carlo_simulation "Monte Carlo simulation"). From the standpoint of the [asymptotic theory](https://en.wikipedia.org/wiki/Asymptotic_theory_\(statistics\) "Asymptotic theory (statistics)"), ![{\\displaystyle \\textstyle {\\hat {\\mu }}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/adefcea6c129cd8f06e8fc941a5f760cb9c4d5b4) is [consistent](https://en.wikipedia.org/wiki/Consistent_estimator "Consistent estimator"), that is, it [converges in probability](https://en.wikipedia.org/wiki/Converges_in_probability "Converges in probability") to ⁠![{\\displaystyle \\mu }](https://wikimedia.org/api/rest_v1/media/math/render/svg/9fd47b2a39f7a7856952afec1f1db72c67af6161)⁠ as ![{\\textstyle n\\rightarrow \\infty }](https://wikimedia.org/api/rest_v1/media/math/render/svg/680784f1a8c2242d7a04788c43a18d276b993466). 
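A quick simulation (my construction, not from the article) of the 1/√n law just quoted: multiplying the sample size by 100 divides the standard error of the sample mean by roughly 10.

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, reps = 1.0, 2_000

for n in (100, 10_000):
    means = rng.normal(0.0, sigma, size=(reps, n)).mean(axis=1)
    print(n, means.std())   # approx sigma / sqrt(n): about 0.1, then 0.01
```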
The estimator is also [asymptotically normal](https://en.wikipedia.org/wiki/Asymptotic_normality "Asymptotic normality"), which is a simple corollary of it being normal in finite samples: ![{\\displaystyle {\\sqrt {n}}({\\hat {\\mu }}-\\mu )\\,\\xrightarrow {d} \\,{\\mathcal {N}}(0,\\sigma ^{2}).}](https://wikimedia.org/api/rest_v1/media/math/render/svg/bfe762c7215a7ac297a3bd441e237f92cd415c00) The estimator ![{\\displaystyle \\textstyle {\\hat {\\sigma }}^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/1dbeb6ca1eacf73ca838981e36035f66f8449084) is called the *[sample variance](https://en.wikipedia.org/wiki/Sample_variance "Sample variance")*, since it is the variance of the sample ![{\\textstyle (x\_{1},\\ldots ,x\_{n})}](https://wikimedia.org/api/rest_v1/media/math/render/svg/55e09ee050ccb93f44cef510332f40a3d6bc651d). In practice, another estimator is often used instead of ![{\\displaystyle \\textstyle {\\hat {\\sigma }}^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/1dbeb6ca1eacf73ca838981e36035f66f8449084). This other estimator is denoted ![{\\textstyle s^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/24ac5be3cbea7265af53b722b53ab3077320e713), and is also called the *sample variance*, which represents a certain ambiguity in terminology; its square root ![{\\displaystyle s}](https://wikimedia.org/api/rest_v1/media/math/render/svg/01d131dfd7673938b947072a13a9744fe997e632) is called the *sample standard deviation*. The estimator ![{\\textstyle s^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/24ac5be3cbea7265af53b722b53ab3077320e713) differs from ![{\\displaystyle \\textstyle {\\hat {\\sigma }}^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/1dbeb6ca1eacf73ca838981e36035f66f8449084) by having (*n* − 1) instead of n in the denominator (the so-called [Bessel's correction](https://en.wikipedia.org/wiki/Bessel%27s_correction "Bessel's correction")): ![{\\displaystyle s^{2}={\\frac {n}{n-1}}{\\hat {\\sigma }}^{2}={\\frac {1}{n-1}}\\sum \_{i=1}^{n}(x\_{i}-{\\overline {x}})^{2}.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/bb09766b1fa03887c9ec7f7254e3b25f94224532) The difference between ![{\\textstyle s^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/24ac5be3cbea7265af53b722b53ab3077320e713) and ![{\\displaystyle \\textstyle {\\hat {\\sigma }}^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/1dbeb6ca1eacf73ca838981e36035f66f8449084) becomes negligibly small for large n. In finite samples, however, the motivation behind the use of ![{\\textstyle s^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/24ac5be3cbea7265af53b722b53ab3077320e713) is that it is an [unbiased estimator](https://en.wikipedia.org/wiki/Unbiased_estimator "Unbiased estimator") of the underlying parameter ![{\\textstyle \\sigma ^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/a86f1d00f664920ef46109bcddc0778f4976b490), whereas ![{\\displaystyle \\textstyle {\\hat {\\sigma }}^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/1dbeb6ca1eacf73ca838981e36035f66f8449084) is biased.
Also, by the Lehmann–ScheffĂ© theorem the estimator ![{\\textstyle s^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/24ac5be3cbea7265af53b722b53ab3077320e713) is uniformly minimum variance unbiased ([UMVU](https://en.wikipedia.org/wiki/UMVU "UMVU")),[\[56\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-Krishnamoorthy-56) which makes it the "best" estimator among all unbiased ones. However it can be shown that the biased estimator ![{\\displaystyle \\textstyle {\\hat {\\sigma }}^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/1dbeb6ca1eacf73ca838981e36035f66f8449084) is better than ![{\\textstyle s^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/24ac5be3cbea7265af53b722b53ab3077320e713) in terms of the [mean squared error](https://en.wikipedia.org/wiki/Mean_squared_error "Mean squared error") (MSE) criterion. In finite samples both ![{\\textstyle s^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/24ac5be3cbea7265af53b722b53ab3077320e713) and ![{\\displaystyle \\textstyle {\\hat {\\sigma }}^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/1dbeb6ca1eacf73ca838981e36035f66f8449084) have a scaled [chi-squared distribution](https://en.wikipedia.org/wiki/Chi-squared_distribution "Chi-squared distribution") with (*n* − 1) degrees of freedom: ![{\\displaystyle s^{2}\\sim {\\frac {\\sigma ^{2}}{n-1}}\\cdot \\chi \_{n-1}^{2},\\qquad {\\hat {\\sigma }}^{2}\\sim {\\frac {\\sigma ^{2}}{n}}\\cdot \\chi \_{n-1}^{2}.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/b55e6d2c748d5ba1ff42692650492b9506ab164d) The first of these expressions shows that the variance of ![{\\textstyle s^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/24ac5be3cbea7265af53b722b53ab3077320e713) is equal to ![{\\textstyle 2\\sigma ^{4}/(n-1)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/bb270a41c1ca6bb65b250cab50e47b302aca984f), which is slightly greater than the *σσ*-element of the inverse Fisher information matrix ![{\\displaystyle \\textstyle {\\mathcal {I}}^{-1}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/98cf99dd702e8c61031251ee2506b639a6eff98f), which is ![{\\textstyle 2\\sigma ^{4}/n}](https://wikimedia.org/api/rest_v1/media/math/render/svg/e75535dfc8adc819cb974935bd3a1b5b4e08734c). Thus, ![{\\textstyle s^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/24ac5be3cbea7265af53b722b53ab3077320e713) is not an efficient estimator for ![{\\textstyle \\sigma ^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/a86f1d00f664920ef46109bcddc0778f4976b490), and moreover, since ![{\\textstyle s^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/24ac5be3cbea7265af53b722b53ab3077320e713) is UMVU, we can conclude that the finite-sample efficient estimator for ![{\\textstyle \\sigma ^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/a86f1d00f664920ef46109bcddc0778f4976b490) does not exist.
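The bias/MSE trade-off described above shows up directly in simulation. In this sketch (arbitrary names), sÂČ averages to σÂČ while σ̂ÂČ averages to (n − 1)σÂČ/n, yet σ̂ÂČ attains the smaller mean squared error:

```python
import numpy as np

rng = np.random.default_rng(4)
sigma2, n, reps = 4.0, 10, 200_000
x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))

s2 = x.var(axis=1, ddof=1)          # Bessel-corrected, unbiased
sigma2_hat = x.var(axis=1, ddof=0)  # maximum likelihood, biased

print(s2.mean(), sigma2_hat.mean())          # approx 4.0 and 3.6
print(((s2 - sigma2) ** 2).mean(),           # MSE of s^2 ...
      ((sigma2_hat - sigma2) ** 2).mean())   # ... exceeds MSE of sigma2_hat
```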
Applying the asymptotic theory, both estimators ![{\\textstyle s^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/24ac5be3cbea7265af53b722b53ab3077320e713) and ![{\\displaystyle \\textstyle {\\hat {\\sigma }}^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/1dbeb6ca1eacf73ca838981e36035f66f8449084) are consistent, that is, they converge in probability to ![{\\textstyle \\sigma ^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/a86f1d00f664920ef46109bcddc0778f4976b490) as the sample size ![{\\textstyle n\\rightarrow \\infty }](https://wikimedia.org/api/rest_v1/media/math/render/svg/680784f1a8c2242d7a04788c43a18d276b993466). The two estimators are also both asymptotically normal: ![{\\displaystyle {\\sqrt {n}}({\\hat {\\sigma }}^{2}-\\sigma ^{2})\\simeq {\\sqrt {n}}(s^{2}-\\sigma ^{2})\\,\\xrightarrow {d} \\,{\\mathcal {N}}(0,2\\sigma ^{4}).}](https://wikimedia.org/api/rest_v1/media/math/render/svg/64e1884a9be0b16cd8d2bfbe88f08e7ca6b02a45) In particular, both estimators are asymptotically efficient for ![{\\textstyle \\sigma ^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/a86f1d00f664920ef46109bcddc0778f4976b490).

### Confidence intervals

By [Cochran's theorem](https://en.wikipedia.org/wiki/Cochran%27s_theorem "Cochran's theorem"), for normal distributions the sample mean ![{\\displaystyle \\textstyle {\\hat {\\mu }}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/adefcea6c129cd8f06e8fc941a5f760cb9c4d5b4) and the sample variance *s*ÂČ are [independent](https://en.wikipedia.org/wiki/Independence_\(probability_theory\) "Independence (probability theory)"), which means there can be no gain in considering their [joint distribution](https://en.wikipedia.org/wiki/Joint_distribution "Joint distribution"). There is also a converse theorem: if in a sample the sample mean and sample variance are independent, then the sample must have come from the normal distribution. The independence between ![{\\displaystyle \\textstyle {\\hat {\\mu }}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/adefcea6c129cd8f06e8fc941a5f760cb9c4d5b4) and *s* can be employed to construct the so-called *t-statistic*: ![{\\displaystyle t={\\frac {{\\hat {\\mu }}-\\mu }{s/{\\sqrt {n}}}}={\\frac {{\\overline {x}}-\\mu }{\\sqrt {{\\frac {1}{n(n-1)}}\\sum (x\_{i}-{\\overline {x}})^{2}}}}\\sim t\_{n-1}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/35f4ea0fbb1b9bdbcef271db64817c384d43497a) This quantity *t* has the [Student's t-distribution](https://en.wikipedia.org/wiki/Student%27s_t-distribution "Student's t-distribution") with (*n* − 1) degrees of freedom, and it is an [ancillary statistic](https://en.wikipedia.org/wiki/Ancillary_statistic "Ancillary statistic") (independent of the value of the parameters).
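A Monte Carlo check (my own sketch) of this distributional claim: t statistics computed from normal samples should match the Student's t distribution with n − 1 degrees of freedom, whatever the values of ÎŒ and σ:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
mu, sigma, n, reps = 7.0, 3.0, 8, 100_000
x = rng.normal(mu, sigma, size=(reps, n))

t = (x.mean(axis=1) - mu) / (x.std(axis=1, ddof=1) / np.sqrt(n))
# Kolmogorov-Smirnov distance to Student's t with n - 1 degrees of freedom:
print(stats.kstest(t, stats.t(df=n - 1).cdf).statistic)   # small: good fit
```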
Inverting the distribution of this *t*-statistic will allow us to construct the [confidence interval](https://en.wikipedia.org/wiki/Confidence_interval "Confidence interval") for ÎŒ;[\[57\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-57) similarly, inverting the *χ*ÂČ distribution of the statistic *s*ÂČ will give us the confidence interval for *σ*ÂČ:[\[58\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-58) ![{\\displaystyle \\mu \\in \\left\[{\\hat {\\mu }}-t\_{n-1,1-\\alpha /2}{\\frac {s}{\\sqrt {n}}},\\,{\\hat {\\mu }}+t\_{n-1,1-\\alpha /2}{\\frac {s}{\\sqrt {n}}}\\right\]}](https://wikimedia.org/api/rest_v1/media/math/render/svg/86ad00c4aac2907f6358d3ab3a5e413a58158be4) ![{\\displaystyle \\sigma ^{2}\\in \\left\[{\\frac {n-1}{\\chi \_{n-1,1-\\alpha /2}^{2}}}s^{2},\\,{\\frac {n-1}{\\chi \_{n-1,\\alpha /2}^{2}}}s^{2}\\right\]}](https://wikimedia.org/api/rest_v1/media/math/render/svg/87c0c6ae7bd48ba8377279fed58df479f0de900c) where *t*_{k,p} and *χ*ÂČ_{k,p} are the *p*th [quantiles](https://en.wikipedia.org/wiki/Quantile "Quantile") of the *t*- and *χ*ÂČ-distributions respectively. These confidence intervals are of the *[confidence level](https://en.wikipedia.org/wiki/Confidence_level "Confidence level")* 1 − *α*, meaning that the true values ÎŒ and *σ*ÂČ fall outside of these intervals with probability (or [significance level](https://en.wikipedia.org/wiki/Significance_level "Significance level")) *α*. In practice people usually take *α* = 5%, resulting in 95% confidence intervals. The confidence interval for σ can be found by taking the square root of the interval bounds for *σ*ÂČ. Approximate formulas can be derived from the asymptotic distributions of ![{\\displaystyle \\textstyle {\\hat {\\mu }}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/adefcea6c129cd8f06e8fc941a5f760cb9c4d5b4) and *s*ÂČ: ![{\\displaystyle \\mu \\in \\left\[{\\hat {\\mu }}-{\\frac {\|z\_{\\alpha /2}\|}{\\sqrt {n}}}s,\\,{\\hat {\\mu }}+{\\frac {\|z\_{\\alpha /2}\|}{\\sqrt {n}}}s\\right\]}](https://wikimedia.org/api/rest_v1/media/math/render/svg/6ed5adb135a9cd03de1aa21d774e66be1adb4ea8) ![{\\displaystyle \\sigma ^{2}\\in \\left\[s^{2}-{\\sqrt {2}}{\\frac {\|z\_{\\alpha /2}\|}{\\sqrt {n}}}s^{2},\\,s^{2}+{\\sqrt {2}}{\\frac {\|z\_{\\alpha /2}\|}{\\sqrt {n}}}s^{2}\\right\]}](https://wikimedia.org/api/rest_v1/media/math/render/svg/56646fb560578a0414ad2f045c14031c4015b9a2) The approximate formulas become valid for large values of *n*, and are more convenient for manual calculation since the standard normal quantiles *z*_{α/2} do not depend on *n*. In particular, the most popular value *α* = 5% results in |*z*_{0.025}| = [1.96](https://en.wikipedia.org/wiki/1.96 "1.96").

### Normality tests

Normality tests assess the likelihood that the given data set {*x*₁, ..., *xₙ*} comes from a normal distribution. Typically the [null hypothesis](https://en.wikipedia.org/wiki/Null_hypothesis "Null hypothesis") *H*₀ is that the observations are distributed normally with unspecified mean ÎŒ and variance *σ*ÂČ, versus the alternative *Hₐ* that the distribution is arbitrary. Many tests (over 40) have been devised for this problem. The more prominent of them are outlined below:

**Diagnostic plots** are more intuitively appealing but subjective at the same time, as they rely on informal human judgement to accept or reject the null hypothesis.
- [Q–Q plot](https://en.wikipedia.org/wiki/Q%E2%80%93Q_plot "Q–Q plot"), also known as [normal probability plot](https://en.wikipedia.org/wiki/Normal_probability_plot "Normal probability plot") or [rankit](https://en.wikipedia.org/wiki/Rankit "Rankit") plot—is a plot of the sorted values from the data set against the expected values of the corresponding quantiles from the standard normal distribution. That is, it is a plot of points of the form (*Ί*−1(*p**k*), *x*(*k*)), where the plotting points *p**k* are equal to *p**k* = (*k* − *α*)/(*n* + 1 − 2*α*) and α is an adjustment constant, which can be anything between 0 and 1. If the null hypothesis is true, the plotted points should approximately lie on a straight line.
- [P–P plot](https://en.wikipedia.org/wiki/P%E2%80%93P_plot "P–P plot") – similar to the Q–Q plot, but used much less frequently. This method consists of plotting the points (*Ί*(*z*(*k*)), *p**k*), where ![{\\textstyle \\textstyle z\_{(k)}=(x\_{(k)}-{\\hat {\\mu }})/{\\hat {\\sigma }}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/11fad9789c407013d2c4cb224bfa84b320563fd9). For normally distributed data this plot should lie on a straight line between (0, 0) and (1, 1).

**Goodness-of-fit tests**:

*Moment-based tests*:

- [D'Agostino's K-squared test](https://en.wikipedia.org/wiki/D%27Agostino%27s_K-squared_test "D'Agostino's K-squared test")
- [Jarque–Bera test](https://en.wikipedia.org/wiki/Jarque%E2%80%93Bera_test "Jarque–Bera test")
- [Shapiro–Wilk test](https://en.wikipedia.org/wiki/Shapiro%E2%80%93Wilk_test "Shapiro–Wilk test"): This is based on the fact that the line in the Q–Q plot has slope σ. The test compares the least squares estimate of that slope with the value of the sample variance, and rejects the null hypothesis if these two quantities differ significantly.

*Tests based on the empirical distribution function*:

- [Anderson–Darling test](https://en.wikipedia.org/wiki/Anderson%E2%80%93Darling_test "Anderson–Darling test")
- [Lilliefors test](https://en.wikipedia.org/wiki/Lilliefors_test "Lilliefors test") (an adaptation of the [Kolmogorov–Smirnov test](https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test "Kolmogorov–Smirnov test"))

### Bayesian analysis of the normal distribution \[[edit](https://en.wikipedia.org/w/index.php?title=Normal_distribution&action=edit&section=39 "Edit section: Bayesian analysis of the normal distribution")\]

Bayesian analysis of normally distributed data is complicated by the many different possibilities that may be considered:

- Either the mean, or the variance, or neither, may be considered a fixed quantity.
- When the variance is unknown, analysis may be done directly in terms of the variance, or in terms of the [precision](https://en.wikipedia.org/wiki/Precision_\(statistics\) "Precision (statistics)"), the reciprocal of the variance. The formulas below are expressed in terms of precision because this simplifies the analysis in most cases.
- Both univariate and [multivariate](https://en.wikipedia.org/wiki/Multivariate_normal_distribution "Multivariate normal distribution") cases need to be considered.
- Either [conjugate](https://en.wikipedia.org/wiki/Conjugate_prior "Conjugate prior") or [improper](https://en.wikipedia.org/wiki/Improper_prior "Improper prior") [prior distributions](https://en.wikipedia.org/wiki/Prior_distribution "Prior distribution") may be placed on the unknown variables.
- An additional set of cases occurs in [Bayesian linear regression](https://en.wikipedia.org/wiki/Bayesian_linear_regression "Bayesian linear regression"), where in the basic model the data is assumed to be normally distributed, and normal priors are placed on the [regression coefficients](https://en.wikipedia.org/wiki/Regression_coefficient "Regression coefficient"). The resulting analysis is similar to the basic cases of [independent identically distributed](https://en.wikipedia.org/wiki/Independent_identically_distributed "Independent identically distributed") data. The formulas for the non-linear-regression cases are summarized in the [conjugate prior](https://en.wikipedia.org/wiki/Conjugate_prior "Conjugate prior") article. #### Sum of two quadratics \[[edit](https://en.wikipedia.org/w/index.php?title=Normal_distribution&action=edit&section=40 "Edit section: Sum of two quadratics")\] The following auxiliary formula is useful for simplifying the [posterior](https://en.wikipedia.org/wiki/Posterior_distribution "Posterior distribution") update equations, which otherwise become fairly tedious. ![{\\displaystyle a(x-y)^{2}+b(x-z)^{2}=(a+b)\\left(x-{\\frac {ay+bz}{a+b}}\\right)^{2}+{\\frac {ab}{a+b}}(y-z)^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/dfac4114765b1f994800c9b424b82564b57ba179) This equation rewrites the sum of two quadratics in x by expanding the squares, grouping the terms in x, and [completing the square](https://en.wikipedia.org/wiki/Completing_the_square "Completing the square"). Note the following about the complex constant factors attached to some of the terms: 1. The factor ![{\\textstyle {\\frac {ay+bz}{a+b}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/b27b947e13b22bb972ea8cc460c5ab3de2db0237) has the form of a [weighted average](https://en.wikipedia.org/wiki/Weighted_average "Weighted average") of y and z. 2. ![{\\textstyle {\\frac {ab}{a+b}}={\\frac {1}{{\\frac {1}{a}}+{\\frac {1}{b}}}}=(a^{-1}+b^{-1})^{-1}.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/042395d4562e6a427ab04e879ba859a10e8087a7) This shows that this factor can be thought of as resulting from a situation where the [reciprocals](https://en.wikipedia.org/wiki/Multiplicative_inverse "Multiplicative inverse") of quantities a and b add directly, so to combine a and b themselves, it is necessary to reciprocate, add, and reciprocate the result again to get back into the original units. This is exactly the sort of operation performed by the [harmonic mean](https://en.wikipedia.org/wiki/Harmonic_mean "Harmonic mean"), so it is not surprising that ![{\\textstyle {\\frac {ab}{a+b}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/7021bc974153f898ddadd50b0351d9f3a28de8a8) is one-half the [harmonic mean](https://en.wikipedia.org/wiki/Harmonic_mean "Harmonic mean") of a and b. 
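The scalar identity above is easy to verify numerically. The following minimal check (a sketch with arbitrary test values, plain Python, not part of the original derivation) confirms both the weighted-average center and the harmonic-mean-like coefficient:

```python
# Check: a(x-y)^2 + b(x-z)^2
#      = (a+b) * (x - (a*y + b*z)/(a+b))^2 + (a*b/(a+b)) * (y-z)^2
a, b, x, y, z = 2.0, 3.0, 1.7, -0.4, 2.5   # arbitrary test values

lhs = a * (x - y) ** 2 + b * (x - z) ** 2
c = (a * y + b * z) / (a + b)              # weighted average of y and z
rhs = (a + b) * (x - c) ** 2 + (a * b / (a + b)) * (y - z) ** 2

assert abs(lhs - rhs) < 1e-12              # identity holds to rounding error
```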
A similar formula can be written for the sum of two vector quadratics: If **x**, **y**, **z** are vectors of length k, and **A** and **B** are [symmetric](https://en.wikipedia.org/wiki/Symmetric_matrix "Symmetric matrix"), [invertible matrices](https://en.wikipedia.org/wiki/Invertible_matrices "Invertible matrices") of size ![{\\textstyle k\\times k}](https://wikimedia.org/api/rest_v1/media/math/render/svg/2b9889111e6949ae57adb3a883df2f8a29bb5062), then ![{\\displaystyle {\\begin{aligned}&(\\mathbf {y} -\\mathbf {x} )'\\mathbf {A} (\\mathbf {y} -\\mathbf {x} )+(\\mathbf {x} -\\mathbf {z} )'\\mathbf {B} (\\mathbf {x} -\\mathbf {z} )\\\\={}&(\\mathbf {x} -\\mathbf {c} )'(\\mathbf {A} +\\mathbf {B} )(\\mathbf {x} -\\mathbf {c} )+(\\mathbf {y} -\\mathbf {z} )'(\\mathbf {A} ^{-1}+\\mathbf {B} ^{-1})^{-1}(\\mathbf {y} -\\mathbf {z} )\\end{aligned}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/6374f98fcb11f7c1273b06c44e1c0f0b84154048) where ![{\\displaystyle \\mathbf {c} =(\\mathbf {A} +\\mathbf {B} )^{-1}(\\mathbf {A} \\mathbf {y} +\\mathbf {B} \\mathbf {z} )}](https://wikimedia.org/api/rest_v1/media/math/render/svg/267a22091cc9d9afb86fcacebcc6b842cb0e9b1b) The form **x**â€Č **A** **x** is called a [quadratic form](https://en.wikipedia.org/wiki/Quadratic_form "Quadratic form") and is a [scalar](https://en.wikipedia.org/wiki/Scalar_\(mathematics\) "Scalar (mathematics)"): ![{\\displaystyle \\mathbf {x} '\\mathbf {A} \\mathbf {x} =\\sum \_{i,j}a\_{ij}x\_{i}x\_{j}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/8ef06ff3139875b96fe704a43bbecebacdbea460) In other words, it sums up all possible combinations of products of pairs of elements from **x**, with a separate coefficient for each. In addition, since ![{\\textstyle x\_{i}x\_{j}=x\_{j}x\_{i}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/a0e9baf61d13d13e6695b57c3f31856e72c860f8), only the sum ![{\\textstyle a\_{ij}+a\_{ji}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/c48f5fdcc3f2d80c5a75c3714a154a7e16a3195f) matters for any off-diagonal elements of **A**, and there is no loss of generality in assuming that **A** is [symmetric](https://en.wikipedia.org/wiki/Symmetric_matrix "Symmetric matrix"). Furthermore, if **A** is symmetric, then the form ![{\\textstyle \\mathbf {x} '\\mathbf {A} \\mathbf {y} =\\mathbf {y} '\\mathbf {A} \\mathbf {x} .}](https://wikimedia.org/api/rest_v1/media/math/render/svg/8d25486eab8a57da216fb9418eb8b4fa889c7b03) #### Sum of differences from the mean \[[edit](https://en.wikipedia.org/w/index.php?title=Normal_distribution&action=edit&section=43 "Edit section: Sum of differences from the mean")\] Another useful formula is as follows: ![{\\displaystyle \\sum \_{i=1}^{n}(x\_{i}-\\mu )^{2}=\\sum \_{i=1}^{n}(x\_{i}-{\\bar {x}})^{2}+n({\\bar {x}}-\\mu )^{2}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/6abcabe83cd01aabf39c16b0bc67994086519d02) where ![{\\textstyle {\\bar {x}}={\\frac {1}{n}}\\sum \_{i=1}^{n}x\_{i}.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/17005337073440fa8d7c41536f875c6cd5d1fc0e) ### With known variance \[[edit](https://en.wikipedia.org/w/index.php?title=Normal_distribution&action=edit&section=44 "Edit section: With known variance")\] For a set of [i.i.d.](https://en.wikipedia.org/wiki/I.i.d. 
"I.i.d.") normally distributed data points **X** of size n where each individual point x follows ![{\\textstyle x\\sim {\\mathcal {N}}(\\mu ,\\sigma ^{2})}](https://wikimedia.org/api/rest_v1/media/math/render/svg/bf563c08aa87170a438239b0d291a4093fd2cb27) with known [variance](https://en.wikipedia.org/wiki/Variance "Variance") *σ*2, the [conjugate prior](https://en.wikipedia.org/wiki/Conjugate_prior "Conjugate prior") distribution is also normally distributed. This can be shown more easily by rewriting the variance as the [precision](https://en.wikipedia.org/wiki/Precision_\(statistics\) "Precision (statistics)"), i.e. using *τ* = 1/*σ*2. Then if ![{\\textstyle x\\sim {\\mathcal {N}}(\\mu ,1/\\tau )}](https://wikimedia.org/api/rest_v1/media/math/render/svg/83dac9d557f3dae598f1438c0a8164d82501fe19) and ![{\\textstyle \\mu \\sim {\\mathcal {N}}(\\mu \_{0},1/\\tau \_{0}),}](https://wikimedia.org/api/rest_v1/media/math/render/svg/2f12cd6980bf93122c047ab32474d8403317756b) we proceed as follows. First, the [likelihood function](https://en.wikipedia.org/wiki/Likelihood_function "Likelihood function") is (using the formula above for the sum of differences from the mean): ![{\\displaystyle {\\begin{aligned}p(\\mathbf {X} \\mid \\mu ,\\tau )&=\\prod \_{i=1}^{n}{\\sqrt {\\frac {\\tau }{2\\pi }}}\\exp \\left(-{\\frac {1}{2}}\\tau (x\_{i}-\\mu )^{2}\\right)\\\\&=\\left({\\frac {\\tau }{2\\pi }}\\right)^{n/2}\\exp \\left(-{\\frac {1}{2}}\\tau \\sum \_{i=1}^{n}(x\_{i}-\\mu )^{2}\\right)\\\\&=\\left({\\frac {\\tau }{2\\pi }}\\right)^{n/2}\\exp \\left\[-{\\frac {1}{2}}\\tau \\left(\\sum \_{i=1}^{n}(x\_{i}-{\\bar {x}})^{2}+n({\\bar {x}}-\\mu )^{2}\\right)\\right\].\\end{aligned}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/c2bcd1c34520a24e29b758a0f7427e79e9d8a414) Then, we proceed as follows: ![{\\displaystyle {\\begin{aligned}p(\\mu \\mid \\mathbf {X} )&\\propto p(\\mathbf {X} \\mid \\mu )p(\\mu )\\\\&=\\left({\\frac {\\tau }{2\\pi }}\\right)^{n/2}\\exp \\left\[-{\\frac {1}{2}}\\tau \\left(\\sum \_{i=1}^{n}(x\_{i}-{\\bar {x}})^{2}+n({\\bar {x}}-\\mu )^{2}\\right)\\right\]{\\sqrt {\\frac {\\tau \_{0}}{2\\pi }}}\\exp \\left(-{\\frac {1}{2}}\\tau \_{0}(\\mu -\\mu \_{0})^{2}\\right)\\\\&\\propto \\exp \\left(-{\\frac {1}{2}}\\left(\\tau \\left(\\sum \_{i=1}^{n}(x\_{i}-{\\bar {x}})^{2}+n({\\bar {x}}-\\mu )^{2}\\right)+\\tau \_{0}(\\mu -\\mu \_{0})^{2}\\right)\\right)\\\\&\\propto \\exp \\left(-{\\frac {1}{2}}\\left(n\\tau ({\\bar {x}}-\\mu )^{2}+\\tau \_{0}(\\mu -\\mu \_{0})^{2}\\right)\\right)\\\\&=\\exp \\left(-{\\frac {1}{2}}(n\\tau +\\tau \_{0})\\left(\\mu -{\\dfrac {n\\tau {\\bar {x}}+\\tau \_{0}\\mu \_{0}}{n\\tau +\\tau \_{0}}}\\right)^{2}+{\\frac {n\\tau \\tau \_{0}}{n\\tau +\\tau \_{0}}}({\\bar {x}}-\\mu \_{0})^{2}\\right)\\\\&\\propto \\exp \\left(-{\\frac {1}{2}}(n\\tau +\\tau \_{0})\\left(\\mu -{\\dfrac {n\\tau {\\bar {x}}+\\tau \_{0}\\mu \_{0}}{n\\tau +\\tau \_{0}}}\\right)^{2}\\right)\\end{aligned}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/96e309ead00fbc8603eced5342aa5df534522d6a) In the above derivation, we used the formula above for the sum of two quadratics and eliminated all constant factors not involving ÎŒ. 
The result is the [kernel](https://en.wikipedia.org/wiki/Kernel_\(statistics\) "Kernel (statistics)") of a normal distribution, with mean ![{\\textstyle {\\frac {n\\tau {\\bar {x}}+\\tau \_{0}\\mu \_{0}}{n\\tau +\\tau \_{0}}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/b92a63cba1c7539f1d484ee0296c910912780a79) and precision ![{\\textstyle n\\tau +\\tau \_{0}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/23717a4ce103170f3da265af44ce96e80461871e), i.e.

![{\\displaystyle p(\\mu \\mid \\mathbf {X} )\\sim {\\mathcal {N}}\\left({\\frac {n\\tau {\\bar {x}}+\\tau \_{0}\\mu \_{0}}{n\\tau +\\tau \_{0}}},{\\frac {1}{n\\tau +\\tau \_{0}}}\\right)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/a45b361f59d044be9a7d87bf92514795f38419c8)

This can be written as a set of Bayesian update equations for the posterior parameters in terms of the prior parameters:

![{\\displaystyle {\\begin{aligned}\\tau \_{0}'&=\\tau \_{0}+n\\tau \\\\\[5pt\]\\mu \_{0}'&={\\frac {n\\tau {\\bar {x}}+\\tau \_{0}\\mu \_{0}}{n\\tau +\\tau \_{0}}}\\\\\[5pt\]{\\bar {x}}&={\\frac {1}{n}}\\sum \_{i=1}^{n}x\_{i}\\end{aligned}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/a6cfbdf504b1a9ce4cbe79561b4ae983fdf7271d)

That is, to combine n data points with total precision of *nτ* (or equivalently, total variance of *σ*2/*n*) and mean of values ![{\\textstyle {\\bar {x}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/66ef5983f162d4e49610bc8240e713cc2bbca7d8), derive a new total precision simply by adding the total precision of the data to the prior total precision, and form a new mean through a *precision-weighted average*, i.e. a [weighted average](https://en.wikipedia.org/wiki/Weighted_average "Weighted average") of the data mean and the prior mean, each weighted by the associated total precision. This makes logical sense if the precision is thought of as indicating the certainty of the observations: in the distribution of the posterior mean, each of the input components is weighted by its certainty, and the certainty of this distribution is the sum of the individual certainties. (For intuition, compare the expression "the whole is (or is not) greater than the sum of its parts". In addition, consider that the knowledge of the posterior comes from a combination of the knowledge of the prior and the likelihood, so it makes sense that we are more certain of it than of either of its components.)

The above formula reveals why it is more convenient to do [Bayesian analysis](https://en.wikipedia.org/wiki/Bayesian_analysis "Bayesian analysis") of [conjugate priors](https://en.wikipedia.org/wiki/Conjugate_prior "Conjugate prior") for the normal distribution in terms of the precision: the posterior precision is simply the sum of the prior and likelihood precisions, and the posterior mean is computed through a precision-weighted average, as described above.
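In code, this update is short. The sketch below (assuming NumPy; the function name is ours) implements the two update equations in the precision parameterization:

```python
import numpy as np

def update_mean(x, tau, mu0, tau0):
    """Conjugate update for the mean when the precision tau = 1/sigma^2 is known.

    Prior:     mu ~ N(mu0, 1/tau0)
    Posterior: mu ~ N(mu0_new, 1/tau0_new)
    """
    n = len(x)
    tau0_new = tau0 + n * tau                                   # precisions add
    mu0_new = (n * tau * np.mean(x) + tau0 * mu0) / tau0_new    # precision-weighted average
    return mu0_new, tau0_new

rng = np.random.default_rng(1)
data = rng.normal(3.0, 1.0, size=50)           # sigma = 1, so tau = 1
print(update_mean(data, tau=1.0, mu0=0.0, tau0=0.1))
```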
The same formulas can be written in terms of variance by reciprocating all the precisions, yielding the less convenient formulas

![{\\displaystyle {\\begin{aligned}{\\sigma \_{0}^{2}}'&={\\frac {1}{{\\frac {n}{\\sigma ^{2}}}+{\\frac {1}{\\sigma \_{0}^{2}}}}}\\\\\[5pt\]\\mu \_{0}'&={\\frac {{\\frac {n{\\bar {x}}}{\\sigma ^{2}}}+{\\frac {\\mu \_{0}}{\\sigma \_{0}^{2}}}}{{\\frac {n}{\\sigma ^{2}}}+{\\frac {1}{\\sigma \_{0}^{2}}}}}\\\\\[5pt\]{\\bar {x}}&={\\frac {1}{n}}\\sum \_{i=1}^{n}x\_{i}\\end{aligned}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/ea454c8840683777ce8192d9ae63068c63962858)

#### With known mean \[[edit](https://en.wikipedia.org/w/index.php?title=Normal_distribution&action=edit&section=45 "Edit section: With known mean")\]

For a set of [i.i.d.](https://en.wikipedia.org/wiki/I.i.d. "I.i.d.") normally distributed data points **X** of size n where each individual point x follows ![{\\textstyle x\\sim {\\mathcal {N}}(\\mu ,\\sigma ^{2})}](https://wikimedia.org/api/rest_v1/media/math/render/svg/bf563c08aa87170a438239b0d291a4093fd2cb27) with known mean ÎŒ, the [conjugate prior](https://en.wikipedia.org/wiki/Conjugate_prior "Conjugate prior") of the [variance](https://en.wikipedia.org/wiki/Variance "Variance") has an [inverse gamma distribution](https://en.wikipedia.org/wiki/Inverse_gamma_distribution "Inverse gamma distribution") or a [scaled inverse chi-squared distribution](https://en.wikipedia.org/wiki/Scaled_inverse_chi-squared_distribution "Scaled inverse chi-squared distribution"). The two are equivalent except for having different [parameterizations](https://en.wikipedia.org/wiki/Parameter "Parameter"). Although the inverse gamma is more commonly used, we use the scaled inverse chi-squared for the sake of convenience. The prior for *σ*2 is as follows:

![{\\displaystyle p(\\sigma ^{2}\\mid \\nu \_{0},\\sigma \_{0}^{2})={\\frac {(\\sigma \_{0}^{2}{\\frac {\\nu \_{0}}{2}})^{\\nu \_{0}/2}}{\\Gamma \\left({\\frac {\\nu \_{0}}{2}}\\right)}}~{\\frac {\\exp \\left\[{\\frac {-\\nu \_{0}\\sigma \_{0}^{2}}{2\\sigma ^{2}}}\\right\]}{(\\sigma ^{2})^{1+{\\frac {\\nu \_{0}}{2}}}}}\\propto {\\frac {\\exp \\left\[{\\frac {-\\nu \_{0}\\sigma \_{0}^{2}}{2\\sigma ^{2}}}\\right\]}{(\\sigma ^{2})^{1+{\\frac {\\nu \_{0}}{2}}}}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/ef2528fe4774a93087d4adae570ef9ab84707f52)

The [likelihood function](https://en.wikipedia.org/wiki/Likelihood_function "Likelihood function") from above, written in terms of the variance, is:

![{\\displaystyle {\\begin{aligned}p(\\mathbf {X} \\mid \\mu ,\\sigma ^{2})&=\\left({\\frac {1}{2\\pi \\sigma ^{2}}}\\right)^{n/2}\\exp \\left\[-{\\frac {1}{2\\sigma ^{2}}}\\sum \_{i=1}^{n}(x\_{i}-\\mu )^{2}\\right\]\\\\&=\\left({\\frac {1}{2\\pi \\sigma ^{2}}}\\right)^{n/2}\\exp \\left\[-{\\frac {S}{2\\sigma ^{2}}}\\right\]\\end{aligned}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/cc06aa31588bba03e4748f8f345f0638a75dc156)

where ![{\\displaystyle S=\\sum \_{i=1}^{n}(x\_{i}-\\mu )^{2}.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/56adf28a77173ce852c7de7eeee102b2f6895b39)

Then:

![{\\displaystyle {\\begin{aligned}p(\\sigma ^{2}\\mid \\mathbf {X} )&\\propto p(\\mathbf {X} \\mid \\sigma ^{2})p(\\sigma ^{2})\\\\&=\\left({\\frac {1}{2\\pi \\sigma ^{2}}}\\right)^{n/2}\\exp \\left\[-{\\frac {S}{2\\sigma ^{2}}}\\right\]{\\frac {(\\sigma \_{0}^{2}{\\frac {\\nu \_{0}}{2}})^{\\frac {\\nu \_{0}}{2}}}{\\Gamma \\left({\\frac {\\nu \_{0}}{2}}\\right)}}~{\\frac {\\exp \\left\[{\\frac {-\\nu \_{0}\\sigma \_{0}^{2}}{2\\sigma ^{2}}}\\right\]}{(\\sigma ^{2})^{1+{\\frac {\\nu \_{0}}{2}}}}}\\\\&\\propto \\left({\\frac {1}{\\sigma ^{2}}}\\right)^{n/2}{\\frac {1}{(\\sigma ^{2})^{1+{\\frac {\\nu \_{0}}{2}}}}}\\exp \\left\[-{\\frac {S}{2\\sigma ^{2}}}+{\\frac {-\\nu \_{0}\\sigma \_{0}^{2}}{2\\sigma ^{2}}}\\right\]\\\\&={\\frac {1}{(\\sigma ^{2})^{1+{\\frac {\\nu \_{0}+n}{2}}}}}\\exp \\left\[-{\\frac {\\nu \_{0}\\sigma \_{0}^{2}+S}{2\\sigma ^{2}}}\\right\]\\end{aligned}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/381c1b93f6dc76e2cdca9f3f1f77132dd51dc55f)

The above is also a scaled inverse chi-squared distribution where

![{\\displaystyle {\\begin{aligned}\\nu \_{0}'&=\\nu \_{0}+n\\\\\\nu \_{0}'{\\sigma \_{0}^{2}}'&=\\nu \_{0}\\sigma \_{0}^{2}+\\sum \_{i=1}^{n}(x\_{i}-\\mu )^{2}\\end{aligned}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/e1d9cea4f20a8750894be82fb32d617284c433fd)

or equivalently

![{\\displaystyle {\\begin{aligned}\\nu \_{0}'&=\\nu \_{0}+n\\\\{\\sigma \_{0}^{2}}'&={\\frac {\\nu \_{0}\\sigma \_{0}^{2}+\\sum \_{i=1}^{n}(x\_{i}-\\mu )^{2}}{\\nu \_{0}+n}}\\end{aligned}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/192be53c5d9d249b2ef7ca5622430b689f1aee64)

Reparameterizing in terms of an [inverse gamma distribution](https://en.wikipedia.org/wiki/Inverse_gamma_distribution "Inverse gamma distribution"), the result is:

![{\\displaystyle {\\begin{aligned}\\alpha '&=\\alpha +{\\frac {n}{2}}\\\\\\beta '&=\\beta +{\\frac {\\sum \_{i=1}^{n}(x\_{i}-\\mu )^{2}}{2}}\\end{aligned}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/6242673d0e1932e640fa7ebb2167edbb20535f35)
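As with the mean, this update can be written as a short function. The sketch below (assuming NumPy; the function name is ours) implements the scaled inverse chi-squared update just derived:

```python
import numpy as np

def update_variance(x, mu, nu0, sigma2_0):
    """Conjugate update for the variance when the mean mu is known.

    Prior:     sigma^2 ~ Scaled-Inv-chi2(nu0, sigma2_0)
    Posterior: sigma^2 ~ Scaled-Inv-chi2(nu_new, sigma2_new)
    """
    n = len(x)
    S = np.sum((x - mu) ** 2)                    # sum of squared deviations from the known mean
    nu_new = nu0 + n                             # degrees of freedom accumulate
    sigma2_new = (nu0 * sigma2_0 + S) / nu_new   # pooled scale of prior and data
    return nu_new, sigma2_new
```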
#### With unknown mean and unknown variance \[[edit](https://en.wikipedia.org/w/index.php?title=Normal_distribution&action=edit&section=46 "Edit section: With unknown mean and unknown variance")\]

For a set of [i.i.d.](https://en.wikipedia.org/wiki/I.i.d. "I.i.d.") normally distributed data points **X** of size n where each individual point x follows ![{\\textstyle x\\sim {\\mathcal {N}}(\\mu ,\\sigma ^{2})}](https://wikimedia.org/api/rest_v1/media/math/render/svg/bf563c08aa87170a438239b0d291a4093fd2cb27) with unknown mean ÎŒ and unknown [variance](https://en.wikipedia.org/wiki/Variance "Variance") *σ*2, a combined (multivariate) [conjugate prior](https://en.wikipedia.org/wiki/Conjugate_prior "Conjugate prior") is placed over the mean and variance, consisting of a [normal-inverse-gamma distribution](https://en.wikipedia.org/wiki/Normal-inverse-gamma_distribution "Normal-inverse-gamma distribution"). Logically, this originates as follows:

1. From the analysis of the case with unknown mean but known variance, we see that the update equations involve [sufficient statistics](https://en.wikipedia.org/wiki/Sufficient_statistic "Sufficient statistic") computed from the data, consisting of the mean of the data points and the total variance of the data points, computed in turn from the known variance divided by the number of data points.
2. From the analysis of the case with unknown variance but known mean, we see that the update equations involve sufficient statistics over the data, consisting of the number of data points and the [sum of squared deviations](https://en.wikipedia.org/wiki/Sum_of_squared_deviations "Sum of squared deviations").
3. Keep in mind that the posterior update values serve as the prior distribution when further data is handled. Thus, we should logically think of our priors in terms of the sufficient statistics just described, with the same semantics kept in mind as much as possible.
4. To handle the case where both mean and variance are unknown, we could place independent priors over the mean and variance, with fixed estimates of the average mean, total variance, number of data points used to compute the variance prior, and sum of squared deviations. Note, however, that in reality the total variance of the mean depends on the unknown variance, and the sum of squared deviations that goes into the variance prior (appears to) depend on the unknown mean. In practice, the latter dependence is relatively unimportant: shifting the actual mean shifts the generated points by an equal amount, and on average the squared deviations will remain the same. This is not the case, however, with the total variance of the mean: as the unknown variance increases, the total variance of the mean will increase proportionately, and we would like to capture this dependence.
5. This suggests that we create a *conditional prior* of the mean on the unknown variance, with a hyperparameter specifying the mean of the [pseudo-observations](https://en.wikipedia.org/wiki/Pseudo-observation "Pseudo-observation") associated with the prior, and another parameter specifying the number of pseudo-observations. This number serves as a scaling parameter on the variance, making it possible to control the overall variance of the mean relative to the actual variance parameter. The prior for the variance also has two hyperparameters, one specifying the sum of squared deviations of the pseudo-observations associated with the prior, and another specifying once again the number of pseudo-observations. Each of the priors has a hyperparameter specifying the number of pseudo-observations, and in each case this controls the relative variance of that prior. These are given as two separate hyperparameters so that the variance (i.e., the confidence) of the two priors can be controlled separately.
6. This leads immediately to the [normal-inverse-gamma distribution](https://en.wikipedia.org/wiki/Normal-inverse-gamma_distribution "Normal-inverse-gamma distribution"), which is the product of the two distributions just defined, with [conjugate priors](https://en.wikipedia.org/wiki/Conjugate_prior "Conjugate prior") used (an [inverse gamma distribution](https://en.wikipedia.org/wiki/Inverse_gamma_distribution "Inverse gamma distribution") over the variance, and a normal distribution over the mean, *conditional* on the variance) and with the same four parameters just defined.

The priors are normally defined as follows:

![{\\displaystyle {\\begin{aligned}p(\\mu \\mid \\sigma ^{2};\\mu \_{0},n\_{0})&\\sim {\\mathcal {N}}(\\mu \_{0},\\sigma ^{2}/n\_{0})\\\\p(\\sigma ^{2};\\nu \_{0},\\sigma \_{0}^{2})&\\sim I\\chi ^{2}(\\nu \_{0},\\sigma \_{0}^{2})=IG(\\nu \_{0}/2,\\nu \_{0}\\sigma \_{0}^{2}/2)\\end{aligned}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/bab8dee515d3208f73dd85d1cb46706e3a9097f9)

The update equations can be derived, and look as follows:

![{\\displaystyle {\\begin{aligned}{\\bar {x}}&={\\frac {1}{n}}\\sum \_{i=1}^{n}x\_{i}\\\\\\mu \_{0}'&={\\frac {n\_{0}\\mu \_{0}+n{\\bar {x}}}{n\_{0}+n}}\\\\n\_{0}'&=n\_{0}+n\\\\\\nu \_{0}'&=\\nu \_{0}+n\\\\\\nu \_{0}'{\\sigma \_{0}^{2}}'&=\\nu \_{0}\\sigma \_{0}^{2}+\\sum \_{i=1}^{n}(x\_{i}-{\\bar {x}})^{2}+{\\frac {n\_{0}n}{n\_{0}+n}}(\\mu \_{0}-{\\bar {x}})^{2}\\end{aligned}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/673b045d8322e2ce9e1ecc33c00585873b85547a)

The respective numbers of pseudo-observations are updated simply by adding the corresponding numbers of actual observations to them.
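In code, these update equations read as follows (a minimal sketch assuming NumPy; the function name is ours):

```python
import numpy as np

def update_mean_variance(x, mu0, n0, nu0, sigma2_0):
    """Normal-inverse-gamma conjugate update for unknown mean and variance."""
    n = len(x)
    xbar = x.mean()
    mu0_new = (n0 * mu0 + n * xbar) / (n0 + n)            # weighted by pseudo/actual counts
    n0_new = n0 + n                                       # pseudo-observations for the mean
    nu0_new = nu0 + n                                     # pseudo-observations for the variance
    ss = np.sum((x - xbar) ** 2)                          # deviations about the data mean
    interaction = n0 * n / (n0 + n) * (mu0 - xbar) ** 2   # prior-mean vs data-mean discrepancy
    sigma2_new = (nu0 * sigma2_0 + ss + interaction) / nu0_new
    return mu0_new, n0_new, nu0_new, sigma2_new
```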
The new mean hyperparameter is once again a weighted average, this time weighted by the relative numbers of observations. Finally, the update for ![{\\textstyle \\nu \_{0}'{\\sigma \_{0}^{2}}'}](https://wikimedia.org/api/rest_v1/media/math/render/svg/97dcd132cd10175d6ce232772d0c5c5964a9f195) is similar to the case with known mean, but in this case the sum of squared deviations is taken with respect to the observed data mean rather than the true mean, and as a result a new interaction term needs to be added to account for the additional error source stemming from the deviation between the prior mean and the data mean.

## Occurrence and applications \[[edit](https://en.wikipedia.org/w/index.php?title=Normal_distribution&action=edit&section=47 "Edit section: Occurrence and applications")\]

The occurrence of the normal distribution in practical problems can be loosely classified into four categories:

1. Exactly normal distributions;
2. Approximately normal laws, for example when such approximation is justified by the [central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem "Central limit theorem");
3. Distributions modeled as normal – the normal distribution being the distribution with [maximum entropy](https://en.wikipedia.org/wiki/Principle_of_maximum_entropy "Principle of maximum entropy") for a given mean and variance; and
4. Regression problems – the normal distribution being found after systematic effects have been modeled sufficiently well.

### Exact normality \[[edit](https://en.wikipedia.org/w/index.php?title=Normal_distribution&action=edit&section=48 "Edit section: Exact normality")\]

[![](https://upload.wikimedia.org/wikipedia/commons/b/bb/QHarmonicOscillator.png)](https://en.wikipedia.org/wiki/File:QHarmonicOscillator.png) The ground state of a [quantum harmonic oscillator](https://en.wikipedia.org/wiki/Quantum_harmonic_oscillator "Quantum harmonic oscillator") has the Gaussian distribution.

An exactly normal distribution arises in some [physical theories](https://en.wikipedia.org/wiki/Physical_theory "Physical theory"); for example, the probability density of the ground state of a quantum harmonic oscillator, pictured above, is Gaussian.

### Approximate normality \[[edit](https://en.wikipedia.org/w/index.php?title=Normal_distribution&action=edit&section=49 "Edit section: Approximate normality")\]

*Approximately* normal distributions occur in many situations, as explained by the [central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem "Central limit theorem"). When the outcome is produced by many small effects acting *additively and independently*, its distribution will be close to normal. The normal approximation will not be valid if the effects act multiplicatively (instead of additively), or if there is a single external influence that has a considerably larger magnitude than the rest of the effects.
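A small simulation makes the additive-versus-multiplicative distinction concrete (a sketch assuming NumPy; skewness near zero indicates approximate normality). The bullets below list real-world situations where these mechanisms arise.

```python
import numpy as np

rng = np.random.default_rng(2)
effects = rng.uniform(0.9, 1.1, size=(100_000, 50))   # many small independent effects

additive = effects.sum(axis=1)         # additive combination: nearly normal
multiplicative = effects.prod(axis=1)  # multiplicative combination: skewed (log-normal-like)

def skewness(v):
    z = (v - v.mean()) / v.std()
    return float((z ** 3).mean())      # ~0 for a normal distribution

print(f"additive skewness:       {skewness(additive):+.3f}")
print(f"multiplicative skewness: {skewness(multiplicative):+.3f}")
```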
- In counting problems, where the central limit theorem includes a discrete-to-continuum approximation and where [infinitely divisible](https://en.wikipedia.org/wiki/Infinitely_divisible "Infinitely divisible") and [decomposable](https://en.wikipedia.org/wiki/Indecomposable_distribution "Indecomposable distribution") distributions are involved, such as
  - [Binomial random variables](https://en.wikipedia.org/wiki/Binomial_distribution "Binomial distribution"), associated with binary response variables;
  - [Poisson random variables](https://en.wikipedia.org/wiki/Poisson_random_variables "Poisson random variables"), associated with rare events;
- [Thermal radiation](https://en.wikipedia.org/wiki/Thermal_radiation "Thermal radiation") has a [Bose–Einstein](https://en.wikipedia.org/wiki/Bose%E2%80%93Einstein_statistics "Bose–Einstein statistics") distribution on very short time scales, and a normal distribution on longer timescales due to the central limit theorem.

### Assumed normality \[[edit](https://en.wikipedia.org/w/index.php?title=Normal_distribution&action=edit&section=50 "Edit section: Assumed normality")\]

[![](https://upload.wikimedia.org/wikipedia/commons/thumb/4/40/Fisher_iris_versicolor_sepalwidth.svg/250px-Fisher_iris_versicolor_sepalwidth.svg.png)](https://en.wikipedia.org/wiki/File:Fisher_iris_versicolor_sepalwidth.svg) Histogram of sepal widths for *Iris versicolor* from Fisher's [Iris flower data set](https://en.wikipedia.org/wiki/Iris_flower_data_set "Iris flower data set"), with superimposed best-fitting normal distribution

> I can only recognize the occurrence of the normal curve – the Laplacian curve of errors – as a very abnormal phenomenon. It is roughly approximated to in certain distributions; for this reason, and on account of its beautiful simplicity, we may, perhaps, use it as a first approximation, particularly in theoretical investigations.
>
> — [Karl Pearson](https://en.wikipedia.org/wiki/Karl_Pearson "Karl Pearson")

There are statistical methods to empirically test that assumption; see the above [Normality tests](https://en.wikipedia.org/wiki/Normal_distribution#Normality_tests) section.

- In [biology](https://en.wikipedia.org/wiki/Biology "Biology"), the *logarithm* of various variables tends to have a normal distribution; that is, the variables tend to have a [log-normal distribution](https://en.wikipedia.org/wiki/Log-normal_distribution "Log-normal distribution") (after separation into male/female subpopulations), with examples including:
  - Measures of size of living tissue (length, height, skin area, weight);[\[62\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-62)
  - The *length* of *inert* appendages (hair, claws, nails, teeth) of biological specimens, *in the direction of growth*; presumably the thickness of tree bark also falls under this category;
  - Certain physiological measurements, such as blood pressure of adult humans.
- In finance, in particular the [Black–Scholes model](https://en.wikipedia.org/wiki/Black%E2%80%93Scholes_model "Black–Scholes model"), changes in the *logarithm* of exchange rates, price indices, and stock market indices are assumed normal (these variables behave like [compound interest](https://en.wikipedia.org/wiki/Compound_interest "Compound interest"), not like simple interest, and so are multiplicative).
Some mathematicians such as [Benoit Mandelbrot](https://en.wikipedia.org/wiki/Benoit_Mandelbrot "Benoit Mandelbrot") have argued that [log-Levy distributions](https://en.wikipedia.org/wiki/Levy_skew_alpha-stable_distribution "Levy skew alpha-stable distribution"), which possess [heavy tails](https://en.wikipedia.org/wiki/Heavy_tails "Heavy tails"), would be a more appropriate model, in particular for the analysis of [stock market crashes](https://en.wikipedia.org/wiki/Stock_market_crash "Stock market crash"). The assumption of a normal distribution in financial models has also been criticized by [Nassim Nicholas Taleb](https://en.wikipedia.org/wiki/Nassim_Nicholas_Taleb "Nassim Nicholas Taleb") in his works.

- [Measurement errors](https://en.wikipedia.org/wiki/Propagation_of_uncertainty "Propagation of uncertainty") in physical experiments are often modeled by a normal distribution. This use of a normal distribution does not imply that one is assuming the measurement errors are normally distributed; rather, using the normal distribution produces the most conservative predictions possible given only knowledge about the mean and variance of the errors.[\[63\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-63)
- In [standardized testing](https://en.wikipedia.org/wiki/Standardized_testing_\(statistics\) "Standardized testing (statistics)"), results can be made to have a normal distribution by either selecting the number and difficulty of questions (as in the [IQ test](https://en.wikipedia.org/wiki/Intelligence_quotient "Intelligence quotient")) or transforming the raw test scores into output scores by fitting them to the normal distribution. For example, the [SAT](https://en.wikipedia.org/wiki/SAT "SAT")'s traditional range of 200–800 is based on a normal distribution with a mean of 500 and a standard deviation of 100.

[![](https://upload.wikimedia.org/wikipedia/commons/thumb/d/d8/FitNormDistr.tif/lossless-page1-250px-FitNormDistr.tif.png)](https://en.wikipedia.org/wiki/File:FitNormDistr.tif) Fitted cumulative normal distribution to October rainfalls, see [distribution fitting](https://en.wikipedia.org/wiki/Distribution_fitting "Distribution fitting")

- Many scores are derived from the normal distribution, including [percentile ranks](https://en.wikipedia.org/wiki/Percentile_rank "Percentile rank") (percentiles or quantiles), [normal curve equivalents](https://en.wikipedia.org/wiki/Normal_curve_equivalent "Normal curve equivalent"), [stanines](https://en.wikipedia.org/wiki/Stanine "Stanine"), [z-scores](https://en.wikipedia.org/wiki/Z-scores "Z-scores"), and T-scores. Additionally, some [behavioral statistical](https://en.wikipedia.org/wiki/Psychological_statistics "Psychological statistics") procedures assume that scores are normally distributed; for example, [t-tests](https://en.wikipedia.org/wiki/T-tests "T-tests") and [ANOVAs](https://en.wikipedia.org/wiki/Analysis_of_variance "Analysis of variance"). [Bell curve grading](https://en.wikipedia.org/wiki/Bell_curve_grading "Bell curve grading") assigns relative grades based on a normal distribution of scores.
- In [hydrology](https://en.wikipedia.org/wiki/Hydrology "Hydrology"), the distribution of long duration river discharge or rainfall, e.g. monthly and yearly totals, is often thought to be practically normal according to the [central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem "Central limit theorem").[\[64\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-64) The plot above illustrates an example of fitting the normal distribution to ranked October rainfalls showing the 90% [confidence belt](https://en.wikipedia.org/wiki/Confidence_belt "Confidence belt") based on the [binomial distribution](https://en.wikipedia.org/wiki/Binomial_distribution "Binomial distribution"). The rainfall data are represented by [plotting positions](https://en.wikipedia.org/wiki/Plotting_position "Plotting position") as part of the [cumulative frequency analysis](https://en.wikipedia.org/wiki/Cumulative_frequency_analysis "Cumulative frequency analysis").

### Methodological problems and peer review \[[edit](https://en.wikipedia.org/w/index.php?title=Normal_distribution&action=edit&section=51 "Edit section: Methodological problems and peer review")\]

[John Ioannidis](https://en.wikipedia.org/wiki/John_Ioannidis "John Ioannidis") has [argued](https://en.wikipedia.org/wiki/Why_Most_Published_Research_Findings_Are_False "Why Most Published Research Findings Are False") that using normally distributed standard deviations as standards for validating research findings leaves [falsifiable predictions](https://en.wikipedia.org/wiki/Falsifiability "Falsifiability") about phenomena that are not normally distributed untested. These include, for example, phenomena that appear only when all necessary conditions are present, so that one condition cannot substitute for another in an addition-like way, as well as phenomena that are not randomly distributed. In his view, standard-deviation-centered validation gives a false appearance of validity to hypotheses and theories whose falsifiable predictions are only partly normally distributed, since the evidence against them may lie in the non-normally distributed part of the range of predictions; it may also baselessly dismiss hypotheses none of whose falsifiable predictions are normally distributed, as if they were unfalsifiable, when in fact they do make falsifiable predictions. Ioannidis argues that many cases of mutually exclusive theories being accepted as validated by research journals are caused by the journals' failure to take in empirical falsifications of non-normally distributed predictions, not because the mutually exclusive theories are true, which they cannot be, although two mutually exclusive theories can both be wrong and a third one correct.[\[65\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-65)

## Computational methods \[[edit](https://en.wikipedia.org/w/index.php?title=Normal_distribution&action=edit&section=52 "Edit section: Computational methods")\]

### Generating values from normal distribution \[[edit](https://en.wikipedia.org/w/index.php?title=Normal_distribution&action=edit&section=53 "Edit section: Generating values from normal distribution")\]

[![](https://upload.wikimedia.org/wikipedia/commons/thumb/2/21/Planche_de_Galton.jpg/250px-Planche_de_Galton.jpg)](https://en.wikipedia.org/wiki/File:Planche_de_Galton.jpg) The [bean machine](https://en.wikipedia.org/wiki/Bean_machine "Bean machine"), a device invented by [Francis Galton](https://en.wikipedia.org/wiki/Francis_Galton "Francis Galton"), can be called the first generator of normal random variables.
This machine consists of a vertical board with interleaved rows of pins. Small balls are dropped from the top and then bounce randomly left or right as they hit the pins. The balls are collected into bins at the bottom and settle down into a pattern resembling the Gaussian curve.

In computer simulations, especially in applications of the [Monte-Carlo method](https://en.wikipedia.org/wiki/Monte-Carlo_method "Monte-Carlo method"), it is often desirable to generate values that are normally distributed. The algorithms listed below all generate standard normal deviates, since a *N*(*ÎŒ*, *σ*2) variate can be generated as *X* = *ÎŒ* + *σZ*, where Z is standard normal. All these algorithms rely on the availability of a [random number generator](https://en.wikipedia.org/wiki/Random_number_generator "Random number generator") U capable of producing [uniform](https://en.wikipedia.org/wiki/Uniform_distribution_\(continuous\) "Uniform distribution (continuous)") random variates.

- The most straightforward method is based on the [probability integral transform](https://en.wikipedia.org/wiki/Probability_integral_transform "Probability integral transform") property: if U is distributed uniformly on (0,1), then *Ί*−1(*U*) will have the standard normal distribution. The drawback of this method is that it relies on calculation of the [probit function](https://en.wikipedia.org/wiki/Probit_function "Probit function") *Ί*−1, which cannot be expressed analytically. Some approximate methods are described in [Hart (1968)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFHart1968) and in the [erf](https://en.wikipedia.org/wiki/Error_function "Error function") article. Wichura gives a fast algorithm for computing this function to 16 decimal places,[\[66\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-66) which is used by [R](https://en.wikipedia.org/wiki/R_programming_language "R programming language") to compute random variates of the normal distribution.
- [An easy-to-program approximate approach](https://en.wikipedia.org/wiki/Irwin%E2%80%93Hall_distribution#Approximating_a_Normal_distribution "Irwin–Hall distribution") that relies on the [central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem "Central limit theorem") is as follows: generate 12 uniform *U*(0,1) deviates, add them all up, and subtract 6 – the resulting random variable will have an approximately standard normal distribution. In truth, the distribution will be [Irwin–Hall](https://en.wikipedia.org/wiki/Irwin%E2%80%93Hall_distribution "Irwin–Hall distribution"), which is a 12-section eleventh-order polynomial approximation to the normal distribution. This random deviate will have a limited range of (−6, 6).[\[67\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-67) Note that in a true normal distribution, only 0.00034% of all samples will fall outside ±6*σ*.
- The [Box–Muller method](https://en.wikipedia.org/wiki/Box%E2%80%93Muller_method "Box–Muller method") uses two independent random numbers U and V distributed [uniformly](https://en.wikipedia.org/wiki/Uniform_distribution_\(continuous\) "Uniform distribution (continuous)") on (0,1).
Then the two random variables X and Y ![{\\displaystyle X={\\sqrt {-2\\ln U}}\\,\\cos(2\\pi V),\\qquad Y={\\sqrt {-2\\ln U}}\\,\\sin(2\\pi V).}](https://wikimedia.org/api/rest_v1/media/math/render/svg/51fa20f18a8a5ed19c147db4686e7b15b6ca2e38) will both have the standard normal distribution, and will be [independent](https://en.wikipedia.org/wiki/Independence_\(probability_theory\) "Independence (probability theory)"). This formulation arises because for a [bivariate normal](https://en.wikipedia.org/wiki/Bivariate_normal "Bivariate normal") random vector (*X*, *Y*) the squared norm *X*2 + *Y*2 will have the [chi-squared distribution](https://en.wikipedia.org/wiki/Chi-squared_distribution "Chi-squared distribution") with two degrees of freedom, which is an easily generated [exponential random variable](https://en.wikipedia.org/wiki/Exponential_random_variable "Exponential random variable") corresponding to the quantity −2 ln(*U*) in these equations; and the angle is distributed uniformly around the circle, chosen by the random variable V.
- The [Marsaglia polar method](https://en.wikipedia.org/wiki/Marsaglia_polar_method "Marsaglia polar method") is a modification of the Box–Muller method which does not require computation of the sine and cosine functions. In this method, U and V are drawn from the uniform (−1,1) distribution, and then *S* = *U*2 + *V*2 is computed. If S is greater than or equal to 1, then the method starts over; otherwise, the two quantities ![{\\displaystyle X=U{\\sqrt {\\frac {-2\\ln S}{S}}},\\qquad Y=V{\\sqrt {\\frac {-2\\ln S}{S}}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/bdace1879c7c786ba946a60e5acb29f354d86796) are returned. Again, X and Y are independent, standard normal random variables. (Both this method and Box–Muller are sketched in code after this list.)
- The Ratio method[\[68\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-68) is a rejection method. The algorithm proceeds as follows:
  - Generate two independent uniform deviates U and V;
  - Compute *X* = √(8/*e*) (*V* − 0.5)/*U*;
  - Optional: if *X*2 ≀ 5 − 4*e*1/4*U* then accept X and terminate the algorithm;
  - Optional: if *X*2 ≄ 4*e*−1.35/*U* + 1.4 then reject X and start over from step 1;
  - If *X*2 ≀ −4 ln *U* then accept X; otherwise start over.
  
  The two optional steps allow the evaluation of the logarithm in the last step to be avoided in most cases. These steps can be greatly improved[\[69\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-69) so that the logarithm is rarely evaluated.
- The [ziggurat algorithm](https://en.wikipedia.org/wiki/Ziggurat_algorithm "Ziggurat algorithm")[\[70\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-70) is faster than the Box–Muller transform and still exact. In about 97% of all cases it uses only two random numbers, one random integer and one random uniform, one multiplication and an if-test. Only in 3% of the cases, where the combination of those two falls outside the "core of the ziggurat" (a kind of rejection sampling using logarithms), do exponentials and more uniform random numbers have to be employed.
- Integer arithmetic can be used to sample from the standard normal distribution.[\[71\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-71)[\[72\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-72) This method is exact in the sense that it satisfies the conditions of *ideal approximation*;[\[73\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-73) i.e., it is equivalent to sampling a real number from the standard normal distribution and rounding this to the nearest representable floating point number.
- There is also some investigation[\[74\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-74) into the connection between the fast [Hadamard transform](https://en.wikipedia.org/wiki/Hadamard_transform "Hadamard transform") and the normal distribution, since the transform employs just addition and subtraction and by the central limit theorem random numbers from almost any distribution will be transformed into the normal distribution. In this regard a series of Hadamard transforms can be combined with random permutations to turn arbitrary data sets into normally distributed data.
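As a concrete illustration of two of the methods above, here is a minimal sketch of the Box–Muller method and the Marsaglia polar method (assuming NumPy only as the uniform source; function names are ours):

```python
import numpy as np

rng = np.random.default_rng(3)

def box_muller():
    """Return one pair of independent standard normal deviates (Box-Muller)."""
    u = 1.0 - rng.random()                  # in (0, 1], avoids log(0)
    v = rng.random()
    r = np.sqrt(-2.0 * np.log(u))           # radius from a chi-squared(2) deviate
    return r * np.cos(2 * np.pi * v), r * np.sin(2 * np.pi * v)

def marsaglia_polar():
    """Return one pair of deviates by the polar method (no sin/cos calls)."""
    while True:
        u = 2.0 * rng.random() - 1.0
        v = 2.0 * rng.random() - 1.0
        s = u * u + v * v
        if 0.0 < s < 1.0:                   # rejection step: keep points inside the unit circle
            f = np.sqrt(-2.0 * np.log(s) / s)
            return u * f, v * f
```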
### Numerical approximations for the normal cumulative distribution function and normal quantile function \[[edit](https://en.wikipedia.org/w/index.php?title=Normal_distribution&action=edit&section=54 "Edit section: Numerical approximations for the normal cumulative distribution function and normal quantile function")\]

The standard normal [cumulative distribution function](https://en.wikipedia.org/wiki/Cumulative_distribution_function "Cumulative distribution function") is widely used in scientific and statistical computing. The values *Ί*(*x*) may be approximated very accurately by a variety of methods, such as [numerical integration](https://en.wikipedia.org/wiki/Numerical_integration "Numerical integration"), [Taylor series](https://en.wikipedia.org/wiki/Taylor_series "Taylor series"), [asymptotic series](https://en.wikipedia.org/wiki/Asymptotic_series "Asymptotic series") and [continued fractions](https://en.wikipedia.org/wiki/Gauss%27s_continued_fraction#Of_Kummer's_confluent_hypergeometric_function "Gauss's continued fraction"). Different approximations are used depending on the desired level of accuracy.

- [Zelen & Severo (1964)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFZelenSevero1964) give the approximation for *Ί*(*x*) for *x* > 0 with the absolute error \|*Δ*(*x*)\| < 7.5·10−8 (algorithm [26.2.17](https://secure.math.ubc.ca/~cbm/aands/page_932.htm)): ![{\\displaystyle \\Phi (x)=1-\\varphi (x)\\left(b\_{1}t+b\_{2}t^{2}+b\_{3}t^{3}+b\_{4}t^{4}+b\_{5}t^{5}\\right)+\\varepsilon (x),\\qquad t={\\frac {1}{1+b\_{0}x}},}](https://wikimedia.org/api/rest_v1/media/math/render/svg/202a295cd562d4d7404a1042e23f14b8d72be308) where *ϕ*(*x*) is the standard normal probability density function, and *b*0 = 0.2316419, *b*1 = 0.319381530, *b*2 = −0.356563782, *b*3 = 1.781477937, *b*4 = −1.821255978, *b*5 = 1.330274429.
- [Hart (1968)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFHart1968) lists dozens of approximations by means of rational functions, with or without exponentials, for the erfc() function, where erfc(x) = 1 - erf(x). His algorithms vary in the degree of complexity and the resulting precision, with a maximum absolute precision of 24 digits.
An algorithm by [West (2009)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFWest2009) combines Hart's algorithm 5666 with a [continued fraction](https://en.wikipedia.org/wiki/Continued_fraction "Continued fraction") approximation in the tail to provide a fast computation algorithm with 16-digit precision. - [Cody (1969)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFCody1969), after recalling the Hart68 solution is not suited for erf, gave a solution for both erf and erfc, with maximal relative error bound, via [Rational Chebyshev Approximation](https://en.wikipedia.org/wiki/Rational_function "Rational function"). - [Marsaglia (2004)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFMarsaglia2004) suggested a simple algorithm[\[note 1\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-75) based on the Taylor series expansion ![{\\displaystyle \\Phi (x)={\\frac {1}{2}}+\\varphi (x)\\left(x+{\\frac {x^{3}}{3}}+{\\frac {x^{5}}{3\\cdot 5}}+{\\frac {x^{7}}{3\\cdot 5\\cdot 7}}+{\\frac {x^{9}}{3\\cdot 5\\cdot 7\\cdot 9}}+\\cdots \\right)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/ca45895a9095ca37f734f18a83481576ba4c5a49) for calculating *Ί*(*x*) with arbitrary precision. The drawback of this algorithm is comparatively slow calculation time (for example it takes over 300 iterations to calculate the function with 16 digits of precision when *x* = 10). - The [GNU Scientific Library](https://en.wikipedia.org/wiki/GNU_Scientific_Library "GNU Scientific Library") calculates values of the standard normal cumulative distribution function using Hart's algorithms and approximations with [Chebyshev polynomials](https://en.wikipedia.org/wiki/Chebyshev_polynomial "Chebyshev polynomial"). - [Dia (2023)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFDia2023) proposes the following approximation of ![{\\textstyle 1-\\Phi }](https://wikimedia.org/api/rest_v1/media/math/render/svg/80cf15e7b1c8138c3b5cc37f31168f914c7d6621) with a maximum relative error less than ![{\\textstyle 2^{-53}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/208228d226c31bceb0c8aeadfab59460f19a157e) ![{\\textstyle \\left(\\approx 1.1\\times 10^{-16}\\right)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/191b9343ac891e152b2093e8b6ac149e367dbf58) in absolute value: for ![{\\textstyle x\\geq 0}](https://wikimedia.org/api/rest_v1/media/math/render/svg/c434fc1e2ab777786469de853c75e616007b3eb4)![{\\textstyle {\\begin{aligned}1-\\Phi \\left(x\\right)&=\\left({\\frac {0.39894228040143268}{x+2.92678600515804815}}\\right)\\left({\\frac {x^{2}+8.42742300458043240x+18.38871225773938487}{x^{2}+5.81582518933527391x+8.97280659046817350}}\\right)\\\\&\\left({\\frac {x^{2}+7.30756258553673541x+18.25323235347346525}{x^{2}+5.70347935898051437x+10.27157061171363079}}\\right)\\left({\\frac {x^{2}+5.66479518878470765x+18.61193318971775795}{x^{2}+5.51862483025707963x+12.72323261907760928}}\\right)\\\\&\\left({\\frac {x^{2}+4.91396098895240075x+24.14804072812762821}{x^{2}+5.26184239579604207x+16.88639562007936908}}\\right)\\left({\\frac {x^{2}+3.83362947800146179x+11.61511226260603247}{x^{2}+4.92081346632882033x+24.12333774572479110}}\\right)e^{-{\\frac {x^{2}}{2}}}\\end{aligned}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/0f9a049b86b4971707745b5c6ba2e40ae4e25205) and for ![{\\textstyle x\<0}](https://wikimedia.org/api/rest_v1/media/math/render/svg/11dbe80785d8f5d86eb8e91c35b6f3003f8d2838), ![{\\displaystyle 1-\\Phi \\left(x\\right)=1-\\left(1-\\Phi 
\\left(-x\\right)\\right)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/2e12b409099845b3057fa7bdb2d9b84c6cacf73a)

Shore (1982) introduced simple approximations that may be incorporated in stochastic optimization models of engineering and operations research, like reliability engineering and inventory analysis. Denoting *p* = *Ί*(*z*), the simplest approximation for the quantile function is:

![{\\displaystyle z=\\Phi ^{-1}(p)=5.5556\\left\[1-\\left({\\frac {1-p}{p}}\\right)^{0.1186}\\right\],\\qquad p\\geq 1/2}](https://wikimedia.org/api/rest_v1/media/math/render/svg/5f2df7f1427d0c90d075faef38f4f5ab7acce5c9)

This approximation delivers a maximum absolute error of 0.026 for z (for 0.5 ≀ *p* ≀ 0.9999, corresponding to 0 ≀ *z* ≀ 3.719). For *p* < 1/2 replace p by 1 − *p* and change the sign. Another approximation, somewhat less accurate, is the single-parameter approximation:

![{\\displaystyle z=-0.4115\\left\\{{\\frac {1-p}{p}}+\\log \\left\[{\\frac {1-p}{p}}\\right\]-1\\right\\},\\qquad p\\geq 1/2}](https://wikimedia.org/api/rest_v1/media/math/render/svg/e1edea9f990058f741db6735799c8b40999b833b)

The latter has served to derive a simple approximation for the loss integral of the normal distribution, defined by

![{\\displaystyle {\\begin{aligned}L(z)&=\\int \_{z}^{\\infty }(u-z)\\varphi (u)\\,du=\\int \_{z}^{\\infty }\[1-\\Phi (u)\]\\,du\\\\\[5pt\]L(z)&\\approx {\\begin{cases}0.4115\\left({\\dfrac {p}{1-p}}\\right)-z,\&p\<1/2,\\\\\\\\0.4115\\left({\\dfrac {1-p}{p}}\\right),\&p\\geq 1/2.\\end{cases}}\\\\\[5pt\]{\\text{or, equivalently,}}\\\\L(z)&\\approx {\\begin{cases}0.4115\\left\\{1-\\log \\left\[{\\frac {p}{1-p}}\\right\]\\right\\},\&p\<1/2,\\\\\\\\0.4115{\\dfrac {1-p}{p}},\&p\\geq 1/2.\\end{cases}}\\end{aligned}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/e4b69fa586cffdfbbd40a94c65629726e4ae78bf)

This approximation is particularly accurate for the far right tail (maximum error of 10−3 for *z* ≄ 1.4). Highly accurate approximations for the cumulative distribution function, based on [Response Modeling Methodology](https://en.wikipedia.org/wiki/Response_Modeling_Methodology "Response Modeling Methodology") (RMM, Shore, 2011, 2012), are shown in Shore (2005). Further approximations can be found at [Error function#Approximation with elementary functions](https://en.wikipedia.org/wiki/Error_function#Approximation_with_elementary_functions "Error function"). In particular, a small *relative* error on the whole domain for the cumulative distribution function ![{\\displaystyle \\Phi }](https://wikimedia.org/api/rest_v1/media/math/render/svg/aed80a2011a3912b028ba32a52dfa57165455f24) and the quantile function ![{\\textstyle \\Phi ^{-1}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/bd21c29efa71343458e18c6c3bdd7a1005cafa0d) as well is achieved via an explicitly invertible formula by Sergei Winitzki in 2008.

## History

### Development

Some authors[\[75\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-76)[\[76\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-77) attribute the discovery of the normal distribution to [de Moivre](https://en.wikipedia.org/wiki/De_Moivre "De Moivre"), who in 1738[\[note 2\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-78) published in the second edition of his *[The Doctrine of Chances](https://en.wikipedia.org/wiki/The_Doctrine_of_Chances "The Doctrine of Chances")* the study of the coefficients in the [binomial expansion](https://en.wikipedia.org/wiki/Binomial_expansion "Binomial expansion") of (*a* + *b*)*n*.
De Moivre proved that the middle term in this expansion has the approximate magnitude of ![{\\textstyle 2^{n}/{\\sqrt {2\\pi n}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/ae9fee018963a079bb837482314ae6b1533a3a19), and that "If m or ⁠1/2⁠*n* be a Quantity infinitely great, then the Logarithm of the Ratio, which a Term distant from the middle by the Interval ℓ, has to the middle Term, is ![{\\textstyle -{\\frac {2\\ell \\ell }{n}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/d5327ed8841e09d62970ee806553294cdfe96e9e)."[\[77\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-79) Although this theorem can be interpreted as the first obscure expression for the normal probability law, [Stigler](https://en.wikipedia.org/wiki/Stephen_Stigler "Stephen Stigler") points out that de Moivre himself did not interpret his results as anything more than the approximate rule for the binomial coefficients, and in particular de Moivre lacked the concept of the probability density function.[\[78\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-80) [![](https://upload.wikimedia.org/wikipedia/commons/thumb/9/9b/Carl_Friedrich_Gauss.jpg/250px-Carl_Friedrich_Gauss.jpg)](https://en.wikipedia.org/wiki/File:Carl_Friedrich_Gauss.jpg) In 1809, [Carl Friedrich Gauss](https://en.wikipedia.org/wiki/Carl_Friedrich_Gauss "Carl Friedrich Gauss") showed that the normal distribution provides a way to rationalize the [method of least squares](https://en.wikipedia.org/wiki/Method_of_least_squares "Method of least squares"). In 1823 [Gauss](https://en.wikipedia.org/wiki/Gauss "Gauss") published his monograph "*Theoria combinationis observationum erroribus minimis obnoxiae*" where among other things he introduces several important statistical concepts, such as the [method of least squares](https://en.wikipedia.org/wiki/Method_of_least_squares "Method of least squares"), the [method of maximum likelihood](https://en.wikipedia.org/wiki/Method_of_maximum_likelihood "Method of maximum likelihood"), and the *normal distribution*. Gauss used M, *M*â€Č, *M*″, ... to denote the measurements of some unknown quantity V, and sought the most probable estimator of that quantity: the one that maximizes the probability *φ*(*M* − *V*) · *φ*(*M*â€Č − *V*) · *φ*(*M*″ − *V*) · ... of obtaining the observed experimental results. In his notation φΔ is the probability density function of the measurement errors of magnitude Δ. Not knowing what the function φ is, Gauss requires that his method should reduce to the well-known answer: the arithmetic mean of the measured values.[\[note 3\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-81) Starting from these principles, Gauss demonstrates that the only law that rationalizes the choice of arithmetic mean as an estimator of the location parameter, is the normal law of errors:[\[79\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-82) ![{\\displaystyle \\varphi {\\mathit {\\Delta }}={\\frac {h}{\\surd \\pi }}\\,e^{-\\mathrm {hh} \\Delta \\Delta },}](https://wikimedia.org/api/rest_v1/media/math/render/svg/c45300f5e3b84f9d3571c95d621dc76c4097b4b3) where h is "the measure of the precision of the observations". 
Using this normal law as a generic model for errors in the experiments, Gauss formulates what is now known as the [non-linear](https://en.wikipedia.org/wiki/Non-linear_least_squares "Non-linear least squares") [weighted least squares](https://en.wikipedia.org/wiki/Weighted_least_squares "Weighted least squares") method.[\[80\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-83)

(Portrait: [Pierre-Simon Laplace](https://en.wikipedia.org/wiki/Pierre-Simon_Laplace "Pierre-Simon Laplace"), who proved the [central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem "Central limit theorem") in 1810, consolidating the importance of the normal distribution in statistics.)

Although Gauss was the first to suggest the normal distribution law, [Laplace](https://en.wikipedia.org/wiki/Laplace "Laplace") made significant contributions.[\[note 4\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-84) It was Laplace who first posed the problem of aggregating several observations in 1774,[\[81\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-85) although his own solution led to the [Laplacian distribution](https://en.wikipedia.org/wiki/Laplacian_distribution "Laplacian distribution"). It was Laplace who first calculated the value of the integral [$\int e^{-t^{2}}\,dt=\sqrt{\pi}$](https://en.wikipedia.org/wiki/Gaussian_integral "Gaussian integral") in 1782, providing the normalization constant for the normal distribution.[\[82\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-86) For this accomplishment, Gauss acknowledged the priority of Laplace.[\[83\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-87) Finally, it was Laplace who in 1810 proved and presented to the academy the fundamental [central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem "Central limit theorem"), which emphasized the theoretical importance of the normal distribution.[\[84\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-88)

It is of interest to note that in 1809 the Irish-American mathematician [Robert Adrain](https://en.wikipedia.org/wiki/Robert_Adrain "Robert Adrain") published two insightful but flawed derivations of the normal probability law, simultaneously and independently of Gauss.[\[85\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-89) His works remained largely unnoticed by the scientific community until 1871, when they were exhumed by [Abbe](https://en.wikipedia.org/wiki/Cleveland_Abbe "Cleveland Abbe").[\[86\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-90)

In the middle of the 19th century [Maxwell](https://en.wikipedia.org/wiki/James_Clerk_Maxwell "James Clerk Maxwell") demonstrated that the normal distribution is not just a convenient mathematical tool, but may also occur in natural phenomena:[\[59\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-FOOTNOTEMaxwell186023-59)

> The number of particles whose velocity, resolved in a certain direction, lies between *x* and *x* + *dx* is
>
> $$\operatorname{N}\,\frac{1}{\alpha\,\sqrt{\pi}}\,e^{-\frac{x^{2}}{\alpha^{2}}}\,dx$$

(In modern notation, this is a normal density with mean 0 and variance $\alpha^{2}/2$.)

Naming

Today, the concept is usually known in English as the **normal distribution** or **Gaussian distribution**.
Other less common names include Gauss distribution, Laplace–Gauss distribution, the law of error, the law of facility of errors, Laplace's second law, and Gaussian law. Gauss himself apparently coined the term with reference to the "normal equations" involved in its applications, with *normal* having its technical meaning of orthogonal rather than usual.[\[87\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-91) However, by the end of the 19th century some authors[\[note 5\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-92) had started using the name *normal distribution*, where the word "normal" was used as an adjective – the term now being seen as a reflection of the fact that this distribution was regarded as typical or common – and thus normal. [Peirce](https://en.wikipedia.org/wiki/Charles_Sanders_Peirce "Charles Sanders Peirce") (one of those authors) once defined "normal" thus: "... the 'normal' is not the average (or any other kind of mean) of what actually occurs, but of what *would*, in the long run, occur under certain circumstances."[\[88\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-93) Around the turn of the 20th century [Pearson](https://en.wikipedia.org/wiki/Karl_Pearson "Karl Pearson") popularized the term *normal* as a designation for this distribution.[\[89\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-94)

> Many years ago I called the Laplace–Gaussian curve the *normal* curve, which name, while it avoids an international question of priority, has the disadvantage of leading people to believe that all other distributions of frequency are in one sense or another 'abnormal'.

Also, it was Pearson who first wrote the distribution in terms of the standard deviation *σ* as in modern notation. Soon after this, in 1915, [Fisher](https://en.wikipedia.org/wiki/Ronald_Fisher "Ronald Fisher") added the location parameter to the formula for the normal distribution, expressing it in the way it is written nowadays:

$$df=\frac{1}{\sqrt{2\sigma^{2}\pi}}\,e^{-(x-m)^{2}/(2\sigma^{2})}\,dx.$$

The term *standard normal distribution*, which denotes the normal distribution with zero mean and unit variance, came into general use around the 1950s, appearing in the popular textbooks by P. G. Hoel (1947), *Introduction to Mathematical Statistics*, and [Alexander M. Mood](https://en.wikipedia.org/wiki/Alexander_M._Mood "Alexander M. Mood") (1950), *Introduction to the Theory of Statistics*.[\[90\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-95)[\[91\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-96)[\[92\]](https://en.wikipedia.org/wiki/Normal_distribution#cite_note-97)

Notes

1. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-75)** For example, this algorithm is given in the article [Bc programming language](https://en.wikipedia.org/wiki/Bc_programming_language#A_translated_C_function "Bc programming language"). 2. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-78)** De Moivre first published his findings in 1733, in a pamphlet *Approximatio ad Summam Terminorum Binomii (a + b)^n in Seriem Expansi* that was designated for private circulation only. But it was not until the year 1738 that he made his results publicly available.
The original pamphlet was reprinted several times, see for example [Walker (1985)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFWalker1985). 3. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-81)** "It has been customary certainly to regard as an axiom the hypothesis that if any quantity has been determined by several direct observations, made under the same circumstances and with equal care, the arithmetical mean of the observed values affords the most probable value, if not rigorously, yet very nearly at least, so that it is always most safe to adhere to it." — [Gauss (1809](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFGauss1809), section 177) 4. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-84)** "My custom of terming the curve the Gauss–Laplacian or *normal* curve saves us from proportioning the merit of discovery between the two great astronomer mathematicians." quote from [Pearson (1905](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFPearson1905), p. 189) 5. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-92)** Besides those specifically referenced here, such use is encountered in the works of [Peirce](https://en.wikipedia.org/wiki/Charles_Sanders_Peirce "Charles Sanders Peirce"), [Galton](https://en.wikipedia.org/wiki/Galton "Galton") ([Galton (1889](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFGalton1889), chapter V)) and [Lexis](https://en.wikipedia.org/wiki/Wilhelm_Lexis "Wilhelm Lexis") ([Lexis (1878)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFLexis1878), [Rohrbasser & VĂ©ron (2003)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFRohrbasserV%C3%A9ron2003)) c. 1875.\[*[citation needed](https://en.wikipedia.org/wiki/Wikipedia:Citation_needed "Wikipedia:Citation needed")*\] 1. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Norton-2019_1-0)** Norton, Matthew; Khokhlov, Valentyn; Uryasev, Stan (2019). ["Calculating CVaR and bPOE for common probability distributions with application to portfolio optimization and density estimation"](https://web.archive.org/web/20230331230821/http://uryasev.ams.stonybrook.edu/wp-content/uploads/2019/10/Norton2019_CVaR_bPOE.pdf) (PDF). *Annals of Operations Research*. **299** (1–2\). Springer: 1281–1315\. [arXiv](https://en.wikipedia.org/wiki/ArXiv_\(identifier\) "ArXiv (identifier)"):[1811\.11301](https://arxiv.org/abs/1811.11301). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1007/s10479-019-03373-1](https://doi.org/10.1007%2Fs10479-019-03373-1). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [254231768](https://api.semanticscholar.org/CorpusID:254231768). Archived from [the original](http://uryasev.ams.stonybrook.edu/wp-content/uploads/2019/10/Norton2019_CVaR_bPOE.pdf) (PDF) on March 31, 2023. Retrieved February 27, 2023. 2. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-The_Joy_of_Finite_Mathematics_2-0)** Tsokos, Chris; Wooten, Rebecca (January 1, 2016). Tsokos, Chris; Wooten, Rebecca (eds.). [*The Joy of Finite Mathematics*](https://linkinghub.elsevier.com/retrieve/pii/B9780128029671000073). Boston: Academic Press. pp. 231–263\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1016/b978-0-12-802967-1.00007-3](https://doi.org/10.1016%2Fb978-0-12-802967-1.00007-3). 
[ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-12-802967-1](https://en.wikipedia.org/wiki/Special:BookSources/978-0-12-802967-1 "Special:BookSources/978-0-12-802967-1") . 3. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Mathematics_for_Physical_Science_and_Engineering_3-0)** Harris, Frank E. (January 1, 2014). Harris, Frank E. (ed.). [*Mathematics for Physical Science and Engineering*](https://linkinghub.elsevier.com/retrieve/pii/B9780128010006000183). Boston: Academic Press. pp. 663–709\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1016/b978-0-12-801000-6.00018-3](https://doi.org/10.1016%2Fb978-0-12-801000-6.00018-3). [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-12-801000-6](https://en.wikipedia.org/wiki/Special:BookSources/978-0-12-801000-6 "Special:BookSources/978-0-12-801000-6") . 4. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-4)** [Hoel (1947](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFHoel1947), [p. 31](https://archive.org/details/in.ernet.dli.2015.263186/page/n39/mode/2up?q=%22normal+distribution%22)) and [Mood (1950](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFMood1950), [p. 109](https://archive.org/details/introductiontoth0000alex/page/108/mode/2up?q=%22normal+distribution%22)) give this definition with slightly different notation. 5. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-5)** [*Normal Distribution*](http://www.encyclopedia.com/topic/Normal_Distribution.aspx#3), Gale Encyclopedia of Psychology 6. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-6)** [Casella & Berger (2001](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFCasellaBerger2001), p. 102) 7. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-7)** Lyon, A. (2014). [Why are Normal Distributions Normal?](https://aidanlyon.com/normal_distributions.pdf), The British Journal for the Philosophy of Science. 8. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-8)** Jorge, Nocedal; Stephan, J. Wright (2006). *Numerical Optimization* (2nd ed.). Springer. p. 249. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0387-30303-1](https://en.wikipedia.org/wiki/Special:BookSources/978-0387-30303-1 "Special:BookSources/978-0387-30303-1") . 9. ^ [***a***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-www.mathsisfun.com_9-0) [***b***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-www.mathsisfun.com_9-1) ["Normal Distribution"](https://www.mathsisfun.com/data/standard-normal-distribution.html). *www.mathsisfun.com*. Retrieved August 15, 2020. 10. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-10)** ["bell curve"](https://www.merriam-webster.com/dictionary/bell%20curve). *Merriam-Webster.com Dictionary*. Retrieved May 25, 2025. 11. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-11)** [Mood (1950](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFMood1950), [p. 112](https://archive.org/details/introductiontoth0000alex/page/112/mode/2up?q=%22standard+normal+distribution%22)) explicitly defines the *standard normal distribution*. In contrast, [Hoel (1947)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFHoel1947) explicitly defines the *standard normal curve* [(p. 
33)](https://archive.org/details/in.ernet.dli.2015.263186/page/n41/mode/2up?q=%22standard+normal+curve%22) and introduces the term *standard normal distribution* [(p. 69)](https://archive.org/details/in.ernet.dli.2015.263186/page/n77/mode/2up?q=%22standard+normal+distribution%22). 12. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-12)** [Stigler (1982)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFStigler1982) 13. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-13)** [Halperin, Hartley & Hoel (1965](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFHalperinHartleyHoel1965), item 7) 14. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-14)** [McPherson (1990](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFMcPherson1990), p. 110) 15. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-15)** [Bernardo & Smith (2000](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFBernardoSmith2000), p. 121) 16. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-KunIlPark_16-0)** Park, Kun Il (2018). *Fundamentals of Probability and Stochastic Processes with Applications to Communications*. Springer. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-3-319-68074-3](https://en.wikipedia.org/wiki/Special:BookSources/978-3-319-68074-3 "Special:BookSources/978-3-319-68074-3") . 17. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-17)** Scott, Clayton; Nowak, Robert (August 7, 2003). ["The Q-function"](http://cnx.org/content/m11537/1.2/). *Connexions*. 18. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-18)** Barak, Ohad (April 6, 2006). ["Q Function and Error Function"](https://web.archive.org/web/20090325160012/http://www.eng.tau.ac.il/~jo/academic/Q.pdf) (PDF). Tel Aviv University. Archived from [the original](http://www.eng.tau.ac.il/~jo/academic/Q.pdf) (PDF) on March 25, 2009. 19. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-19)** [Weisstein, Eric W.](https://en.wikipedia.org/wiki/Eric_W._Weisstein "Eric W. Weisstein") ["Normal Distribution Function"](https://mathworld.wolfram.com/NormalDistributionFunction.html). *[MathWorld](https://en.wikipedia.org/wiki/MathWorld "MathWorld")*. 20. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-20)** [Abramowitz, Milton](https://en.wikipedia.org/wiki/Milton_Abramowitz "Milton Abramowitz"); [Stegun, Irene Ann](https://en.wikipedia.org/wiki/Irene_Stegun "Irene Stegun"), eds. (1983) \[June 1964\]. ["Chapter 26, eqn 26.2.12"](http://www.math.ubc.ca/~cbm/aands/page_932.htm). [*Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables*](https://en.wikipedia.org/wiki/Abramowitz_and_Stegun "Abramowitz and Stegun"). Applied Mathematics Series. Vol. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. p. 932. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-486-61272-0](https://en.wikipedia.org/wiki/Special:BookSources/978-0-486-61272-0 "Special:BookSources/978-0-486-61272-0") . [LCCN](https://en.wikipedia.org/wiki/LCCN_\(identifier\) "LCCN (identifier)") [64-60036](https://lccn.loc.gov/64-60036). 
[MR](https://en.wikipedia.org/wiki/MR_\(identifier\) "MR (identifier)") [0167642](https://mathscinet.ams.org/mathscinet-getitem?mr=0167642). [LCCN](https://en.wikipedia.org/wiki/LCCN_\(identifier\) "LCCN (identifier)") [65-12253](https://www.loc.gov/item/65012253). 21. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-duff_21-0)** Duff, Michael (2003). "Normal Distribution Algorithms". *The Mathematical Gazette*. **87** (509): 331–336\. [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [3621062](https://www.jstor.org/stable/3621062). 22. ^ [***a***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-kendall_22-0) [***b***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-kendall_22-1) Stuart, Alan; Ord, J. Keith (1987). ["The normal d.f."](https://archive.org/details/kendallsadvanced0001kend/page/183/mode/1up). *Kendall's Advanced Theory of Statistics*. Vol. 1: Distribution Theory. originally by [Maurice Kendall](https://en.wikipedia.org/wiki/Maurice_Kendall "Maurice Kendall") (5th ed.). Charles Griffin & Co. § 5\.37, pp. 183–185. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [0-85264-285-7](https://en.wikipedia.org/wiki/Special:BookSources/0-85264-285-7 "Special:BookSources/0-85264-285-7") . 23. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-23)** Vaart, A. W. van der (October 13, 1998). [*Asymptotic Statistics*](https://dx.doi.org/10.1017/cbo9780511802256). Cambridge University Press. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1017/cbo9780511802256](https://doi.org/10.1017%2Fcbo9780511802256). [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-511-80225-6](https://en.wikipedia.org/wiki/Special:BookSources/978-0-511-80225-6 "Special:BookSources/978-0-511-80225-6") . 24. ^ [***a***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-FOOTNOTECoverThomas2006254_24-0) [***b***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-FOOTNOTECoverThomas2006254_24-1) [Cover & Thomas (2006)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFCoverThomas2006), p. 254. 25. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-25)** Park, Sung Y.; Bera, Anil K. (2009). ["Maximum Entropy Autoregressive Conditional Heteroskedasticity Model"](https://web.archive.org/web/20160307144515/http://wise.xmu.edu.cn/uploadfiles/paper-masterdownload/2009519932327055475115776.pdf) (PDF). *Journal of Econometrics*. **150** (2): 219–230\. [Bibcode](https://en.wikipedia.org/wiki/Bibcode_\(identifier\) "Bibcode (identifier)"):[2009JEcon.150..219P](https://ui.adsabs.harvard.edu/abs/2009JEcon.150..219P). [CiteSeerX](https://en.wikipedia.org/wiki/CiteSeerX_\(identifier\) "CiteSeerX (identifier)") [10\.1.1.511.9750](https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.511.9750). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1016/j.jeconom.2008.12.014](https://doi.org/10.1016%2Fj.jeconom.2008.12.014). Archived from [the original](http://www.wise.xmu.edu.cn/Master/Download/..%5C..%5CUploadFiles%5Cpaper-masterdownload%5C2009519932327055475115776.pdf) (PDF) on March 7, 2016. Retrieved June 2, 2011. 26. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Geary_RC_26-0)** Geary RC(1936) The distribution of the "Student's ratio for the non-normal samples". Supplement to the Journal of the Royal Statistical Society 3 (2): 178–184 27. 
**[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-27)** [Lukacs, Eugene](https://en.wikipedia.org/wiki/Eugene_Lukacs "Eugene Lukacs") (March 1942). ["A Characterization of the Normal Distribution"](https://archive.org/details/dli.ernet.4125/page/91). *[Annals of Mathematical Statistics](https://en.wikipedia.org/wiki/Annals_of_Mathematical_Statistics "Annals of Mathematical Statistics")*. **13** (1): 91–93\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1214/AOMS/1177731647](https://doi.org/10.1214%2FAOMS%2F1177731647). [ISSN](https://en.wikipedia.org/wiki/ISSN_\(identifier\) "ISSN (identifier)") [0003-4851](https://search.worldcat.org/issn/0003-4851). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2236166](https://www.jstor.org/stable/2236166). [MR](https://en.wikipedia.org/wiki/MR_\(identifier\) "MR (identifier)") [0006626](https://mathscinet.ams.org/mathscinet-getitem?mr=0006626). [Zbl](https://en.wikipedia.org/wiki/Zbl_\(identifier\) "Zbl (identifier)") [0060\.28509](https://zbmath.org/?format=complete&q=an:0060.28509). [Wikidata](https://en.wikipedia.org/wiki/WDQ_\(identifier\) "WDQ (identifier)") [Q55897617](https://www.wikidata.org/wiki/Q55897617 "d:Q55897617"). 28. ^ [***a***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Patel_28-0) [***b***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Patel_28-1) [***c***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Patel_28-2) [Patel & Read (1996](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFPatelRead1996), \[2.1.4\]) 29. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-29)** [Fan (1991](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFFan1991), p. 1258) 30. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-30)** [Patel & Read (1996](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFPatelRead1996), \[2.1.8\]) 31. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-31)** Papoulis, Athanasios. *Probability, Random Variables and Stochastic Processes* (4th ed.). p. 148. 32. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-32)** Winkelbauer, Andreas (2012). "Moments and Absolute Moments of the Normal Distribution". [arXiv](https://en.wikipedia.org/wiki/ArXiv_\(identifier\) "ArXiv (identifier)"):[1209\.4340](https://arxiv.org/abs/1209.4340) \[[math.ST](https://arxiv.org/archive/math.ST)\]. 33. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-33)** [Bryc (1995](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFBryc1995), p. 23) 34. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-34)** [Bryc (1995](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFBryc1995), p. 24) 35. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-35)** Williams, David (2001). [*Weighing the odds : a course in probability and statistics*](https://archive.org/details/weighingoddscour00will) (Reprinted. ed.). Cambridge \[u.a.\]: Cambridge Univ. Press. pp. [197](https://archive.org/details/weighingoddscour00will/page/n219)–199. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-521-00618-7](https://en.wikipedia.org/wiki/Special:BookSources/978-0-521-00618-7 "Special:BookSources/978-0-521-00618-7") . 36. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-36)** JosĂ© M. Bernardo; Adrian F. M. Smith (2000). 
[*Bayesian theory*](https://archive.org/details/bayesiantheory00bern_963) (Reprint ed.). Chichester \[u.a.\]: Wiley. pp. [209](https://archive.org/details/bayesiantheory00bern_963/page/n224), 366. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-471-49464-5](https://en.wikipedia.org/wiki/Special:BookSources/978-0-471-49464-5 "Special:BookSources/978-0-471-49464-5") . 37. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-37)** O'Hagan, A. (1994) *Kendall's Advanced Theory of statistics, Vol 2B, Bayesian Inference*, Edward Arnold. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [0-340-52922-9](https://en.wikipedia.org/wiki/Special:BookSources/0-340-52922-9 "Special:BookSources/0-340-52922-9") (Section 5.40) 38. ^ [***a***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Bryc_1995_35_38-0) [***b***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Bryc_1995_35_38-1) [Bryc (1995](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFBryc1995), p. 35) 39. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-39)** [UIUC, Lecture 21. *The Multivariate Normal Distribution*](http://www.math.uiuc.edu/~r-ash/Stat/StatLec21-25.pdf), 21.6:"Individually Gaussian Versus Jointly Gaussian". 40. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-40)** Edward L. Melnick and Aaron Tenenbein, "Misspecifications of the Normal Distribution", *[The American Statistician](https://en.wikipedia.org/wiki/The_American_Statistician "The American Statistician")*, volume 36, number 4 November 1982, pages 372–373 41. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-41)** ["Kullback Leibler (KL) Distance of Two Normal (Gaussian) Probability Distributions"](http://www.allisons.org/ll/MML/KL/Normal/). *Allisons.org*. December 5, 2007. Retrieved March 3, 2017. 42. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-42)** Jordan, Michael I. (February 8, 2010). ["Stat260: Bayesian Modeling and Inference: The Conjugate Prior for the Normal Distribution"](http://www.cs.berkeley.edu/~jordan/courses/260-spring10/lectures/lecture5.pdf) (PDF). 43. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-43)** [Amari & Nagaoka (2000)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFAmariNagaoka2000) 44. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-44)** ["Expectation of the maximum of gaussian random variables"](https://math.stackexchange.com/a/89147). *Mathematics Stack Exchange*. Retrieved April 7, 2024. 45. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-45)** ["Normal Approximation to Poisson Distribution"](http://www.stat.ucla.edu/~dinov/courses_students.dir/Applets.dir/NormalApprox2PoissonApplet.html). *Stat.ucla.edu*. Retrieved March 3, 2017. 46. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-46)** [Bryc (1995](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFBryc1995), p. 27) 47. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-47)** Weisstein, Eric W. ["Normal Product Distribution"](http://mathworld.wolfram.com/NormalProductDistribution.html). *MathWorld*. wolfram.com. 48. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-48)** Lukacs, Eugene (1942). ["A Characterization of the Normal Distribution"](https://doi.org/10.1214%2Faoms%2F1177731647). 
*[The Annals of Mathematical Statistics](https://en.wikipedia.org/wiki/The_Annals_of_Mathematical_Statistics "The Annals of Mathematical Statistics")*. **13** (1): 91–3\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1214/aoms/1177731647](https://doi.org/10.1214%2Faoms%2F1177731647). [ISSN](https://en.wikipedia.org/wiki/ISSN_\(identifier\) "ISSN (identifier)") [0003-4851](https://search.worldcat.org/issn/0003-4851). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2236166](https://www.jstor.org/stable/2236166). 49. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-49)** Basu, D.; Laha, R. G. (1954). "On Some Characterizations of the Normal Distribution". *[Sankhyā](https://en.wikipedia.org/wiki/Sankhy%C4%81_\(journal\) "Sankhyā (journal)")*. **13** (4): 359–62\. [ISSN](https://en.wikipedia.org/wiki/ISSN_\(identifier\) "ISSN (identifier)") [0036-4452](https://search.worldcat.org/issn/0036-4452). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [25048183](https://www.jstor.org/stable/25048183). 50. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-50)** Lehmann, E. L. (1997). *Testing Statistical Hypotheses* (2nd ed.). Springer. p. 199. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-387-94919-2](https://en.wikipedia.org/wiki/Special:BookSources/978-0-387-94919-2 "Special:BookSources/978-0-387-94919-2") . 51. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-51)** [Patel & Read (1996](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFPatelRead1996), \[2.3.6\]) 52. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-52)** [Galambos & Simonelli (2004](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFGalambosSimonelli2004), Theorem 3.5) 53. ^ [***a***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Lukacs_53-0) [***b***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Lukacs_53-1) [Lukacs & King (1954)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFLukacsKing1954) 54. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-54)** Quine, M.P. (1993). ["On three characterisations of the normal distribution"](http://www.math.uni.wroc.pl/~pms/publicationsArticle.php?nr=14.2&nrA=8&ppB=257&ppE=263). *Probability and Mathematical Statistics*. **14** (2): 257–263\. 55. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-John-1982_55-0)** John, S (1982). "The three parameter two-piece normal family of distributions and its fitting". *Communications in Statistics – Theory and Methods*. **11** (8): 879–885\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1080/03610928208828279](https://doi.org/10.1080%2F03610928208828279). 56. ^ [***a***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Krishnamoorthy_56-0) [***b***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Krishnamoorthy_56-1) [Krishnamoorthy (2006](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFKrishnamoorthy2006), p. 127) 57. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-57)** [Krishnamoorthy (2006](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFKrishnamoorthy2006), p. 130) 58. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-58)** [Krishnamoorthy (2006](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFKrishnamoorthy2006), p. 133) 59. 
^ [***a***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-FOOTNOTEMaxwell186023_59-0) [***b***](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-FOOTNOTEMaxwell186023_59-1) [Maxwell (1860)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFMaxwell1860), p. 23. 60. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-FOOTNOTEBryc19951_60-0)** [Bryc (1995)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFBryc1995), p. 1. 61. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-61)** Larkoski, Andrew J. (2023). [*Quantum Mechanics: A Mathematical Introduction*](https://books.google.com/books?id=iKmnEAAAQBAJ&dq=normal%20distribution&pg=PA120). United Kingdom: Cambridge University Press. pp. 120–121\. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-1-009-12222-1](https://en.wikipedia.org/wiki/Special:BookSources/978-1-009-12222-1 "Special:BookSources/978-1-009-12222-1") . Retrieved May 30, 2025. 62. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-62)** [Huxley (1932)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFHuxley1932) 63. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-63)** Jaynes, Edwin T. (2003). [*Probability Theory: The Logic of Science*](https://books.google.com/books?id=tTN4HuUNXjgC&pg=PA592). Cambridge University Press. pp. 592–593\. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [9780521592710](https://en.wikipedia.org/wiki/Special:BookSources/9780521592710 "Special:BookSources/9780521592710") . 64. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-64)** Oosterbaan, Roland J. (1994). ["Chapter 6: Frequency and Regression Analysis of Hydrologic Data"](http://www.waterlog.info/pdf/freqtxt.pdf) (PDF). In Ritzema, Henk P. (ed.). *Drainage Principles and Applications, Publication 16* (second revised ed.). Wageningen, The Netherlands: International Institute for Land Reclamation and Improvement (ILRI). pp. 175–224\. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-90-70754-33-4](https://en.wikipedia.org/wiki/Special:BookSources/978-90-70754-33-4 "Special:BookSources/978-90-70754-33-4") . 65. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-65)** Why Most Published Research Findings Are False, John P. A. Ioannidis, 2005 66. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-66)** Wichura, Michael J. (1988). "Algorithm AS241: The Percentage Points of the Normal Distribution". *Applied Statistics*. **37** (3): 477–84\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.2307/2347330](https://doi.org/10.2307%2F2347330). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2347330](https://www.jstor.org/stable/2347330). 67. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-67)** [Johnson, Kotz & Balakrishnan (1995](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFJohnsonKotzBalakrishnan1995), Equation (26.48)) 68. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-68)** [Kinderman & Monahan (1977)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFKindermanMonahan1977) 69. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-69)** [Leva (1992)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFLeva1992) 70. 
**[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-70)** [Marsaglia & Tsang (2000)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFMarsagliaTsang2000) 71. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-71)** [Karney (2016)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFKarney2016) 72. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-72)** [Du, Fan & Wei (2022)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFDuFanWei2022) 73. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-73)** [Monahan (1985](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFMonahan1985), section 2) 74. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-74)** [Wallace (1996)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFWallace1996) 75. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-76)** [Johnson, Kotz & Balakrishnan (1994](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFJohnsonKotzBalakrishnan1994), p. 85) 76. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-77)** [Le Cam & Lo Yang (2000](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFLe_CamLo_Yang2000), p. 74) 77. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-79)** De Moivre, Abraham (1733), Corollary I – see [Walker (1985](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFWalker1985), p. 77) 78. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-80)** [Stigler (1986](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFStigler1986), [p. 76](https://archive.org/details/historyofstatist00stig/page/76/mode/2up?q=%22de+moivre%22)) 79. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-82)** [Gauss (1809](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFGauss1809), section 177) 80. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-83)** [Gauss (1809](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFGauss1809), section 179) 81. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-85)** [Laplace (1774](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFLaplace1774), Problem III) 82. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-86)** [Pearson (1905](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFPearson1905), p. 189) 83. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-87)** [Gauss (1809](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFGauss1809), section 177) 84. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-88)** [Stigler (1986](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFStigler1986), p. 144) 85. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-89)** [Stigler (1978](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFStigler1978), p. 243) 86. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-90)** [Stigler (1978](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFStigler1978), p. 244) 87. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-91)** Jaynes, Edwin J.; *Probability Theory: The Logic of Science*, [Ch. 7](http://www-biba.inrialpes.fr/Jaynes/cc07s.pdf). 88. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-93)** Peirce, Charles S. (c. 1909 MS), *[Collected Papers](https://en.wikipedia.org/wiki/Charles_Sanders_Peirce_bibliography#CP "Charles Sanders Peirce bibliography")* v. 6, paragraph 327. 89. 
**[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-94)** [Kruskal & Stigler (1997)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFKruskalStigler1997). 90. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-95)** ["Earliest Uses... (Entry Standard Normal Curve)"](http://jeff560.tripod.com/s.html). 91. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-96)** [Hoel (1947)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFHoel1947) introduces the terms *standard normal curve* [(p. 33)](https://archive.org/details/in.ernet.dli.2015.263186/page/n41/mode/2up?q=%22standard+normal+curve%22) and *standard normal distribution* [(p. 69)](https://archive.org/details/in.ernet.dli.2015.263186/page/n77/mode/2up?q=%22standard+normal+distribution%22). 92. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-97)** [Mood (1950)](https://en.wikipedia.org/wiki/Normal_distribution#CITEREFMood1950) explicitly defines the *standard normal distribution* [(p. 112)](https://archive.org/details/introductiontoth0000alex/page/112/mode/2up?q=%22standard+normal+distribution%22). 93. **[^](https://en.wikipedia.org/wiki/Normal_distribution#cite_ref-Sun-2021_98-0)** Sun, Jingchao; Kong, Maiying; Pal, Subhadip (June 22, 2021). ["The Modified-Half-Normal distribution: Properties and an efficient sampling scheme"](https://www.tandfonline.com/doi/abs/10.1080/03610926.2021.1934700?journalCode=lsta20). *Communications in Statistics – Theory and Methods*. **52** (5): 1591–1613\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1080/03610926.2021.1934700](https://doi.org/10.1080%2F03610926.2021.1934700). [ISSN](https://en.wikipedia.org/wiki/ISSN_\(identifier\) "ISSN (identifier)") [0361-0926](https://search.worldcat.org/issn/0361-0926). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [237919587](https://api.semanticscholar.org/CorpusID:237919587). - Aldrich, John; Miller, Jeff. ["Earliest Uses of Symbols in Probability and Statistics"](http://jeff560.tripod.com/stat.html). - Aldrich, John; Miller, Jeff. ["Earliest Known Uses of Some of the Words of Mathematics"](http://jeff560.tripod.com/mathword.html). In particular, the entries for ["bell-shaped and bell curve"](http://jeff560.tripod.com/b.html), ["normal (distribution)"](http://jeff560.tripod.com/n.html), ["Gaussian"](http://jeff560.tripod.com/g.html), and ["Error, law of error, theory of errors, etc."](http://jeff560.tripod.com/e.html). - [Amari, Shun'ichi](https://en.wikipedia.org/wiki/Shun%27ichi_Amari "Shun'ichi Amari"); Nagaoka, Hiroshi (2000). *Methods of Information Geometry*. Oxford University Press. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-8218-0531-2](https://en.wikipedia.org/wiki/Special:BookSources/978-0-8218-0531-2 "Special:BookSources/978-0-8218-0531-2") . - [Bernardo, JosĂ© M.](https://en.wikipedia.org/wiki/Jos%C3%A9-Miguel_Bernardo "JosĂ©-Miguel Bernardo"); [Smith, Adrian F. M.](https://en.wikipedia.org/wiki/Adrian_Smith_\(statistician\) "Adrian Smith (statistician)") (2000). *Bayesian Theory*. Wiley. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-471-49464-5](https://en.wikipedia.org/wiki/Special:BookSources/978-0-471-49464-5 "Special:BookSources/978-0-471-49464-5") . - Bryc, Wlodzimierz (1995). [*The Normal Distribution: Characterizations with Applications*](https://books.google.com/books?id=tyXjBwAAQBAJ). Springer-Verlag. 
[ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-387-97990-8](https://en.wikipedia.org/wiki/Special:BookSources/978-0-387-97990-8 "Special:BookSources/978-0-387-97990-8") . - [Casella, George](https://en.wikipedia.org/wiki/George_Casella "George Casella"); [Berger, Roger L.](https://en.wikipedia.org/wiki/Roger_Lee_Berger "Roger Lee Berger") (2001). *Statistical Inference* (2nd ed.). Duxbury. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-534-24312-8](https://en.wikipedia.org/wiki/Special:BookSources/978-0-534-24312-8 "Special:BookSources/978-0-534-24312-8") . - Cody, William J. (1969). ["Rational Chebyshev Approximations for the Error Function"](https://en.wikipedia.org/wiki/Error_function#cite_note-5 "Error function"). *Mathematics of Computation*. **23** (107): 631–638\. [Bibcode](https://en.wikipedia.org/wiki/Bibcode_\(identifier\) "Bibcode (identifier)"):[1969MaCom..23..631C](https://ui.adsabs.harvard.edu/abs/1969MaCom..23..631C). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1090/S0025-5718-1969-0247736-4](https://doi.org/10.1090%2FS0025-5718-1969-0247736-4). - [Cover, Thomas M.](https://en.wikipedia.org/wiki/Thomas_M._Cover "Thomas M. Cover"); [Thomas, Joy A.](https://en.wikipedia.org/wiki/Joy_A._Thomas "Joy A. Thomas") (2006). [*Elements of Information Theory*](https://books.google.com/books?id=VWq5GG6ycxMC). John Wiley and Sons. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [9780471241959](https://en.wikipedia.org/wiki/Special:BookSources/9780471241959 "Special:BookSources/9780471241959") . - Dia, Yaya D. (2023). ["Approximate Incomplete Integrals, Application to Complementary Error Function"](https://ssrn.com/abstract=4487559). *SSRN*. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.2139/ssrn.4487559](https://doi.org/10.2139%2Fssrn.4487559). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [259689086](https://api.semanticscholar.org/CorpusID:259689086). - [de Moivre, Abraham](https://en.wikipedia.org/wiki/Abraham_de_Moivre "Abraham de Moivre") (2000) \[First published 1738\]. [*The Doctrine of Chances*](https://en.wikipedia.org/wiki/The_Doctrine_of_Chances "The Doctrine of Chances"). American Mathematical Society. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-8218-2103-9](https://en.wikipedia.org/wiki/Special:BookSources/978-0-8218-2103-9 "Special:BookSources/978-0-8218-2103-9") . - Du, Y.; Fan, B.; Wei, B. (2022). "An improved exact sampling algorithm for the standard normal distribution". *Computational Statistics*. **37** (2): 721–737\. [arXiv](https://en.wikipedia.org/wiki/ArXiv_\(identifier\) "ArXiv (identifier)"):[2008\.03855](https://arxiv.org/abs/2008.03855). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1007/s00180-021-01136-w](https://doi.org/10.1007%2Fs00180-021-01136-w). - Fan, Jianqing (1991). ["On the optimal rates of convergence for nonparametric deconvolution problems"](https://doi.org/10.1214%2Faos%2F1176348248). *The Annals of Statistics*. **19** (3): 1257–1272\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1214/aos/1176348248](https://doi.org/10.1214%2Faos%2F1176348248). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2241949](https://www.jstor.org/stable/2241949). 
- [Galton, Francis](https://en.wikipedia.org/wiki/Francis_Galton "Francis Galton") (1889). [*Natural Inheritance*](http://galton.org/books/natural-inheritance/pdf/galton-nat-inh-1up-clean.pdf) (PDF). London, UK: Richard Clay and Sons. - [Galambos, Janos](https://en.wikipedia.org/wiki/Janos_Galambos "Janos Galambos"); Simonelli, Italo (2004). [*Products of Random Variables: Applications to Problems of Physics and to Arithmetical Functions*](https://archive.org/details/productsofrandom00gala). Marcel Dekker, Inc. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-8247-5402-0](https://en.wikipedia.org/wiki/Special:BookSources/978-0-8247-5402-0 "Special:BookSources/978-0-8247-5402-0") . - [Gauss, Carolo Friderico](https://en.wikipedia.org/wiki/Carl_Friedrich_Gauss "Carl Friedrich Gauss") (1809). [*Theoria motvs corporvm coelestivm in sectionibvs conicis Solem ambientivm*](https://archive.org/details/theoriamotuscor00gausgoog) \[*Theory of the Motion of the Heavenly Bodies Moving about the Sun in Conic Sections*\] (in Latin). Hambvrgi, Svmtibvs F. Perthes et I. H. Besser. [English translation](https://books.google.com/books?id=1TIAAAAAQAAJ). - [Gould, Stephen Jay](https://en.wikipedia.org/wiki/Stephen_Jay_Gould "Stephen Jay Gould") (1981). [*The Mismeasure of Man*](https://en.wikipedia.org/wiki/The_Mismeasure_of_Man "The Mismeasure of Man") (first ed.). W. W. Norton. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-393-01489-1](https://en.wikipedia.org/wiki/Special:BookSources/978-0-393-01489-1 "Special:BookSources/978-0-393-01489-1") . - Halperin, Max; Hartley, Herman O.; Hoel, Paul G. (1965). "Recommended Standards for Statistical Symbols and Notation. COPSS Committee on Symbols and Notation". *The American Statistician*. **19** (3): 12–14\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.2307/2681417](https://doi.org/10.2307%2F2681417). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2681417](https://www.jstor.org/stable/2681417). - Hart, John F.; et al. (1968). *Computer Approximations*. New York, NY: John Wiley & Sons, Inc. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-88275-642-4](https://en.wikipedia.org/wiki/Special:BookSources/978-0-88275-642-4 "Special:BookSources/978-0-88275-642-4") . - ["Normal Distribution"](https://www.encyclopediaofmath.org/index.php?title=Normal_Distribution), *[Encyclopedia of Mathematics](https://en.wikipedia.org/wiki/Encyclopedia_of_Mathematics "Encyclopedia of Mathematics")*, [EMS Press](https://en.wikipedia.org/wiki/European_Mathematical_Society "European Mathematical Society"), 2001 \[1994\] - [Herrnstein, Richard J.](https://en.wikipedia.org/wiki/Richard_J._Herrnstein "Richard J. Herrnstein"); [Murray, Charles](https://en.wikipedia.org/wiki/Charles_Murray_\(political_scientist\) "Charles Murray (political scientist)") (1994). [*The Bell Curve: Intelligence and Class Structure in American Life*](https://en.wikipedia.org/wiki/The_Bell_Curve "The Bell Curve"). [Free Press](https://en.wikipedia.org/wiki/Free_Press_\(publisher\) "Free Press (publisher)"). [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-02-914673-6](https://en.wikipedia.org/wiki/Special:BookSources/978-0-02-914673-6 "Special:BookSources/978-0-02-914673-6") . - Hoel, Paul G. (1947). 
[*Introduction To Mathematical Statistics*](https://archive.org/details/in.ernet.dli.2015.263186/page/n1/mode/2up). New York: Wiley. - [Huxley, Julian S.](https://en.wikipedia.org/wiki/Julian_S._Huxley "Julian S. Huxley") (1972) \[First published 1932\]. *Problems of Relative Growth*. London. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-486-61114-3](https://en.wikipedia.org/wiki/Special:BookSources/978-0-486-61114-3 "Special:BookSources/978-0-486-61114-3") . [OCLC](https://en.wikipedia.org/wiki/OCLC_\(identifier\) "OCLC (identifier)") [476909537](https://search.worldcat.org/oclc/476909537). - [Johnson, Norman L.](https://en.wikipedia.org/wiki/Norman_Lloyd_Johnson "Norman Lloyd Johnson"); [Kotz, Samuel](https://en.wikipedia.org/wiki/Samuel_Kotz "Samuel Kotz"); Balakrishnan, Narayanaswamy (1994). *Continuous Univariate Distributions, Volume 1*. Wiley. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-471-58495-7](https://en.wikipedia.org/wiki/Special:BookSources/978-0-471-58495-7 "Special:BookSources/978-0-471-58495-7") . - Johnson, Norman L.; Kotz, Samuel; Balakrishnan, Narayanaswamy (1995). *Continuous Univariate Distributions, Volume 2*. Wiley. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-471-58494-0](https://en.wikipedia.org/wiki/Special:BookSources/978-0-471-58494-0 "Special:BookSources/978-0-471-58494-0") . - Karney, C. F. F. (2016). ["Sampling exactly from the normal distribution"](https://doi.org/10.1145%2F2710016). *ACM Transactions on Mathematical Software*. **42** (1): 3:1–14. [arXiv](https://en.wikipedia.org/wiki/ArXiv_\(identifier\) "ArXiv (identifier)"):[1303\.6257](https://arxiv.org/abs/1303.6257). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1145/2710016](https://doi.org/10.1145%2F2710016). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [14252035](https://api.semanticscholar.org/CorpusID:14252035). - Kinderman, Albert J.; Monahan, John F. (1977). ["Computer Generation of Random Variables Using the Ratio of Uniform Deviates"](https://doi.org/10.1145%2F355744.355750). *ACM Transactions on Mathematical Software*. **3** (3): 257–260\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1145/355744.355750](https://doi.org/10.1145%2F355744.355750). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [12884505](https://api.semanticscholar.org/CorpusID:12884505). - Krishnamoorthy, Kalimuthu (2006). *Handbook of Statistical Distributions with Applications*. Chapman & Hall/CRC. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-1-58488-635-8](https://en.wikipedia.org/wiki/Special:BookSources/978-1-58488-635-8 "Special:BookSources/978-1-58488-635-8") . - [Kruskal, William H.](https://en.wikipedia.org/wiki/William_H._Kruskal "William H. Kruskal"); Stigler, Stephen M. (1997). Spencer, Bruce D. (ed.). *Normative Terminology: 'Normal' in Statistics and Elsewhere*. Statistics and Public Policy. Oxford University Press. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-19-852341-3](https://en.wikipedia.org/wiki/Special:BookSources/978-0-19-852341-3 "Special:BookSources/978-0-19-852341-3") . - [Laplace, Pierre-Simon de](https://en.wikipedia.org/wiki/Pierre-Simon_Laplace "Pierre-Simon Laplace") (1774). 
["MĂ©moire sur la probabilitĂ© des causes par les Ă©vĂ©nements"](http://gallica.bnf.fr/ark:/12148/bpt6k77596b/f32). *MĂ©moires de l'AcadĂ©mie Royale des Sciences de Paris (Savants Ă©trangers), Tome 6*: 621–656\. Translated by Stephen M. Stigler in *Statistical Science* **1** (3), 1986: [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2245476](https://www.jstor.org/stable/2245476). - Laplace, Pierre-Simon (1812). [*ThĂ©orie analytique des probabilitĂ©s*](https://archive.org/details/thorieanalytiqu00laplgoog) \[*[Analytical theory of probabilities](https://en.wikipedia.org/wiki/Analytical_theory_of_probabilities "Analytical theory of probabilities")*\]. Paris, Ve. Courcier. - [Le Cam, Lucien](https://en.wikipedia.org/wiki/Lucien_Le_Cam "Lucien Le Cam"); [Lo Yang, Grace](https://en.wikipedia.org/wiki/Grace_Yang "Grace Yang") (2000). *Asymptotics in Statistics: Some Basic Concepts* (second ed.). Springer. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-387-95036-5](https://en.wikipedia.org/wiki/Special:BookSources/978-0-387-95036-5 "Special:BookSources/978-0-387-95036-5") . - Leva, Joseph L. (1992). ["A fast normal random number generator"](https://web.archive.org/web/20100716035328/http://saluc.engr.uconn.edu/refs/crypto/rng/leva92afast.pdf) (PDF). *ACM Transactions on Mathematical Software*. **18** (4): 449–453\. [CiteSeerX](https://en.wikipedia.org/wiki/CiteSeerX_\(identifier\) "CiteSeerX (identifier)") [10\.1.1.544.5806](https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.544.5806). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1145/138351.138364](https://doi.org/10.1145%2F138351.138364). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [15802663](https://api.semanticscholar.org/CorpusID:15802663). Archived from [the original](http://saluc.engr.uconn.edu/refs/crypto/rng/leva92afast.pdf) (PDF) on July 16, 2010. - [Lexis, Wilhelm](https://en.wikipedia.org/wiki/Wilhelm_Lexis "Wilhelm Lexis") (1878). "Sur la durĂ©e normale de la vie humaine et sur la thĂ©orie de la stabilitĂ© des rapports statistiques". *Annales de DĂ©mographie Internationale*. **II**. Paris: 447–462\. - Lukacs, Eugene; King, Edgar P. (1954). ["A Property of Normal Distribution"](https://doi.org/10.1214%2Faoms%2F1177728796). *The Annals of Mathematical Statistics*. **25** (2): 389–394\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1214/aoms/1177728796](https://doi.org/10.1214%2Faoms%2F1177728796). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2236741](https://www.jstor.org/stable/2236741). - McPherson, Glen (1990). [*Statistics in Scientific Investigation: Its Basis, Application and Interpretation*](https://archive.org/details/statisticsinscie0000mcph). Springer-Verlag. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-387-97137-7](https://en.wikipedia.org/wiki/Special:BookSources/978-0-387-97137-7 "Special:BookSources/978-0-387-97137-7") . - [Marsaglia, George](https://en.wikipedia.org/wiki/George_Marsaglia "George Marsaglia"); Tsang, Wai Wan (2000). ["The Ziggurat Method for Generating Random Variables"](https://doi.org/10.18637%2Fjss.v005.i08). *Journal of Statistical Software*. **5** (8). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.18637/jss.v005.i08](https://doi.org/10.18637%2Fjss.v005.i08). - Marsaglia, George (2004). 
["Evaluating the Normal Distribution"](https://doi.org/10.18637%2Fjss.v011.i04). *Journal of Statistical Software*. **11** (4). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.18637/jss.v011.i04](https://doi.org/10.18637%2Fjss.v011.i04). - [Maxwell, James Clerk](https://en.wikipedia.org/wiki/James_Clerk_Maxwell "James Clerk Maxwell") (1860). ["V. Illustrations of the dynamical theory of gases. — Part I: On the motions and collisions of perfectly elastic spheres"](https://books.google.com/books?id=-YU7AQAAMAAJ&pg=PA19). *Philosophical Magazine*. Series 4. **19** (124): 19–32\. [Bibcode](https://en.wikipedia.org/wiki/Bibcode_\(identifier\) "Bibcode (identifier)"):[1860LEDPM..19...19M](https://ui.adsabs.harvard.edu/abs/1860LEDPM..19...19M). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1080/14786446008642818](https://doi.org/10.1080%2F14786446008642818). - Monahan, J. F. (1985). ["Accuracy in random number generation"](https://doi.org/10.1090%2FS0025-5718-1985-0804945-X). *Mathematics of Computation*. **45** (172): 559–568\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1090/S0025-5718-1985-0804945-X](https://doi.org/10.1090%2FS0025-5718-1985-0804945-X). - [Mood, Alexander McFarlane](https://en.wikipedia.org/wiki/Alexander_M._Mood "Alexander M. Mood") (1950). [*Introduction to the Theory of Statistics*](https://archive.org/details/introductiontoth0000alex/page/n5/mode/2up). New York: McGraw-Hill. - Patel, Jagdish K.; Read, Campbell B. (1996). *Handbook of the Normal Distribution* (2nd ed.). CRC Press. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-8247-9342-5](https://en.wikipedia.org/wiki/Special:BookSources/978-0-8247-9342-5 "Special:BookSources/978-0-8247-9342-5") . - [Pearson, Karl](https://en.wikipedia.org/wiki/Karl_Pearson "Karl Pearson") (1901). ["On Lines and Planes of Closest Fit to Systems of Points in Space"](http://stat.smmu.edu.cn/history/pearson1901.pdf) (PDF). *[Philosophical Magazine](https://en.wikipedia.org/wiki/Philosophical_Magazine "Philosophical Magazine")*. 6. **2** (11): 559–572\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1080/14786440109462720](https://doi.org/10.1080%2F14786440109462720). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [125037489](https://api.semanticscholar.org/CorpusID:125037489). - [Pearson, Karl](https://en.wikipedia.org/wiki/Karl_Pearson "Karl Pearson") (1905). ["'Das Fehlergesetz und seine Verallgemeinerungen durch Fechner und Pearson'. A rejoinder"](https://zenodo.org/record/1449456). *Biometrika*. **4** (1): 169–212\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.2307/2331536](https://doi.org/10.2307%2F2331536). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2331536](https://www.jstor.org/stable/2331536). - Pearson, Karl (1920). ["Notes on the History of Correlation"](https://zenodo.org/record/1431597). *Biometrika*. **13** (1): 25–45\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1093/biomet/13.1.25](https://doi.org/10.1093%2Fbiomet%2F13.1.25). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2331722](https://www.jstor.org/stable/2331722). - Rohrbasser, Jean-Marc; VĂ©ron, Jacques (2003). 
["Wilhelm Lexis: The Normal Length of Life as an Expression of the "Nature of Things""](http://www.persee.fr/web/revues/home/prescript/article/pop_1634-2941_2003_num_58_3_18444). *Population*. **58** (3): 303–322\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.3917/pope.303.0303](https://doi.org/10.3917%2Fpope.303.0303). - Shore, H (1982). "Simple Approximations for the Inverse Cumulative Function, the Density Function and the Loss Integral of the Normal Distribution". *Journal of the Royal Statistical Society. Series C (Applied Statistics)*. **31** (2): 108–114\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.2307/2347972](https://doi.org/10.2307%2F2347972). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2347972](https://www.jstor.org/stable/2347972). - Shore, H (2005). "Accurate RMM-Based Approximations for the CDF of the Normal Distribution". *Communications in Statistics – Theory and Methods*. **34** (3): 507–513\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1081/sta-200052102](https://doi.org/10.1081%2Fsta-200052102). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [122148043](https://api.semanticscholar.org/CorpusID:122148043). - Shore, H (2011). "Response Modeling Methodology". *WIREs Comput Stat*. **3** (4): 357–372\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1002/wics.151](https://doi.org/10.1002%2Fwics.151). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [62021374](https://api.semanticscholar.org/CorpusID:62021374). - Shore, H (2012). "Estimating Response Modeling Methodology Models". *WIREs Comput Stat*. **4** (3): 323–333\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1002/wics.1199](https://doi.org/10.1002%2Fwics.1199). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [122366147](https://api.semanticscholar.org/CorpusID:122366147). - [Stigler, Stephen M.](https://en.wikipedia.org/wiki/Stephen_Stigler "Stephen Stigler") (1978). ["Mathematical Statistics in the Early States"](https://doi.org/10.1214%2Faos%2F1176344123). *The Annals of Statistics*. **6** (2): 239–265\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1214/aos/1176344123](https://doi.org/10.1214%2Faos%2F1176344123). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2958876](https://www.jstor.org/stable/2958876). - Stigler, Stephen M. (1982). "A Modest Proposal: A New Standard for the Normal". *The American Statistician*. **36** (2): 137–138\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.2307/2684031](https://doi.org/10.2307%2F2684031). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2684031](https://www.jstor.org/stable/2684031). - Stigler, Stephen M. (1986). [*The History of Statistics: The Measurement of Uncertainty before 1900*](https://archive.org/details/historyofstatist00stig). Harvard University Press. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-674-40340-6](https://en.wikipedia.org/wiki/Special:BookSources/978-0-674-40340-6 "Special:BookSources/978-0-674-40340-6") . - Stigler, Stephen M. (1999). *Statistics on the Table*. Harvard University Press. 
[ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-674-83601-3](https://en.wikipedia.org/wiki/Special:BookSources/978-0-674-83601-3 "Special:BookSources/978-0-674-83601-3") . - Walker, Helen M. (1985). ["De Moivre on the Law of Normal Probability"](http://www.york.ac.uk/depts/maths/histstat/demoivre.pdf) (PDF). In Smith, David Eugene (ed.). *A Source Book in Mathematics*. Dover. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-486-64690-9](https://en.wikipedia.org/wiki/Special:BookSources/978-0-486-64690-9 "Special:BookSources/978-0-486-64690-9") . - [Wallace, C. S.](https://en.wikipedia.org/wiki/Chris_Wallace_\(computer_scientist\) "Chris Wallace (computer scientist)") (1996). ["Fast pseudo-random generators for normal and exponential variates"](https://doi.org/10.1145%2F225545.225554). *ACM Transactions on Mathematical Software*. **22** (1): 119–127\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1145/225545.225554](https://doi.org/10.1145%2F225545.225554). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [18514848](https://api.semanticscholar.org/CorpusID:18514848). - [Weisstein, Eric W.](https://en.wikipedia.org/wiki/Eric_W._Weisstein "Eric W. Weisstein") ["Normal Distribution"](http://mathworld.wolfram.com/NormalDistribution.html). [MathWorld](https://en.wikipedia.org/wiki/MathWorld "MathWorld"). - West, Graeme (2009). ["Better Approximations to Cumulative Normal Functions"](https://web.archive.org/web/20120229202051/https://wilmott.com/pdfs/090721_west.pdf) (PDF). *Wilmott Magazine*: 70–76\. Archived from [the original](https://wilmott.com/pdfs/090721_west.pdf) (PDF) on February 29, 2012. - Zelen, Marvin; Severo, Norman C. (1972) \[First published 1964\]. [*Probability Functions (chapter 26)*](http://www.math.sfu.ca/~cbm/aands/page_931.htm). *[Handbook of mathematical functions with formulas, graphs, and mathematical tables](https://en.wikipedia.org/wiki/Abramowitz_and_Stegun "Abramowitz and Stegun")*, by [Abramowitz, M.](https://en.wikipedia.org/wiki/Milton_Abramowitz "Milton Abramowitz"); and [Stegun, I. A.](https://en.wikipedia.org/wiki/Irene_A._Stegun "Irene A. Stegun"): National Bureau of Standards. New York, NY: Dover. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-486-61272-0](https://en.wikipedia.org/wiki/Special:BookSources/978-0-486-61272-0 "Special:BookSources/978-0-486-61272-0") . - ["Normal distribution"](https://www.encyclopediaofmath.org/index.php?title=Normal_distribution), *[Encyclopedia of Mathematics](https://en.wikipedia.org/wiki/Encyclopedia_of_Mathematics "Encyclopedia of Mathematics")*, [EMS Press](https://en.wikipedia.org/wiki/European_Mathematical_Society "European Mathematical Society"), 2001 \[1994\] - [Normal distribution calculator](https://www.hackmath.net/en/calculator/normal-distribution)
Shard: 152 (laksa)
Root Hash: 17790707453426894952
Unparsed URL: org,wikipedia!en,/wiki/Normal_distribution s443
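For reference, a minimal sketch of how the Unparsed URL key and shard above could be derived from the page URL. Everything in it is an assumption inferred from the single example shown: the reversed-host layout with `!` before the subdomain labels and an `s<port>` scheme suffix, the two-label registered-domain shortcut, the SHA-1 hash, and the `num_shards` value are all hypothetical, since the real key builder and shard hash are not documented here.

```python
from urllib.parse import urlsplit
import hashlib

def unparsed_url_key(url: str) -> str:
    """Rebuild a key in the inspector's apparent 'Unparsed URL' layout,
    inferred from 'org,wikipedia!en,/wiki/Normal_distribution s443':
    reversed registered domain, '!' before the subdomain labels,
    the path after a comma, and an 's<port>' scheme suffix."""
    parts = urlsplit(url)
    labels = parts.hostname.split(".")
    # Simplification: treat the last two labels as the registered domain.
    # A real key builder would consult the public suffix list instead.
    registered, subdomain = labels[-2:], labels[:-2]
    key = ",".join(reversed(registered))            # "org,wikipedia"
    if subdomain:
        key += "!" + ",".join(reversed(subdomain))  # "org,wikipedia!en"
    key += "," + (parts.path or "/")                # ",/wiki/Normal_distribution"
    port = {"http": 80, "https": 443}.get(parts.scheme, 0)
    return f"{key} s{port}"                         # " s443"

def shard_of(key: str, num_shards: int = 65536) -> int:
    """Hypothetical shard assignment: hash the key and reduce it modulo
    the shard count. The real hash function and shard count are not
    shown, so this will NOT reproduce Root Hash 17790707453426894952
    or shard 152."""
    h = int.from_bytes(hashlib.sha1(key.encode()).digest()[:8], "big")
    return h % num_shards

key = unparsed_url_key("https://en.wikipedia.org/wiki/Normal_distribution")
print(key)            # org,wikipedia!en,/wiki/Normal_distribution s443
print(shard_of(key))  # illustrative shard index only
```

Reversing the host labels in the key is a common crawler convention (as in SURT or WARC indexing): it groups all pages of a domain and its subdomains under one lexicographic prefix, which keeps a domain's URLs adjacent in shard and index scans.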