đŸ•·ïž Crawler Inspector

URL Lookup

Direct Parameter Lookup

Raw Queries and Responses

1. Shard Calculation

Query:
Response:
Calculated Shard: 152 (from laksa076)
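
A minimal sketch of how a host-to-shard mapping like the one above is commonly computed (the real system then maps the shard id to a backend such as laksa076). The hash choice, the shard count, and the function name are illustrative assumptions, not the crawler's actual implementation:

```python
# Hypothetical sketch: derive a stable shard id from a URL's hostname.
import hashlib
from urllib.parse import urlsplit

NUM_SHARDS = 512  # assumed shard count

def shard_for_url(url: str) -> int:
    """Hash the hostname so all URLs of a site land on the same shard."""
    host = urlsplit(url).netloc.lower()
    digest = hashlib.sha1(host.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

print(shard_for_url("https://en.wikipedia.org/wiki/Central_limit_theorem"))
```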

2. Crawled Status Check

Query:
Response:

3. Robots.txt Check

Query:
Response:
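
A stand-alone version of this robots.txt check can be written with the Python standard library alone; the user-agent string below is an assumption, and the real pipeline presumably caches robots.txt rather than fetching it per lookup:

```python
# Check whether a crawler may fetch a URL, per the site's robots.txt.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://en.wikipedia.org/robots.txt")
rp.read()  # fetches and parses the live robots.txt

# True here: Wikipedia allows crawling of /wiki/ article pages.
print(rp.can_fetch("ExampleCrawler/1.0",
                   "https://en.wikipedia.org/wiki/Central_limit_theorem"))
```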

4. Spam/Ban Check

Query:
Response:

5. Seen Status Check

â„č Skipped - page is already crawled

📄 INDEXABLE · ✅ CRAWLED (9 days ago) · đŸ€– ROBOTS ALLOWED

Page Info Filters

| Filter | Status | Condition | Details |
|---|---|---|---|
| HTTP status | PASS | download_http_code = 200 | HTTP 200 |
| Age cutoff | PASS | download_stamp > now() - 6 MONTH | 0.3 months ago (distributed domain, exempt) |
| History drop | PASS | isNull(history_drop_reason) | No drop reason |
| Spam/ban | PASS | fh_dont_index != 1 AND ml_spam_score = 0 | ml_spam_score=0 |
| Canonical | PASS | meta_canonical IS NULL OR = '' OR = src_unparsed | Not set |
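
The filter table reads naturally as a single predicate. Below is a sketch using the field names from the Condition column; the record layout and the age-cutoff exemption for distributed domains are assumptions for illustration:

```python
# Sketch of the indexability check implied by the filter table above.
from datetime import datetime, timedelta

def is_indexable(rec: dict, now: datetime, age_exempt: bool = False) -> bool:
    checks = [
        rec["download_http_code"] == 200,                              # HTTP status
        age_exempt or rec["download_stamp"] > now - timedelta(days=182),  # ~6-month age cutoff
        rec.get("history_drop_reason") is None,                        # history drop
        rec["fh_dont_index"] != 1 and rec["ml_spam_score"] == 0,       # spam/ban
        rec.get("meta_canonical") in (None, "", rec["src_unparsed"]),  # canonical
    ]
    return all(checks)
```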

Page Details

| Property | Value |
|---|---|
| URL | https://en.wikipedia.org/wiki/Central_limit_theorem |
| Last Crawled | 2026-04-01 00:24:22 (9 days ago) |
| First Indexed | 2013-08-08 16:29:31 (12 years ago) |
| HTTP Status Code | 200 |
| Meta Title | Central limit theorem - Wikipedia |
| Meta Description | null |
| Meta Canonical | null |
Boilerpipe Text
Central limit theorem
Type: Theorem
Field: Probability theory
Statement: The scaled sum of a sequence of i.i.d. random variables with finite positive variance converges in distribution to the normal distribution.
Generalizations: Lindeberg's CLT

In probability theory, the central limit theorem (CLT) states that, under appropriate conditions, the distribution of a normalized version of the sample mean converges to a standard normal distribution. This holds even if the original variables themselves are not normally distributed. There are several versions of the CLT, each applying in the context of different conditions.

The theorem is a key concept in probability theory because it implies that probabilistic and statistical methods that work for normal distributions can be applicable to many problems involving other types of distributions.

This theorem has seen many changes during the formal development of probability theory. Previous versions of the theorem date back to 1811, but in its modern form it was only precisely stated in the 1920s. [1]

In statistics, the CLT can be stated as: let $X_1, X_2, \dots, X_n$ denote a statistical sample of size $n$ from a population with expected value (average) $\mu$ and finite positive variance $\sigma^2$, and let $\bar{X}_n$ denote the sample mean (which is itself a random variable). Then the limit as $n \to \infty$ of the distribution of $(\bar{X}_n - \mu)\sqrt{n}$ is a normal distribution with mean $0$ and variance $\sigma^2$. [2]

In other words, suppose that a large sample of observations is obtained, each observation being randomly produced in a way that does not depend on the values of the other observations, and the average (arithmetic mean) of the observed values is computed. If this procedure is performed many times, resulting in a collection of observed averages, the central limit theorem says that if the sample size is large enough, the probability distribution of these averages will closely approximate a normal distribution.

The central limit theorem has several variants. In its common form, the random variables must be independent and identically distributed (i.i.d.). This requirement can be weakened; convergence of the mean to the normal distribution also occurs for non-identical distributions or for non-independent observations if they comply with certain conditions.

The earliest version of this theorem, that the normal distribution may be used as an approximation to the binomial distribution, is the de Moivre–Laplace theorem.

Independent sequences

[Figure: Whatever the form of the population distribution, the sampling distribution tends to a Gaussian, and its dispersion is given by the central limit theorem. [3]]

Classical CLT

Let $X_1, X_2, \dots, X_n$ be a sequence of i.i.d. random variables having a distribution with expected value given by $\mu$ and finite variance given by $\sigma^2$. Suppose we are interested in the sample average $\bar{X}_n = (X_1 + \cdots + X_n)/n$.

By the law of large numbers, the sample average converges almost surely (and therefore also converges in probability) to the expected value $\mu$ as $n \to \infty$.

The classical central limit theorem describes the size and the distributional form of the stochastic fluctuations around the deterministic number $\mu$ during this convergence. More precisely, it states that as $n$ gets larger, the distribution of the normalized mean $\sqrt{n}(\bar{X}_n - \mu)$, i.e. the difference between the sample average and its limit, scaled by the factor $\sqrt{n}$, approaches the normal distribution with mean $0$ and variance $\sigma^2$. For large enough $n$, the distribution of $\bar{X}_n$ gets arbitrarily close to the normal distribution with mean $\mu$ and variance $\sigma^2/n$.

The usefulness of the theorem is that the distribution of $\sqrt{n}(\bar{X}_n - \mu)$ approaches normality regardless of the shape of the distribution of the individual $X_i$. Formally, the theorem can be stated as follows:

Lindeberg–LĂ©vy CLT [4] — Suppose $X_1, X_2, \dots$ is a sequence of i.i.d. random variables with $E[X_i] = \mu$ and $\operatorname{Var}[X_i] = \sigma^2 < \infty$. Then, as $n$ approaches infinity, the random variables $\sqrt{n}(\bar{X}_n - \mu)$ converge in distribution to a normal $N(0, \sigma^2)$.

In the case $\sigma > 0$, convergence in distribution means that the cumulative distribution functions of $\sqrt{n}(\bar{X}_n - \mu)$ converge pointwise to the cdf of the $N(0, \sigma^2)$ distribution: for every real number $z$,
$$\lim_{n\to\infty} P\left[\sqrt{n}(\bar{X}_n - \mu) \le z\right] = \Phi\!\left(\frac{z}{\sigma}\right),$$
where $\Phi(z)$ is the standard normal cdf evaluated at $z$. The convergence is uniform in $z$ in the sense that
$$\lim_{n\to\infty} \; \sup_{z\in\mathbb{R}} \left| P\left[\sqrt{n}(\bar{X}_n - \mu) \le z\right] - \Phi\!\left(\frac{z}{\sigma}\right) \right| = 0,$$
where $\sup$ denotes the supremum (i.e. least upper bound) of the set. [5]
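
As a quick numerical check of the statement above (an illustrative sketch with NumPy; the seed and sample sizes are arbitrary), standardized sample means of a decidedly non-normal distribution should behave like $N(0,1)$:

```python
# Monte Carlo check of the classical CLT using an exponential population.
import numpy as np

rng = np.random.default_rng(0)
n, reps = 1_000, 50_000
mu, sigma = 1.0, 1.0  # mean and standard deviation of Exp(1)

samples = rng.exponential(scale=1.0, size=(reps, n))
z = (samples.mean(axis=1) - mu) * np.sqrt(n) / sigma  # sqrt(n)(X̄ − Ό)/σ

print(z.mean(), z.std())   # ≈ 0 and ≈ 1
print(np.mean(z <= 1.96))  # ≈ Ί(1.96) ≈ 0.975
```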
Lyapunov CLT

In this variant of the central limit theorem the random variables $X_i$ have to be independent, but not necessarily identically distributed. The theorem also requires that random variables $|X_i|$ have moments of some order $2 + \delta$, and that the rate of growth of these moments is limited by the Lyapunov condition given below.

Lyapunov CLT [6] — Suppose $X_1, \dots, X_n, \dots$ is a sequence of independent random variables, each with finite expected value $\mu_i$ and variance $\sigma_i^2$. Define $s_n^2 = \sum_{i=1}^n \sigma_i^2$. If for some $\delta > 0$, Lyapunov's condition
$$\lim_{n\to\infty} \frac{1}{s_n^{2+\delta}} \sum_{i=1}^n E\left[\,|X_i - \mu_i|^{2+\delta}\,\right] = 0$$
is satisfied, then a sum of $(X_i - \mu_i)/s_n$ converges in distribution to a standard normal random variable, as $n$ goes to infinity:
$$\frac{1}{s_n} \sum_{i=1}^n (X_i - \mu_i) \xrightarrow{d} N(0, 1).$$

In practice it is usually easiest to check Lyapunov's condition for $\delta = 1$. If a sequence of random variables satisfies Lyapunov's condition, then it also satisfies Lindeberg's condition. The converse implication, however, does not hold.

Lindeberg (–Feller) CLT

In the same setting and with the same notation as above, the Lyapunov condition can be replaced with the following weaker one (from Lindeberg in 1920). Suppose that for every $\varepsilon > 0$,
$$\lim_{n\to\infty} \frac{1}{s_n^2} \sum_{i=1}^n E\left[(X_i - \mu_i)^2 \cdot \mathbf{1}_{\{|X_i - \mu_i| > \varepsilon s_n\}}\right] = 0,$$
where $\mathbf{1}_{\{\cdot\}}$ is the indicator function. Then the distribution of the standardized sums $\frac{1}{s_n}\sum_{i=1}^n (X_i - \mu_i)$ converges towards the standard normal distribution $N(0,1)$.

CLT for the sum of a random number of random variables

Rather than summing an integer number $n$ of random variables and taking $n \to \infty$, the sum can be of a random number $N_n$ of random variables, with conditions on $N_n$. For example, the following theorem is Corollary 4 of Robbins (1948). [7] It assumes that $N_n$ is asymptotically normal (Robbins also developed other conditions that lead to the same result).

Multidimensional CLT

Proofs that use characteristic functions can be extended to cases where each individual $\mathbf{X}_i$ is a random vector in $\mathbb{R}^k$, with mean vector $\boldsymbol{\mu} = E[\mathbf{X}_i]$ and covariance matrix $\mathbf{\Sigma}$ (among the components of the vector), and these random vectors are independent and identically distributed. The multidimensional central limit theorem states that when scaled, sums converge to a multivariate normal distribution. [9] Summation of these vectors is done component-wise.

For $i = 1, 2, 3, \ldots$, let $\mathbf{X}_i = (X_{1i}, \ldots, X_{ki})^{\mathsf{T}}$ be independent random vectors. The sum of the random vectors is $\mathbf{X}_1 + \cdots + \mathbf{X}_n$ and their average is $\bar{\mathbf{X}}_n = \frac{1}{n}\sum_{i=1}^n \mathbf{X}_i$. Therefore,
$$\sqrt{n}\left(\bar{\mathbf{X}}_n - \boldsymbol{\mu}\right) = \frac{1}{\sqrt{n}} \sum_{i=1}^n \left(\mathbf{X}_i - \boldsymbol{\mu}\right).$$
The multivariate central limit theorem states that
$$\sqrt{n}\left(\bar{\mathbf{X}}_n - \boldsymbol{\mu}\right) \xrightarrow{d} N_k(0, \mathbf{\Sigma}),$$
where the covariance matrix $\mathbf{\Sigma}$ is equal to $\operatorname{Cov}(\mathbf{X}_1)$.

The multivariate central limit theorem can be proved using the CramĂ©r–Wold theorem. [9] The rate of convergence is given by a Berry–Esseen type result; it is unknown whether the factor $d^{1/4}$ is necessary. [11]

The generalized central limit theorem

The generalized central limit theorem (GCLT) was an effort of multiple mathematicians (Sergei Bernstein, Jarl Waldemar Lindeberg, Paul LĂ©vy, William Feller, Andrey Kolmogorov, and others) over the period from 1920 to 1937.
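
The $\delta = 1$ case of Lyapunov's condition is easy to verify numerically. For independent Uniform(−1, 1) variables, $\sigma_i^2 = 1/3$ and $E|X_i - \mu_i|^3 = 1/4$, so the Lyapunov ratio decays like $n^{-1/2}$ (a small arithmetic sketch, not from the article):

```python
# Lyapunov's condition with ÎŽ = 1 for i.i.d. Uniform(−1, 1) variables.
import math

def lyapunov_ratio(n: int, third_moment: float = 0.25, var: float = 1/3) -> float:
    s_n = math.sqrt(n * var)            # s_nÂČ = ÎŁ σ_iÂČ = n/3
    return n * third_moment / s_n**3    # (1/s_nÂł) ÎŁ E|X_i − Ό_i|Âł

for n in (10, 1_000, 100_000):
    print(n, lyapunov_ratio(n))         # → 0 as n grows, so the CLT applies
```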
[12] The first published complete proof of the GCLT was in 1937 by Paul LĂ©vy in French. [13] An English language version of the complete proof of the GCLT is available in the translation of Boris Vladimirovich Gnedenko and Kolmogorov's 1954 book. [14]

The statement of the GCLT is as follows: [15]

Statement of GCLT — A non-degenerate random variable $Z$ is $\alpha$-stable for some $0 < \alpha \le 2$ if and only if there is an independent, identically distributed sequence of random variables $X_1, X_2, X_3, \ldots$ and constants $a_n > 0$, $b_n \in \mathbb{R}$ with
$$a_n (X_1 + \cdots + X_n) - b_n \to Z.$$
Here '→' means the sequence of random variable sums converges in distribution; i.e., the corresponding distributions satisfy $F_n(y) \to F(y)$ at all continuity points of $F$.

In other words, if sums of independent, identically distributed random variables converge in distribution to some $Z$, then $Z$ must be a stable distribution.

Dependent processes

CLT under weak dependence

A useful generalization of a sequence of independent, identically distributed random variables is a mixing random process in discrete time; "mixing" means, roughly, that random variables temporally far apart from one another are nearly independent. Several kinds of mixing are used in ergodic theory and probability theory. See especially strong mixing (also called α-mixing) defined by $\alpha(n) \to 0$, where $\alpha(n)$ is the so-called strong mixing coefficient.

A simplified formulation of the central limit theorem under strong mixing is: [16] suppose that $\{X_1, \dots, X_n, \dots\}$ is stationary and strongly mixing with a sufficiently fast mixing rate, that $E[X_n] = 0$, and that suitable moment conditions hold; write $S_n = X_1 + \cdots + X_n$. Then the limit $\sigma^2 = \lim_{n} E(S_n^2)/n$ exists, and if $\sigma \ne 0$ then $S_n/(\sigma\sqrt{n})$ converges in distribution to $N(0,1)$. In fact,
$$\sigma^2 = E(X_1^2) + 2\sum_{k=1}^{\infty} E(X_1 X_{1+k}),$$
where the series converges absolutely. The assumption $\sigma \ne 0$ cannot be omitted, since the asymptotic normality fails for $X_n = Y_n - Y_{n-1}$ where $Y_n$ are another stationary sequence.

There is a stronger version of the theorem: [17] the moment assumption is relaxed to $E[|X_n|^{2+\delta}] < \infty$, and the mixing-rate assumption is replaced with a corresponding summability condition on the mixing coefficients. Existence of such $\delta > 0$ ensures the conclusion. For encyclopedic treatment of limit theorems under mixing conditions see (Bradley 2007).

Martingale difference CLT

Theorem — Let a martingale $M_n$ satisfy
$$\frac{1}{n} \sum_{k=1}^n E\left[(M_k - M_{k-1})^2 \mid M_1, \dots, M_{k-1}\right] \to 1 \text{ in probability, and}$$
$$\frac{1}{n} \sum_{k=1}^n E\left[(M_k - M_{k-1})^2 \, \mathbf{1}_{\{|M_k - M_{k-1}| > \varepsilon \sqrt{n}\}}\right] \to 0 \text{ for every } \varepsilon > 0;$$
then $\frac{M_n}{\sqrt{n}}$ converges in distribution to $N(0,1)$ as $n \to \infty$. [18] [19]

Remarks

Proof of classical CLT

The central limit theorem has a proof using characteristic functions. [20] It is similar to the proof of the (weak) law of large numbers.

Assume $\{X_1, \dots, X_n, \dots\}$ are independent and identically distributed random variables, each with mean $\mu$ and finite variance $\sigma^2$. The sum $X_1 + \cdots + X_n$ has mean $n\mu$ and variance $n\sigma^2$. Consider the random variable
$$Z_n = \frac{X_1 + \cdots + X_n - n\mu}{\sqrt{n\sigma^2}} = \sum_{i=1}^n \frac{X_i - \mu}{\sqrt{n\sigma^2}} = \sum_{i=1}^n \frac{1}{\sqrt{n}} Y_i,$$
where in the last step we defined the new random variables $Y_i = \frac{X_i - \mu}{\sigma}$, each with zero mean and unit variance ($\operatorname{Var}(Y_i) = 1$). The characteristic function of $Z_n$ is given by
$$\varphi_{Z_n}(t) = \left[\varphi_{Y_1}\!\left(\frac{t}{\sqrt{n}}\right)\right]^n,$$
where in the last step we used the fact that all of the $Y_i$ are identically distributed. The characteristic function of $Y_1$ is, by Taylor's theorem,
$$\varphi_{Y_1}\!\left(\frac{t}{\sqrt{n}}\right) = 1 - \frac{t^2}{2n} + o\!\left(\frac{t^2}{n}\right), \qquad \frac{t}{\sqrt{n}} \to 0,$$
where $o(t^2/n)$ is "little o notation" for some function of $t/\sqrt{n}$ that goes to zero more rapidly than $t^2/n$. By the limit of the exponential function ($e^x = \lim_{n\to\infty}(1 + x/n)^n$), the characteristic function of $Z_n$ equals
$$\varphi_{Z_n}(t) = \left(1 - \frac{t^2}{2n} + o\!\left(\frac{t^2}{n}\right)\right)^n \to e^{-\frac{1}{2}t^2}, \qquad n \to \infty.$$
All of the higher order terms vanish in the limit $n \to \infty$. The right hand side equals the characteristic function of a standard normal distribution $N(0,1)$, which implies through LĂ©vy's continuity theorem that the distribution of $Z_n$ will approach $N(0,1)$ as $n \to \infty$. Therefore, the sample average $\bar{X}_n = \frac{X_1 + \cdots + X_n}{n}$ is such that $\frac{\sqrt{n}}{\sigma}(\bar{X}_n - \mu)$ converges to the normal distribution $N(0,1)$, from which the central limit theorem follows.
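
The key limit in this proof can be checked symbolically (a sketch with SymPy, dropping the $o(\cdot)$ term, which does not affect the limit):

```python
# Verify (1 − tÂČ/(2n))^n → e^(−tÂČ/2), the characteristic function of N(0, 1).
from sympy import symbols, limit, oo, simplify

t = symbols('t', real=True)
n = symbols('n', positive=True)

phi_n = (1 - t**2 / (2*n))**n          # χf of Z_n with the o(·) term dropped
print(simplify(limit(phi_n, n, oo)))   # exp(-t**2/2)
```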
Convergence to the limit

The central limit theorem gives only an asymptotic distribution. As an approximation for a finite number of observations, it provides a reasonable approximation only when close to the peak of the normal distribution; it requires a very large number of observations to stretch into the tails. [citation needed]

The convergence in the central limit theorem is uniform because the limiting cumulative distribution function is continuous. If the third central moment $E[(X_1 - \mu)^3]$ exists and is finite, then the speed of convergence is at least on the order of $1/\sqrt{n}$ (see Berry–Esseen theorem). Stein's method [21] can be used not only to prove the central limit theorem, but also to provide bounds on the rates of convergence for selected metrics. [22]

The convergence to the normal distribution is monotonic, in the sense that the entropy of $Z_n$ increases monotonically to that of the normal distribution. [23]

The central limit theorem applies in particular to sums of independent and identically distributed discrete random variables. A sum of discrete random variables is still a discrete random variable, so that we are confronted with a sequence of discrete random variables whose cumulative probability distribution function converges towards a cumulative probability distribution function corresponding to a continuous variable (namely that of the normal distribution). This means that if we build a histogram of the realizations of the sum of n independent identical discrete variables, the piecewise-linear curve that joins the centers of the upper faces of the rectangles forming the histogram converges toward a Gaussian curve as n approaches infinity; this relation is known as de Moivre–Laplace theorem. The binomial distribution article details such an application of the central limit theorem in the simple case of a discrete variable taking only two possible values.

Common misconceptions

Studies have shown that the central limit theorem is subject to several common but serious misconceptions, some of which appear in widely used textbooks. [24] [25] [26] These include:

- The misconceived belief that the theorem applies to random sampling of any variable, rather than to the mean values (or sums) of iid random variables extracted from a population by repeated sampling. That is, the theorem assumes the random sampling produces a sampling distribution formed from different values of means (or sums) of such random variables.
- The misconceived belief that the theorem ensures that random sampling leads to the emergence of a normal distribution for sufficiently large samples of any random variable, regardless of the population distribution. In reality, such sampling asymptotically reproduces the properties of the population, an intuitive result underpinned by the Glivenko–Cantelli theorem.
- The misconceived belief that the theorem leads to a good approximation of a normal distribution for sample sizes greater than around 30, [27] allowing reliable inferences regardless of the nature of the population. In reality, this empirical rule of thumb has no valid justification, and can lead to seriously flawed inferences. See Z-test for where the approximation holds.
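
The last misconception above is easy to see numerically: for a strongly skewed population, sample means at $n = 30$ are still far from normal (an illustrative sketch; the seed and the log-normal parameters are arbitrary):

```python
# Sample means of a skewed log-normal population are still skewed at n = 30.
import numpy as np

rng = np.random.default_rng(1)
n, reps = 30, 100_000
means = rng.lognormal(mean=0.0, sigma=2.0, size=(reps, n)).mean(axis=1)

m = means - means.mean()
skew = (m**3).mean() / (m**2).mean()**1.5
print(skew)   # strongly positive; a normal distribution has skewness 0
```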
Relation to the law of large numbers

The law of large numbers as well as the central limit theorem are partial solutions to a general problem: "What is the limiting behavior of $S_n$ as $n$ approaches infinity?" In mathematical analysis, asymptotic series are one of the most popular tools employed to approach such questions.

Suppose we have an asymptotic expansion of $f(n)$:
$$f(n) = a_1 \varphi_1(n) + a_2 \varphi_2(n) + O\big(\varphi_3(n)\big) \qquad (n \to \infty).$$
Dividing both parts by $\varphi_1(n)$ and taking the limit will produce $a_1$, the coefficient of the highest-order term in the expansion, which represents the rate at which $f(n)$ changes in its leading term. Informally, one can say: "$f(n)$ grows approximately as $a_1 \varphi_1(n)$". Taking the difference between $f(n)$ and its approximation and then dividing by the next term in the expansion, we arrive at a more refined statement about $f(n)$:
$$\frac{f(n) - a_1 \varphi_1(n)}{\varphi_2(n)} \to a_2.$$
Here one can say that the difference between the function and its approximation grows approximately as $a_2 \varphi_2(n)$. The idea is that dividing the function by appropriate normalizing functions, and looking at the limiting behavior of the result, can tell us much about the limiting behavior of the original function itself.

Informally, something along these lines happens when the sum, $S_n$, of independent identically distributed random variables, $X_1, \dots, X_n$, is studied in classical probability theory. [citation needed] If each $X_i$ has finite mean $\mu$, then by the law of large numbers, $S_n/n \to \mu$. [28] If in addition each $X_i$ has finite variance $\sigma^2$, then by the central limit theorem,
$$\frac{S_n - n\mu}{\sqrt{n}} \to \xi,$$
where $\xi$ is distributed as $N(0, \sigma^2)$. This provides values of the first two constants in the informal expansion
$$S_n \approx \mu n + \xi \sqrt{n}.$$

In the case where the $X_i$ do not have finite mean or variance, convergence of the shifted and rescaled sum can also occur with different centering and scaling factors:
$$\frac{S_n - a_n}{b_n} \to \Xi,$$
or informally
$$S_n \approx a_n + \Xi b_n.$$
Distributions $\Xi$ which can arise in this way are called stable. [29] Clearly, the normal distribution is stable, but there are also other stable distributions, such as the Cauchy distribution, for which the mean or variance are not defined. The scaling factor $b_n$ may be proportional to $n^c$, for any $c \ge 1/2$; it may also be multiplied by a slowly varying function of $n$. [30] [31]

The law of the iterated logarithm specifies what is happening "in between" the law of large numbers and the central limit theorem. Specifically it says that the normalizing function $\sqrt{n \log\log n}$, intermediate in size between $n$ of the law of large numbers and $\sqrt{n}$ of the central limit theorem, provides a non-trivial limiting behavior.

Alternative statements of the theorem

Density functions

The density of the sum of two or more independent variables is the convolution of their densities (if these densities exist). Thus the central limit theorem can be interpreted as a statement about the properties of density functions under convolution: the convolution of a number of density functions tends to the normal density as the number of density functions increases without bound. These theorems require stronger hypotheses than the forms of the central limit theorem given above. Theorems of this type are often called local limit theorems. See Petrov [32] for a particular local limit theorem for sums of independent and identically distributed random variables.

Characteristic functions

Since the characteristic function of a convolution is the product of the characteristic functions of the densities involved, the central limit theorem has yet another restatement: the product of the characteristic functions of a number of density functions becomes close to the characteristic function of the normal density as the number of density functions increases without bound, under the conditions stated above. Specifically, an appropriate scaling factor needs to be applied to the argument of the characteristic function.

An equivalent statement can be made about Fourier transforms, since the characteristic function is essentially a Fourier transform.
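
The Cauchy case mentioned above is easy to demonstrate: sample means of standard Cauchy variables are again standard Cauchy, so, unlike in the CLT setting, they never concentrate (illustrative sketch; seed and sizes arbitrary):

```python
# Stability in action: means of n standard Cauchy variables do not concentrate.
import numpy as np

rng = np.random.default_rng(2)
for n in (10, 100, 10_000):
    means = rng.standard_cauchy(size=(1_000, n)).mean(axis=1)
    q75, q25 = np.percentile(means, [75, 25])
    print(n, q75 - q25)   # interquartile range stays ≈ 2 for every n
```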
Calculating the variance

Let $S_n$ be the sum of $n$ random variables. Many central limit theorems provide conditions such that $S_n/\sqrt{\operatorname{Var}(S_n)}$ converges in distribution to $N(0,1)$ (the normal distribution with mean 0, variance 1) as $n \to \infty$. In some cases, it is possible to find a constant $\sigma^2$ and a function $f(n)$ such that $S_n/(\sigma\sqrt{n \cdot f(n)})$ converges in distribution to $N(0,1)$ as $n \to \infty$. [33]

Extensions

Products of positive random variables

The logarithm of a product is simply the sum of the logarithms of the factors. Therefore, when the logarithm of a product of random variables that take only positive values approaches a normal distribution, the product itself approaches a log-normal distribution. Many physical quantities (especially mass or length, which are a matter of scale and cannot be negative) are the products of different random factors, so they follow a log-normal distribution. This multiplicative version of the central limit theorem is sometimes called Gibrat's law.

Whereas the central limit theorem for sums of random variables requires the condition of finite variance, the corresponding theorem for products requires the corresponding condition that the density function be square-integrable. [34]
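
Gibrat's law in miniature: a product of many positive random factors is approximately log-normal, because its logarithm is a sum covered by the CLT (illustrative sketch; the uniform factors are arbitrary):

```python
# The log of a product of positive factors is a CLT-style sum, hence ~normal.
import numpy as np

rng = np.random.default_rng(3)
factors = rng.uniform(0.5, 1.5, size=(50_000, 400))
product = factors.prod(axis=1)

logs = np.log(product)        # sum of 400 i.i.d. log-factors
m = logs - logs.mean()
skew = (m**3).mean() / (m**2).mean()**1.5
print(skew)                   # ≈ 0: the log of the product is nearly normal
```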
Beyond the classical framework

Asymptotic normality, that is, convergence to the normal distribution after appropriate shift and rescaling, is a phenomenon much more general than the classical framework treated above, namely, sums of independent random variables (or vectors). New frameworks are revealed from time to time; no single unifying framework is available for now.

Convex body

Theorem — There exists a sequence $\varepsilon_n \downarrow 0$ for which the following holds. Let $n \ge 1$, and let random variables $X_1, \dots, X_n$ have a log-concave joint density $f$ such that $f(x_1, \dots, x_n) = f(|x_1|, \dots, |x_n|)$ for all $x_1, \dots, x_n$, and $E(X_k^2) = 1$ for all $k = 1, \dots, n$. Then the distribution of $\frac{X_1 + \cdots + X_n}{\sqrt{n}}$ is $\varepsilon_n$-close to $N(0,1)$ in the total variation distance. [35]

These two $\varepsilon_n$-close distributions have densities (in fact, log-concave densities), thus, the total variation distance between them is the integral of the absolute value of the difference between the densities. Convergence in total variation is stronger than weak convergence.

An important example of a log-concave density is a function constant inside a given convex body and vanishing outside; it corresponds to the uniform distribution on the convex body, which explains the term "central limit theorem for convex bodies".

Another example: $f(x_1, \dots, x_n) = \mathrm{const} \cdot \exp\!\big(-(|x_1|^\alpha + \cdots + |x_n|^\alpha)^\beta\big)$ where $\alpha > 1$ and $\alpha\beta > 1$. If $\beta = 1$ then $f(x_1, \dots, x_n)$ factorizes into $\mathrm{const} \cdot \exp(-|x_1|^\alpha) \cdots \exp(-|x_n|^\alpha)$, which means $X_1, \dots, X_n$ are independent. In general, however, they are dependent.

The condition $f(x_1, \dots, x_n) = f(|x_1|, \dots, |x_n|)$ ensures that $X_1, \dots, X_n$ are of zero mean and uncorrelated; [citation needed] still, they need not be independent, nor even pairwise independent. [citation needed] By the way, pairwise independence cannot replace independence in the classical central limit theorem. [36]

Here is a Berry–Esseen type result.

Theorem — Let $X_1, \dots, X_n$ satisfy the assumptions of the previous theorem, then [37]
$$\left| P\!\left(a \le \frac{X_1 + \cdots + X_n}{\sqrt{n}} \le b\right) - \frac{1}{\sqrt{2\pi}} \int_a^b e^{-t^2/2} \, dt \right| \le \frac{C}{n}$$
for all $a < b$; here $C$ is a universal (absolute) constant. Moreover, for every $c_1, \dots, c_n \in \mathbb{R}$ such that $c_1^2 + \cdots + c_n^2 = 1$,
$$\left| P\!\left(a \le c_1 X_1 + \cdots + c_n X_n \le b\right) - \frac{1}{\sqrt{2\pi}} \int_a^b e^{-t^2/2} \, dt \right| \le C\left(c_1^4 + \cdots + c_n^4\right).$$

The distribution of $\frac{X_1 + \cdots + X_n}{\sqrt{n}}$ need not be approximately normal (in fact, it can be uniform). [38] However, the distribution of $c_1 X_1 + \cdots + c_n X_n$ is close to $N(0,1)$ (in the total variation distance) for most vectors $(c_1, \dots, c_n)$ according to the uniform distribution on the sphere $c_1^2 + \cdots + c_n^2 = 1$.

Lacunary trigonometric series

Theorem (Salem–Zygmund) — Let $U$ be a random variable distributed uniformly on $(0, 2\pi)$, and $X_k = r_k \cos(n_k U + a_k)$, where the $n_k$ satisfy the lacunarity condition: there exists $q > 1$ such that $n_{k+1} \ge q n_k$ for all $k$, the $r_k$ are such that the partial sums $r_1^2 + \cdots + r_k^2$ grow without bound while $r_k^2/(r_1^2 + \cdots + r_k^2) \to 0$, and $0 \le a_k < 2\pi$. Then [39] [40]
$$\frac{X_1 + \cdots + X_k}{\sqrt{r_1^2 + \cdots + r_k^2}}$$
converges in distribution to $N(0, \tfrac{1}{2})$.

Gaussian polytopes

Theorem — Let $A_1, \dots, A_n$ be independent random points on the plane $\mathbb{R}^2$ each having the two-dimensional standard normal distribution. Let $K_n$ be the convex hull of these points, and $X_n$ the area of $K_n$. Then [41]
$$\frac{X_n - E(X_n)}{\sqrt{\operatorname{Var}(X_n)}}$$
converges in distribution to $N(0,1)$ as $n$ tends to infinity.

The same also holds in all dimensions greater than 2. The polytope $K_n$ is called a Gaussian random polytope. A similar result holds for the number of vertices (of the Gaussian polytope), the number of edges, and in fact, faces of all dimensions. [42]

Linear functions of orthogonal matrices

A linear function of a matrix $M$ is a linear combination of its elements (with given coefficients), $M \mapsto \operatorname{tr}(AM)$ where $A$ is the matrix of the coefficients; see Trace (linear algebra)#Inner product.

A random orthogonal matrix is said to be distributed uniformly, if its distribution is the normalized Haar measure on the orthogonal group $O(n, \mathbb{R})$; see Rotation matrix#Uniform random rotation matrices.

Theorem — Let $M$ be a random orthogonal $n \times n$ matrix distributed uniformly, and $A$ a fixed $n \times n$ matrix such that $\operatorname{tr}(AA^*) = n$, and let $X = \operatorname{tr}(AM)$. Then [43] the distribution of $X$ is close to $N(0,1)$ in the total variation metric up to [clarification needed] $\frac{2\sqrt{3}}{n-1}$.

Subsequences

Theorem — Let random variables $X_1, X_2, \ldots \in L_2(\Omega)$ be such that $X_n \to 0$ weakly in $L_2(\Omega)$ and $X_n^2 \to 1$ weakly in $L_1(\Omega)$. Then there exist integers $n_1 < n_2 < \cdots$ such that
$$\frac{X_{n_1} + \cdots + X_{n_k}}{\sqrt{k}}$$
converges in distribution to $N(0,1)$ as $k$ tends to infinity. [44]

Random walk on a crystal lattice

The central limit theorem may be established for the simple random walk on a crystal lattice (an infinite-fold abelian covering graph over a finite graph), and is used for design of crystal structures. [45] [46]

Applications and examples

A simple example of the central limit theorem is rolling many identical, unbiased dice. The distribution of the sum (or average) of the rolled numbers will be well approximated by a normal distribution.
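
The dice example made concrete (illustrative sketch): the mean of one fair die is 3.5 and its variance is 35/12, so standardized sums of 50 dice should track the standard normal cdf:

```python
# Sums of n fair dice versus the matching normal approximation.
import numpy as np

rng = np.random.default_rng(4)
n, reps = 50, 100_000
sums = rng.integers(1, 7, size=(reps, n)).sum(axis=1)  # faces 1..6

mu, var = 3.5 * n, (35 / 12) * n
z = (sums - mu) / np.sqrt(var)
for c, phi in [(1.0, 0.8413), (2.0, 0.9772)]:
    print(c, np.mean(z <= c), "vs", phi)  # empirical vs normal cdf values
```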
Since real-world quantities are often the balanced sum of many unobserved random events, the central limit theorem also provides a partial explanation for the prevalence of the normal probability distribution. It also justifies the approximation of large-sample statistics to the normal distribution in controlled experiments.

[Figure: Comparison of probability density functions $p(k)$ for the sum of $n$ fair 6-sided dice, showing their convergence to a normal distribution with increasing $n$, in accordance with the central limit theorem. In the bottom-right graph, smoothed profiles of the previous graphs are rescaled, superimposed and compared with a normal distribution (black curve).]

[Figure: This figure demonstrates the central limit theorem. The sample means are generated using a random number generator, which draws numbers between 0 and 100 from a uniform probability distribution. It illustrates that increasing sample sizes result in the 500 measured sample means being more closely distributed about the population mean (50 in this case). It also compares the observed distributions with the distributions that would be expected for a normalized Gaussian distribution, and shows the chi-squared values that quantify the goodness of the fit (the fit is good if the reduced chi-squared value is less than or approximately equal to one). The input into the normalized Gaussian function is the mean of sample means (~50) and the mean sample standard deviation divided by the square root of the sample size (~28.87/√n), which is called the standard deviation of the mean (since it refers to the spread of sample means).]

[Figure: Another simulation using the binomial distribution. Random 0s and 1s were generated, and then their means calculated for sample sizes ranging from 1 to 2048. Note that as the sample size increases the tails become thinner and the distribution becomes more concentrated around the mean.]

Regression

Regression analysis, and in particular ordinary least squares, specifies that a dependent variable depends according to some function upon one or more independent variables, with an additive error term. Various types of statistical inference on the regression assume that the error term is normally distributed. This assumption can be justified by assuming that the error term is actually the sum of many independent error terms; even if the individual error terms are not normally distributed, by the central limit theorem their sum can be well approximated by a normal distribution.

Other illustrations

Given its importance to statistics, a number of papers and computer packages are available that demonstrate the convergence involved in the central limit theorem. [47]

History

Dutch mathematician Henk Tijms writes: [48]

The central limit theorem has an interesting history. The first version of this theorem was postulated by the French-born mathematician Abraham de Moivre who, in a remarkable article published in 1733, used the normal distribution to approximate the distribution of the number of heads resulting from many tosses of a fair coin. This finding was far ahead of its time, and was nearly forgotten until the famous French mathematician Pierre-Simon Laplace rescued it from obscurity in his monumental work ThĂ©orie analytique des probabilitĂ©s, which was published in 1812. Laplace expanded De Moivre's finding by approximating the binomial distribution with the normal distribution. But as with De Moivre, Laplace's finding received little attention in his own time.
It was not until the nineteenth century was at an end that the importance of the central limit theorem was discerned, when, in 1901, Russian mathematician Aleksandr Lyapunov defined it in general terms and proved precisely how it worked mathematically. Nowadays, the central limit theorem is considered to be the unofficial sovereign of probability theory.

Sir Francis Galton described the Central Limit Theorem in this way: [49]

I know of scarcely anything so apt to impress the imagination as the wonderful form of cosmic order expressed by the "Law of Frequency of Error". The law would have been personified by the Greeks and deified, if they had known of it. It reigns with serenity and in complete self-effacement, amidst the wildest confusion. The huger the mob, and the greater the apparent anarchy, the more perfect is its sway. It is the supreme law of Unreason. Whenever a large sample of chaotic elements are taken in hand and marshalled in the order of their magnitude, an unsuspected and most beautiful form of regularity proves to have been latent all along.

The actual term "central limit theorem" (in German: "zentraler Grenzwertsatz") was first used by George PĂłlya in 1920 in the title of a paper. [50] [51] PĂłlya referred to the theorem as "central" due to its importance in probability theory. According to Le Cam, the French school of probability interprets the word central in the sense that "it describes the behaviour of the centre of the distribution as opposed to its tails". [51] The abstract of the paper On the central limit theorem of calculus of probability and the problem of moments by PĂłlya [50] in 1920 translates as follows.

The occurrence of the Gaussian probability density $e^{-x^2}$ in repeated experiments, in errors of measurements, which result in the combination of very many and very small elementary errors, in diffusion processes etc., can be explained, as is well-known, by the very same limit theorem, which plays a central role in the calculus of probability. The actual discoverer of this limit theorem is to be named Laplace; it is likely that its rigorous proof was first given by Tschebyscheff and its sharpest formulation can be found, as far as I am aware of, in an article by Liapounoff. ...

A thorough account of the theorem's history, detailing Laplace's foundational work, as well as Cauchy's, Bessel's and Poisson's contributions, is provided by Hald. [52] Two historical accounts, one covering the development from Laplace to Cauchy, the second the contributions by von Mises, PĂłlya, Lindeberg, LĂ©vy, and CramĂ©r during the 1920s, are given by Hans Fischer. [53] Le Cam describes a period around 1935. [51] Bernstein [54] presents a historical discussion focusing on the work of Pafnuty Chebyshev and his students Andrey Markov and Aleksandr Lyapunov that led to the first proofs of the CLT in a general setting.

A curious footnote to the history of the Central Limit Theorem is that a proof of a result similar to the 1922 Lindeberg CLT was the subject of Alan Turing's 1934 Fellowship Dissertation for King's College at the University of Cambridge. Only after submitting the work did Turing learn it had already been proved. Consequently, Turing's dissertation was not published. [55]

See also

- Asymptotic equipartition property
- Asymptotic distribution
- Bates distribution
- Benford's law – result of extension of CLT to product of random variables.
- Berry–Esseen theorem
- Central limit theorem for directional statistics – Central limit theorem applied to the case of directional statistics
- Delta method – to compute the limit distribution of a function of a random variable.
- ErdƑs–Kac theorem – connects the number of prime factors of an integer with the normal probability distribution
- Fisher–Tippett–Gnedenko theorem – limit theorem for extremum values (such as max{X_n})
- Irwin–Hall distribution
- Markov chain central limit theorem
- Normal distribution
- Tweedie convergence theorem – a theorem that can be considered to bridge between the central limit theorem and the Poisson convergence theorem [56]
- Donsker's theorem

Notes

1. Fischer (2011), p. [page needed].
2. Montgomery, Douglas C.; Runger, George C. (2014). Applied Statistics and Probability for Engineers (6th ed.). Wiley. p. 241. ISBN 9781118539712.
3. Rouaud, Mathieu (2013). Probability, Statistics and Estimation (PDF). p. 10. Archived (PDF) from the original on 2022-10-09.
4. Billingsley (1995), p. 357.
5. Bauer (2001), p. 199, Theorem 30.13.
6. Billingsley (1995), p. 362.
7. Robbins, Herbert (1948). "The asymptotic distribution of the sum of a random number of random variables". Bull. Amer. Math. Soc. 54 (12): 1151–1161. doi:10.1090/S0002-9904-1948-09142-X.
8. Chen, Louis H. Y.; Goldstein, Larry; Shao, Qi-Man (2011). Normal Approximation by Stein's Method. Berlin Heidelberg: Springer-Verlag. pp. 270–271.
9. van der Vaart, A. W. (1998). Asymptotic Statistics. New York, NY: Cambridge University Press. ISBN 978-0-521-49603-2. LCCN 98015176.
10. O'Donnell, Ryan (2014). "Theorem 5.38". Archived from the original on 2019-04-08. Retrieved 2017-10-18.
11. Bentkus, V. (2005). "A Lyapunov-type bound in $\mathbb{R}^d$". Theory Probab. Appl. 49 (2): 311–323. doi:10.1137/S0040585X97981123.
12. Le Cam, L. (February 1986). "The Central Limit Theorem around 1935". Statistical Science. 1 (1): 78–91. JSTOR 2245503.
13. LĂ©vy, Paul (1937). ThĂ©orie de l'addition des variables alĂ©atoires [Combination theory of unpredictable variables] (in French). Paris: Gauthier-Villars.
14. Gnedenko, Boris Vladimirovich; Kolmogorov, AndreÄ­ Nikolaevich; Doob, Joseph L.; Hsu, Pao-Lu (1968). Limit Distributions for Sums of Independent Random Variables. Reading, MA: Addison-Wesley.
15. Nolan, John P. (2020). Univariate Stable Distributions, Models for Heavy Tailed Data. Springer Series in Operations Research and Financial Engineering. Switzerland: Springer. doi:10.1007/978-3-030-52915-4. ISBN 978-3-030-52914-7. S2CID 226648987.
16. Billingsley (1995), Theorem 27.4.
17. Durrett (2004), Sect. 7.7(c), Theorem 7.8.
18. Durrett (2004), Sect. 7.7, Theorem 7.4.
19. Billingsley (1995), Theorem 35.12.
20. Lemons, Don (2003). An Introduction to Stochastic Processes in Physics. Johns Hopkins University Press. doi:10.56021/9780801868665. ISBN 9780801876387. Retrieved 2016-08-11.
21. Stein, C. (1972). "A bound for the error in the normal approximation to the distribution of a sum of dependent random variables". Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability. 6 (2): 583–602. MR 0402873. Zbl 0278.60026.
22. Chen, L. H. Y.; Goldstein, L.; Shao, Q. M. (2011). Normal Approximation by Stein's Method. Springer. ISBN 978-3-642-15006-7.
23. Artstein, S.; Ball, K.; Barthe, F.; Naor, A. (2004). "Solution of Shannon's Problem on the Monotonicity of Entropy". Journal of the American Mathematical Society. 17 (4): 975–982. doi:10.1090/S0894-0347-04-00459-X.
24. Brewer, J. K. (1985). "Behavioral statistics textbooks: Source of myths and misconceptions?". Journal of Educational Statistics. 10 (3): 252–268. doi:10.3102/10769986010003252. S2CID 119611584.
25. Yu, C.; Behrens, J.; Spencer, A. Identification of Misconception in the Central Limit Theorem and Related Concepts. American Educational Research Association lecture, 19 April 1995.
26. Sotos, A. E. C.; Vanhoof, S.; Van den Noortgate, W.; Onghena, P. (2007). "Students' misconceptions of statistical inference: A review of the empirical evidence from research on statistics education". Educational Research Review. 2 (2): 98–113. doi:10.1016/j.edurev.2007.04.001.
27. "Sampling distribution of the sample mean". Khan Academy. 2 June 2023. Archived from the original (video) on 2023-06-02. Retrieved 2023-10-08.
28. Rosenthal, Jeffrey Seth (2000). A First Look at Rigorous Probability Theory. World Scientific. Theorem 5.3.4, p. 47. ISBN 981-02-4322-7.
29. Johnson, Oliver Thomas (2004). Information Theory and the Central Limit Theorem. Imperial College Press. p. 88. ISBN 1-86094-473-6.
30. Uchaikin, Vladimir V.; Zolotarev, V. M. (1999). Chance and Stability: Stable Distributions and Their Applications. VSP. pp. 61–62. ISBN 90-6764-301-7.
31. Borodin, A. N.; Ibragimov, I. A.; Sudakov, V. N. (1995). Limit Theorems for Functionals of Random Walks. AMS Bookstore. Theorem 1.1, p. 8. ISBN 0-8218-0438-3.
32. Petrov, V. V. (1976). Sums of Independent Random Variables. New York-Heidelberg: Springer-Verlag. ch. 7. ISBN 9783642658099.
33. Hew, Patrick Chisan (2017). "Asymptotic distribution of rewards accumulated by alternating renewal processes". Statistics and Probability Letters. 129: 355–359. doi:10.1016/j.spl.2017.06.027.
34. Rempala, G.; Wesolowski, J. (2002). "Asymptotics of products of sums and U-statistics" (PDF). Electronic Communications in Probability. 7: 47–54. doi:10.1214/ecp.v7-1046.
35. Klartag (2007), Theorem 1.2.
36. Durrett (2004), Section 2.4, Example 4.5.
37. Klartag (2008), Theorem 1.
38. Klartag (2007), Theorem 1.1.
39. Zygmund, Antoni (2003) [1959]. Trigonometric Series. Cambridge University Press. vol. II, sect. XVI.5, Theorem 5-5. ISBN 0-521-89053-5.
40. Gaposhkin (1966), Theorem 2.1.13.
41. BĂĄrĂĄny & Vu (2007), Theorem 1.1.
42. BĂĄrĂĄny & Vu (2007), Theorem 1.2.
43. Meckes, Elizabeth (2008). "Linear functions on the classical matrix groups". Transactions of the American Mathematical Society. 360 (10): 5355–5366. arXiv:math/0509441. doi:10.1090/S0002-9947-08-04444-9. S2CID 11981408.
44. Gaposhkin (1966), Sect. 1.5.
45. Kotani, M.; Sunada, Toshikazu (2003). Spectral Geometry of Crystal Lattices. Vol. 338. Contemporary Math. pp. 271–305. ISBN 978-0-8218-4269-0.
46. Sunada, Toshikazu (2012). Topological Crystallography – With a View Towards Discrete Geometric Analysis. Surveys and Tutorials in the Applied Mathematical Sciences. Vol. 6. Springer. ISBN 978-4-431-54177-6.
47. Marasinghe, M.; Meeker, W.; Cook, D.; Shin, T. S. (August 1994). Using Graphics and Simulation to Teach Statistical Concepts. Annual meeting of the American Statistical Association, Toronto, Canada.
48. Tijms, Henk (2004). Understanding Probability: Chance Rules in Everyday Life. Cambridge: Cambridge University Press. p. 169. ISBN 0-521-54036-4.
49. Galton, F. (1889). Natural Inheritance. p. 66.
50. PĂłlya, George (1920). "Über den zentralen Grenzwertsatz der Wahrscheinlichkeitsrechnung und das Momentenproblem" [On the central limit theorem of probability calculation and the problem of moments]. Mathematische Zeitschrift (in German). 8 (3–4): 171–181. doi:10.1007/BF01206525. S2CID 123063388.
51. Le Cam, Lucien (1986). "The central limit theorem around 1935". Statistical Science. 1 (1): 78–91. doi:10.1214/ss/1177013818.
52. Hald, Andreas (22 April 1998). A History of Mathematical Statistics from 1750 to 1930 (PDF). Wiley. chapter 17. ISBN 978-0471179122. Archived (PDF) from the original on 2022-10-09.
53. Fischer (2011), Chapter 2; Chapter 5.2.
54. Bernstein, S. N. (1945). "On the work of P. L. Chebyshev in Probability Theory". In Bernstein, S. N. (ed.). Nauchnoe Nasledie P. L. Chebysheva. Vypusk Pervyi: Matematika [The Scientific Legacy of P. L. Chebyshev. Part I: Mathematics] (in Russian). Moscow & Leningrad: Academiya Nauk SSSR. p. 174.
55. Zabell, S. L. (1995). "Alan Turing and the Central Limit Theorem". American Mathematical Monthly. 102 (6): 483–494. doi:10.1080/00029890.1995.12004608.
56. JĂžrgensen, Bent (1997). The Theory of Dispersion Models. Chapman & Hall. ISBN 978-0412997112.

References

- BĂĄrĂĄny, Imre; Vu, Van (2007). "Central limit theorems for Gaussian polytopes". Annals of Probability. 35 (4): 1593–1621. arXiv:math/0610192. doi:10.1214/009117906000000791. S2CID 9128253.
- Bauer, Heinz (2001). Measure and Integration Theory. Berlin: de Gruyter. ISBN 3110167190.
- Billingsley, Patrick (1995). Probability and Measure (3rd ed.). John Wiley & Sons. ISBN 0-471-00710-2.
- Bradley, Richard (2005). "Basic Properties of Strong Mixing Conditions. A Survey and Some Open Questions". Probability Surveys. 2: 107–144. arXiv:math/0511078. doi:10.1214/154957805100000104. S2CID 8395267.
- Bradley, Richard (2007). Introduction to Strong Mixing Conditions (1st ed.). Heber City, UT: Kendrick Press. ISBN 978-0-9740427-9-4.
- Dinov, Ivo; Christou, Nicolas; Sanchez, Juana (2008). "Central Limit Theorem: New SOCR Applet and Demonstration Activity". Journal of Statistics Education. 16 (2): 1–15. doi:10.1080/10691898.2008.11889560. PMC 3152447. PMID 21833159. Archived from the original on 2016-03-03. Retrieved 2008-08-23.
- Durrett, Richard (2004). Probability: Theory and Examples (3rd ed.). Cambridge University Press. ISBN 0521765390.
- Fischer, Hans (2011). A History of the Central Limit Theorem: From Classical to Modern Probability Theory (PDF). Sources and Studies in the History of Mathematics and Physical Sciences. New York: Springer. doi:10.1007/978-0-387-87857-7. ISBN 978-0-387-87856-0. MR 2743162. Zbl 1226.60004. Archived (PDF) from the original on 2017-10-31.
- Gaposhkin, V. F. (1966). "Lacunary series and independent functions". Russian Mathematical Surveys. 21 (6): 1–82. doi:10.1070/RM1966v021n06ABEH001196. S2CID 250833638.
- Klartag, Bo'az (2007). "A central limit theorem for convex sets". Inventiones Mathematicae. 168 (1): 91–131. arXiv:math/0605014. doi:10.1007/s00222-006-0028-8. S2CID 119169773.
- Klartag, Bo'az (2008). "A Berry–Esseen type inequality for convex bodies with an unconditional basis". Probability Theory and Related Fields. 145 (1–2): 1–33. arXiv:0705.0832. doi:10.1007/s00440-008-0158-6. S2CID 10163322.

External links

- Central Limit Theorem at Khan Academy
- "Central limit theorem". Encyclopedia of Mathematics. EMS Press. 2001 [1994].
- Weisstein, Eric W. "Central Limit Theorem". MathWorld.
A music video demonstrating the central limit theorem with a Galton board by Carl McTague
Markdown
"Edit interlanguage links") - [Article](https://en.wikipedia.org/wiki/Central_limit_theorem "View the content page [c]") - [Talk](https://en.wikipedia.org/wiki/Talk:Central_limit_theorem "Discuss improvements to the content page [t]") English - [Read](https://en.wikipedia.org/wiki/Central_limit_theorem) - [Edit](https://en.wikipedia.org/w/index.php?title=Central_limit_theorem&action=edit "Edit this page [e]") - [View history](https://en.wikipedia.org/w/index.php?title=Central_limit_theorem&action=history "Past revisions of this page [h]") Tools Tools move to sidebar hide Actions - [Read](https://en.wikipedia.org/wiki/Central_limit_theorem) - [Edit](https://en.wikipedia.org/w/index.php?title=Central_limit_theorem&action=edit "Edit this page [e]") - [View history](https://en.wikipedia.org/w/index.php?title=Central_limit_theorem&action=history) General - [What links here](https://en.wikipedia.org/wiki/Special:WhatLinksHere/Central_limit_theorem "List of all English Wikipedia pages containing links to this page [j]") - [Related changes](https://en.wikipedia.org/wiki/Special:RecentChangesLinked/Central_limit_theorem "Recent changes in pages linked from this page [k]") - [Upload file](https://en.wikipedia.org/wiki/Wikipedia:File_Upload_Wizard "Upload files [u]") - [Permanent link](https://en.wikipedia.org/w/index.php?title=Central_limit_theorem&oldid=1345105468 "Permanent link to this revision of this page") - [Page information](https://en.wikipedia.org/w/index.php?title=Central_limit_theorem&action=info "More information about this page") - [Cite this page](https://en.wikipedia.org/w/index.php?title=Special:CiteThisPage&page=Central_limit_theorem&id=1345105468&wpFormIdentifier=titleform "Information on how to cite this page") - [Get shortened URL](https://en.wikipedia.org/w/index.php?title=Special:UrlShortener&url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FCentral_limit_theorem) Print/export - [Download as PDF](https://en.wikipedia.org/w/index.php?title=Special:DownloadAsPdf&page=Central_limit_theorem&action=show-download-screen "Download this page as a PDF file") - [Printable version](https://en.wikipedia.org/w/index.php?title=Central_limit_theorem&printable=yes "Printable version of this page [p]") In other projects - [Wikimedia Commons](https://commons.wikimedia.org/wiki/Category:Central_limit_theorem) - [Wikidata item](https://www.wikidata.org/wiki/Special:EntityPage/Q190391 "Structured data on this page hosted by Wikidata [g]") Appearance move to sidebar hide From Wikipedia, the free encyclopedia Fundamental theorem in probability theory and statistics | | | |---|---| | [![](https://upload.wikimedia.org/wikipedia/commons/thumb/7/7b/IllustrationCentralTheorem.png/330px-IllustrationCentralTheorem.png)](https://en.wikipedia.org/wiki/File:IllustrationCentralTheorem.png) | | | Type | [Theorem](https://en.wikipedia.org/wiki/Theorem "Theorem") | | Field | [Probability theory](https://en.wikipedia.org/wiki/Probability_theory "Probability theory") | | Statement | The scaled sum of a sequence of [i.i.d. random variables](https://en.wikipedia.org/wiki/Independent_and_identically_distributed_random_variables "Independent and identically distributed random variables") with finite positive [variance](https://en.wikipedia.org/wiki/Variance "Variance") converges in distribution to the [normal distribution](https://en.wikipedia.org/wiki/Normal_distribution "Normal distribution"). 
| Generalizations | [Lindeberg's CLT](https://en.wikipedia.org/wiki/Lindeberg%27s_condition) |

In [probability theory](https://en.wikipedia.org/wiki/Probability_theory), the **central limit theorem** (**CLT**) states that, under appropriate conditions, the [distribution](https://en.wikipedia.org/wiki/Probability_distribution) of a normalized version of the sample mean converges to a [standard normal distribution](https://en.wikipedia.org/wiki/Normal_distribution#Standard_normal_distribution). This holds even if the original variables themselves are not [normally distributed](https://en.wikipedia.org/wiki/Normal_distribution). There are several versions of the CLT, each applying in the context of different conditions. The theorem is a key concept in probability theory because it implies that probabilistic and statistical methods that work for normal distributions can be applicable to many problems involving other types of distributions. This theorem has seen many changes during the formal development of probability theory. Previous versions of the theorem date back to 1811, but in its modern form it was only precisely stated in the 1920s.[1]

In [statistics](https://en.wikipedia.org/wiki/Statistics), the CLT can be stated as: let $X_1, X_2, \dots, X_n$ denote a [statistical sample](https://en.wikipedia.org/wiki/Sampling_(statistics)) of size $n$ from a population with [expected value](https://en.wikipedia.org/wiki/Expected_value) (average) $\mu$ and finite positive [variance](https://en.wikipedia.org/wiki/Variance) $\sigma^2$, and let $\bar{X}_n$ denote the sample mean (which is itself a [random variable](https://en.wikipedia.org/wiki/Random_variable)). Then the [limit as $n \to \infty$ of the distribution](https://en.wikipedia.org/wiki/Convergence_of_random_variables#Convergence_in_distribution) of $(\bar{X}_n - \mu)\sqrt{n}$ is a normal distribution with mean $0$ and variance $\sigma^2$.[2]

In other words, suppose that a large sample of [observations](https://en.wikipedia.org/wiki/Random_variate) is obtained, each observation being randomly produced in a way that does not depend on the values of the other observations, and the average ([arithmetic mean](https://en.wikipedia.org/wiki/Arithmetic_mean)) of the observed values is computed. If this procedure is performed many times, resulting in a collection of observed averages, the central limit theorem says that if the sample size is large enough, the [probability distribution](https://en.wikipedia.org/wiki/Probability_distribution) of these averages will closely approximate a normal distribution.

The central limit theorem has several variants. In its common form, the random variables must be [independent and identically distributed](https://en.wikipedia.org/wiki/Independent_and_identically_distributed) (i.i.d.). This requirement can be weakened; convergence of the mean to the normal distribution also occurs for non-identical distributions or for non-independent observations if they comply with certain conditions.
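The statement above lends itself to a direct Monte Carlo check. The following is a minimal simulation sketch (an editor's illustration, not part of the article source; it assumes NumPy and uses an illustrative exponential population with $\mu = \sigma = 1$): the normalized statistic $\sqrt{n}(\bar{X}_n - \mu)$ should behave like a draw from $\mathcal{N}(0, \sigma^2)$.

```python
# Minimal CLT simulation sketch (assumes NumPy; Exp(1) is an illustrative
# choice with mu = 1 and sigma^2 = 1). Draw many samples of size n, form
# sqrt(n) * (sample mean - mu), and compare with the N(0, sigma^2) limit.
import numpy as np

rng = np.random.default_rng(0)
n, trials = 500, 10_000
mu, sigma = 1.0, 1.0                          # mean and std of Exp(1)

samples = rng.exponential(scale=1.0, size=(trials, n))
z = np.sqrt(n) * (samples.mean(axis=1) - mu)

print(f"mean      ~ 0     : {z.mean():+.3f}")
print(f"std       ~ 1     : {z.std():.3f}")
print(f"P(z <= 1) ~ 0.841 : {(z <= sigma).mean():.3f}")
```

The exponential population is strongly skewed, yet the three printed statistics should already be close to their normal-limit values at $n = 500$.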
The earliest version of this theorem, that the normal distribution may be used as an approximation to the [binomial distribution](https://en.wikipedia.org/wiki/Binomial_distribution), is the [de Moivre–Laplace theorem](https://en.wikipedia.org/wiki/De_Moivre%E2%80%93Laplace_theorem).

## Independent sequences

[![](https://upload.wikimedia.org/wikipedia/commons/thumb/7/7b/IllustrationCentralTheorem.png/500px-IllustrationCentralTheorem.png)](https://en.wikipedia.org/wiki/File:IllustrationCentralTheorem.png)

Whatever the form of the population distribution, the sampling distribution tends to a Gaussian, and its dispersion is given by the central limit theorem.[3]

### Classical CLT

Let $(X_n)_{n \geq 1}$ be a sequence of [i.i.d. random variables](https://en.wikipedia.org/wiki/Independent_and_identically_distributed_random_variables) having a distribution with [expected value](https://en.wikipedia.org/wiki/Expected_value) given by $\mu$ and finite [variance](https://en.wikipedia.org/wiki/Variance) given by $\sigma^2$. Suppose we are interested in the [sample average](https://en.wikipedia.org/wiki/Sample_mean)

$$\bar{X}_n \equiv \frac{X_1 + \cdots + X_n}{n}.$$

By the [law of large numbers](https://en.wikipedia.org/wiki/Law_of_large_numbers), the sample average [converges almost surely](https://en.wikipedia.org/wiki/Almost_sure_convergence) (and therefore also [converges in probability](https://en.wikipedia.org/wiki/Convergence_in_probability)) to the expected value $\mu$ as $n \to \infty$. The classical central limit theorem describes the size and the distributional form of the [stochastic](https://en.wiktionary.org/wiki/stochastic) fluctuations around the deterministic number $\mu$ during this convergence.
More precisely, it states that as $n$ gets larger, the distribution of the normalized mean $\sqrt{n}(\bar{X}_n - \mu)$, i.e. the difference between the sample average $\bar{X}_n$ and its limit $\mu$, scaled by the factor $\sqrt{n}$, approaches the normal distribution with mean $0$ and variance $\sigma^2$. For large enough $n$, the distribution of $\bar{X}_n$ gets arbitrarily close to the normal distribution with mean $\mu$ and variance $\sigma^2 / n$. The usefulness of the theorem is that the distribution of $\sqrt{n}(\bar{X}_n - \mu)$ approaches normality regardless of the shape of the distribution of the individual $X_i$. Formally, the theorem can be stated as follows:

**Lindeberg–LĂ©vy CLT.** Suppose $X_1, X_2, X_3, \ldots$ is a sequence of [i.i.d.](https://en.wikipedia.org/wiki/Independent_and_identically_distributed) random variables with $\operatorname{E}[X_i] = \mu$ and $\operatorname{Var}[X_i] = \sigma^2 < \infty$. Then, as $n$ approaches infinity, the random variables $\sqrt{n}(\bar{X}_n - \mu)$ [converge in distribution](https://en.wikipedia.org/wiki/Convergence_in_distribution) to a normal $\mathcal{N}(0, \sigma^2)$:[4]

$$\sqrt{n}\left(\bar{X}_n - \mu\right) \xrightarrow{\ d\ } \mathcal{N}\left(0, \sigma^2\right).$$

In the case $\sigma > 0$, convergence in distribution means that the [cumulative distribution functions](https://en.wikipedia.org/wiki/Cumulative_distribution_function) of $\sqrt{n}(\bar{X}_n - \mu)$ converge pointwise to the cdf of the $\mathcal{N}(0, \sigma^2)$ distribution: for every real number $z$,

$$\lim_{n\to\infty} \mathbb{P}\left[\sqrt{n}(\bar{X}_n - \mu) \leq z\right] = \lim_{n\to\infty} \mathbb{P}\left[\frac{\sqrt{n}(\bar{X}_n - \mu)}{\sigma} \leq \frac{z}{\sigma}\right] = \Phi\left(\frac{z}{\sigma}\right),$$

where $\Phi(z)$ is the standard normal cdf evaluated at $z$. The convergence is uniform in $z$ in the sense that

$$\lim_{n\to\infty} \sup_{z\in\mathbb{R}} \left|\mathbb{P}\left[\sqrt{n}(\bar{X}_n - \mu) \leq z\right] - \Phi\left(\frac{z}{\sigma}\right)\right| = 0,$$

where $\sup$ denotes the [supremum](https://en.wikipedia.org/wiki/Supremum) (i.e. least upper bound) of the set.[5]

### Lyapunov CLT

In this variant of the central limit theorem the random variables $X_i$ have to be independent, but not necessarily identically distributed. The theorem also requires that the random variables $|X_i|$ have [moments](https://en.wikipedia.org/wiki/Moment_(mathematics)) of some order $(2+\delta)$, and that the rate of growth of these moments is limited by the Lyapunov condition given below.

**Lyapunov CLT**[6]. Suppose $\{X_1, \ldots, X_n, \ldots\}$ is a sequence of independent random variables, each with finite expected value $\mu_i$ and variance $\sigma_i^2$. Define

$$s_n^2 = \sum_{i=1}^{n} \sigma_i^2.$$

If for some $\delta > 0$, *Lyapunov's condition*

$$\lim_{n\to\infty} \frac{1}{s_n^{2+\delta}} \sum_{i=1}^{n} \operatorname{E}\left[\left|X_i - \mu_i\right|^{2+\delta}\right] = 0$$

is satisfied, then the sum of $\frac{X_i - \mu_i}{s_n}$ converges in distribution to a standard normal random variable as $n$ goes to infinity:

$$\frac{1}{s_n} \sum_{i=1}^{n} \left(X_i - \mu_i\right) \xrightarrow{\ d\ } \mathcal{N}(0, 1).$$

In practice it is usually easiest to check Lyapunov's condition for $\delta = 1$. If a sequence of random variables satisfies Lyapunov's condition, then it also satisfies Lindeberg's condition. The converse implication, however, does not hold.
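As a sketch of how the $\delta = 1$ check works in practice, the following assumes bounded variables $X_i \sim \mathrm{Uniform}(-a_i, a_i)$ with a repeating scale pattern (an illustrative choice by the editor, not from the article), evaluates the Lyapunov ratio in closed form, and then simulates the standardized sum.

```python
# Sketch of the Lyapunov CLT with delta = 1 for independent but not
# identically distributed X_i ~ Uniform(-a_i, a_i); the a_i pattern is an
# illustrative assumption, not from the article.
import numpy as np

rng = np.random.default_rng(1)
n, trials = 2_000, 5_000
a = 1.0 + (np.arange(1, n + 1) % 3)           # scales a_i in {1, 2, 3}

var_i = a**2 / 3                              # Var of Uniform(-a, a)
m3_i = a**3 / 4                               # E|X_i|^3 for Uniform(-a, a)
s_n = np.sqrt(var_i.sum())

# Lyapunov ratio for delta = 1; here it shrinks like n^(-1/2).
print("Lyapunov ratio:", m3_i.sum() / s_n**3)

x = rng.uniform(-a, a, size=(trials, n))      # each mu_i = 0
t = x.sum(axis=1) / s_n                       # standardized sum
print(f"mean ~ 0: {t.mean():+.3f}, std ~ 1: {t.std():.3f}")
```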
### Lindeberg (-Feller) CLT

Main article: [Lindeberg's condition](https://en.wikipedia.org/wiki/Lindeberg%27s_condition)

In the same setting and with the same notation as above, the Lyapunov condition can be replaced with the following weaker one (from [Lindeberg](https://en.wikipedia.org/wiki/Jarl_Waldemar_Lindeberg) in 1920). Suppose that for every $\varepsilon > 0$,

$$\lim_{n\to\infty} \frac{1}{s_n^2} \sum_{i=1}^{n} \operatorname{E}\left[(X_i - \mu_i)^2 \cdot \mathbf{1}_{\{|X_i - \mu_i| > \varepsilon s_n\}}\right] = 0,$$

where $\mathbf{1}_{\{\ldots\}}$ is the [indicator function](https://en.wikipedia.org/wiki/Indicator_function). Then the distribution of the standardized sums

$$\frac{1}{s_n} \sum_{i=1}^{n} \left(X_i - \mu_i\right)$$

converges towards the standard normal distribution $\mathcal{N}(0, 1)$.

### CLT for the sum of a random number of random variables

Rather than summing an integer number $n$ of random variables and taking $n \to \infty$, the sum can be of a random number $N$ of random variables, with conditions on $N$. For example, the following theorem is Corollary 4 of Robbins (1948). It assumes that $N$ is asymptotically normal (Robbins also developed other conditions that lead to the same result).

**Robbins CLT**[7][8]. Let $\{X_i, i \geq 1\}$ be independent, identically distributed random variables with $E(X_i) = \mu$ and $\operatorname{Var}(X_i) = \sigma^2$, and let $\{N_n, n \geq 1\}$ be a sequence of non-negative integer-valued random variables that are independent of $\{X_i, i \geq 1\}$. Assume for each $n = 1, 2, \dots$ that $E(N_n^2) < \infty$ and

$$\frac{N_n - E(N_n)}{\sqrt{\operatorname{Var}(N_n)}} \xrightarrow{\ d\ } \mathcal{N}(0, 1),$$

where $\xrightarrow{\ d\ }$ denotes convergence in distribution and $\mathcal{N}(0, 1)$ is the normal distribution with mean 0, variance 1. Then

$$\frac{\sum_{i=1}^{N_n} X_i - \mu E(N_n)}{\sqrt{\sigma^2 E(N_n) + \mu^2 \operatorname{Var}(N_n)}} \xrightarrow{\ d\ } \mathcal{N}(0, 1).$$
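A hedged illustration of the random-index statement: assuming $N_n$ is Poisson with a large mean (so it is asymptotically normal, satisfying the hypothesis) and $X_i \sim \mathrm{Exp}(1)$, the studentized random sum should be close to $\mathcal{N}(0, 1)$. Both distribution choices are the editor's, for illustration only.

```python
# Sketch of the Robbins-type CLT with a random number of summands:
# N ~ Poisson(lam) with large lam is asymptotically normal, and
# X_i ~ Exp(1); both distribution choices are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
lam, trials = 500.0, 20_000
mu, sigma2 = 1.0, 1.0                         # Exp(1): mean 1, variance 1

N = rng.poisson(lam, size=trials)             # E(N) = Var(N) = lam
sums = np.array([rng.exponential(1.0, k).sum() for k in N])

t = (sums - mu * lam) / np.sqrt(sigma2 * lam + mu**2 * lam)
print(f"mean ~ 0: {t.mean():+.3f}, std ~ 1: {t.std():.3f}")
```

Note that for Poisson $N$ the denominator reduces to $\sqrt{\lambda(\sigma^2 + \mu^2)}$, the familiar compound-Poisson standard deviation.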
### Multidimensional CLT

Proofs that use characteristic functions can be extended to cases where each individual $\mathbf{X}_i$ is a [random vector](https://en.wikipedia.org/wiki/Random_vector) in $\mathbb{R}^k$, with mean vector $\boldsymbol{\mu} = \operatorname{E}[\mathbf{X}_i]$ and [covariance matrix](https://en.wikipedia.org/wiki/Covariance_matrix) $\mathbf{\Sigma}$ (among the components of the vector), and these random vectors are independent and identically distributed. The multidimensional central limit theorem states that when scaled, sums converge to a [multivariate normal distribution](https://en.wikipedia.org/wiki/Multivariate_normal_distribution).[9] Summation of these vectors is done component-wise.

For $i = 1, 2, 3, \ldots,$ let

$$\mathbf{X}_i = \begin{bmatrix} X_i^{(1)} \\ \vdots \\ X_i^{(k)} \end{bmatrix}$$

be independent random vectors. The sum of the random vectors $\mathbf{X}_1, \ldots, \mathbf{X}_n$ is

$$\sum_{i=1}^{n} \mathbf{X}_i = \begin{bmatrix} X_1^{(1)} \\ \vdots \\ X_1^{(k)} \end{bmatrix} + \begin{bmatrix} X_2^{(1)} \\ \vdots \\ X_2^{(k)} \end{bmatrix} + \cdots + \begin{bmatrix} X_n^{(1)} \\ \vdots \\ X_n^{(k)} \end{bmatrix} = \begin{bmatrix} \sum_{i=1}^{n} X_i^{(1)} \\ \vdots \\ \sum_{i=1}^{n} X_i^{(k)} \end{bmatrix}$$

and their average is

$$\bar{\mathbf{X}}_n = \begin{bmatrix} \bar{X}_n^{(1)} \\ \vdots \\ \bar{X}_n^{(k)} \end{bmatrix} = \frac{1}{n} \sum_{i=1}^{n} \mathbf{X}_i.$$

Therefore,

$$\frac{1}{\sqrt{n}} \sum_{i=1}^{n} \left[\mathbf{X}_i - \operatorname{E}\left(\mathbf{X}_i\right)\right] = \frac{1}{\sqrt{n}} \sum_{i=1}^{n} (\mathbf{X}_i - \boldsymbol{\mu}) = \sqrt{n}\left(\bar{\mathbf{X}}_n - \boldsymbol{\mu}\right).$$

The multivariate central limit theorem states that

$$\sqrt{n}\left(\bar{\mathbf{X}}_n - \boldsymbol{\mu}\right) \xrightarrow{\ d\ } \mathcal{N}_k(0, \boldsymbol{\Sigma}),$$

where the covariance matrix $\boldsymbol{\Sigma}$ is equal to

$$\boldsymbol{\Sigma} = \begin{bmatrix}
\operatorname{Var}\left(X_1^{(1)}\right) & \operatorname{Cov}\left(X_1^{(1)}, X_1^{(2)}\right) & \operatorname{Cov}\left(X_1^{(1)}, X_1^{(3)}\right) & \cdots & \operatorname{Cov}\left(X_1^{(1)}, X_1^{(k)}\right) \\
\operatorname{Cov}\left(X_1^{(2)}, X_1^{(1)}\right) & \operatorname{Var}\left(X_1^{(2)}\right) & \operatorname{Cov}\left(X_1^{(2)}, X_1^{(3)}\right) & \cdots & \operatorname{Cov}\left(X_1^{(2)}, X_1^{(k)}\right) \\
\operatorname{Cov}\left(X_1^{(3)}, X_1^{(1)}\right) & \operatorname{Cov}\left(X_1^{(3)}, X_1^{(2)}\right) & \operatorname{Var}\left(X_1^{(3)}\right) & \cdots & \operatorname{Cov}\left(X_1^{(3)}, X_1^{(k)}\right) \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\operatorname{Cov}\left(X_1^{(k)}, X_1^{(1)}\right) & \operatorname{Cov}\left(X_1^{(k)}, X_1^{(2)}\right) & \operatorname{Cov}\left(X_1^{(k)}, X_1^{(3)}\right) & \cdots & \operatorname{Var}\left(X_1^{(k)}\right)
\end{bmatrix}.$$

The multivariate central limit theorem can be proved using the [CramĂ©r–Wold theorem](https://en.wikipedia.org/wiki/Cram%C3%A9r%E2%80%93Wold_theorem).[9]

The rate of convergence is given by the following [Berry–Esseen](https://en.wikipedia.org/wiki/Berry%E2%80%93Esseen_theorem) type result:

**Theorem**[10]. Let $X_1, \dots, X_n, \dots$ be independent $\mathbb{R}^d$-valued random vectors, each having mean zero. Write $S = \sum_{i=1}^{n} X_i$ and assume $\Sigma = \operatorname{Cov}[S]$ is invertible. Let $Z \sim \mathcal{N}(0, \Sigma)$ be a $d$-dimensional Gaussian with the same mean and same covariance matrix as $S$. Then for all convex sets $U \subseteq \mathbb{R}^d$,

$$\left|\mathbb{P}[S \in U] - \mathbb{P}[Z \in U]\right| \leq C\, d^{1/4} \gamma,$$

where $C$ is a universal constant, $\gamma = \sum_{i=1}^{n} \operatorname{E}\left[\left\|\Sigma^{-1/2} X_i\right\|_2^3\right]$, and $\|\cdot\|_2$ denotes the Euclidean norm on $\mathbb{R}^d$.

It is unknown whether the factor $d^{1/4}$ is necessary.[11]
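A small sketch of the multidimensional statement for $k = 2$. The vector $(U, U + V)$ built from independent exponentials is an illustrative assumption (not from the article): its coordinates are dependent and non-normal, and the empirical covariance of $\sqrt{n}(\bar{\mathbf{X}}_n - \boldsymbol{\mu})$ should approach $\boldsymbol{\Sigma}$.

```python
# Sketch of the multidimensional CLT for k = 2. The vectors (U, U + V)
# with independent U, V ~ Exp(1) are an illustrative assumption; their
# coordinates are dependent and non-normal, with mu = (1, 2).
import numpy as np

rng = np.random.default_rng(3)
n, trials = 2_000, 2_000

u = rng.exponential(1.0, size=(trials, n))
v = rng.exponential(1.0, size=(trials, n))
x = np.stack([u, u + v], axis=-1)             # shape (trials, n, 2)

mu = np.array([1.0, 2.0])
Sigma = np.array([[1.0, 1.0],                 # Var(U), Cov(U, U+V)
                  [1.0, 2.0]])                # Cov(U+V, U), Var(U+V)

z = np.sqrt(n) * (x.mean(axis=1) - mu)        # one 2-vector per trial
print("empirical covariance of z:\n", np.cov(z.T))
print("theoretical Sigma:\n", Sigma)
```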
## The generalized central limit theorem

The generalized central limit theorem (GCLT) was an effort of multiple mathematicians ([Sergei Bernstein](https://en.wikipedia.org/wiki/Sergei_Bernstein), [Jarl Waldemar Lindeberg](https://en.wikipedia.org/wiki/Jarl_Waldemar_Lindeberg), [Paul LĂ©vy](https://en.wikipedia.org/wiki/Paul_L%C3%A9vy_(mathematician)), [William Feller](https://en.wikipedia.org/wiki/William_Feller), [Andrey Kolmogorov](https://en.wikipedia.org/wiki/Andrey_Kolmogorov), and others) over the period from 1920 to 1937.[12] The first published complete proof of the GCLT was in 1937 by Paul LĂ©vy in French.[13] An English language version of the complete proof of the GCLT is available in the translation of [Boris Vladimirovich Gnedenko](https://en.wikipedia.org/wiki/Boris_Vladimirovich_Gnedenko) and Kolmogorov's 1954 book.[14]

The statement of the GCLT is as follows:[15]

**Statement of GCLT.** A non-degenerate random variable $Z$ is [α-stable](https://en.wikipedia.org/wiki/Stable_distribution) for some $0 < \alpha \leq 2$ if and only if there is an independent, identically distributed sequence of random variables $X_1, X_2, X_3, \ldots$ and constants $a_n > 0$, $b_n \in \mathbb{R}$ with

$$a_n (X_1 + \cdots + X_n) - b_n \to Z.$$

Here '→' means the sequence of random variable sums converges in distribution; i.e., the corresponding distributions satisfy $F_n(y) \to F(y)$ at all continuity points of $F$.

In other words, if sums of independent, identically distributed random variables converge in distribution to some $Z$, then $Z$ must be a [stable distribution](https://en.wikipedia.org/wiki/Stable_distribution).
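The non-normal stable limit can be seen numerically in the best-known special case: for standard Cauchy summands ($\alpha = 1$, infinite variance) the normalization $a_n = 1/n$, $b_n = 0$ yields a standard Cauchy limit exactly. A sketch (the editor's illustration, assuming NumPy):

```python
# Sketch of the stable (non-normal) limit behind the GCLT: for standard
# Cauchy summands (alpha = 1, infinite variance) the right normalization
# is a_n = 1/n, b_n = 0, and the limit is again standard Cauchy.
import numpy as np

rng = np.random.default_rng(4)
n, trials = 1_000, 20_000

x = rng.standard_cauchy((trials, n))
z = x.sum(axis=1) / n                         # a_n = 1/n, b_n = 0

# Standard Cauchy cdf: P(Z <= t) = 1/2 + arctan(t)/pi.
for t in (0.0, 1.0, 3.0):
    theory = 0.5 + np.arctan(t) / np.pi
    print(f"P(Z <= {t}): simulated {(z <= t).mean():.3f}, Cauchy {theory:.3f}")
```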
## Dependent processes

### CLT under weak dependence

A useful generalization of a sequence of independent, identically distributed random variables is a [mixing](https://en.wikipedia.org/wiki/Mixing_(mathematics)) random process in discrete time; "mixing" means, roughly, that random variables temporally far apart from one another are nearly independent. Several kinds of mixing are used in ergodic theory and probability theory. See especially [strong mixing](https://en.wikipedia.org/wiki/Mixing_(mathematics)#Mixing_in_stochastic_processes) (also called α-mixing), defined by $\alpha(n) \to 0$, where $\alpha(n)$ is the so-called strong mixing coefficient.

A simplified formulation of the central limit theorem under strong mixing is:[16]

**Theorem.** Suppose that $\{X_1, \ldots, X_n, \ldots\}$ is stationary and $\alpha$-mixing with $\alpha_n = O(n^{-5})$ and that $\operatorname{E}[X_n] = 0$ and $\operatorname{E}[X_n^{12}] < \infty$. Denote $S_n = X_1 + \cdots + X_n$; then the limit

$$\sigma^2 = \lim_{n\to\infty} \frac{\operatorname{E}\left(S_n^2\right)}{n}$$

exists, and if $\sigma \neq 0$ then $\frac{S_n}{\sigma\sqrt{n}}$ converges in distribution to $\mathcal{N}(0, 1)$.

In fact,

$$\sigma^2 = \operatorname{E}\left(X_1^2\right) + 2 \sum_{k=1}^{\infty} \operatorname{E}\left(X_1 X_{1+k}\right),$$

where the series converges absolutely.

The assumption $\sigma \neq 0$ cannot be omitted, since the asymptotic normality fails for $X_n = Y_n - Y_{n-1}$ where $Y_n$ is another [stationary sequence](https://en.wikipedia.org/wiki/Stationary_sequence).
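A sketch under an assumed Gaussian AR(1) model $X_t = \varphi X_{t-1} + \varepsilon_t$, which is stationary and geometrically strongly mixing, so the theorem applies; the series above gives the long-run variance $\sigma^2 = 1/(1-\varphi)^2$ for unit-variance innovations. The model choice is the editor's illustration, not from the article.

```python
# Sketch of the CLT under weak dependence for an assumed Gaussian AR(1)
# process X_t = phi * X_{t-1} + eps_t (stationary, geometrically strongly
# mixing). Long-run variance: sigma^2 = 1 / (1 - phi)^2 for eps ~ N(0, 1).
import numpy as np

rng = np.random.default_rng(5)
phi, n, burn, trials = 0.5, 5_000, 500, 10_000
sigma2 = 1.0 / (1.0 - phi) ** 2

x = np.zeros(trials)                          # one AR(1) path per trial
s = np.zeros(trials)
for t in range(n + burn):
    x = phi * x + rng.standard_normal(trials)
    if t >= burn:                             # discard burn-in, then sum
        s += x

z = s / np.sqrt(sigma2 * n)                   # S_n / (sigma sqrt(n))
print(f"mean ~ 0: {z.mean():+.3f}, std ~ 1: {z.std():.3f}")
```

Note that normalizing by the marginal variance $1/(1-\varphi^2)$ instead of the long-run $\sigma^2$ would visibly overstate the spread, which is exactly the correction the dependent-process theorem encodes.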
There is a stronger version of the theorem:[17] the assumption $\operatorname{E}\left[X_n^{12}\right] < \infty$ is replaced with $\operatorname{E}\left[|X_n|^{2+\delta}\right] < \infty$, and the assumption $\alpha_n = O(n^{-5})$ is replaced with

$$\sum_n \alpha_n^{\frac{\delta}{2(2+\delta)}} < \infty.$$

Existence of such $\delta > 0$ ensures the conclusion. For an encyclopedic treatment of limit theorems under mixing conditions see (Bradley 2007).

### Martingale difference CLT

Main article: [Martingale central limit theorem](https://en.wikipedia.org/wiki/Martingale_central_limit_theorem)

**Theorem.** Let a [martingale](https://en.wikipedia.org/wiki/Martingale_(probability_theory)) $M_n$ satisfy

- $\frac{1}{n} \sum_{k=1}^{n} \operatorname{E}\left[\left(M_k - M_{k-1}\right)^2 \mid M_1, \dots, M_{k-1}\right] \to 1$ in probability as $n \to \infty$,
- for every $\varepsilon > 0$, $\frac{1}{n} \sum_{k=1}^{n} \operatorname{E}\left[\left(M_k - M_{k-1}\right)^2 \mathbf{1}\left[|M_k - M_{k-1}| > \varepsilon\sqrt{n}\right]\right] \to 0$ as $n \to \infty$,

then $\frac{M_n}{\sqrt{n}}$ converges in distribution to $\mathcal{N}(0, 1)$ as $n \to \infty$.[18][19]
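A sketch with a hypothetical martingale constructed by the editor: increments $d_k = \sqrt{2}\,\xi_k\,\mathbf{1}[\xi_{k-1} = 1]$ for i.i.d. signs $\xi_k$ are bounded martingale differences whose averaged conditional variances converge to 1, so both hypotheses above hold and $M_n/\sqrt{n}$ should be close to $\mathcal{N}(0, 1)$.

```python
# Sketch of the martingale CLT with a hypothetical martingale: increments
# d_k = sqrt(2) * xi_k * 1[xi_{k-1} = 1] for i.i.d. signs xi_k are bounded
# martingale differences (E[d_k | past] = 0), and the averaged conditional
# variances (1/n) sum_k 2 * 1[xi_{k-1} = 1] -> 1 in probability.
import numpy as np

rng = np.random.default_rng(6)
n, trials = 2_000, 5_000

xi = rng.choice([-1.0, 1.0], size=(trials, n + 1))
d = np.sqrt(2.0) * xi[:, 1:] * (xi[:, :-1] == 1.0)   # past-dependent scale
m = d.sum(axis=1)                                    # M_n

z = m / np.sqrt(n)
print(f"mean ~ 0: {z.mean():+.3f}, std ~ 1: {z.std():.3f}")
```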
 , X n , 
$\{X_1, \ldots, X_n, \ldots\}$ are independent and identically distributed random variables, each with mean $\mu$ and finite variance $\sigma^2$. The sum $X_1 + \cdots + X_n$ has [mean](https://en.wikipedia.org/wiki/Linearity_of_expectation "Linearity of expectation") $n\mu$ and [variance](https://en.wikipedia.org/wiki/Variance#Sum_of_uncorrelated_variables_\(Bienaym%C3%A9_formula\) "Variance") $n\sigma^2$. Consider the random variable

$$Z_n = \frac{X_1 + \cdots + X_n - n\mu}{\sqrt{n\sigma^2}} = \sum_{i=1}^{n} \frac{X_i - \mu}{\sqrt{n\sigma^2}} = \sum_{i=1}^{n} \frac{1}{\sqrt{n}} Y_i,$$

where in the last step we defined the new random variables $Y_i = \frac{X_i - \mu}{\sigma}$, each with zero mean and unit variance ($\operatorname{var}(Y) = 1$).

The [characteristic function](https://en.wikipedia.org/wiki/Characteristic_function_\(probability_theory\) "Characteristic function (probability theory)") of $Z_n$ is given by

$$\varphi_{Z_n}(t) = \varphi_{\sum_{i=1}^{n} \frac{1}{\sqrt{n}} Y_i}(t) = \varphi_{Y_1}\!\left(\frac{t}{\sqrt{n}}\right)\varphi_{Y_2}\!\left(\frac{t}{\sqrt{n}}\right)\cdots\varphi_{Y_n}\!\left(\frac{t}{\sqrt{n}}\right) = \left[\varphi_{Y_1}\!\left(\frac{t}{\sqrt{n}}\right)\right]^{n},$$

where in the last step we used the fact that all of the $Y_i$ are identically distributed. The characteristic function of $Y_1$ is, by [Taylor's theorem](https://en.wikipedia.org/wiki/Taylor%27s_theorem "Taylor's theorem"),

$$\varphi_{Y_1}\!\left(\frac{t}{\sqrt{n}}\right) = 1 - \frac{t^2}{2n} + o\!\left(\frac{t^2}{n}\right), \qquad \frac{t}{\sqrt{n}} \to 0,$$

where $o(t^2/n)$ is "[little-o notation](https://en.wikipedia.org/wiki/Little-o_notation "Little-o notation")" for some function of $t$ that goes to zero more rapidly than $t^2/n$.

By the limit of the [exponential function](https://en.wikipedia.org/wiki/Exponential_function "Exponential function") ($e^x = \lim_{n\to\infty}\left(1 + \frac{x}{n}\right)^{n}$), the characteristic function of $Z_n$ equals

$$\varphi_{Z_n}(t) = \left(1 - \frac{t^2}{2n} + o\!\left(\frac{t^2}{n}\right)\right)^{n} \to e^{-\frac{1}{2}t^2}, \qquad n \to \infty.$$

All of the higher-order terms vanish in the limit $n \to \infty$. The right-hand side equals the characteristic function of a standard normal distribution $\mathcal{N}(0,1)$, which implies through [LĂ©vy's continuity theorem](https://en.wikipedia.org/wiki/L%C3%A9vy_continuity_theorem "LĂ©vy continuity theorem") that the distribution of $Z_n$ will approach $\mathcal{N}(0,1)$ as $n \to \infty$. Therefore, the [sample average](https://en.wikipedia.org/wiki/Sample_mean "Sample mean")

$$\bar{X}_n = \frac{X_1 + \cdots + X_n}{n}$$

is such that

$$\frac{\sqrt{n}}{\sigma}\left(\bar{X}_n - \mu\right) = Z_n$$

converges to the normal distribution $\mathcal{N}(0,1)$, from which the central limit theorem follows.
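This last step is easy to check numerically. The sketch below (a minimal illustration, assuming only NumPy; the summand distribution and sample counts are arbitrary choices) estimates the empirical characteristic function of $Z_n$ for strongly skewed exponential summands and compares it with $e^{-t^2/2}$:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.array([0.5, 1.0, 2.0])

# Exponential(1) summands: mu = sigma = 1, and the distribution is strongly skewed.
for n in (2, 10, 100):
    x = rng.exponential(size=(50_000, n))
    z = (x.sum(axis=1) - n) / np.sqrt(n)             # Z_n as defined above
    ecf = np.exp(1j * np.outer(t, z)).mean(axis=1)   # empirical E[exp(i t Z_n)]
    print(n, np.round(np.abs(ecf - np.exp(-t**2 / 2)), 3))
```

The printed gaps shrink toward the Monte Carlo noise floor as $n$ grows, mirroring the pointwise convergence of $\varphi_{Z_n}(t)$ used in the proof.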
### Convergence to the limit

The central limit theorem gives only an [asymptotic distribution](https://en.wikipedia.org/wiki/Asymptotic_distribution "Asymptotic distribution"). As an approximation for a finite number of observations, it provides a reasonable approximation only when close to the peak of the normal distribution; it requires a very large number of observations to stretch into the tails.

The convergence in the central limit theorem is [uniform](https://en.wikipedia.org/wiki/Uniform_convergence "Uniform convergence") because the limiting cumulative distribution function is continuous. If the third central [moment](https://en.wikipedia.org/wiki/Moment_\(mathematics\) "Moment (mathematics)") $\operatorname{E}\left[(X_1 - \mu)^3\right]$ exists and is finite, then the speed of convergence is at least on the order of $1/\sqrt{n}$ (see the [Berry–Esseen theorem](https://en.wikipedia.org/wiki/Berry%E2%80%93Esseen_theorem "Berry–Esseen theorem")). [Stein's method](https://en.wikipedia.org/wiki/Stein%27s_method "Stein's method")[21] can be used not only to prove the central limit theorem, but also to provide bounds on the rates of convergence for selected metrics.[22] The convergence to the normal distribution is monotonic, in the sense that the [entropy](https://en.wikipedia.org/wiki/Information_entropy "Information entropy") of $Z_n$ increases [monotonically](https://en.wikipedia.org/wiki/Monotonic_function "Monotonic function") to that of the normal distribution.[23]
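The $1/\sqrt{n}$ rate can be observed directly by measuring the Kolmogorov distance $\sup_x |F_{Z_n}(x) - \Phi(x)|$ as $n$ grows. A hedged sketch with NumPy and SciPy (the Bernoulli summands and sample counts are arbitrary choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Bernoulli(0.1) summands: finite third moment, heavily skewed, so the
# Kolmogorov distance to the normal CDF is easy to resolve.
p = 0.1
mu, sigma = p, np.sqrt(p * (1 - p))

for n in (10, 100, 1000):
    s = rng.binomial(n, p, size=200_000)     # S_n = X_1 + ... + X_n drawn directly
    z = (s - n * mu) / (sigma * np.sqrt(n))
    d = stats.kstest(z, "norm").statistic
    print(f"n={n:5d}  Kolmogorov distance {d:.4f}  sqrt(n) * distance {np.sqrt(n) * d:.3f}")
```

The product $\sqrt{n} \cdot d$ staying roughly constant is consistent with the $1/\sqrt{n}$ rate of the Berry–Esseen theorem.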
The central limit theorem applies in particular to sums of independent and identically distributed [discrete random variables](https://en.wikipedia.org/wiki/Discrete_random_variable "Discrete random variable"). A sum of discrete random variables is still a discrete random variable, so that we are confronted with a sequence of discrete random variables whose cumulative probability distribution function converges towards a cumulative probability distribution function corresponding to a continuous variable (namely that of the [normal distribution](https://en.wikipedia.org/wiki/Normal_distribution "Normal distribution")). This means that if we build a [histogram](https://en.wikipedia.org/wiki/Histogram "Histogram") of the realizations of the sum of $n$ independent identical discrete variables, the piecewise-linear curve that joins the centers of the upper faces of the rectangles forming the histogram converges toward a Gaussian curve as $n$ approaches infinity; this relation is known as the [de Moivre–Laplace theorem](https://en.wikipedia.org/wiki/De_Moivre%E2%80%93Laplace_theorem "de Moivre–Laplace theorem"). The [binomial distribution](https://en.wikipedia.org/wiki/Binomial_distribution "Binomial distribution") article details such an application of the central limit theorem in the simple case of a discrete variable taking only two possible values.

### Common misconceptions

Studies have shown that the central limit theorem is subject to several common but serious misconceptions, some of which appear in widely used textbooks.[24][25][26] These include:

- The misconceived belief that the theorem applies to random sampling of any variable, rather than to the mean values (or sums) of [i.i.d.](https://en.wikipedia.org/wiki/Iid "i.i.d.") random variables extracted from a population by repeated sampling. That is, the theorem assumes the random sampling produces a [sampling distribution](https://en.wikipedia.org/wiki/Sampling_distribution "Sampling distribution") formed from different values of means (or sums) of such random variables.
- The misconceived belief that the theorem ensures that random sampling leads to the emergence of a normal distribution for sufficiently large samples of any random variable, regardless of the population distribution. In reality, such sampling asymptotically reproduces the properties of the population, an intuitive result underpinned by the [Glivenko–Cantelli theorem](https://en.wikipedia.org/wiki/Glivenko%E2%80%93Cantelli_theorem "Glivenko–Cantelli theorem").
- The misconceived belief that the theorem leads to a good approximation of a normal distribution for sample sizes greater than around 30,[27] allowing reliable inferences regardless of the nature of the population. In reality, this empirical rule of thumb has no valid justification, and can lead to seriously flawed inferences. See [Z-test](https://en.wikipedia.org/wiki/Z-test "Z-test") for where the approximation holds.

### Relation to the law of large numbers

The [law of large numbers](https://en.wikipedia.org/wiki/Law_of_large_numbers "Law of large numbers") as well as the central limit theorem are partial solutions to a general problem: "What is the limiting behavior of $S_n$ as $n$ approaches infinity?" In mathematical analysis, [asymptotic series](https://en.wikipedia.org/wiki/Asymptotic_series "Asymptotic series") are one of the most popular tools employed to approach such questions.
Suppose we have an asymptotic expansion of $f(n)$:

$$f(n) = a_1\varphi_1(n) + a_2\varphi_2(n) + O\big(\varphi_3(n)\big) \qquad (n \to \infty).$$

Dividing both parts by $\varphi_1(n)$ and taking the limit will produce $a_1$, the coefficient of the highest-order term in the expansion, which represents the rate at which $f(n)$ changes in its leading term:

$$\lim_{n\to\infty} \frac{f(n)}{\varphi_1(n)} = a_1.$$

Informally, one can say: "$f(n)$ grows approximately as $a_1\varphi_1(n)$". Taking the difference between $f(n)$ and its approximation and then dividing by the next term in the expansion, we arrive at a more refined statement about $f(n)$:

$$\lim_{n\to\infty} \frac{f(n) - a_1\varphi_1(n)}{\varphi_2(n)} = a_2.$$

Here one can say that the difference between the function and its approximation grows approximately as $a_2\varphi_2(n)$. The idea is that dividing the function by appropriate normalizing functions, and looking at the limiting behavior of the result, can tell us much about the limiting behavior of the original function itself.

Informally, something along these lines happens when the sum $S_n$ of independent identically distributed random variables $X_1, \ldots, X_n$ is studied in classical probability theory. If each $X_i$ has finite mean $\mu$, then by the law of large numbers, $S_n/n \to \mu$.[28] If in addition each $X_i$ has finite variance $\sigma^2$, then by the central limit theorem,

$$\frac{S_n - n\mu}{\sqrt{n}} \to \xi,$$

where $\xi$ is distributed as $\mathcal{N}(0, \sigma^2)$. This provides values of the first two constants in the informal expansion

$$S_n \approx \mu n + \xi\sqrt{n}.$$

In the case where the $X_i$ do not have finite mean or variance, convergence of the shifted and rescaled sum can also occur with different centering and scaling factors:

$$\frac{S_n - a_n}{b_n} \to \Xi,$$

or informally

$$S_n \approx a_n + \Xi b_n.$$

Distributions $\Xi$ which can arise in this way are called [*stable*](https://en.wikipedia.org/wiki/Stable_distribution "Stable distribution").[29] Clearly, the normal distribution is stable, but there are also other stable distributions, such as the [Cauchy distribution](https://en.wikipedia.org/wiki/Cauchy_distribution "Cauchy distribution"), for which the mean or variance are not defined. The scaling factor $b_n$ may be proportional to $n^c$ for any $c \geq 1/2$; it may also be multiplied by a [slowly varying function](https://en.wikipedia.org/wiki/Slowly_varying_function "Slowly varying function") of $n$.[30][31]

The [law of the iterated logarithm](https://en.wikipedia.org/wiki/Law_of_the_iterated_logarithm "Law of the iterated logarithm") specifies what is happening "in between" the [law of large numbers](https://en.wikipedia.org/wiki/Law_of_large_numbers "Law of large numbers") and the central limit theorem. Specifically it says that the normalizing function $\sqrt{n \log\log n}$, intermediate in size between $n$ of the law of large numbers and $\sqrt{n}$ of the central limit theorem, provides a non-trivial limiting behavior.

### Alternative statements of the theorem

#### Density functions

The [density](https://en.wikipedia.org/wiki/Probability_density_function "Probability density function") of the sum of two or more independent variables is the [convolution](https://en.wikipedia.org/wiki/Convolution "Convolution") of their densities (if these densities exist). Thus the central limit theorem can be interpreted as a statement about the properties of density functions under convolution: the convolution of a number of density functions tends to the normal density as the number of density functions increases without bound.
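This convolution picture is easy to reproduce numerically. In the sketch below (a minimal NumPy illustration; the grid step and number of factors are arbitrary choices), the uniform density on $[0, 1]$ is convolved with itself repeatedly, and the standardized density at its centre is compared with the normal peak $1/\sqrt{2\pi} \approx 0.3989$:

```python
import numpy as np

h = 1e-3
f = np.ones(int(1 / h))              # density of Uniform(0, 1) on a grid of step h

g = f.copy()
for k in range(2, 9):
    g = np.convolve(g, f) * h        # convolution of densities = density of S_k = U_1 + ... + U_k
    x = np.arange(g.size) * h        # grid of values of S_k on [0, k]
    mean, std = k / 2, np.sqrt(k / 12)
    peak = std * g[np.argmin(np.abs(x - mean))]   # density of (S_k - mean)/std at 0
    print(f"k={k}: standardized density at 0 = {peak:.4f}  vs normal peak {1 / np.sqrt(2 * np.pi):.4f}")
```

Already at $k = 8$ the standardized Irwin–Hall density is within a fraction of a percent of the normal peak.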
These theorems require stronger hypotheses than the forms of the central limit theorem given above. Theorems of this type are often called local limit theorems. See Petrov[32] for a particular local limit theorem for sums of [independent and identically distributed random variables](https://en.wikipedia.org/wiki/Independent_and_identically_distributed_random_variables "Independent and identically distributed random variables").

#### Characteristic functions

Since the [characteristic function](https://en.wikipedia.org/wiki/Characteristic_function_\(probability_theory\) "Characteristic function (probability theory)") of a convolution is the product of the characteristic functions of the densities involved, the central limit theorem has yet another restatement: the product of the characteristic functions of a number of density functions becomes close to the characteristic function of the normal density as the number of density functions increases without bound, under the conditions stated above. Specifically, an appropriate scaling factor needs to be applied to the argument of the characteristic function. An equivalent statement can be made about [Fourier transforms](https://en.wikipedia.org/wiki/Fourier_transform "Fourier transform"), since the characteristic function is essentially a Fourier transform.

### Calculating the variance

Let $S_n$ be the sum of $n$ random variables. Many central limit theorems provide conditions such that $S_n/\sqrt{\operatorname{Var}(S_n)}$ converges in distribution to $\mathcal{N}(0,1)$ (the normal distribution with mean 0, variance 1) as $n \to \infty$. In some cases, it is possible to find a constant $\sigma^2$ and a function $f(n)$ such that $S_n/(\sigma\sqrt{n\,f(n)})$ converges in distribution to $\mathcal{N}(0,1)$ as $n \to \infty$.

**Lemma[33]** — Suppose $X_1, X_2, \dots$ is a sequence of real-valued and strictly stationary random variables with $\operatorname{E}(X_i) = 0$ for all $i$, $g : [0,1] \to \mathbb{R}$, and $S_n = \sum_{i=1}^{n} g\left(\tfrac{i}{n}\right) X_i$. Construct

$$\sigma^2 = \operatorname{E}(X_1^2) + 2\sum_{i=1}^{\infty} \operatorname{E}(X_1 X_{1+i}).$$

1. If $\sum_{i=1}^{\infty} \operatorname{E}(X_1 X_{1+i})$ is absolutely convergent, $\left|\int_0^1 g(x)g'(x)\,dx\right| < \infty$, and $0 < \int_0^1 (g(x))^2\,dx < \infty$, then $\operatorname{Var}(S_n)/(n\gamma_n) \to \sigma^2$ as $n \to \infty$, where $\gamma_n = \frac{1}{n}\sum_{i=1}^{n} \left(g\left(\tfrac{i}{n}\right)\right)^2$.
2. If in addition $\sigma > 0$ and $S_n/\sqrt{\operatorname{Var}(S_n)}$ converges in distribution to $\mathcal{N}(0,1)$ as $n \to \infty$, then $S_n/(\sigma\sqrt{n\gamma_n})$ also converges in distribution to $\mathcal{N}(0,1)$ as $n \to \infty$.

## Extensions

### Products of positive random variables

The [logarithm](https://en.wikipedia.org/wiki/Logarithm "Logarithm") of a product is simply the sum of the logarithms of the factors. Therefore, when the logarithm of a product of random variables that take only positive values approaches a normal distribution, the product itself approaches a [log-normal distribution](https://en.wikipedia.org/wiki/Log-normal_distribution "Log-normal distribution"). Many physical quantities (especially mass or length, which are a matter of scale and cannot be negative) are the products of different [random](https://en.wikipedia.org/wiki/Random "Random") factors, so they follow a log-normal distribution. This multiplicative version of the central limit theorem is sometimes called [Gibrat's law](https://en.wikipedia.org/wiki/Gibrat%27s_law "Gibrat's law"). Whereas the central limit theorem for sums of random variables requires the condition of finite variance, the corresponding theorem for products requires the corresponding condition that the density function be square-integrable.[34]
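A hedged numerical sketch of this multiplicative version (NumPy + SciPy; the factor distribution and sample sizes are arbitrary choices): multiply many positive i.i.d. factors, confirm that the log of the product is approximately normal, then fit a log-normal to the product itself.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

n = 400                                        # number of positive i.i.d. factors
factors = rng.uniform(0.5, 1.5, size=(50_000, n))
product = factors.prod(axis=1)

# log(product) is a sum of i.i.d. terms, so the CLT applies to it directly:
log_p = np.log(product)
z = (log_p - log_p.mean()) / log_p.std()
print("KS distance of log-product to normal:", stats.kstest(z, "norm").statistic)

# Equivalently, the product itself is approximately log-normal:
shape, loc, scale = stats.lognorm.fit(product, floc=0)
print("fitted log-sd / log-mean:   ", shape, np.log(scale))
print("empirical log-sd / log-mean:", log_p.std(), log_p.mean())
```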
## Beyond the classical framework

Asymptotic normality, that is, [convergence](https://en.wikipedia.org/wiki/Convergence_in_distribution "Convergence in distribution") to the normal distribution after appropriate shift and rescaling, is a phenomenon much more general than the classical framework treated above, namely sums of independent random variables (or vectors). New frameworks are revealed from time to time; no single unifying framework is available for now.

### Convex body

**Theorem** — There exists a sequence $\varepsilon_n \downarrow 0$ for which the following holds. Let $n \geq 1$, and let random variables $X_1, \ldots, X_n$ have a [log-concave](https://en.wikipedia.org/wiki/Logarithmically_concave_function "Logarithmically concave function") [joint density](https://en.wikipedia.org/wiki/Joint_density_function "Joint density function") $f$ such that $f(x_1, \ldots, x_n) = f(|x_1|, \ldots, |x_n|)$ for all $x_1, \ldots, x_n$, and $\operatorname{E}(X_k^2) = 1$ for all $k = 1, \ldots, n$. Then the distribution of

$$\frac{X_1 + \cdots + X_n}{\sqrt{n}}$$

is $\varepsilon_n$-close to $\mathcal{N}(0,1)$ in the [total variation distance](https://en.wikipedia.org/wiki/Total_variation_distance_of_probability_measures "Total variation distance of probability measures").[35]

These two $\varepsilon_n$-close distributions have densities (in fact, log-concave densities); thus, the total variation distance between them is the integral of the absolute value of the difference between the densities. Convergence in total variation is stronger than weak convergence.

An important example of a log-concave density is a function constant inside a given convex body and vanishing outside; it corresponds to the uniform distribution on the convex body, which explains the term "central limit theorem for convex bodies".

Another example: $f(x_1, \ldots, x_n) = \mathrm{const} \cdot \exp\!\big(-(|x_1|^\alpha + \cdots + |x_n|^\alpha)^\beta\big)$ where $\alpha > 1$ and $\alpha\beta > 1$. If $\beta = 1$, then $f(x_1, \ldots, x_n)$ factorizes into $\mathrm{const} \cdot \exp(-|x_1|^\alpha) \cdots \exp(-|x_n|^\alpha)$, which means $X_1, \ldots, X_n$ are independent. In general, however, they are dependent.

The condition $f(x_1, \ldots, x_n) = f(|x_1|, \ldots, |x_n|)$ ensures that $X_1, \ldots, X_n$ are of zero mean and [uncorrelated](https://en.wikipedia.org/wiki/Uncorrelated "Uncorrelated"); still, they need not be independent, nor even [pairwise independent](https://en.wikipedia.org/wiki/Pairwise_independence "Pairwise independence"). By the way, pairwise independence cannot replace independence in the classical central limit theorem[36] (a concrete counterexample is sketched after the next theorem).

Here is a [Berry–Esseen](https://en.wikipedia.org/wiki/Berry%E2%80%93Esseen_theorem "Berry–Esseen theorem") type result.

**Theorem** — Let $X_1, \ldots, X_n$ satisfy the assumptions of the previous theorem; then[37]

$$\left|\mathbb{P}\left(a \leq \frac{X_1 + \cdots + X_n}{\sqrt{n}} \leq b\right) - \frac{1}{\sqrt{2\pi}}\int_a^b e^{-\frac{1}{2}t^2}\,dt\right| \leq \frac{C}{n}$$

for all $a < b$; here $C$ is a [universal (absolute) constant](https://en.wikipedia.org/wiki/Mathematical_constant "Mathematical constant"). Moreover, for every $c_1, \ldots, c_n \in \mathbb{R}$ such that $c_1^2 + \cdots + c_n^2 = 1$,

$$\left|\mathbb{P}\left(a \leq c_1 X_1 + \cdots + c_n X_n \leq b\right) - \frac{1}{\sqrt{2\pi}}\int_a^b e^{-\frac{1}{2}t^2}\,dt\right| \leq C\left(c_1^4 + \cdots + c_n^4\right).$$

The distribution of $(X_1 + \cdots + X_n)/\sqrt{n}$ need not be approximately normal (in fact, it can be uniform).[38] However, the distribution of $c_1 X_1 + \cdots + c_n X_n$ is close to $\mathcal{N}(0,1)$ (in the total variation distance) for most vectors $(c_1, \ldots, c_n)$, according to the uniform distribution on the sphere $c_1^2 + \cdots + c_n^2 = 1$.
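The counterexample mentioned above can be made concrete with the standard subset-product construction (a minimal NumPy sketch; the sizes are arbitrary): products of i.i.d. Rademacher signs over the nonempty subsets of an index set are pairwise independent with zero mean and unit variance, yet their standardized sum stays far from normal.

```python
import numpy as np

rng = np.random.default_rng(3)

m, trials = 10, 20_000
y = rng.choice([-1, 1], size=(trials, m))   # i.i.d. Rademacher signs

# X_S = prod_{i in S} Y_i over the n = 2^m - 1 nonempty subsets S.  Distinct
# X_S are uncorrelated +/-1 variables, hence pairwise independent, but they
# are jointly dependent.  Their sum telescopes: sum_S X_S = prod_i (1 + Y_i) - 1.
n = 2**m - 1
s = (1 + y).prod(axis=1) - 1

z = s / np.sqrt(n)                          # each X_S has mean 0 and variance 1
vals, counts = np.unique(np.round(z, 3), return_counts=True)
print(vals, counts)                         # two atoms (~ -0.03 and ~ 32), nothing like N(0,1)
```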
### Lacunary trigonometric series

**Theorem ([Salem](https://en.wikipedia.org/wiki/Rapha%C3%ABl_Salem "RaphaĂ«l Salem")–[Zygmund](https://en.wikipedia.org/wiki/Antoni_Zygmund "Antoni Zygmund"))** — Let $U$ be a random variable distributed uniformly on $(0, 2\pi)$, and $X_k = r_k \cos(n_k U + a_k)$, where

- the $n_k$ satisfy the lacunarity condition: there exists $q > 1$ such that $n_{k+1} \geq q n_k$ for all $k$,
- the $r_k$ are such that $$r_1^2 + r_2^2 + \cdots = \infty \quad\text{ and }\quad \frac{r_k^2}{r_1^2 + \cdots + r_k^2} \to 0,$$
- $0 \leq a_k < 2\pi$.

Then[39][40]

$$\frac{X_1 + \cdots + X_k}{\sqrt{r_1^2 + \cdots + r_k^2}}$$

converges in distribution to $\mathcal{N}\big(0, \tfrac{1}{2}\big)$.

### Gaussian polytopes

**Theorem** — Let $A_1, \ldots, A_n$ be independent random points on the plane $\mathbb{R}^2$, each having the two-dimensional standard normal distribution. Let $K_n$ be the [convex hull](https://en.wikipedia.org/wiki/Convex_hull "Convex hull") of these points, and $X_n$ the area of $K_n$. Then[41]

$$\frac{X_n - \operatorname{E}(X_n)}{\sqrt{\operatorname{Var}(X_n)}}$$

converges in distribution to $\mathcal{N}(0,1)$ as $n$ tends to infinity.

The same also holds in all dimensions greater than 2. The [polytope](https://en.wikipedia.org/wiki/Convex_polytope "Convex polytope") $K_n$ is called a Gaussian [random polytope](https://en.wikipedia.org/wiki/Random_polytope "Random polytope").
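A quick Monte Carlo check of this statement (a sketch assuming SciPy's `scipy.spatial.ConvexHull`; the point and trial counts are arbitrary): simulate the hull area $X_n$ many times and test the standardized values against $\mathcal{N}(0,1)$.

```python
import numpy as np
from scipy import stats
from scipy.spatial import ConvexHull

rng = np.random.default_rng(4)

n, trials = 500, 2_000
areas = np.empty(trials)
for t in range(trials):
    points = rng.standard_normal((n, 2))      # n Gaussian points in the plane
    areas[t] = ConvexHull(points).volume      # in 2D, .volume is the enclosed area

z = (areas - areas.mean()) / areas.std()
print(stats.kstest(z, "norm").statistic)      # small => approximately N(0,1)
```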
A similar result holds for the number of vertices (of the Gaussian polytope), the number of edges, and in fact faces of all dimensions.[42]

### Linear functions of orthogonal matrices

A linear function of a matrix $\mathbf{M}$ is a linear combination of its elements (with given coefficients), $\mathbf{M} \mapsto \operatorname{tr}(\mathbf{AM})$, where $\mathbf{A}$ is the matrix of the coefficients; see [Trace (linear algebra)#Inner product](https://en.wikipedia.org/wiki/Trace_\(linear_algebra\)#Inner_product "Trace (linear algebra)").

A random [orthogonal matrix](https://en.wikipedia.org/wiki/Orthogonal_matrix "Orthogonal matrix") is said to be distributed uniformly if its distribution is the normalized [Haar measure](https://en.wikipedia.org/wiki/Haar_measure "Haar measure") on the [orthogonal group](https://en.wikipedia.org/wiki/Orthogonal_group "Orthogonal group") $O(n, \mathbb{R})$; see [Rotation matrix#Uniform random rotation matrices](https://en.wikipedia.org/wiki/Rotation_matrix#Uniform_random_rotation_matrices "Rotation matrix").

**Theorem** — Let $\mathbf{M}$ be a random orthogonal $n \times n$ matrix distributed uniformly, and $\mathbf{A}$ a fixed $n \times n$ matrix such that $\operatorname{tr}(\mathbf{AA}^*) = n$, and let $X = \operatorname{tr}(\mathbf{AM})$. Then[43] the distribution of $X$ is close to $\mathcal{N}(0,1)$ in the total variation metric up to $\frac{2\sqrt{3}}{n-1}$.
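A hedged sketch of this theorem using SciPy's `ortho_group` (taking $\mathbf{A} = I_n$, which satisfies $\operatorname{tr}(\mathbf{AA}^*) = n$; the dimension and trial count are arbitrary choices):

```python
import numpy as np
from scipy import stats
from scipy.stats import ortho_group

rng = np.random.default_rng(5)

n, trials = 30, 5_000
a = np.eye(n)                          # tr(A A*) = n, as the theorem requires
x = np.empty(trials)
for t in range(trials):
    m = ortho_group.rvs(dim=n, random_state=rng)   # Haar-uniform orthogonal matrix
    x[t] = np.trace(a @ m)             # X = tr(AM); here simply the trace of M

print(stats.kstest(x, "norm").statistic)   # close to 0 => approximately N(0,1)
```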
### Subsequences

**Theorem** — Let random variables $X_1, X_2, \ldots \in L_2(\Omega)$ be such that $X_n \to 0$ [weakly](https://en.wikipedia.org/wiki/Weak_convergence_\(Hilbert_space\) "Weak convergence (Hilbert space)") in $L_2(\Omega)$ and $X_n^2 \to 1$ weakly in $L_1(\Omega)$. Then there exist integers $n_1 < n_2 < \cdots$ such that

$$\frac{X_{n_1} + \cdots + X_{n_k}}{\sqrt{k}}$$

converges in distribution to $\mathcal{N}(0,1)$ as $k$ tends to infinity.[44]

### Random walk on a crystal lattice

The central limit theorem may be established for the simple [random walk](https://en.wikipedia.org/wiki/Random_walk "Random walk") on a crystal lattice (an infinite-fold abelian covering graph over a finite graph), and is used for the design of crystal structures.[45][46]

## Applications and examples

A simple example of the central limit theorem is rolling many identical, unbiased dice. The distribution of the sum (or average) of the rolled numbers will be well approximated by a normal distribution (simulated in the sketch below). Since real-world quantities are often the balanced sum of many unobserved random events, the central limit theorem also provides a partial explanation for the prevalence of the normal probability distribution. It also justifies the approximation of large-sample [statistics](https://en.wikipedia.org/wiki/Statistic "Statistic") to the normal distribution in controlled experiments.
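A minimal simulation of the dice example (NumPy-only; the dice and trial counts are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)

n, trials = 100, 100_000
rolls = rng.integers(1, 7, size=(trials, n))   # n fair six-sided dice per trial
sums = rolls.sum(axis=1)

# Standardize with the exact die mean 3.5 and variance 35/12:
z = (sums - n * 3.5) / np.sqrt(n * 35 / 12)
hist, edges = np.histogram(z, bins=50, density=True)
centers = (edges[:-1] + edges[1:]) / 2
gauss = np.exp(-centers**2 / 2) / np.sqrt(2 * np.pi)
print(f"max |histogram - normal pdf| = {np.abs(hist - gauss).max():.4f}")
```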
[![](https://upload.wikimedia.org/wikipedia/commons/thumb/8/8c/Dice_sum_central_limit_theorem.svg/330px-Dice_sum_central_limit_theorem.svg.png)](https://en.wikipedia.org/wiki/File:Dice_sum_central_limit_theorem.svg)

*Comparison of probability density functions $p(k)$ for the sum of $n$ fair 6-sided dice, showing their convergence to a normal distribution with increasing $n$, in accordance with the central limit theorem. In the bottom-right graph, smoothed profiles of the previous graphs are rescaled, superimposed and compared with a normal distribution (black curve).*

[![](https://upload.wikimedia.org/wikipedia/commons/thumb/2/2d/Empirical_CLT_-_Figure_-_040711.jpg/500px-Empirical_CLT_-_Figure_-_040711.jpg)](https://en.wikipedia.org/wiki/File:Empirical_CLT_-_Figure_-_040711.jpg)

*This figure demonstrates the central limit theorem. The sample means are generated using a random number generator, which draws numbers between 0 and 100 from a uniform probability distribution. It illustrates that increasing sample sizes result in the 500 measured sample means being more closely distributed about the population mean (50 in this case). It also compares the observed distributions with the distributions that would be expected for a normalized Gaussian distribution, and shows the [chi-squared](https://en.wikipedia.org/wiki/Pearson%27s_chi-squared_test "Pearson's chi-squared test") values that quantify the goodness of the fit (the fit is good if the reduced chi-squared value is less than or approximately equal to one). The input into the normalized Gaussian function is the mean of sample means (≈50) and the mean sample standard deviation divided by the square root of the sample size (≈28.87/√n), which is called the standard deviation of the mean (since it refers to the spread of sample means).*

[![](https://upload.wikimedia.org/wikipedia/commons/thumb/7/75/Mean-of-the-outcomes-of-rolling-a-fair-coin-n-times.svg/960px-Mean-of-the-outcomes-of-rolling-a-fair-coin-n-times.svg.png)](https://en.wikipedia.org/wiki/File:Mean-of-the-outcomes-of-rolling-a-fair-coin-n-times.svg)

*Another simulation using the binomial distribution. Random 0s and 1s were generated, and then their means calculated for sample sizes ranging from 1 to 2048. As the sample size increases, the tails become thinner and the distribution becomes more concentrated around the mean.*

### Regression

[Regression analysis](https://en.wikipedia.org/wiki/Regression_analysis "Regression analysis"), and in particular [ordinary least squares](https://en.wikipedia.org/wiki/Ordinary_least_squares "Ordinary least squares"), specifies that a [dependent variable](https://en.wikipedia.org/wiki/Dependent_variable "Dependent variable") depends according to some function upon one or more [independent variables](https://en.wikipedia.org/wiki/Independent_variable "Independent variable"), with an additive [error term](https://en.wikipedia.org/wiki/Errors_and_residuals_in_statistics "Errors and residuals in statistics"). Various types of statistical inference on the regression assume that the error term is normally distributed. This assumption can be justified by assuming that the error term is actually the sum of many independent error terms; even if the individual error terms are not normally distributed, by the central limit theorem their sum can be well approximated by a normal distribution (see the sketch below).

### Other illustrations

Main article: [Illustration of the central limit theorem](https://en.wikipedia.org/wiki/Illustration_of_the_central_limit_theorem "Illustration of the central limit theorem")

Given its importance to statistics, a number of papers and computer packages are available that demonstrate the convergence involved in the central limit theorem.[47]
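As one such illustration, the sketch below (hypothetical names and sizes, NumPy + SciPy) builds the regression error term described above as a sum of many skewed shocks and checks that the ordinary least squares residuals are nevertheless close to normal:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

m = 10_000                                  # observations
x = rng.uniform(0, 10, size=m)

# Each error is the sum of 50 independent, skewed, zero-mean shocks, so by
# the CLT the composite error term is close to normal even though no single
# shock is.
shocks = rng.exponential(scale=0.1, size=(m, 50)) - 0.1
eps = shocks.sum(axis=1)

y = 2.0 + 0.5 * x + eps                     # linear model with composite errors

slope, intercept = np.polyfit(x, y, 1)      # ordinary least squares fit
residuals = y - (intercept + slope * x)
z = (residuals - residuals.mean()) / residuals.std()
print(stats.kstest(z, "norm").statistic)    # small => residuals approximately normal
```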
## History

Dutch mathematician [Henk Tijms](https://en.wikipedia.org/wiki/Henk_Tijms "Henk Tijms") writes:[48]

> The central limit theorem has an interesting history. The first version of this theorem was postulated by the French-born mathematician [Abraham de Moivre](https://en.wikipedia.org/wiki/Abraham_de_Moivre "Abraham de Moivre") who, in a remarkable article published in 1733, used the normal distribution to approximate the distribution of the number of heads resulting from many tosses of a fair coin. This finding was far ahead of its time, and was nearly forgotten until the famous French mathematician [Pierre-Simon Laplace](https://en.wikipedia.org/wiki/Pierre-Simon_Laplace "Pierre-Simon Laplace") rescued it from obscurity in his monumental work *ThĂ©orie analytique des probabilitĂ©s*, which was published in 1812. Laplace expanded De Moivre's finding by approximating the binomial distribution with the normal distribution. But as with De Moivre, Laplace's finding received little attention in his own time. It was not until the nineteenth century was at an end that the importance of the central limit theorem was discerned, when, in 1901, Russian mathematician [Aleksandr Lyapunov](https://en.wikipedia.org/wiki/Aleksandr_Lyapunov "Aleksandr Lyapunov") defined it in general terms and proved precisely how it worked mathematically. Nowadays, the central limit theorem is considered to be the unofficial sovereign of probability theory.

Sir [Francis Galton](https://en.wikipedia.org/wiki/Francis_Galton "Francis Galton") described the central limit theorem in this way:[49]

> I know of scarcely anything so apt to impress the imagination as the wonderful form of cosmic order expressed by the "Law of Frequency of Error". The law would have been personified by the Greeks and deified, if they had known of it. It reigns with serenity and in complete self-effacement, amidst the wildest confusion. The huger the mob, and the greater the apparent anarchy, the more perfect is its sway. It is the supreme law of Unreason. Whenever a large sample of chaotic elements are taken in hand and marshalled in the order of their magnitude, an unsuspected and most beautiful form of regularity proves to have been latent all along.

The actual term "central limit theorem" (in German: "zentraler Grenzwertsatz") was first used by [George PĂłlya](https://en.wikipedia.org/wiki/George_P%C3%B3lya "George PĂłlya") in 1920 in the title of a paper.[50][51] PĂłlya referred to the theorem as "central" due to its importance in probability theory. According to Le Cam, the French school of probability interprets the word *central* in the sense that "it describes the behaviour of the centre of the distribution as opposed to its tails".[51] The abstract of the 1920 paper by PĂłlya, *On the central limit theorem of calculus of probability and the problem of moments*,[50] translates as follows.

> The occurrence of the Gaussian probability density $e^{-x^2}$ in repeated experiments, in errors of measurements, which result in the combination of very many and very small elementary errors, in diffusion processes etc., can be explained, as is well-known, by the very same limit theorem, which plays a central role in the calculus of probability.
> The actual discoverer of this limit theorem is to be named Laplace; it is likely that its rigorous proof was first given by Tschebyscheff and its sharpest formulation can be found, as far as I am aware of, in an article by [Liapounoff](https://en.wikipedia.org/wiki/Aleksandr_Lyapunov "Aleksandr Lyapunov"). ...

A thorough account of the theorem's history, detailing Laplace's foundational work, as well as [Cauchy](https://en.wikipedia.org/wiki/Augustin-Louis_Cauchy "Augustin-Louis Cauchy")'s, [Bessel](https://en.wikipedia.org/wiki/Friedrich_Bessel "Friedrich Bessel")'s and [Poisson](https://en.wikipedia.org/wiki/Sim%C3%A9on_Denis_Poisson "SimĂ©on Denis Poisson")'s contributions, is provided by Hald.[52] Two historical accounts, one covering the development from Laplace to Cauchy, the second covering the contributions by [von Mises](https://en.wikipedia.org/wiki/Richard_von_Mises "Richard von Mises"), [PĂłlya](https://en.wikipedia.org/wiki/George_P%C3%B3lya "George PĂłlya"), [Lindeberg](https://en.wikipedia.org/wiki/Jarl_Waldemar_Lindeberg "Jarl Waldemar Lindeberg"), [LĂ©vy](https://en.wikipedia.org/wiki/Paul_L%C3%A9vy_\(mathematician\) "Paul LĂ©vy (mathematician)"), and [CramĂ©r](https://en.wikipedia.org/wiki/Harald_Cram%C3%A9r "Harald CramĂ©r") during the 1920s, are given by Hans Fischer.[53] Le Cam describes a period around 1935.[51] Bernstein[54] presents a historical discussion focusing on the work of [Pafnuty Chebyshev](https://en.wikipedia.org/wiki/Pafnuty_Chebyshev "Pafnuty Chebyshev") and his students [Andrey Markov](https://en.wikipedia.org/wiki/Andrey_Markov "Andrey Markov") and [Aleksandr Lyapunov](https://en.wikipedia.org/wiki/Aleksandr_Lyapunov "Aleksandr Lyapunov") that led to the first proofs of the CLT in a general setting.

A curious footnote to the history of the central limit theorem is that a proof of a result similar to the 1922 Lindeberg CLT was the subject of [Alan Turing](https://en.wikipedia.org/wiki/Alan_Turing "Alan Turing")'s 1934 fellowship dissertation for [King's College](https://en.wikipedia.org/wiki/King%27s_College,_Cambridge "King's College, Cambridge") at the [University of Cambridge](https://en.wikipedia.org/wiki/University_of_Cambridge "University of Cambridge"). Only after submitting the work did Turing learn it had already been proved. Consequently, Turing's dissertation was not published.[55]

## See also

- [Asymptotic equipartition property](https://en.wikipedia.org/wiki/Asymptotic_equipartition_property "Asymptotic equipartition property")
- [Asymptotic distribution](https://en.wikipedia.org/wiki/Asymptotic_distribution "Asymptotic distribution")
- [Bates distribution](https://en.wikipedia.org/wiki/Bates_distribution "Bates distribution")
- [Benford's law](https://en.wikipedia.org/wiki/Benford%27s_law "Benford's law") – result of an extension of the CLT to products of random variables
- [Berry–Esseen theorem](https://en.wikipedia.org/wiki/Berry%E2%80%93Esseen_theorem "Berry–Esseen theorem")
- [Central limit theorem for directional statistics](https://en.wikipedia.org/wiki/Central_limit_theorem_for_directional_statistics "Central limit theorem for directional statistics") – central limit theorem applied to the case of directional statistics
- [Delta method](https://en.wikipedia.org/wiki/Delta_method "Delta method") – to compute the limit distribution of a function of a random variable
- [ErdƑs–Kac theorem](https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Kac_theorem "ErdƑs–Kac theorem") – connects the number of prime factors of an integer with the normal probability distribution
- [Fisher–Tippett–Gnedenko theorem](https://en.wikipedia.org/wiki/Fisher%E2%80%93Tippett%E2%80%93Gnedenko_theorem "Fisher–Tippett–Gnedenko theorem") – limit theorem for extreme values (such as $\max\{X_n\}$)
- [Irwin–Hall distribution](https://en.wikipedia.org/wiki/Irwin%E2%80%93Hall_distribution "Irwin–Hall distribution")
- [Markov chain central limit theorem](https://en.wikipedia.org/wiki/Markov_chain_central_limit_theorem "Markov chain central limit theorem")
- [Normal distribution](https://en.wikipedia.org/wiki/Normal_distribution "Normal distribution")
- [Tweedie convergence theorem](https://en.wikipedia.org/wiki/Tweedie_distribution "Tweedie distribution") – a theorem that can be considered to bridge between the central limit theorem and the [Poisson convergence theorem](https://en.wikipedia.org/wiki/Poisson_convergence_theorem "Poisson convergence theorem")[56]
- [Donsker's theorem](https://en.wikipedia.org/wiki/Donsker%27s_theorem "Donsker's theorem")

## Notes

1. [Fischer (2011)](https://en.wikipedia.org/wiki/Central_limit_theorem#CITEREFFischer2011), p. [page needed].
2. Montgomery, Douglas C.; Runger, George C. (2014). *Applied Statistics and Probability for Engineers* (6th ed.). Wiley. p. 241. ISBN 9781118539712.
3. Rouaud, Mathieu (2013). [*Probability, Statistics and Estimation*](http://www.incertitudes.fr/book.pdf) (PDF). p. 10. [Archived](https://ghostarchive.org/archive/20221009/http://www.incertitudes.fr/book.pdf) (PDF) from the original on 2022-10-09.
4. [Billingsley (1995)](https://en.wikipedia.org/wiki/Central_limit_theorem#CITEREFBillingsley1995), p. 357.
5. [Bauer (2001)](https://en.wikipedia.org/wiki/Central_limit_theorem#CITEREFBauer2001), p. 199, Theorem 30.13.
6. [Billingsley (1995)](https://en.wikipedia.org/wiki/Central_limit_theorem#CITEREFBillingsley1995), p. 362.
7. Robbins, Herbert (1948). ["The asymptotic distribution of the sum of a random number of random variables"](https://projecteuclid.org/journals/bulletin-of-the-american-mathematical-society/volume-54/issue-12/The-asymptotic-distribution-of-the-sum-of-a-random-number/bams/1183513324.full). *Bulletin of the American Mathematical Society*. **54** (12): 1151–1161. doi:[10.1090/S0002-9904-1948-09142-X](https://doi.org/10.1090%2FS0002-9904-1948-09142-X).
8. Chen, Louis H. Y.; Goldstein, Larry; Shao, Qi-Man (2011). *Normal Approximation by Stein's Method*. Berlin, Heidelberg: Springer-Verlag. pp. 270–271.
9. van der Vaart, A. W. (1998). *Asymptotic Statistics*. New York, NY: Cambridge University Press. ISBN 978-0-521-49603-2. LCCN [98015176](https://lccn.loc.gov/98015176).
10. [O'Donnell, Ryan](https://en.wikipedia.org/wiki/Ryan_O%27Donnell_\(computer_scientist\) "Ryan O'Donnell (computer scientist)") (2014). ["Theorem 5.38"](https://web.archive.org/web/20190408054104/http://www.contrib.andrew.cmu.edu/~ryanod/?p=866). Archived from [the original](http://www.contrib.andrew.cmu.edu/~ryanod/?p=866) on 2019-04-08. Retrieved 2017-10-18.
11. Bentkus, V. (2005). "A Lyapunov-type bound in $\mathbb{R}^d$". *Theory of Probability and Its Applications*. **49** (2): 311–323. doi:[10.1137/S0040585X97981123](https://doi.org/10.1137%2FS0040585X97981123).
12. Le Cam, L. (February 1986). "The Central Limit Theorem around 1935". *Statistical Science*. **1** (1): 78–91. JSTOR [2245503](https://www.jstor.org/stable/2245503).
13. LĂ©vy, Paul (1937). *ThĂ©orie de l'addition des variables alĂ©atoires* [*Theory of the addition of random variables*] (in French). Paris: Gauthier-Villars.
14. Gnedenko, Boris Vladimirovich; Kolmogorov, Andrey Nikolaevich; Doob, Joseph L.; Hsu, Pao-Lu (1968). *Limit Distributions for Sums of Independent Random Variables*. Reading, MA: Addison-Wesley.
15. Nolan, John P. (2020). [*Univariate Stable Distributions, Models for Heavy Tailed Data*](https://doi.org/10.1007/978-3-030-52915-4). Springer Series in Operations Research and Financial Engineering. Switzerland: Springer. doi:[10.1007/978-3-030-52915-4](https://doi.org/10.1007%2F978-3-030-52915-4). ISBN 978-3-030-52914-7. S2CID [226648987](https://api.semanticscholar.org/CorpusID:226648987).
16. [Billingsley (1995)](https://en.wikipedia.org/wiki/Central_limit_theorem#CITEREFBillingsley1995), Theorem 27.4.
17. [Durrett (2004)](https://en.wikipedia.org/wiki/Central_limit_theorem#CITEREFDurrett2004), Sect. 7.7(c), Theorem 7.8.
18. [Durrett (2004)](https://en.wikipedia.org/wiki/Central_limit_theorem#CITEREFDurrett2004), Sect. 7.7, Theorem 7.4.
19. [Billingsley (1995)](https://en.wikipedia.org/wiki/Central_limit_theorem#CITEREFBillingsley1995), Theorem 35.12.
20. Lemons, Don (2003). [*An Introduction to Stochastic Processes in Physics*](https://jhupbooks.press.jhu.edu/content/introduction-stochastic-processes-physics). Johns Hopkins University Press. doi:[10.56021/9780801868665](https://doi.org/10.56021%2F9780801868665). ISBN 9780801876387. Retrieved 2016-08-11.
21. [Stein, C.](https://en.wikipedia.org/wiki/Charles_Stein_\(statistician\) "Charles Stein (statistician)") (1972). ["A bound for the error in the normal approximation to the distribution of a sum of dependent random variables"](https://projecteuclid.org/euclid.bsmsp/1200514239). *Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability*. **6** (2): 583–602. MR [0402873](https://mathscinet.ams.org/mathscinet-getitem?mr=0402873). Zbl [0278.60026](https://zbmath.org/?format=complete&q=an:0278.60026).
22. Chen, L. H. Y.; Goldstein, L.; Shao, Q. M. (2011). *Normal Approximation by Stein's Method*. Springer. ISBN 978-3-642-15006-7.
[ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-3-642-15006-7](https://en.wikipedia.org/wiki/Special:BookSources/978-3-642-15006-7 "Special:BookSources/978-3-642-15006-7") . 23. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-ABBN_23-0)** [Artstein, S.](https://en.wikipedia.org/wiki/Shiri_Artstein "Shiri Artstein"); [Ball, K.](https://en.wikipedia.org/wiki/Keith_Martin_Ball "Keith Martin Ball"); [Barthe, F.](https://en.wikipedia.org/wiki/Franck_Barthe "Franck Barthe"); [Naor, A.](https://en.wikipedia.org/wiki/Assaf_Naor "Assaf Naor") (2004). ["Solution of Shannon's Problem on the Monotonicity of Entropy"](https://doi.org/10.1090%2FS0894-0347-04-00459-X). *Journal of the American Mathematical Society*. **17** (4): 975–982\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1090/S0894-0347-04-00459-X](https://doi.org/10.1090%2FS0894-0347-04-00459-X). 24. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-24)** Brewer, J. K. (1985). "Behavioral statistics textbooks: Source of myths and misconceptions?". *Journal of Educational Statistics*. **10** (3): 252–268\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.3102/10769986010003252](https://doi.org/10.3102%2F10769986010003252). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [119611584](https://api.semanticscholar.org/CorpusID:119611584). 25. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-25)** Yu, C.; Behrens, J.; Spencer, A. Identification of Misconception in the Central Limit Theorem and Related Concepts, *American Educational Research Association* lecture 19 April 1995 26. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-26)** Sotos, A. E. C.; Vanhoof, S.; Van den Noortgate, W.; Onghena, P. (2007). ["Students' misconceptions of statistical inference: A review of the empirical evidence from research on statistics education"](https://lirias.kuleuven.be/handle/123456789/136347). *Educational Research Review*. **2** (2): 98–113\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1016/j.edurev.2007.04.001](https://doi.org/10.1016%2Fj.edurev.2007.04.001). 27. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-27)** ["Sampling distribution of the sample mean"](https://web.archive.org/web/20230602200310/https://www.khanacademy.org/math/statistics-probability/sampling-distributions-library/sample-means/v/sampling-distribution-of-the-sample-mean). *Khan Academy*. 2 June 2023. Archived from [the original](https://www.khanacademy.org/math/statistics-probability/sampling-distributions-library/sample-means/v/sampling-distribution-of-the-sample-mean) (video) on 2023-06-02. Retrieved 2023-10-08. 28. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-28)** Rosenthal, Jeffrey Seth (2000). *A First Look at Rigorous Probability Theory*. World Scientific. Theorem 5.3.4, p. 47. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [981-02-4322-7](https://en.wikipedia.org/wiki/Special:BookSources/981-02-4322-7 "Special:BookSources/981-02-4322-7") . 29. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-29)** Johnson, Oliver Thomas (2004). *Information Theory and the Central Limit Theorem*. Imperial College Press. p. 88. 
[ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [1-86094-473-6](https://en.wikipedia.org/wiki/Special:BookSources/1-86094-473-6 "Special:BookSources/1-86094-473-6") . 30. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-Uchaikin_30-0)** Uchaikin, Vladimir V.; Zolotarev, V.M. (1999). *Chance and Stability: Stable distributions and their applications*. VSP. pp. 61–62\. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [90-6764-301-7](https://en.wikipedia.org/wiki/Special:BookSources/90-6764-301-7 "Special:BookSources/90-6764-301-7") . 31. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-31)** Borodin, A. N.; Ibragimov, I. A.; Sudakov, V. N. (1995). *Limit Theorems for Functionals of Random Walks*. AMS Bookstore. Theorem 1.1, p. 8. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [0-8218-0438-3](https://en.wikipedia.org/wiki/Special:BookSources/0-8218-0438-3 "Special:BookSources/0-8218-0438-3") . 32. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-32)** Petrov, V. V. (1976). [*Sums of Independent Random Variables*](https://books.google.com/books?id=zSDqCAAAQBAJ). New York-Heidelberg: Springer-Verlag. ch. 7. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [9783642658099](https://en.wikipedia.org/wiki/Special:BookSources/9783642658099 "Special:BookSources/9783642658099") . 33. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-33)** Hew, Patrick Chisan (2017). "Asymptotic distribution of rewards accumulated by alternating renewal processes". *Statistics and Probability Letters*. **129**: 355–359\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1016/j.spl.2017.06.027](https://doi.org/10.1016%2Fj.spl.2017.06.027). 34. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-Rempala_34-0)** Rempala, G.; Wesolowski, J. (2002). ["Asymptotics of products of sums and *U*\-statistics"](https://projecteuclid.org/journals/electronic-communications-in-probability/volume-7/issue-none/Asymptotics-for-Products-of-Sums-and-U-statistics/10.1214/ECP.v7-1046.pdf) (PDF). *Electronic Communications in Probability*. **7**: 47–54\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1214/ecp.v7-1046](https://doi.org/10.1214%2Fecp.v7-1046). 35. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-FOOTNOTEKlartag2007Theorem_1.2_35-0)** [Klartag (2007)](https://en.wikipedia.org/wiki/Central_limit_theorem#CITEREFKlartag2007), Theorem 1.2. 36. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-FOOTNOTEDurrett2004Section_2.4,_Example_4.5_36-0)** [Durrett (2004)](https://en.wikipedia.org/wiki/Central_limit_theorem#CITEREFDurrett2004), Section 2.4, Example 4.5. 37. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-FOOTNOTEKlartag2008Theorem_1_37-0)** [Klartag (2008)](https://en.wikipedia.org/wiki/Central_limit_theorem#CITEREFKlartag2008), Theorem 1. 38. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-FOOTNOTEKlartag2007Theorem_1.1_38-0)** [Klartag (2007)](https://en.wikipedia.org/wiki/Central_limit_theorem#CITEREFKlartag2007), Theorem 1.1. 39. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-Zygmund_39-0)** [Zygmund, Antoni](https://en.wikipedia.org/wiki/Antoni_Zygmund "Antoni Zygmund") (2003) \[1959\]. 
[*Trigonometric Series*](https://en.wikipedia.org/wiki/Trigonometric_Series "Trigonometric Series"). Cambridge University Press. vol. II, sect. XVI.5, Theorem 5-5. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [0-521-89053-5](https://en.wikipedia.org/wiki/Special:BookSources/0-521-89053-5 "Special:BookSources/0-521-89053-5") . 40. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-FOOTNOTEGaposhkin1966Theorem_2.1.13_40-0)** [Gaposhkin (1966)](https://en.wikipedia.org/wiki/Central_limit_theorem#CITEREFGaposhkin1966), Theorem 2.1.13. 41. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-FOOTNOTEB%C3%A1r%C3%A1nyVu2007Theorem_1.1_41-0)** [BĂĄrĂĄny & Vu (2007)](https://en.wikipedia.org/wiki/Central_limit_theorem#CITEREFB%C3%A1r%C3%A1nyVu2007), Theorem 1.1. 42. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-FOOTNOTEB%C3%A1r%C3%A1nyVu2007Theorem_1.2_42-0)** [BĂĄrĂĄny & Vu (2007)](https://en.wikipedia.org/wiki/Central_limit_theorem#CITEREFB%C3%A1r%C3%A1nyVu2007), Theorem 1.2. 43. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-Meckes_43-0)** [Meckes, Elizabeth](https://en.wikipedia.org/wiki/Elizabeth_Meckes "Elizabeth Meckes") (2008). "Linear functions on the classical matrix groups". *Transactions of the American Mathematical Society*. **360** (10): 5355–5366\. [arXiv](https://en.wikipedia.org/wiki/ArXiv_\(identifier\) "ArXiv (identifier)"):[math/0509441](https://arxiv.org/abs/math/0509441). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1090/S0002-9947-08-04444-9](https://doi.org/10.1090%2FS0002-9947-08-04444-9). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [11981408](https://api.semanticscholar.org/CorpusID:11981408). 44. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-FOOTNOTEGaposhkin1966Sect._1.5_44-0)** [Gaposhkin (1966)](https://en.wikipedia.org/wiki/Central_limit_theorem#CITEREFGaposhkin1966), Sect. 1.5. 45. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-45)** Kotani, M.; [Sunada, Toshikazu](https://en.wikipedia.org/wiki/Toshikazu_Sunada "Toshikazu Sunada") (2003). *Spectral geometry of crystal lattices*. Vol. 338. Contemporary Math. pp. 271–305\. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-8218-4269-0](https://en.wikipedia.org/wiki/Special:BookSources/978-0-8218-4269-0 "Special:BookSources/978-0-8218-4269-0") . 46. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-46)** [Sunada, Toshikazu](https://en.wikipedia.org/wiki/Toshikazu_Sunada "Toshikazu Sunada") (2012). *Topological Crystallography – With a View Towards Discrete Geometric Analysis*. Surveys and Tutorials in the Applied Mathematical Sciences. Vol. 6. Springer. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-4-431-54177-6](https://en.wikipedia.org/wiki/Special:BookSources/978-4-431-54177-6 "Special:BookSources/978-4-431-54177-6") . 47. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-Marasinghe_47-0)** Marasinghe, M.; Meeker, W.; Cook, D.; Shin, T. S. (August 1994). *Using graphics and simulation to teach statistical concepts*. Annual meeting of the American Statistician Association, Toronto, Canada. 48. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-Tijms_48-0)** Henk, Tijms (2004). *Understanding Probability: Chance Rules in Everyday Life*. Cambridge: Cambridge University Press. p. 169. 
[ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [0-521-54036-4](https://en.wikipedia.org/wiki/Special:BookSources/0-521-54036-4 "Special:BookSources/0-521-54036-4") . 49. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-49)** Galton, F. (1889). [*Natural Inheritance*](https://galton.org/cgi-bin/searchImages/galton/search/books/natural-inheritance/pages/natural-inheritance_0073.htm). p. 66. 50. ^ [***a***](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-Polya1920_50-0) [***b***](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-Polya1920_50-1) [PĂłlya, George](https://en.wikipedia.org/wiki/George_P%C3%B3lya "George PĂłlya") (1920). ["Über den zentralen Grenzwertsatz der Wahrscheinlichkeitsrechnung und das Momentenproblem"](https://www-gdz.sub.uni-goettingen.de/cgi-bin/digbib.cgi?PPN266833020_0008) \[On the central limit theorem of probability calculation and the problem of moments\]. *[Mathematische Zeitschrift](https://en.wikipedia.org/wiki/Mathematische_Zeitschrift "Mathematische Zeitschrift")* (in German). **8** (3–4\): 171–181\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1007/BF01206525](https://doi.org/10.1007%2FBF01206525). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [123063388](https://api.semanticscholar.org/CorpusID:123063388). 51. ^ [***a***](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-LC1986_51-0) [***b***](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-LC1986_51-1) [***c***](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-LC1986_51-2) [Le Cam, Lucien](https://en.wikipedia.org/wiki/Lucien_Le_Cam "Lucien Le Cam") (1986). ["The central limit theorem around 1935"](http://projecteuclid.org/euclid.ss/1177013818). *Statistical Science*. **1** (1): 78–91\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1214/ss/1177013818](https://doi.org/10.1214%2Fss%2F1177013818). 52. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-Hald_52-0)** Hald, Andreas (22 April 1998). [*A History of Mathematical Statistics from 1750 to 1930*](http://www.gbv.de/dms/goettingen/229762905.pdf) (PDF). Wiley. chapter 17. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0471179122](https://en.wikipedia.org/wiki/Special:BookSources/978-0471179122 "Special:BookSources/978-0471179122") . [Archived](https://ghostarchive.org/archive/20221009/http://www.gbv.de/dms/goettingen/229762905.pdf) (PDF) from the original on 2022-10-09. 53. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-FOOTNOTEFischer2011Chapter_2;_Chapter_5.2_53-0)** [Fischer (2011)](https://en.wikipedia.org/wiki/Central_limit_theorem#CITEREFFischer2011), Chapter 2; Chapter 5.2. 54. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-Bernstein_54-0)** [Bernstein, S. N.](https://en.wikipedia.org/wiki/Sergei_Natanovich_Bernstein "Sergei Natanovich Bernstein") (1945). "On the work of P. L. Chebyshev in Probability Theory". In Bernstein., S. N. (ed.). *Nauchnoe Nasledie P. L. Chebysheva. Vypusk Pervyi: Matematika* \[*The Scientific Legacy of P. L. Chebyshev. Part I: Mathematics*\] (in Russian). Moscow & Leningrad: Academiya Nauk SSSR. p. 174. 55. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-55)** Zabell, S. L. (1995). "Alan Turing and the Central Limit Theorem". *American Mathematical Monthly*. **102** (6): 483–494\. 
[doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1080/00029890.1995.12004608](https://doi.org/10.1080%2F00029890.1995.12004608). 56. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-J%C3%B8rgensen-1997_56-0)** JĂžrgensen, Bent (1997). *The Theory of Dispersion Models*. Chapman & Hall. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0412997112](https://en.wikipedia.org/wiki/Special:BookSources/978-0412997112 "Special:BookSources/978-0412997112") . ## References \[[edit](https://en.wikipedia.org/w/index.php?title=Central_limit_theorem&action=edit&section=35 "Edit section: References")\] - [BĂĄrĂĄny, Imre](https://en.wikipedia.org/wiki/Imre_B%C3%A1r%C3%A1ny "Imre BĂĄrĂĄny"); Vu, Van (2007). "Central limit theorems for Gaussian polytopes". *Annals of Probability*. **35** (4). Institute of Mathematical Statistics: 1593–1621\. [arXiv](https://en.wikipedia.org/wiki/ArXiv_\(identifier\) "ArXiv (identifier)"):[math/0610192](https://arxiv.org/abs/math/0610192). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1214/009117906000000791](https://doi.org/10.1214%2F009117906000000791). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [9128253](https://api.semanticscholar.org/CorpusID:9128253). - Bauer, Heinz (2001). *Measure and Integration Theory*. Berlin: de Gruyter. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [3110167190](https://en.wikipedia.org/wiki/Special:BookSources/3110167190 "Special:BookSources/3110167190") . - Billingsley, Patrick (1995). *Probability and Measure* (3rd ed.). John Wiley & Sons. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [0-471-00710-2](https://en.wikipedia.org/wiki/Special:BookSources/0-471-00710-2 "Special:BookSources/0-471-00710-2") . - Bradley, Richard (2005). "Basic Properties of Strong Mixing Conditions. A Survey and Some Open Questions". *Probability Surveys*. **2**: 107–144\. [arXiv](https://en.wikipedia.org/wiki/ArXiv_\(identifier\) "ArXiv (identifier)"):[math/0511078](https://arxiv.org/abs/math/0511078). [Bibcode](https://en.wikipedia.org/wiki/Bibcode_\(identifier\) "Bibcode (identifier)"):[2005math.....11078B](https://ui.adsabs.harvard.edu/abs/2005math.....11078B). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1214/154957805100000104](https://doi.org/10.1214%2F154957805100000104). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [8395267](https://api.semanticscholar.org/CorpusID:8395267). - Bradley, Richard (2007). *Introduction to Strong Mixing Conditions* (1st ed.). Heber City, UT: Kendrick Press. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-9740427-9-4](https://en.wikipedia.org/wiki/Special:BookSources/978-0-9740427-9-4 "Special:BookSources/978-0-9740427-9-4") . - Dinov, Ivo; Christou, Nicolas; Sanchez, Juana (2008). ["Central Limit Theorem: New SOCR Applet and Demonstration Activity"](https://web.archive.org/web/20160303185802/http://www.amstat.org/publications/jse/v16n2/dinov.html). *Journal of Statistics Education*. **16** (2). ASA: 1–15\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1080/10691898.2008.11889560](https://doi.org/10.1080%2F10691898.2008.11889560). [PMC](https://en.wikipedia.org/wiki/PMC_\(identifier\) "PMC (identifier)") [3152447](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3152447). 
- Durrett, Richard (2004). *Probability: Theory and Examples* (3rd ed.). Cambridge University Press. ISBN 0521765390.
- Fischer, Hans (2011). [*A History of the Central Limit Theorem: From Classical to Modern Probability Theory*](http://www.medicine.mcgill.ca/epidemiology/hanley/bios601/GaussianModel/HistoryCentralLimitTheorem.pdf) (PDF). Sources and Studies in the History of Mathematics and Physical Sciences. New York: Springer. doi:[10.1007/978-0-387-87857-7](https://doi.org/10.1007%2F978-0-387-87857-7). ISBN 978-0-387-87856-0. MR [2743162](https://mathscinet.ams.org/mathscinet-getitem?mr=2743162). Zbl [1226.60004](https://zbmath.org/?format=complete&q=an:1226.60004). [Archived](https://web.archive.org/web/20171031171033/http://www.medicine.mcgill.ca/epidemiology/hanley/bios601/GaussianModel/HistoryCentralLimitTheorem.pdf) (PDF) from the original on 2017-10-31.
- Gaposhkin, V. F. (1966). "Lacunary series and independent functions". *Russian Mathematical Surveys*. **21** (6): 1–82. doi:[10.1070/RM1966v021n06ABEH001196](https://doi.org/10.1070%2FRM1966v021n06ABEH001196).
- Klartag, Bo'az (2007). "A central limit theorem for convex sets". *Inventiones Mathematicae*. **168** (1): 91–131. arXiv:[math/0605014](https://arxiv.org/abs/math/0605014). doi:[10.1007/s00222-006-0028-8](https://doi.org/10.1007%2Fs00222-006-0028-8).
- Klartag, Bo'az (2008). "A Berry–Esseen type inequality for convex bodies with an unconditional basis". *Probability Theory and Related Fields*. **145** (1–2): 1–33. arXiv:[0705.0832](https://arxiv.org/abs/0705.0832). doi:[10.1007/s00440-008-0158-6](https://doi.org/10.1007%2Fs00440-008-0158-6).

## External links

Wikimedia Commons has media related to [Central limit theorem](https://commons.wikimedia.org/wiki/Category:Central_limit_theorem).

- [Central Limit Theorem](https://www.khanacademy.org/math/probability/statistics-inferential/sampling_distribution/v/central-limit-theorem) at Khan Academy
- ["Central limit theorem"](https://www.encyclopediaofmath.org/index.php?title=Central_limit_theorem). *Encyclopedia of Mathematics*. EMS Press. 2001 [1994].
- Weisstein, Eric W. ["Central Limit Theorem"](https://mathworld.wolfram.com/CentralLimitTheorem.html). *MathWorld*.
- [A music video demonstrating the central limit theorem with a Galton board](https://www.mctague.org/carl/blog/2021/04/23/central-limit-theorem/) by Carl McTague
Readable Markdown
| | |
|---|---|
| Type | [Theorem](https://en.wikipedia.org/wiki/Theorem "Theorem") |
| Field | [Probability theory](https://en.wikipedia.org/wiki/Probability_theory "Probability theory") |
| Statement | The scaled sum of a sequence of [i.i.d. random variables](https://en.wikipedia.org/wiki/Independent_and_identically_distributed_random_variables "Independent and identically distributed random variables") with finite positive [variance](https://en.wikipedia.org/wiki/Variance "Variance") converges in distribution to the [normal distribution](https://en.wikipedia.org/wiki/Normal_distribution "Normal distribution"). |
| Generalizations | [Lindeberg's CLT](https://en.wikipedia.org/wiki/Lindeberg%27s_condition "Lindeberg's condition") |

In [probability theory](https://en.wikipedia.org/wiki/Probability_theory "Probability theory"), the **central limit theorem** (**CLT**) states that, under appropriate conditions, the [distribution](https://en.wikipedia.org/wiki/Probability_distribution "Probability distribution") of a normalized version of the sample mean converges to a [standard normal distribution](https://en.wikipedia.org/wiki/Normal_distribution#Standard_normal_distribution "Normal distribution"). This holds even if the original variables themselves are not [normally distributed](https://en.wikipedia.org/wiki/Normal_distribution "Normal distribution"). There are several versions of the CLT, each applying in the context of different conditions. The theorem is a key concept in probability theory because it implies that probabilistic and statistical methods that work for normal distributions can be applicable to many problems involving other types of distributions. This theorem has seen many changes during the formal development of probability theory.
Previous versions of the theorem date back to 1811, but in its modern form it was only precisely stated in the 1920s.\[1\]\[*page needed*\] In [statistics](https://en.wikipedia.org/wiki/Statistics "Statistics"), the CLT can be stated as: let $X_1, X_2, \dots, X_n$ denote a [statistical sample](https://en.wikipedia.org/wiki/Sampling_\(statistics\) "Sampling (statistics)") of size $n$ from a population with [expected value](https://en.wikipedia.org/wiki/Expected_value "Expected value") (average) $\mu$ and finite positive [variance](https://en.wikipedia.org/wiki/Variance "Variance") $\sigma^2$, and let $\bar{X}_n$ denote the sample mean (which is itself a [random variable](https://en.wikipedia.org/wiki/Random_variable "Random variable")). Then the [limit as $n \to \infty$ of the distribution](https://en.wikipedia.org/wiki/Convergence_of_random_variables#Convergence_in_distribution "Convergence of random variables") of $(\bar{X}_n - \mu)\sqrt{n}$ is a normal distribution with mean $0$ and variance $\sigma^2$.[\[2\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-2) In other words, suppose that a large sample of [observations](https://en.wikipedia.org/wiki/Random_variate "Random variate") is obtained, each observation being randomly produced in a way that does not depend on the values of the other observations, and the average ([arithmetic mean](https://en.wikipedia.org/wiki/Arithmetic_mean "Arithmetic mean")) of the observed values is computed. If this procedure is performed many times, resulting in a collection of observed averages, the central limit theorem says that if the sample size is large enough, the [probability distribution](https://en.wikipedia.org/wiki/Probability_distribution "Probability distribution") of these averages will closely approximate a normal distribution. The central limit theorem has several variants.
In its common form, the random variables must be [independent and identically distributed](https://en.wikipedia.org/wiki/Independent_and_identically_distributed "Independent and identically distributed") (i.i.d.). This requirement can be weakened; convergence of the mean to the normal distribution also occurs for non-identical distributions or for non-independent observations if they comply with certain conditions. The earliest version of this theorem, that the normal distribution may be used as an approximation to the [binomial distribution](https://en.wikipedia.org/wiki/Binomial_distribution "Binomial distribution"), is the [de Moivre–Laplace theorem](https://en.wikipedia.org/wiki/De_Moivre%E2%80%93Laplace_theorem "De Moivre–Laplace theorem").

## Independent sequences

*Whatever the form of the population distribution, the sampling distribution tends to a Gaussian, and its dispersion is given by the central limit theorem.[\[3\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-3)*

Let $(X_n)_{n \geq 1}$ be a sequence of [i.i.d. random variables](https://en.wikipedia.org/wiki/Independent_and_identically_distributed_random_variables "Independent and identically distributed random variables") having a distribution with [expected value](https://en.wikipedia.org/wiki/Expected_value "Expected value") given by $\mu$ and finite [variance](https://en.wikipedia.org/wiki/Variance "Variance") given by $\sigma^2.$ Suppose we are interested in the [sample average](https://en.wikipedia.org/wiki/Sample_mean "Sample mean")

$$\bar{X}_n \equiv \frac{X_1 + \cdots + X_n}{n}.$$

By the [law of large numbers](https://en.wikipedia.org/wiki/Law_of_large_numbers "Law of large numbers"), the sample average [converges almost surely](https://en.wikipedia.org/wiki/Almost_sure_convergence "Almost sure convergence") (and therefore also [converges in probability](https://en.wikipedia.org/wiki/Convergence_in_probability "Convergence in probability")) to the expected value $\mu$ as $n \to \infty.$ The classical central limit theorem describes the size and the distributional form of the [stochastic](https://en.wiktionary.org/wiki/stochastic "wikt:stochastic") fluctuations around the deterministic number $\mu$ during this convergence.
More precisely, it states that as $n$ gets larger, the distribution of the normalized mean $\sqrt{n}(\bar{X}_n - \mu)$, i.e. the difference between the sample average $\bar{X}_n$ and its limit $\mu,$ scaled by the factor $\sqrt{n}$, approaches the [normal distribution](https://en.wikipedia.org/wiki/Normal_distribution "Normal distribution") with mean $0$ and variance $\sigma^2.$ For large enough $n,$ the distribution of $\bar{X}_n$ gets arbitrarily close to the normal distribution with mean $\mu$ and variance $\sigma^2/n.$ The usefulness of the theorem is that the distribution of $\sqrt{n}(\bar{X}_n - \mu)$ approaches normality regardless of the shape of the distribution of the individual $X_i.$ Formally, the theorem can be stated as follows:

**Lindeberg–LĂ©vy CLT** — Suppose $\{X_1, \ldots, X_n, \ldots\}$ is a sequence of i.i.d. random variables with $\operatorname{E}[X_i] = \mu$ and $\operatorname{Var}[X_i] = \sigma^2 < \infty.$ Then, as $n \to \infty,$ the random variables $\sqrt{n}(\bar{X}_n - \mu)$ converge in distribution to a normal $\mathcal{N}(0, \sigma^2).$

In the case $\sigma > 0,$ convergence in distribution means that the [cumulative distribution functions](https://en.wikipedia.org/wiki/Cumulative_distribution_function "Cumulative distribution function") of $\sqrt{n}(\bar{X}_n - \mu)$ converge pointwise to the cdf of the $\mathcal{N}(0, \sigma^2)$ distribution: for every real number $z,$

$$\lim_{n \to \infty} \mathbb{P}\left[\sqrt{n}(\bar{X}_n - \mu) \leq z\right] = \lim_{n \to \infty} \mathbb{P}\left[\frac{\sqrt{n}(\bar{X}_n - \mu)}{\sigma} \leq \frac{z}{\sigma}\right] = \Phi\left(\frac{z}{\sigma}\right),$$

where $\Phi(z)$ is the standard normal cdf evaluated at $z.$ The convergence is uniform in $z$ in the sense that

$$\lim_{n \to \infty} \; \sup_{z \in \mathbb{R}} \; \left|\mathbb{P}\left[\sqrt{n}(\bar{X}_n - \mu) \leq z\right] - \Phi\left(\frac{z}{\sigma}\right)\right| = 0~,$$

where $\sup$ denotes the [supremum](https://en.wikipedia.org/wiki/Supremum "Supremum") (i.e. least upper bound) of the set.[\[5\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-FOOTNOTEBauer2001199Theorem_30.13-5)
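As a quick numerical illustration of this convergence, here is a minimal simulation sketch. The Exponential(1) population, the sample size, and the trial count are arbitrary choices for illustration, not taken from the article; NumPy is assumed:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)
mu, sigma = 1.0, 1.0          # mean and standard deviation of Exponential(1)
n, trials = 500, 20_000       # sample size and number of repeated samples

# Draw `trials` samples of size n and standardize each sample mean:
# Z = sqrt(n) * (sample_mean - mu) / sigma should be approximately N(0, 1).
samples = rng.exponential(scale=1.0, size=(trials, n))
z = np.sqrt(n) * (samples.mean(axis=1) - mu) / sigma

# Compare empirical probabilities with the standard normal cdf Phi.
phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
for cut in (-1.0, 0.0, 1.0, 2.0):
    print(f"P[Z <= {cut:+.1f}]  empirical {np.mean(z <= cut):.4f}  Phi {phi(cut):.4f}")
```

The empirical and normal probabilities agree to a few decimal places even though the underlying population is heavily skewed.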
In this variant of the central limit theorem the random variables $X_i$ have to be independent, but not necessarily identically distributed. The theorem also requires that the random variables $\left|X_i\right|$ have [moments](https://en.wikipedia.org/wiki/Moment_\(mathematics\) "Moment (mathematics)") of some order $(2+\delta)$, and that the rate of growth of these moments is limited by the Lyapunov condition given below.

**Lyapunov CLT**[\[6\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-FOOTNOTEBillingsley1995362-6) — Suppose $\{X_1, \ldots, X_n, \ldots\}$ is a sequence of independent random variables, each with finite expected value $\mu_i$ and variance $\sigma_i^2$. Define

$$s_n^2 = \sum_{i=1}^{n} \sigma_i^2.$$

If for some $\delta > 0$, *Lyapunov's condition*

$$\lim_{n \to \infty} \; \frac{1}{s_n^{2+\delta}} \, \sum_{i=1}^{n} \operatorname{E}\left[\left|X_i - \mu_i\right|^{2+\delta}\right] = 0$$

is satisfied, then the sum of $\frac{X_i - \mu_i}{s_n}$ converges in distribution to a standard normal random variable, as $n$ goes to infinity:

$$\frac{1}{s_n} \, \sum_{i=1}^{n} \left(X_i - \mu_i\right) \ \xrightarrow{d} \ \mathcal{N}(0,1).$$

In practice it is usually easiest to check Lyapunov's condition for $\delta = 1$.
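A minimal sketch of checking the condition numerically (assumptions, not from the article: NumPy, and the non-identically-distributed family $X_i \sim \mathrm{Uniform}(-c_i, c_i)$ with $c_i = i^{1/4}$, for which $\sigma_i^2 = c_i^2/3$ and $\operatorname{E}|X_i - \mu_i|^3 = c_i^3/4$):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000
# Independent, non-identical: X_i ~ Uniform(-c_i, c_i) with c_i = i**0.25,
# so mu_i = 0, sigma_i^2 = c_i**2 / 3 and E|X_i - mu_i|^3 = c_i**3 / 4.
c = np.arange(1, n + 1) ** 0.25
s2 = np.sum(c**2 / 3.0)                         # s_n^2
ratio = np.sum(c**3 / 4.0) / s2**1.5            # Lyapunov ratio for delta = 1
print(f"Lyapunov ratio at n={n}: {ratio:.4f}")  # small, and -> 0 as n grows

# Empirically, (1/s_n) * sum_i (X_i - mu_i) should be close to N(0, 1).
trials = 10_000
x = rng.uniform(-c, c, size=(trials, n))
z = x.sum(axis=1) / np.sqrt(s2)
print(f"mean {z.mean():+.3f}  var {z.var():.3f}")
```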
If a sequence of random variables satisfies Lyapunov's condition, then it also satisfies Lindeberg's condition. The converse implication, however, does not hold.

### Lindeberg (-Feller) CLT

In the same setting and with the same notation as above, the Lyapunov condition can be replaced with the following weaker one (from [Lindeberg](https://en.wikipedia.org/wiki/Jarl_Waldemar_Lindeberg "Jarl Waldemar Lindeberg") in 1920). Suppose that for every $\varepsilon > 0$,

$$\lim_{n \to \infty} \frac{1}{s_n^{2}} \sum_{i=1}^{n} \operatorname{E}\left[(X_i - \mu_i)^{2} \cdot \mathbf{1}_{\left\{\left|X_i - \mu_i\right| > \varepsilon s_n\right\}}\right] = 0$$

where $\mathbf{1}_{\{\ldots\}}$ is the [indicator function](https://en.wikipedia.org/wiki/Indicator_function "Indicator function"). Then the distribution of the standardized sums

$$\frac{1}{s_n} \sum_{i=1}^{n} \left(X_i - \mu_i\right)$$

converges towards the standard normal distribution $\mathcal{N}(0,1)$.
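To see what the condition rules out, here is a small sketch (assumptions: NumPy, and Gaussian summands with hand-picked scales, used purely to probe the condition itself): with comparable scales the truncated sum is negligible, while when one term's scale dominates $s_n^2$, the condition's limit stays bounded away from zero.

```python
import numpy as np

def lindeberg_sum(sigmas, eps=0.5, trials=100_000, seed=2):
    """Monte Carlo estimate of (1/s_n^2) * sum_i E[(X_i)^2 * 1{|X_i| > eps*s_n}]
    for independent X_i ~ N(0, sigma_i^2)."""
    rng = np.random.default_rng(seed)
    s_n = np.sqrt(np.sum(sigmas**2))
    x = rng.normal(0.0, sigmas, size=(trials, len(sigmas)))
    trunc = np.where(np.abs(x) > eps * s_n, x**2, 0.0)
    return trunc.sum(axis=1).mean() / s_n**2

flat = np.ones(50)                 # comparable scales: the sum is negligible
dominated = 2.0 ** np.arange(20)   # one term dominates s_n^2: it stays large
print("comparable scales:", round(lindeberg_sum(flat), 4))
print("dominated scales :", round(lindeberg_sum(dominated), 4))
```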
### CLT for the sum of a random number of random variables

Rather than summing an integer number $n$ of random variables and taking $n \to \infty$, the sum can be of a random number $N$ of random variables, with conditions on $N$. For example, one such result is Corollary 4 of Robbins (1948), which assumes that $N$ is asymptotically normal (Robbins also developed other conditions that lead to the same result).
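A minimal sketch of summing a random number of terms (assumptions, not from the article: $N \sim \mathrm{Poisson}(\lambda)$, which is asymptotically normal for large $\lambda$, and Bernoulli(1/2) summands; the standardization uses the compound-Poisson moments $\operatorname{E}[S] = \lambda\mu$ and $\operatorname{Var}[S] = \lambda\operatorname{E}[X^2]$):

```python
import numpy as np

rng = np.random.default_rng(3)
lam, trials = 1_000, 10_000       # N ~ Poisson(lam) is asymptotically normal
mu, ex2 = 0.5, 0.5                # E[X] and E[X^2] for Bernoulli(1/2) summands

# Sum a *random* number N of i.i.d. Bernoulli(1/2) terms; for Poisson N the
# random sum has E[S] = lam*mu and Var[S] = lam*E[X^2] (compound Poisson).
n_draws = rng.poisson(lam, size=trials)
sums = np.array([rng.integers(0, 2, size=k).sum() for k in n_draws])
z = (sums - lam * mu) / np.sqrt(lam * ex2)
print(f"mean {z.mean():+.3f}  var {z.var():.3f}")   # close to 0 and 1
```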
### Multidimensional CLT

Proofs that use characteristic functions can be extended to cases where each individual $\mathbf{X}_i$ is a [random vector](https://en.wikipedia.org/wiki/Random_vector "Random vector") in $\mathbb{R}^k$, with mean vector $\boldsymbol{\mu} = \operatorname{E}[\mathbf{X}_i]$ and [covariance matrix](https://en.wikipedia.org/wiki/Covariance_matrix "Covariance matrix") $\mathbf{\Sigma}$ (among the components of the vector), and these random vectors are independent and identically distributed. The multidimensional central limit theorem states that when scaled, sums converge to a [multivariate normal distribution](https://en.wikipedia.org/wiki/Multivariate_normal_distribution "Multivariate normal distribution").[\[9\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-vanderVaart-9) Summation of these vectors is done component-wise. For $i = 1, 2, 3, \ldots,$ let

$$\mathbf{X}_i = \begin{bmatrix} X_i^{(1)} \\ \vdots \\ X_i^{(k)} \end{bmatrix}$$

be independent random vectors. The sum of the random vectors $\mathbf{X}_1, \ldots, \mathbf{X}_n$ is

$$\sum_{i=1}^{n} \mathbf{X}_i = \begin{bmatrix} X_1^{(1)} \\ \vdots \\ X_1^{(k)} \end{bmatrix} + \cdots + \begin{bmatrix} X_n^{(1)} \\ \vdots \\ X_n^{(k)} \end{bmatrix} = \begin{bmatrix} \sum_{i=1}^{n} X_i^{(1)} \\ \vdots \\ \sum_{i=1}^{n} X_i^{(k)} \end{bmatrix}$$

and their average is

$$\bar{\mathbf{X}}_n = \begin{bmatrix} \bar{X}_n^{(1)} \\ \vdots \\ \bar{X}_n^{(k)} \end{bmatrix} = \frac{1}{n} \sum_{i=1}^{n} \mathbf{X}_i.$$

Therefore,

$$\frac{1}{\sqrt{n}} \sum_{i=1}^{n} \left[\mathbf{X}_i - \operatorname{E}\left(\mathbf{X}_i\right)\right] = \frac{1}{\sqrt{n}} \sum_{i=1}^{n} (\mathbf{X}_i - \boldsymbol{\mu}) = \sqrt{n}\left(\bar{\mathbf{X}}_n - \boldsymbol{\mu}\right).$$

The multivariate central limit theorem states that

$$\sqrt{n}\left(\bar{\mathbf{X}}_n - \boldsymbol{\mu}\right) \ \xrightarrow{d} \ \mathcal{N}_k(0, \boldsymbol{\Sigma}),$$

where the [covariance matrix](https://en.wikipedia.org/wiki/Covariance_matrix "Covariance matrix") $\boldsymbol{\Sigma}$ is equal to

$$\boldsymbol{\Sigma} = \begin{bmatrix} \operatorname{Var}\left(X_1^{(1)}\right) & \operatorname{Cov}\left(X_1^{(1)}, X_1^{(2)}\right) & \cdots & \operatorname{Cov}\left(X_1^{(1)}, X_1^{(k)}\right) \\ \operatorname{Cov}\left(X_1^{(2)}, X_1^{(1)}\right) & \operatorname{Var}\left(X_1^{(2)}\right) & \cdots & \operatorname{Cov}\left(X_1^{(2)}, X_1^{(k)}\right) \\ \vdots & \vdots & \ddots & \vdots \\ \operatorname{Cov}\left(X_1^{(k)}, X_1^{(1)}\right) & \operatorname{Cov}\left(X_1^{(k)}, X_1^{(2)}\right) & \cdots & \operatorname{Var}\left(X_1^{(k)}\right) \end{bmatrix}~.$$

The multivariate central limit theorem can be proved using the [CramĂ©r–Wold theorem](https://en.wikipedia.org/wiki/Cram%C3%A9r%E2%80%93Wold_theorem "CramĂ©r–Wold theorem").[\[9\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-vanderVaart-9)
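A minimal sketch of the multivariate statement (assumptions, not from the article: NumPy, $k = 2$, and non-Gaussian vectors built from centered Exponential(1) coordinates pushed through a Cholesky factor so they have a chosen mean $\boldsymbol{\mu}$ and covariance $\boldsymbol{\Sigma}$):

```python
import numpy as np

rng = np.random.default_rng(4)
n, trials, k = 400, 5_000, 2
mu = np.array([1.0, -1.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])

# Non-Gaussian random vectors with mean mu and covariance Sigma: centered
# Exponential(1) coordinates (mean 0, variance 1) pushed through Cholesky.
L = np.linalg.cholesky(Sigma)
raw = rng.exponential(1.0, size=(trials, n, k)) - 1.0
X = raw @ L.T + mu

# sqrt(n) * (sample mean - mu) should be approximately N_k(0, Sigma).
z = np.sqrt(n) * (X.mean(axis=1) - mu)
print("empirical covariance of sqrt(n)*(mean - mu):")
print(np.round(np.cov(z.T), 3))   # should be close to Sigma
```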
The rate of convergence is given by a [Berry–Esseen](https://en.wikipedia.org/wiki/Berry%E2%80%93Esseen_theorem "Berry–Esseen theorem")-type result; it is unknown whether the factor $d^{1/4}$ appearing in that bound is necessary.[\[11\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-11)

## The generalized central limit theorem

The generalized central limit theorem (GCLT) was an effort of multiple mathematicians ([Sergei Bernstein](https://en.wikipedia.org/wiki/Sergei_Bernstein "Sergei Bernstein"), [Jarl Waldemar Lindeberg](https://en.wikipedia.org/wiki/Jarl_Waldemar_Lindeberg "Jarl Waldemar Lindeberg"), [Paul LĂ©vy](https://en.wikipedia.org/wiki/Paul_L%C3%A9vy_\(mathematician\) "Paul LĂ©vy (mathematician)"), [William Feller](https://en.wikipedia.org/wiki/William_Feller "William Feller"), [Andrey Kolmogorov](https://en.wikipedia.org/wiki/Andrey_Kolmogorov "Andrey Kolmogorov"), and others) over the period from 1920 to 1937.[\[12\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-12) The first published complete proof of the GCLT was given in 1937 by Paul LĂ©vy, in French.[\[13\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-13) An English-language version of the complete proof of the GCLT is available in the translation of [Boris Vladimirovich Gnedenko](https://en.wikipedia.org/wiki/Boris_Vladimirovich_Gnedenko "Boris Vladimirovich Gnedenko") and Kolmogorov's 1954 book.[\[14\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-14) The statement of the GCLT is as follows:[\[15\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-15)

**Statement of the GCLT** — A non-degenerate random variable $Z$ is [α-stable](https://en.wikipedia.org/wiki/Stable_distribution "Stable distribution") for some $0 < \alpha \leq 2$ if and only if there is an independent, identically distributed sequence of random variables $X_1, X_2, X_3, \ldots$ and constants $a_n > 0,$ $b_n \in \mathbb{R}$ with

$$a_n (X_1 + \dots + X_n) - b_n \to Z.$$

Here, "→" means that the sequence of random-variable sums converges in distribution; i.e., the corresponding distributions satisfy $F_n(y) \to F(y)$ at all continuity points of $F$. In other words, if sums of independent, identically distributed random variables converge in distribution to some $Z$, then $Z$ must be a [stable distribution](https://en.wikipedia.org/wiki/Stable_distribution "Stable distribution").
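The Cauchy distribution gives a concrete instance of the stable limits appearing in the GCLT: it is α-stable with $\alpha = 1$, so $(X_1 + \cdots + X_n)/n$ is again standard Cauchy and averaging never tames the spread. A minimal sketch (NumPy assumed; trial counts are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
trials = 10_000

# Cauchy is alpha-stable with alpha = 1: (X_1 + ... + X_n) / n is again
# standard Cauchy, so the sample mean's spread never shrinks.
for n in (1, 10, 1_000):
    means = rng.standard_cauchy(size=(trials, n)).mean(axis=1)
    q25, q75 = np.percentile(means, [25, 75])
    print(f"n={n:>5}: IQR of the sample mean = {q75 - q25:.3f}  (Cauchy IQR = 2)")
```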
## Dependent processes

### CLT under weak dependence

A useful generalization of a sequence of independent, identically distributed random variables is a [mixing](https://en.wikipedia.org/wiki/Mixing_\(mathematics\) "Mixing (mathematics)") random process in discrete time; "mixing" means, roughly, that random variables temporally far apart from one another are nearly independent. Several kinds of mixing are used in ergodic theory and probability theory. See especially [strong mixing](https://en.wikipedia.org/wiki/Mixing_\(mathematics\)#Mixing_in_stochastic_processes "Mixing (mathematics)") (also called α-mixing), defined by $\alpha(n) \to 0$ where $\alpha(n)$ is the so-called [strong mixing coefficient](https://en.wikipedia.org/wiki/Mixing_\(mathematics\)#Mixing_in_stochastic_processes "Mixing (mathematics)"). A simplified formulation of the central limit theorem under strong mixing is:[\[16\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-FOOTNOTEBillingsley1995Theorem_27.4-16) suppose that $\{X_1, \ldots, X_n, \ldots\}$ is stationary and $\alpha$-mixing with $\alpha_n = O\left(n^{-5}\right)$, and that $\operatorname{E}[X_n] = 0$ and $\operatorname{E}\left[X_n^{12}\right] < \infty$; denote $S_n = X_1 + \cdots + X_n$. Then the limit $\sigma^2 = \lim_n \frac{\operatorname{E}\left[S_n^2\right]}{n}$ exists, and if $\sigma \neq 0$ then $\frac{S_n}{\sigma\sqrt{n}}$ converges in distribution to $\mathcal{N}(0,1)$. In fact,

$$\sigma^2 = \operatorname{E}\left(X_1^2\right) + 2\sum_{k=1}^{\infty} \operatorname{E}\left(X_1 X_{1+k}\right),$$

where the series converges absolutely. The assumption $\sigma \neq 0$ cannot be omitted, since the asymptotic normality fails for $X_n = Y_n - Y_{n-1}$ where $Y_n$ is another [stationary sequence](https://en.wikipedia.org/wiki/Stationary_sequence "Stationary sequence"). There is a stronger version of the theorem:[\[17\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-FOOTNOTEDurrett2004Sect._7.7\(c\),_Theorem_7.8-17) the assumption $\operatorname{E}\left[X_n^{12}\right] < \infty$ is replaced with $\operatorname{E}\left[\left|X_n\right|^{2+\delta}\right] < \infty$, and the assumption $\alpha_n = O\left(n^{-5}\right)$ is replaced with

$$\sum_{n} \alpha_n^{\frac{\delta}{2(2+\delta)}} < \infty.$$

Existence of such $\delta > 0$ ensures the conclusion. For an encyclopedic treatment of limit theorems under mixing conditions see ([Bradley 2007](https://en.wikipedia.org/wiki/Central_limit_theorem#CITEREFBradley2007)).
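A minimal simulation sketch of a strongly mixing case (assumptions, not from the article: NumPy and a stationary Gaussian AR(1) process with $\rho = 0.5$, whose mixing coefficients decay geometrically; evaluating the series above for this model gives the long-run variance $\sigma^2 = 1/(1-\rho)^2$ for unit-variance innovations):

```python
import numpy as np

rng = np.random.default_rng(6)
rho, n, trials = 0.5, 2_000, 5_000

# Stationary Gaussian AR(1): X_t = rho*X_{t-1} + eps_t, a strongly mixing
# sequence. With unit-variance innovations the autocovariance series gives
# sigma^2 = E[X_1^2] + 2*sum_k E[X_1 X_{1+k}] = 1/(1-rho)^2.
eps = rng.normal(size=(trials, n))
x = np.empty_like(eps)
x[:, 0] = eps[:, 0] / np.sqrt(1.0 - rho**2)    # start in the stationary law
for t in range(1, n):
    x[:, t] = rho * x[:, t - 1] + eps[:, t]

sigma = 1.0 / (1.0 - rho)                      # long-run standard deviation
z = x.sum(axis=1) / (sigma * np.sqrt(n))
print(f"mean {z.mean():+.3f}  var {z.var():.3f}")  # close to 0 and 1
```

Note that the normalization uses the long-run variance, not the marginal variance of a single $X_t$; the positive autocorrelation inflates the variance of the sum.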
### Martingale difference CLT

**Theorem** — Let a [martingale](https://en.wikipedia.org/wiki/Martingale_\(probability_theory\) "Martingale (probability theory)") $M_n$ satisfy

- $\frac{1}{n}\sum_{k=1}^{n} \operatorname{E}\left[(M_k - M_{k-1})^2 \mid M_1, \ldots, M_{k-1}\right] \to 1$ in probability as $n \to \infty$,
- for every $\varepsilon > 0$, $\frac{1}{n}\sum_{k=1}^{n} \operatorname{E}\left[(M_k - M_{k-1})^2 \, \mathbf{1}_{\{|M_k - M_{k-1}| > \varepsilon\sqrt{n}\}}\right] \to 0$ as $n \to \infty$;

then $\frac{M_n}{\sqrt{n}}$ converges in distribution to $\mathcal{N}(0,1)$ as $n \to \infty$.[\[18\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-FOOTNOTEDurrett2004Sect._7.7,_Theorem_7.4-18)[\[19\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-FOOTNOTEBillingsley1995Theorem_35.12-19)

### Proof of classical CLT

The central limit theorem has a proof using [characteristic functions](https://en.wikipedia.org/wiki/Characteristic_function_\(probability_theory\) "Characteristic function (probability theory)").[\[20\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-20) It is similar to the proof of the (weak) [law of large numbers](https://en.wikipedia.org/wiki/Proof_of_the_law_of_large_numbers "Proof of the law of large numbers"). Assume $\{X_1, \ldots, X_n, \ldots\}$ are independent and identically distributed random variables, each with mean $\mu$ and finite variance $\sigma^2$. The sum $X_1 + \cdots + X_n$ has [mean](https://en.wikipedia.org/wiki/Linearity_of_expectation "Linearity of expectation") $n\mu$ and [variance](https://en.wikipedia.org/wiki/Variance#Sum_of_uncorrelated_variables_\(Bienaym%C3%A9_formula\) "Variance") $n\sigma^2$.
Consider the random variable

$$Z_n = \frac{X_1 + \cdots + X_n - n\mu}{\sqrt{n\sigma^2}} = \sum_{i=1}^{n} \frac{X_i - \mu}{\sqrt{n\sigma^2}} = \sum_{i=1}^{n} \frac{1}{\sqrt{n}} Y_i,$$

where in the last step we defined the new random variables $Y_i = \frac{X_i - \mu}{\sigma}$, each with zero mean and unit variance ($\operatorname{var}(Y) = 1$). The [characteristic function](https://en.wikipedia.org/wiki/Characteristic_function_\(probability_theory\) "Characteristic function (probability theory)") of $Z_n$ is given by

$$\varphi_{Z_n}(t) = \varphi_{\sum_{i=1}^{n} \frac{1}{\sqrt{n}} Y_i}(t) = \varphi_{Y_1}\left(\frac{t}{\sqrt{n}}\right) \varphi_{Y_2}\left(\frac{t}{\sqrt{n}}\right) \cdots \varphi_{Y_n}\left(\frac{t}{\sqrt{n}}\right) = \left[\varphi_{Y_1}\left(\frac{t}{\sqrt{n}}\right)\right]^n,$$

where in the last step we used the fact that all of the $Y_i$ are identically distributed. The characteristic function of $Y_1$ is, by [Taylor's theorem](https://en.wikipedia.org/wiki/Taylor%27s_theorem "Taylor's theorem"),

$$\varphi_{Y_1}\left(\frac{t}{\sqrt{n}}\right) = 1 - \frac{t^2}{2n} + o\left(\frac{t^2}{n}\right), \quad \frac{t}{\sqrt{n}} \to 0,$$

where $o(t^2/n)$ is "[little o notation](https://en.wikipedia.org/wiki/Little-o_notation "Little-o notation")" for some function of $t$ that goes to zero more rapidly than $t^2/n$.
By the limit of the [exponential function](https://en.wikipedia.org/wiki/Exponential_function "Exponential function") ($e^x = \lim_{n \to \infty} \left(1 + \frac{x}{n}\right)^n$), the characteristic function of $Z_n$ equals

$$\varphi_{Z_n}(t) = \left(1 - \frac{t^2}{2n} + o\left(\frac{t^2}{n}\right)\right)^n \ \rightarrow \ e^{-\frac{1}{2}t^2}, \quad n \to \infty.$$

All of the higher-order terms vanish in the limit $n \to \infty$. The right-hand side equals the characteristic function of a standard normal distribution $\mathcal{N}(0,1)$, which implies through [LĂ©vy's continuity theorem](https://en.wikipedia.org/wiki/L%C3%A9vy_continuity_theorem "LĂ©vy continuity theorem") that the distribution of $Z_n$ will approach $\mathcal{N}(0,1)$ as $n \to \infty$. Therefore, the [sample average](https://en.wikipedia.org/wiki/Sample_mean "Sample mean")

$$\bar{X}_n = \frac{X_1 + \cdots + X_n}{n}$$

is such that

$$\frac{\sqrt{n}}{\sigma}\left(\bar{X}_n - \mu\right) = Z_n$$

converges to the normal distribution $\mathcal{N}(0,1)$, from which the central limit theorem follows.
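The key step of the proof, $\varphi_{Z_n}(t) \to e^{-t^2/2}$, can be checked numerically. A minimal Monte Carlo sketch (assumptions, not from the article: NumPy and standardized Exponential(1) summands):

```python
import numpy as np

rng = np.random.default_rng(7)
trials = 50_000

# Monte Carlo estimate of the characteristic function of
# Z_n = (Y_1 + ... + Y_n)/sqrt(n) for standardized Exponential(1) summands,
# compared with exp(-t^2/2), the N(0,1) characteristic function.
for n in (2, 10, 100):
    y = rng.exponential(1.0, size=(trials, n)) - 1.0   # mean 0, variance 1
    zn = y.sum(axis=1) / np.sqrt(n)
    for t in (1.0, 2.0):
        phi_hat = np.exp(1j * t * zn).mean()
        print(f"n={n:>3} t={t}: |phi_hat| = {abs(phi_hat):.4f}  "
              f"exp(-t^2/2) = {np.exp(-t**2 / 2):.4f}")
```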
### Convergence to the limit

The central limit theorem gives only an [asymptotic distribution](https://en.wikipedia.org/wiki/Asymptotic_distribution "Asymptotic distribution"). As an approximation for a finite number of observations, it provides a reasonable approximation only when close to the peak of the normal distribution; it requires a very large number of observations to stretch into the tails.\[*[citation needed](https://en.wikipedia.org/wiki/Wikipedia:Citation_needed "Wikipedia:Citation needed")*\] The convergence in the central limit theorem is [uniform](https://en.wikipedia.org/wiki/Uniform_convergence "Uniform convergence") because the limiting cumulative distribution function is continuous.

If the third central [moment](https://en.wikipedia.org/wiki/Moment_\(mathematics\) "Moment (mathematics)") $\operatorname{E}\left[(X_1 - \mu)^3\right]$ exists and is finite, then the speed of convergence is at least on the order of $1/\sqrt{n}$ (see the [Berry–Esseen theorem](https://en.wikipedia.org/wiki/Berry%E2%80%93Esseen_theorem "Berry–Esseen theorem")). [Stein's method](https://en.wikipedia.org/wiki/Stein%27s_method "Stein's method")[\[21\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-stein1972-21) can be used not only to prove the central limit theorem, but also to provide bounds on the rates of convergence for selected metrics.[\[22\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-22) The convergence to the normal distribution is monotonic, in the sense that the [entropy](https://en.wikipedia.org/wiki/Information_entropy "Information entropy") of $Z_n$ increases [monotonically](https://en.wikipedia.org/wiki/Monotonic_function "Monotonic function") to that of the normal distribution.[\[23\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-ABBN-23)

The central limit theorem applies in particular to sums of independent and identically distributed [discrete random variables](https://en.wikipedia.org/wiki/Discrete_random_variable "Discrete random variable"). A sum of discrete random variables is still a discrete random variable, so that we are confronted with a sequence of discrete random variables whose cumulative probability distribution function converges towards a cumulative probability distribution function corresponding to a continuous variable (namely that of the [normal distribution](https://en.wikipedia.org/wiki/Normal_distribution "Normal distribution")). This means that if we build a [histogram](https://en.wikipedia.org/wiki/Histogram "Histogram") of the realizations of the sum of $n$ independent identical discrete variables, the piecewise-linear curve that joins the centers of the upper faces of the rectangles forming the histogram converges toward a Gaussian curve as $n$ approaches infinity; this relation is known as the [de Moivre–Laplace theorem](https://en.wikipedia.org/wiki/De_Moivre%E2%80%93Laplace_theorem "De Moivre–Laplace theorem"). The [binomial distribution](https://en.wikipedia.org/wiki/Binomial_distribution "Binomial distribution") article details such an application of the central limit theorem in the simple case of a discrete variable taking only two possible values.
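A short numeric check of the de Moivre–Laplace approximation (standard library only; $n = 100$ and $p = 0.3$ are arbitrary illustration values): the binomial pmf is compared point-by-point with the matching normal density $\mathcal{N}(np,\, np(1-p))$.

```python
from math import comb, exp, pi, sqrt

# de Moivre-Laplace: Binomial(n, p) pmf versus the N(np, np(1-p)) density.
n, p = 100, 0.3
m, v = n * p, n * p * (1 - p)
for k in (20, 25, 30, 35, 40):
    pmf = comb(n, k) * p**k * (1 - p) ** (n - k)
    pdf = exp(-((k - m) ** 2) / (2 * v)) / sqrt(2 * pi * v)
    print(f"k={k}: binomial pmf {pmf:.5f}  normal density {pdf:.5f}")
```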
### Common misconceptions

Studies have shown that the central limit theorem is subject to several common but serious misconceptions, some of which appear in widely used textbooks.[\[24\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-24)[\[25\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-25)[\[26\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-26) These include:

- The misconceived belief that the theorem applies to random sampling of any variable, rather than to the mean values (or sums) of [iid](https://en.wikipedia.org/wiki/Iid "Iid") random variables extracted from a population by repeated sampling. That is, the theorem assumes the random sampling produces a [sampling distribution](https://en.wikipedia.org/wiki/Sampling_distribution "Sampling distribution") formed from different values of means (or sums) of such random variables.
- The misconceived belief that the theorem ensures that random sampling leads to the emergence of a normal distribution for sufficiently large samples of any random variable, regardless of the population distribution. In reality, such sampling asymptotically reproduces the properties of the population, an intuitive result underpinned by the [Glivenko–Cantelli theorem](https://en.wikipedia.org/wiki/Glivenko%E2%80%93Cantelli_theorem "Glivenko–Cantelli theorem").
- The misconceived belief that the theorem leads to a good approximation of a normal distribution for sample sizes greater than around 30,[\[27\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-27) allowing reliable inferences regardless of the nature of the population. In reality, this empirical rule of thumb has no valid justification, and can lead to seriously flawed inferences. See [Z-test](https://en.wikipedia.org/wiki/Z-test "Z-test") for where the approximation holds.

### Relation to the law of large numbers

The [law of large numbers](https://en.wikipedia.org/wiki/Law_of_large_numbers "Law of large numbers") as well as the central limit theorem are partial solutions to a general problem: "What is the limiting behavior of $S_n$ as $n$ approaches infinity?" In mathematical analysis, [asymptotic series](https://en.wikipedia.org/wiki/Asymptotic_series "Asymptotic series") are one of the most popular tools employed to approach such questions. Suppose we have an asymptotic expansion of $f(n)$:

$$f(n) = a_1 \varphi_1(n) + a_2 \varphi_2(n) + O\big(\varphi_3(n)\big) \qquad (n \to \infty).$$

Dividing both sides by $\varphi_1(n)$ and taking the limit produces $a_1$, the coefficient of the highest-order term in the expansion, which describes the rate at which $f(n)$ changes in its leading term:

$$\lim_{n \to \infty} \frac{f(n)}{\varphi_1(n)} = a_1.$$

Informally, one can say: "$f(n)$ grows approximately as $a_1 \varphi_1(n)$".
Taking the difference between $f(n)$ and its approximation and then dividing by the next term in the expansion, we arrive at a more refined statement about $f(n)$:

$$\lim_{n \to \infty} \frac{f(n) - a_1 \varphi_1(n)}{\varphi_2(n)} = a_2.$$

Here one can say that the difference between the function and its approximation grows approximately as $a_2 \varphi_2(n)$. The idea is that dividing the function by appropriate normalizing functions, and looking at the limiting behavior of the result, can tell us much about the limiting behavior of the original function itself. Informally, something along these lines happens when the sum, $S_n$, of independent identically distributed random variables, $X_1, \ldots, X_n$, is studied in classical probability theory.\[*[citation needed](https://en.wikipedia.org/wiki/Wikipedia:Citation_needed "Wikipedia:Citation needed")*\] If each $X_i$ has finite mean $\mu$, then by the law of large numbers, $S_n/n \to \mu$.[\[28\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-28) If in addition each $X_i$ has finite variance $\sigma^2$, then by the central limit theorem,

$$\frac{S_n - n\mu}{\sqrt{n}} \to \xi,$$

where $\xi$ is distributed as $\mathcal{N}(0, \sigma^2)$. This provides values of the first two constants in the informal expansion

$$S_n \approx \mu n + \xi \sqrt{n}.$$

In the case where the $X_i$ do not have finite mean or variance, convergence of the shifted and rescaled sum can also occur with different centering and scaling factors:

$$\frac{S_n - a_n}{b_n} \rightarrow \Xi,$$

or informally

$$S_n \approx a_n + \Xi b_n.$$

Distributions $\Xi$ which can arise in this way are called *[stable](https://en.wikipedia.org/wiki/Stable_distribution "Stable distribution")*.[\[29\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-29) Clearly, the normal distribution is stable, but there are also other stable distributions, such as the [Cauchy distribution](https://en.wikipedia.org/wiki/Cauchy_distribution "Cauchy distribution"), for which the mean or variance are not defined. The scaling factor $b_n$ may be proportional to $n^c$ for any $c \geq 1/2$; it may also be multiplied by a [slowly varying function](https://en.wikipedia.org/wiki/Slowly_varying_function "Slowly varying function") of $n$.[\[30\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-Uchaikin-30)[\[31\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-31)

The [law of the iterated logarithm](https://en.wikipedia.org/wiki/Law_of_the_iterated_logarithm "Law of the iterated logarithm") specifies what is happening "in between" the [law of large numbers](https://en.wikipedia.org/wiki/Law_of_large_numbers "Law of large numbers") and the central limit theorem. Specifically it says that the normalizing function $\sqrt{n \log\log n}$, intermediate in size between $n$ of the law of large numbers and $\sqrt{n}$ of the central limit theorem, provides a non-trivial limiting behavior.
### Alternative statements of the theorem

The [density](https://en.wikipedia.org/wiki/Probability_density_function "Probability density function") of the sum of two or more independent variables is the [convolution](https://en.wikipedia.org/wiki/Convolution "Convolution") of their densities (if these densities exist). Thus the central limit theorem can be interpreted as a statement about the properties of density functions under convolution: the convolution of a number of density functions tends to the normal density as the number of density functions increases without bound. These theorems require stronger hypotheses than the forms of the central limit theorem given above. Theorems of this type are often called local limit theorems. See Petrov[\[32\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-32) for a particular local limit theorem for sums of [independent and identically distributed random variables](https://en.wikipedia.org/wiki/Independent_and_identically_distributed_random_variables "Independent and identically distributed random variables").

#### Characteristic functions

Since the [characteristic function](https://en.wikipedia.org/wiki/Characteristic_function_\(probability_theory\) "Characteristic function (probability theory)") of a convolution is the product of the characteristic functions of the densities involved, the central limit theorem has yet another restatement: the product of the characteristic functions of a number of density functions becomes close to the characteristic function of the normal density as the number of density functions increases without bound, under the conditions stated above. Specifically, an appropriate scaling factor needs to be applied to the argument of the characteristic function. An equivalent statement can be made about [Fourier transforms](https://en.wikipedia.org/wiki/Fourier_transform "Fourier transform"), since the characteristic function is essentially a Fourier transform.

### Calculating the variance

Let $S_n$ be the sum of $n$ random variables. Many central limit theorems provide conditions such that $S_n/\sqrt{\operatorname{Var}(S_n)}$ converges in distribution to $\mathcal{N}(0,1)$ (the normal distribution with mean 0, variance 1) as $n \to \infty$. In some cases, it is possible to find a constant $\sigma^2$ and a function $f(n)$ such that $S_n/(\sigma\sqrt{n\,f(n)})$ converges in distribution to $\mathcal{N}(0,1)$ as $n \to \infty$.

### Products of positive random variables

The [logarithm](https://en.wikipedia.org/wiki/Logarithm "Logarithm") of a product is simply the sum of the logarithms of the factors. Therefore, when the logarithm of a product of random variables that take only positive values approaches a normal distribution, the product itself approaches a [log-normal distribution](https://en.wikipedia.org/wiki/Log-normal_distribution "Log-normal distribution"). Many physical quantities (especially mass or length, which are a matter of scale and cannot be negative) are the products of different [random](https://en.wikipedia.org/wiki/Random "Random") factors, so they follow a log-normal distribution. This multiplicative version of the central limit theorem is sometimes called [Gibrat's law](https://en.wikipedia.org/wiki/Gibrat%27s_law "Gibrat's law"). Whereas the central limit theorem for sums of random variables requires the condition of finite variance, the corresponding theorem for products requires the corresponding condition that the density function be square-integrable.[\[34\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-Rempala-34)
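A minimal sketch of this multiplicative behaviour (assumptions, not from the article: NumPy and Uniform(0.5, 1.5) factors, an arbitrary positive-valued choice): the log of the product is a sum of i.i.d. logs, so by the CLT it should look normal, making the product itself approximately log-normal.

```python
import numpy as np

rng = np.random.default_rng(8)
n, trials = 200, 20_000

# Product of positive i.i.d. factors: log(product) = sum of logs, so the CLT
# applied to the logs makes the product approximately log-normal.
factors = rng.uniform(0.5, 1.5, size=(trials, n))
logs = np.log(factors).sum(axis=1)                 # log of each product
skew = ((logs - logs.mean()) ** 3).mean() / logs.std() ** 3
print(f"log-product: mean {logs.mean():.2f}, sd {logs.std():.2f}, "
      f"skewness {skew:+.3f}")                     # skewness near 0 => normal
```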
Many physical quantities (especially mass or length, which are a matter of scale and cannot be negative) are the products of different [random](https://en.wikipedia.org/wiki/Random "Random") factors, so they follow a log-normal distribution. This multiplicative version of the central limit theorem is sometimes called [Gibrat's law](https://en.wikipedia.org/wiki/Gibrat%27s_law "Gibrat's law"). Whereas the central limit theorem for sums of random variables requires the condition of finite variance, the corresponding theorem for products requires the corresponding condition that the density function be square-integrable.[\[34\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-Rempala-34)

## Beyond the classical framework \[[edit](https://en.wikipedia.org/w/index.php?title=Central_limit_theorem&action=edit&section=22 "Edit section: Beyond the classical framework")\]

Asymptotic normality, that is, [convergence](https://en.wikipedia.org/wiki/Convergence_in_distribution "Convergence in distribution") to the normal distribution after appropriate shift and rescaling, is a phenomenon much more general than the classical framework treated above, namely, sums of independent random variables (or vectors). New frameworks are revealed from time to time; no single unifying framework is available for now.

### Convex body \[[edit](https://en.wikipedia.org/w/index.php?title=Central_limit_theorem&action=edit&section=23 "Edit section: Convex body")\]

**Theorem**—There exists a sequence *Δn* ↓ 0 for which the following holds. Let *n* ≄ 1, and let random variables *X*1, ..., *Xn* have a [log-concave](https://en.wikipedia.org/wiki/Logarithmically_concave_function "Logarithmically concave function") [joint density](https://en.wikipedia.org/wiki/Joint_density_function "Joint density function") *f* such that *f*(*x*1, ..., *xn*) = *f*(\|*x*1\|, ..., \|*xn*\|) for all *x*1, ..., *xn*, and E(*Xk*ÂČ) = 1 for all *k* = 1, ..., *n*. Then the distribution of

![{\\displaystyle {\\frac {X\_{1}+\\cdots +X\_{n}}{\\sqrt {n}}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/506e5cbbd5ebf53c483a6b0434c76bca48a9bdfb)

is *Δn*\-close to ![{\\textstyle {\\mathcal {N}}(0,1)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/8ba37ff02211fb81c813065b489b78ec9df951e7) in the [total variation distance](https://en.wikipedia.org/wiki/Total_variation_distance_of_probability_measures "Total variation distance of probability measures").[\[35\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-FOOTNOTEKlartag2007Theorem_1.2-35)

These two *Δn*\-close distributions have densities (in fact, log-concave densities); thus, the total variation distance between them is the integral of the absolute value of the difference between the densities. Convergence in total variation is stronger than weak convergence.

An important example of a log-concave density is a function constant inside a given convex body and vanishing outside; it corresponds to the uniform distribution on the convex body, which explains the term "central limit theorem for convex bodies".
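
As a concrete, purely illustrative instance (the ℓ1-ball sampler and the parameters below are our assumptions, not part of the theorem): coordinates of a uniform random point in the ℓ1-ball are symmetric and uncorrelated yet dependent, and their normalized sum still looks Gaussian.

```python
# Uniform sampling from the l1-ball: exponential weights normalized to the
# simplex give a point on the l1-sphere, random signs symmetrize it, and a
# U^(1/n) radius fills the interior (illustrative recipe).
import numpy as np

rng = np.random.default_rng(2)
n, trials = 100, 20_000

e = rng.exponential(size=(trials, n))
signs = rng.integers(0, 2, size=(trials, n)) * 2 - 1
y = signs * e / e.sum(axis=1, keepdims=True)
y *= rng.random((trials, 1)) ** (1.0 / n)

x = y / y.std(axis=0, keepdims=True)     # rescale so E(X_kÂČ) = 1, as assumed above

t = x.sum(axis=1) / np.sqrt(n)
print("std of (X_1 + ... + X_n)/√n:", t.std())       # ≈ 1
print("P(|T| < 1.96):", np.mean(np.abs(t) < 1.96))   # ≈ 0.95
# The squares are slightly negatively correlated: dependent, though uncorrelated.
print("corr(X_1ÂČ, X_2ÂČ):", np.corrcoef(x[:, 0]**2, x[:, 1]**2)[0, 1])
```
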
Another example: *f*(*x*1, ..., *xn*) = const · exp(−(\|*x*1\|*α* + ⋯ + \|*xn*\|*α*)*ÎČ*) where *α* \> 1 and *αÎČ* \> 1. If *ÎČ* = 1 then *f*(*x*1, ..., *xn*) factorizes into const · exp(−\|*x*1\|*α*) ⋯ exp(−\|*xn*\|*α*), which means *X*1, ..., *Xn* are independent. In general, however, they are dependent.

The condition *f*(*x*1, ..., *xn*) = *f*(\|*x*1\|, ..., \|*xn*\|) ensures that *X*1, ..., *Xn* are of zero mean and [uncorrelated](https://en.wikipedia.org/wiki/Uncorrelated "Uncorrelated");\[*[citation needed](https://en.wikipedia.org/wiki/Wikipedia:Citation_needed "Wikipedia:Citation needed")*\] still, they need not be independent, nor even [pairwise independent](https://en.wikipedia.org/wiki/Pairwise_independence "Pairwise independence").\[*[citation needed](https://en.wikipedia.org/wiki/Wikipedia:Citation_needed "Wikipedia:Citation needed")*\] Note that pairwise independence cannot replace independence in the classical central limit theorem.[\[36\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-FOOTNOTEDurrett2004Section_2.4,_Example_4.5-36)

Here is a [Berry–Esseen](https://en.wikipedia.org/wiki/Berry%E2%80%93Esseen_theorem "Berry–Esseen theorem") type result.

**Theorem**—Let *X*1, ..., *Xn* satisfy the assumptions of the previous theorem. Then[\[37\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-FOOTNOTEKlartag2008Theorem_1-37)

![{\\displaystyle \\left\|\\mathbb {P} \\left(a\\leq {\\frac {X\_{1}+\\cdots +X\_{n}}{\\sqrt {n}}}\\leq b\\right)-{\\frac {1}{\\sqrt {2\\pi }}}\\int \_{a}^{b}e^{-{\\frac {1}{2}}t^{2}}\\,dt\\right\|\\leq {\\frac {C}{n}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/c68c5077344e1cc79881c190943e766a8f542e00)

for all *a* \< *b*; here *C* is a [universal (absolute) constant](https://en.wikipedia.org/wiki/Mathematical_constant "Mathematical constant"). Moreover, for every *c*1, ..., *cn* ∈ **R** such that *c*1ÂČ + ⋯ + *cn*ÂČ = 1,

![{\\displaystyle \\left\|\\mathbb {P} \\left(a\\leq c\_{1}X\_{1}+\\cdots +c\_{n}X\_{n}\\leq b\\right)-{\\frac {1}{\\sqrt {2\\pi }}}\\int \_{a}^{b}e^{-{\\frac {1}{2}}t^{2}}\\,dt\\right\|\\leq C\\left(c\_{1}^{4}+\\dots +c\_{n}^{4}\\right).}](https://wikimedia.org/api/rest_v1/media/math/render/svg/bbc1a71d4ee2ee74bfd8b7b67959d79c171afc08)

The distribution of (*X*1 + ⋯ + *Xn*)/√*n* need not be approximately normal (in fact, it can be uniform).[\[38\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-FOOTNOTEKlartag2007Theorem_1.1-38) However, the distribution of *c*1*X*1 + ⋯ + *cnXn* is close to ![{\\textstyle {\\mathcal {N}}(0,1)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/8ba37ff02211fb81c813065b489b78ec9df951e7) (in the total variation distance) for most vectors (*c*1, ..., *cn*) according to the uniform distribution on the sphere *c*1ÂČ + ⋯ + *cn*ÂČ = 1.

### Lacunary trigonometric series \[[edit](https://en.wikipedia.org/w/index.php?title=Central_limit_theorem&action=edit&section=24 "Edit section: Lacunary trigonometric series")\]

**Theorem ([Salem](https://en.wikipedia.org/wiki/Rapha%C3%ABl_Salem "RaphaĂ«l Salem")–[Zygmund](https://en.wikipedia.org/wiki/Antoni_Zygmund "Antoni Zygmund"))**—Let *U* be a random variable distributed uniformly on (0, 2π), and *X*ₖ = *r*ₖ cos(*n*ₖ*U* + *a*ₖ), where

- the *n*ₖ satisfy the lacunarity condition: there exists *q* \> 1 such that *n*ₖ₊₁ ≄ *q*·*n*ₖ for all *k*,
- the *r*ₖ are such that ![{\\displaystyle r\_{1}^{2}+r\_{2}^{2}+\\cdots =\\infty \\quad {\\text{ and }}\\quad {\\frac {r\_{k}^{2}}{r\_{1}^{2}+\\cdots +r\_{k}^{2}}}\\to 0,}](https://wikimedia.org/api/rest_v1/media/math/render/svg/7d87209928a7946d7d202155512b294c79e72ada)
- 0 ≀ *a*ₖ \< 2π.

Then[\[39\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-Zygmund-39)[\[40\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-FOOTNOTEGaposhkin1966Theorem_2.1.13-40)

![{\\displaystyle {\\frac {X\_{1}+\\cdots +X\_{k}}{\\sqrt {r\_{1}^{2}+\\cdots +r\_{k}^{2}}}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/f65a0da4b413fce55317ae98f18905fe91442a2c)

converges in distribution to ![{\\textstyle {\\mathcal {N}}{\\big (}0,{\\frac {1}{2}}{\\big )}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/f8c8d6598f79fac7b8c5f683f4319e95306e0ce9).
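
A simulation sketch of this theorem (ours; the concrete choices *n*ₖ = 2ᔏ, *r*ₖ = 1, *a*ₖ = 0 satisfy the hypotheses with *q* = 2):

```python
# Partial sums of the lacunary cosine series sum_k cos(2^k U), U ~ Uniform(0, 2π),
# normalized by sqrt(r_1ÂČ + ... + r_kÂČ) = sqrt(k), should approach N(0, 1/2).
import numpy as np

rng = np.random.default_rng(3)
k, trials = 14, 50_000
u = rng.uniform(0.0, 2.0 * np.pi, size=trials)

terms = np.cos(np.outer(u, 2.0 ** np.arange(1, k + 1)))   # X_j = cos(2^j U)
s = terms.sum(axis=1) / np.sqrt(k)

print("mean:", s.mean())        # ≈ 0
print("variance:", s.var())     # ≈ 0.5, the limiting variance
print("P(|S| < 1.96·√0.5):", np.mean(np.abs(s) < 1.96 * np.sqrt(0.5)))  # ≈ 0.95
```
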
### Gaussian polytopes \[[edit](https://en.wikipedia.org/w/index.php?title=Central_limit_theorem&action=edit&section=25 "Edit section: Gaussian polytopes")\]

**Theorem**—Let *A*1, ..., *An* be independent random points on the plane **R**2, each having the two-dimensional standard normal distribution. Let *Kn* be the [convex hull](https://en.wikipedia.org/wiki/Convex_hull "Convex hull") of these points, and *Xn* the area of *Kn*. Then[\[41\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-FOOTNOTEB%C3%A1r%C3%A1nyVu2007Theorem_1.1-41)

![{\\displaystyle {\\frac {X\_{n}-\\operatorname {E} (X\_{n})}{\\sqrt {\\operatorname {Var} (X\_{n})}}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/98854722b040253e6fe84713c8d08c0307afb0a7)

converges in distribution to ![{\\textstyle {\\mathcal {N}}(0,1)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/8ba37ff02211fb81c813065b489b78ec9df951e7) as *n* tends to infinity.

The same also holds in all dimensions greater than 2. The [polytope](https://en.wikipedia.org/wiki/Convex_polytope "Convex polytope") *Kn* is called a Gaussian [random polytope](https://en.wikipedia.org/wiki/Random_polytope "Random polytope"). A similar result holds for the number of vertices (of the Gaussian polytope), the number of edges, and in fact, faces of all dimensions.[\[42\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-FOOTNOTEB%C3%A1r%C3%A1nyVu2007Theorem_1.2-42)

### Linear functions of orthogonal matrices \[[edit](https://en.wikipedia.org/w/index.php?title=Central_limit_theorem&action=edit&section=26 "Edit section: Linear functions of orthogonal matrices")\]

A linear function of a matrix **M** is a linear combination of its elements (with given coefficients), **M** ↩ tr(**AM**) where **A** is the matrix of the coefficients; see [Trace (linear algebra)\#Inner product](https://en.wikipedia.org/wiki/Trace_\(linear_algebra\)#Inner_product "Trace (linear algebra)").

A random [orthogonal matrix](https://en.wikipedia.org/wiki/Orthogonal_matrix "Orthogonal matrix") is said to be distributed uniformly if its distribution is the normalized [Haar measure](https://en.wikipedia.org/wiki/Haar_measure "Haar measure") on the [orthogonal group](https://en.wikipedia.org/wiki/Orthogonal_group "Orthogonal group") O(*n*,**R**); see [Rotation matrix\#Uniform random rotation matrices](https://en.wikipedia.org/wiki/Rotation_matrix#Uniform_random_rotation_matrices "Rotation matrix").

**Theorem**—Let **M** be a random orthogonal *n* × *n* matrix distributed uniformly, and **A** a fixed *n* × *n* matrix such that tr(**AA**\*) = *n*, and let *X* = tr(**AM**). Then[\[43\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-Meckes-43) the distribution of *X* is close to ![{\\textstyle {\\mathcal {N}}(0,1)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/8ba37ff02211fb81c813065b489b78ec9df951e7) in the total variation metric up to\[*[clarification needed](https://en.wikipedia.org/wiki/Wikipedia:Please_clarify "Wikipedia:Please clarify")*\] 2√3/(*n* − 1).
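
A small numerical sketch (ours; both the Haar sampler and the particular choice **A** = √*n*·*e*1*e*1ᔀ, which gives tr(**AA**\*) = *n* and *X* = √*n*·*M*11, are illustrative assumptions):

```python
# Haar-uniform orthogonal matrices via QR of a Gaussian matrix (with a sign
# correction from the diagonal of R); then X = tr(AM) = sqrt(n) * M[0, 0].
import numpy as np

rng = np.random.default_rng(4)
n, trials = 100, 5_000

xs = np.empty(trials)
for i in range(trials):
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    q = q * np.sign(np.diag(r))      # fix column signs so the law is exactly Haar
    xs[i] = np.sqrt(n) * q[0, 0]     # X = tr(AM) for A = sqrt(n) * e1 e1^T

print("mean, std:", xs.mean(), xs.std())             # ≈ 0, 1
print("P(|X| < 1.96):", np.mean(np.abs(xs) < 1.96))  # ≈ 0.95
```
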
### Subsequences \[[edit](https://en.wikipedia.org/w/index.php?title=Central_limit_theorem&action=edit&section=27 "Edit section: Subsequences")\]

**Theorem**—Let random variables *X*1, *X*2, ... ∈ *L*2(Ω) be such that *Xn* → 0 [weakly](https://en.wikipedia.org/wiki/Weak_convergence_\(Hilbert_space\) "Weak convergence (Hilbert space)") in *L*2(Ω) and *Xn*ÂČ → 1 weakly in *L*1(Ω). Then there exist integers *n*1 \< *n*2 \< ⋯ such that

![{\\displaystyle {\\frac {X\_{n\_{1}}+\\cdots +X\_{n\_{k}}}{\\sqrt {k}}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/2926d591f2008d629880d119762ac66b916f18f2)

converges in distribution to ![{\\textstyle {\\mathcal {N}}(0,1)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/8ba37ff02211fb81c813065b489b78ec9df951e7) as *k* tends to infinity.[\[44\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-FOOTNOTEGaposhkin1966Sect._1.5-44)

### Random walk on a crystal lattice \[[edit](https://en.wikipedia.org/w/index.php?title=Central_limit_theorem&action=edit&section=28 "Edit section: Random walk on a crystal lattice")\]

The central limit theorem may be established for the simple [random walk](https://en.wikipedia.org/wiki/Random_walk "Random walk") on a crystal lattice (an infinite-fold abelian covering graph over a finite graph), and is used for the design of crystal structures.[\[45\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-45)[\[46\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-46)

## Applications and examples \[[edit](https://en.wikipedia.org/w/index.php?title=Central_limit_theorem&action=edit&section=29 "Edit section: Applications and examples")\]

A simple example of the central limit theorem is rolling many identical, unbiased dice. The distribution of the sum (or average) of the rolled numbers will be well approximated by a normal distribution, as in the simulation sketched below. Since real-world quantities are often the balanced sum of many unobserved random events, the central limit theorem also provides a partial explanation for the prevalence of the normal probability distribution. It also justifies the approximation of large-sample [statistics](https://en.wikipedia.org/wiki/Statistic "Statistic") to the normal distribution in controlled experiments.
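
A minimal dice simulation (ours, not from the article; a fair die has mean 3.5 and variance 35/12):

```python
# Sums of n fair six-sided dice, standardized by mean 3.5n and variance 35n/12,
# approach the standard normal distribution as n grows.
import numpy as np

rng = np.random.default_rng(5)
for n in (1, 2, 5, 50):
    sums = rng.integers(1, 7, size=(100_000, n)).sum(axis=1)
    z = (sums - 3.5 * n) / np.sqrt(35.0 * n / 12.0)
    print(f"n={n:3d}  P(|Z| < 1) = {np.mean(np.abs(z) < 1):.3f}  (normal: 0.683)")
```
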
[![](https://upload.wikimedia.org/wikipedia/commons/thumb/8/8c/Dice_sum_central_limit_theorem.svg/330px-Dice_sum_central_limit_theorem.svg.png)](https://en.wikipedia.org/wiki/File:Dice_sum_central_limit_theorem.svg) Comparison of probability density functions *p*(*k*) for the sum of *n* fair 6-sided dice, showing their convergence to a normal distribution with increasing *n*, in accordance with the central limit theorem. In the bottom-right graph, smoothed profiles of the previous graphs are rescaled, superimposed and compared with a normal distribution (black curve).

[![](https://upload.wikimedia.org/wikipedia/commons/thumb/2/2d/Empirical_CLT_-_Figure_-_040711.jpg/500px-Empirical_CLT_-_Figure_-_040711.jpg)](https://en.wikipedia.org/wiki/File:Empirical_CLT_-_Figure_-_040711.jpg) This figure demonstrates the central limit theorem. The sample means are generated using a random number generator, which draws numbers between 0 and 100 from a uniform probability distribution. It illustrates that increasing sample sizes result in the 500 measured sample means being more closely distributed about the population mean (50 in this case). It also compares the observed distributions with the distributions that would be expected for a normalized Gaussian distribution, and shows the [chi-squared](https://en.wikipedia.org/wiki/Pearson%27s_chi-squared_test "Pearson's chi-squared test") values that quantify the goodness of the fit (the fit is good if the reduced [chi-squared](https://en.wikipedia.org/wiki/Pearson%27s_chi-squared_test "Pearson's chi-squared test") value is less than or approximately equal to one). The input into the normalized Gaussian function is the mean of sample means (~50) and the mean sample standard deviation divided by the square root of the sample size (~28.87/√*n*), which is called the standard deviation of the mean (since it refers to the spread of sample means).

[![](https://upload.wikimedia.org/wikipedia/commons/thumb/7/75/Mean-of-the-outcomes-of-rolling-a-fair-coin-n-times.svg/960px-Mean-of-the-outcomes-of-rolling-a-fair-coin-n-times.svg.png)](https://en.wikipedia.org/wiki/File:Mean-of-the-outcomes-of-rolling-a-fair-coin-n-times.svg) Another simulation using the binomial distribution. Random 0s and 1s were generated, and then their means calculated for sample sizes ranging from 1 to 2048. As the sample size increases, the tails become thinner and the distribution becomes more concentrated around the mean.

[Regression analysis](https://en.wikipedia.org/wiki/Regression_analysis "Regression analysis"), and in particular [ordinary least squares](https://en.wikipedia.org/wiki/Ordinary_least_squares "Ordinary least squares"), specifies that a [dependent variable](https://en.wikipedia.org/wiki/Dependent_variable "Dependent variable") depends according to some function upon one or more [independent variables](https://en.wikipedia.org/wiki/Independent_variable "Independent variable"), with an additive [error term](https://en.wikipedia.org/wiki/Errors_and_residuals_in_statistics "Errors and residuals in statistics"). Various types of statistical inference on the regression assume that the error term is normally distributed. This assumption can be justified by assuming that the error term is actually the sum of many independent error terms; even if the individual error terms are not normally distributed, by the central limit theorem their sum can be well approximated by a normal distribution.

### Other illustrations \[[edit](https://en.wikipedia.org/w/index.php?title=Central_limit_theorem&action=edit&section=31 "Edit section: Other illustrations")\]

Given its importance to statistics, a number of papers and computer packages are available that demonstrate the convergence involved in the central limit theorem.[\[47\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-Marasinghe-47)

## History

Dutch mathematician [Henk Tijms](https://en.wikipedia.org/wiki/Henk_Tijms "Henk Tijms") writes:[\[48\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-Tijms-48)

> The central limit theorem has an interesting history. The first version of this theorem was postulated by the French-born mathematician [Abraham de Moivre](https://en.wikipedia.org/wiki/Abraham_de_Moivre "Abraham de Moivre") who, in a remarkable article published in 1733, used the normal distribution to approximate the distribution of the number of heads resulting from many tosses of a fair coin.
> This finding was far ahead of its time, and was nearly forgotten until the famous French mathematician [Pierre-Simon Laplace](https://en.wikipedia.org/wiki/Pierre-Simon_Laplace "Pierre-Simon Laplace") rescued it from obscurity in his monumental work *ThĂ©orie analytique des probabilitĂ©s*, which was published in 1812. Laplace expanded De Moivre's finding by approximating the binomial distribution with the normal distribution. But as with De Moivre, Laplace's finding received little attention in his own time. It was not until the nineteenth century was at an end that the importance of the central limit theorem was discerned, when, in 1901, Russian mathematician [Aleksandr Lyapunov](https://en.wikipedia.org/wiki/Aleksandr_Lyapunov "Aleksandr Lyapunov") defined it in general terms and proved precisely how it worked mathematically. Nowadays, the central limit theorem is considered to be the unofficial sovereign of probability theory.

Sir [Francis Galton](https://en.wikipedia.org/wiki/Francis_Galton "Francis Galton") described the Central Limit Theorem in this way:[\[49\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-49)

> I know of scarcely anything so apt to impress the imagination as the wonderful form of cosmic order expressed by the "Law of Frequency of Error". The law would have been personified by the Greeks and deified, if they had known of it. It reigns with serenity and in complete self-effacement, amidst the wildest confusion. The huger the mob, and the greater the apparent anarchy, the more perfect is its sway. It is the supreme law of Unreason. Whenever a large sample of chaotic elements are taken in hand and marshalled in the order of their magnitude, an unsuspected and most beautiful form of regularity proves to have been latent all along.

The actual term "central limit theorem" (in German: "zentraler Grenzwertsatz") was first used by [George PĂłlya](https://en.wikipedia.org/wiki/George_P%C3%B3lya "George PĂłlya") in 1920 in the title of a paper.[\[50\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-Polya1920-50)[\[51\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-LC1986-51) PĂłlya referred to the theorem as "central" due to its importance in probability theory. According to Le Cam, the French school of probability interprets the word *central* in the sense that "it describes the behaviour of the centre of the distribution as opposed to its tails".[\[51\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-LC1986-51) The abstract of the paper *On the central limit theorem of calculus of probability and the problem of moments* by PĂłlya[\[50\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-Polya1920-50) in 1920 translates as follows.

> The occurrence of the Gaussian probability density *e*^(−*x*ÂČ) in repeated experiments, in errors of measurements, which result in the combination of very many and very small elementary errors, in diffusion processes etc., can be explained, as is well-known, by the very same limit theorem, which plays a central role in the calculus of probability. The actual discoverer of this limit theorem is to be named Laplace; it is likely that its rigorous proof was first given by Tschebyscheff and its sharpest formulation can be found, as far as I am aware of, in an article by [Liapounoff](https://en.wikipedia.org/wiki/Aleksandr_Lyapunov "Aleksandr Lyapunov"). ...
A thorough account of the theorem's history, detailing Laplace's foundational work, as well as [Cauchy](https://en.wikipedia.org/wiki/Augustin-Louis_Cauchy "Augustin-Louis Cauchy")'s, [Bessel](https://en.wikipedia.org/wiki/Friedrich_Bessel "Friedrich Bessel")'s and [Poisson](https://en.wikipedia.org/wiki/Sim%C3%A9on_Denis_Poisson "SimĂ©on Denis Poisson")'s contributions, is provided by Hald.[\[52\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-Hald-52) Two historical accounts, one covering the development from Laplace to Cauchy, the second the contributions by [von Mises](https://en.wikipedia.org/wiki/Richard_von_Mises "Richard von Mises"), [PĂłlya](https://en.wikipedia.org/wiki/George_P%C3%B3lya "George PĂłlya"), [Lindeberg](https://en.wikipedia.org/wiki/Jarl_Waldemar_Lindeberg "Jarl Waldemar Lindeberg"), [LĂ©vy](https://en.wikipedia.org/wiki/Paul_L%C3%A9vy_\(mathematician\) "Paul LĂ©vy (mathematician)"), and [CramĂ©r](https://en.wikipedia.org/wiki/Harald_Cram%C3%A9r "Harald CramĂ©r") during the 1920s, are given by Hans Fischer.[\[53\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-FOOTNOTEFischer2011Chapter_2;_Chapter_5.2-53) Le Cam describes a period around 1935.[\[51\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-LC1986-51) Bernstein[\[54\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-Bernstein-54) presents a historical discussion focusing on the work of [Pafnuty Chebyshev](https://en.wikipedia.org/wiki/Pafnuty_Chebyshev "Pafnuty Chebyshev") and his students [Andrey Markov](https://en.wikipedia.org/wiki/Andrey_Markov "Andrey Markov") and [Aleksandr Lyapunov](https://en.wikipedia.org/wiki/Aleksandr_Lyapunov "Aleksandr Lyapunov") that led to the first proofs of the CLT in a general setting.

A curious footnote to the history of the Central Limit Theorem is that a proof of a result similar to the 1922 Lindeberg CLT was the subject of [Alan Turing](https://en.wikipedia.org/wiki/Alan_Turing "Alan Turing")'s 1934 Fellowship Dissertation for [King's College](https://en.wikipedia.org/wiki/King%27s_College,_Cambridge "King's College, Cambridge") at the [University of Cambridge](https://en.wikipedia.org/wiki/University_of_Cambridge "University of Cambridge"). Only after submitting the work did Turing learn it had already been proved. Consequently, Turing's dissertation was not published.[\[55\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-55)

## See also

- [Asymptotic equipartition property](https://en.wikipedia.org/wiki/Asymptotic_equipartition_property "Asymptotic equipartition property")
- [Asymptotic distribution](https://en.wikipedia.org/wiki/Asymptotic_distribution "Asymptotic distribution")
- [Bates distribution](https://en.wikipedia.org/wiki/Bates_distribution "Bates distribution")
- [Benford's law](https://en.wikipedia.org/wiki/Benford%27s_law "Benford's law") – result of extension of CLT to product of random variables.
- [Berry–Esseen theorem](https://en.wikipedia.org/wiki/Berry%E2%80%93Esseen_theorem "Berry–Esseen theorem")
- [Central limit theorem for directional statistics](https://en.wikipedia.org/wiki/Central_limit_theorem_for_directional_statistics "Central limit theorem for directional statistics") – Central limit theorem applied to the case of directional statistics
- [Delta method](https://en.wikipedia.org/wiki/Delta_method "Delta method") – to compute the limit distribution of a function of a random variable.
- [ErdƑs–Kac theorem](https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Kac_theorem "ErdƑs–Kac theorem") – connects the number of prime factors of an integer with the normal probability distribution
- [Fisher–Tippett–Gnedenko theorem](https://en.wikipedia.org/wiki/Fisher%E2%80%93Tippett%E2%80%93Gnedenko_theorem "Fisher–Tippett–Gnedenko theorem") – limit theorem for extremum values (such as max{*Xn*})
- [Irwin–Hall distribution](https://en.wikipedia.org/wiki/Irwin%E2%80%93Hall_distribution "Irwin–Hall distribution")
- [Markov chain central limit theorem](https://en.wikipedia.org/wiki/Markov_chain_central_limit_theorem "Markov chain central limit theorem")
- [Normal distribution](https://en.wikipedia.org/wiki/Normal_distribution "Normal distribution")
- [Tweedie convergence theorem](https://en.wikipedia.org/wiki/Tweedie_distribution "Tweedie distribution") – a theorem that can be considered to bridge between the central limit theorem and the [Poisson convergence theorem](https://en.wikipedia.org/wiki/Poisson_convergence_theorem "Poisson convergence theorem")[\[56\]](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_note-J%C3%B8rgensen-1997-56)
- [Donsker's theorem](https://en.wikipedia.org/wiki/Donsker%27s_theorem "Donsker's theorem")

## References

1. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-FOOTNOTEFischer2011_1-0)** [Fischer (2011)](https://en.wikipedia.org/wiki/Central_limit_theorem#CITEREFFischer2011), p. \[*[page needed](https://en.wikipedia.org/wiki/Wikipedia:Citing_sources "Wikipedia:Citing sources")*\]. 2. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-2)** Montgomery, Douglas C.; Runger, George C. (2014). *Applied Statistics and Probability for Engineers* (6th ed.). Wiley. p. 241. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [9781118539712](https://en.wikipedia.org/wiki/Special:BookSources/9781118539712 "Special:BookSources/9781118539712") . 3. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-3)** Rouaud, Mathieu (2013). [*Probability, Statistics and Estimation*](http://www.incertitudes.fr/book.pdf) (PDF). p. 10. [Archived](https://ghostarchive.org/archive/20221009/http://www.incertitudes.fr/book.pdf) (PDF) from the original on 2022-10-09. 4. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-FOOTNOTEBillingsley1995357_4-0)** [Billingsley (1995)](https://en.wikipedia.org/wiki/Central_limit_theorem#CITEREFBillingsley1995), p. 357. 5. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-FOOTNOTEBauer2001199Theorem_30.13_5-0)** [Bauer (2001)](https://en.wikipedia.org/wiki/Central_limit_theorem#CITEREFBauer2001), p. 199, Theorem 30.13. 6. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-FOOTNOTEBillingsley1995362_6-0)** [Billingsley (1995)](https://en.wikipedia.org/wiki/Central_limit_theorem#CITEREFBillingsley1995), p. 362. 7. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-7)** Robbins, Herbert (1948).
["The asymptotic distribution of the sum of a random number of random variables"](https://projecteuclid.org/journals/bulletin-of-the-american-mathematical-society/volume-54/issue-12/The-asymptotic-distribution-of-the-sum-of-a-random-number/bams/1183513324.full). *Bull. Amer. Math. Soc*. **54** (12): 1151–1161\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1090/S0002-9904-1948-09142-X](https://doi.org/10.1090%2FS0002-9904-1948-09142-X). 8. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-8)** Chen, Louis H.Y.; Goldstein, Larry; Shao, Qi-Man (2011). *Normal Approximation by Stein's Method*. Berlin Heidelberg: Springer-Verlag. pp. 270–271\. 9. ^ [***a***](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-vanderVaart_9-0) [***b***](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-vanderVaart_9-1) van der Vaart, A.W. (1998). *Asymptotic statistics*. New York, NY: Cambridge University Press. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-521-49603-2](https://en.wikipedia.org/wiki/Special:BookSources/978-0-521-49603-2 "Special:BookSources/978-0-521-49603-2") . [LCCN](https://en.wikipedia.org/wiki/LCCN_\(identifier\) "LCCN (identifier)") [98015176](https://lccn.loc.gov/98015176). 10. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-10)** [O’Donnell, Ryan](https://en.wikipedia.org/wiki/Ryan_O%27Donnell_\(computer_scientist\) "Ryan O'Donnell (computer scientist)") (2014). ["Theorem 5.38"](https://web.archive.org/web/20190408054104/http://www.contrib.andrew.cmu.edu/~ryanod/?p=866). Archived from [the original](http://www.contrib.andrew.cmu.edu/~ryanod/?p=866) on 2019-04-08. Retrieved 2017-10-18. 11. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-11)** Bentkus, V. (2005). "A Lyapunov-type bound in ![{\\displaystyle \\mathbb {R} ^{d}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/a713426956296f1668fce772df3c60b9dde8a685)". *Theory Probab. Appl*. **49** (2): 311–323\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1137/S0040585X97981123](https://doi.org/10.1137%2FS0040585X97981123). 12. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-12)** Le Cam, L. (February 1986). "The Central Limit Theorem around 1935". *Statistical Science*. **1** (1): 78–91\. [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2245503](https://www.jstor.org/stable/2245503). 13. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-13)** LĂ©vy, Paul (1937). *Theorie de l'addition des variables aleatoires* \[*Combination theory of unpredictable variables*\] (in French). Paris: Gauthier-Villars. 14. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-14)** Gnedenko, Boris Vladimirovich; Kologorov, AndreÄ­ Nikolaevich; Doob, Joseph L.; Hsu, Pao-Lu (1968). *Limit distributions for sums of independent random variables*. Reading, MA: Addison-wesley. 15. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-15)** Nolan, John P. (2020). [*Univariate stable distributions, Models for Heavy Tailed Data*](https://doi.org/10.1007/978-3-030-52915-4). Springer Series in Operations Research and Financial Engineering. Switzerland: Springer. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1007/978-3-030-52915-4](https://doi.org/10.1007%2F978-3-030-52915-4). 
[ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-3-030-52914-7](https://en.wikipedia.org/wiki/Special:BookSources/978-3-030-52914-7 "Special:BookSources/978-3-030-52914-7") . [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [226648987](https://api.semanticscholar.org/CorpusID:226648987). 16. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-FOOTNOTEBillingsley1995Theorem_27.4_16-0)** [Billingsley (1995)](https://en.wikipedia.org/wiki/Central_limit_theorem#CITEREFBillingsley1995), Theorem 27.4. 17. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-FOOTNOTEDurrett2004Sect._7.7\(c\),_Theorem_7.8_17-0)** [Durrett (2004)](https://en.wikipedia.org/wiki/Central_limit_theorem#CITEREFDurrett2004), Sect. 7.7(c), Theorem 7.8. 18. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-FOOTNOTEDurrett2004Sect._7.7,_Theorem_7.4_18-0)** [Durrett (2004)](https://en.wikipedia.org/wiki/Central_limit_theorem#CITEREFDurrett2004), Sect. 7.7, Theorem 7.4. 19. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-FOOTNOTEBillingsley1995Theorem_35.12_19-0)** [Billingsley (1995)](https://en.wikipedia.org/wiki/Central_limit_theorem#CITEREFBillingsley1995), Theorem 35.12. 20. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-20)** Lemons, Don (2003). [*An Introduction to Stochastic Processes in Physics*](https://jhupbooks.press.jhu.edu/content/introduction-stochastic-processes-physics). Johns Hopkins University Press. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.56021/9780801868665](https://doi.org/10.56021%2F9780801868665). [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [9780801876387](https://en.wikipedia.org/wiki/Special:BookSources/9780801876387 "Special:BookSources/9780801876387") . Retrieved 2016-08-11. 21. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-stein1972_21-0)** [Stein, C.](https://en.wikipedia.org/wiki/Charles_Stein_\(statistician\) "Charles Stein (statistician)") (1972). ["A bound for the error in the normal approximation to the distribution of a sum of dependent random variables"](https://projecteuclid.org/euclid.bsmsp/1200514239). *Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability*. **6** (2): 583–602\. [MR](https://en.wikipedia.org/wiki/MR_\(identifier\) "MR (identifier)") [0402873](https://mathscinet.ams.org/mathscinet-getitem?mr=0402873). [Zbl](https://en.wikipedia.org/wiki/Zbl_\(identifier\) "Zbl (identifier)") [0278\.60026](https://zbmath.org/?format=complete&q=an:0278.60026). 22. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-22)** Chen, L. H. Y.; Goldstein, L.; Shao, Q. M. (2011). *Normal approximation by Stein's method*. Springer. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-3-642-15006-7](https://en.wikipedia.org/wiki/Special:BookSources/978-3-642-15006-7 "Special:BookSources/978-3-642-15006-7") . 23. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-ABBN_23-0)** [Artstein, S.](https://en.wikipedia.org/wiki/Shiri_Artstein "Shiri Artstein"); [Ball, K.](https://en.wikipedia.org/wiki/Keith_Martin_Ball "Keith Martin Ball"); [Barthe, F.](https://en.wikipedia.org/wiki/Franck_Barthe "Franck Barthe"); [Naor, A.](https://en.wikipedia.org/wiki/Assaf_Naor "Assaf Naor") (2004). 
["Solution of Shannon's Problem on the Monotonicity of Entropy"](https://doi.org/10.1090%2FS0894-0347-04-00459-X). *Journal of the American Mathematical Society*. **17** (4): 975–982\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1090/S0894-0347-04-00459-X](https://doi.org/10.1090%2FS0894-0347-04-00459-X). 24. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-24)** Brewer, J. K. (1985). "Behavioral statistics textbooks: Source of myths and misconceptions?". *Journal of Educational Statistics*. **10** (3): 252–268\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.3102/10769986010003252](https://doi.org/10.3102%2F10769986010003252). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [119611584](https://api.semanticscholar.org/CorpusID:119611584). 25. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-25)** Yu, C.; Behrens, J.; Spencer, A. Identification of Misconception in the Central Limit Theorem and Related Concepts, *American Educational Research Association* lecture 19 April 1995 26. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-26)** Sotos, A. E. C.; Vanhoof, S.; Van den Noortgate, W.; Onghena, P. (2007). ["Students' misconceptions of statistical inference: A review of the empirical evidence from research on statistics education"](https://lirias.kuleuven.be/handle/123456789/136347). *Educational Research Review*. **2** (2): 98–113\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1016/j.edurev.2007.04.001](https://doi.org/10.1016%2Fj.edurev.2007.04.001). 27. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-27)** ["Sampling distribution of the sample mean"](https://web.archive.org/web/20230602200310/https://www.khanacademy.org/math/statistics-probability/sampling-distributions-library/sample-means/v/sampling-distribution-of-the-sample-mean). *Khan Academy*. 2 June 2023. Archived from [the original](https://www.khanacademy.org/math/statistics-probability/sampling-distributions-library/sample-means/v/sampling-distribution-of-the-sample-mean) (video) on 2023-06-02. Retrieved 2023-10-08. 28. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-28)** Rosenthal, Jeffrey Seth (2000). *A First Look at Rigorous Probability Theory*. World Scientific. Theorem 5.3.4, p. 47. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [981-02-4322-7](https://en.wikipedia.org/wiki/Special:BookSources/981-02-4322-7 "Special:BookSources/981-02-4322-7") . 29. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-29)** Johnson, Oliver Thomas (2004). *Information Theory and the Central Limit Theorem*. Imperial College Press. p. 88. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [1-86094-473-6](https://en.wikipedia.org/wiki/Special:BookSources/1-86094-473-6 "Special:BookSources/1-86094-473-6") . 30. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-Uchaikin_30-0)** Uchaikin, Vladimir V.; Zolotarev, V.M. (1999). *Chance and Stability: Stable distributions and their applications*. VSP. pp. 61–62\. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [90-6764-301-7](https://en.wikipedia.org/wiki/Special:BookSources/90-6764-301-7 "Special:BookSources/90-6764-301-7") . 31. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-31)** Borodin, A. N.; Ibragimov, I. A.; Sudakov, V. N. 
(1995). *Limit Theorems for Functionals of Random Walks*. AMS Bookstore. Theorem 1.1, p. 8. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [0-8218-0438-3](https://en.wikipedia.org/wiki/Special:BookSources/0-8218-0438-3 "Special:BookSources/0-8218-0438-3") . 32. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-32)** Petrov, V. V. (1976). [*Sums of Independent Random Variables*](https://books.google.com/books?id=zSDqCAAAQBAJ). New York-Heidelberg: Springer-Verlag. ch. 7. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [9783642658099](https://en.wikipedia.org/wiki/Special:BookSources/9783642658099 "Special:BookSources/9783642658099") . 33. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-33)** Hew, Patrick Chisan (2017). "Asymptotic distribution of rewards accumulated by alternating renewal processes". *Statistics and Probability Letters*. **129**: 355–359\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1016/j.spl.2017.06.027](https://doi.org/10.1016%2Fj.spl.2017.06.027). 34. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-Rempala_34-0)** Rempala, G.; Wesolowski, J. (2002). ["Asymptotics of products of sums and *U*\-statistics"](https://projecteuclid.org/journals/electronic-communications-in-probability/volume-7/issue-none/Asymptotics-for-Products-of-Sums-and-U-statistics/10.1214/ECP.v7-1046.pdf) (PDF). *Electronic Communications in Probability*. **7**: 47–54\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1214/ecp.v7-1046](https://doi.org/10.1214%2Fecp.v7-1046). 35. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-FOOTNOTEKlartag2007Theorem_1.2_35-0)** [Klartag (2007)](https://en.wikipedia.org/wiki/Central_limit_theorem#CITEREFKlartag2007), Theorem 1.2. 36. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-FOOTNOTEDurrett2004Section_2.4,_Example_4.5_36-0)** [Durrett (2004)](https://en.wikipedia.org/wiki/Central_limit_theorem#CITEREFDurrett2004), Section 2.4, Example 4.5. 37. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-FOOTNOTEKlartag2008Theorem_1_37-0)** [Klartag (2008)](https://en.wikipedia.org/wiki/Central_limit_theorem#CITEREFKlartag2008), Theorem 1. 38. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-FOOTNOTEKlartag2007Theorem_1.1_38-0)** [Klartag (2007)](https://en.wikipedia.org/wiki/Central_limit_theorem#CITEREFKlartag2007), Theorem 1.1. 39. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-Zygmund_39-0)** [Zygmund, Antoni](https://en.wikipedia.org/wiki/Antoni_Zygmund "Antoni Zygmund") (2003) \[1959\]. [*Trigonometric Series*](https://en.wikipedia.org/wiki/Trigonometric_Series "Trigonometric Series"). Cambridge University Press. vol. II, sect. XVI.5, Theorem 5-5. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [0-521-89053-5](https://en.wikipedia.org/wiki/Special:BookSources/0-521-89053-5 "Special:BookSources/0-521-89053-5") . 40. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-FOOTNOTEGaposhkin1966Theorem_2.1.13_40-0)** [Gaposhkin (1966)](https://en.wikipedia.org/wiki/Central_limit_theorem#CITEREFGaposhkin1966), Theorem 2.1.13. 41. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-FOOTNOTEB%C3%A1r%C3%A1nyVu2007Theorem_1.1_41-0)** [BĂĄrĂĄny & Vu (2007)](https://en.wikipedia.org/wiki/Central_limit_theorem#CITEREFB%C3%A1r%C3%A1nyVu2007), Theorem 1.1. 
42. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-FOOTNOTEB%C3%A1r%C3%A1nyVu2007Theorem_1.2_42-0)** [BĂĄrĂĄny & Vu (2007)](https://en.wikipedia.org/wiki/Central_limit_theorem#CITEREFB%C3%A1r%C3%A1nyVu2007), Theorem 1.2. 43. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-Meckes_43-0)** [Meckes, Elizabeth](https://en.wikipedia.org/wiki/Elizabeth_Meckes "Elizabeth Meckes") (2008). "Linear functions on the classical matrix groups". *Transactions of the American Mathematical Society*. **360** (10): 5355–5366\. [arXiv](https://en.wikipedia.org/wiki/ArXiv_\(identifier\) "ArXiv (identifier)"):[math/0509441](https://arxiv.org/abs/math/0509441). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1090/S0002-9947-08-04444-9](https://doi.org/10.1090%2FS0002-9947-08-04444-9). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [11981408](https://api.semanticscholar.org/CorpusID:11981408). 44. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-FOOTNOTEGaposhkin1966Sect._1.5_44-0)** [Gaposhkin (1966)](https://en.wikipedia.org/wiki/Central_limit_theorem#CITEREFGaposhkin1966), Sect. 1.5. 45. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-45)** Kotani, M.; [Sunada, Toshikazu](https://en.wikipedia.org/wiki/Toshikazu_Sunada "Toshikazu Sunada") (2003). *Spectral geometry of crystal lattices*. Vol. 338. Contemporary Math. pp. 271–305\. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-8218-4269-0](https://en.wikipedia.org/wiki/Special:BookSources/978-0-8218-4269-0 "Special:BookSources/978-0-8218-4269-0") . 46. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-46)** [Sunada, Toshikazu](https://en.wikipedia.org/wiki/Toshikazu_Sunada "Toshikazu Sunada") (2012). *Topological Crystallography – With a View Towards Discrete Geometric Analysis*. Surveys and Tutorials in the Applied Mathematical Sciences. Vol. 6. Springer. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-4-431-54177-6](https://en.wikipedia.org/wiki/Special:BookSources/978-4-431-54177-6 "Special:BookSources/978-4-431-54177-6") . 47. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-Marasinghe_47-0)** Marasinghe, M.; Meeker, W.; Cook, D.; Shin, T. S. (August 1994). *Using graphics and simulation to teach statistical concepts*. Annual meeting of the American Statistical Association, Toronto, Canada. 48. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-Tijms_48-0)** Tijms, Henk (2004). *Understanding Probability: Chance Rules in Everyday Life*. Cambridge: Cambridge University Press. p. 169. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [0-521-54036-4](https://en.wikipedia.org/wiki/Special:BookSources/0-521-54036-4 "Special:BookSources/0-521-54036-4") . 49. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-49)** Galton, F. (1889). [*Natural Inheritance*](https://galton.org/cgi-bin/searchImages/galton/search/books/natural-inheritance/pages/natural-inheritance_0073.htm). p. 66. 50. ^ [***a***](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-Polya1920_50-0) [***b***](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-Polya1920_50-1) [PĂłlya, George](https://en.wikipedia.org/wiki/George_P%C3%B3lya "George PĂłlya") (1920).
["Über den zentralen Grenzwertsatz der Wahrscheinlichkeitsrechnung und das Momentenproblem"](https://www-gdz.sub.uni-goettingen.de/cgi-bin/digbib.cgi?PPN266833020_0008) \[On the central limit theorem of probability calculation and the problem of moments\]. *[Mathematische Zeitschrift](https://en.wikipedia.org/wiki/Mathematische_Zeitschrift "Mathematische Zeitschrift")* (in German). **8** (3–4\): 171–181\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1007/BF01206525](https://doi.org/10.1007%2FBF01206525). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [123063388](https://api.semanticscholar.org/CorpusID:123063388). 51. ^ [***a***](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-LC1986_51-0) [***b***](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-LC1986_51-1) [***c***](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-LC1986_51-2) [Le Cam, Lucien](https://en.wikipedia.org/wiki/Lucien_Le_Cam "Lucien Le Cam") (1986). ["The central limit theorem around 1935"](http://projecteuclid.org/euclid.ss/1177013818). *Statistical Science*. **1** (1): 78–91\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1214/ss/1177013818](https://doi.org/10.1214%2Fss%2F1177013818). 52. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-Hald_52-0)** Hald, Andreas (22 April 1998). [*A History of Mathematical Statistics from 1750 to 1930*](http://www.gbv.de/dms/goettingen/229762905.pdf) (PDF). Wiley. chapter 17. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0471179122](https://en.wikipedia.org/wiki/Special:BookSources/978-0471179122 "Special:BookSources/978-0471179122") . [Archived](https://ghostarchive.org/archive/20221009/http://www.gbv.de/dms/goettingen/229762905.pdf) (PDF) from the original on 2022-10-09. 53. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-FOOTNOTEFischer2011Chapter_2;_Chapter_5.2_53-0)** [Fischer (2011)](https://en.wikipedia.org/wiki/Central_limit_theorem#CITEREFFischer2011), Chapter 2; Chapter 5.2. 54. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-Bernstein_54-0)** [Bernstein, S. N.](https://en.wikipedia.org/wiki/Sergei_Natanovich_Bernstein "Sergei Natanovich Bernstein") (1945). "On the work of P. L. Chebyshev in Probability Theory". In Bernstein., S. N. (ed.). *Nauchnoe Nasledie P. L. Chebysheva. Vypusk Pervyi: Matematika* \[*The Scientific Legacy of P. L. Chebyshev. Part I: Mathematics*\] (in Russian). Moscow & Leningrad: Academiya Nauk SSSR. p. 174. 55. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-55)** Zabell, S. L. (1995). "Alan Turing and the Central Limit Theorem". *American Mathematical Monthly*. **102** (6): 483–494\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1080/00029890.1995.12004608](https://doi.org/10.1080%2F00029890.1995.12004608). 56. **[^](https://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-J%C3%B8rgensen-1997_56-0)** JĂžrgensen, Bent (1997). *The Theory of Dispersion Models*. Chapman & Hall. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0412997112](https://en.wikipedia.org/wiki/Special:BookSources/978-0412997112 "Special:BookSources/978-0412997112") . - [BĂĄrĂĄny, Imre](https://en.wikipedia.org/wiki/Imre_B%C3%A1r%C3%A1ny "Imre BĂĄrĂĄny"); Vu, Van (2007). "Central limit theorems for Gaussian polytopes". *Annals of Probability*. **35** (4). 
Institute of Mathematical Statistics: 1593–1621\. [arXiv](https://en.wikipedia.org/wiki/ArXiv_\(identifier\) "ArXiv (identifier)"):[math/0610192](https://arxiv.org/abs/math/0610192). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1214/009117906000000791](https://doi.org/10.1214%2F009117906000000791). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [9128253](https://api.semanticscholar.org/CorpusID:9128253). - Bauer, Heinz (2001). *Measure and Integration Theory*. Berlin: de Gruyter. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [3110167190](https://en.wikipedia.org/wiki/Special:BookSources/3110167190 "Special:BookSources/3110167190") . - Billingsley, Patrick (1995). *Probability and Measure* (3rd ed.). John Wiley & Sons. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [0-471-00710-2](https://en.wikipedia.org/wiki/Special:BookSources/0-471-00710-2 "Special:BookSources/0-471-00710-2") . - Bradley, Richard (2005). "Basic Properties of Strong Mixing Conditions. A Survey and Some Open Questions". *Probability Surveys*. **2**: 107–144\. [arXiv](https://en.wikipedia.org/wiki/ArXiv_\(identifier\) "ArXiv (identifier)"):[math/0511078](https://arxiv.org/abs/math/0511078). [Bibcode](https://en.wikipedia.org/wiki/Bibcode_\(identifier\) "Bibcode (identifier)"):[2005math.....11078B](https://ui.adsabs.harvard.edu/abs/2005math.....11078B). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1214/154957805100000104](https://doi.org/10.1214%2F154957805100000104). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [8395267](https://api.semanticscholar.org/CorpusID:8395267). - Bradley, Richard (2007). *Introduction to Strong Mixing Conditions* (1st ed.). Heber City, UT: Kendrick Press. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-9740427-9-4](https://en.wikipedia.org/wiki/Special:BookSources/978-0-9740427-9-4 "Special:BookSources/978-0-9740427-9-4") . - Dinov, Ivo; Christou, Nicolas; Sanchez, Juana (2008). ["Central Limit Theorem: New SOCR Applet and Demonstration Activity"](https://web.archive.org/web/20160303185802/http://www.amstat.org/publications/jse/v16n2/dinov.html). *Journal of Statistics Education*. **16** (2). ASA: 1–15\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1080/10691898.2008.11889560](https://doi.org/10.1080%2F10691898.2008.11889560). [PMC](https://en.wikipedia.org/wiki/PMC_\(identifier\) "PMC (identifier)") [3152447](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3152447). [PMID](https://en.wikipedia.org/wiki/PMID_\(identifier\) "PMID (identifier)") [21833159](https://pubmed.ncbi.nlm.nih.gov/21833159). Archived from [the original](http://www.amstat.org/publications/jse/v16n2/dinov.html) on 2016-03-03. Retrieved 2008-08-23. - [Durrett, Richard](https://en.wikipedia.org/wiki/Rick_Durrett "Rick Durrett") (2004). *Probability: theory and examples* (3rd ed.). Cambridge University Press. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [0521765390](https://en.wikipedia.org/wiki/Special:BookSources/0521765390 "Special:BookSources/0521765390") . - Fischer, Hans (2011). [*A History of the Central Limit Theorem: From Classical to Modern Probability Theory*](http://www.medicine.mcgill.ca/epidemiology/hanley/bios601/GaussianModel/HistoryCentralLimitTheorem.pdf) (PDF). 
Sources and Studies in the History of Mathematics and Physical Sciences. New York: Springer. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1007/978-0-387-87857-7](https://doi.org/10.1007%2F978-0-387-87857-7). [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-0-387-87856-0](https://en.wikipedia.org/wiki/Special:BookSources/978-0-387-87856-0 "Special:BookSources/978-0-387-87856-0") . [MR](https://en.wikipedia.org/wiki/MR_\(identifier\) "MR (identifier)") [2743162](https://mathscinet.ams.org/mathscinet-getitem?mr=2743162). [Zbl](https://en.wikipedia.org/wiki/Zbl_\(identifier\) "Zbl (identifier)") [1226\.60004](https://zbmath.org/?format=complete&q=an:1226.60004). [Archived](https://web.archive.org/web/20171031171033/http://www.medicine.mcgill.ca/epidemiology/hanley/bios601/GaussianModel/HistoryCentralLimitTheorem.pdf) (PDF) from the original on 2017-10-31. - Gaposhkin, V. F. (1966). "Lacunary series and independent functions". *Russian Mathematical Surveys*. **21** (6): 1–82\. [Bibcode](https://en.wikipedia.org/wiki/Bibcode_\(identifier\) "Bibcode (identifier)"):[1966RuMaS..21....1G](https://ui.adsabs.harvard.edu/abs/1966RuMaS..21....1G). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1070/RM1966v021n06ABEH001196](https://doi.org/10.1070%2FRM1966v021n06ABEH001196). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [250833638](https://api.semanticscholar.org/CorpusID:250833638). - Klartag, Bo'az (2007). "A central limit theorem for convex sets". *Inventiones Mathematicae*. **168** (1): 91–131\. [arXiv](https://en.wikipedia.org/wiki/ArXiv_\(identifier\) "ArXiv (identifier)"):[math/0605014](https://arxiv.org/abs/math/0605014). [Bibcode](https://en.wikipedia.org/wiki/Bibcode_\(identifier\) "Bibcode (identifier)"):[2007InMat.168...91K](https://ui.adsabs.harvard.edu/abs/2007InMat.168...91K). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1007/s00222-006-0028-8](https://doi.org/10.1007%2Fs00222-006-0028-8). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [119169773](https://api.semanticscholar.org/CorpusID:119169773). - Klartag, Bo'az (2008). "A Berry–Esseen type inequality for convex bodies with an unconditional basis". *Probability Theory and Related Fields*. **145** (1–2\): 1–33\. [arXiv](https://en.wikipedia.org/wiki/ArXiv_\(identifier\) "ArXiv (identifier)"):[0705\.0832](https://arxiv.org/abs/0705.0832). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1007/s00440-008-0158-6](https://doi.org/10.1007%2Fs00440-008-0158-6). [S2CID](https://en.wikipedia.org/wiki/S2CID_\(identifier\) "S2CID (identifier)") [10163322](https://api.semanticscholar.org/CorpusID:10163322).

## External links

- [Central Limit Theorem](https://www.khanacademy.org/math/probability/statistics-inferential/sampling_distribution/v/central-limit-theorem) at Khan Academy - ["Central limit theorem"](https://www.encyclopediaofmath.org/index.php?title=Central_limit_theorem). *[Encyclopedia of Mathematics](https://en.wikipedia.org/wiki/Encyclopedia_of_Mathematics "Encyclopedia of Mathematics")*. [EMS Press](https://en.wikipedia.org/wiki/European_Mathematical_Society "European Mathematical Society"). 2001 \[1994\]. - [Weisstein, Eric W.](https://en.wikipedia.org/wiki/Eric_W._Weisstein "Eric W. Weisstein") ["Central Limit Theorem"](https://mathworld.wolfram.com/CentralLimitTheorem.html).
*[MathWorld](https://en.wikipedia.org/wiki/MathWorld "MathWorld")*. - [A music video demonstrating the central limit theorem with a Galton board](https://www.mctague.org/carl/blog/2021/04/23/central-limit-theorem/) by Carl McTague
Shard: 152 (laksa)
Root Hash: 17790707453426894952
Unparsed URL: org,wikipedia!en,/wiki/Central_limit_theorem s443