🕷️ Crawler Inspector

URL Lookup

Direct Parameter Lookup

Raw Queries and Responses

1. Shard Calculation

Query:
Response:
Calculated Shard: 62 (from laksa000)
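The shard number is presumably the URL's host hashed into a fixed number of shards. A minimal sketch of such a mapping, assuming a hypothetical 64-shard layout and an MD5-based hash (the real hash function, shard count, and the role of `laksa000` are not shown above):

```python
import hashlib

NUM_SHARDS = 64  # hypothetical shard count; the real value is not shown in the report

def shard_for_host(host: str) -> int:
    """Map a hostname deterministically to a shard in [0, NUM_SHARDS)."""
    digest = hashlib.md5(host.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

print(shard_for_host("www.britannica.com"))  # some stable value in 0..63
```

The important property is determinism: the same host always lands on the same shard, so the inspector can query a single shard instead of the whole fleet.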

2. Crawled Status Check

Query:
Response:

3. Robots.txt Check

Query:
Response:

4. Spam/Ban Check

Query:
Response:

5. Seen Status Check

ℹ️ Skipped: page is already crawled

📄 INDEXABLE · CRAWLED (18 days ago) · 🤖 ROBOTS ALLOWED

Page Info Filters

| Filter | Status | Condition | Details |
| --- | --- | --- | --- |
| HTTP status | PASS | `download_http_code = 200` | HTTP 200 |
| Age cutoff | PASS | `download_stamp > now() - 6 MONTH` | 0.6 months ago |
| History drop | PASS | `isNull(history_drop_reason)` | No drop reason |
| Spam/ban | PASS | `fh_dont_index != 1 AND ml_spam_score = 0` | ml_spam_score=0 |
| Canonical | PASS | `meta_canonical IS NULL OR = '' OR = src_unparsed` | Not set |
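Each filter reads like a SQL predicate over columns of the page record. A sketch of an equivalent client-side check, assuming a hypothetical dict-shaped record with the column names from the table and approximating `6 MONTH` as 182 days:

```python
from datetime import datetime, timedelta, timezone

def is_indexable(rec: dict, now: datetime) -> bool:
    """Re-evaluate the five PASS/FAIL predicates from the filter table."""
    canonical = rec.get("meta_canonical")
    return (
        rec["download_http_code"] == 200                       # HTTP status
        and rec["download_stamp"] > now - timedelta(days=182)  # age cutoff (~6 MONTH)
        and rec.get("history_drop_reason") is None             # history drop
        and rec.get("fh_dont_index") != 1                      # spam/ban
        and rec.get("ml_spam_score", 0) == 0
        and canonical in (None, "", rec["src_unparsed"])       # canonical
    )

now = datetime(2026, 4, 7, tzinfo=timezone.utc)
rec = {
    "download_http_code": 200,
    "download_stamp": datetime(2026, 3, 20, 5, 34, 25, tzinfo=timezone.utc),
    "history_drop_reason": None,
    "fh_dont_index": 0,
    "ml_spam_score": 0,
    "meta_canonical": None,
    "src_unparsed": "https://www.britannica.com/science/probability-theory/Brownian-motion-process",
}
print(is_indexable(rec, now))  # True for the page above: all five filters PASS
```

A record fails as soon as any single predicate fails, which mirrors how the table reports one status per filter.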

Page Details

| Property | Value |
| --- | --- |
| URL | https://www.britannica.com/science/probability-theory/Brownian-motion-process |
| Last Crawled | 2026-03-20 05:34:25 (18 days ago) |
| First Indexed | 2018-03-02 22:42:41 (8 years ago) |
| HTTP Status Code | 200 |
| Meta Title | Probability theory - Brownian Motion, Process, Randomness \| Britannica |
| Meta Description | Probability theory - Brownian Motion, Process, Randomness: The most important stochastic process is the Brownian motion or Wiener process. It was first discussed by Louis Bachelier (1900), who was interested in modeling fluctuations in prices in financial markets, and by Albert Einstein (1905), who gave a mathematical model for the irregular motion of colloidal particles first observed by the Scottish botanist Robert Brown in 1827. The first mathematically rigorous treatment of this model was given by Wiener (1923). Einstein’s results led to an early, dramatic confirmation of the molecular theory of matter in the French physicist Jean Perrin’s experiments to determine Avogadro’s number, for which Perrin was |
| Meta Canonical | null |
Boilerpipe Text
Markovian processes A stochastic process is called Markovian (after the Russian mathematician Andrey Andreyevich Markov ) if at any time t the conditional probability of an arbitrary future event given the entire past of the process—i.e., given X ( s ) for all s ≤ t —equals the conditional probability of that future event given only X ( t ). Thus, in order to make a probabilistic statement about the future behaviour of a Markov process, it is no more helpful to know the entire history of the process than it is to know only its current state. The conditional distribution of X ( t + h ) given X ( t ) is called the transition probability of the process. If this conditional distribution does not depend on t , the process is said to have “stationary” transition probabilities. A Markov process with stationary transition probabilities may or may not be a stationary process in the sense of the preceding paragraph. If Y 1 , Y 2 ,… are independent random variables and X ( t ) = Y 1 +⋯+ Y t , the stochastic process X ( t ) is a Markov process. Given X ( t ) = x , the conditional probability that X ( t + h ) belongs to an interval ( a , b ) is just the probability that Y t + 1 +⋯+ Y t + h belongs to the translated interval ( a − x , b − x ); and because of independence this conditional probability would be the same if the values of X (1),…, X ( t − 1) were also given. If the Y s are identically distributed as well as independent, this transition probability does not depend on t , and then X ( t ) is a Markov process with stationary transition probabilities. Sometimes X ( t ) is called a random walk , but this terminology is not completely standard. Since both the Poisson process and Brownian motion are created from random walks by simple limiting processes, they, too, are Markov processes with stationary transition probabilities. 
The Ornstein-Uhlenbeck process defined as the solution ( 19 ) to the stochastic differential equation ( 18 ) is also a Markov process with stationary transition probabilities. The Ornstein-Uhlenbeck process and many other Markov processes with stationary transition probabilities behave like stationary processes as t → ∞. Roughly speaking, the conditional distribution of X ( t ) given X (0) = x converges as t → ∞ to a distribution, called the stationary distribution, that does not depend on the starting value X (0) = x . Moreover, with probability 1, the proportion of time the process spends in any subset of its state space converges to the stationary probability of that set; and, if X (0) is given the stationary distribution to begin with, the process becomes a stationary process. The Ornstein-Uhlenbeck process defined in equation (19) is stationary if V (0) has a normal distribution with mean 0 and variance σ 2 /(2 m f ). At another extreme are absorbing processes. An example is the Markov process describing Peter’s fortune during the game of gambler’s ruin. The process is absorbed whenever either Peter or Paul is ruined. Questions of interest involve the probability of being absorbed in one state rather than another and the distribution of the time until absorption occurs. Some additional examples of stochastic processes follow. The Ehrenfest model of diffusion The Ehrenfest model of diffusion (named after the Austrian Dutch physicist Paul Ehrenfest) was proposed in the early 1900s in order to illuminate the statistical interpretation of the second law of thermodynamics , that the entropy of a closed system can only increase. Suppose N molecules of a gas are in a rectangular container divided into two equal parts by a permeable membrane. The state of the system at time t is X ( t ), the number of molecules on the left-hand side of the membrane. 
At each time t = 1, 2,… a molecule is chosen at random (i.e., each molecule has probability 1/ N to be chosen) and is moved from its present location to the other side of the membrane. Hence, the system evolves according to the transition probability p ( i , j ) = P { X ( t + 1) = j | X ( t ) = i }, where The long run behaviour of the Ehrenfest process can be inferred from general theorems about Markov processes in discrete time with discrete state space and stationary transition probabilities. Let T ( j ) denote the first time t ≥ 1 such that X ( t ) = j and set T ( j ) = ∞ if X ( t ) ≠ j for all t . Assume that for all states i and j it is possible for the process to go from i to j in some number of steps—i.e., P { T ( j ) < ∞| X (0) = i } > 0. If the equations have a solution Q ( j ) that is a probability distribution—i.e., Q ( j ) ≥ 0, and Σ Q ( j ) = 1—then that solution is unique and is the stationary distribution of the process. Moreover, Q ( j ) = 1/ E { T ( j )| X (0) = j }; and, for any initial state j , the proportion of time t that X ( t ) = i converges with probability 1 to Q ( i ). For the special case of the Ehrenfest process, assume that N is large and X (0) = 0. According to the deterministic prediction of the second law of thermodynamics, the entropy of this system can only increase, which means that X ( t ) will steadily increase until half the molecules are on each side of the membrane. Indeed, according to the stochastic model described above, there is overwhelming probability that X ( t ) does increase initially. However, because of random fluctuations, the system occasionally moves from configurations having large entropy to those of smaller entropy and eventually even returns to its starting state, in defiance of the second law of thermodynamics. 
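The long-run claim above is easy to check empirically: simulate the Ehrenfest transition rule and compare the occupation frequencies with the binomial stationary distribution. A sketch with illustrative parameters (N = 10 molecules, 200,000 steps):

```python
import random
from math import comb

def simulate_ehrenfest(N: int, steps: int, seed: int = 0) -> list[float]:
    """Fraction of time spent in each state j = 0..N (j = molecules on the left)."""
    rng = random.Random(seed)
    x = 0                      # X(0) = 0: all molecules start on the right
    counts = [0] * (N + 1)
    for _ in range(steps):
        # a uniformly chosen molecule is on the left with probability x/N
        if rng.random() < x / N:
            x -= 1             # it moves left -> right
        else:
            x += 1             # it moves right -> left
        counts[x] += 1
    return [c / steps for c in counts]

N = 10
freq = simulate_ehrenfest(N, steps=200_000)
stationary = [comb(N, j) / 2**N for j in range(N + 1)]
print(max(abs(f - q) for f, q in zip(freq, stationary)))  # small: close agreement
```

Even though the chain alternates parity at every step, the time-average occupation of each state still converges to Q(j) = C(N, j)/2^N, as the general theorem quoted above guarantees.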
The accepted resolution of this contradiction is that the length of time such a system must operate in order that an observable decrease of entropy may occur is so enormously long that a decrease could never be verified experimentally. To consider only the most extreme case, let T denote the first time t ≥ 1 at which X ( t ) = 0—i.e., the time of first return to the starting configuration having all molecules on the right-hand side of the membrane. It can be verified by substitution in equation ( 20 ) that the stationary distribution of the Ehrenfest model is the binomial distribution and hence E ( T ) = 2 N . For example, if N is only 100 and transitions occur at the rate of 10 6 per second, E ( T ) is of the order of 10 15 years. Hence, on the macroscopic scale, on which experimental measurements can be made, the second law of thermodynamics holds. The symmetric random walk A Markov process that behaves in quite different and surprising ways is the symmetric random walk. A particle occupies a point with integer coordinates in d -dimensional Euclidean space . At each time t = 1, 2,… it moves from its present location to one of its 2 d nearest neighbours with equal probabilities 1/(2 d ), independently of its past moves. For d = 1 this corresponds to moving a step to the right or left according to the outcome of tossing a fair coin . It may be shown that for d = 1 or 2 the particle returns with probability 1 to its initial position and hence to every possible position infinitely many times, if the random walk continues indefinitely. In three or more dimensions, at any time t the number of possible steps that increase the distance of the particle from the origin is much larger than the number decreasing the distance, with the result that the particle eventually moves away from the origin and never returns. 
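The dimension dependence of recurrence described above shows up clearly in simulation: the fraction of symmetric random walks that revisit the origin within a fixed horizon is near 1 for d = 1 but far below 1 for d = 3. A sketch (horizon and trial counts are illustrative):

```python
import random

def return_fraction(d: int, steps: int, trials: int, seed: int = 0) -> float:
    """Fraction of walks that revisit the origin within `steps` moves."""
    rng = random.Random(seed)
    returned = 0
    for _ in range(trials):
        pos = [0] * d
        for _ in range(steps):
            axis = rng.randrange(d)           # pick one of the d coordinates
            pos[axis] += rng.choice((-1, 1))  # step +/-1 along it
            if all(c == 0 for c in pos):
                returned += 1
                break
    return returned / trials

print(return_fraction(d=1, steps=1000, trials=500))  # close to 1
print(return_fraction(d=3, steps=1000, trials=500))  # well below 1
```

No finite simulation proves recurrence, but the gap between the two estimates illustrates why d ≤ 2 and d ≥ 3 behave so differently.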
Even in one or two dimensions, although the particle eventually returns to its initial position, the expected waiting time until it returns is infinite , there is no stationary distribution, and the proportion of time the particle spends in any state converges to 0! Queuing models The simplest service system is a single-server queue, where customers arrive, wait their turn, are served by a single server, and depart. Related stochastic processes are the waiting time of the n th customer and the number of customers in the queue at time t . For example, suppose that customers arrive at times 0 = T 0 < T 1 < T 2 <⋯ and wait in a queue until their turn. Let V n denote the service time required by the n th customer, n = 0, 1, 2,…, and set U n = T n − T n − 1 . The waiting time, W n , of the n th customer satisfies the relation W 0 = 0 and, for n ≥ 1, W n = max(0, W n − 1 + V n − 1 − U n ). To see this, observe that the n th customer must wait for the same length of time as the ( n − 1)th customer plus the service time of the ( n − 1)th customer minus the time between the arrival of the ( n − 1)th and n th customer, during which the ( n − 1)th customer is already waiting but the n th customer is not. An exception occurs if this quantity is negative, and then the waiting time of the n th customer is 0. Various assumptions can be made about the input and service mechanisms. One possibility is that customers arrive according to a Poisson process and their service times are independent, identically distributed random variables that are also independent of the arrival process. Then, in terms of Y n = V n − 1 − U n , which are independent, identically distributed random variables, the recursive relation defining W n becomes W n = max(0, W n − 1 + Y n ). This process is a Markov process. 
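The waiting-time recursion W_n = max(0, W_{n-1} + V_{n-1} - U_n) can be simulated directly. A sketch of the single-server queue with Poisson arrivals (exponential inter-arrival gaps) and exponential service times; both rates are illustrative:

```python
import random

def simulate_waits(n: int, arrival_rate: float, service_rate: float,
                   seed: int = 0) -> list[float]:
    """Waiting times W_0..W_{n-1} via W_n = max(0, W_{n-1} + V_{n-1} - U_n)."""
    rng = random.Random(seed)
    waits = [0.0]                                # W_0 = 0: first customer never waits
    for _ in range(1, n):
        v_prev = rng.expovariate(service_rate)   # V_{n-1}: previous customer's service
        u = rng.expovariate(arrival_rate)        # U_n: gap since previous arrival
        waits.append(max(0.0, waits[-1] + v_prev - u))
    return waits

# A stable queue (arrival rate < service rate), so waits stay bounded.
w = simulate_waits(10_000, arrival_rate=0.8, service_rate=1.0)
print(sum(w) / len(w))  # sample mean waiting time
```

The `max(0, …)` is the reflecting barrier at 0 mentioned in the text: the process behaves like a random walk while positive and is pushed back to 0 whenever it would go negative.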
It is often called a random walk with reflecting barrier at 0, because it behaves like a random walk whenever it is positive and is pushed up to be equal to 0 whenever it tries to become negative. Quantities of interest are the mean and variance of the waiting time of the n th customer and, since these are very difficult to determine exactly, the mean and variance of the stationary distribution. More realistic queuing models try to accommodate systems with several servers and different classes of customers, who are served according to certain priorities. In most cases it is impossible to give a mathematical analysis of the system, which must be simulated on a computer in order to obtain numerical results. The insights gained from theoretical analysis of simple cases can be helpful in performing these simulations. Queuing theory had its origins in attempts to understand traffic in telephone systems. Present-day research is stimulated, among other things, by problems associated with multiple-user computer systems. Reflecting barriers arise in other problems as well. For example, if B ( t ) denotes Brownian motion, then X ( t ) = B ( t ) + c t is called Brownian motion with drift c . This model is appropriate for Brownian motion of a particle under the influence of a constant force field such as gravity. One can add a reflecting barrier at 0 to account for reflections of the Brownian particle off the bottom of its container. The result is a model for sedimentation, which for c < 0 in the steady state as t → ∞ gives a statistical derivation of the law of pressure as a function of depth in an isothermal atmosphere. Just as ordinary Brownian motion can be obtained as the limit of a rescaled random walk as the number of steps becomes very large and the size of individual steps small, Brownian motion with a reflecting barrier at 0 can be obtained as the limit of a rescaled random walk with reflection at 0. 
In this way, Brownian motion with a reflecting barrier plays a role in the analysis of queuing systems. In fact, in modern probability theory one of the most important uses of Brownian motion and other diffusion processes is as approximations to more complicated stochastic processes. The exact mathematical description of these approximations gives remarkable generalizations of the central limit theorem from sequences of random variables to sequences of random functions. Insurance risk theory The ruin problem of insurance risk theory is closely related to the problem of gambler’s ruin described earlier and, rather surprisingly, to the single-server queue as well. Suppose the amount of capital at time t in one portfolio of an insurance company is denoted by X ( t ). Initially X (0) = x > 0. During each unit of time, the portfolio receives an amount c > 0 in premiums. At random times claims are made against the insurance company, which must pay the amount V n > 0 to settle the n th claim. If N ( t ) denotes the number of claims made in time t , then provided that this quantity has been positive at all earlier times s < t . At the first time X ( t ) becomes negative, however, the portfolio is ruined. A principal problem of insurance risk theory is to find the probability of ultimate ruin. If one imagines that the problem of gambler’s ruin is modified so that Peter’s opponent has an infinite amount of capital and can never be ruined, then the probability that Peter is ultimately ruined is similar to the ruin probability of insurance risk theory. In fact, with the artificial assumptions that (i) c = 1, (ii) time proceeds by discrete units, say t = 1, 2,…, (iii) V n is identically equal to 2 for all n , and (iv) at each time t a claim occurs with probability p or does not occur with probability q independently of what occurs at other times, then the process X ( t ) is the same stochastic process as Peter’s fortune, which is absorbed if it ever reaches the state 0. 
The probability of Peter’s ultimate ruin against an infinitely rich adversary is easily obtained by taking the limit of equation (6) as m → ∞. The answer is ( q / p ) x if p > q —i.e., the game is favourable to Peter—and 1 if p ≤ q . More interesting assumptions for the insurance risk problem are that the number of claims N ( t ) is a Poisson process and the sizes of the claims V 1 , V 2 ,… are independent, identically distributed positive random variables . Rather surprisingly, under these assumptions the probability of ultimate ruin as a function of the initial fortune x is exactly the same as the stationary probability that the waiting time in the single-server queue with Poisson input exceeds x . Unfortunately, neither problem is easy to solve exactly, although there is a very good approximate solution originally derived by the Swedish mathematician Harald Cramér. Martingale theory As a final example, it seems appropriate to mention one of the dominant ideas of modern probability theory, which at the same time springs directly from the relation of probability to games of chance . Suppose that X 1 , X 2 ,… is any stochastic process and, for each n = 0, 1,…, f n = f n ( X 1 ,…, X n ) is a (Borel-measurable) function of the indicated observations. The new stochastic process f n is called a martingale if E ( f n | X 1 ,…, X n − 1 ) = f n − 1 for every value of n > 0 and all values of X 1 ,…, X n − 1 . If the sequence of X s are outcomes in successive trials of a game of chance and f n is the fortune of a gambler after the n th trial, then the martingale condition says that the game is absolutely fair in the sense that, no matter what the past history of the game, the gambler’s conditional expected fortune after one more trial is exactly equal to his present fortune. For example, let X 0 = x , and for n ≥ 1 let X n equal 1 or −1 according as a coin having probability p of heads and q = 1 − p of tails turns up heads or tails on the n th toss. Let S n = X 0 +⋯+ X n . 
Then f n = S n − n ( p − q ) and f n = ( q / p ) S n are martingales. One of the basic results of martingale theory is that, if the gambler is free to quit the game at any time using any strategy whatever, provided only that this strategy does not foresee the future, then the game remains fair. This means that, if N denotes the stopping time at which the gambler’s strategy tells him to quit the game, so that his final fortune is f N , then Strictly speaking, this result is not true without some additional conditions that must be verified for any particular application. To see how efficiently it works, consider once again the problem of gambler’s ruin and let N be the first value of n such that S n = 0 or m ; i.e., N denotes the random time at which ruin first occurs and the game ends. In the case p = 1/2, application of equation ( 21 ) to the martingale f n = S n , together with the observation that f N = either 0 or m , yields the equalities x = f 0 = E ( f N | f 0 = x ) = m [1 − Q ( x )], which can be immediately solved to give the answer in equation ( 6 ). For p ≠ 1/2, one uses the martingale f n = ( q / p ) S n and similar reasoning to obtain from which the first equation in ( 6 ) easily follows. The expected duration of the game is obtained by a similar argument. A particularly beautiful and important result is the martingale convergence theorem , which implies that a nonnegative martingale converges with probability 1 as n → ∞. This means that, if a gambler’s successive fortunes form a (nonnegative) martingale, they cannot continue to fluctuate indefinitely but must approach some limiting value. Basic martingale theory and many of its applications were developed by the American mathematician Joseph Leo Doob during the 1940s and ’50s following some earlier results due to Paul Lévy . Subsequently it has become one of the most powerful tools available to study stochastic processes .
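The martingale property of f_n = (q/p)^{S_n} reduces to a one-step identity: each toss multiplies f by (q/p)^{+1} or (q/p)^{-1}, and the expected factor is p(q/p) + q(p/q) = q + p = 1. A sketch checking this numerically for an illustrative biased coin:

```python
p = 0.6          # illustrative probability of heads (a step of +1)
q = 1.0 - p
r = q / p        # base of the martingale f_n = (q/p)**S_n

# One step changes S_n by +1 (prob p) or -1 (prob q), so f is multiplied
# by r or 1/r; the conditional expectation of that factor is:
one_step_factor = p * r + q / r
print(one_step_factor)  # ~1.0, so E(f_n | past) = f_{n-1}
```

The same style of check works for f_n = S_n - n(p - q): one step adds X_n - (p - q), whose expectation p - q - (p - q) = 0, so the conditional expected change is zero.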
Markdown
[![Encyclopedia Britannica](https://cdn.britannica.com/mendel/eb-logo/MendelNewThistleLogo.png)](https://www.britannica.com/) [![Encyclopedia Britannica](https://cdn.britannica.com/mendel/eb-logo/MendelNewThistleLogo.png)](https://www.britannica.com/) [SUBSCRIBE](https://premium.britannica.com/premium-membership/?utm_source=premium&utm_medium=global-nav&utm_campaign=blue-evergreen) [SUBSCRIBE](https://premium.britannica.com/premium-membership/?utm_source=premium&utm_medium=global-nav-mobile&utm_campaign=blue-evergreen) Login https://premium.britannica.com/premium-membership/?utm\_source=premium\&utm\_medium=nav-login-box\&utm\_campaign=evergreen [SUBSCRIBE](https://premium.britannica.com/premium-membership/?utm_source=premium&utm_medium=hamburger-menu&utm_campaign=blue) [Ask the Chatbot](https://www.britannica.com/chatbot) [Games & Quizzes](https://www.britannica.com/quiz/browse) [History & Society](https://www.britannica.com/History-Society) [Science & Tech](https://www.britannica.com/Science-Tech) [Biographies](https://www.britannica.com/Biographies) [Animals & Nature](https://www.britannica.com/Animals-Nature) [Geography & Travel](https://www.britannica.com/Geography-Travel) [Arts & Culture](https://www.britannica.com/Arts-Culture) [ProCon](https://www.britannica.com/procon) [Money](https://www.britannica.com/money) [Videos](https://www.britannica.com/videos) [probability theory](https://www.britannica.com/science/probability-theory) - [Introduction](https://www.britannica.com/science/probability-theory) - [Experiments, sample space, events, and equally likely probabilities](https://www.britannica.com/science/probability-theory#ref32762) - [Applications of simple probability experiments](https://www.britannica.com/science/probability-theory#ref32763) - [The principle of additivity](https://www.britannica.com/science/probability-theory/The-principle-of-additivity) - [Multinomial 
probability](https://www.britannica.com/science/probability-theory/The-principle-of-additivity#ref32765) - [The birthday problem](https://www.britannica.com/science/probability-theory/The-birthday-problem) - [Conditional probability](https://www.britannica.com/science/probability-theory/The-birthday-problem#ref32767) - [Applications of conditional probability](https://www.britannica.com/science/probability-theory/Applications-of-conditional-probability) - [Independence](https://www.britannica.com/science/probability-theory/Applications-of-conditional-probability#ref32769) - [Bayes’s theorem](https://www.britannica.com/science/probability-theory/Applications-of-conditional-probability#ref32770) - [Random variables, distributions, expectation, and variance](https://www.britannica.com/science/probability-theory/Applications-of-conditional-probability#ref32771) - [Random variables](https://www.britannica.com/science/probability-theory/Applications-of-conditional-probability#ref32772) - [Probability distribution](https://www.britannica.com/science/probability-theory/Probability-distribution) - [Expected value](https://www.britannica.com/science/probability-theory/Probability-distribution#ref32774) - [Variance](https://www.britannica.com/science/probability-theory/Probability-distribution#ref32775) - [An alternative interpretation of probability](https://www.britannica.com/science/probability-theory/An-alternative-interpretation-of-probability) - [The law of large numbers, the central limit theorem, and the Poisson approximation](https://www.britannica.com/science/probability-theory/An-alternative-interpretation-of-probability#ref32777) - [The law of large numbers](https://www.britannica.com/science/probability-theory/An-alternative-interpretation-of-probability#ref32778) - [The central limit theorem](https://www.britannica.com/science/probability-theory/The-central-limit-theorem) - [The Poisson 
approximation](https://www.britannica.com/science/probability-theory/The-central-limit-theorem#ref32780) - [Infinite sample spaces and axiomatic probability](https://www.britannica.com/science/probability-theory/The-central-limit-theorem#ref32781) - [Infinite sample spaces](https://www.britannica.com/science/probability-theory/The-central-limit-theorem#ref32782) - [The strong law of large numbers](https://www.britannica.com/science/probability-theory/The-strong-law-of-large-numbers) - [Measure theory](https://www.britannica.com/science/probability-theory/The-strong-law-of-large-numbers#ref32784) - [Probability density functions](https://www.britannica.com/science/probability-theory/The-strong-law-of-large-numbers#ref32785) - [Conditional expectation and least squares prediction](https://www.britannica.com/science/probability-theory/Conditional-expectation-and-least-squares-prediction) - [The Poisson process and the Brownian motion process](https://www.britannica.com/science/probability-theory/Conditional-expectation-and-least-squares-prediction#ref32787) - [The Poisson process](https://www.britannica.com/science/probability-theory/Conditional-expectation-and-least-squares-prediction#ref32788) - [Brownian motion process](https://www.britannica.com/science/probability-theory/Brownian-motion-process) - [Stochastic processes](https://www.britannica.com/science/probability-theory/Brownian-motion-process#ref32790) - [Stationary processes](https://www.britannica.com/science/probability-theory/Brownian-motion-process#ref32791) - [Markovian processes](https://www.britannica.com/science/probability-theory/Markovian-processes) - [The Ehrenfest model of diffusion](https://www.britannica.com/science/probability-theory/Markovian-processes#ref32793) - [The symmetric random walk](https://www.britannica.com/science/probability-theory/Markovian-processes#ref32794) - [Queuing models](https://www.britannica.com/science/probability-theory/Markovian-processes#ref32795) - [Insurance risk 
theory](https://www.britannica.com/science/probability-theory/Markovian-processes#ref32796) - [Martingale theory](https://www.britannica.com/science/probability-theory/Markovian-processes#ref32797) [References & Edit History](https://www.britannica.com/science/probability-theory/additional-info) [Related Topics](https://www.britannica.com/facts/probability-theory) [Images](https://www.britannica.com/science/probability-theory/images-videos) [![sample space for a pair of dice](https://cdn.britannica.com/20/62920-004-EB46127C/Sample-space-pair-dice.jpg)](https://cdn.britannica.com/20/62920-004-EB46127C/Sample-space-pair-dice.jpg) [![Bayes's theorem used for evaluating the accuracy of a medical test](https://cdn.britannica.com/91/75091-004-0E6FAD63/test-results-accuracy-theorem-Bayes-HIV-positives.jpg)](https://cdn.britannica.com/91/75091-050-61F3EB47/test-results-accuracy-theorem-Bayes-HIV-positives.jpg) [![probability density function](https://cdn.britannica.com/30/3630-004-89103408/Probability-density-function.jpg)](https://cdn.britannica.com/30/3630-004-89103408/Probability-density-function.jpg) [![normal approximation to the binomial distribution](https://cdn.britannica.com/31/3631-004-96A99F98/approximation-distribution.jpg)](https://cdn.britannica.com/31/3631-004-96A99F98/approximation-distribution.jpg) At a Glance [![default image](https://cdn.britannica.com/mendel-resources/3-177/images/shared/new-thistle.svg?v=3.177.2)](https://www.britannica.com/summary/probability-theory) [probability theory summary](https://www.britannica.com/summary/probability-theory) Quizzes [![Equations written on blackboard](https://cdn.britannica.com/86/94086-131-0BAE374D/Equations-blackboard.jpg?w=200&h=200&c=crop)](https://www.britannica.com/quiz/Numbers-and-mathematics) [Numbers and Mathematics](https://www.britannica.com/quiz/Numbers-and-mathematics) [![Italian-born physicist Dr. Enrico Fermi draws a diagram at a blackboard with mathematical equations. 
circa 1950.](https://cdn.britannica.com/01/115001-131-7278E518/Enrico-Fermi-Italian-problem-physics-1950.jpg?w=200&h=200&c=crop)](https://www.britannica.com/quiz/define-it-math-terms) [Define It: Math Terms](https://www.britannica.com/quiz/define-it-math-terms) Related Questions - [What was Carl Friedrich Gauss’s childhood like?](https://www.britannica.com/question/What-was-Carl-Friedrich-Gausss-childhood-like) - [What awards did Carl Friedrich Gauss win?](https://www.britannica.com/question/What-awards-did-Carl-Friedrich-Gauss-win) - [How was Carl Friedrich Gauss influential?](https://www.britannica.com/question/How-was-Carl-Friedrich-Gauss-influential) ![Britannica AI Icon](https://cdn.britannica.com/mendel-resources/3-177/images/chatbot/star-ai.svg?v=3.177.2) Contents Ask Anything [Science](https://www.britannica.com/browse/Science) [Mathematics](https://www.britannica.com/browse/Mathematics) CITE Share Feedback External Websites ## [Brownian motion](https://www.britannica.com/science/Brownian-motion) process in [probability theory](https://www.britannica.com/science/probability-theory) in # [The Poisson process and the Brownian motion process](https://www.britannica.com/science/probability-theory/Conditional-expectation-and-least-squares-prediction#ref32787) Homework Help Written by [David O. Siegmund Professor of Statistics, Stanford University, California. Author of *Sequential Analysis; Tests and Confidence Intervals.*](https://www.britannica.com/contributor/David-O-Siegmund/3760) David O. Siegmund Fact-checked by [Britannica Editors Encyclopaedia Britannica's editors oversee subject areas in which they have extensive knowledge, whether from years of experience gained by working on that content or via study for an advanced degree....](https://www.britannica.com/editor/The-Editors-of-Encyclopaedia-Britannica/4419) Britannica Editors Last updated Mar. 
1, 2026 •[History](https://www.britannica.com/science/probability-theory/additional-info#history) ![Britannica AI Icon](https://cdn.britannica.com/mendel-resources/3-177/images/chatbot/star-ai.svg?v=3.177.2) Britannica AI Ask Anything Table of Contents Table of Contents Ask Anything The most important [stochastic process](https://www.britannica.com/science/stochastic-process) is the Brownian motion or [Wiener process](https://www.britannica.com/science/Brownian-motion-process). It was first discussed by Louis Bachelier (1900), who was interested in modeling fluctuations in prices in financial markets, and by [Albert Einstein](https://www.britannica.com/biography/Albert-Einstein) (1905), who gave a [mathematical model](https://www.britannica.com/science/mathematical-model) for the irregular motion of [colloidal](https://www.britannica.com/science/colloid) particles first observed by the Scottish botanist [Robert Brown](https://www.britannica.com/biography/Robert-Brown-Scottish-botanist) in 1827. The first mathematically rigorous treatment of this model was given by Wiener (1923). Einstein’s results led to an early, dramatic confirmation of the molecular theory of matter in the French physicist [Jean Perrin](https://www.britannica.com/biography/Jean-Perrin)’s experiments to determine [Avogadro’s number](https://www.britannica.com/science/Avogadros-law), for which Perrin was awarded a [Nobel Prize](https://www.britannica.com/topic/Nobel-Prize) in 1926. Today somewhat different models for physical Brownian motion are deemed more [appropriate](https://www.britannica.com/dictionary/appropriate) than Einstein’s, but the original mathematical model continues to play a central role in the theory and application of stochastic processes. ## News • [Mathematicians explain AI’s intelligence: It’s all about patterns, not thinking](https://www.thehindu.com/education/mathematicians-explain-ais-intelligence-its-all-about-patterns-not-thinking/article70670543.ece) • Feb. 
27, 2026, 4:00 AM ET (The Hindu) Show less Let *B*(*t*) denote the displacement (in one [dimension](https://www.britannica.com/science/dimension-geometry) for simplicity) of a colloidally suspended particle, which is buffeted by the numerous much smaller molecules of the medium in which it is suspended. This displacement will be obtained as a limit of a [random walk](https://www.britannica.com/science/random-walk) occurring in discrete time as the number of steps becomes infinitely large and the size of each individual step infinitesimally small. Assume that at times *k*δ, *k* = 1, 2,…, the colloidal particle is displaced a distance *h**X**k*, where *X*1, *X*2,… are +1 or −1 according as the outcomes of tossing a fair [coin](https://www.britannica.com/money/coin) are heads or tails. By time *t* the particle has taken *m* steps, where *m* is the largest integer ≤ *t*/δ, and its [displacement](https://www.britannica.com/dictionary/displacement) from its original position is *B**m*(*t*) = *h*(*X*1 +⋯+ *X**m*). The [expected value](https://www.britannica.com/topic/expected-value) of *B**m*(*t*) is 0, and its [variance](https://www.britannica.com/topic/variance) is *h*2*m*, or approximately *h*2*t*/δ. Now suppose that δ → 0, and at the same time *h* → 0 in such a way that the variance of *B**m*(1) converges to some positive [constant](https://www.britannica.com/topic/constant), σ2. This means that *m* becomes infinitely large, and *h* is approximately σ(*t*/*m*)1/2. It follows from the [central limit theorem](https://www.britannica.com/science/central-limit-theorem) (equation [12](https://www.britannica.com/science/probability-theory/Brownian-motion-process#ref-14392)) that lim *P*{*B**m*(*t*) ≤ *x*} = *G*(*x*/σ*t*1/2), where *G*(*x*) is the standard normal [cumulative distribution function](https://www.britannica.com/science/distribution-function) defined just below [equation](https://www.britannica.com/science/equation) (12). 
The Brownian motion process *B*(*t*) can be defined to be the limit in a certain technical sense of the *B**m*(*t*) as δ → 0 and *h* → 0 with *h*²/δ → σ². The process *B*(*t*) has many other properties, which in principle are all inherited from the approximating random walk *B**m*(*t*). For example, if (*s*1, *t*1) and (*s*2, *t*2) are disjoint intervals, the increments *B*(*t*1) − *B*(*s*1) and *B*(*t*2) − *B*(*s*2) are independent random variables that are normally distributed with expectation 0 and variances equal to σ²(*t*1 − *s*1) and σ²(*t*2 − *s*2), respectively. Einstein took a different approach and derived various properties of the process *B*(*t*) by showing that its [probability density function](https://www.britannica.com/science/density-function), *g*(*x*, *t*), satisfies the diffusion equation ∂*g*/∂*t* = *D*∂²*g*/∂*x*², where *D* = σ²/2. The important implication of Einstein’s theory for subsequent experimental research was that he identified the diffusion constant *D* in terms of certain measurable properties of the particle (its radius) and of the medium (its viscosity and temperature), which allowed one to make predictions and hence to confirm or reject the hypothesized existence of the unseen molecules that were assumed to be the cause of the irregular Brownian motion. Because of the beautiful blend of mathematical and physical reasoning involved, a brief summary of the successor to Einstein’s model is given below. Unlike the Poisson process, it is impossible to “draw” a picture of the path of a particle undergoing mathematical Brownian motion. [Wiener](https://www.britannica.com/biography/Norbert-Wiener) (1923) showed that the functions *B*(*t*) are continuous, as one expects, but nowhere differentiable.
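One quantitative sign of this roughness: since *B*(*t* + *h*) − *B*(*t*) is normal with mean 0 and variance *h*σ², its expected absolute value is σ(2*h*/π)^{1/2}, so the expected difference quotient is σ(2/(π*h*))^{1/2}, which blows up as *h* → 0. A small sketch (σ = 1 assumed):

```python
import math

# Sketch: E|B(t+h) - B(t)| = sigma*sqrt(2h/pi) for a Normal(0, h*sigma^2)
# increment, so the expected difference quotient sigma*sqrt(2/(pi*h))
# grows without bound as h shrinks (sigma = 1 is an assumed value).
sigma = 1.0
quotients = [sigma * math.sqrt(2.0 / (math.pi * h)) for h in (1e-2, 1e-4, 1e-6)]
print([round(q, 1) for q in quotients])  # grows tenfold each time h shrinks 100x
```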
Thus, a particle undergoing mathematical Brownian motion does not have a well-defined velocity, and the curve *y* = *B*(*t*) does not have a well-defined tangent at any value of *t*. To see why this might be so, recall that the [derivative](https://www.britannica.com/science/derivative-mathematics) of *B*(*t*), if it exists, is the limit as *h* → 0 of the [ratio](https://www.britannica.com/science/ratio) \[*B*(*t* + *h*) − *B*(*t*)\]/*h*. Since *B*(*t* + *h*) − *B*(*t*) is normally distributed with mean 0 and [standard deviation](https://www.britannica.com/topic/standard-deviation-statistics) *h*^{1/2}σ, in very rough terms *B*(*t* + *h*) − *B*(*t*) can be expected to equal some multiple (positive or negative) of *h*^{1/2}. But the limit as *h* → 0 of *h*^{1/2}/*h* = 1/*h*^{1/2} is infinite. A related fact that illustrates the extreme irregularity of *B*(*t*) is that in every interval of time, no matter how small, a particle undergoing mathematical Brownian motion travels an infinite distance. Although these properties contradict the commonsense idea of a function—and indeed it is quite difficult to write down explicitly a single example of a continuous, nowhere-differentiable function—they turn out to be typical of a large class of stochastic processes, called diffusion processes, of which Brownian motion is the most prominent member. Especially notable contributions to the mathematical theory of Brownian motion and diffusion processes were made by [Paul Lévy](https://www.britannica.com/biography/Paul-Levy) and William Feller during the years 1930–60. A more sophisticated description of physical Brownian motion can be built on a simple application of [Newton’s second law](https://www.britannica.com/science/Newtons-laws-of-motion): *F* = *m**a*. Let *V*(*t*) denote the velocity of a colloidal particle of mass *m*.
It is assumed that *m* *dV*(*t*) = −*f**V*(*t*) *dt* + *dA*(*t*) ([18](https://www.britannica.com/science/probability-theory/Brownian-motion-process#ref-14388)). The quantity *f* retarding the movement of the particle is due to [friction](https://www.britannica.com/science/friction) caused by the surrounding medium. The term *d**A*(*t*) is the contribution of the very frequent collisions of the particle with unseen molecules of the medium. It is assumed that *f* can be determined by classical [fluid mechanics](https://www.britannica.com/science/fluid-mechanics), in which the molecules making up the surrounding medium are so many and so small that the medium can be considered smooth and homogeneous. Then by [Stokes’s law](https://www.britannica.com/science/Stokess-law), for a spherical particle in a gas, *f* = 6π*a*η, where *a* is the radius of the particle and η the coefficient of viscosity of the medium. Hypotheses concerning *A*(*t*) are less specific, because the molecules making up the surrounding medium cannot be observed directly. For example, it is assumed that, for *t* ≠ *s*, the [infinitesimal](https://www.britannica.com/science/infinitesimal) random increments *d**A*(*t*) = *A*(*t* + *d**t*) − *A*(*t*) and *A*(*s* + *d**s*) − *A*(*s*) caused by collisions of the particle with molecules of the surrounding medium are independent random variables having distributions with mean 0 and unknown variances σ² *d**t* and σ² *d**s* and that *d**A*(*t*) is independent of *d**V*(*s*) for *s* \< *t*. The [differential equation](https://www.britannica.com/science/differential-equation) ([18](https://www.britannica.com/science/probability-theory/Brownian-motion-process#ref-14388)) has the solution *V*(*t*) = *e*^{−β*t*}*V*(0) + *m*^{−1}∫₀^*t* *e*^{−β(*t* − *s*)} *dA*(*s*) ([19](https://www.britannica.com/science/probability-theory/Brownian-motion-process#ref-14387)), where β = *f*/*m*. From this equation and the assumed properties of *A*(*t*), it follows that *E*\[*V*²(*t*)\] → σ²/(2*m**f*) as *t* → ∞.
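The limit *E*\[*V*²(*t*)\] → σ²/(2*m**f*) can be checked by a crude Euler-Maruyama discretization of equation (18); the scheme, step size, and parameter values below are illustrative assumptions, not part of the article's model.

```python
import random

# Sketch: crude Euler-Maruyama integration of m dV = -f V dt + dA(t), with
# dA ~ Normal(0, sigma^2 dt).  All parameter values are assumed for the demo;
# the long-run average of V^2 should approach sigma^2/(2*m*f) = 0.25 here.
random.seed(1)
m, f, sigma = 1.0, 2.0, 1.0
dt, n_steps = 0.01, 200_000
v, sum_v2 = 0.0, 0.0
for _ in range(n_steps):
    dA = random.gauss(0.0, sigma * dt ** 0.5)   # increment of A(t)
    v += (-f * v * dt + dA) / m
    sum_v2 += v * v
avg_v2 = sum_v2 / n_steps
print(round(avg_v2, 3))  # near sigma^2/(2*m*f) = 0.25
```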
Now assume that, in accordance with the principle of [equipartition of energy](https://www.britannica.com/science/equipartition-of-energy), the steady-state average [kinetic energy](https://www.britannica.com/science/kinetic-energy) of the particle, *m* lim*t* → ∞*E*\[*V*²(*t*)\]/2, equals the average kinetic energy of the molecules of the medium. According to the [kinetic theory of an ideal gas](https://www.britannica.com/science/kinetic-theory-of-gases), this is *R**T*/(2*N*), where *R* is the [ideal gas](https://www.britannica.com/science/ideal-gas) constant, *T* is the temperature of the gas in kelvins, and *N* is [Avogadro’s number](https://www.britannica.com/science/Avogadros-number), the number of molecules in one [gram molecular weight](https://www.britannica.com/science/mole-chemistry) of the gas. It follows that the unknown value of σ² can be determined: σ² = 2*R**T**f*/*N*. If one also assumes that the functions *V*(*t*) are continuous, which is certainly reasonable from physical considerations, it follows by mathematical [analysis](https://www.britannica.com/science/analysis-mathematics) that *A*(*t*) is a Brownian motion process as defined above. This conclusion poses questions about the meaning of the initial equation ([18](https://www.britannica.com/science/probability-theory/Brownian-motion-process#ref-14388)), because for mathematical Brownian motion the term *d**A*(*t*) does not exist in the usual sense of a derivative. Some additional mathematical analysis shows that the stochastic differential equation ([18](https://www.britannica.com/science/probability-theory/Brownian-motion-process#ref-14388)) and its solution equation ([19](https://www.britannica.com/science/probability-theory/Brownian-motion-process#ref-14387)) have a precise mathematical interpretation.
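To get a feeling for the magnitudes involved, here is a back-of-the-envelope evaluation of *f* = 6π*a*η and σ² = 2*R**T**f*/*N* for assumed, illustrative values (a 1-μm sphere in a water-like medium at room temperature); the resulting diffusion constant *R**T*/(6π*a*η*N*) comes out near 2 × 10^−13 m²/s.

```python
import math

# Back-of-the-envelope sketch with assumed, illustrative values (SI units):
# a 1-micron sphere suspended in a water-like medium at room temperature.
R = 8.314        # gas constant, J/(mol K)
T = 293.0        # temperature, K
N_A = 6.022e23   # Avogadro's number, 1/mol
a = 1.0e-6       # particle radius, m (assumed)
eta = 1.0e-3     # viscosity, Pa s (assumed, roughly water)

f = 6 * math.pi * a * eta                   # Stokes friction coefficient
sigma2 = 2 * R * T * f / N_A                # variance rate of A(t)
D = R * T / (6 * math.pi * a * eta * N_A)   # diffusion constant, m^2/s
print(f"f={f:.3e}  sigma^2={sigma2:.3e}  D={D:.3e}")  # D near 2e-13 m^2/s
```

Note the consistency check σ²/*f*² = 2*D*, which ties this model to Einstein's mean square displacement.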
The process *V*(*t*) is called the Ornstein-Uhlenbeck process, after the physicists Leonard Salomon Ornstein and [George Eugene Uhlenbeck](https://www.britannica.com/biography/George-Eugene-Uhlenbeck). The logical outgrowth of these attempts to differentiate and integrate with respect to a Brownian motion process is the Ito (named for the Japanese mathematician Itō Kiyosi) stochastic [calculus](https://www.britannica.com/science/calculus-mathematics), which plays an important role in the modern theory of stochastic processes. The displacement at time *t* of the particle whose velocity is given by equation ([19](https://www.britannica.com/science/probability-theory/Brownian-motion-process#ref-14387)) is *X*(*t*) − *X*(0) = β^{−1}(1 − *e*^{−β*t*})*V*(0) + *f*^{−1}*A*(*t*) − *f*^{−1}∫₀^*t* *e*^{−β(*t* − *s*)} *dA*(*s*). For *t* large compared with 1/β, the first and third terms in this expression are small compared with the second. Hence, *X*(*t*) − *X*(0) is approximately equal to *A*(*t*)/*f*, and the mean square displacement, *E*{\[*X*(*t*) − *X*(0)\]²}, is approximately σ²*t*/*f*² = *R**T**t*/(3π*a*η*N*). These final conclusions are consistent with Einstein’s model, although here they arise as an approximation to the model obtained from equation ([19](https://www.britannica.com/science/probability-theory/Brownian-motion-process#ref-14387)). Since it is primarily the conclusions that have observational consequences, there are essentially no new experimental implications. However, the analysis arising directly out of Newton’s second law, which yields a process having a well-defined velocity at each point, seems more satisfactory theoretically than Einstein’s original model.
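Einstein's density description can also be verified directly: the Gaussian density of *B*(*t*) satisfies ∂*g*/∂*t* = *D*∂²*g*/∂*x*². A finite-difference spot check follows; the test point and step size are arbitrary choices.

```python
import math

# Sketch: finite-difference check that g(x, t) = exp(-x^2/(4Dt))/sqrt(4 pi D t)
# satisfies the diffusion equation dg/dt = D * d2g/dx2.  The test point
# (x0, t0) and step eps are arbitrary.
D = 0.5   # D = sigma^2/2 with sigma = 1 (assumed)

def g(x, t):
    return math.exp(-x * x / (4 * D * t)) / math.sqrt(4 * math.pi * D * t)

x0, t0, eps = 0.7, 1.3, 1e-4
dg_dt = (g(x0, t0 + eps) - g(x0, t0 - eps)) / (2 * eps)
d2g_dx2 = (g(x0 + eps, t0) - 2 * g(x0, t0) + g(x0 - eps, t0)) / eps ** 2
residual = dg_dt - D * d2g_dx2
print(abs(residual))  # tiny, up to discretization error
```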
## Stochastic processes A stochastic process is a family of random variables *X*(*t*) indexed by a [parameter](https://www.merriam-webster.com/dictionary/parameter) *t*, which usually takes values in the discrete [set](https://www.britannica.com/topic/set-mathematics-and-logic) Τ = {0, 1, 2,…} or the continuous set Τ = \[0, +∞). In many cases *t* represents time, and *X*(*t*) is a [random variable](https://www.britannica.com/topic/random-variable) observed at time *t*. Examples are the Poisson process, the Brownian motion process, and the Ornstein-Uhlenbeck process described in the preceding section. Considered as a totality, the family of random variables {*X*(*t*), *t* ∊ Τ} [constitutes](https://www.merriam-webster.com/dictionary/constitutes) a “random function.” ## Stationary processes The mathematical theory of stochastic processes attempts to define classes of processes for which a unified theory can be developed. The most important classes are stationary processes and Markov processes. A stochastic process is called stationary if, for all *n*, *t*1 \< *t*2 \<⋯\< *t**n*, and *h* \> 0, the joint distribution of *X*(*t*1 + *h*),…, *X*(*t**n* + *h*) does not depend on *h*. This means that in effect there is no origin on the time axis; the stochastic behaviour of a stationary process is the same no matter when the process is observed. A sequence of independent identically distributed random variables is an example of a stationary process. A rather different example is defined as follows: *U*(0) is uniformly distributed on \[0, 1\]; for each *t* = 1, 2,…, *U*(*t*) = 2*U*(*t* − 1) if *U*(*t* − 1) ≤ 1/2, and *U*(*t*) = 2*U*(*t* − 1) − 1 if *U*(*t* − 1) \> 1/2. The [marginal](https://www.britannica.com/dictionary/marginal) distributions of *U*(*t*), *t* = 0, 1,… are uniformly distributed on \[0, 1\], but, in contrast to the case of independent identically distributed random variables, the entire sequence can be predicted from knowledge of *U*(0). 
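The doubling-map sequence *U*(*t*) just defined can be traced exactly with rational arithmetic; the starting value 3/10 below is arbitrary. The run illustrates how the whole path is determined by *U*(0):

```python
from fractions import Fraction

# Sketch: the map U(t) = 2U(t-1), or 2U(t-1) - 1 when U(t-1) > 1/2, traced
# in exact rational arithmetic.  The start U(0) = 3/10 is arbitrary.
def step(u):
    return 2 * u if u <= Fraction(1, 2) else 2 * u - 1

u = Fraction(3, 10)
path = [u]
for _ in range(5):
    u = step(u)
    path.append(u)
print(path)  # 3/10 -> 3/5 -> 1/5 -> 2/5 -> 4/5 -> 3/5
```

Every value stays in [0, 1), yet nothing beyond *U*(0) is random: the path is completely predictable, in contrast to an independent sequence.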
A third example of a stationary process is *X*(*t*) = Σ*k* *c**k*\[*Y**k* cos(θ*k**t*) + *Z**k* sin(θ*k**t*)\], where the *Y*s and *Z*s are independent normally distributed random variables with mean 0 and unit variance, and the *c*s and θs are constants. Processes of this kind can be useful in modeling seasonal or approximately periodic phenomena. A remarkable generalization of the strong [law of large numbers](https://www.britannica.com/science/law-of-large-numbers) is the [ergodic theorem](https://www.britannica.com/topic/ergodic-theorem): if *X*(*t*), *t* = 0, 1,… for the discrete case or 0 ≤ *t* \< ∞ for the continuous case, is a stationary process such that *E*\[*X*(0)\] is finite, then with probability 1 the average *s*^{−1}\[*X*(0) + *X*(1) +⋯+ *X*(*s* − 1)\], or *s*^{−1}∫₀^*s* *X*(*t*) *dt* if *t* is continuous, converges to a limit as *s* → ∞. In the special case that *t* is discrete and the *X*s are independent and identically distributed, the strong law of large numbers is also applicable and shows that the limit must equal *E*{*X*(0)}. However, the example that *X*(0) is an arbitrary random variable and *X*(*t*) ≡ *X*(0) for all *t* \> 0 shows that this cannot be true in general. The limit does equal *E*{*X*(0)} under an additional rather technical assumption to the effect that there is no subset of the state space, having probability strictly between 0 and 1, in which the process can get stuck and never escape. This assumption is not fulfilled by the example *X*(*t*) ≡ *X*(0) for all *t*, which gets stuck immediately at its initial value. It is satisfied by the sequence *U*(*t*) defined above, so by the ergodic theorem the average of these variables converges to 1/2 with probability 1. The ergodic theorem was first conjectured by the American chemist [J. 
Willard Gibbs](https://www.britannica.com/biography/J-Willard-Gibbs) in the early 1900s in the context of [statistical mechanics](https://www.britannica.com/science/statistical-mechanics) and was proved in a corrected, abstract formulation by the American mathematician [George David Birkhoff](https://www.britannica.com/biography/George-David-Birkhoff) in 1931.

## [Markovian processes](https://www.britannica.com/science/Markov-chain)

A stochastic process is called Markovian (after the Russian mathematician [Andrey Andreyevich Markov](https://www.britannica.com/biography/Andrey-Andreyevich-Markov)) if at any time *t* the [conditional probability](https://www.britannica.com/science/conditional-probability) of an arbitrary future event given the entire past of the process—i.e., given *X*(*s*) for all *s* ≤ *t*—equals the conditional probability of that future event given only *X*(*t*). Thus, in order to make a probabilistic statement about the future behaviour of a Markov process, it is no more helpful to know the entire history of the process than it is to know only its current state. The conditional distribution of *X*(*t* + *h*) given *X*(*t*) is called the transition probability of the process. If this conditional distribution does not depend on *t*, the process is said to have “stationary” transition probabilities. A Markov process with stationary transition probabilities may or may not be a stationary process in the sense of the preceding paragraph.
If *Y*1, *Y*2,… are independent random variables and *X*(*t*) = *Y*1 +⋯+ *Y**t*, the stochastic process *X*(*t*) is a Markov process. Given *X*(*t*) = *x*, the conditional probability that *X*(*t* + *h*) belongs to an interval (*a*, *b*) is just the probability that *Y*_{*t*+1} +⋯+ *Y*_{*t*+*h*} belongs to the translated interval (*a* − *x*, *b* − *x*); and because of independence this conditional probability would be the same if the values of *X*(1),…, *X*(*t* − 1) were also given. If the *Y*s are identically distributed as well as independent, this transition probability does not depend on *t*, and then *X*(*t*) is a Markov process with stationary transition probabilities. Sometimes *X*(*t*) is called a [random walk](https://www.britannica.com/science/random-walk), but this terminology is not completely standard. Since both the Poisson process and [Brownian motion](https://www.britannica.com/science/Brownian-motion) are created from random walks by simple limiting processes, they, too, are Markov processes with stationary transition probabilities. The Ornstein-Uhlenbeck process defined as the solution ([19](https://www.britannica.com/science/probability-theory/Brownian-motion-process#ref-14387)) to the stochastic [differential equation](https://www.britannica.com/science/differential-equation) ([18](https://www.britannica.com/science/probability-theory/Brownian-motion-process#ref-14388)) is also a Markov process with stationary transition probabilities. The Ornstein-Uhlenbeck process and many other Markov processes with stationary transition probabilities behave like stationary processes as *t* → ∞. Roughly speaking, the conditional distribution of *X*(*t*) given *X*(0) = *x* converges as *t* → ∞ to a distribution, called the stationary distribution, that does not depend on the starting value *X*(0) = *x*.
Moreover, with probability 1, the proportion of time the process spends in any subset of its state space converges to the stationary probability of that set; and, if *X*(0) is given the stationary distribution to begin with, the process becomes a stationary process. The Ornstein-Uhlenbeck process defined in [equation](https://www.britannica.com/science/equation) (19) is stationary if *V*(0) has a [normal distribution](https://www.britannica.com/topic/normal-distribution) with mean 0 and [variance](https://www.britannica.com/topic/variance) σ²/(2*m**f*). At another extreme are absorbing processes. An example is the Markov process describing Peter’s fortune during the game of gambler’s ruin. The process is absorbed whenever either Peter or Paul is ruined. Questions of interest involve the probability of being absorbed in one state rather than another and the distribution of the time until absorption occurs. Some additional examples of stochastic processes follow.

## The Ehrenfest model of diffusion

The Ehrenfest model of diffusion (named after the Austrian Dutch physicist Paul Ehrenfest) was proposed in the early 1900s in order to illuminate the statistical interpretation of the [second law of thermodynamics](https://www.britannica.com/science/second-law-of-thermodynamics), that the [entropy](https://www.britannica.com/science/entropy-physics) of a closed system can only increase. Suppose *N* molecules of a gas are in a rectangular container divided into two equal parts by a permeable membrane. The state of the system at time *t* is *X*(*t*), the number of molecules on the left-hand side of the membrane. At each time *t* = 1, 2,… a molecule is chosen at random (i.e., each molecule has probability 1/*N* to be chosen) and is moved from its present location to the other side of the membrane.
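A direct simulation of this urn scheme is straightforward; the sizes below (N = 100 molecules, 20,000 steps) are assumed for illustration. Starting from an empty left side, the count climbs toward N/2 and then fluctuates around it.

```python
import random

# Sketch: direct simulation of the Ehrenfest urn, with assumed sizes
# N = 100 and 20,000 steps, starting from an empty left side (x = 0).
random.seed(5)
N, steps = 100, 20_000
x, total, tail = 0, 0, 0
for t in range(steps):
    # the randomly chosen molecule is on the left with probability x/N
    if random.random() < x / N:
        x -= 1          # a left molecule moves right
    else:
        x += 1          # a right molecule moves left
    if t >= steps // 2:  # average over the second half of the run
        total += x
        tail += 1
avg = total / tail
print(round(avg, 1))  # near N/2 = 50
```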
Hence, the system evolves according to the transition probability *p*(*i*, *j*) = *P*{*X*(*t* + 1) = *j*\|*X*(*t*) = *i*}, where *p*(*i*, *i* + 1) = (*N* − *i*)/*N*, *p*(*i*, *i* − 1) = *i*/*N*, and *p*(*i*, *j*) = 0 otherwise. The long run behaviour of the Ehrenfest process can be inferred from general theorems about Markov processes in discrete time with discrete state space and stationary transition probabilities. Let *T*(*j*) denote the first time *t* ≥ 1 such that *X*(*t*) = *j* and [set](https://www.britannica.com/topic/set-mathematics-and-logic) *T*(*j*) = ∞ if *X*(*t*) ≠ *j* for all *t*. Assume that for all states *i* and *j* it is possible for the process to go from *i* to *j* in some number of steps—i.e., *P*{*T*(*j*) \< ∞\|*X*(0) = *i*} \> 0. If the equations *Q*(*j*) = Σ*i* *Q*(*i*)*p*(*i*, *j*) ([20](https://www.britannica.com/science/probability-theory/Brownian-motion-process#ref-14386)) have a solution *Q*(*j*) that is a probability distribution—i.e., *Q*(*j*) ≥ 0, and Σ*Q*(*j*) = 1—then that solution is unique and is the stationary distribution of the process. Moreover, *Q*(*j*) = 1/*E*{*T*(*j*)\|*X*(0) = *j*}; and, for any initial state *j*, the proportion of time *t* that *X*(*t*) = *i* converges with probability 1 to *Q*(*i*). For the special case of the Ehrenfest process, assume that *N* is large and *X*(0) = 0. According to the deterministic prediction of the second law of thermodynamics, the entropy of this system can only increase, which means that *X*(*t*) will steadily increase until half the molecules are on each side of the membrane. Indeed, according to the stochastic model described above, there is overwhelming probability that *X*(*t*) does increase initially. However, because of random fluctuations, the system occasionally moves from configurations having large entropy to those of smaller entropy and eventually even returns to its starting state, in defiance of the second law of thermodynamics.
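For the Ehrenfest chain the stationarity equations (20) can be checked exactly: the binomial distribution *Q*(*j*) = \[*N*!/(*j*!(*N* − *j*)!)\](1/2)^*N* solves them, and by the relation *Q*(*j*) = 1/*E*{*T*(*j*)\|*X*(0) = *j*} the reciprocal 1/*Q*(0) gives the expected return time to the empty state. A sketch in exact rational arithmetic (N = 6 for the check; the time estimate uses N = 100 and an assumed rate of 10^6 transitions per second):

```python
from fractions import Fraction
from math import comb

N = 6
Q = [Fraction(comb(N, j), 2 ** N) for j in range(N + 1)]  # binomial candidate

def p(i, j):
    """Ehrenfest transition probability p(i, j)."""
    if j == i + 1:
        return Fraction(N - i, N)
    if j == i - 1:
        return Fraction(i, N)
    return Fraction(0)

# stationarity: Q(j) = sum_i Q(i) p(i, j), verified exactly for every j
for j in range(N + 1):
    assert Q[j] == sum(Q[i] * p(i, j) for i in range(N + 1))

# expected return time to state 0 is 1/Q(0) = 2^N; for N = 100 molecules at
# an assumed 10^6 transitions per second this is astronomically long
years = 2 ** 100 / 10 ** 6 / (3600 * 24 * 365)
print("Q is stationary; 2^100 steps take about %.1e years" % years)
```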
The accepted resolution of this contradiction is that the length of time such a system must operate in order that an observable decrease of entropy may occur is so enormously long that a decrease could never be verified experimentally. To consider only the most extreme case, let *T* denote the first time *t* ≥ 1 at which *X*(*t*) = 0—i.e., the time of first return to the starting configuration having all molecules on the right-hand side of the membrane. It can be verified by substitution in equation ([20](https://www.britannica.com/science/probability-theory/Brownian-motion-process#ref-14386)) that the stationary distribution of the Ehrenfest model is the [binomial distribution](https://www.britannica.com/science/binomial-distribution) *Q*(*j*) = \[*N*!/(*j*!(*N* − *j*)!)\](1/2)^*N* and hence *E*(*T*) = 2^*N*. For example, if *N* is only 100 and transitions occur at the rate of 10^6 per second, *E*(*T*) is of the order of 10^15 years. Hence, on the macroscopic scale, on which experimental measurements can be made, the second law of thermodynamics holds.

## The symmetric random walk

A Markov process that behaves in quite different and surprising ways is the symmetric random walk. A particle occupies a point with integer [coordinates](https://www.britannica.com/science/coordinate-system) in *d*\-dimensional [Euclidean space](https://www.britannica.com/science/Euclidean-space). At each time *t* = 1, 2,… it moves from its present location to one of its 2*d* nearest neighbours with equal probabilities 1/(2*d*), independently of its past moves. For *d* = 1 this corresponds to moving a step to the right or left according to the outcome of tossing a fair [coin](https://www.britannica.com/money/coin).
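The contrast between dimensions can be glimpsed by Monte Carlo; the horizon and trial counts below are illustrative assumptions, and within a finite horizon one only sees a lower bound on the probability of ever returning.

```python
import random

# Sketch (Monte Carlo, illustrative sizes): fraction of symmetric random
# walks that return to the origin within a fixed horizon, in d = 1 and d = 3.
random.seed(3)

def return_fraction(d, horizon=500, trials=1000):
    count = 0
    for _ in range(trials):
        pos = [0] * d
        for _ in range(horizon):
            pos[random.randrange(d)] += random.choice((-1, 1))
            if not any(pos):          # back at the origin
                count += 1
                break
    return count / trials

r1, r3 = return_fraction(1), return_fraction(3)
print(r1, r3)  # r1 close to 1 (recurrent); r3 well below 1/2 (transient)
```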
It may be shown that for *d* = 1 or 2 the particle returns with probability 1 to its initial position and hence to every possible position infinitely many times, if the random walk continues indefinitely. In three or more dimensions, at any time *t* the number of possible steps that increase the distance of the particle from the origin is much larger than the number decreasing the distance, with the result that the particle eventually moves away from the origin and never returns. Even in one or two dimensions, although the particle eventually returns to its initial position, the expected waiting time until it returns is [infinite](https://www.merriam-webster.com/dictionary/infinite), there is no stationary distribution, and the proportion of time the particle spends in any state converges to 0\! ## [Queuing models](https://www.britannica.com/science/queuing-theory) The simplest service system is a single-server queue, where customers arrive, wait their turn, are served by a single server, and depart. Related stochastic processes are the waiting time of the *n*th customer and the number of customers in the queue at time *t*. For example, suppose that customers arrive at times 0 = *T*0 \< *T*1 \< *T*2 \<⋯ and wait in a [queue](https://www.britannica.com/dictionary/queue) until their turn. Let *V**n* denote the service time required by the *n*th customer, *n* = 0, 1, 2,…, and set *U**n* = *T**n* − *T**n* − 1. The waiting time, *W**n*, of the *n*th customer satisfies the relation *W*0 = 0 and, for *n* ≥ 1, *W**n* = max(0, *W**n* − 1 + *V**n* − 1 − *U**n*). To see this, observe that the *n*th customer must wait for the same length of time as the (*n* − 1)th customer plus the service time of the (*n* − 1)th customer minus the time between the arrival of the (*n* − 1)th and *n*th customer, during which the (*n* − 1)th customer is already waiting but the *n*th customer is not. 
An exception occurs if this quantity is negative, and then the waiting time of the *n*th customer is 0. Various assumptions can be made about the input and service mechanisms. One possibility is that customers arrive according to a Poisson process and their service times are independent, identically distributed random variables that are also independent of the arrival process. Then, in terms of *Y**n* = *V**n* − 1 − *U**n*, which are independent, identically distributed random variables, the recursive relation defining *W**n* becomes *W**n* = max(0, *W**n* − 1 + *Y**n*). This process is a Markov process. It is often called a random walk with reflecting barrier at 0, because it behaves like a random walk whenever it is positive and is pushed up to be equal to 0 whenever it tries to become negative. Quantities of interest are the mean and variance of the waiting time of the *n*th customer and, since these are very difficult to determine exactly, the mean and variance of the stationary distribution. More realistic queuing models try to accommodate systems with several servers and different classes of customers, who are served according to certain priorities. In most cases it is impossible to give a mathematical [analysis](https://www.britannica.com/science/analysis-mathematics) of the system, which must be simulated on a computer in order to obtain numerical results. The insights gained from theoretical analysis of simple cases can be helpful in performing these simulations. Queuing theory had its origins in attempts to understand traffic in telephone systems. Present-day research is stimulated, among other things, by problems associated with multiple-user computer systems. Reflecting barriers arise in other problems as well. For example, if *B*(*t*) [denotes](https://www.britannica.com/dictionary/denotes) Brownian motion, then *X*(*t*) = *B*(*t*) + *c**t* is called Brownian motion with drift *c*. 
This model is appropriate for Brownian motion of a particle under the influence of a [constant](https://www.britannica.com/topic/constant) force field such as gravity. One can add a reflecting barrier at 0 to account for reflections of the Brownian particle off the bottom of its container. The result is a model for sedimentation, which for *c* \< 0 in the steady state as *t* → ∞ gives a statistical derivation of the law of pressure as a [function](https://www.britannica.com/science/function-mathematics) of depth in an isothermal atmosphere. Just as ordinary Brownian motion can be obtained as the limit of a rescaled random walk as the number of steps becomes very large and the size of individual steps small, Brownian motion with a reflecting barrier at 0 can be obtained as the limit of a rescaled random walk with reflection at 0. In this way, Brownian motion with a reflecting barrier plays a role in the analysis of [queuing](https://www.britannica.com/dictionary/queuing) systems. In fact, in modern probability theory one of the most important uses of Brownian motion and other diffusion processes is as approximations to more complicated stochastic processes. The exact mathematical description of these approximations gives remarkable generalizations of the [central limit theorem](https://www.britannica.com/science/central-limit-theorem) from sequences of random variables to sequences of random functions. ## Insurance [risk](https://www.britannica.com/money/risk-finance) theory The ruin problem of insurance risk theory is closely related to the problem of gambler’s ruin described earlier and, rather surprisingly, to the single-server queue as well. Suppose the amount of capital at time *t* in one portfolio of an insurance company is denoted by *X*(*t*). Initially *X*(0) = *x* \> 0. During each unit of time, the portfolio receives an amount *c* \> 0 in premiums. 
At random times claims are made against the insurance company, which must pay the amount *V**n* \> 0 to settle the *n*th claim. If *N*(*t*) denotes the number of claims made in time *t*, then *X*(*t*) = *x* + *c**t* − \[*V*1 +⋯+ *V*_{*N*(*t*)}\], provided that this quantity has been positive at all earlier times *s* \< *t*. At the first time *X*(*t*) becomes negative, however, the portfolio is ruined. A principal problem of insurance risk theory is to find the probability of ultimate ruin. If one imagines that the problem of gambler’s ruin is modified so that Peter’s opponent has an infinite amount of capital and can never be ruined, then the probability that Peter is ultimately ruined is similar to the ruin probability of insurance risk theory. In fact, with the artificial assumptions that (i) *c* = 1, (ii) time proceeds by discrete units, say *t* = 1, 2,…, (iii) *V**n* is identically equal to 2 for all *n*, and (iv) at each time *t* a claim occurs with probability *p* or does not occur with probability *q* independently of what occurs at other times, then the process *X*(*t*) is the same stochastic process as Peter’s fortune, which is absorbed if it ever reaches the state 0. The probability of Peter’s ultimate ruin against an infinitely rich adversary is easily obtained by taking the limit of equation (6) as *m* → ∞. The answer is (*q*/*p*)^*x* if *p* \> *q*—i.e., the game is favourable to Peter—and 1 if *p* ≤ *q*. More interesting assumptions for the insurance risk problem are that the number of claims *N*(*t*) is a Poisson process and the sizes of the claims *V*1, *V*2,… are independent, identically distributed positive [random variables](https://www.britannica.com/topic/random-variable).
Rather surprisingly, under these assumptions the probability of ultimate ruin as a function of the initial fortune *x* is exactly the same as the stationary probability that the waiting time in the single-server queue with Poisson input exceeds *x*. Unfortunately, neither problem is easy to solve exactly, although there is a very good approximate solution originally derived by the Swedish mathematician Harald Cramér.

## Martingale theory

As a final example, it seems appropriate to mention one of the dominant ideas of modern probability theory, which at the same time springs directly from the relation of probability to games of [chance](https://www.britannica.com/science/likelihood). Suppose that *X*1, *X*2,… is any stochastic process and, for each *n* = 0, 1,…, *f**n* = *f**n*(*X*1,…, *X**n*) is a (Borel-measurable) function of the indicated observations. The new stochastic process *f**n* is called a martingale if *E*(*f**n*\|*X*1,…, *X**n* − 1) = *f**n* − 1 for every value of *n* \> 0 and all values of *X*1,…, *X**n* − 1. If the sequence of *X*s are outcomes in successive trials of a game of chance and *f**n* is the fortune of a gambler after the *n*th trial, then the martingale condition says that the game is absolutely fair in the sense that, no matter what the past history of the game, the gambler’s conditional expected fortune after one more trial is exactly equal to his present fortune. For example, let *X*0 = *x*, and for *n* ≥ 1 let *X**n* equal 1 or −1 according as a coin having probability *p* of heads and *q* = 1 − *p* of tails turns up heads or tails on the *n*th toss. Let *S**n* = *X*0 +⋯+ *X**n*. Then *f**n* = *S**n* − *n*(*p* − *q*) and *f**n* = (*q*/*p*)^{*S**n*} are martingales.
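Both martingale claims reduce to a one-step identity: conditioning on the history, the next ±1 step must leave the expected value of *f* unchanged. This can be verified exactly in rational arithmetic (p = 3/5 is an arbitrary test value):

```python
from fractions import Fraction

# Sketch: one-step fairness checks for the two martingales, in exact
# arithmetic.  The value p = 3/5 is an arbitrary assumption for the demo.
p = Fraction(3, 5)
q = 1 - p
r = q / p

# f_n = (q/p)^{S_n}: after one +-1 step the conditional expectation is unchanged
for s in range(-3, 4):
    assert p * r ** (s + 1) + q * r ** (s - 1) == r ** s

# f_n = S_n - n(p - q): the same one-step check
for n in range(1, 5):
    for s in range(-3, 4):
        lhs = p * ((s + 1) - n * (p - q)) + q * ((s - 1) - n * (p - q))
        assert lhs == s - (n - 1) * (p - q)
print("both martingale identities hold exactly")
```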
One of the basic results of martingale theory is that, if the gambler is free to quit the game at any time using any strategy whatever, provided only that this strategy does not [foresee](https://www.britannica.com/dictionary/foresee) the future, then the game remains fair. This means that, if *N* denotes the stopping time at which the gambler’s strategy tells him to quit the game, so that his final fortune is *f**N*, then![Equation.](https://cdn.britannica.com/85/14385-004-F17F0C02/Equation.jpg) Strictly speaking, this result is not true without some additional conditions that must be verified for any particular application. To see how efficiently it works, consider once again the problem of gambler’s ruin and let *N* be the first value of *n* such that *S**n* = 0 or *m*; i.e., *N* denotes the random time at which ruin first occurs and the game ends. In the case *p* = 1/2, application of equation ([21](https://www.britannica.com/science/probability-theory/Brownian-motion-process#ref-14385)) to the martingale *f**n* = *S**n*, together with the observation that *f**N* = either 0 or *m*, [yields](https://www.britannica.com/dictionary/yields) the equalities *x* = *f*0 = *E*(*f**N*\|*f*0 = *x*) = *m*\[1 − *Q*(*x*)\], which can be immediately solved to give the answer in equation ([6](https://www.britannica.com/science/probability-theory/Brownian-motion-process#ref-14398)). For *p* ≠ 1/2, one uses the martingale *f**n* = (*q*/*p*)*S**n* and similar reasoning to obtain![Equation.](https://cdn.britannica.com/64/14364-004-C607E795/Equation.jpg)from which the first equation in ([6](https://www.britannica.com/science/probability-theory/Brownian-motion-process#ref-14398)) easily follows. The expected duration of the game is obtained by a similar argument. A particularly beautiful and important result is the martingale convergence [theorem](https://www.britannica.com/topic/theorem), which implies that a nonnegative martingale converges with probability 1 as *n* → ∞. 
Siegmund, David O. "probability theory". *Encyclopedia Britannica*, 1 Mar. 2026, https://www.britannica.com/science/probability-theory.
## [Markovian processes](https://www.britannica.com/science/Markov-chain)

A stochastic process is called Markovian (after the Russian mathematician [Andrey Andreyevich Markov](https://www.britannica.com/biography/Andrey-Andreyevich-Markov)) if at any time *t* the [conditional probability](https://www.britannica.com/science/conditional-probability) of an arbitrary future event given the entire past of the process—i.e., given *X*(*s*) for all *s* ≤ *t*—equals the conditional probability of that future event given only *X*(*t*). Thus, in order to make a probabilistic statement about the future behaviour of a Markov process, it is no more helpful to know the entire history of the process than it is to know only its current state. The conditional distribution of *X*(*t* + *h*) given *X*(*t*) is called the transition probability of the process. If this conditional distribution does not depend on *t*, the process is said to have “stationary” [transition](https://www.britannica.com/dictionary/transition) probabilities. A Markov process with stationary transition probabilities may or may not be a stationary process in the sense of the preceding paragraph. If *Y*₁, *Y*₂,… are independent random variables and *X*(*t*) = *Y*₁ +⋯+ *Y*ₜ, the stochastic process *X*(*t*) is a Markov process. Given *X*(*t*) = *x*, the conditional probability that *X*(*t* + *h*) belongs to an interval (*a*, *b*) is just the probability that *Y*ₜ₊₁ +⋯+ *Y*ₜ₊ₕ belongs to the translated interval (*a* − *x*, *b* − *x*); and because of independence this conditional probability would be the same if the values of *X*(1),…, *X*(*t* − 1) were also given. If the *Y*s are identically distributed as well as independent, this transition probability does not depend on *t*, and then *X*(*t*) is a Markov process with [stationary](https://www.britannica.com/dictionary/stationary) transition probabilities.
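The translated-interval identity above is easy to check by Monte Carlo simulation. The sketch below uses fair ±1 increments; the particular values of *t*, *h*, *x*, and (*a*, *b*) are arbitrary choices for illustration.

```python
import random

random.seed(1)

def step():
    # iid increment: +1 or -1 with probability 1/2 each
    return 1 if random.random() < 0.5 else -1

# Estimate P{X(t+h) in (a,b) | X(t) = x} two ways for the walk
# X(t) = Y_1 + ... + Y_t with iid steps.
t, h, x, a, b = 10, 4, 2, -1, 5
trials = 200_000

# Direct: condition on X(t) = x and look h steps further.
hits = total = 0
for _ in range(trials):
    pos = sum(step() for _ in range(t))
    if pos == x:
        total += 1
        future = pos + sum(step() for _ in range(h))
        if a < future < b:
            hits += 1
direct = hits / total

# Translated interval: P{Y_{t+1} + ... + Y_{t+h} in (a - x, b - x)},
# which ignores the past entirely.
hits2 = sum(1 for _ in range(trials)
            if a - x < sum(step() for _ in range(h)) < b - x)
translated = hits2 / trials

print(round(direct, 3), round(translated, 3))  # the two estimates agree
```

Conditioning additionally on the earlier values *X*(1),…, *X*(*t* − 1) would, by the same independence argument, leave the first estimate unchanged.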
Sometimes *X*(*t*) is called a [random walk](https://www.britannica.com/science/random-walk), but this terminology is not completely standard. Since both the Poisson process and [Brownian motion](https://www.britannica.com/science/Brownian-motion) are created from random walks by simple limiting processes, they, too, are Markov processes with stationary transition probabilities. The Ornstein-Uhlenbeck process defined as the solution ([19](https://www.britannica.com/science/probability-theory/Brownian-motion-process#ref-14387)) to the stochastic [differential equation](https://www.britannica.com/science/differential-equation) ([18](https://www.britannica.com/science/probability-theory/Brownian-motion-process#ref-14388)) is also a Markov process with stationary transition probabilities. The Ornstein-Uhlenbeck process and many other Markov processes with stationary transition probabilities behave like stationary processes as *t* → ∞. Roughly speaking, the conditional distribution of *X*(*t*) given *X*(0) = *x* converges as *t* → ∞ to a distribution, called the stationary distribution, that does not depend on the starting [value](https://www.britannica.com/dictionary/value) *X*(0) = *x*. Moreover, with probability 1, the proportion of time the process spends in any subset of its state space converges to the stationary probability of that set; and, if *X*(0) is given the stationary distribution to begin with, the process becomes a stationary process. The Ornstein-Uhlenbeck process defined in [equation](https://www.britannica.com/science/equation) (19) is stationary if *V*(0) has a [normal distribution](https://www.britannica.com/topic/normal-distribution) with mean 0 and [variance](https://www.britannica.com/topic/variance) σ²/(2*mf*). At another extreme are absorbing processes. An example is the Markov process describing Peter’s fortune during the game of gambler’s ruin. The process is absorbed whenever either Peter or Paul is ruined.
Questions of interest involve the probability of being absorbed in one state rather than another and the distribution of the time until absorption occurs. Some additional examples of stochastic processes follow.

## The Ehrenfest model of diffusion

The Ehrenfest model of [diffusion](https://www.merriam-webster.com/dictionary/diffusion) (named after the Austrian Dutch physicist Paul Ehrenfest) was proposed in the early 1900s in order to [illuminate](https://www.merriam-webster.com/dictionary/illuminate) the statistical interpretation of the [second law of thermodynamics](https://www.britannica.com/science/second-law-of-thermodynamics), that the [entropy](https://www.britannica.com/science/entropy-physics) of a closed system can only increase. Suppose *N* molecules of a gas are in a rectangular container divided into two equal parts by a permeable membrane. The state of the system at time *t* is *X*(*t*), the number of molecules on the left-hand side of the membrane. At each time *t* = 1, 2,… a molecule is chosen at random (i.e., each molecule has probability 1/*N* to be chosen) and is moved from its present location to the other side of the membrane. Hence, the system evolves according to the transition probability *p*(*i*, *j*) = *P*{*X*(*t* + 1) = *j* | *X*(*t*) = *i*}, where![Equations.](https://cdn.britannica.com/60/14360-004-E981DB6D/Equations.jpg)that is, *p*(*i*, *i* + 1) = (*N* − *i*)/*N*, *p*(*i*, *i* − 1) = *i*/*N*, and *p*(*i*, *j*) = 0 otherwise. The long run behaviour of the Ehrenfest process can be inferred from general theorems about Markov processes in discrete time with discrete state space and stationary transition probabilities. Let *T*(*j*) denote the first time *t* ≥ 1 such that *X*(*t*) = *j* and [set](https://www.britannica.com/topic/set-mathematics-and-logic) *T*(*j*) = ∞ if *X*(*t*) ≠ *j* for all *t*. Assume that for all states *i* and *j* it is possible for the process to go from *i* to *j* in some number of steps—i.e., *P*{*T*(*j*) < ∞ | *X*(0) = *i*} > 0.
If the equations![Equation.](https://cdn.britannica.com/86/14386-004-8B016127/Equation.jpg)(in words, *Q*(*j*) = Σᵢ *Q*(*i*)*p*(*i*, *j*) for all *j*) have a solution *Q*(*j*) that is a probability distribution—i.e., *Q*(*j*) ≥ 0, and Σ*Q*(*j*) = 1—then that solution is unique and is the stationary distribution of the process. Moreover, *Q*(*j*) = 1/*E*{*T*(*j*) | *X*(0) = *j*}; and, for any initial state *j*, the proportion of time *t* that *X*(*t*) = *i* converges with probability 1 to *Q*(*i*). For the special case of the Ehrenfest process, assume that *N* is large and *X*(0) = 0. According to the deterministic prediction of the second law of thermodynamics, the [entropy](https://www.merriam-webster.com/dictionary/entropy) of this system can only increase, which means that *X*(*t*) will steadily increase until half the molecules are on each side of the membrane. Indeed, according to the stochastic model described above, there is overwhelming probability that *X*(*t*) does increase initially. However, because of random fluctuations, the system occasionally moves from configurations having large entropy to those of smaller entropy and eventually even returns to its starting state, in defiance of the second law of thermodynamics. The accepted resolution of this [contradiction](https://www.britannica.com/dictionary/contradiction) is that the length of time such a system must operate in order that an observable decrease of entropy may occur is so enormously long that a decrease could never be verified experimentally. To consider only the most extreme case, let *T* denote the first time *t* ≥ 1 at which *X*(*t*) = 0—i.e., the time of first return to the starting configuration having all molecules on the right-hand side of the membrane.
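The return time *T* is also easy to study by direct simulation when *N* is small, so that the mean return time is not yet astronomical. The sketch below uses *N* = 8 (an arbitrary small choice), for which the theorem above gives *E*(*T*) = 1/*Q*(0) = 2⁸ = 256.

```python
import random

random.seed(7)

def return_time(N):
    """Steps until the Ehrenfest chain, started at X(0) = 0, first returns to 0.
    At each step one of the N molecules is picked uniformly at random;
    the state is the number of molecules on the left-hand side."""
    x, t = 0, 0
    while True:
        t += 1
        if random.random() < x / N:    # chosen molecule is on the left
            x -= 1
        else:                          # chosen molecule is on the right
            x += 1
        if x == 0:
            return t

N = 8
times = [return_time(N) for _ in range(3000)]
mean_rt = sum(times) / len(times)
print(mean_rt)   # close to 2**N = 256
```

For *N* = 100 the same loop would be hopeless, which is precisely the point of the passage above.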
It can be verified by substitution in equation ([20](https://www.britannica.com/science/probability-theory/Brownian-motion-process#ref-14386)) that the stationary distribution of the Ehrenfest model is the [binomial distribution](https://www.britannica.com/science/binomial-distribution)![Problem 12](https://cdn.britannica.com/22/76622-004-DA978066/Problem.jpg)(that is, *Q*(*j*) = (*N* choose *j*)/2ᴺ) and hence *E*(*T*) = 1/*Q*(0) = 2ᴺ. For example, if *N* is only 100 and transitions occur at the rate of 10⁶ per second, *E*(*T*) is of the order of 10¹⁵ years. Hence, on the macroscopic scale, on which experimental measurements can be made, the second law of thermodynamics holds.

## The symmetric random walk

A Markov process that behaves in quite different and surprising ways is the symmetric random walk. A particle occupies a point with integer [coordinates](https://www.britannica.com/science/coordinate-system) in *d*-dimensional [Euclidean space](https://www.britannica.com/science/Euclidean-space). At each time *t* = 1, 2,… it moves from its present location to one of its 2*d* nearest neighbours with equal probabilities 1/(2*d*), independently of its past moves. For *d* = 1 this corresponds to moving a step to the right or left according to the outcome of tossing a fair [coin](https://www.britannica.com/money/coin). It may be shown that for *d* = 1 or 2 the particle returns with probability 1 to its initial position and hence to every possible position infinitely many times, if the random walk continues indefinitely. In three or more dimensions, at any time *t* the number of possible steps that increase the distance of the particle from the origin is much larger than the number decreasing the distance, with the result that the particle eventually moves away from the origin and never returns.
Even in one or two dimensions, although the particle eventually returns to its initial position, the expected waiting time until it returns is [infinite](https://www.merriam-webster.com/dictionary/infinite), there is no stationary distribution, and the proportion of time the particle spends in any state converges to 0!

## [Queuing models](https://www.britannica.com/science/queuing-theory)

The simplest service system is a single-server queue, where customers arrive, wait their turn, are served by a single server, and depart. Related stochastic processes are the waiting time of the *n*th customer and the number of customers in the queue at time *t*. For example, suppose that customers arrive at times 0 = *T*₀ < *T*₁ < *T*₂ <⋯ and wait in a [queue](https://www.britannica.com/dictionary/queue) until their turn. Let *V*ₙ denote the service time required by the *n*th customer, *n* = 0, 1, 2,…, and set *U*ₙ = *T*ₙ − *T*ₙ₋₁. The waiting time, *W*ₙ, of the *n*th customer satisfies the relation *W*₀ = 0 and, for *n* ≥ 1, *W*ₙ = max(0, *W*ₙ₋₁ + *V*ₙ₋₁ − *U*ₙ). To see this, observe that the *n*th customer must wait for the same length of time as the (*n* − 1)th customer plus the service time of the (*n* − 1)th customer minus the time between the arrival of the (*n* − 1)th and *n*th customer, during which the (*n* − 1)th customer is already waiting but the *n*th customer is not. An exception occurs if this quantity is negative, and then the waiting time of the *n*th customer is 0. Various assumptions can be made about the input and service mechanisms. One possibility is that customers arrive according to a Poisson process and their service times are independent, identically distributed random variables that are also independent of the arrival process. Then, in terms of *Y*ₙ = *V*ₙ₋₁ − *U*ₙ, which are independent, identically distributed random variables, the recursive relation defining *W*ₙ becomes *W*ₙ = max(0, *W*ₙ₋₁ + *Y*ₙ).
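The recursion is straightforward to simulate. The sketch below assumes Poisson arrivals (rate 0.5) and exponentially distributed service times (rate 1); both rates are arbitrary choices. For this particular case, standard single-server queuing formulas give a long-run mean waiting time of ρ/(μ − λ) = 1.

```python
import random

random.seed(3)

lam, mu = 0.5, 1.0     # arrival rate and service rate (assumed values)
n_customers = 200_000

w = 0.0                              # W_0 = 0
waits = []
v_prev = random.expovariate(mu)      # V_0: service time of customer 0
for _ in range(1, n_customers):
    u = random.expovariate(lam)      # U_n: interarrival time
    w = max(0.0, w + v_prev - u)     # W_n = max(0, W_{n-1} + V_{n-1} - U_n)
    waits.append(w)
    v_prev = random.expovariate(mu)

mean_wait = sum(waits) / len(waits)
print(round(mean_wait, 2))           # near 1.0 for these rates
```

Replacing `expovariate` with any other service-time distribution changes only one line, which is one reason the recursion is so convenient for simulation.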
This process is a Markov process. It is often called a random walk with reflecting barrier at 0, because it behaves like a random walk whenever it is positive and is pushed up to be equal to 0 whenever it tries to become negative. Quantities of interest are the mean and variance of the waiting time of the *n*th customer and, since these are very difficult to determine exactly, the mean and variance of the stationary distribution. More realistic queuing models try to accommodate systems with several servers and different classes of customers, who are served according to certain priorities. In most cases it is impossible to give a mathematical [analysis](https://www.britannica.com/science/analysis-mathematics) of the system, which must be simulated on a computer in order to obtain numerical results. The insights gained from theoretical analysis of simple cases can be helpful in performing these simulations. Queuing theory had its origins in attempts to understand traffic in telephone systems. Present-day research is stimulated, among other things, by problems associated with multiple-user computer systems. Reflecting barriers arise in other problems as well. For example, if *B*(*t*) [denotes](https://www.britannica.com/dictionary/denotes) Brownian motion, then *X*(*t*) = *B*(*t*) + *ct* is called Brownian motion with drift *c*. This model is appropriate for Brownian motion of a particle under the influence of a [constant](https://www.britannica.com/topic/constant) force field such as gravity. One can add a reflecting barrier at 0 to account for reflections of the Brownian particle off the bottom of its container. The result is a model for sedimentation, which for *c* < 0 in the steady state as *t* → ∞ gives a statistical derivation of the law of pressure as a [function](https://www.britannica.com/science/function-mathematics) of depth in an isothermal atmosphere.
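A sketch of this steady state, using a small-step reflected random walk as a stand-in for reflected Brownian motion with drift *c* = −1 (the step size, run length, and drift are arbitrary choices). For unit-variance Brownian motion with drift *c* < 0 reflected at 0, the stationary distribution is exponential with rate 2|*c*|, so the long-run mean height should be near 1/(2|*c*|) = 0.5.

```python
import random

random.seed(11)

c = -1.0          # downward drift, e.g. gravity acting on the particle
dt = 0.005        # time step of the discretized walk
steps = 400_000

x, total, count = 0.0, 0.0, 0
for i in range(steps):
    # one Euler step of B(t) + c*t, reflected at the container bottom 0
    x = max(0.0, x + c * dt + random.gauss(0.0, dt ** 0.5))
    if i > steps // 10:              # discard an initial transient
        total += x
        count += 1

mean_height = total / count
print(round(mean_height, 2))
```

The exponential decay of the stationary density with height is the discrete analogue of the barometric pressure law mentioned above.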
Just as ordinary Brownian motion can be obtained as the limit of a rescaled random walk as the number of steps becomes very large and the size of individual steps small, Brownian motion with a reflecting barrier at 0 can be obtained as the limit of a rescaled random walk with reflection at 0. In this way, Brownian motion with a reflecting barrier plays a role in the analysis of [queuing](https://www.britannica.com/dictionary/queuing) systems. In fact, in modern probability theory one of the most important uses of Brownian motion and other diffusion processes is as approximations to more complicated stochastic processes. The exact mathematical description of these approximations gives remarkable generalizations of the [central limit theorem](https://www.britannica.com/science/central-limit-theorem) from sequences of random variables to sequences of random functions.

## Insurance [risk](https://www.britannica.com/money/risk-finance) theory

The ruin problem of insurance risk theory is closely related to the problem of gambler’s ruin described earlier and, rather surprisingly, to the single-server queue as well. Suppose the amount of capital at time *t* in one portfolio of an insurance company is denoted by *X*(*t*). Initially *X*(0) = *x* > 0. During each unit of time, the portfolio receives an amount *c* > 0 in premiums. At random times claims are made against the [insurance](https://www.britannica.com/dictionary/insurance) company, which must pay the amount *V*ₙ > 0 to settle the *n*th claim. If *N*(*t*) denotes the number of claims made in time *t*, then![Problem 13](https://cdn.britannica.com/23/76623-004-7B8552AF/Problem.jpg)(in words, *X*(*t*) equals *x* + *ct* minus the sum of the first *N*(*t*) claims), provided that this quantity has been positive at all earlier times *s* < *t*. At the first time *X*(*t*) becomes negative, however, the portfolio is ruined. A principal problem of insurance risk theory is to find the probability of ultimate ruin.
If one imagines that the problem of gambler’s ruin is modified so that Peter’s opponent has an infinite amount of capital and can never be ruined, then the probability that Peter is ultimately ruined is similar to the ruin probability of insurance risk theory. In fact, with the artificial assumptions that (i) *c* = 1, (ii) time proceeds by [discrete](https://www.britannica.com/dictionary/discrete) units, say *t* = 1, 2,…, (iii) *V*ₙ is identically equal to 2 for all *n*, and (iv) at each time *t* a claim occurs with probability *q* or does not occur with probability *p* independently of what occurs at other times, then the process *X*(*t*) is the same stochastic process as Peter’s fortune, which is absorbed if it ever reaches the state 0. The probability of Peter’s ultimate ruin against an infinitely rich adversary is easily obtained by taking the limit of equation (6) as *m* → ∞. The answer is (*q*/*p*)ˣ if *p* > *q*—i.e., the game is favourable to Peter—and 1 if *p* ≤ *q*. More interesting assumptions for the insurance risk problem are that the number of claims *N*(*t*) is a Poisson process and the sizes of the claims *V*₁, *V*₂,… are independent, identically distributed positive [random variables](https://www.britannica.com/topic/random-variable). Rather surprisingly, under these assumptions the probability of ultimate ruin as a function of the initial fortune *x* is exactly the same as the stationary probability that the waiting time in the single-server queue with Poisson input exceeds *x*. Unfortunately, neither problem is easy to solve exactly, although there is a very good approximate solution originally derived by the Swedish mathematician Harald Cramér.
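The discrete special case can be checked numerically. The sketch below takes the no-claim probability to be 0.6 and the claim probability 0.4 (arbitrary values with *p* > *q*, so that ruin is not certain) and compares the simulated ruin frequency, truncated at a finite horizon, with the limiting formula (*q*/*p*)ˣ.

```python
import random

random.seed(5)

p, q = 0.6, 0.4   # no-claim and claim probabilities (assumed values, p > q)
x0 = 3            # initial capital
horizon = 500     # with upward drift, ruin after this many steps is negligible
trials = 20_000

ruined = 0
for _ in range(trials):
    cap = x0
    for _ in range(horizon):
        # premium +1 each period; a claim of size 2 gives a net change of -1
        cap += 1 if random.random() < p else -1
        if cap == 0:
            ruined += 1
            break

est = ruined / trials
exact = (q / p) ** x0     # limit of the gambler's-ruin formula as m -> infinity
print(round(est, 3), round(exact, 3))
```

The truncation is harmless here: a path that has survived 500 periods sits far above 0 with overwhelming probability, and its residual ruin probability is negligible.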
## Martingale theory

As a final example, it seems [appropriate](https://www.britannica.com/dictionary/appropriate) to mention one of the dominant ideas of modern probability theory, which at the same time springs directly from the relation of probability to games of [chance](https://www.britannica.com/science/likelihood). Suppose that *X*₁, *X*₂,… is any stochastic process and, for each *n* = 0, 1,…, *f*ₙ = *f*ₙ(*X*₁,…, *X*ₙ) is a (Borel-measurable) function of the indicated observations. The new stochastic process *f*ₙ is called a martingale if *E*(*f*ₙ | *X*₁,…, *X*ₙ₋₁) = *f*ₙ₋₁ for every *n* > 0 and all values of *X*₁,…, *X*ₙ₋₁. If the *X*s are the outcomes of successive trials of a game of chance and *f*ₙ is the fortune of a gambler after the *n*th trial, then the martingale condition says that the game is absolutely fair in the sense that, no matter what the past history of the game, the gambler’s conditional expected fortune after one more trial is exactly equal to his present fortune. For example, let *X*₀ = *x*, and for *n* ≥ 1 let *X*ₙ equal 1 or −1 according as a coin having probability *p* of heads and *q* = 1 − *p* of tails turns up heads or tails on the *n*th toss. Let *S*ₙ = *X*₀ +⋯+ *X*ₙ. Then *f*ₙ = *S*ₙ − *n*(*p* − *q*) and *f*ₙ = (*q*/*p*)^*S*ₙ are martingales. One of the basic results of martingale theory is that, if the gambler is free to quit the game at any time using any strategy whatever, provided only that this strategy does not [foresee](https://www.britannica.com/dictionary/foresee) the future, then the game remains fair. This means that, if *N* denotes the stopping time at which the gambler’s strategy tells him to quit the game, so that his final fortune is *f*_*N*, then![Equation.](https://cdn.britannica.com/85/14385-004-F17F0C02/Equation.jpg) Strictly speaking, this result is not true without some additional conditions that must be verified for any particular application.
To see how efficiently it works, consider once again the problem of gambler’s ruin and let *N* be the first value of *n* such that *S*ₙ = 0 or *m*; i.e., *N* denotes the random time at which ruin first occurs and the game ends. In the case *p* = 1/2, application of equation ([21](https://www.britannica.com/science/probability-theory/Brownian-motion-process#ref-14385)) to the martingale *f*ₙ = *S*ₙ, together with the observation that *f*_*N* is either 0 or *m*, [yields](https://www.britannica.com/dictionary/yields) the equalities *x* = *f*₀ = *E*(*f*_*N* | *f*₀ = *x*) = *m*[1 − *Q*(*x*)], which can be immediately solved to give the answer in equation ([6](https://www.britannica.com/science/probability-theory/Brownian-motion-process#ref-14398)). For *p* ≠ 1/2, one uses the martingale *f*ₙ = (*q*/*p*)^*S*ₙ and similar reasoning to obtain![Equation.](https://cdn.britannica.com/64/14364-004-C607E795/Equation.jpg)(namely (*q*/*p*)ˣ = *Q*(*x*) + [1 − *Q*(*x*)](*q*/*p*)ᵐ), from which the first equation in ([6](https://www.britannica.com/science/probability-theory/Brownian-motion-process#ref-14398)) easily follows. The expected duration of the game is obtained by a similar argument. A particularly beautiful and important result is the martingale convergence [theorem](https://www.britannica.com/topic/theorem), which implies that a nonnegative martingale converges with probability 1 as *n* → ∞. This means that, if a gambler’s successive fortunes form a (nonnegative) martingale, they cannot continue to [fluctuate](https://www.britannica.com/dictionary/fluctuate) indefinitely but must approach some limiting value. Basic martingale theory and many of its applications were developed by the American mathematician Joseph Leo Doob during the 1940s and ’50s following some earlier results due to [Paul Lévy](https://www.britannica.com/biography/Paul-Levy). Subsequently it has become one of the most powerful tools available to study [stochastic processes](https://www.britannica.com/science/stochastic-process).
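The optional stopping argument for the fair case *p* = 1/2 can be verified numerically; in the sketch below the starting fortune *x* = 3 and barrier *m* = 10 are arbitrary choices. Since *S*_*N* is either 0 or *m*, the empirical mean of *S*_*N* should be near *x*, and the ruin frequency near 1 − *x*/*m*.

```python
import random

random.seed(9)

x, m = 3, 10        # starting fortune and absorbing barrier (assumed values)
trials = 50_000

final_sum = 0
ruins = 0
for _ in range(trials):
    s = x
    while 0 < s < m:                      # play until ruin (0) or victory (m)
        s += 1 if random.random() < 0.5 else -1
    final_sum += s                        # S_N is either 0 or m
    if s == 0:
        ruins += 1

# Optional stopping for the martingale f_n = S_n gives E(S_N) = x,
# i.e. m * (1 - Q(x)) = x, so the ruin probability is Q(x) = 1 - x/m.
print(round(final_sum / trials, 2), round(ruins / trials, 3))
```

The same experiment with a biased coin and the martingale (*q*/*p*)^*S*ₙ would reproduce the first equation in (6) in exactly the same way.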