ℹ️ Skipped - page is already crawled
| Filter | Status | Condition | Details |
|---|---|---|---|
| HTTP status | PASS | download_http_code = 200 | HTTP 200 |
| Age cutoff | PASS | download_stamp > now() - 6 MONTH | 0.1 months ago |
| History drop | PASS | isNull(history_drop_reason) | No drop reason |
| Spam/ban | PASS | fh_dont_index != 1 AND ml_spam_score = 0 | ml_spam_score=0 |
| Canonical | PASS | meta_canonical IS NULL OR = '' OR = src_unparsed | Not set |
| Property | Value |
|---|---|
| URL | https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html |
| Last Crawled | 2026-04-12 10:58:57 (2 days ago) |
| First Indexed | 2023-02-20 07:23:32 (3 years ago) |
| HTTP Status Code | 200 |
| Meta Title | Chapter 9 States, Recurrence and Periodicity | Lecture-Notes.knit |
| Meta Description | Lecture notes for module DATA2002: Probability and Time Series. |
| Meta Canonical | null |
| Boilerpipe Text | Suppose we have a Markov chain which is currently in state
\(i\)
. It is natural to ask questions such as the following:
Are there any states that we cannot ever get to?
Once we leave state
\(i\)
are we guaranteed to get back?
If we are certain to return to
\(i\)
how long does this take on average?
If it is possible to return to
\(i\)
, for what values of
\(n\)
is it possible to return in
\(n\)
steps?
To answer these questions, we study properties of a Markov chain, and it particular introduce classes of states.
Communication of States
In this section, we formalise the notion of one state being accessible from another.
A state
\(i\)
is said to
communicate
with a state
\(j\)
if there is a non-zero probability that a Markov chain currently in state
\(i\)
will move to state
\(j\)
in the future. Mathematically
\(p_{ij}^{(n)}>0\)
for some
\(n \geq 0\)
. This is denoted by
\(i \rightarrow j\)
.
That is to say, state
\(i\)
can communicate with state
\(j\)
if it is possible to move from
\(i\)
to
\(j\)
.
Note in
Defintion 9.1.1
that
\(n=0\)
is permitted. It follows that any state
\(i\)
is said communicate with itself:
\(i \rightarrow i\)
necessarily.
Consider the Markov Chain governing the company website of
Example 8.4.2
, that is the Markov chain described by the diagram
Note that if
\(X_t=2\)
for some
\(t\)
, that is the Markov chain is in state 2, then one can move to state 4 for example by the path
\(X_t=2, X_{t+1} = 1, X_{t+2}=4\)
(other routes are available). Therefore
\(2 \rightarrow 4\)
. Similarly
\(1 \rightarrow 4\)
and
\(3 \rightarrow 4\)
.
However since
\(p_{44}=1\)
, or equivalently
\(p_{41}=p_{42}=p_{43}=0\)
, it is impossible for the Markov chain to leave state 4. That is state
\(4\)
cannot communicate with any of the states
\(1,2,3\)
.
States
\(i\)
and
\(j\)
are said to
intercommunicate
if
\(i \rightarrow j\)
and
\(j \rightarrow i\)
. This is denoted by
\(i \leftrightarrow j\)
Considering again the Markov Chain governing the company website of
Example 8.4.2
, seen in
Example 9.1.2
, one can easily observe that
\(1 \leftrightarrow 2, 2 \leftrightarrow 3\)
and
\(1 \leftrightarrow 3\)
.
However since state
\(4\)
does not communicate with states
\(1,2\)
or
\(3\)
, it follows that state
\(4\)
does not intercommunicate with any of the states
\(1,2\)
or
\(3\)
We have introduced the notions of communication and intercommunication, as we anticipate that properties of states will be shared by those that intercommunicate with each other. In this vein, one could group together all states that can intercommunicate to partition the Markov chain into communicating classes.
We introduce two notions that capture collections of states that have strong properties regarding communication together.
A set
\(C\)
of states is called
irreducible
if
\(i \leftrightarrow j\)
, for all
\(i,j \in C\)
.
A Markov chain is said to be irreducible itself, if the set of all states is irreducible.
A set
\(C\)
of states is said to be
closed
if for any
\(i \in C\)
and
\(j \notin C\)
, then
\(p_{ij}=0\)
.
Once a Markov chain reaches a closed state
\(C\)
, it will subsequently never leave
\(C\)
.
Consider the Markov chain represented by the following:
Show that the set
\(\{1,2\}\)
is both irreducible and closed.
From the Markov chain diagram, one can read the transition matrix for the Markov chain as
\[P = \begin{pmatrix}
\frac{1}{2} & \frac{1}{2} & 0 & 0 & 0 & 0 \\
\frac{1}{4} & \frac{3}{4} & 0 & 0 & 0 & 0 \\
\frac{1}{4} & \frac{1}{4} & \frac{1}{4} & \frac{1}{4} & 0 & 0 \\
\frac{1}{4} & 0 & \frac{1}{4} & \frac{1}{4} & 0 & \frac{1}{4} \\
0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2} \\
0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2}
\end{pmatrix}.\]
Note that
\(p_{12} = \frac{1}{2}>0\)
so there is a path from state
\(1\)
to state
\(2\)
, and
\(p_{21} = \frac{1}{4}>0\)
so there is a path from state
\(2\)
to state
\(1\)
. Therefore
\(1 \leftrightarrow 2\)
, that is
\(\{1,2\}\)
is irreducible.
Also note
\(p_{13}=p_{14}=p_{15}=p_{16}=p_{23}=p_{24}=p_{25}=p_{26}=0\)
, so by
Definition 9.1.6
the set
\(\{1,2\}\)
is closed.
Therefore
\(\{1,2\}\)
is a closed and irreducible set.
Are there any other irreducible and closed sets in the Markov chain of
Example 9.1.7
? What about sets that are only irreducible, and sets that are only closed?
The terminology introduced in both
Defintion 9.1.5
and
Definition 9.1.6
freely for the remainder of the course.
If a closed set
\(C\)
of states contains only one state
\(i\)
, that is
\(p_{ii} = 1\)
and
\(p_{ij}=0\)
for all
\(j \neq i\)
, we call
\(i\)
an
absorbing state
.
Find an absorbing state among the Markov chains we have seen in previous examples.
Recurrence
A state of a Markov chain is called
recurrent
if
\[P[X_n = i \text{ for some } n\geq 1 \mid X_0 = i] = 1.\]
The essence of this definition is that a Markov chain that is currently in some recurrent state is certain to return to that state again in the future.
A state of a Markov chain that is not recurrent is called
transient
.
A Markov chain that is currently in some transient state is not certain to return to that state again in the future.
Consider the Markov Chain governing Mary Berrys’ choice of Nottingham coffee shop of
Example 8.1.3
, that is the Markov chain described by the diagram
Show that Latte Da is a recurrent state.
Suppose that the Mary Berry visits Latte Da on her
\(t^{th}\)
trip to Nottingham. Mathematically in terms of the Markov chain:
\(X_t = \text{Latte Da}\)
. First we calculate the probability that Mary Berry doesn’t visit Latte Da on her next
\(N\)
visits to Nottingham.
\[\begin{align*}
&P (\text{Mary Berry doesn't visit Latte Da on next $N$ visits} \mid X_t = \text{Latte Da}) \\
=& P ( X_{t+N} = X_{t+N-1} = \cdots = X_{t+1} = \text{Deja Brew} \mid X_t = \text{Latte Da}) \\
=& P ( X_{t+N} = \text{D.B.} \mid X_{t+N-1} = \text{D.B.}) \times P ( X_{t+N-1} = \text{D.B.} \mid X_{t+N-2} = \text{D.B.}) \times \ldots \\
& \qquad \qquad \ldots \times P ( X_{t+2} = \text{D.B.} \mid X_{t+1} = \text{D.B.}) \times P ( X_{t+1} = \text{D.B.} \mid X_{t} = \text{L.D.}) \\
=& P ( X_{1} = \text{D.B.} \mid X_{0} = \text{D.B.}) \times P ( X_{1} = \text{D.B.} \mid X_{0} = \text{D.B.}) \times \ldots \\
& \qquad \qquad \ldots \times P ( X_{1} = \text{D.B.} \mid X_{0} = \text{D.B.}) \times P ( X_{1} = \text{D.B.} \mid X_{0} = \text{L.D.}) \\
&= \frac{5}{6} \times \frac{2}{3} \times \ldots \frac{2}{3} \\
&= \frac{5}{6} \left( \frac{2}{3} \right)^{N-1}
\end{align*}\]
Note that as
\(N\)
gets increasingly big, that is as the number of visits Mary Berry subsequently takes to Nottingham grows, the probability calculated above tends to
\(0\)
. Therefore the probability of Mary Berry only ever using Deja Brew is
\(0\)
, or in other words, Mary Berry is certain to return to Latte Da at some point. Therefore Latte Da is a recurrent state.
Consider again the Markov Chain governing the company website of
Example 8.4.2
, seen in
Example 9.1.2
. Show that state 1 is a transient state.
Suppose the user is on the
Home Page
of the website, that is
\(X_t=1\)
for some
\(t\)
. The user could click on the link to the
Staff Page
, that is, state 4 of the Markov chain. At this point it is impossible for the user to return to the
Home Page
, or state 1. That is to say, it is not certain that the Markov chain will ever return to state 1. Therefore state 1 is transiant.
If
\(i \leftrightarrow j\)
, then state
\(i\)
is recurrent if and only if state
\(j\)
is recurrent.
It follows from
Example 9.2.3
and
Lemma 9.2.5
that the state Deja Brew in the Mary Berry coffee shop example is also recurrent.
Indeed
Lemma 9.2.5
indicates that it makes sense to label communicating classes as either recurrent or transiant: if one state in a communicating class is recurrent/transient then all the states in that class must be recurrent/transient respectively. This leads to the following definition.
An irreducible Markov chain is said to be
recurrent
if it contains at least one recurrent state.
An irreducible Markov chain being recurrent as per
Definition 9.2.6
is equivalent to every state of the Markov chain being recurrent.
Mean Recurrence Times
The
mean recurrence time
of a state
\(i\)
, denoted
\(\mu_i\)
, is given by
\[\mu_i = \begin{cases}
\sum\limits_{n \geq 1} n f_{ii}^{(n)},& \text{if state } i\text{ recurrent,} \\
\infty,& \text{if state } i \text{ transiant.} \\
\end{cases}\]
Suppose
\(i\)
is a recurrent state. It follows from
Definition 9.3.1
, that the mean recurrence time is the average time that it takes for the Markov chain currently in state
\(i\)
to return to
\(i\)
. This can be seen by noting that the summation is over all possibilities for how long it could take the Markov chain to return as required, and that
\(f_{ii}^{(n)}\)
is the probability that the Markov chain moves from state
\(i\)
to state
\(i\)
in exactly
\(n\)
steps.
Calculate the mean recurrence time
\(\mu_1\)
for the Markov Chain governing the company website of
Example 8.4.2
.
Substituting in the known value of
\(f_{11}^{(1)} = 0\)
and
\(f_{11}^{(n)} = \frac{1}{3} \cdot \frac{1}{2^{n-2}}\)
where
\(n \geq 2\)
from
Example 8.4.4
into
Definition 9.3.1
obtain
\[\mu_1 = \sum\limits_{n \geq 1} n f_{11}^{(n)} = f_{11}^{(1)} + \sum\limits_{n=2}^{\infty} n f_{11}^{(2)} = 0 + \sum\limits_{n=2}^{\infty} n \cdot \frac{1}{3} \cdot \frac{1}{2^{n-2}} = \frac{1}{3} \sum\limits_{n=2}^{\infty} \frac{n}{2^{n-2}}.\]
Using computer code calculate
\(\sum\limits_{n=2}^{\infty} \frac{n}{2^{n-2}} = 6\)
and so
\[\mu_1 = \frac{1}{3} \cdot 6 = 2.\]
Note that even if state
\(i\)
is recurrent, the mean recurrence time may still be
\(\infty\)
.
Consider a recurrent state
\(i\)
. The state
\(i\)
is said to be
positive recurrent
if
\(\mu_i < \infty\)
, or
null recurrent
if
\(\mu_i = \infty\)
.
It follows from
Example 9.3.2
that state
\(1\)
in the Markov chain governing the company website is positive recurrent since
\(\mu_1 =2 <\infty\)
.
If
\(i \leftrightarrow j\)
, then
\(i\)
is positive recurrent if and only if
\(j\)
is positive recurrent.
This lemma provides us with a shortcut to show positive recurrence of a large number of states.
Combining
Example 9.1.4
,
Example 9.3.4
and
Lemma 9.3.5
shows that states
\(2\)
and state
\(3\)
are also positive recurrent in the running company website Markov chain example.
An irreducible Markov chain is said to be
positive-recurrent
if it contains at least one positive-recurrent state.
An irreducible Markov chain being positive-recurrent as per
Definition 9.3.7
is equivalent to every state of the Markov chain being positive-recurrent.
Note that
Lemma 9.3.5
does not tell us anything about the mean recurrent time of intercommunicating recurrent states beyond finiteness. Namely knowing that
\(\mu_1 = 2\)
in
Example 9.3.2
does not provide any new information about
\(\mu_2\)
and
\(\mu_3\)
, beyond what we would have known given
\(\mu_1 < \infty\)
.
If a Markov chain has a finite number of states, then all recurrent states are positive.
An irreducible Markov chain with a finite number of states is positive-recurrant.
Periodicity
Consider a Markov chain in some state
\(i\)
. Consider the values of
\(n\)
for which
\(p_{ii}^{(n)}>0\)
, that is, the positive integers
\(n\)
for which it is possible for the Markov chain to return to
\(i\)
in
\(n\)
steps. Throughout this section we denote this collection of values by
\(\{ a_1, a_2, a_3, \ldots \}\)
.
The
period
of state
\(i\)
is given by
\[d_i = \gcd (a_1,a_2,a_3, \ldots )\]
Recall the scenario of
Week 6 Questions, Questions 1 to 8
:
Every year Ria chooses exactly two apprentices to compete in the fictitious competition
Nottingham’s Got Mathematicians
. Apprentices are recommended to Ria by Daniel and Lisa. Initially Daniel and Lisa recommend one candidate each. However if Daniel selects the apprentice who finishes second among Ria’s nominees, then this opportunity for recommendation is given to Lisa the following year. Similarly if Lisa chooses the candidate who who finishes second among Ria’s nominees, then this opportunity for recommendation is given to Daniel the following year. This rule is repeated every year, even if Daniel or Lisa choose both the candidates.
Generally Lisa is better at picking competitors: a Lisa endorsed candidate beats a Daniel endorsed candidate
\(75 \%\)
of the time.
This scenario can be modelled by the Markov chain:
Calculate
\(d_1\)
, the period of state
\(1\)
.
Consider all the possible paths that start and end in state
\(1\)
:
\[\begin{align*}
1 &\rightarrow 0 \rightarrow 1 \\
1 &\rightarrow 2 \rightarrow 1 \\
1 &\rightarrow 0 \rightarrow 1 \rightarrow 0 \rightarrow 1 \\
1 &\rightarrow 0 \rightarrow 1 \rightarrow 2 \rightarrow 1 \\
1 &\rightarrow 2 \rightarrow 1 \rightarrow 0 \rightarrow 1 \\
1 &\rightarrow 2 \rightarrow 1 \rightarrow 2 \rightarrow 1 \\
1 &\rightarrow 0 \rightarrow 1 \rightarrow 0 \rightarrow 1 \rightarrow 0 \rightarrow 1 \\
&\vdots
\end{align*}\]
The lengths of these paths respectively are
\(2,2,4,4,4,4,6, \ldots\)
. Calculate
\[d_1 = \gcd ( 2,2,4,4,4,4,6, \ldots) =2.\]
This definition goes a long way towards answering the question “If it is possible to return to
\(i\)
, for what values of
\(n\)
is it possible to return in
\(n\)
steps?” identified at the opening of the chapter. Namely for a given value
\(n\)
, it is possible to return from to state
\(i\)
to state
\(i\)
if and only if
\(d_i\)
divides
\(n\)
exactly.
If
\(i \leftrightarrow j\)
, then
\(i\)
and
\(j\)
have the same period:
\[d_i = d_j.\]
Consider the Markov chain of
Example 9.4.2
. Calculate
\(d_0\)
and
\(d_2\)
, the periods of states
\(0\)
and
\(2\)
.
Clearly states
\(0,1,2\)
intercommunicate, that is,
\(0 \leftrightarrow 1\)
and
\(1 \leftrightarrow 2\)
. From
Example 9.4.2
, we know
\(d_1 = 2\)
and so by
Lemma 9.4.3
it follows that
\(d_0=d_2=2\)
.
A state is said to be
aperiodic
if
\(d_i=1\)
.
A Markov chain is
aperiodic
if all of its states are aperiodic.
Since trivially
\(1\)
divides all positive integers
\(n\)
, this definition is equivalent to saying that it is possible to return to state
\(i\)
in any number of steps. Aperiodicity is often a feature of the most mathematically interesting Markov chains, and will be a key assumption when it comes to talking about steady states.
An absorbing state is aperiodic. |
| Markdown | - [DATA2002: Probability and Time Series](https://bookdown.org/danielcavey27/lecture_notes/)
- [**1** Consolidation of DATA1004](https://bookdown.org/danielcavey27/lecture_notes/consolidation-of-data1004.html)
- [**1\.1** Random Variables](https://bookdown.org/danielcavey27/lecture_notes/consolidation-of-data1004.html#random-variables)
- [**1\.2** Discrete Distributions](https://bookdown.org/danielcavey27/lecture_notes/consolidation-of-data1004.html#discrete-distributions)
- [**1\.3** Continuous Distributions](https://bookdown.org/danielcavey27/lecture_notes/consolidation-of-data1004.html#S1.3)
- [**1\.4** Central Limit Theorem](https://bookdown.org/danielcavey27/lecture_notes/consolidation-of-data1004.html#S1.4)
- [**2** Joint Probability Distributions](https://bookdown.org/danielcavey27/lecture_notes/joint-probability-distributions.html)
- [**2\.1** Joint probability density funtions](https://bookdown.org/danielcavey27/lecture_notes/joint-probability-distributions.html#joint-probability-density-funtions)
- [**2\.2** Joint cumulative distribution functions](https://bookdown.org/danielcavey27/lecture_notes/joint-probability-distributions.html#joint-cumulative-distribution-functions)
- [**2\.3** Independence](https://bookdown.org/danielcavey27/lecture_notes/joint-probability-distributions.html#independence)
- [**2\.4** Three or more random variables](https://bookdown.org/danielcavey27/lecture_notes/joint-probability-distributions.html#S2.4)
- [**2\.5** Expectation](https://bookdown.org/danielcavey27/lecture_notes/joint-probability-distributions.html#expectation)
- [**2\.6** Covariance](https://bookdown.org/danielcavey27/lecture_notes/joint-probability-distributions.html#covariance)
- [**2\.7** Correlation](https://bookdown.org/danielcavey27/lecture_notes/joint-probability-distributions.html#S3.3)
- [**3** Conditional Distributions](https://bookdown.org/danielcavey27/lecture_notes/S3.html)
- [**3\.1** Conditional Probabilities and Discrete Conditional Distributions](https://bookdown.org/danielcavey27/lecture_notes/S3.html#conditional-probabilities-and-discrete-conditional-distributions)
- [**3\.2** Continuous Conditional Distributions](https://bookdown.org/danielcavey27/lecture_notes/S3.html#continuous-conditional-distributions)
- [**4** Independence and Conditional Statistics](https://bookdown.org/danielcavey27/lecture_notes/S4.html)
- [**4\.1** Independence](https://bookdown.org/danielcavey27/lecture_notes/S4.html#independence-1)
- [**4\.2** Conditional Expectation](https://bookdown.org/danielcavey27/lecture_notes/S4.html#conditional-expectation)
- [**4\.3** Conditional Variance](https://bookdown.org/danielcavey27/lecture_notes/S4.html#conditional-variance)
- [**5** Transforms](https://bookdown.org/danielcavey27/lecture_notes/transforms.html)
- [**5\.1** One-dimensional Transformations](https://bookdown.org/danielcavey27/lecture_notes/transforms.html#S5.1)
- [**5\.2** Two-dimensional Transformations](https://bookdown.org/danielcavey27/lecture_notes/transforms.html#S5.2)
- [**5\.3** Maximum and Minimums](https://bookdown.org/danielcavey27/lecture_notes/transforms.html#maximum-and-minimums)
- [**6** Multivariate Normal Distributions](https://bookdown.org/danielcavey27/lecture_notes/multivariate-normal-distributions.html)
- [**6\.1** Matrices: symmetry, eigenvalues and postive definiteness](https://bookdown.org/danielcavey27/lecture_notes/multivariate-normal-distributions.html#matrices-symmetry-eigenvalues-and-postive-definiteness)
- [**6\.2** Definition of Multivariate Normal Distribution](https://bookdown.org/danielcavey27/lecture_notes/multivariate-normal-distributions.html#definition-of-multivariate-normal-distribution)
- [**6\.3** Two-Dimensional Normal Distribution](https://bookdown.org/danielcavey27/lecture_notes/multivariate-normal-distributions.html#two-dimensional-normal-distribution)
- [**6\.4** Properties of Multivariate Normal Distribution](https://bookdown.org/danielcavey27/lecture_notes/multivariate-normal-distributions.html#properties-of-multivariate-normal-distribution)
- [**7** Time Series](https://bookdown.org/danielcavey27/lecture_notes/time-series.html)
- [**7\.1** Introduction to Time Series](https://bookdown.org/danielcavey27/lecture_notes/time-series.html#introduction-to-time-series)
- [**7\.2** Stationarity](https://bookdown.org/danielcavey27/lecture_notes/time-series.html#stationarity)
- [**7\.3** Moving Average, General Linear and Autoregressive Processes](https://bookdown.org/danielcavey27/lecture_notes/time-series.html#moving-average-general-linear-and-autoregressive-processes)
- [**7\.4** Fitting](https://bookdown.org/danielcavey27/lecture_notes/time-series.html#Fitting_Sec)
- [**7\.5** Forecasting](https://bookdown.org/danielcavey27/lecture_notes/time-series.html#forecasting)
- [**8** Stochastic Process and Markov Chains](https://bookdown.org/danielcavey27/lecture_notes/stochastic-process-and-markov-chains.html)
- [**8\.1** Strochastic Processes](https://bookdown.org/danielcavey27/lecture_notes/stochastic-process-and-markov-chains.html#strochastic-processes)
- [**8\.2** Defintion of Markov Chains](https://bookdown.org/danielcavey27/lecture_notes/stochastic-process-and-markov-chains.html#defintion-of-markov-chains)
- [**8\.3** Transition Matrix](https://bookdown.org/danielcavey27/lecture_notes/stochastic-process-and-markov-chains.html#transition-matrix)
- [**8\.4** Hitting Probabilities and Hitting Times](https://bookdown.org/danielcavey27/lecture_notes/stochastic-process-and-markov-chains.html#hitting-probabilities-and-hitting-times)
- [**9** States, Recurrence and Periodicity](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html)
- [**9\.1** Communication of States](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#communication-of-states)
- [**9\.2** Recurrence](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#recurrence)
- [**9\.3** Mean Recurrence Times](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#mean-recurrence-times)
- [**9\.4** Periodicity](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#periodicity)
- [**10** Equilibrium Distributions](https://bookdown.org/danielcavey27/lecture_notes/equilibrium-distributions.html)
- [**10\.1** Definition of Equilibrium Distributions](https://bookdown.org/danielcavey27/lecture_notes/equilibrium-distributions.html#definition-of-equilibrium-distributions)
- [**10\.2** Calculating Equilibrium Disrtributions](https://bookdown.org/danielcavey27/lecture_notes/equilibrium-distributions.html#calculating-equilibrium-disrtributions)
- [**10\.3** Limiting Behaviour](https://bookdown.org/danielcavey27/lecture_notes/equilibrium-distributions.html#limiting-behaviour)
#
# Chapter 9 States, Recurrence and Periodicity
Suppose we have a Markov chain which is currently in state \\(i\\). It is natural to ask questions such as the following:
- Are there any states that we cannot ever get to?
- Once we leave state \\(i\\) are we guaranteed to get back?
- If we are certain to return to \\(i\\) how long does this take on average?
- If it is possible to return to \\(i\\), for what values of \\(n\\) is it possible to return in \\(n\\) steps?
To answer these questions, we study properties of a Markov chain, and it particular introduce classes of states.
## 9\.1 Communication of States
In this section, we formalise the notion of one state being accessible from another.
A state \\(i\\) is said to *communicate* with a state \\(j\\) if there is a non-zero probability that a Markov chain currently in state \\(i\\) will move to state \\(j\\) in the future. Mathematically \\(p\_{ij}^{(n)}\>0\\) for some \\(n \\geq 0\\). This is denoted by \\(i \\rightarrow j\\).
That is to say, state \\(i\\) can communicate with state \\(j\\) if it is possible to move from \\(i\\) to \\(j\\).
Note in [Defintion 9.1.1](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#CommunicatingStatesDef) that \\(n=0\\) is permitted. It follows that any state \\(i\\) is said communicate with itself: \\(i \\rightarrow i\\) necessarily.
Consider the Markov Chain governing the company website of [Example 8.4.2](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#Hitting_Probability_ex), that is the Markov chain described by the diagram

Note that if \\(X\_t=2\\) for some \\(t\\), that is the Markov chain is in state 2, then one can move to state 4 for example by the path \\(X\_t=2, X\_{t+1} = 1, X\_{t+2}=4\\) (other routes are available). Therefore \\(2 \\rightarrow 4\\). Similarly \\(1 \\rightarrow 4\\) and \\(3 \\rightarrow 4\\).
However since \\(p\_{44}=1\\), or equivalently \\(p\_{41}=p\_{42}=p\_{43}=0\\), it is impossible for the Markov chain to leave state 4. That is state \\(4\\) cannot communicate with any of the states \\(1,2,3\\).
States \\(i\\) and \\(j\\) are said to *intercommunicate* if \\(i \\rightarrow j\\) and \\(j \\rightarrow i\\). This is denoted by \\(i \\leftrightarrow j\\)
Considering again the Markov Chain governing the company website of [Example 8.4.2](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#Hitting_Probability_ex), seen in [Example 9.1.2](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#CommunicatingStatesEx), one can easily observe that \\(1 \\leftrightarrow 2, 2 \\leftrightarrow 3\\) and \\(1 \\leftrightarrow 3\\).
However since state \\(4\\) does not communicate with states \\(1,2\\) or \\(3\\), it follows that state \\(4\\) does not intercommunicate with any of the states \\(1,2\\) or \\(3\\)
We have introduced the notions of communication and intercommunication, as we anticipate that properties of states will be shared by those that intercommunicate with each other. In this vein, one could group together all states that can intercommunicate to partition the Markov chain into communicating classes.
We introduce two notions that capture collections of states that have strong properties regarding communication together.
A set \\(C\\) of states is called *irreducible* if \\(i \\leftrightarrow j\\), for all \\(i,j \\in C\\).
A Markov chain is said to be irreducible itself, if the set of all states is irreducible.
A set \\(C\\) of states is said to be *closed* if for any \\(i \\in C\\) and \\(j \\notin C\\), then \\(p\_{ij}=0\\).
Once a Markov chain reaches a closed state \\(C\\), it will subsequently never leave \\(C\\).
Consider the Markov chain represented by the following:

Show that the set \\(\\{1,2\\}\\) is both irreducible and closed.
From the Markov chain diagram, one can read the transition matrix for the Markov chain as \\\[P = \\begin{pmatrix} \\frac{1}{2} & \\frac{1}{2} & 0 & 0 & 0 & 0 \\\\ \\frac{1}{4} & \\frac{3}{4} & 0 & 0 & 0 & 0 \\\\ \\frac{1}{4} & \\frac{1}{4} & \\frac{1}{4} & \\frac{1}{4} & 0 & 0 \\\\ \\frac{1}{4} & 0 & \\frac{1}{4} & \\frac{1}{4} & 0 & \\frac{1}{4} \\\\ 0 & 0 & 0 & 0 & \\frac{1}{2} & \\frac{1}{2} \\\\ 0 & 0 & 0 & 0 & \\frac{1}{2} & \\frac{1}{2} \\end{pmatrix}.\\\]
Note that \\(p\_{12} = \\frac{1}{2}\>0\\) so there is a path from state \\(1\\) to state \\(2\\), and \\(p\_{21} = \\frac{1}{4}\>0\\) so there is a path from state \\(2\\) to state \\(1\\). Therefore \\(1 \\leftrightarrow 2\\), that is \\(\\{1,2\\}\\) is irreducible.
Also note \\(p\_{13}=p\_{14}=p\_{15}=p\_{16}=p\_{23}=p\_{24}=p\_{25}=p\_{26}=0\\), so by [Definition 9.1.6](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#ClosedDef) the set \\(\\{1,2\\}\\) is closed.
Therefore \\(\\{1,2\\}\\) is a closed and irreducible set.
Are there any other irreducible and closed sets in the Markov chain of [Example 9.1.7](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#ClosedIrreducibleSetsEx)? What about sets that are only irreducible, and sets that are only closed?
The terminology introduced in both [Defintion 9.1.5](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#IrreducibleDef) and [Definition 9.1.6](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#ClosedDef) freely for the remainder of the course.
If a closed set \\(C\\) of states contains only one state \\(i\\), that is \\(p\_{ii} = 1\\) and \\(p\_{ij}=0\\) for all \\(j \\neq i\\), we call \\(i\\) an *absorbing state*.
Find an absorbing state among the Markov chains we have seen in previous examples.
## 9\.2 Recurrence
A state of a Markov chain is called *recurrent* if \\\[P\[X\_n = i \\text{ for some } n\\geq 1 \\mid X\_0 = i\] = 1.\\\]
The essence of this definition is that a Markov chain that is currently in some recurrent state is certain to return to that state again in the future.
A state of a Markov chain that is not recurrent is called *transient*.
A Markov chain that is currently in some transient state is not certain to return to that state again in the future.
Consider the Markov Chain governing Mary Berrys’ choice of Nottingham coffee shop of [Example 8.1.3](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#Hitting_Probability_ex2), that is the Markov chain described by the diagram

Show that Latte Da is a recurrent state.
Suppose that the Mary Berry visits Latte Da on her \\(t^{th}\\) trip to Nottingham. Mathematically in terms of the Markov chain: \\(X\_t = \\text{Latte Da}\\). First we calculate the probability that Mary Berry doesn’t visit Latte Da on her next \\(N\\) visits to Nottingham. \\\[\\begin{align\*} \&P (\\text{Mary Berry doesn't visit Latte Da on next \$N\$ visits} \\mid X\_t = \\text{Latte Da}) \\\\ =& P ( X\_{t+N} = X\_{t+N-1} = \\cdots = X\_{t+1} = \\text{Deja Brew} \\mid X\_t = \\text{Latte Da}) \\\\ =& P ( X\_{t+N} = \\text{D.B.} \\mid X\_{t+N-1} = \\text{D.B.}) \\times P ( X\_{t+N-1} = \\text{D.B.} \\mid X\_{t+N-2} = \\text{D.B.}) \\times \\ldots \\\\ & \\qquad \\qquad \\ldots \\times P ( X\_{t+2} = \\text{D.B.} \\mid X\_{t+1} = \\text{D.B.}) \\times P ( X\_{t+1} = \\text{D.B.} \\mid X\_{t} = \\text{L.D.}) \\\\ =& P ( X\_{1} = \\text{D.B.} \\mid X\_{0} = \\text{D.B.}) \\times P ( X\_{1} = \\text{D.B.} \\mid X\_{0} = \\text{D.B.}) \\times \\ldots \\\\ & \\qquad \\qquad \\ldots \\times P ( X\_{1} = \\text{D.B.} \\mid X\_{0} = \\text{D.B.}) \\times P ( X\_{1} = \\text{D.B.} \\mid X\_{0} = \\text{L.D.}) \\\\ &= \\frac{5}{6} \\times \\frac{2}{3} \\times \\ldots \\frac{2}{3} \\\\ &= \\frac{5}{6} \\left( \\frac{2}{3} \\right)^{N-1} \\end{align\*}\\\]
Note that as \\(N\\) gets increasingly big, that is as the number of visits Mary Berry subsequently takes to Nottingham grows, the probability calculated above tends to \\(0\\). Therefore the probability of Mary Berry only ever using Deja Brew is \\(0\\), or in other words, Mary Berry is certain to return to Latte Da at some point. Therefore Latte Da is a recurrent state.
Consider again the Markov Chain governing the company website of [Example 8.4.2](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#Hitting_Probability_ex), seen in [Example 9.1.2](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#CommunicatingStatesEx). Show that state 1 is a transient state.
Suppose the user is on the *Home Page* of the website, that is \\(X\_t=1\\) for some \\(t\\). The user could click on the link to the *Staff Page*, that is, state 4 of the Markov chain. At this point it is impossible for the user to return to the *Home Page*, or state 1. That is to say, it is not certain that the Markov chain will ever return to state 1. Therefore state 1 is transiant.
If \\(i \\leftrightarrow j\\), then state \\(i\\) is recurrent if and only if state \\(j\\) is recurrent.
It follows from [Example 9.2.3](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#Recurrent_state_example) and [Lemma 9.2.5](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#Communicating_recurrent_relationship) that the state Deja Brew in the Mary Berry coffee shop example is also recurrent.
Indeed [Lemma 9.2.5](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#Communicating_recurrent_relationship) indicates that it makes sense to label communicating classes as either recurrent or transiant: if one state in a communicating class is recurrent/transient then all the states in that class must be recurrent/transient respectively. This leads to the following definition.
An irreducible Markov chain is said to be *recurrent* if it contains at least one recurrent state.
An irreducible Markov chain being recurrent as per [Definition 9.2.6](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#Recurrent_Markov_chain_def) is equivalent to every state of the Markov chain being recurrent.
## 9\.3 Mean Recurrence Times
The *mean recurrence time* of a state \\(i\\), denoted \\(\\mu\_i\\), is given by \\\[\\mu\_i = \\begin{cases} \\sum\\limits\_{n \\geq 1} n f\_{ii}^{(n)},& \\text{if state } i\\text{ recurrent,} \\\\ \\infty,& \\text{if state } i \\text{ transiant.} \\\\ \\end{cases}\\\]
Suppose \\(i\\) is a recurrent state. It follows from [Definition 9.3.1](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#Mean_Recurrence_Time_def), that the mean recurrence time is the average time that it takes for the Markov chain currently in state \\(i\\) to return to \\(i\\). This can be seen by noting that the summation is over all possibilities for how long it could take the Markov chain to return as required, and that \\(f\_{ii}^{(n)}\\) is the probability that the Markov chain moves from state \\(i\\) to state \\(i\\) in exactly \\(n\\) steps.
Calculate the mean recurrence time \\(\\mu\_1\\) for the Markov Chain governing the company website of [Example 8.4.2](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#Hitting_Probability_ex).
Substituting in the known value of \\(f\_{11}^{(1)} = 0\\) and \\(f\_{11}^{(n)} = \\frac{1}{3} \\cdot \\frac{1}{2^{n-2}}\\) where \\(n \\geq 2\\) from [Example 8.4.4](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#n_step_hitting_prob_ex) into [Definition 9.3.1](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#Mean_Recurrence_Time_def) obtain \\\[\\mu\_1 = \\sum\\limits\_{n \\geq 1} n f\_{11}^{(n)} = f\_{11}^{(1)} + \\sum\\limits\_{n=2}^{\\infty} n f\_{11}^{(2)} = 0 + \\sum\\limits\_{n=2}^{\\infty} n \\cdot \\frac{1}{3} \\cdot \\frac{1}{2^{n-2}} = \\frac{1}{3} \\sum\\limits\_{n=2}^{\\infty} \\frac{n}{2^{n-2}}.\\\] Using computer code calculate \\(\\sum\\limits\_{n=2}^{\\infty} \\frac{n}{2^{n-2}} = 6\\) and so \\\[\\mu\_1 = \\frac{1}{3} \\cdot 6 = 2.\\\]
Note that even if state \\(i\\) is recurrent, the mean recurrence time may still be \\(\\infty\\).
Consider a recurrent state \\(i\\). The state \\(i\\) is said to be *positive recurrent* if \\(\\mu\_i \< \\infty\\), or *null recurrent* if \\(\\mu\_i = \\infty\\).
It follows from [Example 9.3.2](https://bookdown.org/danielcavey27/lecture_notes/Mean_Recurrence_Time_ex) that state \\(1\\) in the Markov chain governing the company website is positive recurrent since \\(\\mu\_1 =2 \<\\infty\\).
If \\(i \\leftrightarrow j\\), then \\(i\\) is positive recurrent if and only if \\(j\\) is positive recurrent.
This lemma provides us with a shortcut to show positive recurrence of a large number of states.
Combining [Example 9.1.4](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#Intercommunicating_states_ex), [Example 9.3.4](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#positive_recurrent_ex) and [Lemma 9.3.5](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#Communicating_postive_recurrent_relationship) shows that states \\(2\\) and state \\(3\\) are also positive recurrent in the running company website Markov chain example.
An irreducible Markov chain is said to be *positive-recurrent* if it contains at least one positive-recurrent state.
An irreducible Markov chain being positive-recurrent as per [Definition 9.3.7](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#Positive_Recurrent_Markov_chain_def) is equivalent to every state of the Markov chain being positive-recurrent.
Note that [Lemma 9.3.5](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#Communicating_postive_recurrent_relationship) does not tell us anything about the mean recurrent time of intercommunicating recurrent states beyond finiteness. Namely knowing that \\(\\mu\_1 = 2\\) in [Example 9.3.2](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#Mean_Recurrence_Time_ex) does not provide any new information about \\(\\mu\_2\\) and \\(\\mu\_3\\), beyond what we would have known given \\(\\mu\_1 \< \\infty\\).
If a Markov chain has a finite number of states, then all recurrent states are positive.
An irreducible Markov chain with a finite number of states is positive-recurrant.
## 9\.4 Periodicity
Consider a Markov chain in some state \\(i\\). Consider the values of \\(n\\) for which \\(p\_{ii}^{(n)}\>0\\), that is, the positive integers \\(n\\) for which it is possible for the Markov chain to return to \\(i\\) in \\(n\\) steps. Throughout this section we denote this collection of values by \\(\\{ a\_1, a\_2, a\_3, \\ldots \\}\\).
The *period* of state \\(i\\) is given by \\\[d\_i = \\gcd (a\_1,a\_2,a\_3, \\ldots )\\\]
Recall the scenario of *Week 6 Questions, Questions 1 to 8*:
Every year Ria chooses exactly two apprentices to compete in the fictitious competition *Nottingham’s Got Mathematicians*. Apprentices are recommended to Ria by Daniel and Lisa. Initially Daniel and Lisa recommend one candidate each. However if Daniel selects the apprentice who finishes second among Ria’s nominees, then this opportunity for recommendation is given to Lisa the following year. Similarly if Lisa chooses the candidate who who finishes second among Ria’s nominees, then this opportunity for recommendation is given to Daniel the following year. This rule is repeated every year, even if Daniel or Lisa choose both the candidates.
Generally Lisa is better at picking competitors: a Lisa endorsed candidate beats a Daniel endorsed candidate \\(75 \\%\\) of the time.
This scenario can be modelled by the Markov chain:

Calculate \\(d\_1\\), the period of state \\(1\\).
Consider all the possible paths that start and end in state \\(1\\): \\\[\\begin{align\*} 1 &\\rightarrow 0 \\rightarrow 1 \\\\ 1 &\\rightarrow 2 \\rightarrow 1 \\\\ 1 &\\rightarrow 0 \\rightarrow 1 \\rightarrow 0 \\rightarrow 1 \\\\ 1 &\\rightarrow 0 \\rightarrow 1 \\rightarrow 2 \\rightarrow 1 \\\\ 1 &\\rightarrow 2 \\rightarrow 1 \\rightarrow 0 \\rightarrow 1 \\\\ 1 &\\rightarrow 2 \\rightarrow 1 \\rightarrow 2 \\rightarrow 1 \\\\ 1 &\\rightarrow 0 \\rightarrow 1 \\rightarrow 0 \\rightarrow 1 \\rightarrow 0 \\rightarrow 1 \\\\ &\\vdots \\end{align\*}\\\] The lengths of these paths respectively are \\(2,2,4,4,4,4,6, \\ldots\\). Calculate \\\[d\_1 = \\gcd ( 2,2,4,4,4,4,6, \\ldots) =2.\\\]
This definition goes a long way towards answering the question “If it is possible to return to \\(i\\), for what values of \\(n\\) is it possible to return in \\(n\\) steps?” identified at the opening of the chapter. Namely for a given value \\(n\\), it is possible to return from to state \\(i\\) to state \\(i\\) if and only if \\(d\_i\\) divides \\(n\\) exactly.
If \\(i \\leftrightarrow j\\), then \\(i\\) and \\(j\\) have the same period: \\\[d\_i = d\_j.\\\]
Consider the Markov chain of [Example 9.4.2](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#PeriodofaStateEx). Calculate \\(d\_0\\) and \\(d\_2\\), the periods of states \\(0\\) and \\(2\\).
Clearly states \\(0,1,2\\) intercommunicate, that is, \\(0 \\leftrightarrow 1\\) and \\(1 \\leftrightarrow 2\\). From [Example 9.4.2](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#PeriodofaStateEx), we know \\(d\_1 = 2\\) and so by [Lemma 9.4.3](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#IntercommunicatingStatesSamePeriod) it follows that \\(d\_0=d\_2=2\\).
A state is said to be *aperiodic* if \\(d\_i=1\\).
A Markov chain is *aperiodic* if all of its states are aperiodic.
Since trivially \\(1\\) divides all positive integers \\(n\\), this definition is equivalent to saying that it is possible to return to state \\(i\\) in any number of steps. Aperiodicity is often a feature of the most mathematically interesting Markov chains, and will be a key assumption when it comes to talking about steady states.
An absorbing state is aperiodic. |
| Readable Markdown | Suppose we have a Markov chain which is currently in state \\(i\\). It is natural to ask questions such as the following:
- Are there any states that we cannot ever get to?
- Once we leave state \\(i\\) are we guaranteed to get back?
- If we are certain to return to \\(i\\) how long does this take on average?
- If it is possible to return to \\(i\\), for what values of \\(n\\) is it possible to return in \\(n\\) steps?
To answer these questions, we study properties of a Markov chain, and it particular introduce classes of states.
## Communication of States
In this section, we formalise the notion of one state being accessible from another.
A state \\(i\\) is said to *communicate* with a state \\(j\\) if there is a non-zero probability that a Markov chain currently in state \\(i\\) will move to state \\(j\\) in the future. Mathematically \\(p\_{ij}^{(n)}\>0\\) for some \\(n \\geq 0\\). This is denoted by \\(i \\rightarrow j\\).
That is to say, state \\(i\\) can communicate with state \\(j\\) if it is possible to move from \\(i\\) to \\(j\\).
Note in [Defintion 9.1.1](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#CommunicatingStatesDef) that \\(n=0\\) is permitted. It follows that any state \\(i\\) is said communicate with itself: \\(i \\rightarrow i\\) necessarily.
Consider the Markov Chain governing the company website of [Example 8.4.2](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#Hitting_Probability_ex), that is the Markov chain described by the diagram

Note that if \\(X\_t=2\\) for some \\(t\\), that is the Markov chain is in state 2, then one can move to state 4 for example by the path \\(X\_t=2, X\_{t+1} = 1, X\_{t+2}=4\\) (other routes are available). Therefore \\(2 \\rightarrow 4\\). Similarly \\(1 \\rightarrow 4\\) and \\(3 \\rightarrow 4\\).
However since \\(p\_{44}=1\\), or equivalently \\(p\_{41}=p\_{42}=p\_{43}=0\\), it is impossible for the Markov chain to leave state 4. That is state \\(4\\) cannot communicate with any of the states \\(1,2,3\\).
States \\(i\\) and \\(j\\) are said to *intercommunicate* if \\(i \\rightarrow j\\) and \\(j \\rightarrow i\\). This is denoted by \\(i \\leftrightarrow j\\)
Considering again the Markov Chain governing the company website of [Example 8.4.2](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#Hitting_Probability_ex), seen in [Example 9.1.2](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#CommunicatingStatesEx), one can easily observe that \\(1 \\leftrightarrow 2, 2 \\leftrightarrow 3\\) and \\(1 \\leftrightarrow 3\\).
However since state \\(4\\) does not communicate with states \\(1,2\\) or \\(3\\), it follows that state \\(4\\) does not intercommunicate with any of the states \\(1,2\\) or \\(3\\)
We have introduced the notions of communication and intercommunication, as we anticipate that properties of states will be shared by those that intercommunicate with each other. In this vein, one could group together all states that can intercommunicate to partition the Markov chain into communicating classes.
We introduce two notions that capture collections of states that have strong properties regarding communication together.
A set \\(C\\) of states is called *irreducible* if \\(i \\leftrightarrow j\\), for all \\(i,j \\in C\\).
A Markov chain is said to be irreducible itself, if the set of all states is irreducible.
A set \\(C\\) of states is said to be *closed* if for any \\(i \\in C\\) and \\(j \\notin C\\), then \\(p\_{ij}=0\\).
Once a Markov chain reaches a closed state \\(C\\), it will subsequently never leave \\(C\\).
Consider the Markov chain represented by the following:

Show that the set \\(\\{1,2\\}\\) is both irreducible and closed.
From the Markov chain diagram, one can read the transition matrix for the Markov chain as \\\[P = \\begin{pmatrix} \\frac{1}{2} & \\frac{1}{2} & 0 & 0 & 0 & 0 \\\\ \\frac{1}{4} & \\frac{3}{4} & 0 & 0 & 0 & 0 \\\\ \\frac{1}{4} & \\frac{1}{4} & \\frac{1}{4} & \\frac{1}{4} & 0 & 0 \\\\ \\frac{1}{4} & 0 & \\frac{1}{4} & \\frac{1}{4} & 0 & \\frac{1}{4} \\\\ 0 & 0 & 0 & 0 & \\frac{1}{2} & \\frac{1}{2} \\\\ 0 & 0 & 0 & 0 & \\frac{1}{2} & \\frac{1}{2} \\end{pmatrix}.\\\]
Note that \\(p\_{12} = \\frac{1}{2}\>0\\) so there is a path from state \\(1\\) to state \\(2\\), and \\(p\_{21} = \\frac{1}{4}\>0\\) so there is a path from state \\(2\\) to state \\(1\\). Therefore \\(1 \\leftrightarrow 2\\), that is \\(\\{1,2\\}\\) is irreducible.
Also note \\(p\_{13}=p\_{14}=p\_{15}=p\_{16}=p\_{23}=p\_{24}=p\_{25}=p\_{26}=0\\), so by [Definition 9.1.6](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#ClosedDef) the set \\(\\{1,2\\}\\) is closed.
Therefore \\(\\{1,2\\}\\) is a closed and irreducible set.
Are there any other irreducible and closed sets in the Markov chain of [Example 9.1.7](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#ClosedIrreducibleSetsEx)? What about sets that are only irreducible, and sets that are only closed?
The terminology introduced in both [Defintion 9.1.5](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#IrreducibleDef) and [Definition 9.1.6](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#ClosedDef) freely for the remainder of the course.
If a closed set \\(C\\) of states contains only one state \\(i\\), that is \\(p\_{ii} = 1\\) and \\(p\_{ij}=0\\) for all \\(j \\neq i\\), we call \\(i\\) an *absorbing state*.
Find an absorbing state among the Markov chains we have seen in previous examples.
## Recurrence
A state of a Markov chain is called *recurrent* if \\\[P\[X\_n = i \\text{ for some } n\\geq 1 \\mid X\_0 = i\] = 1.\\\]
The essence of this definition is that a Markov chain that is currently in some recurrent state is certain to return to that state again in the future.
A state of a Markov chain that is not recurrent is called *transient*.
A Markov chain that is currently in some transient state is not certain to return to that state again in the future.
Consider the Markov Chain governing Mary Berrys’ choice of Nottingham coffee shop of [Example 8.1.3](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#Hitting_Probability_ex2), that is the Markov chain described by the diagram

Show that Latte Da is a recurrent state.
Suppose that the Mary Berry visits Latte Da on her \\(t^{th}\\) trip to Nottingham. Mathematically in terms of the Markov chain: \\(X\_t = \\text{Latte Da}\\). First we calculate the probability that Mary Berry doesn’t visit Latte Da on her next \\(N\\) visits to Nottingham. \\\[\\begin{align\*} \&P (\\text{Mary Berry doesn't visit Latte Da on next \$N\$ visits} \\mid X\_t = \\text{Latte Da}) \\\\ =& P ( X\_{t+N} = X\_{t+N-1} = \\cdots = X\_{t+1} = \\text{Deja Brew} \\mid X\_t = \\text{Latte Da}) \\\\ =& P ( X\_{t+N} = \\text{D.B.} \\mid X\_{t+N-1} = \\text{D.B.}) \\times P ( X\_{t+N-1} = \\text{D.B.} \\mid X\_{t+N-2} = \\text{D.B.}) \\times \\ldots \\\\ & \\qquad \\qquad \\ldots \\times P ( X\_{t+2} = \\text{D.B.} \\mid X\_{t+1} = \\text{D.B.}) \\times P ( X\_{t+1} = \\text{D.B.} \\mid X\_{t} = \\text{L.D.}) \\\\ =& P ( X\_{1} = \\text{D.B.} \\mid X\_{0} = \\text{D.B.}) \\times P ( X\_{1} = \\text{D.B.} \\mid X\_{0} = \\text{D.B.}) \\times \\ldots \\\\ & \\qquad \\qquad \\ldots \\times P ( X\_{1} = \\text{D.B.} \\mid X\_{0} = \\text{D.B.}) \\times P ( X\_{1} = \\text{D.B.} \\mid X\_{0} = \\text{L.D.}) \\\\ &= \\frac{5}{6} \\times \\frac{2}{3} \\times \\ldots \\frac{2}{3} \\\\ &= \\frac{5}{6} \\left( \\frac{2}{3} \\right)^{N-1} \\end{align\*}\\\]
Note that as \\(N\\) gets increasingly big, that is as the number of visits Mary Berry subsequently takes to Nottingham grows, the probability calculated above tends to \\(0\\). Therefore the probability of Mary Berry only ever using Deja Brew is \\(0\\), or in other words, Mary Berry is certain to return to Latte Da at some point. Therefore Latte Da is a recurrent state.
Consider again the Markov Chain governing the company website of [Example 8.4.2](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#Hitting_Probability_ex), seen in [Example 9.1.2](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#CommunicatingStatesEx). Show that state 1 is a transient state.
Suppose the user is on the *Home Page* of the website, that is \\(X\_t=1\\) for some \\(t\\). The user could click on the link to the *Staff Page*, that is, state 4 of the Markov chain. At this point it is impossible for the user to return to the *Home Page*, or state 1. That is to say, it is not certain that the Markov chain will ever return to state 1. Therefore state 1 is transiant.
If \\(i \\leftrightarrow j\\), then state \\(i\\) is recurrent if and only if state \\(j\\) is recurrent.
It follows from [Example 9.2.3](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#Recurrent_state_example) and [Lemma 9.2.5](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#Communicating_recurrent_relationship) that the state Deja Brew in the Mary Berry coffee shop example is also recurrent.
Indeed [Lemma 9.2.5](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#Communicating_recurrent_relationship) indicates that it makes sense to label communicating classes as either recurrent or transiant: if one state in a communicating class is recurrent/transient then all the states in that class must be recurrent/transient respectively. This leads to the following definition.
An irreducible Markov chain is said to be *recurrent* if it contains at least one recurrent state.
An irreducible Markov chain being recurrent as per [Definition 9.2.6](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#Recurrent_Markov_chain_def) is equivalent to every state of the Markov chain being recurrent.
## Mean Recurrence Times
The *mean recurrence time* of a state \\(i\\), denoted \\(\\mu\_i\\), is given by \\\[\\mu\_i = \\begin{cases} \\sum\\limits\_{n \\geq 1} n f\_{ii}^{(n)},& \\text{if state } i\\text{ recurrent,} \\\\ \\infty,& \\text{if state } i \\text{ transiant.} \\\\ \\end{cases}\\\]
Suppose \\(i\\) is a recurrent state. It follows from [Definition 9.3.1](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#Mean_Recurrence_Time_def), that the mean recurrence time is the average time that it takes for the Markov chain currently in state \\(i\\) to return to \\(i\\). This can be seen by noting that the summation is over all possibilities for how long it could take the Markov chain to return as required, and that \\(f\_{ii}^{(n)}\\) is the probability that the Markov chain moves from state \\(i\\) to state \\(i\\) in exactly \\(n\\) steps.
Calculate the mean recurrence time \\(\\mu\_1\\) for the Markov Chain governing the company website of [Example 8.4.2](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#Hitting_Probability_ex).
Substituting in the known value of \\(f\_{11}^{(1)} = 0\\) and \\(f\_{11}^{(n)} = \\frac{1}{3} \\cdot \\frac{1}{2^{n-2}}\\) where \\(n \\geq 2\\) from [Example 8.4.4](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#n_step_hitting_prob_ex) into [Definition 9.3.1](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#Mean_Recurrence_Time_def) obtain \\\[\\mu\_1 = \\sum\\limits\_{n \\geq 1} n f\_{11}^{(n)} = f\_{11}^{(1)} + \\sum\\limits\_{n=2}^{\\infty} n f\_{11}^{(2)} = 0 + \\sum\\limits\_{n=2}^{\\infty} n \\cdot \\frac{1}{3} \\cdot \\frac{1}{2^{n-2}} = \\frac{1}{3} \\sum\\limits\_{n=2}^{\\infty} \\frac{n}{2^{n-2}}.\\\] Using computer code calculate \\(\\sum\\limits\_{n=2}^{\\infty} \\frac{n}{2^{n-2}} = 6\\) and so \\\[\\mu\_1 = \\frac{1}{3} \\cdot 6 = 2.\\\]
Note that even if state \\(i\\) is recurrent, the mean recurrence time may still be \\(\\infty\\).
Consider a recurrent state \\(i\\). The state \\(i\\) is said to be *positive recurrent* if \\(\\mu\_i \< \\infty\\), or *null recurrent* if \\(\\mu\_i = \\infty\\).
It follows from [Example 9.3.2](https://bookdown.org/danielcavey27/lecture_notes/Mean_Recurrence_Time_ex) that state \\(1\\) in the Markov chain governing the company website is positive recurrent since \\(\\mu\_1 =2 \<\\infty\\).
If \\(i \\leftrightarrow j\\), then \\(i\\) is positive recurrent if and only if \\(j\\) is positive recurrent.
This lemma provides us with a shortcut to show positive recurrence of a large number of states.
Combining [Example 9.1.4](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#Intercommunicating_states_ex), [Example 9.3.4](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#positive_recurrent_ex) and [Lemma 9.3.5](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#Communicating_postive_recurrent_relationship) shows that states \\(2\\) and state \\(3\\) are also positive recurrent in the running company website Markov chain example.
An irreducible Markov chain is said to be *positive-recurrent* if it contains at least one positive-recurrent state.
An irreducible Markov chain being positive-recurrent as per [Definition 9.3.7](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#Positive_Recurrent_Markov_chain_def) is equivalent to every state of the Markov chain being positive-recurrent.
Note that [Lemma 9.3.5](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#Communicating_postive_recurrent_relationship) does not tell us anything about the mean recurrent time of intercommunicating recurrent states beyond finiteness. Namely knowing that \\(\\mu\_1 = 2\\) in [Example 9.3.2](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#Mean_Recurrence_Time_ex) does not provide any new information about \\(\\mu\_2\\) and \\(\\mu\_3\\), beyond what we would have known given \\(\\mu\_1 \< \\infty\\).
If a Markov chain has a finite number of states, then all recurrent states are positive.
An irreducible Markov chain with a finite number of states is positive-recurrant.
## Periodicity
Consider a Markov chain in some state \\(i\\). Consider the values of \\(n\\) for which \\(p\_{ii}^{(n)}\>0\\), that is, the positive integers \\(n\\) for which it is possible for the Markov chain to return to \\(i\\) in \\(n\\) steps. Throughout this section we denote this collection of values by \\(\\{ a\_1, a\_2, a\_3, \\ldots \\}\\).
The *period* of state \\(i\\) is given by \\\[d\_i = \\gcd (a\_1,a\_2,a\_3, \\ldots )\\\]
Recall the scenario of *Week 6 Questions, Questions 1 to 8*:
Every year Ria chooses exactly two apprentices to compete in the fictitious competition *Nottingham’s Got Mathematicians*. Apprentices are recommended to Ria by Daniel and Lisa. Initially Daniel and Lisa recommend one candidate each. However if Daniel selects the apprentice who finishes second among Ria’s nominees, then this opportunity for recommendation is given to Lisa the following year. Similarly if Lisa chooses the candidate who who finishes second among Ria’s nominees, then this opportunity for recommendation is given to Daniel the following year. This rule is repeated every year, even if Daniel or Lisa choose both the candidates.
Generally Lisa is better at picking competitors: a Lisa endorsed candidate beats a Daniel endorsed candidate \\(75 \\%\\) of the time.
This scenario can be modelled by the Markov chain:

Calculate \\(d\_1\\), the period of state \\(1\\).
Consider all the possible paths that start and end in state \\(1\\): \\\[\\begin{align\*} 1 &\\rightarrow 0 \\rightarrow 1 \\\\ 1 &\\rightarrow 2 \\rightarrow 1 \\\\ 1 &\\rightarrow 0 \\rightarrow 1 \\rightarrow 0 \\rightarrow 1 \\\\ 1 &\\rightarrow 0 \\rightarrow 1 \\rightarrow 2 \\rightarrow 1 \\\\ 1 &\\rightarrow 2 \\rightarrow 1 \\rightarrow 0 \\rightarrow 1 \\\\ 1 &\\rightarrow 2 \\rightarrow 1 \\rightarrow 2 \\rightarrow 1 \\\\ 1 &\\rightarrow 0 \\rightarrow 1 \\rightarrow 0 \\rightarrow 1 \\rightarrow 0 \\rightarrow 1 \\\\ &\\vdots \\end{align\*}\\\] The lengths of these paths respectively are \\(2,2,4,4,4,4,6, \\ldots\\). Calculate \\\[d\_1 = \\gcd ( 2,2,4,4,4,4,6, \\ldots) =2.\\\]
This definition goes a long way towards answering the question “If it is possible to return to \\(i\\), for what values of \\(n\\) is it possible to return in \\(n\\) steps?” identified at the opening of the chapter. Namely for a given value \\(n\\), it is possible to return from to state \\(i\\) to state \\(i\\) if and only if \\(d\_i\\) divides \\(n\\) exactly.
If \\(i \\leftrightarrow j\\), then \\(i\\) and \\(j\\) have the same period: \\\[d\_i = d\_j.\\\]
Consider the Markov chain of [Example 9.4.2](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#PeriodofaStateEx). Calculate \\(d\_0\\) and \\(d\_2\\), the periods of states \\(0\\) and \\(2\\).
Clearly states \\(0,1,2\\) intercommunicate, that is, \\(0 \\leftrightarrow 1\\) and \\(1 \\leftrightarrow 2\\). From [Example 9.4.2](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#PeriodofaStateEx), we know \\(d\_1 = 2\\) and so by [Lemma 9.4.3](https://bookdown.org/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html#IntercommunicatingStatesSamePeriod) it follows that \\(d\_0=d\_2=2\\).
A state is said to be *aperiodic* if \\(d\_i=1\\).
A Markov chain is *aperiodic* if all of its states are aperiodic.
Since trivially \\(1\\) divides all positive integers \\(n\\), this definition is equivalent to saying that it is possible to return to state \\(i\\) in any number of steps. Aperiodicity is often a feature of the most mathematically interesting Markov chains, and will be a key assumption when it comes to talking about steady states.
An absorbing state is aperiodic. |
| Shard | 39 (laksa) |
| Root Hash | 3646282709075601439 |
| Unparsed URL | org,bookdown!/danielcavey27/lecture_notes/states-recurrence-and-periodicity.html s443 |