ℹ️ Skipped - page is already crawled
| Filter | Status | Condition | Details |
|---|---|---|---|
| HTTP status | PASS | download_http_code = 200 | HTTP 200 |
| Age cutoff | PASS | download_stamp > now() - 6 MONTH | 0.2 months ago |
| History drop | PASS | isNull(history_drop_reason) | No drop reason |
| Spam/ban | PASS | fh_dont_index != 1 AND ml_spam_score = 0 | ml_spam_score=0 |
| Canonical | PASS | meta_canonical IS NULL OR = '' OR = src_unparsed | Not set |
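The five filter rows above read as one conjunctive indexability check: the page is kept only if every condition passes. A minimal Python sketch of that logic, assuming hypothetical field names lifted from the Condition column (`download_http_code`, `download_stamp`, `history_drop_reason`, `fh_dont_index`, `ml_spam_score`, `meta_canonical`, `src_unparsed`) and a record stored as a plain dict:

```python
from datetime import datetime, timedelta

def is_indexable(page: dict, now: datetime) -> bool:
    """Re-implementation of the filter table: all conditions must pass."""
    checks = [
        page["download_http_code"] == 200,                    # HTTP status
        page["download_stamp"] > now - timedelta(days=183),   # ~6-month age cutoff
        page.get("history_drop_reason") is None,              # history drop
        page.get("fh_dont_index") != 1
            and page.get("ml_spam_score", 0) == 0,            # spam/ban
        page.get("meta_canonical")
            in (None, "", page["src_unparsed"]),              # canonical
    ]
    return all(checks)

# The record shown in this report passes every filter.
page = {
    "download_http_code": 200,
    "download_stamp": datetime(2026, 4, 1, 8, 28, 13),
    "history_drop_reason": None,
    "fh_dont_index": 0,
    "ml_spam_score": 0,
    "meta_canonical": None,
    "src_unparsed": "https://online.stat.psu.edu/stat505/lesson/4/4.5",
}
print(is_indexable(page, datetime(2026, 4, 7)))  # True
```
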

| Property | Value |
|---|---|
| URL | https://online.stat.psu.edu/stat505/lesson/4/4.5 |
| Last Crawled | 2026-04-01 08:28:13 (6 days ago) |
| First Indexed | 2020-06-11 19:07:10 (5 years ago) |
| HTTP Status Code | 200 |
| Meta Title | 4.5 - Eigenvalues and Eigenvectors \| STAT 505 |
| Meta Description | null |
| Meta Canonical | null |
| Boilerpipe Text | To illustrate these calculations consider the correlation matrix $R$ as shown below:

$$R = \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}$$

Then, using the definition of the eigenvalues, we must calculate the determinant of $R - \lambda$ times the Identity matrix:

$$\vert R - \lambda I \vert = \left\vert \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix} - \lambda \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \right\vert$$

So, $R$ in the expression above is given in blue, the Identity matrix follows in red, and $\lambda$ here is the eigenvalue that we wish to solve for. Carrying out the math we end up with the matrix with $1 - \lambda$ on the diagonal and $\rho$ on the off-diagonal. Then calculating this determinant we obtain $(1 - \lambda)^2 - \rho^2$, that is, $1 - \lambda$ squared minus $\rho$ squared:

$$\begin{vmatrix} 1-\lambda & \rho \\ \rho & 1-\lambda \end{vmatrix} = (1-\lambda)^2 - \rho^2 = \lambda^2 - 2\lambda + 1 - \rho^2$$

Setting this expression equal to zero we end up with the following:

$$\lambda^2 - 2\lambda + 1 - \rho^2 = 0$$

To solve for $\lambda$ we use the general result that any solution to the second-order polynomial

$$a y^2 + b y + c = 0$$

is given by the following expression:

$$y = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$

Here, $a = 1$, $b = -2$ (the term that precedes $\lambda$), and $c = 1 - \rho^2$. Substituting these terms in the equation above, we obtain that $\lambda$ must be equal to 1 plus or minus the correlation $\rho$:

$$\lambda = \frac{2 \pm \sqrt{2^2 - 4(1 - \rho^2)}}{2} = 1 \pm \sqrt{1 - (1 - \rho^2)} = 1 \pm \rho$$

Here we will take the following solutions:

$$\lambda_1 = 1 + \rho \qquad \lambda_2 = 1 - \rho$$

Next, to obtain the corresponding eigenvectors, we must solve the system of equations below:

$$(R - \lambda I) e = 0$$

This is the product of $R - \lambda I$ and the eigenvector $e$, set equal to 0. In other words, for this specific problem this translates to the expression below:

$$\left\lbrace \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix} - \lambda \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \right\rbrace \begin{pmatrix} e_1 \\ e_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

This simplifies as follows:

$$\begin{pmatrix} 1-\lambda & \rho \\ \rho & 1-\lambda \end{pmatrix} \begin{pmatrix} e_1 \\ e_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

yielding a system of two equations with two unknowns:

$$(1-\lambda) e_1 + \rho e_2 = 0$$
$$\rho e_1 + (1-\lambda) e_2 = 0$$

Note! This does not have a unique solution. If $(e_1, e_2)$ is one solution, then a second solution can be obtained by multiplying the first solution by any non-zero constant $c$, i.e., $(c e_1, c e_2)$. Therefore, we will require the additional condition that the sum of the squared values of $e_1$ and $e_2$ equals 1 (i.e., $e_1^2 + e_2^2 = 1$).

Consider the first equation:

$$(1-\lambda) e_1 + \rho e_2 = 0$$

Solving this equation for $e_2$ we obtain the following:

$$e_2 = -\frac{(1-\lambda)}{\rho} e_1$$

Substituting this into $e_1^2 + e_2^2 = 1$ we get the following:

$$e_1^2 + \frac{(1-\lambda)^2}{\rho^2} e_1^2 = 1$$

Recall that $\lambda = 1 \pm \rho$. In either case we find that $(1-\lambda)^2 = \rho^2$, so the expression above simplifies to:

$$2 e_1^2 = 1$$

Or, in other words:

$$e_1 = \frac{1}{\sqrt{2}}$$

Using the expression for $e_2$ obtained above,

$$e_2 = -\frac{1-\lambda}{\rho} e_1$$

we get $e_2 = \frac{1}{\sqrt{2}}$ for $\lambda = 1 + \rho$ and $e_2 = -\frac{1}{\sqrt{2}}$ for $\lambda = 1 - \rho$.

Therefore, the two eigenvectors are given by the two vectors shown below:

$$\begin{pmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{pmatrix} \text{ for } \lambda_1 = 1 + \rho \qquad \text{and} \qquad \begin{pmatrix} \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} \end{pmatrix} \text{ for } \lambda_2 = 1 - \rho$$

Some properties of the eigenvalues of the variance-covariance matrix are worth noting at this point. Suppose that $\lambda_1$ through $\lambda_p$ are the eigenvalues of the variance-covariance matrix $\Sigma$. By definition, the total variation is given by the sum of the variances. It turns out that this is also equal to the sum of the eigenvalues of the variance-covariance matrix. Thus, the total variation is:

$$\sum_{j=1}^{p} \sigma_j^2 = \sigma_1^2 + \sigma_2^2 + \cdots + \sigma_p^2 = \lambda_1 + \lambda_2 + \cdots + \lambda_p = \sum_{j=1}^{p} \lambda_j$$

The generalized variance is equal to the product of the eigenvalues:

$$\vert \Sigma \vert = \prod_{j=1}^{p} \lambda_j = \lambda_1 \times \lambda_2 \times \cdots \times \lambda_p$$ |
| Markdown |
[STAT 505](https://online.stat.psu.edu/stat505/ "Home") Applied Multivariate Statistical Analysis
# 4\.5 - Eigenvalues and Eigenvectors
The next thing that we would like to be able to do is to describe the shape of this ellipse mathematically so that we can understand how the data are distributed in multiple dimensions under a multivariate normal. To do this we first must define the eigenvalues and the eigenvectors of a matrix.
In particular, we will consider the computation of the eigenvalues and eigenvectors of a symmetric matrix A as shown below:
$$\mathbf{A} = \begin{pmatrix} a_{11} & a_{12} & \ldots & a_{1p} \\ a_{21} & a_{22} & \ldots & a_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ a_{p1} & a_{p2} & \ldots & a_{pp} \end{pmatrix}$$
Note: we call the matrix symmetric if $a_{ij} = a_{ji}$ for each *i* and *j*.
Usually, A is taken to be either the variance-covariance matrix Σ, the correlation matrix, or their estimates **S** and **R**, respectively.
Eigenvalues and eigenvectors are used for:
- Computing prediction and confidence ellipses
- Principal Components Analysis (later in the course)
- Factor Analysis (also later in this course)
For the present, we will be primarily concerned with eigenvalues and eigenvectors of the variance-covariance matrix.
First of all, let's define what these terms are...
Eigenvalues
If we have a *p* x *p* matrix $\mathbf{A}$, we will have *p* eigenvalues, $\lambda_1, \lambda_2, \ldots, \lambda_p$. They are obtained by solving the equation given in the expression below:

$$\vert \mathbf{A} - \lambda \mathbf{I} \vert = 0$$
On the left-hand side, we have the matrix A minus λ times the Identity matrix. When we calculate the determinant of the resulting matrix, we end up with a polynomial of order *p*. Setting this polynomial equal to zero, and solving for λ we obtain the desired eigenvalues. In general, we will have *p* solutions and so there are *p* eigenvalues, not necessarily all unique.
Eigenvectors
The corresponding eigenvectors $\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_p$ are obtained by solving the expression below:

$$(\mathbf{A} - \lambda_j \mathbf{I}) \mathbf{e}_j = \mathbf{0}$$

Here, we take the difference between the matrix $\mathbf{A}$ and the $j$-th eigenvalue times the Identity matrix, multiply this quantity by the $j$-th eigenvector, and set it all equal to zero. Solving gives the eigenvector $\mathbf{e}_j$ associated with eigenvalue $\lambda_j$.

This does not generally have a unique solution. So, to obtain a unique solution we will often require that $\mathbf{e}_j$ transposed times $\mathbf{e}_j$ equal 1, or, if you like, that the sum of the squared elements of $\mathbf{e}_j$ equal 1:

$$\mathbf{e}_j' \mathbf{e}_j = 1$$
**Note\!** Eigenvectors corresponding to different eigenvalues are orthogonal. In situations where two (or more) eigenvalues are equal, the corresponding eigenvectors may still be chosen to be orthogonal.
## Example 4-3: Consider the 2 x 2 matrix
To illustrate these calculations consider the correlation matrix **R** as shown below:
$$\mathbf{R} = \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}$$

Then, using the definition of the eigenvalues, we must calculate the determinant of $\mathbf{R} - \lambda$ times the Identity matrix:

$$\vert \mathbf{R} - \lambda \mathbf{I} \vert = \left\vert \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix} - \lambda \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \right\vert$$

So, $\mathbf{R}$ in the expression above is given in blue, the Identity matrix follows in red, and $\lambda$ here is the eigenvalue that we wish to solve for. Carrying out the math we end up with the matrix with $1 - \lambda$ on the diagonal and $\rho$ on the off-diagonal. Then calculating this determinant we obtain $(1 - \lambda)^2 - \rho^2$, that is, $1 - \lambda$ squared minus $\rho$ squared:

$$\begin{vmatrix} 1-\lambda & \rho \\ \rho & 1-\lambda \end{vmatrix} = (1-\lambda)^2 - \rho^2 = \lambda^2 - 2\lambda + 1 - \rho^2$$
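The determinant expansion above is easy to spot-check numerically. A minimal pure-Python sketch (not part of the original lesson) that verifies $\vert \mathbf{R} - \lambda \mathbf{I} \vert = (1-\lambda)^2 - \rho^2 = \lambda^2 - 2\lambda + 1 - \rho^2$ over a grid of sample values:

```python
def det2(a, b, c, d):
    # Determinant of the 2x2 matrix [[a, b], [c, d]].
    return a * d - b * c

# Check the expansion for several correlations rho and trial values lam.
for rho in (-0.9, -0.3, 0.0, 0.5, 0.8):
    for lam in (-1.0, 0.0, 0.7, 2.0):
        lhs = det2(1 - lam, rho, rho, 1 - lam)       # |R - lam*I|
        rhs = (1 - lam) ** 2 - rho ** 2              # factored form
        expanded = lam ** 2 - 2 * lam + 1 - rho ** 2 # expanded polynomial
        assert abs(lhs - rhs) < 1e-12 and abs(rhs - expanded) < 1e-12
print("determinant identity verified")
```
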
Setting this expression equal to zero we end up with the following:

$$\lambda^2 - 2\lambda + 1 - \rho^2 = 0$$

To solve for $\lambda$ we use the general result that any solution to the second-order polynomial

$$a y^2 + b y + c = 0$$

is given by the following expression:

$$y = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$

Here, $a = 1$, $b = -2$ (the term that precedes $\lambda$), and $c = 1 - \rho^2$. Substituting these terms in the equation above, we obtain that $\lambda$ must be equal to 1 plus or minus the correlation $\rho$:

$$\lambda = \frac{2 \pm \sqrt{2^2 - 4(1 - \rho^2)}}{2} = 1 \pm \sqrt{1 - (1 - \rho^2)} = 1 \pm \rho$$

Here we will take the following solutions:

$$\lambda_1 = 1 + \rho \qquad \lambda_2 = 1 - \rho$$
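The eigenvalues $1 \pm \rho$ can be reproduced with the quadratic formula exactly as described. A short illustrative Python check (added here; the original lesson has no code):

```python
import math

def eigenvalues_corr2(rho):
    # Solve lam^2 - 2*lam + (1 - rho^2) = 0 via the quadratic formula,
    # with a = 1, b = -2, c = 1 - rho^2 as in the derivation above.
    a, b, c = 1.0, -2.0, 1.0 - rho ** 2
    disc = math.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

lam1, lam2 = eigenvalues_corr2(0.7)
print(lam1, lam2)  # approximately 1 + rho = 1.7 and 1 - rho = 0.3
```
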
Next, to obtain the corresponding eigenvectors, we must solve the system of equations below:

$$(\mathbf{R} - \lambda \mathbf{I}) \mathbf{e} = \mathbf{0}$$

This is the product of $\mathbf{R} - \lambda \mathbf{I}$ and the eigenvector $\mathbf{e}$, set equal to $\mathbf{0}$. In other words, for this specific problem this translates to the expression below:

$$\left\lbrace \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix} - \lambda \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \right\rbrace \begin{pmatrix} e_1 \\ e_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

This simplifies as follows:

$$\begin{pmatrix} 1-\lambda & \rho \\ \rho & 1-\lambda \end{pmatrix} \begin{pmatrix} e_1 \\ e_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

yielding a system of two equations with two unknowns:

$$(1-\lambda) e_1 + \rho e_2 = 0$$
$$\rho e_1 + (1-\lambda) e_2 = 0$$
**Note**! This does **not** have a unique solution. If $(e_1, e_2)$ is one solution, then a second solution can be obtained by multiplying the first solution by any non-zero constant *c*, i.e., $(c e_1, c e_2)$. Therefore, we will require the additional condition that the sum of the squared values of $e_1$ and $e_2$ equals 1 (i.e., $e_1^2 + e_2^2 = 1$).
Consider the first equation:

$$(1-\lambda) e_1 + \rho e_2 = 0$$

Solving this equation for $e_2$ we obtain the following:

$$e_2 = -\frac{(1-\lambda)}{\rho} e_1$$

Substituting this into $e_1^2 + e_2^2 = 1$ we get the following:

$$e_1^2 + \frac{(1-\lambda)^2}{\rho^2} e_1^2 = 1$$

Recall that $\lambda = 1 \pm \rho$. In either case we find that $(1-\lambda)^2 = \rho^2$, so the expression above simplifies to:

$$2 e_1^2 = 1$$

Or, in other words:

$$e_1 = \frac{1}{\sqrt{2}}$$

Using the expression for $e_2$ obtained above,

$$e_2 = -\frac{1-\lambda}{\rho} e_1$$

we get $e_2 = \frac{1}{\sqrt{2}}$ for $\lambda = 1 + \rho$ and $e_2 = -\frac{1}{\sqrt{2}}$ for $\lambda = 1 - \rho$.
Therefore, the two eigenvectors are given by the two vectors shown below:

$$\begin{pmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{pmatrix} \text{ for } \lambda_1 = 1 + \rho \qquad \text{and} \qquad \begin{pmatrix} \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} \end{pmatrix} \text{ for } \lambda_2 = 1 - \rho$$
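The two eigenpairs can be verified directly against the defining equation $(\mathbf{R} - \lambda \mathbf{I})\mathbf{e} = \mathbf{0}$, the unit-length constraint, and the orthogonality property. A quick pure-Python sketch (illustrative, not from the original page):

```python
import math

rho = 0.6  # any correlation with -1 < rho < 1, rho != 0
lam1, lam2 = 1 + rho, 1 - rho
e1 = (1 / math.sqrt(2), 1 / math.sqrt(2))    # eigenvector for lam1
e2 = (1 / math.sqrt(2), -1 / math.sqrt(2))   # eigenvector for lam2

def residual(lam, e):
    # (R - lam*I) e for R = [[1, rho], [rho, 1]]; should be (0, 0).
    return ((1 - lam) * e[0] + rho * e[1],
            rho * e[0] + (1 - lam) * e[1])

for lam, e in ((lam1, e1), (lam2, e2)):
    r = residual(lam, e)
    assert abs(r[0]) < 1e-12 and abs(r[1]) < 1e-12    # eigen-equation holds
    assert abs(e[0] ** 2 + e[1] ** 2 - 1) < 1e-12     # unit length
assert abs(e1[0] * e2[0] + e1[1] * e2[1]) < 1e-12     # orthogonal
print("eigenpairs verified")
```
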
Some properties of the eigenvalues of the variance-covariance matrix are worth noting at this point. Suppose that $\lambda_1$ through $\lambda_p$ are the eigenvalues of the variance-covariance matrix $\Sigma$. By definition, the total variation is given by the sum of the variances. It turns out that this is also equal to the sum of the eigenvalues of the variance-covariance matrix. Thus, the total variation is:

$$\sum_{j=1}^{p} \sigma_j^2 = \sigma_1^2 + \sigma_2^2 + \cdots + \sigma_p^2 = \lambda_1 + \lambda_2 + \cdots + \lambda_p = \sum_{j=1}^{p} \lambda_j$$

The generalized variance is equal to the product of the eigenvalues:

$$\vert \Sigma \vert = \prod_{j=1}^{p} \lambda_j = \lambda_1 \times \lambda_2 \times \cdots \times \lambda_p$$
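Both properties (total variation = trace = sum of eigenvalues; generalized variance = determinant = product of eigenvalues) can be confirmed for the 2 x 2 correlation matrix from this example. A minimal Python check, added here for illustration:

```python
# For R = [[1, rho], [rho, 1]] the eigenvalues are 1 + rho and 1 - rho.
rho = 0.4
lam1, lam2 = 1 + rho, 1 - rho

trace = 1 + 1             # sum of the diagonal variances of R
det = 1 * 1 - rho * rho   # |R| for the 2x2 case

assert abs(trace - (lam1 + lam2)) < 1e-12  # total variation = sum of eigenvalues
assert abs(det - lam1 * lam2) < 1e-12      # generalized variance = product
print(trace, det)
```
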
Except where otherwise noted, content on this site is licensed under a [CC BY-NC 4.0](http://creativecommons.org/licenses/by-nc/4.0/) license.
[The Pennsylvania State University © 2025](https://www.psu.edu/) |
| Readable Markdown | To illustrate these calculations consider the correlation matrix **R** as shown below:

$$\mathbf{R} = \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}$$

Then, using the definition of the eigenvalues, we must calculate the determinant of $\mathbf{R} - \lambda$ times the Identity matrix:

$$\vert \mathbf{R} - \lambda \mathbf{I} \vert = \left\vert \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix} - \lambda \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \right\vert$$

So, $\mathbf{R}$ in the expression above is given in blue, the Identity matrix follows in red, and $\lambda$ here is the eigenvalue that we wish to solve for. Carrying out the math we end up with the matrix with $1 - \lambda$ on the diagonal and $\rho$ on the off-diagonal. Then calculating this determinant we obtain $(1 - \lambda)^2 - \rho^2$, that is, $1 - \lambda$ squared minus $\rho$ squared:

$$\begin{vmatrix} 1-\lambda & \rho \\ \rho & 1-\lambda \end{vmatrix} = (1-\lambda)^2 - \rho^2 = \lambda^2 - 2\lambda + 1 - \rho^2$$

Setting this expression equal to zero we end up with the following:

$$\lambda^2 - 2\lambda + 1 - \rho^2 = 0$$

To solve for $\lambda$ we use the general result that any solution to the second-order polynomial

$$a y^2 + b y + c = 0$$

is given by the following expression:

$$y = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$

Here, $a = 1$, $b = -2$ (the term that precedes $\lambda$), and $c = 1 - \rho^2$. Substituting these terms in the equation above, we obtain that $\lambda$ must be equal to 1 plus or minus the correlation $\rho$:

$$\lambda = \frac{2 \pm \sqrt{2^2 - 4(1 - \rho^2)}}{2} = 1 \pm \sqrt{1 - (1 - \rho^2)} = 1 \pm \rho$$

Here we will take the following solutions:

$$\lambda_1 = 1 + \rho \qquad \lambda_2 = 1 - \rho$$

Next, to obtain the corresponding eigenvectors, we must solve the system of equations below:

$$(\mathbf{R} - \lambda \mathbf{I}) \mathbf{e} = \mathbf{0}$$

This is the product of $\mathbf{R} - \lambda \mathbf{I}$ and the eigenvector **e**, set equal to 0. In other words, for this specific problem this translates to the expression below:

$$\left\lbrace \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix} - \lambda \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \right\rbrace \begin{pmatrix} e_1 \\ e_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

This simplifies as follows:

$$\begin{pmatrix} 1-\lambda & \rho \\ \rho & 1-\lambda \end{pmatrix} \begin{pmatrix} e_1 \\ e_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

yielding a system of two equations with two unknowns:

$$(1-\lambda) e_1 + \rho e_2 = 0$$
$$\rho e_1 + (1-\lambda) e_2 = 0$$

**Note**! This does **not** have a unique solution. If $(e_1, e_2)$ is one solution, then a second solution can be obtained by multiplying the first solution by any non-zero constant *c*, i.e., $(c e_1, c e_2)$. Therefore, we will require the additional condition that the sum of the squared values of $e_1$ and $e_2$ equals 1 (i.e., $e_1^2 + e_2^2 = 1$).

Consider the first equation:

$$(1-\lambda) e_1 + \rho e_2 = 0$$

Solving this equation for $e_2$ we obtain the following:

$$e_2 = -\frac{(1-\lambda)}{\rho} e_1$$

Substituting this into $e_1^2 + e_2^2 = 1$ we get the following:

$$e_1^2 + \frac{(1-\lambda)^2}{\rho^2} e_1^2 = 1$$

Recall that $\lambda = 1 \pm \rho$. In either case we find that $(1-\lambda)^2 = \rho^2$, so the expression above simplifies to:

$$2 e_1^2 = 1$$

Or, in other words:

$$e_1 = \frac{1}{\sqrt{2}}$$

Using the expression for $e_2$ obtained above,

$$e_2 = -\frac{1-\lambda}{\rho} e_1$$

we get $e_2 = \frac{1}{\sqrt{2}}$ for $\lambda = 1 + \rho$ and $e_2 = -\frac{1}{\sqrt{2}}$ for $\lambda = 1 - \rho$.

Therefore, the two eigenvectors are given by the two vectors shown below:

$$\begin{pmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{pmatrix} \text{ for } \lambda_1 = 1 + \rho \qquad \text{and} \qquad \begin{pmatrix} \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} \end{pmatrix} \text{ for } \lambda_2 = 1 - \rho$$

Some properties of the eigenvalues of the variance-covariance matrix are worth noting at this point. Suppose that $\lambda_1$ through $\lambda_p$ are the eigenvalues of the variance-covariance matrix $\Sigma$. By definition, the total variation is given by the sum of the variances. It turns out that this is also equal to the sum of the eigenvalues of the variance-covariance matrix. Thus, the total variation is:

$$\sum_{j=1}^{p} \sigma_j^2 = \sigma_1^2 + \sigma_2^2 + \cdots + \sigma_p^2 = \lambda_1 + \lambda_2 + \cdots + \lambda_p = \sum_{j=1}^{p} \lambda_j$$

The generalized variance is equal to the product of the eigenvalues:

$$\vert \Sigma \vert = \prod_{j=1}^{p} \lambda_j = \lambda_1 \times \lambda_2 \times \cdots \times \lambda_p$$ |
| Shard | 94 (laksa) |
| Root Hash | 16520191723648810894 |
| Unparsed URL | edu,psu!stat,online,/stat505/lesson/4/4.5 s443 |