ℹ️ Skipped - page is already crawled
| Filter | Status | Condition | Details |
|---|---|---|---|
| HTTP status | PASS | download_http_code = 200 | HTTP 200 |
| Age cutoff | PASS | download_stamp > now() - 6 MONTH | 0.4 months ago |
| History drop | PASS | isNull(history_drop_reason) | No drop reason |
| Spam/ban | PASS | fh_dont_index != 1 AND ml_spam_score = 0 | ml_spam_score=0 |
| Canonical | PASS | meta_canonical IS NULL OR = '' OR = src_unparsed | Not set |
| Property | Value |
|---|---|
| URL | https://www.value-at-risk.net/eigenvalues-and-eigenvectors/ |
| Last Crawled | 2026-04-15 23:20:04 (11 days ago) |
| First Indexed | 2016-07-07 19:06:39 (9 years ago) |
| HTTP Status Code | 200 |
| Content | |
| Meta Title | Eigenvalues and Eigenvectors - Value-at-Risk: Theory and Practice |
| Meta Description | An eigenvector of a matrix is any vector for which multiplying by the matrix is not different from multiplying by a scalar (the eigenvalue). Suppose λ is an |
| Meta Canonical | null |
| Boilerpipe Text | Consider a square matrix c. If
[2.66]
for some scalar λ and some vector ν ≠ 0, then we call λ an eigenvalue and ν an eigenvector of c. This means that an eigenvector of a matrix is any vector for which multiplying by the matrix is not different from multiplying by a scalar (the eigenvalue).
2.6.1 Theory
Suppose λ is an eigenvalue and ν is a corresponding eigenvector of c. For any scalar a, the vector aν is also an eigenvector corresponding to eigenvalue λ. This follows because
[2.67]
Accordingly, eigenvectors are uniquely determined only up to scalar multiplication. If a set of eigenvectors is linearly independent, we say they are distinct. To determine eigenvalues and eigenvectors of a matrix, we focus first on the eigenvalues. Rearranging [2.66], we obtain
[2.68]
where I is the identity matrix. This equation will hold for some nonzero vector ν if and only if the matrix (c − λI) is singular. Accordingly, we seek values for which the matrix (c − λI) has a determinant of 0. Consider matrix
[2.69]
for which
[2.70]
This has determinant
[2.71]
which is a third-order polynomial. It has roots λ = −1, 2 and 3. We find corresponding eigenvectors ν by substituting the eigenvalues into [2.68] and solving. For example, with λ = −1, [2.68] becomes:
[2.72]
By inspection, a solution is ν = (−1, −3, 3). Obviously, any multiple of this is also a solution. We repeat the same analysis for the other eigenvalues. Results are indicated in Exhibit 2.7.
Exhibit 2.7: Eigenvalues and eigenvectors of matrix [2.69].
The approach we employed in our example is useful for deriving an important result. Consider an arbitrary n × n matrix c. To find its eigenvalues, we construct the determinant of (c − λI) and set it equal to 0. This results in an nth-order polynomial equation. By the fundamental theorem of algebra, it has n solutions. We conclude that every n × n matrix has n eigenvalues. Of course, some may be complex. Others may be repeated. In practical applications, eigenvalues are not calculated in this manner. Although setting the determinant of (c − λI) equal to 0 and solving is theoretically useful, there are more efficient algorithms, which are implemented in various software packages. See Strang (2005).
Eigenvalues have a number of convenient properties. A matrix and its transpose both have the same eigenvalues. If λ is an eigenvalue of a nonsingular matrix, then 1/λ is an eigenvalue of its inverse. The product of the eigenvalues of a matrix equals its determinant.
2.6.2 Intuitive Example
Consider an intuitive example. A sphere of unit radius is positioned at the center of a three-dimensional coordinate system. It is rotating about the x3-axis. The matrix
[2.73]
describes a one-eighth (45°) rotation of the sphere. For example, multiplying c by the vector (1, 0, 0) yields the vector (.7071, .7071, 0), which is rotated 45°. This is depicted in Exhibit 2.8.
Exhibit 2.8: The matrix c rotates points 45° about the x3-axis. This is illustrated for the point (1, 0, 0), which it transforms into the point (.7071, .7071, 0).
Intuitively, what might we expect to be an eigenvector of the matrix c? Is there a point on the unit sphere that a 45° rotation transforms into a multiple of itself? Of course! Consider the point at the north pole. It is the point (0, 0, 1), and it is transformed into itself. We conclude that an eigenvector of c is the vector (0, 0, 1). The corresponding eigenvalue is 1. Because it is a 3 × 3 matrix, c has two other eigenvalues, but they are both complex numbers.
Exercises
2.7 Find the eigenvalues and eigenvectors of the matrix [2.74]. Solution
2.8 Prove that the eigenvalues of a diagonal matrix are its diagonal elements. Solution
2.9 Use one of the stated properties of eigenvalues to prove that a matrix is singular if and only if it has 0 as one of its eigenvalues. Solution |
| Markdown | # 2\.6 Eigenvalues and Eigenvectors
Consider a square matrix ***c***. If
\[2.66\]   ***c*** **ν** = λ**ν**
for some scalar λ and some vector **ν** ≠ 0, then we call λ an **eigenvalue** and **ν** an **eigenvector** of ***c***. This means that an eigenvector of a matrix is any vector for which multiplying by the matrix is not different from multiplying by a scalar (the eigenvalue).
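As a concrete check of this definition, here is a minimal sketch using NumPy (the package choice and the 2 × 2 matrix are ours, not the text's):

```python
import numpy as np

# A hypothetical 2x2 example: for c below, the vector (1, 1)
# satisfies c @ v = 3 v, so λ = 3 is an eigenvalue of c.
c = np.array([[2.0, 1.0],
              [1.0, 2.0]])
v = np.array([1.0, 1.0])
assert np.allclose(c @ v, 3.0 * v)  # multiplying by c equals multiplying by λ = 3
```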
###### 2\.6.1 Theory
Suppose λ is an eigenvalue and **ν** is a corresponding eigenvector of ***c***. For any scalar *a*, the vector *a***ν** is also an eigenvector corresponding to eigenvalue λ. This follows because
\[2.67\]   ***c***(*a***ν**) = *a*(***c*** **ν**) = *a*(λ**ν**) = λ(*a***ν**)
Accordingly, eigenvectors are uniquely determined only up to scalar multiplication. If a set of eigenvectors is linearly independent, we say they are **distinct**. To determine eigenvalues and eigenvectors of a matrix, we focus first on the eigenvalues. Rearranging \[2.66\], we obtain
\[2.68\]   (***c*** − λ***I***)**ν** = **0**
where ***I*** is the identity matrix. This equation will hold for some nonzero vector **ν** if and only if the matrix (***c*** − λ***I***) is singular. Accordingly, we seek values for which the matrix (***c*** − λ***I***) has a determinant of 0. Consider matrix
\[2.69\]
for which
\[2.70\]
This has determinant
\[2.71\]
which is a third-order polynomial. It has roots λ = −1, 2 and 3. We find corresponding eigenvectors **ν** by substituting the eigenvalues into \[2.68\] and solving. For example, with λ = −1, \[2.68\] becomes:
\[2.72\]
By inspection, a solution is **ν** = (−1, −3, 3). Obviously, any multiple of this is also a solution. We repeat the same analysis for the other eigenvalues. Results are indicated in Exhibit 2.7.

Exhibit 2.7: Eigenvalues and eigenvectors of matrix \[2.69\].
The approach we employed in our example is useful for deriving an important result. Consider an arbitrary *n* × *n* matrix ***c***. To find its eigenvalues, we construct the determinant of (***c*** − λ***I***) and set it equal to 0. This results in an *n*th-order polynomial equation. By the fundamental theorem of algebra, it has *n* solutions. We conclude that every *n* × *n* matrix has *n* eigenvalues. Of course, some may be complex. Others may be repeated. In practical applications, eigenvalues are not calculated in this manner. Although setting the determinant of (***c*** − λ***I***) equal to 0 and solving is theoretically useful, there are more efficient algorithms, which are implemented in various software packages. See Strang ([2005](https://www.value-at-risk.net/references/#Strang_2005)).
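As an illustration of such software, the sketch below uses NumPy's `numpy.linalg.eig`. Matrix \[2.69\] is an image not reproduced here, so as a stand-in we build a matrix with the same eigenvalues −1, 2 and 3 via a similarity transform (our construction, not the text's):

```python
import numpy as np

# Build a stand-in matrix with eigenvalues {-1, 2, 3} by similarity:
# c = p diag(-1, 2, 3) p^{-1} for any invertible p.
p = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
c = p @ np.diag([-1.0, 2.0, 3.0]) @ np.linalg.inv(p)

# np.linalg.eig returns the eigenvalues and a matrix whose columns are
# the corresponding eigenvectors (each normalized to unit length).
eigenvalues, eigenvectors = np.linalg.eig(c)
assert np.allclose(np.sort(eigenvalues.real), [-1.0, 2.0, 3.0])

# Each column v satisfies c v = λ v, i.e. equation [2.66]:
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(c @ v, lam * v)
```

Because eigenvectors are determined only up to scalar multiplication, the columns returned by `eig` may differ from a hand-derived solution by a scale factor.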
Eigenvalues have a number of convenient properties. A matrix and its transpose both have the same eigenvalues. If λ is an eigenvalue of a nonsingular matrix, then 1/λ is an eigenvalue of its inverse. The product of the eigenvalues of a matrix equals its determinant.
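These three properties can be checked numerically; the matrix below is our own example, chosen to have distinct real eigenvalues:

```python
import numpy as np

# A small nonsingular example matrix (ours, not from the text);
# its eigenvalues work out to 1, 2 and 4.
c = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

w = np.sort(np.linalg.eigvals(c).real)

# A matrix and its transpose have the same eigenvalues:
assert np.allclose(w, np.sort(np.linalg.eigvals(c.T).real))

# If λ is an eigenvalue of a nonsingular matrix, 1/λ is an
# eigenvalue of its inverse:
w_inv = np.sort(np.linalg.eigvals(np.linalg.inv(c)).real)
assert np.allclose(np.sort(1.0 / w), w_inv)

# The product of the eigenvalues equals the determinant (1 * 2 * 4 = 8):
assert np.isclose(np.prod(w), np.linalg.det(c))
```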
###### 2\.6.2 Intuitive Example
Consider an intuitive example. A sphere of unit radius is positioned at the center of a three-dimensional coordinate system. It is rotating about the *x*3\-axis. The matrix
\[2.73\]
describes a one-eighth (45°) rotation of the sphere. For example, multiplying ***c*** by the vector (1, 0, 0) yields the vector (.7071, .7071, 0), which is rotated 45°. This is depicted in Exhibit 2.8.

Exhibit 2.8: The matrix ***c*** rotates points 45° about the *x*3\-axis. This is illustrated for the point (1, 0, 0), which it transforms into the point (.7071, .7071, 0).
Intuitively, what might we expect to be an eigenvector of the matrix ***c***? Is there a point on the unit sphere that a 45° rotation transforms into a multiple of itself? Of course! Consider the point at the north pole. It is the point (0, 0, 1), and it is transformed into itself. We conclude that an eigenvector of ***c*** is the vector (0, 0, 1). The corresponding eigenvalue is 1. Because it is a 3 × 3 matrix, ***c*** has two other eigenvalues, but they are both complex numbers.
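A sketch of this example, reconstructing the 45° rotation matrix from its description in the text (matrix \[2.73\] itself is an image not shown here, so the explicit entries below are an assumption consistent with ***c***(1, 0, 0) = (.7071, .7071, 0)):

```python
import numpy as np

# Rotation by 45 degrees about the x3-axis.
theta = np.pi / 4
c = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

# It maps (1, 0, 0) to (.7071, .7071, 0), as in Exhibit 2.8:
assert np.allclose(c @ [1.0, 0.0, 0.0], [0.7071, 0.7071, 0.0], atol=1e-4)

w, v = np.linalg.eig(c)

# One eigenvalue is 1, with eigenvector along the north pole (0, 0, 1):
k = np.argmin(np.abs(w - 1.0))
assert np.isclose(w[k], 1.0)
assert np.allclose(np.abs(v[:, k]), [0.0, 0.0, 1.0])

# The other two eigenvalues are the complex pair cos 45 +/- i sin 45:
others = sorted(np.delete(w, k), key=lambda z: z.imag)
assert np.allclose(others, [np.exp(-1j * theta), np.exp(1j * theta)])
```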
###### Exercises
2\.7
Find the eigenvalues and eigenvectors of the matrix
\[2.74\]
[Solution](https://www.glynholton.com/solutions/exercise-solution-2-7/)
2\.8
Prove that the eigenvalues of a diagonal matrix are its diagonal elements.
[Solution](https://www.glynholton.com/solutions/exercise-solution-2-8/)
2\.9
Use one of the stated properties of eigenvalues to prove that a matrix is singular if and only if it has 0 as one of its eigenvalues.
[Solution](https://www.glynholton.com/solutions/exercise-solution-2-9/)
Author [Glyn Holton](https://www.value-at-risk.net/author/var/), Categories [Section](https://www.value-at-risk.net/category/section/) |
| ML Classification | |
| ML Categories | { "/Science": 865, "/Science/Mathematics": 860, "/Science/Mathematics/Other": 748 } |
| ML Page Types | { "/Article": 992, "/Article/Tutorial_or_Guide": 751 } |
| ML Intent Types | { "Informational": 999 } |
| Content Metadata | |
| Language | en-us |
| Author | Glyn Holton |
| Publish Time | 2012-04-02 12:32:01 (14 years ago) |
| Original Publish Time | 2012-04-02 12:32:01 (14 years ago) |
| Republished | No |
| Word Count (Total) | 1,286 |
| Word Count (Content) | 696 |
| Links | |
| External Links | 6 |
| Internal Links | 138 |
| Technical SEO | |
| Meta Nofollow | No |
| Meta Noarchive | No |
| JS Rendered | No |
| Redirect Target | null |
| Performance | |
| Download Time (ms) | 223 |
| TTFB (ms) | 217 |
| Download Size (bytes) | 19,456 |
| Shard | 64 (laksa) |
| Root Hash | 13461805588545726464 |
| Unparsed URL | net,value-at-risk!www,/eigenvalues-and-eigenvectors/ s443 |