ℹ️ Skipped - page is already crawled
| Filter | Status | Condition | Details |
|---|---|---|---|
| HTTP status | PASS | download_http_code = 200 | HTTP 200 |
| Age cutoff | PASS | download_stamp > now() - 6 MONTH | 0.2 months ago |
| History drop | PASS | isNull(history_drop_reason) | No drop reason |
| Spam/ban | PASS | fh_dont_index != 1 AND ml_spam_score = 0 | ml_spam_score=0 |
| Canonical | PASS | meta_canonical IS NULL OR = '' OR = src_unparsed | Not set |
| Property | Value |
|---|---|
| URL | https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue.html |
| Last Crawled | 2026-04-07 12:50:48 (5 days ago) |
| First Indexed | 2021-01-04 06:43:13 (5 years ago) |
| HTTP Status Code | 200 |
| Meta Title | Eigenvalues and Eigenvectors |
| Meta Description | null |
| Meta Canonical | null |
| Boilerpipe Text | (plain-text duplicate of the Markdown field below) |
| Markdown | # Eigenvalues and Eigenvectors
*Definition.* Let $A \in M(n, F)$. The *characteristic polynomial* of A is
$$|A - x I|.$$
(I is the $n \times n$ identity matrix.)
A root of the characteristic polynomial is called an *eigenvalue* (or a *characteristic value*) of A.
While the entries of A come from the field F, it makes sense to ask for the roots of $|A - x I|$ in an extension field E of F. For example, if A is a matrix with real entries, you can ask for the eigenvalues of A in $\mathbb{R}$ or in $\mathbb{C}$.
***
*Example.* Consider the matrix
![\$\$A = \\left\[\\matrix{0 & 1 \\cr -1 & 0 \\cr}\\right\].\$\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue7.png)
The characteristic polynomial is $x^2 + 1$. Hence, A has no eigenvalues in $\mathbb{R}$. Its eigenvalues in $\mathbb{C}$ are $\pm i$.
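As a quick numerical check (the helper `eig2x2` is my own, not from the notes), the eigenvalues of a $2 \times 2$ matrix are the roots of $x^2 - (\mathrm{tr}\,A)\, x + \det A$, which the quadratic formula handles even when they are complex:

```python
import cmath

def eig2x2(a, b, c, d):
    """Roots of the characteristic polynomial x^2 - (a + d) x + (ad - bc)
    of the 2x2 matrix [[a, b], [c, d]]."""
    tr = a + d
    det = a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)  # complex sqrt, so complex roots are fine
    return (tr + disc) / 2, (tr - disc) / 2

# The matrix [[0, 1], [-1, 0]] above: no real eigenvalues, but +/- i over C.
roots = eig2x2(0, 1, -1, 0)
```

Using `cmath` rather than `math` is what lets the computation leave $\mathbb{R}$ for $\mathbb{C}$, mirroring the extension-field remark above.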
***
*Example.* Let
![\$\$A = \\left\[\\matrix{2 & -3 & 1 \\cr 1 & -2 & 1 \\cr 1 & -3 & 2 \\cr}\\right\] \\in M(3, \\real).\$\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue12.png)
You can use row and column operations to simplify the computation of $|A - x I|$:

(Adding a multiple of a row or a column to a row or column, respectively, does not change the determinant.) Now expand by cofactors of the second row:

The eigenvalues are $x = 0$ and $x = 1$ (double).
***
*Example.* A matrix $A = (a_{ij})$ is *upper triangular* if $a_{ij} = 0$ for $i > j$. Thus, the entries below the main diagonal are zero. (*Lower triangular* matrices are defined in an analogous way.)
The eigenvalues of a triangular matrix
![\$\$A = \\left\[\\matrix{\\lambda\_1 & \* & \\cdots & \* \\cr 0 & \\lambda\_2 & \\cdots & \* \\cr \\vdots & \\vdots & \\ddots & \\vdots \\cr 0 & 0 & \\cdots & \\lambda\_n \\cr}\\right\]\$\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue21.png)
are just the diagonal entries $\lambda_1, \lambda_2, \ldots, \lambda_n$. (You can prove this by induction on n.)
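This fact is easy to confirm numerically (a sketch with `numpy`; the matrix entries below are arbitrary choices of mine):

```python
import numpy as np

# Upper triangular: the eigenvalues should be exactly the diagonal entries 3, 7, -4.
A = np.array([[3.0, 5.0, -2.0],
              [0.0, 7.0,  1.0],
              [0.0, 0.0, -4.0]])
vals = sorted(np.linalg.eigvals(A).real)
```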
***
*Remark.* To find the eigenvalues of a matrix, you need to find the roots of the characteristic polynomial.
There are formulas for finding the roots of polynomials of degree at most 4. (For example, the quadratic formula gives the roots of a quadratic equation $a x^2 + b x + c = 0$.) However, Abel showed in the early part of the 19th century that the general quintic is not solvable by radicals. (For example, there are specific quintics whose roots cannot be expressed by radicals over $\mathbb{Q}$.) In the real world, the computation of eigenvalues often requires numerical approximation.
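For instance, `numpy.linalg.eigvals` approximates eigenvalues numerically (it calls into LAPACK); the $5 \times 5$ matrix below is an arbitrary example of mine, and the sanity checks follow from the characteristic polynomial:

```python
import numpy as np

# A 5x5 matrix: its characteristic polynomial is a quintic, so there is no
# general radical formula; numpy approximates the roots numerically instead.
A = np.array([[2., 1., 0., 0., 1.],
              [1., 3., 1., 0., 0.],
              [0., 1., 2., 1., 0.],
              [0., 0., 1., 4., 1.],
              [1., 0., 0., 1., 2.]])
vals = np.linalg.eigvals(A)
# The eigenvalues sum to the trace and multiply to the determinant.
ok_sum = np.isclose(vals.sum(), np.trace(A))
ok_prod = np.isclose(vals.prod(), np.linalg.det(A))
```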
If $\lambda$ is an eigenvalue of A, then $|A - \lambda I| = 0$. Hence, the $n \times n$ matrix $A - \lambda I$ is not invertible. It follows that $A - \lambda I$ must row reduce to a row reduced echelon matrix R with fewer than n leading coefficients. Thus, the system $(A - \lambda I) v = 0$ has at least one free variable, and hence has more than one solution. In particular, $(A - \lambda I) v = 0$ --- and therefore $A v = \lambda v$ --- has at least *one nonzero solution* $v$.
*Definition.* Let $A \in M(n, F)$, and let $\lambda$ be an eigenvalue of A. An *eigenvector* (or a *characteristic vector*) of A for $\lambda$ is a *nonzero* vector $v$ such that
$$A v = \lambda v.$$
Equivalently,
$$(A - \lambda I) v = 0.$$
***
*Example.* Let
![\$\$A = \\left\[\\matrix{2 & -3 & 1 \\cr 1 & -2 & 1 \\cr 1 & -3 & 2 \\cr}\\right\] \\in M(3, \\real).\$\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue41.png)
The eigenvalues are $x = 0$ and $x = 1$ (double).
First, I'll find an eigenvector for $x = 0$.
![\$\$A - 0\\cdot I = \\left\[\\matrix{2 & -3 & 1 \\cr 1 & -2 & 1 \\cr 1 & -3 & 2 \\cr}\\right\].\$\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue45.png)
I want $(a, b, c)$ such that
![\$\$\\left\[\\matrix{2 & -3 & 1 \\cr 1 & -2 & 1 \\cr 1 & -3 & 2 \\cr}\\right\] \\left\[\\matrix{a \\cr b \\cr c \\cr}\\right\] = \\left\[\\matrix{0 \\cr 0 \\cr 0 \\cr}\\right\].\$\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue47.png)
You can solve the system by row reduction. Since the column of zeros on the right will never change, it's enough to row reduce the $3 \times 3$ coefficient matrix on the left.
![\$\$\\left\[\\matrix{2 & -3 & 1 \\cr 1 & -2 & 1 \\cr 1 & -3 & 2 \\cr}\\right\] \\quad \\to \\quad \\left\[\\matrix{1 & 0 & -1 \\cr 0 & 1 & -1 \\cr 0 & 0 & 0 \\cr}\\right\]\$\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue49.png)
This says
$$a - c = 0, \quad b - c = 0.$$
Therefore, $a = c$, $b = c$, and the eigenvector is
![\$\$\\left\[\\matrix{a \\cr b \\cr c \\cr}\\right\] = \\left\[\\matrix{c \\cr c \\cr c \\cr}\\right\] = c \\left\[\\matrix{1 \\cr 1 \\cr 1 \\cr}\\right\].\$\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue53.png)
Notice that this is the usual algorithm for finding a basis for the solution space of a homogeneous system (or the null space of a matrix).
I can set c to any nonzero number. For example, $c = 1$ gives the eigenvector $(1, 1, 1)$. Notice that there are infinitely many eigenvectors for this eigenvalue, but all of these eigenvectors are multiples of $(1, 1, 1)$.
Likewise, for $x = 1$,
![\$\$A - I = \\left\[\\matrix{1 & -3 & 1 \\cr 1 & -3 & 1 \\cr 1 & -3 & 1 \\cr}\\right\] \\quad \\to \\quad \\left\[\\matrix{1 & -3 & 1 \\cr 0 & 0 & 0 \\cr 0 & 0 & 0 \\cr}\\right\]\$\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue57.png)
Hence, the eigenvectors are
![\$\$\\left\[\\matrix{a \\cr b \\cr c \\cr}\\right\] = \\left\[\\matrix{3b - c \\cr b \\cr c \\cr}\\right\] = b \\left\[\\matrix{3 \\cr 1 \\cr 0 \\cr}\\right\] + c \\left\[\\matrix{-1 \\cr 0 \\cr 1 \\cr}\\right\].\$\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue58.png)
Taking $b = 1$, $c = 0$ gives $(3, 1, 0)$; taking $b = 0$, $c = 1$ gives $(-1, 0, 1)$. This eigenvalue gives rise to two independent eigenvectors.
Note, however, that a double root of the characteristic polynomial *need not* give rise to two independent eigenvectors.
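The eigenvector computations above are easy to verify numerically (a sketch with `numpy`, using the matrix and the vectors found in this example):

```python
import numpy as np

A = np.array([[2., -3., 1.],
              [1., -2., 1.],
              [1., -3., 2.]])
# Eigenvalue 0: A v = 0 * v.
v0 = np.array([1., 1., 1.])
# Eigenvalue 1 (double): two independent eigenvectors, A v = v.
v1 = np.array([3., 1., 0.])
v2 = np.array([-1., 0., 1.])
checks = (np.allclose(A @ v0, 0),
          np.allclose(A @ v1, v1),
          np.allclose(A @ v2, v2))
```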
***
*Definition.* Matrices $A, B \in M(n, F)$ are *similar* if there is an invertible matrix $P$ such that $B = P^{-1} A P$.
*Lemma.* Similar matrices have the same characteristic polynomial (and hence the same eigenvalues).
*Proof.* Suppose $B = P^{-1} A P$. Then
$$B - x I = P^{-1} A P - x P^{-1} P = P^{-1} (A - x I) P.$$
Therefore, the matrices $B - x I$ and $A - x I$ are similar. Hence, they have the same determinant. The determinant of $A - x I$ is the characteristic polynomial of A and the determinant of $B - x I$ is the characteristic polynomial of B.
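A numerical illustration of the lemma (the invertible matrix `P` below is an arbitrary choice of mine; `np.poly` applied to a square matrix returns the coefficients of its characteristic polynomial):

```python
import numpy as np

A = np.array([[2., -3., 1.],
              [1., -2., 1.],
              [1., -3., 2.]])
P = np.array([[1., 2., 0.],
              [0., 1., 3.],
              [1., 0., 1.]])            # invertible (det = 7)
B = np.linalg.inv(P) @ A @ P            # B is similar to A
# Similar matrices: identical characteristic polynomial coefficients.
same_charpoly = np.allclose(np.poly(A), np.poly(B))
```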
*Definition.* Let $T: V \to V$ be a linear transformation, where V is a finite-dimensional vector space. The *characteristic polynomial* of T is the characteristic polynomial of a matrix of T relative to a basis ${\cal B}$ of V.
The preceding lemma shows that this is independent of the choice of basis. For if ${\cal B}$ and ${\cal C}$ are bases for V, then
![\$\$\[T\]\_{{\\cal C},{\\cal C}} = \[{\\cal B} \\to {\\cal C}\]\[T\]\_{{\\cal B},{\\cal B}} \[{\\cal C} \\to {\\cal B}\] = \[{\\cal B} \\to {\\cal C}\]\[T\]\_{{\\cal B},{\\cal B}} \[{\\cal B} \\to {\\cal C}\]^{-1}.\$\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue78.png)
Therefore, ![\$\[T\]\_{{\\cal C},{\\cal C}}\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue79.png) and ![\$\[T\]\_{{\\cal B},{\\cal B}}\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue80.png) are similar, so they have the same characteristic polynomial.
This shows that it makes sense to speak of the *eigenvalues* and *eigenvectors* of a *linear transformation* T.
*Definition.* A matrix $A \in M(n, F)$ is *diagonalizable* if A has n independent eigenvectors --- that is, if there is a basis for $F^n$ consisting of eigenvectors of A.
*Proposition.* $A \in M(n, F)$ is diagonalizable if and only if it is similar to a diagonal matrix.
*Proof.* Let $v_1, v_2, \ldots, v_n$ be n independent eigenvectors for A corresponding to eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$. Let T be the linear transformation corresponding to A:
$$T(v) = A v.$$
Since $T(v_i) = \lambda_i v_i$ for all i, the matrix of T relative to the basis ${\cal B} = \{v_1, v_2, \ldots, v_n\}$ is
![\$\$\[T\]\_{{\\cal B}, {\\cal B}} = \\left\[\\matrix{\\lambda\_1 & 0 & \\cdots & 0 \\cr 0 & \\lambda\_2 & \\cdots & 0 \\cr \\vdots & \\vdots & & \\vdots \\cr 0 & 0 & \\cdots & \\lambda\_n \\cr}\\right\].\$\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue89.png)
Now A is the matrix of T relative to the standard basis, so
![\$\$\[T\]\_{{\\cal B}, {\\cal B}} = \[{\\rm std} \\to {\\cal B}\]\\cdot A\\cdot \[{\\cal B} \\to {\\rm std}\].\$\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue90.png)
The matrix ![\$\[{\\cal B} \\to {\\rm std}\]\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue91.png) is obtained by building a matrix using the $v_i$ as the columns. Then ![\$\[{\\rm std} \\to {\\cal B}\] = \[{\\cal B} \\to {\\rm std}\]^{-1}\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue93.png).
Hence,
![\$\$\\left\[\\matrix{\\lambda\_1 & 0 & \\cdots & 0 \\cr 0 & \\lambda\_2 & \\cdots & 0 \\cr \\vdots & \\vdots & & \\vdots \\cr 0 & 0 & \\cdots & \\lambda\_n \\cr}\\right\] = \[{\\cal B} \\to {\\rm std}\]^{-1}\\cdot A \\cdot \[{\\cal B} \\to {\\rm std}\].\$\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue94.png)
Conversely, if D is diagonal, P is invertible, and $D = P^{-1} A P$, the columns $c_1, c_2, \ldots, c_n$ of P are independent eigenvectors for A. In fact, if
![\$\$D = \\left\[\\matrix{\\lambda\_1 & 0 & \\cdots & 0 \\cr 0 & \\lambda\_2 & \\cdots & 0 \\cr \\vdots & \\vdots & & \\vdots \\cr 0 & 0 & \\cdots & \\lambda\_n \\cr}\\right\],\$\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue97.png)
then $P D = A P$ says
![\$\$\\left\[\\matrix{\\uparrow & \\uparrow & & \\uparrow \\cr c\_1 & c\_2 & \\cdots & c\_n \\cr \\downarrow & \\downarrow & & \\downarrow \\cr}\\right\] \\left\[\\matrix{\\lambda\_1 & 0 & \\cdots & 0 \\cr 0 & \\lambda\_2 & \\cdots & 0 \\cr \\vdots & \\vdots & & \\vdots \\cr 0 & 0 & \\cdots & \\lambda\_n \\cr}\\right\] = A \\left\[\\matrix{\\uparrow & \\uparrow & & \\uparrow \\cr c\_1 & c\_2 & \\cdots & c\_n \\cr \\downarrow & \\downarrow & & \\downarrow \\cr}\\right\].\$\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue99.png)
Hence, $A c_i = \lambda_i c_i$ for each i.
***
*Example.* Consider the matrix
![\$\$A = \\left\[\\matrix{2 & -3 & 1 \\cr 1 & -2 & 1 \\cr 1 & -3 & 2 \\cr}\\right\] \\in M(3, \\real).\$\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue101.png)
In an earlier example, I showed that A has 3 independent eigenvectors $(1, 1, 1)$, $(3, 1, 0)$, $(-1, 0, 1)$. Therefore, A is diagonalizable.
To find a diagonalizing matrix, build a matrix using the eigenvectors as the columns:
![\$\$P = \\left\[\\matrix{ 1 & 3 & 1 \\cr 1 & 1 & 0 \\cr 1 & 0 & 1 \\cr}\\right\].\$\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue105.png)
You can check by finding $P^{-1}$ and doing the multiplication that you get a diagonal matrix:
![\$\$P^{-1} A P = \\left\[\\matrix{ 1 & 3 & 1 \\cr 1 & 1 & 0 \\cr 1 & 0 & 1 \\cr}\\right\]^{-1} \\left\[\\matrix{ 2 & -3 & 1 \\cr 1 & -2 & 1 \\cr 1 & -3 & 2 \\cr}\\right\] \\left\[\\matrix{ 1 & 3 & 1 \\cr 1 & 1 & 0 \\cr 1 & 0 & 1 \\cr}\\right\] =\$\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue107.png)
![\$\$\\left\[\\matrix{ -1 & 3 & -1 \\cr 1 & -2 & 1 \\cr 1 & -3 & 2 \\cr}\\right\] \\left\[\\matrix{ 2 & -3 & 1 \\cr 1 & -2 & 1 \\cr 1 & -3 & 2 \\cr}\\right\] \\left\[\\matrix{ 1 & 3 & 1 \\cr 1 & 1 & 0 \\cr 1 & 0 & 1 \\cr}\\right\] = \\left\[\\matrix{ 0 & 0 & 0 \\cr 0 & 1 & 0 \\cr 0 & 0 & 1 \\cr}\\right\].\$\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue108.png)
Of course, I knew this was the answer! I should get a diagonal matrix with the eigenvalues on the main diagonal, *in the same order that I put the corresponding eigenvectors into* P.
You can put the eigenvectors in as the columns of P in any order: a different order will give a diagonal matrix with the eigenvalues on the main diagonal in a different order.
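This check is easy to script (a sketch with `numpy`, using the eigenvectors $(1,1,1)$, $(3,1,0)$, $(-1,0,1)$ from the earlier example as the columns of P):

```python
import numpy as np

A = np.array([[2., -3., 1.],
              [1., -2., 1.],
              [1., -3., 2.]])
# Columns of P are the eigenvectors, in the order 0, 1, 1.
P = np.array([[1., 3., -1.],
              [1., 1.,  0.],
              [1., 0.,  1.]])
D = np.linalg.inv(P) @ A @ P
# Eigenvalues appear on the diagonal in the order the eigenvectors were placed.
diag_ok = np.allclose(D, np.diag([0., 1., 1.]))
```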
***
*Example.* Let
![\$\$A = \\left\[\\matrix{ 4 & -4 & -5 \\cr 1 & 0 & -3 \\cr 0 & 0 & 2 \\cr}\\right\] \\in M(3, \\real).\$\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue109.png)
Find the eigenvalues and, for each eigenvalue, a complete set of eigenvectors. If A is diagonalizable, find a matrix P such that $P^{-1} A P$ is a diagonal matrix.
$$|A - x I| = (2 - x)\left[(4 - x)(-x) + 4\right] = (2 - x)(x - 2)^2 = -(x - 2)^3.$$
The eigenvalue is $x = 2$ (a triple root).
Now
![\$\$A - 2I = \\left\[\\matrix{ 2 & -4 & -5 \\cr 1 & -2 & -3 \\cr 0 & 0 & 0 \\cr}\\right\] \\to \\left\[\\matrix{ 1 & -2 & 0 \\cr 0 & 0 & 1 \\cr 0 & 0 & 0 \\cr}\\right\].\$\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue113.png)
Thinking of this as the coefficient matrix of a homogeneous linear system with variables a, b, and c, I obtain the equations
$$a - 2 b = 0, \quad c = 0.$$
Then $a = 2 b$ and $c = 0$, so
![\$\$\\left\[\\matrix{a \\cr b \\cr c \\cr}\\right\] = \\left\[\\matrix{2b \\cr b \\cr 0 \\cr}\\right\] = b\\cdot \\left\[\\matrix{2 \\cr 1 \\cr 0 \\cr}\\right\].\$\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue116.png)
Taking $b = 1$, $(2, 1, 0)$ is an eigenvector. Since there's only one independent eigenvector --- as opposed to 3 --- the matrix A is not diagonalizable.
***
*Example.* The following matrix has eigenvalue $x = 1$ (a triple root):
![\$\$B = \\left\[\\matrix{-3 & 3 & -5 \\cr 12 & -7 & 14 \\cr 10 & -7 & 13 \\cr}\\right\] \\in M(3, \\real).\$\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue119.png)
Now
![\$\$B - 1\\cdot I = \\left\[\\matrix{ -4 & 3 & -5 \\cr 12 & -8 & 14 \\cr 10 & -7 & 12 \\cr}\\right\] \\to \\left\[\\matrix{ 1 & 0 & \\dfrac{1}{2} \\cr 0 & 1 & -1 \\cr 0 & 0 & 0 \\cr}\\right\]\$\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue120.png)
Thinking of this as the coefficient matrix of a homogeneous linear system with variables a, b, and c, I obtain the equations
$$a + \dfrac{1}{2}\, c = 0, \quad b - c = 0.$$
Set $c = 2$. This gives $a = -1$ and $b = 2$. Thus, the only eigenvectors are the nonzero multiples of $(-1, 2, 2)$. Since there is only one independent eigenvector, B is not diagonalizable.
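A numerical check of this example (sketch with `numpy`): the rank of $B - I$ is 2, so its null space, the eigenspace for the triple eigenvalue 1, is only one-dimensional:

```python
import numpy as np

B = np.array([[-3.,  3., -5.],
              [12., -7., 14.],
              [10., -7., 13.]])
# Rank 2 means a 1-dimensional eigenspace for the triple eigenvalue 1.
rank = np.linalg.matrix_rank(B - np.eye(3))
v = np.array([-1., 2., 2.])
is_eigvec = np.allclose(B @ v, v)     # B v = 1 * v
```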
***
*Proposition.* Let $T: V \to V$ be a linear transformation on an n-dimensional vector space. If $v_1, \ldots, v_m$ are eigenvectors corresponding to the *distinct* eigenvalues $\lambda_1, \ldots, \lambda_m$, then $\{v_1, \ldots, v_m\}$ is independent.
*Proof.* Suppose to the contrary that $\{v_1, \ldots, v_m\}$ is dependent. Let p be the smallest number such that the subset $\{v_1, \ldots, v_p\}$ is dependent. Then there is a nontrivial linear relation
$$a_1 v_1 + a_2 v_2 + \cdots + a_p v_p = 0.$$
Note that $a_p \ne 0$, else
$$a_1 v_1 + a_2 v_2 + \cdots + a_{p-1} v_{p-1} = 0$$
would be a nontrivial relation among $v_1, \ldots, v_{p-1}$. This would contradict minimality of p.
Hence, I can rewrite the equation above in the form
$$v_p = b_1 v_1 + b_2 v_2 + \cdots + b_{p-1} v_{p-1}, \quad\hbox{where}\quad b_i = -\dfrac{a_i}{a_p}.$$
Apply T to both sides, and use $T(v_i) = \lambda_i v_i$:
$$\lambda_p v_p = b_1 \lambda_1 v_1 + b_2 \lambda_2 v_2 + \cdots + b_{p-1} \lambda_{p-1} v_{p-1}.$$
On the other hand, multiplying the earlier equation by $\lambda_p$ gives
$$\lambda_p v_p = b_1 \lambda_p v_1 + b_2 \lambda_p v_2 + \cdots + b_{p-1} \lambda_p v_{p-1}.$$
Subtract the last equation from the one before it to obtain
$$0 = b_1 (\lambda_1 - \lambda_p) v_1 + b_2 (\lambda_2 - \lambda_p) v_2 + \cdots + b_{p-1} (\lambda_{p-1} - \lambda_p) v_{p-1}.$$
Since the eigenvalues are distinct, the terms $\lambda_i - \lambda_p$ are nonzero. Hence, this is a linear relation in $v_1, \ldots, v_{p-1}$ which contradicts minimality of p --- unless $b_1 = b_2 = \cdots = b_{p-1} = 0$.
In this case, $v_p = 0$, which contradicts the fact that $v_p$ is an eigenvector. Therefore, the original set must in fact be independent.
***
*Example.* Let A be an $n \times n$ *real* matrix. The complex eigenvalues of A always come in conjugate pairs $\lambda$ and $\bar{\lambda}$.
Moreover, if v is an eigenvector for $\lambda$, then the conjugate $\bar{v}$ is an eigenvector for $\bar{\lambda}$.
For suppose $A v = \lambda v$. Taking complex conjugates, I get
$$\bar{A} \bar{v} = \bar{\lambda} \bar{v}, \quad\hbox{so}\quad A \bar{v} = \bar{\lambda} \bar{v}.$$
($\bar{A} = A$ because A is a real matrix.)
In practical terms, this means that once you've found an eigenvector for one complex eigenvalue, you can get an eigenvector for the conjugate eigenvalue by taking the conjugate of the eigenvector. You don't need to do a separate eigenvector computation.
For example, suppose
![\$\$A = \\left\[\\matrix{1 & -1 \\cr 2 & 3 \\cr}\\right\] \\in M(2, \\real).\$\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue154.png)
The characteristic polynomial is $x^2 - 4 x + 5$. The eigenvalues are $x = 2 \pm i$.
Find an eigenvector for $x = 2 + i$:
![\$\$A - (2 + i)I = \\left\[\\matrix{-1 - i & -1 \\cr 2 & 2 - i \\cr}\\right\] \\to \\left\[\\matrix{-1 - i & -1 \\cr 0 & 0 \\cr}\\right\]\$\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue158.png)
I knew that the second row must be a multiple of the first row, because I know the system has nontrivial solutions. So I don't have to work out *what* multiple it is; I can just zero out the second row on general principles.
This *only* works for $2 \times 2$ matrices, and only for those which arise as $A - \lambda I$ in an eigenvector computation.
Next, there's no point in going all the way to row reduced echelon form. I just need some nonzero vector $(a, b)$ such that
![\$\$\\left\[\\matrix{-1 - i & -1 \\cr 0 & 0 \\cr}\\right\] \\left\[\\matrix{a \\cr b \\cr}\\right\] = \\left\[\\matrix{0 \\cr 0 \\cr}\\right\].\$\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue163.png)
That is, I want
$$(-1 - i) a - b = 0.$$
I can find an a and b that work by swapping $-1 - i$ and $-1$, and negating one of them. For example, take $a = 1$ ($-1$ negated) and $b = -1 - i$. This checks:
$$(-1 - i)(1) + (-1)(-1 - i) = (-1 - i) + (1 + i) = 0.$$
So $(1, -1 - i)$ is an eigenvector for $x = 2 + i$.
By the discussion at the start of the example, I don't need to do a computation for $x = 2 - i$. Just conjugate the previous eigenvector: $(1, -1 + i)$ must be an eigenvector for $x = 2 - i$.
Since there are 2 independent eigenvectors, you can use them to construct a diagonalizing matrix for A:
![\$\$\\left\[\\matrix{1 & 1 \\cr -1 - i & -1 + i \\cr}\\right\]^{-1} \\left\[\\matrix{1 & -1 \\cr 2 & 3 \\cr}\\right\] \\left\[\\matrix{1 & 1 \\cr -1 - i & -1 + i \\cr}\\right\] = \\left\[\\matrix{2 + i & 0 \\cr 0 & 2 - i \\cr}\\right\].\$\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue174.png)
Notice that you get a diagonal matrix with the eigenvalues on the main diagonal, in the same order in which you listed the eigenvectors.
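The conjugate-pair shortcut can be confirmed with `numpy` (an illustration, not part of the notes), using the same matrix:

```python
import numpy as np

A = np.array([[1., -1.],
              [2.,  3.]])
vals, vecs = np.linalg.eig(A)        # eigenvalues 2 + i and 2 - i
lam, v = vals[0], vecs[:, 0]
# Since A is real, conjugating an eigenpair gives another eigenpair:
conj_ok = np.allclose(A @ v.conj(), lam.conjugate() * v.conj())
pair_ok = np.allclose(sorted(vals, key=lambda z: z.imag), [2 - 1j, 2 + 1j])
```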
***
*Example.* For the following matrix, find the eigenvalues over $\mathbb{C}$, and for each eigenvalue, a complete set of independent eigenvectors.
Find a diagonalizing matrix and the corresponding diagonal matrix.
![\$\$A = \\left\[\\matrix{ -2 & 0 & 5 \\cr 0 & 2 & 0 \\cr -5 & 0 & 4 \\cr}\\right\].\$\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue176.png)
The characteristic polynomial is
![\$\$\|A - x I\| = \\left\|\\matrix{ -2 - x & 0 & 5 \\cr 0 & 2 - x & 0 \\cr -5 & 0 & 4 - x \\cr}\\right\| = (2 - x) \\left\|\\matrix{ -2 - x & 5 \\cr -5 & 4 - x \\cr}\\right\| = (2 - x)\[(x + 2)(x - 4) + 25\] =\$\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue177.png)
$$(2 - x)(x^2 - 2 x + 17).$$
Now solving $x^2 - 2 x + 17 = 0$ by the quadratic formula gives
$$x = \dfrac{2 \pm \sqrt{4 - 68}}{2} = \dfrac{2 \pm 8 i}{2} = 1 \pm 4 i.$$
The eigenvalues are $x = 2$ and $x = 1 \pm 4 i$.
For $x = 2$, I have
![\$\$A - 2 I = \\left\[\\matrix{ -4 & 0 & 5 \\cr 0 & 0 & 0 \\cr -5 & 0 & 2 \\cr}\\right\] \\quad \\to \\quad \\left\[\\matrix{ 1 & 0 & 0 \\cr 0 & 0 & 1 \\cr 0 & 0 & 0 \\cr}\\right\].\$\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue183.png)
With variables a, b, and c, the corresponding homogeneous system is $a = 0$ and $c = 0$. This gives the solution vector
![\$\$\\left\[\\matrix{a \\cr b \\cr c \\cr}\\right\] = \\left\[\\matrix{0 \\cr b \\cr 0 \\cr}\\right\] = b \\cdot \\left\[\\matrix{0 \\cr 1 \\cr 0 \\cr}\\right\].\$\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue186.png)
Taking $b = 1$, I obtain the eigenvector $(0, 1, 0)$.
For $x = 1 + 4 i$, I have
![\$\$\\left\[\\matrix{ -3 - 4 i & 0 & 5 \\cr 0 & 1 - 4 i & 0 \\cr -5 & 0 & 3 - 4 i \\cr}\\right\] \\quad \\to \\quad \\left\[\\matrix{ -5 & 0 & 3 - 4 i \\cr 0 & 1 & 0 \\cr -5 & 0 & 3 - 4 i \\cr}\\right\]\$\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue190.png)
I multiplied the first row by $3 - 4 i$, then divided it by 5. This made it the same as the third row.
I divided the second row by $1 - 4 i$.
(I knew the first and third rows had to be multiples, since they're clearly independent of the second row. Thus, if they weren't multiples, the three rows would be independent, the matrix $A - (1 + 4 i) I$ would be invertible, and there would be no eigenvectors [which must be nonzero].)
Now I can wipe out the third row by subtracting the first:
![\$\$\\left\[\\matrix{ -5 & 0 & 3 - 4 i \\cr 0 & 1 & 0 \\cr -5 & 0 & 3 - 4 i \\cr}\\right\] \\quad \\to \\quad \\left\[\\matrix{ -5 & 0 & 3 - 4 i \\cr 0 & 1 & 0 \\cr 0 & 0 & 0 \\cr}\\right\].\$\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue193.png)
With variables a, b, and c, the corresponding homogeneous system is
$$-5 a + (3 - 4 i) c = 0, \quad b = 0.$$
There will only be one parameter (c), so there will only be one independent eigenvector. To get one, switch the "-5" and "$3 - 4 i$" and negate the "-5" to get "5". This gives $a = 3 - 4 i$, $b = 0$, and $c = 5$. You can see that these values for a and c work:
$$-5 (3 - 4 i) + (3 - 4 i)(5) = 0.$$
Thus, my eigenvector is $(3 - 4 i, 0, 5)$.
Hence, an eigenvector for $x = 1 - 4 i$ is the conjugate $(3 + 4 i, 0, 5)$.
A diagonalizing matrix is given by
![\$\$P = \\left\[\\matrix{ 0 & 3 - 4 i & 3 + 4 i \\cr 1 & 0 & 0 \\cr 0 & 5 & 5 \\cr}\\right\].\$\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue203.png)
With this diagonalizing matrix, I have
![\$\$P^{-1} A P = \\left\[\\matrix{ 2 & 0 & 0 \\cr 0 & 1 + 4 i & 0 \\cr 0 & 0 & 1 - 4 i \\cr}\\right\].\\quad\\halmos\$\$](https://sites.millersville.edu/bikenaga/linear-algebra/eigenvalue/eigenvalue204.png)
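As a final check (a sketch with `numpy`, using the matrix P and the eigenvalues found above):

```python
import numpy as np

A = np.array([[-2., 0., 5.],
              [ 0., 2., 0.],
              [-5., 0., 4.]])
# Columns: eigenvectors for 2, 1 + 4i, 1 - 4i, in that order.
P = np.array([[0., 3 - 4j, 3 + 4j],
              [1., 0.,     0.    ],
              [0., 5.,     5.    ]])
D = np.linalg.inv(P) @ A @ P
diag_ok = np.allclose(D, np.diag([2., 1 + 4j, 1 - 4j]))
```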
***
[Contact information](https://sites.millersville.edu/bikenaga/contact.html)
[Bruce Ikenaga's Home Page](https://sites.millersville.edu/bikenaga/index.html)
Copyright 2011 by Bruce Ikenaga |
| Readable Markdown | null |
| Shard | 130 (laksa) |
| Root Hash | 13885530764537021130 |
| Unparsed URL | edu,millersville!sites,/bikenaga/linear-algebra/eigenvalue/eigenvalue.html s443 |