ℹ️ Skipped - page is already crawled
| Filter | Status | Condition | Details |
|---|---|---|---|
| HTTP status | PASS | download_http_code = 200 | HTTP 200 |
| Age cutoff | PASS | download_stamp > now() - 6 MONTH | 0 months ago |
| History drop | PASS | isNull(history_drop_reason) | No drop reason |
| Spam/ban | PASS | fh_dont_index != 1 AND ml_spam_score = 0 | ml_spam_score=0 |
| Canonical | PASS | meta_canonical IS NULL OR = '' OR = src_unparsed | Not set |

| Property | Value |
|---|---|
| URL | https://setosa.io/ev/eigenvectors-and-eigenvalues/ |
| Last Crawled | 2026-04-14 16:50:09 (15 hours ago) |
| First Indexed | 2020-03-09 23:39:38 (6 years ago) |
| HTTP Status Code | 200 |
| Meta Title | Eigenvectors and Eigenvalues explained visually |
| Meta Description | null |
| Meta Canonical | null |
| Boilerpipe Text | Eigenvectors and Eigenvalues: Explained Visually
By [Victor Powell](http://twitter.com/vicapow) and [Lewis Lehe](http://twitter.com/lewislehe)
Eigenvalues/vectors are instrumental to understanding electrical circuits, mechanical systems, ecology and even Google's [PageRank](http://www.rose-hulman.edu/~bryan/googleFinalVersionFixed.pdf) algorithm. Let's see if visualization can make these ideas more intuitive.
To begin, let v be a vector (shown as a point) and A be a matrix with columns a₁ and a₂ (shown as arrows). If we multiply v by A, then A sends v to a new vector Av.
[Interactive figure: v, a₁, a₂ and Av in the plane, with A = (1.00 0.50; 0.50 1.00), v = (2.00, 3.00), and Av = (3.50, 4.00).]
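The multiplication shown in the demo can be checked numerically. A minimal NumPy sketch (an illustration, not part of the original interactive demo), using the values from the figure:

```python
import numpy as np

# Columns a1 = (1.00, 0.50) and a2 = (0.50, 1.00) from the demo above.
A = np.array([[1.0, 0.5],
              [0.5, 1.0]])
v = np.array([2.0, 3.0])

# Multiplying v by A sends it to the new vector Av.
Av = A @ v
print(Av)  # [3.5 4. ]
```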
If you can draw a line through the three points (0, 0), v and Av, then Av is just v multiplied by a number λ; that is, Av = λv. In this case, we call λ an **eigenvalue** and v an **eigenvector**. For example, here (1, 2) is an eigenvector and 5 an eigenvalue.
    Av = | 1 2 | ⋅ | 1 | = 5 ⋅ | 1 | = λv
         | 8 1 |   | 2 |       | 2 |
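This eigenpair, A with rows (1, 2) and (8, 1) acting on (1, 2), can be verified directly. A short NumPy sketch (illustrative, not from the article):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [8.0, 1.0]])
v = np.array([1.0, 2.0])

# v is an eigenvector: A v is just v scaled by the eigenvalue 5.
print(A @ v)                      # [ 5. 10.]
print(np.allclose(A @ v, 5 * v))  # True

# np.linalg.eig finds both eigenvalues of A (here 5 and -3).
eigenvalues, _ = np.linalg.eig(A)
print(np.allclose(np.sort(eigenvalues), [-3.0, 5.0]))  # True
```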
Below, change the columns of A and drag v to be an eigenvector. Note three facts: First, every point on the same line as an eigenvector is an eigenvector. Those lines are **eigenspaces**, and each has an associated eigenvalue. Second, if you place v on an eigenspace (either s₁ or s₂) with associated eigenvalue λ < 1, then Av is closer to (0, 0) than v; but when λ > 1, it's farther. Third, both eigenspaces depend on both columns of A: it is not as though a₁ only affects s₁.
[Interactive figure: v, Av and the eigenspaces s₁ and s₂ of A, with λ₁ = 1.5 and λ₂ = 0.5.]
### What are eigenvalues/vectors good for?
If you keep multiplying v by A, you get a sequence v, Av, A²v, etc. Eigenspaces attract that sequence and eigenvalues tell you whether it ends up at (0, 0) or far away. Therefore, eigenvectors/values tell us about systems that evolve step-by-step.
[Interactive figure: the sequence v, Av, A²v, … drawn toward eigenspace s₁, with λ₁ = 1.1 and λ₂ = 0.5.]
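The attraction is easy to see numerically. In this hypothetical NumPy sketch we build a matrix with eigenvalues 1.1 and 0.5 (as in the figure) along eigenvectors we choose ourselves, then watch repeated multiplication pull an arbitrary starting vector onto the dominant eigenspace:

```python
import numpy as np

# Assumed eigenvectors (1, 1) and (1, -1); A = P diag(1.1, 0.5) P^-1
# then has eigenvalues 1.1 and 0.5.
P = np.array([[1.0, 1.0],
              [1.0, -1.0]])
A = P @ np.diag([1.1, 0.5]) @ np.linalg.inv(P)

v = np.array([3.0, 1.0])
for _ in range(30):
    v = A @ v  # the sequence v, Av, A^2 v, ...

# The component along the 0.5-eigenspace shrinks away each step, so the
# iterate ends up on the dominant eigenspace, the line y = x.
print(v[1] / v[0])  # very close to 1.0
```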
Let's explore some applications and properties of these sequences.
### Fibonacci Sequence
Suppose you have some amoebas in a petri dish. Every minute, all adult amoebas produce one child amoeba, and all child amoebas grow into adults (note: this is not really how amoebas reproduce). So if t is a minute, the equations of this system are

    adults(t+1) = adults(t) + children(t)
    children(t+1) = adults(t)
which we can rewrite in matrix form as v(t+1) = A ⋅ v(t):

    | adults(t+1)   |   | 1 1 |   | adults(t)   |
    | children(t+1) | = | 1 0 | ⋅ | children(t) |
Below, press "Forward" to step ahead a minute. The total population is the [Fibonacci Sequence](http://en.wikipedia.org/wiki/Fibonacci_number).

[Interactive figure: starting from v₀ = (0 adults, 1 child), the total population each minute is 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, …]
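Stepping this system in code reproduces the sequence. A small NumPy sketch (mirroring the demo, not taken from it):

```python
import numpy as np

A = np.array([[1, 1],
              [1, 0]])   # rows give adults(t+1) and children(t+1)
v = np.array([0, 1])     # start with 0 adults and 1 child

totals = []
for _ in range(10):
    totals.append(int(v.sum()))
    v = A @ v            # step forward one minute

print(totals)  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```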
As you can see, the system goes toward the grey line, which is an eigenspace with λ = (1 + √5)/2 > 1.
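That eigenvalue is the golden ratio, and NumPy recovers it directly (an illustrative check):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0]])

# A is symmetric, so eigvalsh applies; it returns eigenvalues in
# ascending order, so the last one is the largest.
top = np.linalg.eigvalsh(A)[-1]
print(top)                                    # 1.618..., the golden ratio
print(np.isclose(top, (1 + np.sqrt(5)) / 2))  # True
```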
### Steady States
Suppose that, every year, a fraction p of New Yorkers move to California and a fraction q of Californians move to New York. Drag the circles to decide these fractions and the number starting in each state.

[Interactive figure: New York (38.33m) and California (19.65m), with 1 − p = 0.7, p = 0.3, q = 0.1, 1 − q = 0.9.]
To understand the system better, we can start by writing it in matrix terms as v(t+1) = A ⋅ v(t):

    | New York(t+1)   |   | 1 − p   q   |   | New York(t)   |
    | California(t+1) | = |   p   1 − q | ⋅ | California(t) |
It turns out that a matrix like A, whose entries are positive and whose columns add up to one (try it!), is called a [Markov matrix](http://www.math.harvard.edu/~knill/teaching/math19b_2011/handouts/lecture33.pdf), and it always has λ = 1 as its largest eigenvalue. That means there's a value of v(t) for which A v(t) = λ v(t) = 1 v(t) = v(t). At this "steady state," the same number of people move in each direction, and the populations stay the same forever. Hover over the animation to see the system go to the steady state.
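The steady state can be reached by simple iteration. A NumPy sketch using the p, q and starting populations from the demo above (illustrative, not the article's code):

```python
import numpy as np

p, q = 0.3, 0.1
A = np.array([[1 - p, q],
              [p, 1 - q]])      # a Markov matrix: columns sum to 1
print(A.sum(axis=0))            # [1. 1.]

v = np.array([38.33, 19.65])    # millions in New York, California
for _ in range(200):
    v = A @ v                   # people move; the total is conserved

# At the steady state, A v = v: the eigenvector with eigenvalue 1.
print(v)                        # about [14.5, 43.49]
print(np.allclose(A @ v, v))    # True
```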
[Animation: New York (70% stay) and California (90% stay) populations converging from v₀ to the steady state; hover to play/restart.]
For more on Markov matrices, check out our explanation of [Markov Chains](http://setosa.io/ev/markov-chains/).
### Complex eigenvalues
So far we've only looked at systems with real eigenvalues. But looking at the equation Av = λv, who's to say λ and v can't have some imaginary part? That it can't be a [complex](http://en.wikipedia.org/wiki/Complex_number) number? For example,

    |  1 1 | ⋅ | 1 | = (1 + i) ⋅ | 1 |
    | −1 1 |   | i |             | i |
Here, 1 + i is an eigenvalue and (1, i) is an eigenvector.
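NumPy handles the complex case without any extra effort; a quick check of this pair (illustrative):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [-1.0, 1.0]])

# The eigenvalues come out as the complex-conjugate pair 1 + i and 1 - i.
eigenvalues, _ = np.linalg.eig(A)
print(np.sort_complex(eigenvalues))  # [1.-1.j 1.+1.j]

# Check the pair from the text: A (1, i) = (1 + i) (1, i).
v = np.array([1.0, 1.0j])
print(np.allclose(A @ v, (1 + 1j) * v))  # True
```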
If a matrix has complex eigenvalues, its sequence spirals around (0, 0). To see this, drag A's columns (the arrows) around until you get a spiral. The eigenvalues are plotted in the real/imaginary plane to the right. You'll see that whenever the eigenvalues have an imaginary part, the system spirals, no matter where you start things off.
[Interactive figure: the sequence of steps plotted at left, and the eigenvalues λ₀ and λ₁ plotted in the real/imaginary plane at right.]
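One way to see the spiral numerically: build a hypothetical matrix that rotates and shrinks (its eigenvalues 0.9·e^(±iπ/6) have nonzero imaginary part) and track the iterates:

```python
import numpy as np

theta, r = np.pi / 6, 0.9  # rotate 30 degrees, shrink by 0.9 each step
A = r * np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

# Eigenvalues are 0.9 e^(+-i pi/6): complex, with modulus < 1.
print(np.linalg.eigvals(A))

v = np.array([3.0, 0.0])
for _ in range(12):
    v = A @ v  # each step turns v by 30 degrees and scales it by 0.9

# After 12 steps the point has spiraled one full turn in toward (0, 0).
print(np.linalg.norm(v))  # about 3 * 0.9**12 = 0.847
```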
### Learning more
We've really only scratched the surface of what linear algebra is all about. To learn more, check out the legendary Gilbert Strang's [Linear Algebra](http://ocw.mit.edu/courses/mathematics/18-06sc-linear-algebra-fall-2011/least-squares-determinants-and-eigenvalues/eigenvalues-and-eigenvectors/) course at MIT's OpenCourseWare site. To get more practice with applications of eigenvalues/vectors, also check out the excellent [Differential Equations](http://ocw.mit.edu/courses/mathematics/18-03sc-differential-equations-fall-2011/) course.
For more explanations, visit the Explained Visually [project homepage](https://setosa.io/ev/). Or subscribe to our mailing list. |
| Shard | 39 (laksa) |
| Root Hash | 14846392697162353439 |
| Unparsed URL | io,setosa!/ev/eigenvectors-and-eigenvalues/ s443 |