| Property | Value |
|---|---|
| URL | https://www.3blue1brown.com/lessons/quick-eigen |
| Last Crawled | 2026-04-07 05:03:48 (4 days ago) |
| First Indexed | 2021-07-28 22:05:22 (4 years ago) |
| HTTP Status Code | 200 |
| Meta Title | 3Blue1Brown - A quick trick for computing eigenvalues |
| Meta Description | A quick way to compute eigenvalues of a 2x2 matrix |
| Meta Canonical | null |
This is a video for anyone who already knows what eigenvalues and eigenvectors are, and who might enjoy a quick way to compute them in the case of 2x2 matrices. If you're unfamiliar with eigenvalues, take a look at the [previous chapter](https://www.3blue1brown.com/lessons/eigenvalues) which introduces them.
You can skip ahead if you just want to see the trick, but if possible we'd like you to rediscover it for yourself, so let's lay down a little background.
As a quick reminder, if the effect of a linear transformation on a given vector is to scale it by some constant, we call that vector an eigenvector of the transformation, and we call the relevant scaling factor the corresponding eigenvalue, often denoted with the letter lambda, $\lambda$.
When you write this as an equation and rearrange a little, what you see is that if a number $\lambda$ is an eigenvalue of a matrix $A$, then the matrix $(A - \lambda I)$ must send some nonzero vector, namely the corresponding eigenvector, to the zero vector, which in turn means the determinant of this modified matrix must be $0$.
$$\begin{aligned} A \vec{\mathbf{v}} & =\lambda I \vec{\mathbf{v}} \\ A \vec{\mathbf{v}}-\lambda I \vec{\mathbf{v}} & =\vec{\mathbf{0}} \\ (A-\lambda I) \vec{\mathbf{v}} & =\vec{\mathbf{0}} \\ \operatorname{det}(A-\lambda I) &= 0 \end{aligned}$$
That's a bit of a mouthful, but again, we're assuming this is review for anyone reading.
The usual way to compute eigenvalues, and how most students are taught to carry it out, is to subtract a variable lambda off the diagonals of a matrix and solve for when the determinant equals $0$. For example, when finding the eigenvalues of the matrix $\left[\begin{array}{ll} 3 & 1 \\ 4 & 1 \end{array}\right]$, this looks like:
$$\operatorname{det}\left(\left[\begin{array}{cc} 3-\lambda & 1 \\ 4 & 1-\lambda \end{array}\right]\right)=(3-\lambda)(1-\lambda)-(1)(4) = 0$$
It always takes a few steps to expand and simplify this into a clean quadratic polynomial, known as the “characteristic polynomial” of the matrix. The eigenvalues are the roots of this polynomial, so to find them you apply the quadratic formula, which typically requires one or two more steps of simplification.
$$\begin{aligned} \operatorname{det}\left(\left[\begin{array}{cc} 3-\lambda & 1 \\ 4 & 1-\lambda \end{array}\right]\right) & =(3-\lambda)(1-\lambda)-(1)(4) \\ & =\left(3-4 \lambda+\lambda^2\right)-4 \\ & =\lambda^2-4 \lambda-1=0 \\ \lambda_1, \lambda_2 & = \frac{4 \pm \sqrt{4^2-4(1)(-1)}}{2} =2 \pm \sqrt{5} \end{aligned}$$
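As a quick numerical sanity check of that computation, here is a Python/NumPy sketch of our own (the lesson itself contains no code):

```python
import numpy as np

# The matrix from the example above.
A = np.array([[3.0, 1.0],
              [4.0, 1.0]])

# Roots of the characteristic polynomial lambda^2 - 4*lambda - 1:
roots = np.roots([1.0, -4.0, -1.0])

# They should agree with 2 ± sqrt(5), and with NumPy's eigenvalue solver.
expected = np.array([2 - np.sqrt(5), 2 + np.sqrt(5)])
assert np.allclose(np.sort(roots), expected)
assert np.allclose(np.sort(np.linalg.eigvals(A)), expected)
```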
This process isn't *terrible*, but at least for 2x2 matrices, there's a much more direct way to get at this answer. If you want to rediscover this trick, there are three relevant facts you'll need to know, each of which is worth knowing in its own right and can help with other problem-solving.
1. The trace of a matrix, which is the sum of these two diagonal entries, is equal to the sum of the eigenvalues.
$$\operatorname{tr}\left(\left[\begin{array}{cc}a & b \\ c & d\end{array}\right]\right)=a+d=\lambda_1+\lambda_2$$
Or another way to phrase it, more useful for our purposes, is that the mean of the two eigenvalues is the same as the mean of these two diagonal entries.
$$\frac{1}{2} \operatorname{tr}\left(\left[\begin{array}{cc}a & b \\ c & d\end{array}\right]\right)=\frac{a+d}{2}=\frac{\lambda_1+\lambda_2}{2}$$
2. The determinant of a matrix, our usual $ad-bc$ formula, equals the product of the two eigenvalues.
$$\operatorname{det}\left(\left[\begin{array}{ll}a & b \\ c & d\end{array}\right]\right)=a d-b c=\lambda_1 \lambda_2$$
This should make sense if you understand that an eigenvalue describes how much an operator stretches space in a particular direction and that the determinant describes how much an operator scales areas (or volumes).
3. (We'll get to this...)
Before getting to the third fact, notice how you can essentially read these first two values out of the matrix. Take this matrix $\left[\begin{array}{ll}8 & 4 \\ 2 & 6\end{array}\right]$ as an example. Straight away you can know that the mean of the eigenvalues is the same as the mean of $8$ and $6$, which is $7$.
$$m = \frac{\lambda_1 + \lambda_2}{2} = 7$$
Likewise, most linear algebra students are well-practiced at finding the determinant, which in this case is $8 \cdot 6 - 4 \cdot 2 = 48 - 8$, so you know the product of our two eigenvalues is $40$.
$$p = \lambda_1 \lambda_2 = 40$$
Take a moment to see how you can derive what will be our third relevant fact, which is how to recover two numbers when you know their mean and product.
Focus on this example. You know the two values are evenly spaced around $7$, so they look like $7$ plus or minus something; let's call it $d$ for distance.
You also know that the product of these two numbers is $40$.
$$40 = (7+d)(7-d)$$
To find $d$, notice how this product expands nicely as a difference of squares. This lets you cleanly solve for $d$:
$$\begin{aligned} 40 & = (7+d)(7-d) \\ 40 & = 7^2-d^2 \\ d^2 & =7^2-40 \\ d^2 & =9 \\ d & =3 \end{aligned}$$
In other words, the two values for this very specific example work out to be $4$ and $10$.
But our goal is a quick trick and you wouldn't want to think through this each time, so let's wrap up what we just did in a general formula.
For any mean, $m$, and product, $p$, the distance squared is always going to be $m^2 - p$. This gives the third key fact, which is that when two numbers have a mean $m$ and a product $p$, you can write those two numbers as $m \pm \sqrt{m^2 - p}$.
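Written as a tiny function (a Python sketch of our own; the name `from_mean_and_product` is not from the lesson), the fact recovers the pair from the example above:

```python
import math

def from_mean_and_product(m, p):
    """Recover two numbers from their mean m and product p, as m ± sqrt(m^2 - p).
    Assumes m*m >= p, so the two numbers are real."""
    d = math.sqrt(m * m - p)   # the "distance" from the mean
    return m - d, m + d

# Mean 7, product 40 -> the numbers 4 and 10, as derived above.
assert from_mean_and_product(7, 40) == (4.0, 10.0)
```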
This is decently fast to rederive on the fly if you ever forget it, and it's essentially just a rephrasing of the difference of squares formula, but even still it's a fact worth memorizing. In fact, Tim from Acapella Science wrote us a quick jingle to make it a little more memorable.
## Examples
Let me show you how this works, say for the matrix $\left[\begin{array}{cc}3 & 1 \\ 4 & 1\end{array}\right]$. You start by thinking of the formula, stating it all in your head.
But as you write it down, you fill in the appropriate values of $m$ and $p$ as you go. Here, the mean of the eigenvalues is the same as the mean of $3$ and $1$, which is $2$, so you start by writing:
$$\lambda_1, \lambda_2 = 2 \pm \sqrt{2^2 - \ldots}$$
The product of the eigenvalues is the determinant, which in this example is $3 \cdot 1 - 1 \cdot 4 = -1$, so that's the final thing you fill in.
$$\lambda_1, \lambda_2 = 2 \pm \sqrt{2^2 - (-1)}$$
So the eigenvalues are $2 \pm \sqrt{5}$. You may have noticed this is the same matrix we were using at the start, but notice how much more directly we can get at the answer compared to the characteristic polynomial route.
Here, let's try another one using the matrix $\left[\begin{array}{ll}2 & 7 \\ 1 & 8\end{array}\right]$. This time the mean of the eigenvalues is the same as the mean of $2$ and $8$, which is $5$. So again, start writing out the formula, but writing $5$ in place of $m$:
$$\lambda_1, \lambda_2 = 5 \pm \sqrt{5^2 - \ldots}$$
The determinant is $2 \cdot 8 - 7 \cdot 1 = 9$. So in this example, the eigenvalues look like $5 \pm \sqrt{16}$, which gives us $9$ and $1$.
$$\lambda_1, \lambda_2 = 5 \pm \sqrt{5^2 - 9} = 9, 1$$
You see what we mean about how you can basically just write down the eigenvalues while staring at the matrix? It's typically just the tiniest bit of simplifying at the end.
What are the eigenvalue(s) of the matrix $\left[\begin{array}{ll}2 & 3 \\ 2 & 4\end{array}\right]$?

- $3 \pm \sqrt{11}$
- $3 \pm \sqrt{7}$
- $3 \pm \sqrt{2}$
- $3 \pm \sqrt{10}$
This trick is especially useful when you need to read off the eigenvalues from small examples without losing the main line of thought by getting bogged down in calculations.
For more practice, let's try this out on a common set of matrices which pop up in quantum mechanics, known as the Pauli spin matrices.
If you know quantum mechanics, you'll know the eigenvalues of these are highly relevant to the physics they describe, and if you don't then let this just be a little glimpse of how these computations are actually relevant to real applications.
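The lesson shows the three Pauli spin matrices as an image; written out in the standard convention (an assumption on our part, though it matches the trace and determinant computations in this section), they are:

$$\sigma_x=\left[\begin{array}{ll}0 & 1 \\ 1 & 0\end{array}\right], \quad \sigma_y=\left[\begin{array}{cc}0 & -i \\ i & 0\end{array}\right], \quad \sigma_z=\left[\begin{array}{cc}1 & 0 \\ 0 & -1\end{array}\right]$$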
The mean of the diagonal in all three cases is $0$, so the mean of the eigenvalues in all cases is $0$, making our formula look especially simple.
What about the products of the eigenvalues, the determinants? For the first one, it's $0 - 1$ or $-1$. The second also looks like $0 - 1$, though it takes a moment more to see because of the complex numbers. And the final one looks like $-1 - 0$. So in all three cases, the eigenvalues are $\pm 1$.
Although in this case you don't even really need the formula to find two values evenly spaced around zero whose product is $-1$.
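For anyone who wants to check this numerically, here is a Python/NumPy sketch (the explicit matrix entries are the standard Pauli convention, an assumption on our part since the lesson shows them as an image):

```python
import numpy as np

# Standard Pauli spin matrices (assumed convention).
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]])
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

for sigma in (sigma_x, sigma_y, sigma_z):
    assert np.isclose(np.trace(sigma), 0)        # mean of the eigenvalues: 0
    assert np.isclose(np.linalg.det(sigma), -1)  # product of the eigenvalues: -1
    # By the mean-product formula, the eigenvalues are 0 ± sqrt(0 - (-1)) = ±1.
    assert np.allclose(np.sort(np.linalg.eigvals(sigma).real), [-1, 1])
```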
If you're curious, in the context of quantum mechanics, these matrices correspond with observations you might make about the spin of a particle in the $x$, $y$ or $z$-direction, and the fact that these eigenvalues are $\pm 1$ corresponds with the idea that the values for the spin you would observe would be entirely in one direction or entirely in another, as opposed to something continuously ranging in between.
Maybe you'd wonder how exactly this works, and why you'd use 2x2 matrices with complex numbers to describe spin in three dimensions. Those would be valid questions, just beyond the scope of what we're talking about here.
You know, it's funny: this section is supposed to be about a case where 2x2 matrices are not just toy examples or homework problems, but actually come up in practice, and quantum mechanics is great for that. However, the example kind of undercuts the whole point we're trying to make. For these specific matrices, if you use the traditional method with characteristic polynomials, it's essentially just as fast, and might actually be faster.
For the first matrix, the relevant determinant directly gives you a characteristic polynomial of $\lambda^2 - 1$, which clearly has roots of plus and minus $1$. Same answer for the second matrix. And for the last, forget about doing any computations, traditional or otherwise; it's already a diagonal matrix, so those diagonal entries are the eigenvalues!
However, the example is not totally lost on our cause. Where you would actually feel the speedup is the more general case, where you take a linear combination of these matrices and then try to compute the eigenvalues.
We might write this as $a$ times the first one, plus $b$ times the second, plus $c$ times the third. In physics, this would describe spin observations in the direction of a vector $\left[\begin{array}{c} a \\ b \\ c \end{array}\right]$.
More specifically, you should assume this vector is normalized, meaning $a^2 + b^2 + c^2 = 1$. When you look at this new matrix, it's immediate to see that the mean of the eigenvalues here is still zero, and you may enjoy pausing for a brief moment to confirm that the product of those eigenvalues is still $-1$, and from there concluding what the eigenvalues must be.
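A numerical spot check of this claim (a Python/NumPy sketch; the Pauli matrices are again assumed to be in the standard convention, and the unit vector is chosen at random):

```python
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]])
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

rng = np.random.default_rng(0)
v = rng.normal(size=3)
a, b, c = v / np.linalg.norm(v)        # normalized, so a^2 + b^2 + c^2 = 1

M = a * sigma_x + b * sigma_y + c * sigma_z   # equals [[c, a - b*i], [a + b*i, -c]]
assert np.isclose(np.trace(M), 0)             # mean of the eigenvalues is still 0
assert np.isclose(np.linalg.det(M), -1)       # det = -c^2 - (a^2 + b^2) = -1
assert np.allclose(np.sort(np.linalg.eigvals(M).real), [-1, 1])
```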
The characteristic polynomial approach, on the other hand, is now actually more cumbersome to do in your head.
## Relation to the quadratic formula
To be clear, using the mean-product formula is the same thing as finding roots of the characteristic polynomial; it has to be. In fact, this formula is a nice way to think about solving quadratics in general, and some viewers of the channel may recognize it.
If you're trying to find the roots of a quadratic given its coefficients, you can think of that as a puzzle where you know the sum of two values, and you know their product, and you're trying to recover the original two values.
Specifically, if the polynomial is normalized so that the leading coefficient is $1$, then the mean of the roots is $-1/2$ times the linear coefficient, which for the example on screen would be $5$, and the product of the roots is even easier: it's just the constant term. From there, you'd apply the mean-product formula to find the roots.
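In code, that reading of a monic quadratic looks like this (a Python sketch; the function name is ours). It reproduces the roots $2 \pm \sqrt{5}$ of the characteristic polynomial $\lambda^2 - 4\lambda - 1$ from earlier:

```python
import math

def monic_quadratic_roots(b, c):
    """Real roots of x^2 + b*x + c via the mean-product formula.
    The mean of the roots is -b/2; their product is c. Assumes (b/2)^2 >= c."""
    m = -b / 2
    p = c
    d = math.sqrt(m * m - p)
    return m - d, m + d

lo, hi = monic_quadratic_roots(-4, -1)   # x^2 - 4x - 1
assert math.isclose(lo, 2 - math.sqrt(5))
assert math.isclose(hi, 2 + math.sqrt(5))
```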
Now, you could think of the mean-product formula as being a lighter-weight reframing of the traditional quadratic formula. But the real advantage is that the terms have more meaning to them.
The whole point of this eigenvalue trick is that because you can read out the mean and product directly from the matrix, you can jump straight to writing down the roots without thinking about what the characteristic polynomial looks like. But to do that, we need a version of the quadratic formula where the terms carry some kind of meaning.
What are the eigenvalue(s) of the matrix $\left[\begin{array}{ll}3 & 1 \\ 5 & 7\end{array}\right]$?

- $5$
- $3$ and $7$
- $2$ and $8$
- $4$ and $6$
What are the eigenvalue(s) of the matrix $\left[\begin{array}{ll}8 & 4 \\ 2 & 6\end{array}\right]$?

- $7$
- $6$ and $8$
- $3$ and $11$
- $4$ and $10$
## Last thoughts
The hope is that it's not just one more thing to memorize, but that the framing reinforces other nice facts worth knowing, like how the trace and determinant relate to eigenvalues. If you want to prove these facts, by the way, take a moment to expand out the characteristic polynomial for a general matrix, and think hard about the meaning of each coefficient.
Many thanks to Tim, for ensuring that the mean-product formula will stay stuck in all of our heads for at least a few months.
If you don't know about his channel, do check it out. The *Molecular Shape of You*, in particular, is one of the greatest things on the internet.
## Thanks
Special thanks to those below for supporting the original video behind this post, and to current patrons for funding ongoing projects. If you find these lessons valuable, consider joining.
Ronnie Cheng
Raymond Fowkes
Michael Hardel
Sean Barrett
Maty Siman
Andrew Foster
Alan Stein
Arne Tobias Malkenes Ødegaard
Ron Capelli
Nick
Vignesh Valliappan
Cooper Jones
Holger Flier
Chelase
Vignan Velivela
Jon Adams
Aidan Shenkman
Joshua Ouellette
773377
Suthen Thomas
Scott Gibbons
Pi Nuessle
D Dasgupta
Eero Hippeläinen
C Carothers
Chris Druta
Brendan Coleman
Sinan Taifour
Ted Suzman
Keith Smith
Jimmy Yang
Rish Kundalia
Christian Kaiser
Eric Flynn
Kevin Steck
Tyler Parcell
Justin Chandler
Jim Powers
Calvin Lin
Arkajyoti Misra
Aman Karunakaran
Siobhan Durcan
Chris Seaby
Karma Shi
Nitu Kitchloo
Jarred Harvey
Martin Mauersberg
Johan Auster
rehmi post
Allen Stenger
Carlos Iriarte
Nero Li
Rebecca Lin
Ada Cohen
Alex Hackman
Krishnamoorthy Venkatraman
Trevor Settles
YinYangBalance.Asia
Daniel Badgio
Kros Dai
Constantine Goltsev
James Sugrim
Jaewon Jung
Joseph Kelly
RICHARD C BRIDGES
Oleksandr Marchukov
Pesho Ivanov
Brendan Shah
Vasu Dubey
Luc Ritchie
Evan Miyazono
Peter Ehrnstrom
Curt Elsasser
Joseph Rocca
Patch Kessler
Octavian Voicu
Augustine Lim
Emilio Mendoza
Matt Godbolt
Tim Ferreira
Hal Hildebrand
Andre Au
Gregory Hopper
Pethanol
Mads Elvheim
Yana Chernobilsky
噗噗兔
James D. Woolery, M.D.
Randy True
lukvol
Dave B
Henry Reich
Stefan Grunspan
John Luttig
Mike
Pete Dietl
Jeremy Cole
Matt Roveto
dave nicponski
Donal Botkin
Jonathan Whitmore
Jacob Wallingford
James Golab
Ivan Sorokin
Eric Johnson
Pradeep Gollakota
Jameel Syed
Olga Cooperman
Yetinother
Gordon Gould
Robert van der Tuuk
Anisha Patil
Andreas Nautsch
Jeff Linse
Karl WAN NAN WO
Nag Rajan
Aditya Munot
Sebastian Braunert
Ripta Pasay
Kai-Siang Ang
Andrew Busey
Stefan Korntreff
Hitoshi Yamauchi
Marc Cohen
Bpendragon
Laura Gast
Jason Hise
Bartosz Burclaf
Ilya Latushko
Charles Pereira
Garbanarba
Stevie Metke
Dan Kinch
Ian Mcinerney
Harald
Benjamin Bailey
Jeff R
lardysoft
Aidan De Angelis
Albin Egasse
Jack Thull
Karl Niu
Aljoscha Schulze
Vince Gabor
Yushi Wang
Christian Opitz
Magister Mugit
Sundar Subbarayan
Karim Safin
Jamie Warner
Kevin Fowler
Chien Chern Khor
Linh Tran
Mattéo Boissière
Arthur Lazarte
Everett Knag
Hugh Zhang
Xierumeng
SansWord Huang
John Griffith
Gabriele Siino
Jerris Heaton
AZsorcerer
Doug Lundin
Steven Siddals
soekul
anul kumar sinha
Zachariah Rosenberg
Rod S
Michael Bos
Luka Korov
Keith Tyson
Andres
Beckett Madden-Woods
Steve Huynh
Eugene Pakhomov
Jonathan Wilson
Yoon Suk Oh
Mahrlo Amposta
Adam Cedrone
Chandra Sripada
Eugene Foss
Daniel Pang
Ben Gutierrez
Peter Mcinerney
David Johnston
Parker Burchett
André Sant’Anna
Dan Laffan
Josh Kinnear
Max Filimon
Alexander Janssen
Andrew Cary
Tyler Veness
Eddy Lazzarin
Antoine Bruguier
Arcus
Yurii Monastyrshyn
Pierre Lancien
Joshua Claeys
Samuel Judge
Siddhesh Vichare
Johnny Holzeisen
Ram Eshwar Kaundinya
David Bar-On
Eric Younge
Steve Cohen
Dallas De Atley
Mark Mann
Molly Mackinlay
Mike Dussault
Ahmed Elkhanany
Joe Pregracke
Tim Erbes
D. Sivakumar
Izzy Gomez
Marina Piller
Andrew Wyld
Omar Zrien
Ernest Hymel
Alex
John McClaskey
Duane Rich
Julien Dubois
Bill Gatliff
Lunchbag Rodriguez
Jed Yeiser
Randy C. Will
Kevin
Ivan
Mitch Harding
André Yuji Hisatsuga
LAI Oscar
Jeff Dodds
Kartik Cating-Subramanian
Jeremy
ivo galic
levav ferber tas
Tianyu Ge
Marek Gluszczuk
Tarrence N
Manuel Garcia
Vijay
Mert Öz
Matt Russell
supershabam
Rob Granieri
Wooyong Ee
Dominik Wagner
Marcial Abrahantes
JayCore
Nayantara Jain
Benjamin R.² M.
Bruce Malcolm
Joseph O'Connor
Sean Chittenden
Illia Tulupov
Krishanu Sankar
Tanmayan Pande
Juan Benet
John Zelinka
Jonathan
Ian Ray
Michael W White
SonOfSofaman
Adam Margulies
MingFung Law
Matt Parlmer
Clark Gaebel
Максим Аласкаров
Debbie Newhouse
Brandon Huang
J. Chris Wesley
Jean-Manuel Izaret
Oliver Steele
Petar Veličković
Zachary Gidwitz
Yair kass
Majid Alfifi
Thomas Peter Berntsen
Jacob Taylor
Peter Freese
Mr. William Shakespaw
Zachary Meyer
Ali Yahya
Christopher Suter
Roobie
John Haley
Pāvils Jurjāns
Cardozo Family
Brian King
Ho Woo Nam
Jacob Harmon
Ben Granger
Nathan Winchester
emptymachine
Навальный
Bernd Sing
David Clark
Solara570
Christopher Lorton
Damian Marek
John Le
Rich Johnson
Terry Hayes
Adam Miels
Max Anderson
Mathias Jansson
Lee Burnette
Jeff Straathof
Joshua Lin
Tyler VanValkenburg
Valentin Mayer-Eichberger
RabidCamel
Cody Merhoff
Linda Xie
Killian McGuinness
Chris Connett
Maxim Nitsche
Veritasium
Arun Iyer
Jeroen Swinkels
Brian Cloutier
Patrick Lucas
Jullcifer
Brian Staroselsky
will
Daniel Brown
Robert Von Borstel
Charles Southerland
ConvenienceShout
John Rizzo
J. Dmitri Gallow
Frank R. Brown, Jr.
Vassili Philippov
Carl-Johan R. Nordangård
Douglas Lee Hall
Burt Humburg
Imran Ahmed
Mehmet Budak
justpwd
Ero Carrera
Scott Walter, Ph.D.
Arjun Chakroborty
Perry Taga
DrTadPhd
Jayne Gabriele
Liang ChaoYi
Jono Forbes
Alonso Martinez
Aravind C V
Matthew Bouchard
Edan Maor
Tim Kazik
Gerhard van Niekerk
Matt Berggren
Pavel Dubov
Lael S Costa
Matthäus Pawelczyk
David Barker
Adam Dřínek
Charlie Ellison
Taras Bobrovytsky
Axel Ericsson
Barry Fam
Alexis Olson
Daniel Herrera C
Arthur Zey
Steve Muench
Peter Bryan
Smarter Every Day
Jan Pfeifer
James Winegar
Ashwany Rayu
Nick Lucas
MR. FANTASTIC
Henri Stern
Jalex Stark
Tal Einav
Kenneth Larsen
Max Welz
Jacob Hartmann
Jan-Hendrik Prinz
Jonathon Krall
Cristian Amitroaie
Andrew Guo
Elle Nolan
Dan Martin
Vignesh Ganapathi Subramanian
Chris Sachs
1stViewMaths
David Gow
Niranjan Shivaram
Nikita Lesnikov
John Camp
Vladimir Solomatin
Mark Heising
Ettore Randazzo
Guillaume Sartoretti
Vai-Lam Mui
Александр Горленко
Jay Ebhomenye
Joshua Davis
Ben Campbell
Samuel Cahyawijaya
Tyler Herrmann
Alexander Mai
Márton Vaitkus
Scott Gray
Mikko
Dan Herbatschek
dancing through life...
Jim Caruso
Arnaldo Leon
Delton Ding
fluffyfunnypants
Mads Munch Andreasen
Paul Pluzhnikov
Timothy Chklovski
Marshall McQuillen
JN
Lukas Biewald
Britt Selvitelle
Andrew Mohn
Rex Godby
d5b
Marc Folini
Elias
Patrick Gibson
Eryq Ouithaqueue
Ubiquity Ventures
Xueqi
Lee Redden
Eric Robinson
Robert Klock
Victor Castillo
cinterloper
Magnus Hiie
otavio good
Aaron Binns
David J Wu
Victor Kostyuk
Corey Ogburn
Mateo Abascal
Sergey Ovchinnikov
David B. Hill
Mike Dour
Ryan Atallah
Sohail Farhangi
泉辉致鉴
Britton Finley
Ben Delo
Eric Koslow
Carl Schell
Cy 'kkm' K'Nelson
Magnus Dahlström
Paul Wolfgang
Nate Pinsky
Federico Lebron
Bradley Pirtle
Boris Veselinovich
Christian Broß
Nipun Ramakrishnan
Dan Davison
Martin Price
Joseph John Cox
Harry Eakins
# Chapter 15: A quick trick for computing eigenvalues
Published May 7, 2021
Updated Apr 3, 2026
Lesson by [Grant Sanderson](https://www.3blue1brown.com/about)
Text adaptation by [Kurt Bruns](https://www.3blue1brown.com/about#kurt-bruns)
[Source Code](https://github.com/3b1b/videos/tree/master/_2021/quick_eigen.py)
This is a video for anyone who already knows what eigenvalues and eigenvectors are, and who might enjoy a quick way to compute them in the case of 2x2 matrices. If you're unfamiliar with eigenvalues, take a look at the [previous chapter](https://www.3blue1brown.com/lessons/eigenvalues) which introduces them.
You can skip ahead if you just want to see the trick, but if possible we'd like you to rediscover it for yourself, so let's lay down a little background.
As a quick reminder, if the effect of a linear transformation on a given vector is to scale it by some constant, we call that vector an eigenvector of the transformation, and we call the relevant scaling factor the corresponding eigenvalue, often denoted with the letter lambda, λ \\lambda λ.
Still
Animation

When you write this as an equation and rearrange a little, what you see is that if a number λ \\lambda λ is an eigenvalue of a matrix A A A, then the matrix ( A − λ I ) (A - \\lambda I) (A−λI) must send some nonzero vector, namely the corresponding eigenvector, to the zero vector, which in turn means the determinant of this modified matrix must be 0 0 0.
A
v
⃗
\=
λ
I
v
⃗
A
v
⃗
−
λ
I
v
⃗
\=
0
⃗
(
A
−
λ
I
)
v
⃗
\=
0
⃗
det
(
A
−
λ
I
)
\=
0
\\begin{aligned} A \\vec{\\mathbf{v}} & =\\lambda I \\vec{\\mathbf{v}} \\\\ \\rule{0pt}{1.25em} A \\vec{\\mathbf{v}}-\\lambda I \\vec{\\mathbf{v}} & =\\vec{\\mathbf{0}} \\\\ \\rule{0pt}{1.25em} (A-\\lambda I) \\vec{\\mathbf{v}} & =\\vec{\\mathbf{0}} \\\\ \\rule{0pt}{1.25em} \\operatorname{det}(A-\\lambda I) &= 0 \\end{aligned}
A
v
A
v
−λI
v
(A−λI)
v
det(A−λI)
\=λI
v
\=
0
\=
0
\=0
That's a bit of a mouthful, but again, we're assuming this is review for anyone reading.
The usual way to compute eigenvalues and how most students are taught to carry it out, is to subtract a variable lambda off the diagonals of a matrix and solve for when the determinant equals 0 0 0. For example, when finding the eigenvalues of the matrix \[ 3 1 4 1 \] \\left\[\\begin{array}{ll} 3 & 1 \\\\ 4 & 1 \\end{array}\\right\] \[3411\] this looks like:
det
(
\[
3
−
λ
1
4
1
−
λ
\]
)
\=
(
3
−
λ
)
(
1
−
λ
)
−
(
1
)
(
4
)
\=
0
\\operatorname{det}\\left(\\left\[\\begin{array}{cc} 3-\\lambda & 1 \\\\ 4 & 1-\\lambda \\end{array}\\right\]\\right)=(3-\\lambda)(1-\\lambda)-(1)(4) = 0
det(\[3−λ411−λ\])\=(3−λ)(1−λ)−(1)(4)\=0
This always involves a few steps to expand this and simplify it to get a clean quadratic polynomial, known as the “characteristic polynomial” of the matrix. The eigenvalues are the roots of this polynomial, so to find them you apply the quadratic formula, which typically requires one or two more steps of simplification.
det
(
\[
3
−
λ
1
4
1
−
λ
\]
)
\=
(
3
−
λ
)
(
1
−
λ
)
−
(
1
)
(
4
)
\=
(
3
−
4
λ
\+
λ
2
)
−
4
\=
λ
2
−
4
λ
−
1
\=
0
λ
1
,
λ
2
\=
4
±
4
2
−
4
(
1
)
(
−
1
)
2
\=
2
±
5
\\begin{aligned} \\operatorname{det}\\left(\\left\[\\begin{array}{cc} 3-\\lambda & 1 \\\\ 4 & 1-\\lambda \\end{array}\\right\]\\right) & =(3-\\lambda)(1-\\lambda)-(1)(4) \\\\ \\rule{0pt}{1.5em} & =\\left(3-4 \\lambda+\\lambda^2\\right)-4 \\\\ \\rule{0pt}{1.5em} & =\\lambda^2-4 \\lambda-1=0 \\\\ \\rule{0pt}{2.0em} \\lambda\_1, \\lambda\_2 & = \\frac{4 \\pm \\sqrt{4^2-4(1)(-1)}}{2} =2 \\pm \\sqrt{5} \\end{aligned}
det(\[3−λ411−λ\])λ1,λ2
\=(3−λ)(1−λ)−(1)(4)\=(3−4λ\+λ2)−4\=λ2−4λ−1\=0
\=
2
4±
42−4(1)(−1)
\=2±
5
This process isn't *terrible*, but at least for 2x2 matrices, there's a much more direct way to get at this answer. If you want to rediscover this trick, there are three relevant facts you'll need to know, each of which is worth knowing in its own right and can help with other problem-solving.
1. The trace of a matrix, which is the sum of these two diagonal entries, is equal to the sum of the eigenvalues.
tr
(
\[
a
b
c
d
\]
)
\=
a
\+
d
\=
λ
1
\+
λ
2
\\operatorname{tr}\\left(\\left\[\\begin{array}{cc}a & b \\\\ c & d\\end{array}\\right\]\\right)=a+d=\\lambda\_1+\\lambda\_2
tr(\[acbd\])\=a\+d\=λ1\+λ2
Or another way to phrase it, more useful for our purposes, is that the mean of the two eigenvalues is the same as the mean of these two diagonal entries.
1
2
tr
(
\[
a
b
c
d
\]
)
\=
a
\+
d
2
\=
λ
1
\+
λ
2
2
\\frac{1}{2} \\operatorname{tr}\\left(\\left\[\\begin{array}{cc}a & b \\\\ c & d\\end{array}\\right\]\\right)=\\frac{a+d}{2}=\\frac{\\lambda\_1+\\lambda\_2}{2}
21tr(\[acbd\])\=2a\+d\=2λ1\+λ2
1. The determinant of a matrix, our usual a d − b c ad-bc ad−bc formula, equals the product of the two eigenvalues.
det
(
\[
a
b
c
d
\]
)
\=
a
d
−
b
c
\=
λ
1
λ
2
\\operatorname{det}\\left(\\left\[\\begin{array}{ll}a & b \\\\ c & d\\end{array}\\right\]\\right)=a d-b c=\\lambda\_1 \\lambda\_2
det(\[acbd\])\=ad−bc\=λ1λ2
This should make sense if you understand that an eigenvalue describes how much an operator stretches space in a particular direction and that the determinant describes how much an operator scales areas (or volumes).
1. (We'll get to this...)
Before getting to the third fact, notice how you can essentially read these first two values out of the matrix. Take this matrix \[ 8 4 2 6 \] \\left\[\\begin{array}{ll}8 & 4 \\\\ 2 & 6\\end{array}\\right\] \[8246\] as an example. Straight away you can know that the mean of the eigenvalues is the same as the mean of 8 8 8 and 6 6 6, which is 7 7 7.
m
\=
λ
1
\+
λ
2
2
\=
7
m = \\frac{\\lambda\_1 + \\lambda\_2}{2} = 7
m\=2λ1\+λ2\=7
Likewise, most linear algebra students are well-practiced at finding the determinant, which in this case is 8 ⋅ 6 − 4 ⋅ 2 \= 48 − 8 8 \\cdot 6 - 4 \\cdot 2 = 48 - 8 8⋅6−4⋅2\=48−8, so you know the product of our two eigenvalues is 40 40 40.
p
\=
λ
1
λ
2
\=
40
p = \\lambda\_1 \\lambda\_2= 40
p\=λ1λ2\=40
Take a moment to see how you can derive what will be our third relevant fact, which is how to recover two numbers when you know their mean and product.
Focus on this example. You know the two values are evenly spaced around 7 7 7, so they look like 7 7 7 plus or minus something; let's call it d d d for distance.

You also know that the product of these two numbers is 40 40 40.
40
\=
(
7
\+
d
)
(
7
−
d
)
40 = (7+d)(7-d)
40\=(7\+d)(7−d)
To find d d d, notice how this product expands nicely as a difference of squares. This lets you cleanly solve for d d d:
40
\=
(
7
\+
d
)
(
7
−
d
)
40
\=
7
2
−
d
2
d
2
\=
7
2
−
40
d
2
\=
9
d
\=
3
\\begin{aligned} 40 & = (7+d)(7-d) \\\\ \\rule{0pt}{1.25em} 40 & = 7^2-d^2 \\\\ \\rule{0pt}{1.25em} d^2 & =7^2-40 \\\\ \\rule{0pt}{1.25em} d^2 & =9 \\\\ \\rule{0pt}{1.25em} d & =3 \\end{aligned}
4040d2d2d\=(7\+d)(7−d)\=72−d2\=72−40\=9\=3
In other words, the two values for this very specific example work out to be 4 4 4 and 10 10 10.

But our goal is a quick trick and you wouldn't want to think through this each time, so let's wrap up what we just did in a general formula.
For any mean, m m m, and product, p p p, the distance squared is always going to be m 2 − p m^2 - p m2−p. This gives the third key fact, which is that when two numbers have a mean and a product, you can write those two numbers as m ± m 2 − p m \\pm \\sqrt{m^2 - p} m± m2−p .

This is decently fast to rederive on the fly if you ever forget it and it's essentially just a rephrasing of the difference of squares formula, but even still it's a fact worth memorizing. In fact, Tim from Acapella Science wrote us a quick jingle to make it a little more memorable.
## Examples
Let me show you how this works, say for the matrix \[ 3 1 4 1 \] \\left\[\\begin{array}{cc}3 & 1 \\\\ 4 & 1\\end{array}\\right\] \[3411\]. You start by thinking of the formula, stating it all in your head.

But as you write it down, you fill in the appropriate values of m m m and p p p as you go. Here, the mean of the eigenvalues is the same as the mean of 3 3 3 and 1 1 1, which is 2 2 2, so you start by writing:
λ
1
,
λ
2
\=
2
±
2
2
−
…
\\lambda\_1, \\lambda\_2 = 2 \\pm \\sqrt{2^2 - …}
λ1,λ2\=2±
22−…
The product of the eigenvalues is the determinant, which in this example is 3 ⋅ 1 − 1 ⋅ 4 \= − 1 3 \\cdot 1 - 1 \\cdot 4 = -1 3⋅1−1⋅4\=−1, so that's the final thing you fill in.
λ
1
,
λ
2
\=
2
±
2
2
−
(
−
1
)
\\lambda\_1, \\lambda\_2 = 2 \\pm \\sqrt{2^2 - (-1)}
λ1,λ2\=2±
22−(−1)
So the eigenvalues are 2 ± 5 2 \\pm \\sqrt{5} 2± 5 . You may noticed this is the same matrix we were using at the start, but notice how much more directly we can get at the answer compared to the characteristic polynomial route.

Here, let's try another one using the matrix \[ 2 7 1 8 \] \\left\[\\begin{array}{ll}2 & 7 \\\\ 1 & 8\\end{array}\\right\] \[2178\]. This time the mean of the eigenvalues is the same as the mean of 2 2 2 and 8 8 8, which is 5 5 5. So again, start writing out the formula, but writing 5 5 5 in place of m m m:
λ
1
,
λ
2
\=
5
±
5
2
−
…
\\lambda\_1, \\lambda\_2 = 5 \\pm \\sqrt{5^2 - …}
λ1,λ2\=5±
52−…
The determinant is 2 ⋅ 8 − 7 ⋅ 1 \= 9 2 \\cdot 8 - 7 \\cdot 1 = 9 2⋅8−7⋅1\=9. So in this example, the eigenvalues look like 5 ± 16 5 ± \\sqrt{16} 5± 16 , which gives us 9 9 9 and 1 1 1.
λ
1
,
λ
2
\=
5
±
5
2
−
9
\=
9
,
1
\\lambda\_1, \\lambda\_2 = 5 \\pm \\sqrt{5^2 - 9} = 9, 1
λ1,λ2\=5±
52−9
\=
9,1
You see what we mean about how you can basically just write down the eigenvalues while staring at the matrix? It's typically just the tiniest bit of simplifying at the end.
What are the eigenvalue(s) of the matrix $\left[\begin{array}{cc}2 & 3 \\ 2 & 4\end{array}\right]$?

- $3 \pm \sqrt{11}$
- $3 \pm \sqrt{7}$
- $3 \pm \sqrt{2}$
- $3 \pm \sqrt{10}$
This trick is especially useful when you need to read off the eigenvalues from small examples without losing the main line of thought by getting bogged down in calculations.
For more practice, let's try this out on a common set of matrices which pop up in quantum mechanics, known as the Pauli spin matrices.
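For reference, since the lesson displays them as an image, the three Pauli spin matrices in their standard form are:

$$\sigma_x = \left[\begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array}\right], \qquad \sigma_y = \left[\begin{array}{cc} 0 & -i \\ i & 0 \end{array}\right], \qquad \sigma_z = \left[\begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array}\right]$$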

If you know quantum mechanics, you'll know the eigenvalues of these are highly relevant to the physics they describe, and if you don't then let this just be a little glimpse of how these computations are actually relevant to real applications.
The mean of the diagonal in all three cases is $0$, so the mean of the eigenvalues in all cases is $0$, making our formula look especially simple.

What about the products of the eigenvalues, the determinants? For the first one, it's $0 - 1$, or $-1$. The second also looks like $0 - 1$, though it takes a moment more to see because of the complex numbers. And the final one looks like $-1 - 0$. So in all three cases, the eigenvalues are $\pm 1$.

Although in this case you don't even really need the formula to find two values evenly spaced around zero whose product is $-1$.
If you're curious, in the context of quantum mechanics, these matrices correspond with observations you might make about the spin of a particle in the $x$, $y$, or $z$-direction, and the fact that these eigenvalues are $\pm 1$ corresponds with the idea that the values for the spin you would observe would be entirely in one direction or entirely in another, as opposed to something continuously ranging in between.
Maybe you'd wonder how exactly this works, and why you'd use 2x2 matrices with complex numbers to describe spin in three dimensions. Those would be valid questions, just beyond the scope of what we're talking about here.
You know it's funny, this section is supposed to be about a case where 2x2 matrices are not just toy examples or homework problems, but actually come up in practice, and quantum mechanics is great for that. However, the example kind of undercuts the whole point we're trying to make. For these specific matrices, if you use the traditional method with characteristic polynomials, it's essentially just as fast, and might actually be faster.
For the first matrix, the relevant determinant directly gives you a characteristic polynomial of $\lambda^2 - 1$, which clearly has roots of plus and minus $1$. Same answer for the second matrix. And for the last, forget about doing any computations, traditional or otherwise: it's already a diagonal matrix, so those diagonal entries are the eigenvalues!

However, the example is not totally lost on our cause. Where you would actually feel the speed-up is the more general case, where you take a linear combination of these three matrices and then try to compute the eigenvalues.

We might write this as $a$ times the first one, plus $b$ times the second, plus $c$ times the third. In physics, this would describe spin observations in the direction of a vector $\left[\begin{array}{c} a \\ b \\ c \end{array}\right]$.

More specifically, you should assume this vector is normalized, meaning $a^2 + b^2 + c^2 = 1$. When you look at this new matrix, it's immediate to see that the mean of the eigenvalues here is still zero, and you may enjoy pausing for a brief moment to confirm that the product of those eigenvalues is still $-1$, and from there concluding what the eigenvalues must be.
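If you'd like to check that product, here is the computation sketched out, writing the Pauli matrices in their standard form:

$$a\sigma_x + b\sigma_y + c\sigma_z = \left[\begin{array}{cc} c & a - bi \\ a + bi & -c \end{array}\right]$$

The diagonal entries are $c$ and $-c$, so the mean of the eigenvalues is $0$, and the determinant is $-c^2 - (a - bi)(a + bi) = -(a^2 + b^2 + c^2) = -1$, so the eigenvalues are $0 \pm \sqrt{0^2 - (-1)} = \pm 1$.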

The characteristic polynomial approach, on the other hand, is now actually more cumbersome to do in your head.

## Relation to the quadratic formula
To be clear, using the mean-product formula is the same thing as finding roots of the characteristic polynomial; it has to be. In fact, this formula is a nice way to think about solving quadratics in general and some viewers of the channel may recognize this.

If you're trying to find the roots of a quadratic given its coefficients, you can think of that as a puzzle where you know the sum of two values, and you know their product, and you're trying to recover the original two values.

Specifically, if the polynomial is normalized so that the leading coefficient is $1$, then the mean of the roots is $-1/2$ times the linear coefficient, and the product of the roots is even easier: it's just the constant term. From there, you'd apply the mean-product formula to find the roots.
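As a concrete instance, take the characteristic polynomial from the earlier example, $\lambda^2 - 10\lambda + 9$. Its mean and product read straight off the coefficients, and the mean-product formula finishes the job:

$$m = -\tfrac{1}{2}(-10) = 5, \qquad p = 9, \qquad \lambda_1, \lambda_2 = 5 \pm \sqrt{5^2 - 9} = 9, 1$$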

Now, you could think of the mean product formula as being a lighter-weight reframing of the traditional quadratic formula. But the real advantage is that the terms have more meaning to them.
The whole point of this eigenvalue trick is that because you can read out the mean and product directly from the matrix, you can jump straight to writing down the roots without thinking about what the characteristic polynomial looks like. But to do that, we need a version of the quadratic formula where the terms carry some kind of meaning.

What are the eigenvalue(s) of the matrix $\left[\begin{array}{cc}3 & 1 \\ 5 & 7\end{array}\right]$?

- $5$
- $3$ and $7$
- $2$ and $8$
- $4$ and $6$
What are the eigenvalue(s) of the matrix $\left[\begin{array}{cc}8 & 4 \\ 2 & 6\end{array}\right]$?

- $7$
- $6$ and $8$
- $3$ and $11$
- $4$ and $10$
## Last thoughts
The hope is that the mean-product formula is not just one more thing to memorize, but that the framing reinforces other nice facts worth knowing, like how the trace and determinant relate to eigenvalues. If you want to prove these facts, by the way, take a moment to expand out the characteristic polynomial for a general matrix, and think hard about the meaning of each coefficient.
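Carrying out that expansion for a general $2 \times 2$ matrix makes both facts visible at once:

$$\det\left(\left[\begin{array}{cc} a - \lambda & b \\ c & d - \lambda \end{array}\right]\right) = (a - \lambda)(d - \lambda) - bc = \lambda^2 - (a + d)\lambda + (ad - bc)$$

The linear coefficient is minus the trace and the constant term is the determinant; since the roots of this polynomial are the eigenvalues, their sum is the trace and their product is the determinant.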

Many thanks to Tim, for ensuring that the mean-product formula will stay stuck in all of our heads for at least a few months.
If you don't know about [his channel](https://www.youtube.com/@acapellascience), do check it out. The [Molecular Shape of You](https://www.youtube.com/watch?v=f8FAJXPBdOg), in particular, is one of the greatest things on the internet.
## Thanks
Special thanks to those below for supporting the original video behind this post, and to [current patrons](https://www.3blue1brown.com/thanks) for funding ongoing projects. If you find these lessons valuable, [consider joining](https://www.patreon.com/3blue1brown).
Ronnie ChengRaymond FowkesMichael HardelSean BarrettMaty SimanAndrew FosterAlan SteinArne Tobias Malkenes ØdegaardRon CapelliNickVignesh ValliappanCooper JonesHolger FlierChelaseVignan VelivelaJon AdamsAidan ShenkmanJoshua Ouellette773377Suthen ThomasScott GibbonsPi NuessleD DasguptaEero HippeläinenC CarothersChris DrutaBrendan ColemanSinan TaifourTed SuzmanKeith SmithJimmy YangRish KundaliaChristian KaiserEric FlynnKevin SteckTyler ParcellJustin ChandlerJim PowersCalvin LinArkajyoti MisraAman KarunakaranSiobhan DurcanChris SeabyKarma ShiNitu KitchlooJarred HarveyMartin MauersbergJohan Austerrehmi postAllen StengerCarlos IriarteNero LiRebecca LinAda CohenAlex HackmanKrishnamoorthy VenkatramanTrevor SettlesYinYangBalance.AsiaDaniel BadgioKros DaiConstantine GoltsevJames SugrimJaewon JungJoseph KellyRICHARD C BRIDGESOleksandr MarchukovPesho IvanovBrendan ShahVasu DubeyLuc RitchieEvan MiyazonoPeter EhrnstromCurt ElsasserJoseph RoccaPatch KesslerOctavian VoicuAugustine LimEmilio MendozaMatt GodboltTim FerreiraHal HildebrandAndre AuGregory HopperPethanolMads ElvheimYana Chernobilsky噗噗兔James D. 
Woolery, M.D.Randy TruelukvolDave BHenry ReichStefan GrunspanJohn LuttigMikePete DietlJeremy ColeMatt Rovetodave nicponskiDonal BotkinJonathan WhitmoreJacob WallingfordJames GolabIvan SorokinEric JohnsonPradeep GollakotaJameel SyedOlga CoopermanYetinotherGordon GouldRobert van der TuukAnisha PatilAndreas NautschJeff LinseKarl WAN NAN WONag RajanAditya MunotSebastian BraunertRipta PasayKai-Siang AngAndrew BuseyStefan KorntreffHitoshi YamauchiMarc CohenBpendragonLaura GastJason HiseBartosz BurclafIlya LatushkoCharles PereiraGarbanarbaStevie MetkeDan KinchIan McinerneyHaraldBenjamin BaileyJeff RlardysoftAidan De AngelisAlbin EgasseJack ThullKarl NiuAljoscha SchulzeVince GaborYushi WangChristian OpitzMagister MugitSundar SubbarayanKarim SafinJamie WarnerKevin FowlerChien Chern KhorLinh TranMattéo BoissièreArthur LazarteEverett KnagHugh ZhangXierumengSansWord HuangJohn GriffithGabriele SiinoJerris HeatonAZsorcererDoug LundinSteven Siddalssoekulanul kumar sinhaZachariah RosenbergRod SMichael BosLuka KorovKeith TysonAndresBeckett Madden-WoodsSteve HuynhEugene PakhomovJonathan WilsonYoon Suk OhMahrlo AmpostaAdam CedroneChandra SripadaEugene FossDaniel PangBen GutierrezPeter McinerneyDavid JohnstonParker BurchettAndré Sant’AnnaDan LaffanJosh KinnearMax FilimonAlexander JanssenAndrew CaryTyler VenessEddy LazzarinAntoine BruguierArcusYurii MonastyrshynPierre LancienJoshua ClaeysSamuel JudgeSiddhesh VichareJohnny HolzeisenRam Eshwar KaundinyaDavid Bar-OnEric YoungeSteve CohenDallas De AtleyMark MannMolly MackinlayMike DussaultAhmed ElkhananyJoe PregrackeTim ErbesD. SivakumarIzzy GomezMarina PillerAndrew WyldOmar ZrienErnest HymelAlexJohn McClaskeyDuane RichJulien DuboisBill GatliffLunchbag RodriguezJed YeiserRandy C. 
WillKevinIvanMitch HardingAndré Yuji HisatsugaLAI OscarJeff DoddsKartik Cating-SubramanianJeremyivo galiclevav ferber tasTianyu GeMarek GluszczukTarrence NManuel GarciaVijayMert ÖzMatt RussellsupershabamRob GranieriWooyong EeDominik WagnerMarcial AbrahantesJayCoreNayantara JainBenjamin R.² M.Bruce MalcolmJoseph O'ConnorSean ChittendenIllia TulupovKrishanu SankarTanmayan PandeJuan BenetJohn ZelinkaJonathanIan RayMichael W WhiteSonOfSofamanAdam MarguliesMingFung LawMatt ParlmerClark GaebelМаксим АласкаровDebbie NewhouseBrandon HuangJ. Chris WesleyJean-Manuel IzaretOliver SteelePetar VeličkovićZachary GidwitzYair kassMajid AlfifiThomas Peter BerntsenJacob TaylorPeter FreeseMr. William ShakespawZachary MeyerAli YahyaChristopher SuterRoobieJohn HaleyPāvils JurjānsCardozo FamilyBrian KingHo Woo NamJacob HarmonBen GrangerNathan WinchesteremptymachineНавальныйBernd SingDavid ClarkSolara570Christopher LortonDamian MarekJohn LeRich JohnsonTerry HayesAdam MielsMax AndersonMathias JanssonLee BurnetteJeff StraathofJoshua LinTyler VanValkenburgValentin Mayer-EichbergerRabidCamelCody MerhoffLinda XieKillian McGuinnessChris ConnettMaxim NitscheVeritasiumArun IyerJeroen SwinkelsBrian CloutierPatrick LucasJullciferBrian StaroselskywillDaniel BrownRobert Von BorstelCharles SoutherlandConvenienceShoutJohn RizzoJ. Dmitri GallowFrank R. Brown, Jr.Vassili PhilippovCarl-Johan R. NordangårdDouglas Lee HallBurt HumburgImran AhmedMehmet BudakjustpwdEro CarreraScott Walter, Ph.D.Arjun ChakrobortyPerry TagaDrTadPhdJayne GabrieleLiang ChaoYiJono ForbesAlonso MartinezAravind C VMatthew BouchardEdan MaorTim KazikGerhard van NiekerkMatt BerggrenPavel DubovLael S CostaMatthäus PawelczykDavid BarkerAdam DřínekCharlie EllisonTaras BobrovytskyAxel EricssonBarry FamAlexis OlsonDaniel Herrera CArthur ZeySteve MuenchPeter BryanSmarter Every DayJan PfeiferJames WinegarAshwany RayuNick LucasMR. 
FANTASTICHenri SternJalex StarkTal EinavKenneth LarsenMax WelzJacob HartmannJan-Hendrik PrinzJonathon KrallCristian AmitroaieAndrew GuoElle NolanDan MartinVignesh Ganapathi SubramanianChris Sachs1stViewMathsDavid GowNiranjan ShivaramNikita LesnikovJohn CampVladimir SolomatinMark HeisingEttore RandazzoGuillaume SartorettiVai-Lam MuiАлександр ГорленкоJay EbhomenyeJoshua DavisBen CampbellSamuel CahyawijayaTyler HerrmannAlexander MaiMárton VaitkusScott GrayMikkoDan Herbatschekdancing through life...Jim CarusoArnaldo LeonDelton DingfluffyfunnypantsMads Munch AndreasenPaul PluzhnikovTimothy ChklovskiMarshall McQuillenJNLukas BiewaldBritt SelvitelleAndrew MohnRex Godbyd5bMarc FoliniEliasPatrick GibsonEryq OuithaqueueUbiquity VenturesXueqiLee ReddenEric RobinsonRobert KlockVictor CastillocinterloperMagnus Hiieotavio goodAaron BinnsDavid J WuVictor KostyukCorey OgburnMateo AbascalSergey OvchinnikovDavid B. HillMike DourRyan AtallahSohail Farhangi泉辉致鉴Britton FinleyBen DeloEric KoslowCarl SchellCy 'kkm' K'NelsonMagnus DahlströmPaul WolfgangNate PinskyFederico LebronBradley PirtleBoris VeselinovichChristian BroßNipun RamakrishnanDan DavisonMartin PriceJoseph John CoxHarry Eakins
Show More
© 2026 Grant Sanderson