9 | Eigenvalues and eigenvectors

This chapter of Linear Algebra by Dr JH Klopper is licensed under an Attribution-NonCommercial-NoDerivatives 4.0 International Licence available at http://creativecommons.org/licenses/by-nc-nd/4.0/?ref=chooser-v1 .

9.1 Introduction

In this notebook we investigate how square matrices act as operators on vectors, focusing on the special vectors whose direction such an operator leaves unchanged.

9.2 Eigenvalues

Definition 9.2.1 Consider a square matrix A of size n×n and a non-zero vector v ∈ ℝⁿ. We define v as an eigenvector of A if the property shown in (1) holds.
Av = λv   (1)
Here Av is a scalar multiple, λv, of v. The scalar λ is an eigenvalue of A, and v is an eigenvector of A corresponding to λ.
Below, we have a line through the origin and a vector, v = (1, 2), on that line.
In[]:=
Show[Plot[2x,{x,-1,4}],Graphics[{Red,Thick,Arrow[{{0,0},{1,2}}]}]]
Out[]=
In (2) we have a matrix A.
A = \begin{pmatrix} 3 & 0 \\ 8 & -1 \end{pmatrix}   (2)
Note the result of multiplying this matrix by the vector v.
In[]:=
A = {{3, 0}, {8, -1}};
v = {1, 2};
MatrixForm[A . v]
Out[]//MatrixForm=
( 3 )
( 6 )
In the plot below, we see that the result is a scalar multiple of v. Indeed we have Av = 3v, which still lies on the same line through the origin.
In[]:=
Show[Plot[2x,{x,-1,4}],Graphics[{Green,Dashed,Arrow[{{0,0},{3,6}}]}],Graphics[{Red,Thick,Arrow[{{0,0},{1,2}}]}]]
Out[]=
Multiplying an identity matrix of appropriate dimension with a vector returns the same vector. We can therefore do the manipulation shown in (3).
Av = λIv
λIv - Av = 0
(λI - A)v = 0   (3)
Since we require v to be a non-zero vector, (3) can only have such a solution if λI - A is not invertible, that is, if its determinant is 0, as shown in (4).
|λI - A| = 0   (4)
Equation (4) is known as the characteristic equation of A. The determinant expands to a polynomial of degree n in λ, known as the characteristic polynomial of A, shown in (5).
λ^n + c_1 λ^{n-1} + … + c_n = 0   (5)
As an example, we have the calculation in (6).
λI = λ \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} λ & 0 \\ 0 & λ \end{pmatrix}, \qquad A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}
λI - A = \begin{pmatrix} λ-1 & -2 \\ -3 & λ-4 \end{pmatrix}
|λI - A| = (λ-1)(λ-4) - (-2)(-3) = 0
λ^2 - 5λ - 2 = 0   (6)
We check our work using the Det function.
In[]:=
Det[{{λ - 1, -2}, {-3, λ - 4}}]
Out[]=
-2 - 5 λ + λ^2
We solve for λ using the Solve function.
In[]:=
Solve[Det[{{λ - 1, -2}, {-3, λ - 4}}] == 0, λ]
Out[]=
{{λ → 1/2 (5 - √33)}, {λ → 1/2 (5 + √33)}}
These are the two eigenvalues of A. The Eigenvalues function will return the same results.
In[]:=
Eigenvalues[{{1,2},{3,4}}]
Out[]=

1
2
(5+
33
),
1
2
(5-
33
)
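As an aside not in the original notebook, Mathematica's built-in CharacteristicPolynomial function returns the same polynomial directly. Note that it computes the determinant of A - λI rather than λI - A, which at most flips the overall sign and so leaves the roots, and hence the eigenvalues, unchanged.
In[]:=
CharacteristicPolynomial[{{1, 2}, {3, 4}}, λ]  (* gives λ^2 - 5 λ - 2, matching the Det calculation above *)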
It is a simple task to calculate the determinant of a triangular matrix: it is the product of the entries along the main diagonal. The characteristic polynomial of an upper triangular matrix is shown in (7).
|λI - A| = \begin{vmatrix} λ - a_{11} & -a_{12} & ⋯ & -a_{1n} \\ 0 & λ - a_{22} & ⋯ & -a_{2n} \\ ⋮ & ⋮ & ⋱ & ⋮ \\ 0 & 0 & ⋯ & λ - a_{nn} \end{vmatrix} = 0
(λ - a_{11})(λ - a_{22})⋯(λ - a_{nn}) = 0   (7)
The eigenvalues are then as shown in (8), i.e. the entries along the main diagonal.
λ = {a_{11}, a_{22}, …, a_{nn}}   (8)
As an example, we have the lower triangular matrix below.
In[]:=
L = {{2, 0, 0}, {3, -1, 0}, {-1, 2, 4}};
MatrixForm[L]
LowerTriangularMatrixQ[L]
Out[]//MatrixForm=
(  2   0   0 )
(  3  -1   0 )
( -1   2   4 )
Out[]=
True
In[]:=
Eigenvalues[L]
Out[]=
{4,2,-1}
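As a quick check, not part of the original notebook, we can compare the eigenvalues with the main-diagonal entries of L directly.
In[]:=
Diagonal[L]                                (* {2, -1, 4}, the main-diagonal entries *)
Sort[Eigenvalues[L]] == Sort[Diagonal[L]]  (* True: the eigenvalues are exactly the diagonal entries *)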
It is possible for a matrix to have complex eigenvalues. Consider the matrix shown in (9).
A = \begin{pmatrix} -3 & -1 \\ 6 & 1 \end{pmatrix}   (9)
The characteristic polynomial of this matrix determines the eigenvalues, shown in (10).

|λI - A| = \begin{vmatrix} λ+3 & 1 \\ -6 & λ-1 \end{vmatrix} = 0
(λ+3)(λ-1) - (1)(-6) = 0
λ^2 + 2λ + 3 = 0   (10)
In[]:=
Solve[λ^2 + 2 λ + 3 == 0, λ]
Out[]=
{{λ → -1 - ⅈ √2}, {λ → -1 + ⅈ √2}}
We note the complex roots. The eigenvalues are confirmed using the Eigenvalues function, after defining A as the matrix in (9).
In[]:=
A = {{-3, -1}, {6, 1}};
Eigenvalues[A]
Out[]=
-1+
2
,-1-
2

Some square matrices are not invertible; an example is shown in (11).
A = \begin{pmatrix} 3 & 6 \\ -2 & -4 \end{pmatrix}   (11)
Their determinants are 0.
In[]:=
Det[{{3,6},{-2,-4}}]
Out[]=
0
Their eigenvalues include 0.
In[]:=
Eigenvalues[{{3,6},{-2,-4}}]
Out[]=
{-1,0}
This means that a square matrix is invertible if and only if it does not have 0 as an eigenvalue.
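A brief sketch, not part of the original notebook, makes the link explicit: the determinant of a matrix equals the product of its eigenvalues, so a zero eigenvalue forces a zero determinant and hence a non-invertible matrix.
In[]:=
With[{B = {{3, 6}, {-2, -4}}},
 {Det[B], Times @@ Eigenvalues[B], MemberQ[Eigenvalues[B], 0]}]
(* {0, 0, True}: the product of the eigenvalues equals the determinant, which is 0 *)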

9.3 Eigenvectors

We have seen the definition of an eigenvector of a matrix A corresponding to an eigenvalue λ. The eigenvectors corresponding to λ are the non-zero vectors in the nullspace of (λI - A), that is, the vectors v such that (λI - A)v = 0. This nullspace is called the eigenspace of A corresponding to λ.
To calculate the eigenvectors, we need to find a basis for the eigenspace. Consider the example matrix A below, shown in (12).
A = \begin{pmatrix} 0 & 0 & -2 \\ 1 & 2 & 1 \\ 1 & 0 & 3 \end{pmatrix}   (12)
In[]:=
A = {{0, 0, -2}, {1, 2, 1}, {1, 0, 3}};
Eigenvalues[A]
A basis for each eigenspace is calculated below.
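The original output for this calculation is not preserved in this copy; below is a minimal sketch. The eigenvalues of A are 2 (repeated) and 1; the eigenspace for λ = 2 is spanned by (-1, 0, 1) and (0, 1, 0), and the eigenspace for λ = 1 by (-2, 1, 1). NullSpace and Eigenvectors may order or scale these basis vectors differently.
In[]:=
A = {{0, 0, -2}, {1, 2, 1}, {1, 0, 3}};
NullSpace[2 IdentityMatrix[3] - A]  (* basis for the eigenspace of λ = 2 *)
NullSpace[1 IdentityMatrix[3] - A]  (* basis for the eigenspace of λ = 1 *)
Eigenvectors[A]                     (* the same basis vectors, listed per eigenvalue *)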

9.3.1 Matrix powers

Note the eigenvalues of this matrix.
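The matrix referred to here is not preserved in this copy. As a sketch, assuming the matrix A from (12), the key fact is that if λ is an eigenvalue of A then λ^k is an eigenvalue of A^k.
In[]:=
A = {{0, 0, -2}, {1, 2, 1}, {1, 0, 3}};
Eigenvalues[MatrixPower[A, 5]]  (* {32, 32, 1} *)
Eigenvalues[A]^5                (* also {32, 32, 1}: the fifth powers of the eigenvalues of A *)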

9.4 Diagonalization

Here we are concerned with finding a basis consisting of eigenvectors of a square matrix A. If P is the matrix whose columns are these eigenvectors, then P⁻¹AP is a diagonal matrix.
Below, we use code to demonstrate that the result is a diagonal matrix.
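The original code cell is not preserved here; a minimal sketch, again using the matrix A from (12), whose eigenvectors form a basis of ℝ³:
In[]:=
A = {{0, 0, -2}, {1, 2, 1}, {1, 0, 3}};
P = Transpose[Eigenvectors[A]];  (* columns of P are the eigenvectors of A *)
MatrixForm[Inverse[P] . A . P]   (* a diagonal matrix with the eigenvalues 2, 2, 1 on its diagonal *)
The order of the diagonal entries matches the order in which Eigenvalues[A] lists the eigenvalues.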
Some matrices are not diagonalizable.
There are only two eigenvectors (which are non-zero vectors).
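The example matrix from the original notebook is not preserved in this copy. A standard substitute, purely for illustration, is the matrix below, which has the repeated eigenvalue 2 but only one independent eigenvector for it; Mathematica pads the returned list with a zero vector, so only two of the listed eigenvectors are non-zero.
In[]:=
B = {{2, 1, 0}, {0, 2, 0}, {0, 0, 3}};
Eigenvalues[B]   (* {3, 2, 2} *)
Eigenvectors[B]  (* only two non-zero rows, so B has no basis of eigenvectors and is not diagonalizable *)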
This is not to say that matrices with non-distinct eigenvalues are not diagonalizable. Look below at the identity matrix. It is triangular, with three identical eigenvalues.
It nevertheless has three linearly independent eigenvectors and is diagonalizable.
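The corresponding code cell is not shown in this copy; a minimal check:
In[]:=
Eigenvalues[IdentityMatrix[3]]   (* {1, 1, 1}: one eigenvalue repeated three times *)
Eigenvectors[IdentityMatrix[3]]  (* the standard basis vectors: three independent eigenvectors *)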
One of our previous examples also has repeated eigenvalues, but is diagonalizable.
The result P⁻¹AP is indeed a diagonal matrix. Now we raise it to the fifth power.
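The original code cells are not preserved here; a sketch of the idea, assuming the matrix A from (12): once A = P d P⁻¹ with d diagonal, powers of A reduce to powers of the diagonal entries.
In[]:=
A = {{0, 0, -2}, {1, 2, 1}, {1, 0, 3}};
P = Transpose[Eigenvectors[A]];
d = Inverse[P] . A . P;        (* the diagonal matrix from above; lowercase d avoids the built-in symbol D *)
MatrixForm[MatrixPower[d, 5]]  (* simply the diagonal entries raised to the fifth power *)
P . MatrixPower[d, 5] . Inverse[P] == MatrixPower[A, 5]  (* True: this reconstructs A^5 *)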

9.5 Orthogonal diagonalization