Today we finish 4.4. **Read** the Markov chains part of Section 4.6 for Friday.
We are not covering Section 4.5 or the rest of 4.6 (which contain many
interesting applications!).
Work through the recommended homework questions; some have been updated!

**Tutorials:** Quiz 5 is this week. It covers Appendix C, 4.1, 4.2
and 4.3.

**Office hour:** Wednesday, 12:30-1:30, MC103B.

**Help Centers:** Monday-Friday 2:30-6:30 in MC 106.

The **final exam** will take place on Mon Dec 9, 2-5pm.

See course home page for rooms and for the
**conflict** policy.

You should **already** have notified the registrar or your Dean
(and me) of any conflicts!

**True/false:**
Every polynomial of degree $n$ has exactly $n$ distinct roots over $\C$.

**True/false:**
The complex eigenvalues of a matrix always come in conjugate pairs.

**True/false:**
If $\lambda$ is an eigenvalue of $A$ and $k \geqslant 0$,
then $\lambda^k$ is an eigenvalue of $A^k$.

**True/false:**
The identity matrix is similar to every matrix.

**True/False:**
If $A$ and $B$ have the same eigenvalues, then $A$ and $B$ are similar.

**Definition:** Let $A$ and $B$ be $n \times n$ matrices. We say that
$A$ is **similar** to $B$ if there is an invertible matrix $P$ such that $P^{-1} A P = B$.
When this is the case, we write $A \sim B$.

It is equivalent to say that $AP = PB$ or $A = PBP^{-1}$.

It is tricky in general to find such a $P$ when it exists. In this section we'll learn a method that works in an important special case.

**Theorem 4.21:** Let $A$, $B$ and $C$ be $n \times n$ matrices. Then:

a. $A \sim A$.

b. If $A \sim B$ then $B \sim A$.

c. If $A \sim B$ and $B \sim C$, then $A \sim C$.

Similar matrices have a lot of properties in common.

**Theorem 4.22:**
Let $A$ and $B$ be similar matrices. Then:

a. $\det A = \det B$

b. $A$ is invertible iff $B$ is invertible.

c. $A$ and $B$ have the same rank.

d. $A$ and $B$ have the same characteristic polynomial.

e. $A$ and $B$ have the same eigenvalues.

**Question:** Are $\bmat{rr} 1 & 2 \\ 3 & 4 \emat$ and $\bmat{rr} 1 & 1 \\ 2 & -1 \emat$
similar?
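One quick way to attack such questions is to compare the invariants from Theorem 4.22, since similar matrices must agree on all of them. A small sketch using NumPy (the variable names are mine):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[1, 1], [2, -1]])

# By Theorem 4.22(a), similar matrices have equal determinants,
# so differing determinants rule out similarity.
det_A = round(np.linalg.det(A))  # 1*4 - 2*3 = -2
det_B = round(np.linalg.det(B))  # 1*(-1) - 1*2 = -3
not_similar = det_A != det_B
```

Since $\det A = -2 \neq -3 = \det B$, the matrices are not similar. (Note the one-way logic: equal determinants alone would *not* have proved similarity.)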

**Definition:** $A$ is **diagonalizable** if it is similar to some diagonal matrix.

If $A$ is similar to a diagonal matrix $D$, then $D$ must have the eigenvalues of $A$ on the diagonal. But how to find $P$?

More precisely:

**Theorem 4.23:** Let $A$ be an $n \times n$ matrix. There exist an invertible matrix $P$ and a diagonal matrix $D$ with $P^{-1}AP = D$ if and only if the columns of $P$ are $n$ linearly independent eigenvectors of $A$ and the diagonal entries of $D$ are the corresponding eigenvalues in the same order.

This theorem is one of the main reasons we want to be able to find eigenvectors of a matrix. Moreover, the more eigenvectors the better, so this motivates allowing complex eigenvectors.

In section 4.3, we saw:

**Theorem 4.20:** If $\vv_1, \vv_2, \ldots, \vv_m$ are eigenvectors of $A$
corresponding to distinct eigenvalues
$\lambda_1, \lambda_2, \ldots, \lambda_m$, then
$\vv_1, \vv_2, \ldots, \vv_m$ are linearly independent.

**Example:** Is the matrix
$A = \bmat{rrr} 1 & 2 & 3 \\ 0 & 4 & 5 \\ 0 & 0 & 6 \emat$
diagonalizable?

Yes. The eigenvalues
are $1$, $4$ and $6$, and for each one there is at least one
eigenvector. These are linearly independent (by Theorem 4.20),
and there are three of them, so $A$ is diagonalizable (by Theorem 4.23).

To **find** the matrix $P$ explicitly, we need to solve the
three systems to find the eigenvectors.
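As a numerical sketch (assuming NumPy is available): `np.linalg.eig` returns the eigenvalues together with a matrix whose columns are corresponding eigenvectors, which is exactly a choice of $P$.

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [0., 4., 5.],
              [0., 0., 6.]])

# Columns of P are eigenvectors; eigvals lists the matching eigenvalues.
eigvals, P = np.linalg.eig(A)

# P^{-1} A P should be the diagonal matrix with the eigenvalues
# in the corresponding order.
D = np.linalg.inv(P) @ A @ P
```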

**Theorem 4.25:** If $A$ is an $n \times n$ matrix with $n$ distinct eigenvalues,
then $A$ is diagonalizable.

**Example 4.25:** Is $A = \bmat{rrr} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 2 & -5 & 4 \emat$
diagonalizable? If so, find a matrix $P$ that diagonalizes it.

**Solution:**
In Example 4.18 we found that the eigenvalues are
$\lambda = 1$ (with algebraic multiplicity 2) and
$\lambda = 2$ (with algebraic multiplicity 1).
A basis for $E_1$ is $\colll 1 1 1$ and a basis for $E_2$ is $\colll 1 2 4$.
Since each eigenspace is one-dimensional, every eigenvector of $A$ is a
scalar multiple of one of these, so it is not
possible to find three linearly independent eigenvectors.
So $A$ is not diagonalizable.
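This can be double-checked numerically (a sketch assuming NumPy), using the fact that the geometric multiplicity of $\lambda$ equals $n - \operatorname{rank}(A - \lambda I)$:

```python
import numpy as np

A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [2., -5., 4.]])
n = 3

# geometric multiplicity of lambda = dim(null space of A - lambda*I)
#                                  = n - rank(A - lambda*I)
g1 = n - np.linalg.matrix_rank(A - 1 * np.eye(n))  # lambda = 1
g2 = n - np.linalg.matrix_rank(A - 2 * np.eye(n))  # lambda = 2

# 1 + 1 = 2 < 3, so there is no set of three linearly
# independent eigenvectors, and A is not diagonalizable.
diagonalizable = (g1 + g2 == n)
```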

**Example 4.26:** Is $A = \bmat{rrr} -1 & 0 & 1 \\ 3 & 0 & -3 \\ 1 & 0 & -1 \emat$
diagonalizable? If so, find a matrix $P$ that diagonalizes it.

**Solution:**
In Example 4.19 (done mostly on whiteboard, but also
in text) we found that the eigenvalues are
$\lambda = 0$ (with algebraic multiplicity 2) and
$\lambda = -2$ (with algebraic multiplicity 1).
A basis for $E_0$ is $\vp_1 = \colll 0 1 0$ and $\vp_2 = \colll 1 0 1$.
A basis for $E_{-2}$ is $\vp_3 = \colll {-1} 3 1$.
These are linearly independent (see below). Thus
$$
P = [\, \vp_1 \, \vp_2 \, \vp_3 \,]
= \bmat{rrr} 0 & 1 & -1 \\ 1 & 0 & 3 \\ 0 & 1 & 1 \emat
$$
is invertible, and by the theorem, we must have
$$
P^{-1} A P = \bmat{rrr} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -2 \emat = D
$$
(Note that to check an answer like this, it is usually easiest
to check that $AP = PD$. Do so!)
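The suggested check $AP = PD$ can be done by hand, or sketched in NumPy (assuming it is available); note that it avoids computing $P^{-1}$ explicitly:

```python
import numpy as np

A = np.array([[-1., 0.,  1.],
              [ 3., 0., -3.],
              [ 1., 0., -1.]])
P = np.array([[0., 1., -1.],
              [1., 0.,  3.],
              [0., 1.,  1.]])
D = np.diag([0., 0., -2.])

# Column by column, A p_i should equal lambda_i p_i.
check = np.allclose(A @ P, P @ D)
```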

Note: Listing the eigenvectors in a different order also works, as long as the diagonal entries of $D$ are reordered to match.

**Theorem 4.24:** If $\lambda_1, \ldots, \lambda_k$ are distinct eigenvalues of $A$
and, for each $i$, $\cB_i$ is a basis for the eigenspace $E_{\lambda_i}$,
then the union of the $\cB_i$'s is a linearly independent set.

The proof of this is similar to the proof of Theorem 4.20, where we used just one eigenvector from each eigenspace.

Combining Theorems 4.23 and 4.24 gives the following important consequence:

**Theorem:** An $n \times n$ matrix is diagonalizable if and only if
the sum of the geometric multiplicities of the eigenvalues is $n$.

Look at Examples 4.25 and 4.26 again.

So it is important to understand the geometric multiplicities better. Here is a helpful result:

**Lemma 4.26:** If $\lambda_1$ is an eigenvalue of an $n \times n$ matrix $A$, then
$$
\text{geometric multiplicity of } \lambda_1 \leqslant
\text{algebraic multiplicity of } \lambda_1
$$

We'll prove this in a minute. First, let's look at what it implies:

Let $A$ be an $n \times n$ matrix with distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_k$. Let their geometric multiplicities be $g_1, g_2, \ldots, g_k$ and their algebraic multiplicities be $a_1, a_2, \ldots, a_k$. We know
$$
g_i \leqslant a_i \qtext{for each $i$}
$$
and so
$$
g_1 + \cdots + g_k \leqslant a_1 + \cdots + a_k \leqslant n .
$$
So the only way to have $g_1 + \cdots + g_k = n$ is to have $g_i = a_i$ for each $i$ and $a_1 + \cdots + a_k = n$.

This gives the **main theorem** of the section:

**Theorem 4.27 (The Diagonalization Theorem):**
Let $A$ be an $n \times n$ matrix with distinct eigenvalues
$\lambda_1, \lambda_2, \ldots, \lambda_k$.
Let their geometric multiplicities be $g_1, g_2, \ldots, g_k$
and their algebraic multiplicities be $a_1, a_2, \ldots, a_k$.
Then the following are equivalent:

a. $A$ is diagonalizable.

b. $g_1 + \cdots + g_k = n$.

c. $g_i = a_i$ for each $i$ and $a_1 + \cdots + a_k = n$.

**Note:** This is stated incorrectly in the text.
The condition $a_1 + \cdots + a_k = n$ in (c) must be added unless you are working over $\C$,
in which case it is automatic that $a_1 + \cdots + a_k = n$.
With the way I have stated it, it is correct over $\R$ or over $\C$.

**Example:** Is $A = \bmat{rr} 0 & -1 \\ 1 & 0 \emat$
diagonalizable?

It
depends.
If we are working over $\R$,
there are no eigenvalues and no eigenvectors, so no, it is not diagonalizable,
and (a), (b) and (c) all fail.

If we are working over $\C$, then $i$ and $-i$ are eigenvalues, and are distinct, so $A$ is diagonalizable, and (b) and (c) hold too. Note that in this case, $P$ will be complex: to find $P$, we first find that corresponding eigenvectors are $\vp_1 = \coll i 1$ and $\vp_2 = \coll 1 i$. So if we take
$$
P = [\,\vp_1\,\vp_2\,] = \bmat{rr} i & 1 \\ 1 & i \emat
$$
we find that
$$
P^{-1} A P = \bmat{rr} i & 0 \\ 0 & -i \emat = D
$$
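The same check $AP = PD$ works over $\C$ (a NumPy sketch; `1j` is Python's notation for $i$):

```python
import numpy as np

A = np.array([[0., -1.],
              [1.,  0.]])
P = np.array([[1j, 1.],
              [1., 1j]])
D = np.diag([1j, -1j])

# A has no real eigenvalues, but over C it diagonalizes:
# the columns of P are eigenvectors for i and -i.
check = np.allclose(A @ P, P @ D)
```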

We still need to prove Lemma 4.26, and will do so next class.