Math 1600A Lecture 30, Section 2, 20 Nov 2013

$ \newcommand{\bdmat}[1]{\left|\begin{array}{#1}} \newcommand{\edmat}{\end{array}\right|} \newcommand{\bmat}[1]{\left[\begin{array}{#1}} \newcommand{\emat}{\end{array}\right]} \newcommand{\coll}[2]{\bmat{r} #1 \\ #2 \emat} \newcommand{\ccoll}[2]{\bmat{c} #1 \\ #2 \emat} \newcommand{\colll}[3]{\bmat{r} #1 \\ #2 \\ #3 \emat} \newcommand{\ccolll}[3]{\bmat{c} #1 \\ #2 \\ #3 \emat} \newcommand{\collll}[4]{\bmat{r} #1 \\ #2 \\ #3 \\ #4 \emat} \newcommand{\ccollll}[4]{\bmat{c} #1 \\ #2 \\ #3 \\ #4 \emat} \newcommand{\colllll}[5]{\bmat{r} #1 \\ #2 \\ #3 \\ #4 \\ #5 \emat} \newcommand{\ccolllll}[5]{\bmat{c} #1 \\ #2 \\ #3 \\ #4 \\ #5 \emat} \newcommand{\red}[1]{{\color{red}#1}} \newcommand{\lra}[1]{\mbox{$\xrightarrow{#1}$}} \newcommand{\rank}{\textrm{rank}} \newcommand{\row}{\textrm{row}} \newcommand{\col}{\textrm{col}} \newcommand{\null}{\textrm{null}} \newcommand{\nullity}{\textrm{nullity}} \renewcommand{\Re}{\operatorname{Re}} \renewcommand{\Im}{\operatorname{Im}} \renewcommand{\Arg}{\operatorname{Arg}} \renewcommand{\arg}{\operatorname{arg}} \newcommand{\adj}{\textrm{adj}} \newcommand{\mystack}[2]{\genfrac{}{}{0}{0}{#1}{#2}} \newcommand{\mystackthree}[3]{\mystack{\mystack{#1}{#2}}{#3}} \newcommand{\qimplies}{\quad\implies\quad} \newcommand{\qtext}[1]{\quad\text{#1}\quad} \newcommand{\qqtext}[1]{\qquad\text{#1}\qquad} \newcommand{\svec}[1]{\,\vec{#1}} \newcommand{\query}[1]{\toggle{\text{?}\vphantom{#1}}{#1}\endtoggle} \newcommand{\smallquery}[1]{\toggle{\text{?}}{#1}\endtoggle} $

Announcements:

Today we finish 4.4. Read the Markov chains part of Section 4.6 for Friday. We are not covering Section 4.5 or the rest of 4.6 (which contains many interesting applications!). Work through the recommended homework questions; some have been updated!

Tutorials: Quiz 5 is this week. It covers Appendix C and Sections 4.1, 4.2 and 4.3.
Office hour: Wednesday, 12:30-1:30, MC103B.
Help Centers: Monday-Friday 2:30-6:30 in MC 106.

The final exam will take place on Mon Dec 9, 2-5pm.
See course home page for rooms and for the conflict policy.
You should already have notified the registrar or your Dean (and me) of any conflicts!

True/false: Every polynomial of degree $n$ has exactly $n$ distinct roots over $\C$.

True/false: The complex eigenvalues of a matrix always come in conjugate pairs.

True/false: If $\lambda$ is an eigenvalue of $A$ and $k \geqslant 0$, then $\lambda^k$ is an eigenvalue of $A^k$.

True/false: The identity matrix is similar to every matrix.

True/False: If $A$ and $B$ have the same eigenvalues, then $A$ and $B$ are similar.

Review of Section 4.4

Definition: Let $A$ and $B$ be $n \times n$ matrices. We say that $A$ is similar to $B$ if there is an invertible matrix $P$ such that $P^{-1} A P = B$. When this is the case, we write $A \sim B$.

It is equivalent to say that $AP = PB$ or $A = PBP^{-1}$.
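To see the equivalence, multiply $P^{-1} A P = B$ on the left by $P$ and then on the right by $P^{-1}$: $$ P^{-1} A P = B \qimplies AP = PB \qimplies A = PBP^{-1}, $$ and each step can be reversed.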

It is tricky in general to find such a $P$ when it exists. In this section we'll learn a method that works in a certain situation.

Theorem 4.21: Let $A$, $B$ and $C$ be $n \times n$ matrices. Then:
a. $A \sim A$.
b. If $A \sim B$ then $B \sim A$.
c. If $A \sim B$ and $B \sim C$, then $A \sim C$.
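One way to see these: for (a), take $P = I$, since $I^{-1} A I = A$. For (b), if $P^{-1} A P = B$, then $P B P^{-1} = A$, i.e. $(P^{-1})^{-1} B (P^{-1}) = A$, so $B \sim A$ using the invertible matrix $P^{-1}$. For (c), if $P^{-1} A P = B$ and $Q^{-1} B Q = C$, then $$ (PQ)^{-1} A (PQ) = Q^{-1} (P^{-1} A P) Q = Q^{-1} B Q = C, $$ so $A \sim C$ using the invertible matrix $PQ$.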

Similar matrices have a lot of properties in common.

Theorem 4.22: Let $A$ and $B$ be similar matrices. Then:
a. $\det A = \det B$.
b. $A$ is invertible iff $B$ is invertible.
c. $A$ and $B$ have the same rank.
d. $A$ and $B$ have the same characteristic polynomial.
e. $A$ and $B$ have the same eigenvalues.
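For instance, for parts (a) and (d): if $B = P^{-1} A P$, then $$ \det B = \det(P^{-1} A P) = \det(P^{-1}) \, \det A \, \det P = \det A, $$ and since $B - \lambda I = P^{-1}(A - \lambda I)P$, the same computation gives $\det(B - \lambda I) = \det(A - \lambda I)$, so the characteristic polynomials agree. Part (e) then follows from (d).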

Question: Are $\bmat{rr} 1 & 2 \\ 3 & 4 \emat$ and $\bmat{rr} 1 & 1 \\ 2 & -1 \emat$ similar?
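One way to decide: by Theorem 4.22(a), similar matrices have equal determinants. Here $$ \det \bmat{rr} 1 & 2 \\ 3 & 4 \emat = -2 \qqtext{and} \det \bmat{rr} 1 & 1 \\ 2 & -1 \emat = -3, $$ so these matrices are not similar.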

Diagonalization

Definition: $A$ is diagonalizable if it is similar to some diagonal matrix.

If $A$ is similar to a diagonal matrix $D$, then $D$ must have the eigenvalues of $A$ on the diagonal. But how to find $P$?

Theorem 4.23: Let $A$ be an $n \times n$ matrix. Then $A$ is diagonalizable if and only if $A$ has $n$ linearly independent eigenvectors.

More precisely: if $P$ is invertible and $D$ is diagonal, then $P^{-1}AP = D$ holds if and only if the columns of $P$ are $n$ linearly independent eigenvectors of $A$ and the diagonal entries of $D$ are the corresponding eigenvalues, in the same order.
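The key computation behind this: if the columns of $P$ are eigenvectors $\vv_1, \ldots, \vv_n$ of $A$ with corresponding eigenvalues $\lambda_1, \ldots, \lambda_n$, then $$ AP = [\, A\vv_1 \, \cdots \, A\vv_n \,] = [\, \lambda_1 \vv_1 \, \cdots \, \lambda_n \vv_n \,] = PD, $$ where $D$ is the diagonal matrix with the $\lambda_i$'s on the diagonal; and $P$ is invertible exactly when its columns are linearly independent, in which case $P^{-1}AP = D$.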

This theorem is one of the main reasons we want to be able to find eigenvectors of a matrix. Moreover, the more eigenvectors the better, so this motivates allowing complex eigenvectors.

In section 4.3, we saw:

Theorem 4.20: If $\vv_1, \vv_2, \ldots, \vv_m$ are eigenvectors of $A$ corresponding to distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_m$, then $\vv_1, \vv_2, \ldots, \vv_m$ are linearly independent.

New material: Section 4.4 continued

Example: Is the matrix $A = \bmat{rrr} 1 & 2 & 3 \\ 0 & 4 & 5 \\ 0 & 0 & 6 \emat$ diagonalizable?

Theorem 4.25: If $A$ is an $n \times n$ matrix with $n$ distinct eigenvalues, then $A$ is diagonalizable.
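For the example above: since $A$ is upper triangular, its characteristic polynomial is $$ \det(A - \lambda I) = (1 - \lambda)(4 - \lambda)(6 - \lambda), $$ so the eigenvalues are $1$, $4$ and $6$. These are three distinct eigenvalues of a $3 \times 3$ matrix, so $A$ is diagonalizable by Theorem 4.25.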

Example 4.25: Is $A = \bmat{rrr} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 2 & -5 & 4 \emat$ diagonalizable? If so, find a matrix $P$ that diagonalizes it.

Solution: In Example 4.18 we found that the eigenvalues are $\lambda = 1$ (with algebraic multiplicity 2) and $\lambda = 2$ (with algebraic multiplicity 1). A basis for $E_1$ is $\colll 1 1 1$ and a basis for $E_2$ is $\colll 1 2 4$. Since every eigenvector is a scalar multiple of one of these, it is not possible to find three linearly independent eigenvectors. So $A$ is not diagonalizable.

Example 4.26: Is $A = \bmat{rrr} -1 & 0 & 1 \\ 3 & 0 & -3 \\ 1 & 0 & -1 \emat$ diagonalizable? If so, find a matrix $P$ that diagonalizes it.

Solution: In Example 4.19 (done mostly on whiteboard, but also in text) we found that the eigenvalues are $\lambda = 0$ (with algebraic multiplicity 2) and $\lambda = -2$ (with algebraic multiplicity 1). A basis for $E_0$ is $\vp_1 = \colll 0 1 0$ and $\vp_2 = \colll 1 0 1$. A basis for $E_{-2}$ is $\vp_3 = \colll {-1} 3 1$. These are linearly independent (see below). Thus $$ P = [\, \vp_1 \, \vp_2 \, \vp_3 \,] = \bmat{rrr} 0 & 1 & -1 \\ 1 & 0 & 3 \\ 0 & 1 & 1 \emat $$ is invertible, and by the theorem, we must have $$ P^{-1} A P = \bmat{rrr} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -2 \emat = D $$ (Note that to check an answer like this, it is usually easiest to check that $AP = PD$. Do so!)
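Carrying out that check: $$ AP = \bmat{rrr} 0 & 0 & 2 \\ 0 & 0 & -6 \\ 0 & 0 & -2 \emat = PD, $$ since the first two columns of $P$ are sent to $\vec{0}$ (eigenvalue $0$) and the third column is multiplied by $-2$.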

Note: Other orders work too, as long as the columns of $P$ and the diagonal entries of $D$ are listed in the same order as each other.

Theorem 4.24: If $\lambda_1, \ldots, \lambda_k$ are distinct eigenvalues of $A$ and, for each $i$, $\cB_i$ is a basis for the eigenspace $E_{\lambda_i}$, then the union of the $\cB_i$'s is a linearly independent set.

The proof of this is similar to the proof of Theorem 4.20, where we took just one eigenvector from each eigenspace.

Combining Theorems 4.23 and 4.24 gives the following important consequence:

Theorem: An $n \times n$ matrix is diagonalizable if and only if the sum of the geometric multiplicities of the eigenvalues is $n$.

Look at Examples 4.25 and 4.26 again: in Example 4.25 the geometric multiplicities are $1$ and $1$, which sum to $2 < 3$, so that matrix is not diagonalizable, while in Example 4.26 they are $2$ and $1$, which sum to $3$, so that matrix is diagonalizable.

So it is important to understand the geometric multiplicities better. Here is a helpful result:

Lemma 4.26: If $\lambda_1$ is an eigenvalue of an $n \times n$ matrix $A$, then $$ \text{geometric multiplicity of } \lambda_1 \leqslant \text{algebraic multiplicity of } \lambda_1 $$

We'll prove this in a minute. First, let's look at what it implies:

Let $A$ be an $n \times n$ matrix with distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_k$. Let their geometric multiplicities be $g_1, g_2, \ldots, g_k$ and their algebraic multiplicities be $a_1, a_2, \ldots, a_k$. We know $$ g_i \leqslant a_i \qtext{for each $i$} $$ and so $$ g_1 + \cdots + g_k \leqslant a_1 + \cdots + a_k \leqslant n, $$ where the last inequality holds because the characteristic polynomial of $A$ has degree $n$. So the only way to have $g_1 + \cdots + g_k = n$ is to have $g_i = a_i$ for each $i$ and $a_1 + \cdots + a_k = n$.

This gives the main theorem of the section:

Theorem 4.27 (The Diagonalization Theorem): Let $A$ be an $n \times n$ matrix with distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_k$. Let their geometric multiplicities be $g_1, g_2, \ldots, g_k$ and their algebraic multiplicities be $a_1, a_2, \ldots, a_k$. Then the following are equivalent:
a. $A$ is diagonalizable.
b. $g_1 + \cdots + g_k = n$.
c. $g_i = a_i$ for each $i$ and $\red{a_1 + \cdots + a_k = n}$.

Note: This is stated incorrectly in the text. The red part must be added unless you are working over $\C$, in which case it is automatic that $a_1 + \cdots + a_k = n$. With the way I have stated it, it is correct over $\R$ or over $\C$.

Example: Is $A = \bmat{rr} 0 & -1 \\ 1 & 0 \emat$ diagonalizable?
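A sketch of the answer: the characteristic polynomial is $$ \det(A - \lambda I) = \det \bmat{rr} -\lambda & -1 \\ 1 & -\lambda \emat = \lambda^2 + 1, $$ which has no real roots, so $A$ has no real eigenvalues and is not diagonalizable over $\R$. Over $\C$, the eigenvalues are $\pm i$, which are distinct, so $A$ is diagonalizable over $\C$.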

We still need to prove Lemma 4.26, and will do so next class.