Math 1600 Lecture 27, Section 2, 10 Nov 2014

$ \newcommand{\bdmat}[1]{\left|\begin{array}{#1}} \newcommand{\edmat}{\end{array}\right|} \newcommand{\bmat}[1]{\left[\begin{array}{#1}} \newcommand{\emat}{\end{array}\right]} \newcommand{\coll}[2]{\bmat{r} #1 \\ #2 \emat} \newcommand{\ccoll}[2]{\bmat{c} #1 \\ #2 \emat} \newcommand{\colll}[3]{\bmat{r} #1 \\ #2 \\ #3 \emat} \newcommand{\ccolll}[3]{\bmat{c} #1 \\ #2 \\ #3 \emat} \newcommand{\collll}[4]{\bmat{r} #1 \\ #2 \\ #3 \\ #4 \emat} \newcommand{\ccollll}[4]{\bmat{c} #1 \\ #2 \\ #3 \\ #4 \emat} \newcommand{\colllll}[5]{\bmat{r} #1 \\ #2 \\ #3 \\ #4 \\ #5 \emat} \newcommand{\ccolllll}[5]{\bmat{c} #1 \\ #2 \\ #3 \\ #4 \\ #5 \emat} \newcommand{\red}[1]{{\color{red}#1}} \newcommand{\lra}[1]{\mbox{$\xrightarrow{#1}$}} \newcommand{\rank}{\textrm{rank}} \newcommand{\row}{\textrm{row}} \newcommand{\col}{\textrm{col}} \newcommand{\null}{\textrm{null}} \newcommand{\nullity}{\textrm{nullity}} \renewcommand{\Re}{\operatorname{Re}} \renewcommand{\Im}{\operatorname{Im}} \renewcommand{\Arg}{\operatorname{Arg}} \renewcommand{\arg}{\operatorname{arg}} \newcommand{\adj}{\textrm{adj}} \newcommand{\mystack}[2]{\genfrac{}{}{0}{0}{#1}{#2}} \newcommand{\mystackthree}[3]{\mystack{\mystack{#1}{#2}}{#3}} \newcommand{\qimplies}{\quad\implies\quad} \newcommand{\qtext}[1]{\quad\text{#1}\quad} \newcommand{\qqtext}[1]{\qquad\text{#1}\qquad} \newcommand{\smalltext}[1]{{\small\text{#1}}} \newcommand{\svec}[1]{\,\vec{#1}} \newcommand{\querytext}[1]{\toggle{\text{?}\vphantom{\text{#1}}}{\text{#1}}\endtoggle} \newcommand{\query}[1]{\toggle{\text{?}\vphantom{#1}}{#1}\endtoggle} \newcommand{\smallquery}[1]{\toggle{\text{?}}{#1}\endtoggle} \newcommand{\bv}{\mathbf{v}} %\require{AMScd} $

Announcements:

Today we continue with 4.3. Read 4.3 and Appendix C for next class. Work through recommended homework questions.

Tutorials: Quiz 7 covers 4.2, the parts of Appendix D that we covered, and the part of 4.3 we finish today. No complex eigenvalues/roots.

Office hour: Monday, 3:00-3:30, MC103B.

Help Centers: Monday-Friday 2:30-6:30 in MC 106.

Question: If $P$ is invertible, how do $\det A$ and $\det(P^{-1}AP)$ compare?
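(The question can be checked numerically. A minimal pure-Python sketch, not part of the original notes; the particular matrices $A$ and $P$ below are arbitrary illustrative choices.)

```python
# Check det(P^{-1} A P) against det(A) for a concrete 2x2 example.

def det2(m):
    """Determinant of a 2x2 matrix [[a, b], [c, d]]."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def mul2(m, n):
    """Product of two 2x2 matrices."""
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(m):
    """Inverse of an invertible 2x2 matrix."""
    d = det2(m)
    return [[m[1][1] / d, -m[0][1] / d], [-m[1][0] / d, m[0][0] / d]]

A = [[0, 1], [2, 1]]
P = [[1, 2], [3, 5]]               # any invertible P (here det P = -1)
B = mul2(inv2(P), mul2(A, P))      # B = P^{-1} A P
assert abs(det2(B) - det2(A)) < 1e-9   # the two determinants agree
```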

Partial review of last class: Section 4.3

Definition: If $A$ is $n \times n$, $\det (A - \lambda I)$ will be a degree $n$ polynomial in $\lambda$. It is called the characteristic polynomial of $A$, and $\det (A - \lambda I) = 0$ is called the characteristic equation.

By the fundamental theorem of invertible matrices, $\lambda$ is an eigenvalue of $A$ exactly when $A - \lambda I$ is not invertible, which happens exactly when $\det(A - \lambda I) = 0$. So the solutions to the characteristic equation are exactly the eigenvalues.

Finding eigenvalues and eigenspaces: Let $A$ be an $n \times n$ matrix.

1. Compute the characteristic polynomial $\det(A - \lambda I)$.
2. Find the eigenvalues of $A$ by solving the characteristic equation $\det(A - \lambda I) = 0$.
3. For each eigenvalue $\lambda$, find a basis for the eigenspace $E_\lambda = \null (A - \lambda I)$ by solving the system $(A - \lambda I) \vx = \vec 0$.

So we need to get good at solving polynomial equations. Solutions are called zeros or roots.
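For a $2 \times 2$ matrix, the three steps can be carried out by hand with the quadratic formula. A short Python sketch (illustrative, not part of the original notes), using the matrix from Example 4.21 below:

```python
import math

# Steps 1-3 for a 2x2 matrix A = [[a, b], [c, d]].
# Step 1: det(A - t I) = t^2 - (a + d) t + (ad - bc).
a, b, c, d = 0.0, 1.0, 2.0, 1.0    # the matrix from Example 4.21
tr, det = a + d, a * d - b * c

# Step 2: solve the characteristic equation (assuming real distinct roots).
disc = tr * tr - 4 * det
lam1 = (tr + math.sqrt(disc)) / 2
lam2 = (tr - math.sqrt(disc)) / 2

# Step 3: when b != 0, the vector (b, t - a) spans null(A - t I),
# since the first row of A - t I is (a - t, b).
v1 = (b, lam1 - a)
v2 = (b, lam2 - a)
```

Here `lam1, lam2` come out to $2$ and $-1$, with eigenvectors $(1, 2)$ and $(1, -1)$.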

Theorem D.4 (The Fundamental Theorem of Algebra): Every polynomial of degree $n \geq 1$ has exactly $n$ roots in the complex numbers, counted with multiplicity. In particular, it has at most $n$ distinct roots.

Therefore:

Theorem: An $n \times n$ matrix $A$ has at most $n$ distinct eigenvalues.

Also:

Theorem D.2 (The Factor Theorem): Let $f$ be a polynomial and let $a$ be a constant. Then $a$ is a zero of $f(x)$ (i.e. $f(a) = 0$) if and only if $x - a$ is a factor of $f(x)$ (i.e. $f(x) = (x - a) g(x)$ for some polynomial $g$).

New material: 4.3 continued

If $a$ is a root of a polynomial $f$, then $f(x) = (x-a) g(x)$ for some polynomial $g$. Sometimes $a$ is also a root of $g(x)$; then $f(x) = (x-a)^2 h(x)$, and so on. The largest $k$ such that $(x-a)^k$ is a factor of $f$ is called the multiplicity of the root $a$ in $f$.
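The multiplicity can be computed by repeatedly dividing out the factor $x - a$. A small Python sketch (the helper names are my own, not from the notes):

```python
def synth_div(coeffs, a):
    """Divide a polynomial, given as coefficients [c_n, ..., c_1, c_0]
    (highest degree first), by (x - a) using synthetic division.
    Returns (quotient_coeffs, remainder); the remainder equals f(a)."""
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + a * out[-1])
    return out[:-1], out[-1]

def multiplicity(coeffs, a):
    """Largest k such that (x - a)^k divides the polynomial."""
    m = 0
    while len(coeffs) > 1:
        q, r = synth_div(coeffs, a)
        if r != 0:
            break
        m += 1
        coeffs = q
    return m

# f(x) = x^3 + 2x^2 = x^2 (x + 2): the root 0 has multiplicity 2.
assert multiplicity([1, 2, 0, 0], 0) == 2
assert multiplicity([1, 2, 0, 0], -2) == 1
```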

In the case of an eigenvalue, we call its multiplicity in the characteristic polynomial the algebraic multiplicity of this eigenvalue.

We also define the geometric multiplicity of an eigenvalue $\lambda$ to be the dimension of the corresponding eigenspace.

Example 4.19: Find the eigenvalues and eigenspaces of $A = \bmat{rrr} -1 & 0 & 1 \\ 3 & 0 & -3 \\ 1 & 0 & -1 \emat^\strut$. Do partially, on board.

In this case, we find that $\lambda = 0$ has algebraic multiplicity 2 and geometric multiplicity 2.

These multiplicities will be important in Section 4.4.

Theorem 4.15: The eigenvalues of a triangular matrix are the entries on its main diagonal (repeated according to their algebraic multiplicity).

Example: If $A = \bmat{rrr} 1 & 0 & 0 \\ 2 & 3 & 0 \\ 4 & 5 & 1 \emat$, then $$ \kern-6ex \det(A - \lambda I) = \bdmat{ccc} 1-\lambda & 0 & 0 \\ 2 & 3-\lambda & 0 \\ 4 & 5 & 1-\lambda \edmat = (1 - \lambda)^2 (3 - \lambda) , $$ so the eigenvalues are $\lambda = 1$ (with algebraic multiplicity 2) and $\lambda = 3$ (with algebraic multiplicity 1).
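As a numerical check of this example (illustrative, not part of the original notes), the characteristic polynomial of this $A$ vanishes exactly at the diagonal entries:

```python
def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def char_poly(A, t):
    """Evaluate det(A - t I) for a 3x3 matrix A."""
    return det3([[A[i][j] - (t if i == j else 0) for j in range(3)]
                 for i in range(3)])

A = [[1, 0, 0], [2, 3, 0], [4, 5, 1]]
assert char_poly(A, 1) == 0 and char_poly(A, 3) == 0  # diagonal entries
assert char_poly(A, 2) != 0   # 2 is not on the diagonal, so not an eigenvalue
```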

Question: What are the eigenvalues of a diagonal matrix?

Question: What are the eigenvalues of $\bmat{cc} 0 & 4 \\ 1 & 0 \emat$?

Question: How can we tell whether a matrix $A$ is invertible using eigenvalues?

So we can extend the fundamental theorem with two new entries:

Theorem 4.17: Let $A$ be an $n \times n$ matrix. The following are equivalent:
a. $A$ is invertible.
b. $A \vx = \vb$ has a unique solution for every $\vb \in \R^n$.
c. $A \vx = \vec 0$ has only the trivial (zero) solution.
d. The reduced row echelon form of $A$ is $I_n$.
f. $\rank(A) = n$
g. $\nullity(A) = 0$
h. The columns of $A$ are linearly independent.
i. The columns of $A$ span $\R^n$.
j. The columns of $A$ are a basis for $\R^n$.
k. The rows of $A$ are linearly independent.
l. The rows of $A$ span $\R^n$.
m. The rows of $A$ are a basis for $\R^n$.
n. $\det A \neq 0$
o. $0$ is not an eigenvalue of $A$

Eigenvalues of powers and inverses

Suppose $\vx$ is an eigenvector of $A$ with eigenvalue $\lambda$. What can we say about $A^2$ or $A^3$? If $A$ is invertible, how about the eigenvalues/vectors of $A^{-1}$? On board.

We've shown:

Theorem 4.18: Suppose $\vx$ is an eigenvector of $A$ with eigenvalue $\lambda$. Then, for each positive integer $k$, $\vx$ is an eigenvector of $A^k$ with eigenvalue $\lambda^k$. If $A$ is invertible, then $\vx$ is also an eigenvector of $A^{-1}$ with eigenvalue $1/\lambda$, and so the first statement holds for every integer $k$.

In contrast to some other recent results, this one is very useful computationally:

Example 4.21: Compute $\bmat{rr} 0 & 1 \\ 2 & 1 \emat^{10} \coll 5 1$.

Solution: By finding the eigenspaces of the matrix, we can show that $$ \kern-6ex \bmat{rr} 0 & 1 \\ 2 & 1 \emat \coll 1 {-1} = - \coll 1 {-1} \qtext{and} \bmat{rr} 0 & 1 \\ 2 & 1 \emat \coll 1 2 = 2 \coll 1 2 $$ Write $A = \bmat{rr} 0 & 1 \\ 2 & 1 \emat$, $\vx = \coll 5 1$, $\vv_1 = \coll 1 {-1}$ and $\vv_2 = \coll 1 2$. Since $\vx = 3 \vv_1 + 2 \vv_2$ we have $$ \begin{aligned} A^{10} \vx \ &= A^{10} (3 \vv_1 + 2 \vv_2) = 3 A^{10} \vv_1 + 2 A^{10} \vv_2 \\ &= 3 (-1)^{10} \vv_1 + 2(2^{10}) \vv_2 = \coll {3+2^{11}}{-3+2^{12}} \end{aligned} $$ Much faster than repeated matrix multiplication, especially if $10$ is replaced with $100$.
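(A quick sanity check of this computation, not part of the original notes: apply $A$ ten times to $\vx$ directly and compare with the eigenvector formula.)

```python
# Repeatedly apply A = [[0, 1], [2, 1]] to x = (5, 1), ten times.
A = [[0, 1], [2, 1]]
x = (5, 1)
for _ in range(10):
    x = (A[0][0] * x[0] + A[0][1] * x[1],
         A[1][0] * x[0] + A[1][1] * x[1])

# Eigenvector formula: 3 (-1)^10 v1 + 2 * 2^10 v2 with v1 = (1,-1), v2 = (1,2).
expected = (3 + 2**11, -3 + 2**12)
assert x == expected    # both give (2051, 4093)
```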

This raises an interesting question. In the example, the eigenvectors were a basis for $\R^2$, so we could use this method to compute $A^k \vx$ for any $\vx$. However, last class we saw a $3 \times 3$ matrix with two one-dimensional eigenspaces, so the eigenvectors didn't span $\R^3$. We will study this further in Section 4.4, but right now we can answer a related question about linear independence.

Theorem: If $\vv_1, \vv_2, \ldots, \vv_m$ are eigenvectors of $A$ corresponding to distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_m$, then $\vv_1, \vv_2, \ldots, \vv_m$ are linearly independent.

Proof in case $m = 2$: If $\vv_1$ and $\vv_2$ are linearly dependent, then $\vv_1 = c \vv_2$ for some $c$. Therefore $$ A \vv_1 = A \, c \vv_2 = c A \vv_2 $$ so $$ \lambda_1 \vv_1 = c \lambda_2 \vv_2 = \lambda_2 \vv_1 $$ Since $\vv_1 \neq \vec 0$, this forces $\lambda_1 = \lambda_2$, a contradiction.$\quad\Box$

The general case is very similar; see text.

Next: how to become a Billionaire using the material from this course.