Math 1600A Lecture 27, Section 2, 13 Nov 2013

$ \newcommand{\bdmat}[1]{\left|\begin{array}{#1}} \newcommand{\edmat}{\end{array}\right|} \newcommand{\bmat}[1]{\left[\begin{array}{#1}} \newcommand{\emat}{\end{array}\right]} \newcommand{\coll}[2]{\bmat{r} #1 \\ #2 \emat} \newcommand{\ccoll}[2]{\bmat{c} #1 \\ #2 \emat} \newcommand{\colll}[3]{\bmat{r} #1 \\ #2 \\ #3 \emat} \newcommand{\ccolll}[3]{\bmat{c} #1 \\ #2 \\ #3 \emat} \newcommand{\collll}[4]{\bmat{r} #1 \\ #2 \\ #3 \\ #4 \emat} \newcommand{\ccollll}[4]{\bmat{c} #1 \\ #2 \\ #3 \\ #4 \emat} \newcommand{\colllll}[5]{\bmat{r} #1 \\ #2 \\ #3 \\ #4 \\ #5 \emat} \newcommand{\ccolllll}[5]{\bmat{c} #1 \\ #2 \\ #3 \\ #4 \\ #5 \emat} \newcommand{\red}[1]{{\color{red}#1}} \newcommand{\lra}[1]{\mbox{$\xrightarrow{#1}$}} \newcommand{\rank}{\textrm{rank}} \newcommand{\row}{\textrm{row}} \newcommand{\col}{\textrm{col}} \newcommand{\null}{\textrm{null}} \newcommand{\nullity}{\textrm{nullity}} \renewcommand{\Re}{\operatorname{Re}} \renewcommand{\Im}{\operatorname{Im}} \renewcommand{\Arg}{\operatorname{Arg}} \renewcommand{\arg}{\operatorname{arg}} \newcommand{\adj}{\textrm{adj}} \newcommand{\mystack}[2]{\genfrac{}{}{0}{0}{#1}{#2}} \newcommand{\mystackthree}[3]{\mystack{\mystack{#1}{#2}}{#3}} \newcommand{\qimplies}{\quad\implies\quad} \newcommand{\qtext}[1]{\quad\text{#1}\quad} \newcommand{\qqtext}[1]{\qquad\text{#1}\qquad} \newcommand{\svec}[1]{\,\vec{#1}} \newcommand{\query}[1]{\toggle{\text{?}\vphantom{#1}}{#1}\endtoggle} \newcommand{\smallquery}[1]{\toggle{\text{?}}{#1}\endtoggle} $

Announcements:

Today we finish 4.2 and start 4.3. Continue reading Section 4.3 for Friday and also read Appendix D on polynomials (self-study).
Work through recommended homework questions.

Tutorials: No quiz this week, just review.
Office hour: Wednesday, 12:30-1:30, MC103B.
Help Centers: Monday-Friday 2:30-6:30 in MC 106.

Midterm 2 Solutions are available from the course home page. I don't know the class average yet.

Review Questions

Question: True/false: If $A$ is not invertible, then $AB$ is not invertible.

Question: True/false: $\det(A+B) = \det A + \det B$.

Question: $\det (3 I_2) = \query{3^2 \det I_2 = 3^2 = 9}$

Question: $\bdmat{rrr} 0 & 0 & a \\ 0 & b & c \\ d & e & f \edmat = \query{-\bdmat{rrr} d & e & f \\ 0 & b & c \\ 0 & 0 & a \edmat = -abd \qtext{(not triangular!)}}$

Partial review of last class: Cramer's Rule

Notation: If $A$ is an $n \times n$ matrix and $\vb \in \R^n$, we write $A_i(\vb)$ for the matrix obtained from $A$ by replacing the $i$th column with the vector $\vb$: $$ A_i(\vb) = [ \va_1 \cdots \va_{i-1} \vb \, \va_{i+1} \cdots \va_n \,] $$

Theorem: Let $A$ be an invertible $n \times n$ matrix and let $\vb$ be in $\R^n$. Then the unique solution $\vx$ of the system $A \vx = \vb$ has components $$ x_i = \frac{\det(A_i(\vb))}{\det A},\quad \text{for } i = 1, \ldots, n $$
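
As a quick illustration (my own small example, not one from the text): take $A = \bmat{rr} 1 & 2 \\ 3 & 4 \emat$ and $\vb = \coll{5}{11}$. Then $$ \det A = -2, \qquad \det(A_1(\vb)) = \bdmat{rr} 5 & 2 \\ 11 & 4 \edmat = -2, \qquad \det(A_2(\vb)) = \bdmat{rr} 1 & 5 \\ 3 & 11 \edmat = -4, $$ so Cramer's Rule gives $x_1 = \frac{-2}{-2} = 1$ and $x_2 = \frac{-4}{-2} = 2$, which is easily checked to solve $A \vx = \vb$.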

New material: Matrix Inverse using the Adjoint

Suppose $A$ is invertible. We'll use Cramer's rule to find a formula for $X = A^{-1}$. We know that $AX = I$, so the $j$th column of $X$ satisfies $A \vx_j = \ve_j$. By Cramer's Rule, $$ x_{ij} = \frac{\det(A_i(\ve_j))}{\det A} $$ By expanding along the $i$th column, we see that $$ \det(A_i(\ve_j)) = C_{ji} $$ So $$ x_{ij} = \frac{1}{\det A} C_{ji},\qtext{i.e.,} X = \frac{1}{\det A} [C_{ij}]^T $$ The matrix $$ \kern-5ex \adj A := [C_{ji}] = [C_{ij}]^T = \bmat{cccc} C_{11} & C_{21} & \cdots & C_{n1} \\ C_{12} & C_{22} & \cdots & C_{n2} \\ \vdots & \vdots & \ddots & \vdots \\ C_{1n} & C_{2n} & \cdots & C_{nn} \emat $$ is called the adjoint of $A$.
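
As a quick check of the key step $\det(A_i(\ve_j)) = C_{ji}$ (a small illustration, not from the text): if $A = \bmat{rr} a & b \\ c & d \emat$, then $A_1(\ve_2) = \bmat{rr} 0 & b \\ 1 & d \emat$, and expanding along the first column gives $\det(A_1(\ve_2)) = -b$, which is exactly $C_{21} = -\det[b] = -b$.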

Theorem: If $A$ is an invertible matrix, then $$ A^{-1} = \frac{1}{\det A} \adj A $$

Example: If $A = \bmat{rr} a & b \\ c & d \emat$, then the cofactors are $$ \begin{aligned} C_{11} &= + \det [d] = +d & C_{12} &= - \det [c] = -c \\ C_{21} &= - \det [b] = -b & C_{22} &= + \det [a] = +a \\ \end{aligned} $$ so the adjoint matrix is $$ \adj A = \bmat{rr} d & -b \\ -c & a \emat $$ and $$ A^{-1} = \frac{1}{\det A} \adj A = \frac{1}{\det A} \bmat{rr} d & -b \\ -c & a \emat $$ as we saw before.

See Example 4.17 in the text for a $3 \times 3$ example. Again, this is not generally a good computational approach. Its importance is theoretical.

Section 4.3: Eigenvalues and Eigenvectors

Recall from Section 4.1:

Definition: Let $A$ be an $n \times n$ matrix. A scalar $\lambda$ (lambda) is called an eigenvalue of $A$ if there is a nonzero vector $\vx$ such that $A \vx = \lambda \vx$. Such a vector $\vx$ is called an eigenvector of $A$ corresponding to $\lambda$.
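
For example (a quick illustration, not an example from the text): if $A = \bmat{rr} 2 & 1 \\ 1 & 2 \emat$ and $\vx = \coll{1}{1}$, then $A \vx = \coll{3}{3} = 3 \vx$, so $\lambda = 3$ is an eigenvalue of $A$ and $\vx$ is an eigenvector corresponding to $\lambda = 3$.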

The eigenvectors for a given eigenvalue $\lambda$ are the nonzero solutions to $(A - \lambda I) \vx = \vec 0$, since $A \vx = \lambda \vx$ is equivalent to $(A - \lambda I) \vx = \vec 0$.

Definition: The collection of all solutions to $(A - \lambda I) \vx = \vec 0$ is a subspace called the eigenspace of $\lambda$ and is denoted $E_\lambda$. In other words, $$ E_\lambda = \null(A - \lambda I) . $$ It consists of the eigenvectors plus the zero vector.

By the fundamental theorem of invertible matrices, $A - \lambda I$ has a nontrivial null space if and only if it is not invertible, and we now know that this is the case if and only if $\det (A - \lambda I) = 0$.

The expression $\det (A - \lambda I)$ is always a polynomial in $\lambda$. For example, when $A = \bmat{rr} a & b \\ c & d \emat$, $$ \kern-8ex \det(A- \lambda I) = \bdmat{cc} a-\lambda & b \\ c & d-\lambda \edmat = (a - \lambda)(d-\lambda) - bc = \lambda^2 - (a+d)\lambda + (ad - bc) $$ If $A$ is $3 \times 3$, then $$ \kern-8ex \det(A - \lambda I) = (a_{11} - \lambda) \bdmat{cc} a_{22} - \lambda\! & a_{23} \\ a_{32} & \!a_{33} - \lambda \edmat - a_{12} \bdmat{cc} a_{21} & a_{23} \\ a_{31} & \!a_{33} - \lambda \edmat + a_{13} \bdmat{cc} a_{21} & \!a_{22} -\lambda \\ a_{31} & a_{32} \edmat $$ which is a degree 3 polynomial in $\lambda$.

Similarly, if $A$ is $n \times n$, $\det (A - \lambda I)$ will be a degree $n$ polynomial in $\lambda$. It is called the characteristic polynomial of $A$, and $\det (A - \lambda I) = 0$ is called the characteristic equation.

Finding eigenvalues and eigenspaces: Let $A$ be an $n \times n$ matrix.

1. Compute the characteristic polynomial $\det(A - \lambda I)$.
2. Find the eigenvalues of $A$ by solving the characteristic equation $\det(A - \lambda I) = 0$.
3. For each eigenvalue $\lambda$, find a basis for $E_\lambda = \null (A - \lambda I)$ by solving the system $(A - \lambda I) \vx = \vec 0$.

So we need to get good at solving polynomial equations. Solutions are called zeros or roots.

Theorem D.4 (The Fundamental Theorem of Algebra): A polynomial of degree $n$ has at most $n$ distinct roots.

Therefore:

Theorem: An $n \times n$ matrix $A$ has at most $n$ distinct eigenvalues.

Also:

Theorem D.2 (The Factor Theorem): Let $f$ be a polynomial and let $a$ be a constant. Then $a$ is a zero of $f(x)$ (i.e. $f(a) = 0$) if and only if $x - a$ is a factor of $f(x)$ (i.e. $f(x) = (x - a) g(x)$ for some polynomial $g$).
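
For example (a quick illustration): $f(x) = x^2 - 5x + 6$ has $f(2) = 0$, and indeed $x - 2$ is a factor, since $x^2 - 5x + 6 = (x - 2)(x - 3)$.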

Example 4.18: Find the eigenvalues and eigenspaces of $A = \bmat{rrr} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 2 & -5 & 4 \emat$.

Solution: 1. On whiteboard, compute the characteristic polynomial: $$ \det (A - \lambda I) = - \lambda^3 + 4 \lambda^2 - 5 \lambda + 2 $$ 2. To find the roots, it is often worth trying a few small integers to start. We see that $\lambda = 1$ works. So by the factor theorem, we know $\lambda - 1$ is a factor: $$ - \lambda^3 + 4 \lambda^2 - 5 \lambda + 2 = (\lambda - 1)(\query{-} \lambda^2 + \query{3} \lambda \toggle{+ \text{?}}{-2}\endtoggle) $$ Now we need to find roots of $-\lambda^2 + 3 \lambda - 2$. Again, $\lambda = 1$ works, and this factors as $-(\lambda - 1)(\lambda - 2)$. So $$ \det (A - \lambda I) = - \lambda^3 + 4 \lambda^2 - 5 \lambda + 2 = - (\lambda - 1)^2 (\lambda - 2) $$ and the roots are $\lambda = 1$ and $\lambda = 2$.
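
For reference, here is one way to do the whiteboard computation in step 1, expanding along the first row: $$ \kern-8ex \det(A - \lambda I) = \bdmat{ccc} -\lambda & 1 & 0 \\ 0 & -\lambda & 1 \\ 2 & -5 & 4-\lambda \edmat = -\lambda \bdmat{cc} -\lambda & 1 \\ -5 & 4-\lambda \edmat - 1 \bdmat{cc} 0 & 1 \\ 2 & 4-\lambda \edmat = -\lambda(\lambda^2 - 4\lambda + 5) + 2 = -\lambda^3 + 4\lambda^2 - 5\lambda + 2 . $$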

3. To find the $\lambda = 1$ eigenspace, we do row reduction: $$ [A - I \mid 0\,] = \bmat{rrr|r} -1 & 1 & 0 & 0 \\ 0 & -1 & 1 & 0 \\ 2 & -5 & 3 & 0 \emat \lra{} \bmat{rrr|r} -1 & 0 & 1 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 \emat $$ We find that $x_3 = t$ is free and $x_1 = x_2 = x_3$, so $$ E_1 = \left\{ \colll t t t \right\} = \span \left( \colll 1 1 1 \right) $$ So $\colll 1 1 1$ is a basis of the eigenspace corresponding to $\lambda = 1$.

Finding a basis for $E_2$ is similar; see text.
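
For completeness, here is a sketch of that computation (check it against the text): $$ [A - 2I \mid 0\,] = \bmat{rrr|r} -2 & 1 & 0 & 0 \\ 0 & -2 & 1 & 0 \\ 2 & -5 & 2 & 0 \emat \lra{} \bmat{rrr|r} -2 & 1 & 0 & 0 \\ 0 & -2 & 1 & 0 \\ 0 & 0 & 0 & 0 \emat $$ With $x_3 = t$ free, back substitution gives $x_2 = t/2$ and $x_1 = t/4$, so taking $t = 4$ we find $E_2 = \span \left( \colll 1 2 4 \right)$, and one can check directly that $A \colll 1 2 4 = 2 \colll 1 2 4$.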

A root $a$ of a polynomial $f$ implies that $f(x) = (x-a) g(x)$. Sometimes, $a$ is also a root of $g(x)$, as we found above. Then $f(x) = (x-a)^2 h(x)$. The largest $k$ such that $(x-a)^k$ is a factor of $f$ is called the multiplicity of the root $a$ in $f$.

In the case of an eigenvalue, we call its multiplicity in the characteristic polynomial the algebraic multiplicity of this eigenvalue.

In the previous example, $\lambda = 1$ has algebraic multiplicity 2 and $\lambda = 2$ has algebraic multiplicity 1.

We also define the geometric multiplicity of an eigenvalue $\lambda$ to be the dimension of the corresponding eigenspace. In the previous example, $\lambda = 1$ has geometric multiplicity 1 (and so does $\lambda = 2$).

Example 4.19: Find the eigenvalues and eigenspaces of $A = \bmat{rrr} -1 & 0 & 1 \\ 3 & 0 & -3 \\ 1 & 0 & -1 \emat$. Do partially, on whiteboard.
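
Here is one way to carry out the whiteboard computation: expanding $\det(A - \lambda I)$ along the second column (whose only nonzero entry is the $-\lambda$ in position $(2,2)$) gives $$ \kern-8ex \det(A - \lambda I) = \bdmat{ccc} -1-\lambda & 0 & 1 \\ 3 & -\lambda & -3 \\ 1 & 0 & -1-\lambda \edmat = -\lambda \bdmat{cc} -1-\lambda & 1 \\ 1 & -1-\lambda \edmat = -\lambda(\lambda^2 + 2\lambda) = -\lambda^2(\lambda + 2) , $$ so the eigenvalues are $\lambda = 0$ and $\lambda = -2$. For $\lambda = 0$, every row of $A$ is a multiple of the first row, so $(A - 0I) \vx = \vec 0$ reduces to the single equation $-x_1 + x_3 = 0$, and $E_0 = \null(A) = \span \left( \colll 1 0 1 , \colll 0 1 0 \right)$, which has dimension 2.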

In this case, we find that $\lambda = 0$ has algebraic multiplicity 2 and geometric multiplicity 2.

These multiplicities will be important in Section 4.4.