Math 1600 Lecture 23, Section 2, 29 Oct 2014

$ \newcommand{\bdmat}[1]{\left|\begin{array}{#1}} \newcommand{\edmat}{\end{array}\right|} \newcommand{\bmat}[1]{\left[\begin{array}{#1}} \newcommand{\emat}{\end{array}\right]} \newcommand{\coll}[2]{\bmat{r} #1 \\ #2 \emat} \newcommand{\ccoll}[2]{\bmat{c} #1 \\ #2 \emat} \newcommand{\colll}[3]{\bmat{r} #1 \\ #2 \\ #3 \emat} \newcommand{\ccolll}[3]{\bmat{c} #1 \\ #2 \\ #3 \emat} \newcommand{\collll}[4]{\bmat{r} #1 \\ #2 \\ #3 \\ #4 \emat} \newcommand{\ccollll}[4]{\bmat{c} #1 \\ #2 \\ #3 \\ #4 \emat} \newcommand{\colllll}[5]{\bmat{r} #1 \\ #2 \\ #3 \\ #4 \\ #5 \emat} \newcommand{\ccolllll}[5]{\bmat{c} #1 \\ #2 \\ #3 \\ #4 \\ #5 \emat} \newcommand{\red}[1]{{\color{red}#1}} \newcommand{\lra}[1]{\mbox{$\xrightarrow{#1}$}} \newcommand{\rank}{\textrm{rank}} \newcommand{\row}{\textrm{row}} \newcommand{\col}{\textrm{col}} \newcommand{\null}{\textrm{null}} \newcommand{\nullity}{\textrm{nullity}} \renewcommand{\Re}{\operatorname{Re}} \renewcommand{\Im}{\operatorname{Im}} \renewcommand{\Arg}{\operatorname{Arg}} \renewcommand{\arg}{\operatorname{arg}} \newcommand{\adj}{\textrm{adj}} \newcommand{\mystack}[2]{\genfrac{}{}{0}{0}{#1}{#2}} \newcommand{\mystackthree}[3]{\mystack{\mystack{#1}{#2}}{#3}} \newcommand{\qimplies}{\quad\implies\quad} \newcommand{\qtext}[1]{\quad\text{#1}\quad} \newcommand{\qqtext}[1]{\qquad\text{#1}\qquad} \newcommand{\smalltext}[1]{{\small\text{#1}}} \newcommand{\svec}[1]{\,\vec{#1}} \newcommand{\querytext}[1]{\toggle{\text{?}\vphantom{\text{#1}}}{\text{#1}}\endtoggle} \newcommand{\query}[1]{\toggle{\text{?}\vphantom{#1}}{#1}\endtoggle} \newcommand{\smallquery}[1]{\toggle{\text{?}}{#1}\endtoggle} \newcommand{\bv}{\mathbf{v}} %\require{AMScd} $

Announcements:

Read Section 4.2 for next class (Monday). Work through recommended homework questions.

Midterms will be returned in tutorials next week. Grades should be posted to OWL by today. Solutions will be posted soon. Please do not e-mail us about grades until you have seen your midterm.

Tutorials: Next week's quiz covers material up to wherever we get on Monday, focusing on the material covered after the midterm. Details Monday.

Help Centers: Monday-Friday 2:30-6:30 in MC 106, but not Thursday and Friday this week (fall break).

Review of last lecture: Section 3.6: Linear Transformations

New linear transformations from old

If $T : \R^m \to \R^\red{n}$ and $S : \R^\red{n} \to \R^p$, then the composition of $S$ and $T$ is the transformation $S \circ T : \R^m \to \R^p$ defined by $$ (S \circ T)(\vx) = S(T(\vx)) . $$ If $S$ and $T$ are linear, it is easy to check that this new transformation $S \circ T$ is automatically linear.

Theorem 3.32: $[S \circ T] = [S][T]$.
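For example (a quick check of the theorem, with matrices chosen just for illustration): if $[T] = \bmat{rr} 1 & 1 \\ 0 & 1 \emat$ and $[S] = \bmat{rr} 2 & 0 \\ 0 & 3 \emat$, then for every $\vx$ $$ \kern-6ex (S \circ T)(\vx) = S(T(\vx)) = \bmat{rr} 2 & 0 \\ 0 & 3 \emat \left( \bmat{rr} 1 & 1 \\ 0 & 1 \emat \vx \right) = \bmat{rr} 2 & 2 \\ 0 & 3 \emat \vx , $$ so $[S \circ T] = \bmat{rr} 2 & 2 \\ 0 & 3 \emat = [S][T]$.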

We write $I$ for the identity transformation, which is defined by $I(\vx) = \vx$. Then $[I] = I$, the identity matrix.

Inverses of Linear Transformations

Definition: Let $S$ and $T$ be linear transformations from $\R^\red{n}$ to $\R^\red{n}$. Then $S$ and $T$ are inverse transformations if $S \circ T = I$ and $T \circ S = I$. When this is the case, we say that $S$ and $T$ are invertible and are inverses.

The same argument as for matrices shows that an inverse is unique when it exists, so we write $S = T^{-1}$ and $T = S^{-1}$.
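(Recall the argument: if $S$ and $S'$ are both inverses of $T$, then $$ S = S \circ I = S \circ (T \circ S') = (S \circ T) \circ S' = I \circ S' = S' , $$ so the two inverses must be equal.)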

Theorem 3.33: Let $T : \R^n \to \R^n$ be a linear transformation. Then $T$ is invertible if and only if $[T]$ is an invertible matrix. In this case, $[T^{-1}] = [T]^{-1}$.
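As a quick illustration (with a matrix chosen just for this example): if $[T] = \bmat{rr} 1 & 1 \\ 0 & 1 \emat$, then $[T]$ is invertible, so $T$ is invertible and $$ [T^{-1}] = [T]^{-1} = \bmat{rr} 1 & -1 \\ 0 & 1 \emat . $$ Indeed, $[T^{-1}][T] = I$, which says that $T^{-1} \circ T = I$.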

Review continued: Section 3.7: Markov Chains

A Markov chain has states $1, 2, \ldots, n$ and transition probabilities $P_{ij}$ for moving from state $j$ to state $i$ at each time step. These probabilities are constant and depend only on the current state.

$P$ is a stochastic matrix, which means that it is square, has non-negative entries, and the columns each sum to $1$.

If $\vx_k$ is the state vector after $k$ time steps, then $\vx_{k+1} = P \vx_k$.
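For example, with the $2$-state stochastic matrix $P = \bmat{rr} 0.8 & 0.3 \\ 0.2 & 0.7 \emat$ (chosen just for illustration) and initial state vector $\vx_0 = \ccoll{1}{0}$, $$ \kern-6ex \vx_1 = P \vx_0 = \ccoll{0.8}{0.2} , \qquad \vx_2 = P \vx_1 = \ccoll{0.7}{0.3} , \qquad \ldots $$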

A state vector $\vx$ with $P \vx = \vx$ is called a steady state vector. That is, $\vx - P\vx = \vec 0$, or $(I - P) \vx = \vec 0$. To find a non-trivial steady state vector for this Markov chain, we solve the homogeneous system with coefficient matrix $I-P$.
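Continuing the illustrative example above, a non-trivial steady state vector for $P = \bmat{rr} 0.8 & 0.3 \\ 0.2 & 0.7 \emat$ is found by solving $(I - P)\vx = \vec 0$: $$ \kern-6ex I - P = \bmat{rr} 0.2 & -0.3 \\ -0.2 & 0.3 \emat \lra{} \bmat{rr} 1 & -3/2 \\ 0 & 0 \emat , \qtext{so} \vx = t \coll 3 2 . $$ Scaling so that the entries sum to $1$ gives $\vx = \ccoll{0.6}{0.4}$, and one checks directly that $P \vx = \vx$.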

Theorem: Every Markov chain has a non-trivial steady state vector.

This appears in the book as Theorem 4.30 in Section 4.6, but I proved it in class.

New material: Section 4.1: Eigenvalues and eigenvectors

We saw when studying Markov chains that it was important to find solutions to the system $A \vx = \vx$, where $A$ is a square matrix. We did this by solving $(I-A)\vx = \vec 0$.

More generally, a central problem in linear algebra is to find $\vx$ such that $A \vx$ is a scalar multiple of $\vx$.

Definition: Let $A$ be an $\red{n} \times \red{n}$ matrix. A scalar $\lambda$ (lambda) is called an eigenvalue of $A$ if there is a nonzero vector $\vx$ such that $A \vx = \lambda \vx$. Such a vector $\vx$ is called an eigenvector of $A$ corresponding to $\lambda$.

We showed that $\lambda = 1$ is an eigenvalue of every stochastic matrix $P$.
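One way to see this (a sketch, possibly different from the argument given in class): since each column of $P$ sums to $1$, each column of $P - I$ sums to $0$, so the rows of $P - I$ add up to the zero row. The rows of $P - I$ are therefore linearly dependent, so $(P - I)\vx = \vec 0$ has a non-trivial solution $\vx$, and any such $\vx$ satisfies $P \vx = \vx = 1\,\vx$.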

Example A: Since $$ \bmat{rr} 1 & 2 \\ 2 & -2 \emat \coll 2 1 = \coll 4 2 = 2 \coll 2 1 , $$ we see that $2$ is an eigenvalue of $\bmat{rr} 1 & 2 \\ 2 & -2 \emat$ with eigenvector $\coll 2 1$.

Example 4.2: Show that $5$ is an eigenvalue of $A = \bmat{rr} 1 & 2 \\ 4 & 3 \emat$ and determine all eigenvectors corresponding to this eigenvalue.

Solution: We are looking for nonzero solutions to $A \vx = 5 \vx$. This is the same as $(A - 5I) \vx = \vec 0$, so we compute the coefficient matrix: $$ \kern-7ex A - 5I = \bmat{rr} 1 & 2 \\ 4 & 3 \emat - \bmat{rr} 5 & 0 \\ 0 & 5 \emat = \bmat{rr} -4 & 2 \\ 4 & -2 \emat $$ The columns are linearly dependent, so the null space of $A-5I$ is nonzero. So $A \vx = 5 \vx$ has a nontrivial solution, which is what it means for $5$ to be an eigenvalue.

To find the eigenvectors, we compute the null space of $A - 5I$: $$ \kern-7ex [\, A-5I \mid \vec 0\,] = \bmat{rr|r} -4 & 2 & 0 \\ 4 & -2 & 0 \emat \lra{} \bmat{rr|r} 1 & -1/2 & 0 \\ 0 & 0 & 0 \emat $$ The solutions are of the form $\ccoll {t/2} t = t \ccoll {1/2} 1$. So the eigenvectors for the eigenvalue $5$ are the nonzero multiples of ${\ccoll {1/2} 1}^{\strut}$.
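As a quick check: $$ \kern-6ex \bmat{rr} 1 & 2 \\ 4 & 3 \emat \ccoll{1/2}{1} = \ccoll{5/2}{5} = 5 \ccoll{1/2}{1} , $$ as expected.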

Definition: Let $A$ be an $n \times n$ matrix and let $\lambda$ be an eigenvalue of $A$. The collection of all eigenvectors corresponding to $\lambda$, together with the zero vector, is a subspace called the eigenspace of $\lambda$ and is denoted $E_\lambda$. In other words, $$ E_\lambda = \null(A - \lambda I) . $$

In the above Example, $E_5 = \span\left\{ \ccoll {1/2} 1 \right\}$.

Example: Give an eigenvalue of the matrix $A = \bmat{rr} 2 & 0 \\ 0 & 2 \emat$ and compute its eigenspace.

Example: If $0$ is an eigenvalue of $A$, what is another name for $E_0$?

[Interactive applet illustrating the transformation $T_A : \R^2 \to \R^2$, for $A$ the $2 \times 2$ matrix shown. The black vector is the input $\vx$, and the blue vector is the output $\color{blue} T_A(\vx) = A \vx$. Preset matrices: reflection in the $x$-axis; reflection in the $y$-axis; projection onto the $x$-axis; rotation by $90^\circ$ ccw; rotate and scale; Example A from above; a rank 1 example; or a custom matrix entered as four numbers separated by spaces. Click to move the input vector, hit 't' to toggle modes, and click a phrase to change the matrix.]

Other applets: See also this java applet. (Instructions.) If that doesn't work, here is another applet.

Read Example 4.3 in the text for a $3 \times 3$ example.

Finding eigenvalues

Given a specific number $\lambda$, we now know how to check whether $\lambda$ is an eigenvalue: we check whether $A - \lambda I$ has a nontrivial null space. And we can find the eigenvectors by finding the null space.

We also have a geometric way to find all eigenvalues $\lambda$, at least in the $2 \times 2$ case. Is there an algebraic way to check all $\lambda$ at once?

By the fundamental theorem of invertible matrices, $A - \lambda I$ has a nontrivial null space if and only if it is not invertible. For $2 \times 2$ matrices, we can check invertibility using the determinant!

Example: Find all eigenvalues of $A = \bmat{rr} 1 & 2 \\ 2 & -2 \emat$.

Solution: We need to find all $\lambda$ such that $\det(A-\lambda I) = 0$. $$ \kern-6ex \begin{aligned} \det(A-\lambda I) &= \det \bmat{cc} 1-\lambda & 2 \\ 2 & -2-\lambda \emat \\ &= (1-\lambda)(-2-\lambda)-4 = \lambda^2 + \lambda - 6 , \end{aligned} $$ so we need to solve the quadratic equation $\lambda^2 + \lambda - 6 = 0$. This can be factored as $(\lambda - 2)(\lambda + 3) = 0$, and so $\lambda = 2$ or $\lambda = -3$, the same as we saw above and with the applet.

We proceed to find the eigenvectors for these eigenvalues, by solving $(A - 2I) \vx = \vec 0$ and $(A + 3I) \vx = \vec 0$. On board, if time.
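For reference, here is a sketch of those computations (the first agrees with Example A above): $$ \kern-6ex A - 2I = \bmat{rr} -1 & 2 \\ 2 & -4 \emat \lra{} \bmat{rr} 1 & -2 \\ 0 & 0 \emat , \qtext{so} E_2 = \span\left\{ \coll 2 1 \right\} , $$ $$ \kern-6ex A + 3I = \bmat{rr} 4 & 2 \\ 2 & 1 \emat \lra{} \bmat{rr} 1 & 1/2 \\ 0 & 0 \emat , \qtext{so} E_{-3} = \span\left\{ \ccoll{-1/2}{1} \right\} . $$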

Appendix D provides a review of polynomials and their roots. Look it over now. We'll discuss it in Lecture 26.

See also Example 4.5 in text.

So now we know how to handle the $2 \times 2$ case. To handle larger matrices, we need to learn about their determinants, which is Section 4.2.

We won't discuss eigenvectors and eigenvalues for matrices over $\Z_m$. We will discuss complex numbers $\C$ in a later lecture.