Math 1600 Lecture 21, Section 2, 24 Oct 2014

$ \newcommand{\bdmat}[1]{\left|\begin{array}{#1}} \newcommand{\edmat}{\end{array}\right|} \newcommand{\bmat}[1]{\left[\begin{array}{#1}} \newcommand{\emat}{\end{array}\right]} \newcommand{\coll}[2]{\bmat{r} #1 \\ #2 \emat} \newcommand{\ccoll}[2]{\bmat{c} #1 \\ #2 \emat} \newcommand{\colll}[3]{\bmat{r} #1 \\ #2 \\ #3 \emat} \newcommand{\ccolll}[3]{\bmat{c} #1 \\ #2 \\ #3 \emat} \newcommand{\collll}[4]{\bmat{r} #1 \\ #2 \\ #3 \\ #4 \emat} \newcommand{\ccollll}[4]{\bmat{c} #1 \\ #2 \\ #3 \\ #4 \emat} \newcommand{\colllll}[5]{\bmat{r} #1 \\ #2 \\ #3 \\ #4 \\ #5 \emat} \newcommand{\ccolllll}[5]{\bmat{c} #1 \\ #2 \\ #3 \\ #4 \\ #5 \emat} \newcommand{\red}[1]{{\color{red}#1}} \newcommand{\lra}[1]{\mbox{$\xrightarrow{#1}$}} \newcommand{\rank}{\textrm{rank}} \newcommand{\row}{\textrm{row}} \newcommand{\col}{\textrm{col}} \newcommand{\null}{\textrm{null}} \newcommand{\nullity}{\textrm{nullity}} \renewcommand{\Re}{\operatorname{Re}} \renewcommand{\Im}{\operatorname{Im}} \renewcommand{\Arg}{\operatorname{Arg}} \renewcommand{\arg}{\operatorname{arg}} \newcommand{\adj}{\textrm{adj}} \newcommand{\mystack}[2]{\genfrac{}{}{0}{0}{#1}{#2}} \newcommand{\mystackthree}[3]{\mystack{\mystack{#1}{#2}}{#3}} \newcommand{\qimplies}{\quad\implies\quad} \newcommand{\qtext}[1]{\quad\text{#1}\quad} \newcommand{\qqtext}[1]{\qquad\text{#1}\qquad} \newcommand{\smalltext}[1]{{\small\text{#1}}} \newcommand{\svec}[1]{\,\vec{#1}} \newcommand{\querytext}[1]{\toggle{\text{?}\vphantom{\text{#1}}}{\text{#1}}\endtoggle} \newcommand{\query}[1]{\toggle{\text{?}\vphantom{#1}}{#1}\endtoggle} \newcommand{\smallquery}[1]{\toggle{\text{?}}{#1}\endtoggle} \newcommand{\bv}{\mathbf{v}} %\require{AMScd} $

Announcements:

Read the Markov chains part of Section 3.7 for next class. Work through the recommended homework questions.

Extra Midterm Review: Today, 4:30-6:00pm, MC105B. Bring questions.

Midterm: Saturday, October 25, 7-10pm. Rooms, based on first letter of last name: A-E: UCC37. F-Ma: UCC56 (this room). Mc-Z: UCC146. Be sure to write in the correct room! It will cover the material up to and including Monday's lecture. Review the policies about illness on course website.

Help Centers: Monday-Friday 2:30-6:30 in MC106.

Last class, we finished Section 3.5. That was a key section, so please study it carefully. We won't use that material today, so I will jump right into Section 3.6.

Section 3.6: Linear Transformations

Given an $m \times n$ matrix $A$, we can use $A$ to transform a column vector in $\R^n$ into a column vector in $\R^m$. We write: $$ T_A(\vx) = A \vx \quad\text{for $\vx$ in $\R^n$} $$

Example: If $ A = \bmat{rr} 0 & 1 \\ 2 & 3 \\ 4 & 5 \emat $ then $$ \kern-9ex T_A\left(\coll {\!-1} 2\right) = A \coll {\!-1} 2 = \bmat{rr} 0 & 1 \\ 2 & 3 \\ 4 & 5 \emat \coll {-1} 2 = -1 \colll 0 2 4 + 2 \colll 1 3 5 = \colll 2 4 6 $$ In general, $$ \kern-9ex T_A \left( \coll x y \right) = A \coll x y = \bmat{rr} 0 & 1 \\ 2 & 3 \\ 4 & 5 \emat \coll x y = x \colll 0 2 4 + y \colll 1 3 5 = \colll y {2x+3y} {4x + 5y} $$ Note that the matrix $A$ is visible in the last expression.

Here is an applet giving many examples.

Any rule $T$ that assigns to each $\vx$ in $\R^n$ a unique vector $T(\vx)$ in $\R^m$ is called a transformation from $\R^n$ to $\R^m$ and is written $T : \R^n \to \R^m$.

For our $A$ above, we have $T_A : \R^2 \to \R^3$. $T_A$ is in fact a linear transformation.

Definition: A transformation $T : \R^n \to \R^m$ is called a linear transformation if:
1. $T(\vu + \vv) = T(\vu) + T(\vv)$ for all $\vu$ and $\vv$ in $\R^n$, and
2. $T(c \vu) = c \, T(\vu)$ for all $\vu$ in $\R^n$ and all scalars $c$.

You can check directly that our $T_A$ is linear. For example, $$ \kern-9ex T_A \left( c \coll x y \right) = T_A \left( \coll {cx} {cy} \right) = \colll {cy} {2cx + 3cy} {4cx + 5cy} = c \colll y {2x+3y} {4x + 5y} = c \, T_A \left( \coll x y \right) $$ Check condition (1) yourself, or see Example 3.55.
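If you want to compare your answer, here is one way the check of condition (1) can go for this $T_A$: $$ \kern-9ex T_A \left( \coll {x_1} {y_1} + \coll {x_2} {y_2} \right) = T_A \left( \coll {x_1 + x_2} {y_1 + y_2} \right) = \colll {y_1 + y_2} {2(x_1+x_2) + 3(y_1+y_2)} {4(x_1+x_2) + 5(y_1+y_2)} = \colll {y_1} {2x_1+3y_1} {4x_1+5y_1} + \colll {y_2} {2x_2+3y_2} {4x_2+5y_2} = T_A \left( \coll {x_1} {y_1} \right) + T_A \left( \coll {x_2} {y_2} \right) $$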

In fact, every $T_A$ is linear:

Theorem 3.30: Let $A$ be an $m \times n$ matrix. Then $T_A : \R^n \to \R^m$ is a linear transformation.

Proof: Let $\vu$ and $\vv$ be vectors in $\R^n$ and let $c \in \R$. Then $$ T_A(\vu + \vv) = A(\vu + \vv) = A \vu + A \vv = T_A(\vu) + T_A(\vv) $$ and $$ T_A(c \vu) = A(c \vu) = c \, A \vu = c \, T_A(\vu) \qquad\Box $$

Example 3.56: Let $F : \R^2 \to \R^2$ be the transformation that sends each point to its reflection in the $x$-axis. Show that $F$ is linear.

Solution: We need to show that $$ F(\vu + \vv) = F(\vu) + F(\vv) \qtext{and} F(c \vu) = c \, F(\vu) $$ Give a geometrical explanation on the board.

Algebraically, note that $F(\coll x y) = \coll x {-y}$, from which you can check directly that $F$ is linear. (Exercise.)
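A sketch of that check: $$ \kern-9ex F\left(\coll {x_1} {y_1} + \coll {x_2} {y_2}\right) = F\left(\coll {x_1+x_2} {y_1+y_2}\right) = \coll {x_1+x_2} {-(y_1+y_2)} = \coll {x_1} {-y_1} + \coll {x_2} {-y_2} = F\left(\coll {x_1} {y_1}\right) + F\left(\coll {x_2} {y_2}\right) $$ and similarly $F\left(c \coll x y\right) = \coll {cx} {-cy} = c \coll x {-y} = c \, F\left(\coll x y\right)$.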

Or, observe that $F(\coll x y) = \bmat{rr} 1 & 0 \\ 0 & -1 \emat \coll x y$, so $F = T_A$ where $A = \bmat{rr} 1 & 0 \\ 0 & -1 \emat$.

Example: Let $N : \R^2 \to \R^2$ be the transformation $$ N \left( \coll x y \right) := \coll {xy} {x+y} $$ Is $N$ linear?
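One way to decide: test condition (2) on a specific vector. For example, $$ N\left(2 \coll 1 1\right) = N\left(\coll 2 2\right) = \coll 4 4 \qtext{but} 2\, N\left(\coll 1 1\right) = 2 \coll 1 2 = \coll 2 4 , $$ so $N$ is not linear. (The term $xy$ is the culprit.)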

It turns out that every linear transformation is a matrix transformation.

Theorem 3.31: Let $T : \R^n \to \R^m$ be a linear transformation. Then $T = T_A$, where $$ A = [\, T(\ve_1) \mid T(\ve_2) \mid \cdots \mid T(\ve_n) \,] $$

Proof: We just check: $$ \kern-4ex \begin{aligned} T(\vx)\ &= T(x_1 \ve_1 + \cdots + x_n \ve_n) \\ &= x_1 T(\ve_1) + \cdots + x_n T(\ve_n) \qtext{since $T$ is linear} \\ &= [\, T(\ve_1) \mid T(\ve_2) \mid \cdots \mid T(\ve_n) \,] \colll {x_1} {\vdots} {x_n} \\ &= A \vx = T_A(\vx) \qquad\qquad\Box \end{aligned} $$

The matrix $A$ is called the standard matrix of $T$ and is written $[T]$.

Example: Consider the transformation $T : \R^3 \to \R^2$ defined by $$ T\left(\colll x y z\right) = \coll {2x + 3y - z} {y + z} . $$ Is $T$ linear? If so, find $[T]$. On board.
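For comparison with the board work: $T$ is linear (this can be checked directly, as above), and Theorem 3.31 gives $$ [T] = [\, T(\ve_1) \mid T(\ve_2) \mid T(\ve_3) \,] = \bmat{rrr} 2 & 3 & -1 \\ 0 & 1 & 1 \emat . $$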

Example 3.58: Let $R_\theta : \R^2 \to \R^2$ be rotation by an angle $\theta$ counterclockwise about the origin. Show that $R_\theta$ is linear and find its standard matrix.

Solution: We need to show that $$ \kern-6ex R_\theta(\vu + \vv) = R_\theta(\vu) + R_\theta(\vv) \qtext{and} R_\theta(c \vu) = c \, R_\theta(\vu) $$ A geometric argument shows that $R_\theta$ is linear. On board.

To find the standard matrix, note that $\ve_1$ and $\ve_2$ are the unit vectors at angles $0$ and $90^\circ$, so rotating by $\theta$ sends them to the unit vectors at angles $\theta$ and $\theta + 90^\circ$: $$ \kern-6ex R_\theta \left( \coll 1 0 \right) = \coll {\cos \theta} {\sin \theta} \qqtext{and} R_\theta \left( \coll 0 1 \right) = \coll {-\sin \theta} {\cos \theta} $$ Therefore, the standard matrix of $R_\theta$ is $\bmat{rr} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \emat$.

Now that we know the matrix, we can compute rotations of arbitrary vectors. For example, to rotate the point $(2, -1)$ by $60^\circ$: $$ \kern-7ex \begin{aligned} R_{60} \left( \coll 2 {-1} \right) \ &= \bmat{rr} \cos 60^\circ & -\sin 60^\circ \\ \sin 60^\circ & \cos 60^\circ \emat \coll 2 {-1} \\ &= \bmat{rr} 1/2 & -\sqrt{3}/2 \\ \sqrt{3}/2 & 1/2 \emat \coll 2 {-1} = \coll {(2+\sqrt{3})/2} {(2 \sqrt{3}-1)/2} \end{aligned} $$

Rotations will be one of our main examples.

The applet gives examples involving rotations.

New linear transformations from old

If $T : \R^m \to \R^\red{n}$ and $S : \R^\red{n} \to \R^p$, then $S(T(\vx))$ makes sense for $\vx$ in $\R^m$. The composition of $S$ and $T$ is the transformation $S \circ T : \R^m \to \R^p$ defined by $$ (S \circ T)(\vx) = S(T(\vx)) . $$ If $S$ and $T$ are linear, it is easy to check that this new transformation $S \circ T$ is automatically linear. For example, $$ \kern-8ex \begin{aligned} (S \circ T)(\vu + \vv)\ &= S(T(\vu + \vv)) = S(T(\vu) + T(\vv)) \\ &= S(T(\vu)) + S(T(\vv)) = (S \circ T)(\vu) + (S \circ T)(\vv) . \end{aligned} $$
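The check for scalar multiples is just as quick: $$ (S \circ T)(c \vu) = S(T(c \vu)) = S(c \, T(\vu)) = c \, S(T(\vu)) = c \, (S \circ T)(\vu) . $$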

Any guesses for how the matrix for $S \circ T$ is related to the matrices for $S$ and $T$?
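Here is the computation: for any $\vx$ in $\R^m$, $$ (S \circ T)(\vx) = S(T(\vx)) = [S]\,([T]\,\vx) = ([S][T])\,\vx , $$ so the standard matrix of the composite is the product of the standard matrices: $[S \circ T] = [S][T]$.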

It's because of this that matrix multiplication is defined the way it is! Notice also that the condition on the sizes of the matrices in a product matches the requirement that $S$ and $T$ be composable.

Example 3.61: Find the standard matrix of the transformation that rotates $90^\circ$ counterclockwise and then reflects in the $x$-axis. How do $F \circ R_{90}$ and $R_{90} \circ F$ compare? On board.
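A sketch of the computation: rotating first and reflecting second is the composite $F \circ R_{90}$, so $$ \kern-6ex [F \circ R_{90}] = [F]\,[R_{90}] = \bmat{rr} 1 & 0 \\ 0 & -1 \emat \bmat{rr} 0 & -1 \\ 1 & 0 \emat = \bmat{rr} 0 & -1 \\ -1 & 0 \emat , $$ while $$ \kern-6ex [R_{90} \circ F] = [R_{90}]\,[F] = \bmat{rr} 0 & -1 \\ 1 & 0 \emat \bmat{rr} 1 & 0 \\ 0 & -1 \emat = \bmat{rr} 0 & 1 \\ 1 & 0 \emat , $$ so the two composites are different transformations: the order of composition matters.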