Math 1600 Lecture 17, Section 2, 15 Oct 2014

$ \newcommand{\bdmat}[1]{\left|\begin{array}{#1}} \newcommand{\edmat}{\end{array}\right|} \newcommand{\bmat}[1]{\left[\begin{array}{#1}} \newcommand{\emat}{\end{array}\right]} \newcommand{\coll}[2]{\bmat{r} #1 \\ #2 \emat} \newcommand{\ccoll}[2]{\bmat{c} #1 \\ #2 \emat} \newcommand{\colll}[3]{\bmat{r} #1 \\ #2 \\ #3 \emat} \newcommand{\ccolll}[3]{\bmat{c} #1 \\ #2 \\ #3 \emat} \newcommand{\collll}[4]{\bmat{r} #1 \\ #2 \\ #3 \\ #4 \emat} \newcommand{\ccollll}[4]{\bmat{c} #1 \\ #2 \\ #3 \\ #4 \emat} \newcommand{\colllll}[5]{\bmat{r} #1 \\ #2 \\ #3 \\ #4 \\ #5 \emat} \newcommand{\ccolllll}[5]{\bmat{c} #1 \\ #2 \\ #3 \\ #4 \\ #5 \emat} \newcommand{\red}[1]{{\color{red}#1}} \newcommand{\lra}[1]{\mbox{$\xrightarrow{#1}$}} \newcommand{\rank}{\textrm{rank}} \newcommand{\row}{\textrm{row}} \newcommand{\col}{\textrm{col}} \newcommand{\null}{\textrm{null}} \newcommand{\nullity}{\textrm{nullity}} \renewcommand{\Re}{\operatorname{Re}} \renewcommand{\Im}{\operatorname{Im}} \renewcommand{\Arg}{\operatorname{Arg}} \renewcommand{\arg}{\operatorname{arg}} \newcommand{\adj}{\textrm{adj}} \newcommand{\mystack}[2]{\genfrac{}{}{0}{0}{#1}{#2}} \newcommand{\mystackthree}[3]{\mystack{\mystack{#1}{#2}}{#3}} \newcommand{\qimplies}{\quad\implies\quad} \newcommand{\qtext}[1]{\quad\text{#1}\quad} \newcommand{\qqtext}[1]{\qquad\text{#1}\qquad} \newcommand{\smalltext}[1]{{\small\text{#1}}} \newcommand{\svec}[1]{\,\vec{#1}} \newcommand{\querytext}[1]{\toggle{\text{?}\vphantom{\text{#1}}}{\text{#1}}\endtoggle} \newcommand{\query}[1]{\toggle{\text{?}\vphantom{#1}}{#1}\endtoggle} \newcommand{\smallquery}[1]{\toggle{\text{?}}{#1}\endtoggle} \newcommand{\bv}{\mathbf{v}} %\require{AMScd} $

Announcements:

Read Section 3.5 for next class. This is also core material. We aren't covering 3.4. Work through recommended homework questions.

Quiz 5 will focus on 3.1, 3.2 and the first half of 3.3 (up to and including Example 3.26).

Midterm: Saturday, October 25, 7-10pm. It will cover the material up to and including the lecture on Monday, Oct 20. Practice midterms are available on the exercises page.

Office hour: Today, 11:30-noon, MC103B.

Help Centers: Monday-Friday 2:30-6:30 in MC 106.

Partial review of Lecture 16:

Definition: An inverse of an $n \times n$ matrix $A$ is an $n \times n$ matrix $A'$ such that $$ A A' = I \qtext{and} A' A = I . $$ If such an $A'$ exists, we say that $A$ is invertible.

Theorem 3.6: If $A$ is an invertible matrix, then its inverse is unique.

Because of this, we write $A^{-1}$ for the inverse of $A$, when $A$ is invertible. We do not write $\frac{1}{A}$.

Example: If $A = \bmat{rr} 1 & 2 \\ 3 & 7 \emat$, then $A^{-1} = \bmat{rr} 7 & -2 \\ -3 & 1 \emat$ is the inverse of $A$.
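To check (a quick verification of the claim): $$ \bmat{rr} 1 & 2 \\ 3 & 7 \emat \bmat{rr} 7 & -2 \\ -3 & 1 \emat = \bmat{rr} 7-6 & -2+2 \\ 21-21 & -6+7 \emat = \bmat{rr} 1 & 0 \\ 0 & 1 \emat = I , $$ and the product in the other order is also $I$.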

But the zero matrix and the matrix $B = \bmat{rr} -1 & 3 \\ 2 & -6 \emat$ are not invertible.

Theorem 3.7: If $A$ is an invertible $n \times n$ matrix, then the system $A \vx = \vb$ has the unique solution $\vx = A^{-1} \vb$ for any $\vb$ in $\R^n$.

Remark: This is not in general an efficient way to solve a system.
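For example, with $A = \bmat{rr} 1 & 2 \\ 3 & 7 \emat$ as above and $\vb = \coll{1}{1}$, $$ \vx = A^{-1} \vb = \bmat{rr} 7 & -2 \\ -3 & 1 \emat \coll{1}{1} = \coll{5}{-2} , $$ and one can check directly that $A \coll{5}{-2} = \coll{1}{1}$.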

Theorem 3.8: The matrix $A = \bmat{cc} a & b \\ c & d \emat$ is invertible if and only if $ad - bc \neq 0$. When this is the case, $$ A^{-1} = \frac{1}{ad-bc} \, \bmat{rr} \red{d} & \red{-}b \\ \red{-}c & \red{a} \emat . $$

We call $ad-bc$ the determinant of $A$, and write it $\det A$. It determines whether or not $A$ is invertible, and also shows up in the formula for $A^{-1}$.
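For the matrices above: $\det \bmat{rr} 1 & 2 \\ 3 & 7 \emat = 1 \cdot 7 - 2 \cdot 3 = 1 \neq 0$, so that matrix is invertible, while $\det \bmat{rr} -1 & 3 \\ 2 & -6 \emat = (-1)(-6) - (3)(2) = 0$, which confirms that $B$ is not invertible.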

Properties of Invertible Matrices

Theorem 3.9: Assume $A$ and $B$ are invertible matrices of the same size. Then:
  1. $A^{-1}$ is invertible and $(A^{-1})^{-1} = \query{A}$
  2. If $c$ is a non-zero scalar, then $cA$ is invertible and $(cA)^{-1} = \query{\frac{1}{c} A^{-1}}$
  3. $AB$ is invertible and $(AB)^{-1} = \query{B^{-1} A^{-1}}$ (socks and shoes rule)
  4. $A^T$ is invertible and $(A^T)^{-1} = \query{(A^{-1})^T}$
  5. $A^n$ is invertible for all $n \geq 0$ and $(A^n)^{-1} = \query{(A^{-1})^n}$

To verify these, in every case you just check that the matrix shown is an inverse.
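For example, for property 3: $$ (AB)(B^{-1} A^{-1}) = A (B B^{-1}) A^{-1} = A I A^{-1} = A A^{-1} = I , $$ and similarly $(B^{-1} A^{-1})(AB) = I$, so $B^{-1} A^{-1}$ is the inverse of $AB$.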

Remark: Property 3 is the most important, and generalizes to more than two matrices, e.g. $(ABC)^{-1} = C^{-1} B^{-1} A^{-1}$.
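This follows by applying property 3 twice: $$ (ABC)^{-1} = ((AB)C)^{-1} = C^{-1} (AB)^{-1} = C^{-1} B^{-1} A^{-1} . $$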

Remark: For $n$ a positive integer, we define $A^{-n}$ to be $(A^{-1})^n = (A^n)^{-1}$. Then $A^n A^{-n} = I = A^0$, and more generally $A^r A^s = A^{r+s}$ for all integers $r$ and $s$.
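For example, $A^{3} A^{-5} = A^{3} (A^{-1})^{5} = A^{-2}$, since three factors of $A$ cancel with three factors of $A^{-1}$.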

Remark: There is no formula for $(A+B)^{-1}$. In fact, $A+B$ might not be invertible, even if $A$ and $B$ are.

We can use these properties to solve a matrix equation for an unknown matrix.
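For example, if $A$ and $B$ are invertible and $A X B = C$, then multiplying on the left by $A^{-1}$ and on the right by $B^{-1}$ gives $$ X = A^{-1} C B^{-1} . $$ (Note that the order matters: $A^{-1} C B^{-1}$ is in general different from $A^{-1} B^{-1} C$.)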

New material

Challenge problem:

Can you find a $2 \times 3$ matrix $A$ and a $3 \times 2$ matrix $A'$ such that $A A' = I_2$ and $A' A = I_3$?

The fundamental theorem of invertible matrices:

Very important! Will be used repeatedly, and expanded later.

Theorem 3.12: Let $A$ be an $n \times n$ matrix. The following are equivalent:
a. $A$ is invertible.
b. $A \vx = \vb$ has a unique solution for every $\vb \in \R^n$.
c. $A \vx = \vec 0$ has only the trivial (zero) solution.
d. The reduced row echelon form of $A$ is $I_n$.

Proof: We have seen that (a) $\implies$ (b) in Theorem 3.7 above.

We'll use our past work on solving systems to show that (b) $\implies$ (c) $\implies$ (d) $\implies$ (b), which will prove that (b), (c) and (d) are equivalent.

We will only partially explain why (b) implies (a).

(b) $\implies$ (c): If $A \vx = \vb$ has a unique solution for every $\vb$, then it's true when $\vb$ happens to be the zero vector.

(c) $\implies$ (d): Suppose that $A \vx = \vec 0$ has only the trivial solution.
That means that there are no free variables, so the rank of $A$ must be $n$.
So in reduced row echelon form, every row must have a leading $1$.
The only $n \times n$ matrix in reduced row echelon form with $n$ leading $1$'s is the identity matrix.

(d) $\implies$ (b): If the reduced row echelon form of $A$ is $I_n$, then the augmented matrix $[A \mid \vb\,]$ row reduces to $[I_n \mid \vc\,]$, from which you can read off the unique solution $\vx = \vc$.

(b) $\implies$ (a) (partly): Assume $A \vx = \vb$ has a solution for every $\vb$.
That means we can find $\vx_1, \ldots, \vx_n$ such that $A \vx_i = \ve_i$ for each $i$.
If we let $B = [ \vx_1 \mid \cdots \mid \vx_n\,]$ be the matrix with the $\vx_i$'s as columns, then $$ \kern-8ex AB = A \, [ \vx_1 \mid \cdots \mid \vx_n\,] = [ A \vx_1 \mid \cdots \mid A \vx_n\,] = [ \ve_1 \mid \cdots \mid \ve_n \,] = I_n . $$ So we have found a right inverse for $A$.
It turns out that $BA= I_n$ as well, but this is harder to see. $\qquad\Box$

Note: We have omitted (e) from the theorem, since we aren't covering elementary matrices. They are used in the text to prove the other half of (b) $\implies$ (a).

We will see many important applications of Theorem 3.12. For now, we illustrate one theoretical application and one computational application.

Theorem 3.13: Let $A$ be a square matrix. If $B$ is a square matrix such that either $AB=I$ or $BA=I$, then $A$ is invertible and $B = A^{-1}$.

Proof: If $BA = I$, then the system $A \vx = \vec 0$ has only the trivial solution, as we saw in the challenge problem: if $A \vx = \vec 0$, then $\vx = I \vx = BA \vx = B \vec 0 = \vec 0$. So (c) is true. Therefore (a) is true, i.e. $A$ is invertible. Then: $$ \kern-6ex B = BI = BAA^{-1} = IA^{-1} = A^{-1} . $$ (The uniqueness argument again!) If instead $AB = I$, then the argument just given, with the roles of $A$ and $B$ reversed, shows that $B$ is invertible with $B^{-1} = A$. So $A = B^{-1}$ is invertible and $A^{-1} = (B^{-1})^{-1} = B$. $\quad\Box$

This is very useful! It means you only need to check multiplication in one order to know you have an inverse.

Gauss-Jordan method for computing the inverse

Motivate on board: we'd like to find a $B$ such that $AB = I$.

Theorem 3.14: Let $A$ be a square matrix. If a sequence of row operations reduces $A$ to $I$, then the same sequence of row operations transforms $I$ into $A^{-1}$.

Why does this work? It's the combination of our arguments that (d) $\implies$ (b) and (b) $\implies$ (a). If we row reduce $[ A \mid \ve_i\,]$ to $[ I \mid \vc_i \,]$, then $A \vc_i = \ve_i$. So if $B$ is the matrix whose columns are the $\vc_i$'s, then $AB = I$. So, by Theorem 3.13, $B = A^{-1}$.

The trick is to notice that we can solve all of the systems $A \vx = \ve_i$ at once by row reducing $[A \mid I\,]$. The matrix on the right will be exactly $B$!

Example on board: Find the inverse of $A = \bmat{rr} 1 & 2 \\ 3 & 7 \emat$.
Illustrate proof of Theorem 3.14.
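One possible sequence of row operations: $$ \kern-6ex [A \mid I\,] = \bmat{rr|rr} 1 & 2 & 1 & 0 \\ 3 & 7 & 0 & 1 \emat \lra{R_2 - 3R_1} \bmat{rr|rr} 1 & 2 & 1 & 0 \\ 0 & 1 & -3 & 1 \emat \lra{R_1 - 2R_2} \bmat{rr|rr} 1 & 0 & 7 & -2 \\ 0 & 1 & -3 & 1 \emat , $$ so $A^{-1} = \bmat{rr} 7 & -2 \\ -3 & 1 \emat$, matching the earlier example.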

Example on board: Find the inverse of $A = \bmat{rrr} 1 & 0 & 2 \\ 2 & 1 & 3 \\ 1 & -2 & 5 \emat$. Illustrate proof of Theorem 3.14.
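Carrying out the same procedure (one possible sequence: $R_2 - 2R_1$, $R_3 - R_1$, $R_3 + 2R_2$, then $R_1 - 2R_3$ and $R_2 + R_3$) gives $$ \kern-6ex \bmat{rrr|rrr} 1 & 0 & 2 & 1 & 0 & 0 \\ 2 & 1 & 3 & 0 & 1 & 0 \\ 1 & -2 & 5 & 0 & 0 & 1 \emat \lra{\cdots} \bmat{rrr|rrr} 1 & 0 & 0 & 11 & -4 & -2 \\ 0 & 1 & 0 & -7 & 3 & 1 \\ 0 & 0 & 1 & -5 & 2 & 1 \emat , $$ so $A^{-1} = \bmat{rrr} 11 & -4 & -2 \\ -7 & 3 & 1 \\ -5 & 2 & 1 \emat$.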

Example on board: Find the inverse of $B = \bmat{rr} -1 & 3 \\ 2 & -6 \emat$.
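Here $$ [B \mid I\,] = \bmat{rr|rr} -1 & 3 & 1 & 0 \\ 2 & -6 & 0 & 1 \emat \lra{R_2 + 2R_1} \bmat{rr|rr} -1 & 3 & 1 & 0 \\ 0 & 0 & 2 & 1 \emat , $$ and the zero row in the left-hand portion means the left block can never reduce to $I$, so $B$ is not invertible.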

So now we have a general-purpose method for determining whether a matrix $A$ is invertible and, if so, for finding $A^{-1}$:

1. Form the $n \times 2n$ matrix $[A \mid I\,]$.

2. Use row operations to get it into reduced row echelon form.

3. If a zero row appears in the left-hand portion, then $A$ is not invertible.

4. Otherwise, $A$ will turn into $I$, and the right-hand portion is $A^{-1}$.

The trend continues: when given a problem to solve in linear algebra, we usually find a way to solve it using row reduction!

Note that finding $A^{-1}$ is more work than solving a system $A \vx = \vb$.

We aren't covering inverse matrices over $\Z_m$.

Questions:

Question: Let $A$ be a $4 \times 4$ matrix with rank $3$. Is $A$ invertible? What if the rank is $4$?

True/false: If $A$ is a square matrix, and the column vectors of $A$ are linearly independent, then $A$ is invertible.

True/false: If $A$ and $B$ are square matrices such that $AB$ is not invertible, then at least one of $A$ and $B$ is not invertible.

True/false: If $A$ and $B$ are matrices such that $AB = I$, then $BA = I$.

Question: Find invertible matrices $A$ and $B$ such that $A+B$ is not invertible.