# Math 1600A Lecture 13, Section 002, 7 Oct 2013

## Announcements:

Read Sections 3.1 and 3.2 for next class. Work through recommended homework questions.

Tutorials: No quizzes this week; tutorials will focus on review.

Help Centers: Monday-Friday 2:30-6:30 in MC 106.

These lecture notes are now available in PDF format as well, a day or two after each lecture. Be sure to let me know of any technical problems.

## Partial review of Lecture 12:

We covered network analysis and electrical networks in Section 2.4. It turns out that the other section did not cover electrical networks, so that part of last lecture will not be on any quizzes or exams.

We also aren't covering Section 2.5.

## New material: Section 3.1: Matrix Operations

(Lots of definitions, but no tricky concepts.)

Definition: A matrix is a rectangular array of numbers called the entries. The entries are usually real (from $\R$), but may also be complex (from $\C$) or be from $\Z_m$.

Examples: $$\kern-8ex % The Rules create some space below the matrices: \mystack{ A = \bmat{ccc} 1 & -3/2 & \pi \\ \sqrt{2} & 2.3 & 0 \emat \Rule{0pt}{0pt}{18pt} }{2 \times 3} \qquad \mystack{ \bmat{rr} 1 & 2 \\ 3 & 4 \emat \Rule{0pt}{0pt}{22pt} }{\mystack{\strut 2 \times 2}{\textbf{square}}} \qquad \mystack{ \bmat{rrrr} 1 & 2 & 3 & 4 \emat \Rule{0pt}{0pt}{30pt} }{\mystackthree{1 \times 4}{\textbf{row matrix}}{\text{or }\textbf{row vector}}} \qquad \mystack{ \bmat{r} 1 \\ 2 \\ 3 \\ 4 \emat \Rule{0pt}{0pt}{30pt} }{\mystackthree{4 \times 1}{\textbf{column matrix}}{\text{or }\textbf{column vector}}}$$ The entry in the $i$th row and $j$th column of $A$ is usually written $a_{ij}$ or sometimes $A_{ij}$. For example, $$A_{11} = 1, \quad A_{23} = 0, \quad A_{32} \text{ doesn't make sense} .$$
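For those who like to experiment, the indexing convention can be sketched in Python. (The list-of-rows representation and the `entry` helper are my own illustration, not anything from the text; note that Python lists are 0-based, so the helper shifts the indices.)

```python
import math

# The matrix A from the example above, stored as a list of rows.
A = [[1, -1.5, math.pi],
     [math.sqrt(2), 2.3, 0]]

def entry(M, i, j):
    """Return the (i, j) entry, using the 1-based indexing of the notes."""
    return M[i - 1][j - 1]

print(entry(A, 1, 1))  # A_11 = 1
print(entry(A, 2, 3))  # A_23 = 0
# entry(A, 3, 2) would raise an IndexError: A has no third row.
```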

Definition: An $m \times n$ matrix $A$ is square if $m = n$. The diagonal entries are $a_{11}, a_{22}, \ldots$. If $A$ is square and the nondiagonal entries are all zero, then $A$ is called a diagonal matrix. $$% The Rules create some space below the matrices: \kern-8ex \mystack{ \bmat{ccc} 1 & -3/2 & \pi \\ \sqrt{2} & 2.3 & 0 \emat \Rule{0pt}{0pt}{18pt} }{\text{not square or diagonal}} \qquad \mystack{ \bmat{rr} 1 & 2 \\ 3 & 4 \emat \Rule{0pt}{0pt}{22pt} }{\text{square}} \qquad \mystack{ \bmat{rr} 1 & 0 \\ 0 & 4 \emat \Rule{0pt}{0pt}{20pt} }{\text{diagonal}} \qquad \mystack{ \bmat{rr} 1 & 0 \\ 0 & 0 \emat \Rule{0pt}{0pt}{20pt} }{\text{diagonal}} \qquad \mystack{ \bmat{rr} 2 & 0 \\ 0 & 2 \emat \Rule{0pt}{0pt}{20pt} }{\text{scalar}}$$

Definition: A diagonal matrix with all diagonal entries equal is called a scalar matrix. A scalar matrix with diagonal entries all equal to $1$ is an identity matrix. $$% The Rules create some space below the matrices: \mystack{ I_3 = \bmat{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \emat \Rule{0pt}{0pt}{18pt} }{\text{identity matrix}} \qquad \mystack{ O = \bmat{rrr} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \emat \Rule{0pt}{0pt}{18pt} }{\text{scalar}}$$ Note: Identity $\implies$ scalar $\implies$ diagonal $\implies$ square.
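The chain of definitions above can also be sketched as predicates on a list-of-rows matrix. (This is a minimal illustration, assuming the same representation as before; the function names are my own.)

```python
def identity(n):
    """The n x n identity matrix I_n."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def is_diagonal(M):
    """True if M is square and every nondiagonal entry is zero."""
    n = len(M)
    if any(len(row) != n for row in M):
        return False  # not square
    return all(M[i][j] == 0 for i in range(n) for j in range(n) if i != j)

def is_scalar(M):
    """True if M is diagonal with all diagonal entries equal."""
    return is_diagonal(M) and all(M[i][i] == M[0][0] for i in range(len(M)))

print(is_scalar(identity(3)))        # True: identity implies scalar
print(is_diagonal([[1, 0], [0, 4]])) # True, but not scalar
```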

Now we're going to mimic a lot of what we did when we first introduced vectors.

Definition: Two matrices are equal if they have the same size and their corresponding entries are equal. $$\kern-8ex % The Rules create some space below the matrices: \bmat{cc} 1 & -3/2 \\ \sqrt{2} & 0 \emat \qquad \bmat{cc} \cos 0 & -1.5 \\ \sqrt{2} & \sin 0 \emat \qquad \bmat{cc} 1 & 2 \\ 3 & 4 \emat \qquad \bmat{rrrr} 1 & 2 & 3 & 4 \emat \qquad \bmat{r} 1 \\ 2 \\ 3 \\ 4 \emat$$ The first two above are equal, but no other two are equal. We distinguish row matrices from column matrices!

Our first two operations are just like for vectors:

Definition: If $A$ and $B$ are both $m \times n$ matrices, then their sum $A + B$ is the $m \times n$ matrix obtained by adding the corresponding entries of $A$ and $B$. $$\bmat{rrr} 1 & 2 & 3 \\ 4 & 5 & 6 \emat + \bmat{rrr} 0 & -1 & 2 \\ \pi & 0 & -6 \emat = \bmat{ccc} 1 & 1 & 5 \\ 4+\pi & 5 & 0 \emat$$ Using the notation $A = [a_{ij}]$ and $B = [b_{ij}]$, we write $$A + B = [a_{ij} + b_{ij}] \qquad\text{ or }\qquad (A+B)_{ij} = a_{ij} + b_{ij} .$$

Definition: If $A$ is an $m \times n$ matrix and $c$ is a scalar, then the scalar multiple $cA$ is the $m \times n$ matrix obtained by multiplying each entry by $c$. $$3 \bmat{rrr} 0 & -1 & 2 \\ \pi & 0 & -6 \emat = \bmat{rrr} 0 & -3 & 6 \\ 3 \pi & 0 & -18 \emat$$ We write $cA = [c \, a_{ij}]$ or $(cA)_{ij} = c \, a_{ij}$.
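Both componentwise operations can be sketched in a few lines of Python (again using the list-of-rows representation; the function names are my own choice):

```python
def add(A, B):
    """Sum A + B of two matrices of the same size, computed entrywise."""
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "sizes must match"
    return [[a + b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(A, B)]

def scalar_mult(c, A):
    """Scalar multiple cA: each entry multiplied by c."""
    return [[c * a for a in row] for row in A]

print(add([[1, 2, 3], [4, 5, 6]], [[0, -1, 2], [0, 0, -6]]))
# [[1, 1, 5], [4, 5, 0]]
print(scalar_mult(3, [[0, -1, 2], [0, 0, -6]]))
# [[0, -3, 6], [0, 0, -18]]
```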

Definition: As expected, $-A$ means $(-1)A$ and $A-B$ means $A + (-B)$.
The $m \times n$ zero matrix has all entries $0$ and is denoted $O$ or $O_{m\times n}$. Of course, $A + O = A$.
So we have the real number $0$, the zero vector $\vec 0$ (or $\boldsymbol{0}$ in the text) and the zero matrix $O$.

### Matrix multiplication

This is unlike anything we have seen for vectors.

Definition: If $A$ is $m \times \red{n}$ and $B$ is $\red{n} \times r$, then the product $C = AB$ is the $m \times r$ matrix whose $i,j$ entry is $$c_{ij} = a_{i\red{1}} b_{\red{1}j} + a_{i\red{2}} b_{\red{2}j} + \cdots + a_{i\red{n}} b_{\red{n}j} = \sum_{\red{k}=1}^{n} a_{i\red{k}} b_{\red{k}j} .$$ This is the dot product of the $i$th row of $A$ with the $j$th column of $B$.

Note that for this to make sense, the number of columns of $A$ must equal the number of rows of $B$. $$\mystack{A}{m \times n} \ \ \mystack{B}{n \times r} \mystack{=}{\strut} \mystack{AB}{m \times r}$$ This may seem very strange, but it turns out to be useful. We will never use componentwise multiplication, as it is not generally useful.

Examples on whiteboard: $2 \times 3$ times $3 \times 4$, $1 \times 3$ times $3 \times 1$, $3 \times 1$ times $1 \times 3$.
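The row-times-column rule can be sketched directly from the formula for $c_{ij}$; the $1 \times 3$ times $3 \times 1$ and $3 \times 1$ times $1 \times 3$ cases below show how much the order matters. (Illustrative code only; the `mult` name is my own.)

```python
def mult(A, B):
    """Product AB: the (i, j) entry is the dot product of
    row i of A with column j of B."""
    m, n, r = len(A), len(B), len(B[0])
    assert len(A[0]) == n, "columns of A must equal rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(r)]
            for i in range(m)]

row = [[1, 2, 3]]          # 1 x 3
col = [[4], [5], [6]]      # 3 x 1
print(mult(row, col))      # 1 x 1: [[32]]
print(mult(col, row))      # 3 x 3: [[4, 8, 12], [5, 10, 15], [6, 12, 18]]
```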

One motivation for this definition of matrix multiplication is that it comes up in linear systems.

Example 3.8: Consider the system \begin{aligned} 4 x + 2 y &= 4 \\ 5 x + \ph y &= 8 \\ 6 x + 3 y &= 6 \end{aligned} The left-hand sides are in fact a matrix product: $$\bmat{rr} 4 & 2 \\ 5 & 1 \\ 6 & 3 \emat \coll x y = \bmat{c} 4x + 2y \\ 5x + \ph y \\ 6x + 3y \emat$$ So this system, and indeed every linear system, can be written as $A \vx = \vb$.
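As a quick sanity check of the $A \vx = \vb$ point of view: multiplying the coefficient matrix by a solution vector should reproduce the right-hand sides. (The claim that $x = 2$, $y = -2$ solves this particular system is my own computation, not from the notes.)

```python
def matvec(A, x):
    """Matrix-vector product A x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[4, 2], [5, 1], [6, 3]]   # coefficient matrix from Example 3.8
b = [4, 8, 6]                  # right-hand sides

# x = 2, y = -2 solves the system, so A x reproduces b:
print(matvec(A, [2, -2]))      # [4, 8, 6]
```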

Question: If $A$ is an $m \times n$ matrix and $\ve_1$ is the first standard unit vector in $\R^n$, what is $A \ve_1$?

### Powers

In general, the product $AA$ doesn't make sense, since the number of columns of $A$ need not equal its number of rows. But if $A$ is $n \times n$ (square), then $A^2 = AA$ does make sense. $A^2$ is $n \times n$ as well, and so it also makes sense to define the power $$A^k = AA\cdots A \quad\text{with $k$ factors}.$$

We write $A^1 = A$ and $A^0 = I_n$.

We will see later that $(AB)C = A(BC)$, so the expression for $A^k$ is unambiguous. And it follows that $$A^r A^s = A^{r+s} \qquad\text{and}\qquad (A^r)^s = A^{rs}$$ for all nonnegative integers $r$ and $s$.
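The law $A^r A^s = A^{r+s}$ can be spot-checked numerically. (Another illustrative sketch; `power` just multiplies repeatedly, starting from $I_n$ so that $A^0 = I_n$ comes out right.)

```python
def mult(A, B):
    """Product AB of compatible matrices."""
    n = len(B)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(len(B[0]))]
            for i in range(len(A))]

def power(A, k):
    """A^k by repeated multiplication, with A^0 = I_n."""
    n = len(A)
    result = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # I_n
    for _ in range(k):
        result = mult(result, A)
    return result

A = [[2, 1], [0, 3]]
print(mult(power(A, 2), power(A, 3)) == power(A, 5))  # True: A^2 A^3 = A^5
```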

Example 3.13 on whiteboard: Powers of $$A = \bmat{rr} 1 & 1 \\ 1 & 1 \emat$$
