### Announcements:

Read Sections 3.1 and 3.2 for next class. Work through recommended homework questions.

Next office hour: Monday, 3:00-3:30, MC103B.

Help Centers: Monday-Friday 2:30-6:30 in MC 106.

### Lecture 12:

We covered network analysis and electrical networks in Section 2.4. Since the material won't be used today, I won't summarize it.

We aren't covering the other topics in Section 2.4, the Exploration after 2.4, or Section 2.5.

### New material: Section 3.1: Matrix Operations

(Lots of definitions, but no tricky concepts.)

Definition: A matrix is a rectangular array of numbers called the entries. The entries are usually real (from $\R$), but may also be complex (from $\C$).

Examples:

$$\kern-8ex \small
\mystack{ A = \bmat{ccc} 1 & -3/2 & \pi \\ \sqrt{2} & 2.3 & 0 \emat_\strut }{\qquad 2 \times 3}
\qquad
\mystack{ \bmat{rrr} 1 & 2 & 3 \emat_\strut }{\mystackthree{1 \times 3}{\textbf{row matrix}}{\text{or }\textbf{row vector}}}
\qquad
\mystack{ \bmat{r} 1 \\ 2 \\ 3 \emat_\strut }{\mystackthree{3 \times 1}{\textbf{column matrix}}{\text{or }\textbf{column vector}}}$$
The entry in the $i$th row and $j$th column of $A$ is usually written $a_{ij}$ or sometimes $A_{ij}$. For example, $$a_{11} = 1, \quad a_{23} = 0, \quad a_{32} \text{ doesn't make sense} .$$
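If you want to check these examples numerically, NumPy (not part of the course text; any matrix software would do) stores a matrix as a 2-D array. One caveat worth flagging: NumPy indexes from $0$, so the entry $a_{ij}$ is `A[i-1, j-1]`.

```python
import numpy as np

# The 2x3 matrix A from above.
A = np.array([[1.0,        -1.5, np.pi],
              [np.sqrt(2),  2.3, 0.0]])

print(A.shape)   # (2, 3): the size m x n
print(A[0, 0])   # a_11 = 1.0  (NumPy counts from 0)
print(A[1, 2])   # a_23 = 0.0
# A[2, 1] would raise an IndexError: a_32 doesn't make sense for a 2x3 matrix.
```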

Definition: An $m \times n$ matrix $A$ is square if $m = n$. The diagonal entries are $a_{11}, a_{22}, \ldots$. If $A$ is square and the nondiagonal entries are all zero, then $A$ is called a diagonal matrix.
$$\kern-8ex \small
\mystack{ \bmat{ccc} 1 & -3/2 & \pi \\ \sqrt{2} & 2.3 & 0 \emat \Rule{0pt}{0pt}{18pt} }{\text{not square or diagonal}}
\qquad
\mystack{ \bmat{rr} 1 & 2 \\ 3 & 4 \emat \Rule{0pt}{0pt}{22pt} }{\text{square}}
\qquad
\mystack{ \bmat{rr} 1 & 0 \\ 0 & 4 \emat \Rule{0pt}{0pt}{20pt} }{\text{diagonal}}
\qquad
\mystack{ \bmat{rr} 1 & 0 \\ 0 & 0 \emat \Rule{0pt}{0pt}{20pt} }{\text{diagonal}}$$

Definition: A diagonal matrix with all diagonal entries equal is called a scalar matrix. A scalar matrix with diagonal entries all equal to $1$ is an identity matrix.
$$\kern-8ex \small
\mystack{ \bmat{rr} 2 & 0 \\ 0 & 2 \emat \Rule{0pt}{0pt}{20pt} }{\text{scalar}}
\qquad
\mystack{ I_3 = \bmat{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \emat \Rule{0pt}{0pt}{18pt} }{\text{identity matrix}}
\qquad
\mystack{ O = \bmat{rrr} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \emat \Rule{0pt}{0pt}{18pt} }{\text{scalar}}$$
Note: Identity $\implies$ scalar $\implies$ diagonal $\implies$ square.
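For experimenting on a computer, NumPy has built-in constructors for these special matrices (`np.diag`, `np.eye`, and `np.zeros` are standard NumPy functions; this is just a sketch of the definitions above):

```python
import numpy as np

D = np.diag([1, 4])     # diagonal matrix with diagonal entries 1, 4
I3 = np.eye(3)          # the 3x3 identity matrix I_3
S = 2 * np.eye(2)       # a scalar matrix: every diagonal entry is 2
O = np.zeros((3, 3))    # the 3x3 zero matrix (also a scalar matrix)

# Identity => scalar => diagonal: e.g. I3 has all off-diagonal entries zero.
off_diagonal = I3[~np.eye(3, dtype=bool)]
print(np.all(off_diagonal == 0))
```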

Now we're going to mimic a lot of what we did when we first introduced vectors.

Definition: Two matrices are equal if they have the same size and their corresponding entries are equal.
$$\kern-8ex \small
\bmat{cc} 1 & -3/2 \\ \sqrt{2} & 0 \emat \quad
\bmat{cc} \cos 0 & -1.5 \\ \sqrt{2} & \sin 0 \emat \quad
\bmat{cc} 1 & 2 \\ 3 & 4 \emat \quad
\bmat{rrrr} 1 & 2 & 3 & 4 \emat \quad
\bmat{r} 1 \\ 2 \\ 3 \\ 4 \emat$$
The first two above are equal, but no other two are equal. We distinguish row matrices from column matrices!

### Matrix addition and scalar multiplication

Our first two operations are just like for vectors:

Definition: If $A$ and $B$ are both $m \times n$ matrices, then their sum $A + B$ is the $m \times n$ matrix obtained by adding the corresponding entries of $A$ and $B$. Using the notation $A = [a_{ij}]$ and $B = [b_{ij}]$, we write $$A + B = [a_{ij} + b_{ij}] \qquad\text{ or }\qquad (A+B)_{ij} = a_{ij} + b_{ij} .$$

Examples: $$\bmat{rrr} 1 & 2 & 3 \\ 4 & 5 & 6 \emat + \bmat{rrr} 0 & -1 & 2 \\ \pi & 0 & -6 \emat = \bmat{ccc} 1 & 1 & 5 \\ 4+\pi & 5 & 0 \emat$$ $$\bmat{r} 1 \\ 4 \emat + \bmat{rr} 0 & -1 \emat \qtext{is not defined}$$
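The first example can be checked in NumPy, where `+` is exactly this entrywise sum (a sketch; NumPy is not part of the course text):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]], dtype=float)
B = np.array([[0,     -1,  2],
              [np.pi,  0, -6]])

print(A + B)   # entrywise sum: [[1, 1, 5], [4+pi, 5, 0]]

# Adding matrices of different sizes is not defined:
C = np.array([[1, 2],
              [3, 4]])
try:
    A + C      # 2x3 plus 2x2
except ValueError:
    print("sizes don't match")
# Caution: for some shape pairs (e.g. 2x1 plus 1x2) NumPy "broadcasts"
# instead of raising an error; that is not the matrix addition defined here.
```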

Definition: If $A$ is an $m \times n$ matrix and $c$ is a scalar, then the scalar multiple $cA$ is the $m \times n$ matrix obtained by multiplying each entry by $c$. We write $cA = [c \, a_{ij}]$ or $(cA)_{ij} = c \, a_{ij}$.

Example: $$3 \bmat{rrr} 0 & -1 & 2 \\ \pi & 0 & -6 \emat = \bmat{rrr} 0 & -3 & 6 \\ 3 \pi & 0 & -18 \emat$$

Definition: As expected, $-A$ means $(-1)A$ and $A-B$ means $A + (-B)$.

The $m \times n$ zero matrix has all entries $0$ and is denoted $O$ or $O_{m\times n}$. Of course, $A + O = A$.
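The remaining operations also match NumPy's arithmetic directly (continuing the sketch from the addition example above):

```python
import numpy as np

B = np.array([[0,     -1,  2],
              [np.pi,  0, -6]])

print(3 * B)    # scalar multiple: each entry multiplied by 3
print(-B)       # -B means (-1)B
print(B - B)    # A - B means A + (-B); here it gives the zero matrix

O = np.zeros((2, 3))               # the 2x3 zero matrix O
print(np.array_equal(B + O, B))    # A + O = A
```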

So we have the real number $0$, the zero vector $\vec 0$ (or $\boldsymbol{0}$ in the text) and the zero matrix $O$.

### Matrix multiplication

This is unlike anything we have seen for vectors.

Definition: If $A$ is $m \times \red{n}$ and $B$ is $\red{n} \times r$, then the product $C = AB$ is the $m \times r$ matrix whose $i,j$ entry is $$c_{ij} = a_{i\red{1}} b_{\red{1}j} + a_{i\red{2}} b_{\red{2}j} + \cdots + a_{i\red{n}} b_{\red{n}j} = \sum_{\red{k}=1}^{n} a_{i\red{k}} b_{\red{k}j} .$$ This is the dot product of the $i$th row of $A$ with the $j$th column of $B$.

Note that for this to make sense, the number of columns of $A$ must equal the number of rows of $B$. $$\mystack{A}{m \times n} \ \ \mystack{B}{n \times r} \mystack{=}{\strut} \mystack{AB}{m \times r}$$ This may seem like a strange definition, but it turns out to be extremely useful. We will never use componentwise multiplication, as it rarely arises.

Examples on board: $2 \times 3$ times $3 \times 4$, $1 \times 3$ times $3 \times 1$, $3 \times 1$ times $1 \times 3$.
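The size bookkeeping for the board examples can be sketched in NumPy, where `@` is the matrix product. One warning: in NumPy `*` is the componentwise product we will never use, so be careful to write `@`.

```python
import numpy as np

A = np.arange(6).reshape(2, 3)     # a 2x3 matrix
B = np.arange(12).reshape(3, 4)    # a 3x4 matrix
print((A @ B).shape)               # (2, 4): an (m x n)(n x r) product is m x r

row = np.array([[1, 2, 3]])        # 1x3 row matrix
col = np.array([[4], [5], [6]])    # 3x1 column matrix
print(row @ col)                   # 1x1: the dot product, [[32]]
print(col @ row)                   # 3x3: all products of an entry of col with an entry of row
# B @ A is not defined: 3x4 times 2x3 fails because 4 != 2.
```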

One motivation for this definition of matrix multiplication is that it comes up in linear systems.

Example 3.8: Consider the system \begin{aligned} 4 x + 2 y\ &= 4 \\ 5 x + \ph y\ &= 8 \\ 6 x + 3 y\ &= 6 \end{aligned} The left-hand sides are exactly the entries of a matrix product: $$\bmat{rr} 4 & 2 \\ 5 & 1 \\ 6 & 3 \emat \coll x y = \bmat{c} 4x + 2y \\ 5x + y \\ 6x + 3y \emat$$ Every linear system with augmented matrix $[A \mid \vb\,]$ can be written as $A \vx = \vb$.
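As a numerical check of Example 3.8 (the solution $x = 2$, $y = -2$ is my own computation, not stated in the text): multiplying $A$ by the solution vector reproduces the right-hand sides.

```python
import numpy as np

A = np.array([[4, 2],
              [5, 1],
              [6, 3]])
b = np.array([4, 8, 6])

x = np.array([2, -2])   # solving the system by hand gives x = 2, y = -2
print(A @ x)            # [4 8 6]: the product recovers the left-hand sides
print(np.array_equal(A @ x, b))   # True: A x = b
```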

Note: In general, if $A$ is $m \times n$ and $B$ is a column vector in $\R^n$ ($n \times 1$), then $AB$ is a column vector in $\R^m$ ($m \times 1$). So one thing a matrix $A$ can do is transform column vectors into column vectors. This point of view will be important later.

Question: If $A$ is an $m \times n$ matrix and $\ve_1$ is the first standard unit vector in $\R^n$, what is $A \ve_1$?

### Powers

For a general matrix $A$, the product $AA$ doesn't make sense, but if $A$ is $n \times n$ (square), then $A^2 = AA$ does. $A^2$ is $n \times n$ as well, and so it also makes sense to define the power $$A^k = AA\cdots A \quad\text{($k$ factors)}.$$

We write $A^1 = A$ and $A^0 = I_n$ (the identity matrix).

We will see later that $(AB)C = A(BC)$, so the expression for $A^k$ is unambiguous. And it follows that $$A^r A^s = A^{r+s} \qquad\text{and}\qquad (A^r)^s = A^{rs}$$ for all nonnegative integers $r$ and $s$.
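These powers can be computed with repeated `@` or with NumPy's `np.linalg.matrix_power`; a sketch verifying the conventions and exponent laws on a sample matrix:

```python
import numpy as np
from numpy.linalg import matrix_power

A = np.array([[1, 2],
              [3, 4]])

print(matrix_power(A, 0))   # A^0 = I_2, the identity matrix
print(matrix_power(A, 1))   # A^1 = A
print(matrix_power(A, 3))   # A^3 = AAA

# The exponent laws A^r A^s = A^(r+s) and (A^r)^s = A^(rs):
print(np.array_equal(matrix_power(A, 2) @ matrix_power(A, 3), matrix_power(A, 5)))
print(np.array_equal(matrix_power(matrix_power(A, 2), 3), matrix_power(A, 6)))
```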

Example 3.13 on board: Powers of $$A = \bmat{rr} 1 & 2 \\ 3 & 4 \emat \qtext{and} B = \bmat{rr} 1 & 1 \\ 1 & 1 \emat$$

True/false: Every diagonal matrix is a scalar matrix.

True/false: If $A$ is diagonal, then so is $A^2$.

True/false: If $A$ and $B$ are both square, then $AB$ is square.

Challenge question (for next class): Is there a nonzero matrix $A$ such that $A^2 = O$?

Next class: We'll cover the properties these operations have, from Section 3.2.