## Announcements:

Continue reading Section 3.1 (partitioned matrices) and Section 3.2 for next class. Work through recommended homework questions.

Quiz 4 is this week and will focus on the material in Sections 2.3 (linear (in)dependence) and 2.4 (networks), and the part of Section 3.1 we covered last class.

Help Centers: Monday-Friday 2:30-6:30 in MC 106.

## Partial review of Lecture 13:

### Section 3.1: Matrix Operations

Definition: An $m \times n$ matrix $A$ is a rectangular array of numbers called the entries, with $m$ rows and $n$ columns. $A$ is called square if $m = n$.

The entry in the $i$th row and $j$th column of $A$ is usually written $a_{ij}$ or sometimes $A_{ij}$.

The diagonal entries are $a_{11}, a_{22}, \ldots$.

If $A$ is square and the nondiagonal entries are all zero, then $A$ is called a diagonal matrix.

$$% The Rules create some space below the matrices: \kern-8ex \mystack{ \bmat{ccc} 1 & -3/2 & \pi \\ \sqrt{2} & 2.3 & 0 \emat \Rule{0pt}{0pt}{18pt} }{\text{not square or diagonal}} \quad \mystack{ \bmat{rr} 1 & 2 \\ 3 & 4 \emat \Rule{0pt}{0pt}{22pt} }{\text{square}} \quad \mystack{ \bmat{rr} 1 & 0 \\ 0 & 4 \emat \Rule{0pt}{0pt}{20pt} }{\text{diagonal}} \quad \mystack{ \bmat{rr} 1 & 0 \\ 0 & 0 \emat \Rule{0pt}{0pt}{20pt} }{\text{diagonal}}$$

Definition: A diagonal matrix with all diagonal entries equal is called a scalar matrix. A scalar matrix with diagonal entries all equal to $1$ is an identity matrix.

$$\kern-8ex % The Rules create some space below the matrices: \mystack{ I_3 = \bmat{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \emat \Rule{0pt}{0pt}{18pt} }{\text{identity matrix}} \quad \mystack{ \bmat{rrr} 3 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 3 \emat \Rule{0pt}{0pt}{18pt} }{\text{scalar}} \quad \mystack{ O = \bmat{rrr} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \emat \Rule{0pt}{0pt}{18pt} }{\text{scalar}}$$ Note: Identity $\implies$ scalar $\implies$ diagonal $\implies$ square.

### Matrix addition and scalar multiplication

Our first two operations are just like for vectors:

Definition: If $A$ and $B$ are both $m \times n$ matrices, then their sum $A + B$ is the $m \times n$ matrix obtained by adding the corresponding entries of $A$ and $B$:   $A + B = [a_{ij} + b_{ij}]$.

Definition: If $A$ is an $m \times n$ matrix and $c$ is a scalar, then the scalar multiple $cA$ is the $m \times n$ matrix obtained by multiplying each entry by $c$:   $cA = [c \, a_{ij}]$.
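These entrywise definitions translate directly into code. A minimal Python sketch, with matrices stored as lists of rows (the helper names `mat_add` and `scalar_mult` are my own, not from the text):

```python
# Entrywise operations on matrices stored as lists of rows.

def mat_add(A, B):
    """A + B = [a_ij + b_ij]; A and B must be the same size."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scalar_mult(c, A):
    """cA = [c a_ij]."""
    return [[c * a for a in row] for row in A]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(mat_add(A, B))      # [[1, 3], [4, 4]]
print(scalar_mult(2, A))  # [[2, 4], [6, 8]]
```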

## New material: Section 3.2: Matrix Algebra

Addition and scalar multiplication for matrices behave exactly like addition and scalar multiplication for vectors, with the entries just written in a rectangle instead of in a row or column.

Theorem 3.2: Let $A$, $B$ and $C$ be matrices of the same size, and let $c$ and $d$ be scalars. Then:

- (a) $A + B = B + A$ (comm.)
- (b) $(A + B) + C = A + (B + C)$ (assoc.)
- (c) $A + O = A$
- (d) $A + (-A) = O$
- (e) $c(A+B) = cA + cB$ (dist.)
- (f) $(c+d)A = cA + dA$ (dist.)
- (g) $c(dA) = (cd)A$
- (h) $1A = A$

Compare to Theorem 1.1.

This means that all of the concepts for vectors transfer to matrices.

E.g., manipulating matrix equations: $$\kern-8ex 2(A+B) - 3(2B - A) = 2A + 2B -6B +3A = 5A - 4B .$$
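One can sanity-check such manipulations numerically by picking concrete matrices and comparing both sides (a sketch; the helper names `add` and `smul` are illustrative, not from the text):

```python
# Numerical check of 2(A+B) - 3(2B - A) = 5A - 4B on sample matrices.

def add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def smul(c, A):
    return [[c * x for x in row] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

lhs = add(smul(2, add(A, B)), smul(-3, add(smul(2, B), smul(-1, A))))
rhs = add(smul(5, A), smul(-4, B))
print(lhs == rhs)  # True
```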

We define a linear combination of matrices $A_1, A_2, \ldots, A_k$ to be a matrix of the form $$c_1 A_1 + c_2 A_2 + \cdots + c_k A_k ,$$ where $c_1, c_2, \ldots, c_k$ are scalars.

And we can define the span of a set of matrices to be the set of all their linear combinations.

And we can say that the matrices $A_1, A_2, \ldots, A_k$ are linearly independent if $$c_1 A_1 + c_2 A_2 + \cdots + c_k A_k = O$$ has only the trivial solution $c_1 = \cdots = c_k = 0$, and are linearly dependent otherwise.

Our techniques for vectors also apply to answer questions such as:

Example 3.16 (a): Suppose $$\kern-8ex \small A_1 = \bmat{rr} 0 & 1 \\ -1 & 0 \emat, \ A_2 = \bmat{rr} 1 & 0 \\ 0 & 1 \emat, \ A_3 = \bmat{rr} 1 & 1 \\ 1 & 1 \emat, \ B = \bmat{rr} 1 & 4 \\ 2 & 1 \emat$$ Is $B$ a linear combination of $A_1$, $A_2$ and $A_3$?

That is, are there scalars $c_1$, $c_2$ and $c_3$ such that $$\kern-6ex c_1 \bmat{rr} 0 & 1 \\ -1 & 0 \emat + c_2 \bmat{rr} 1 & 0 \\ 0 & 1 \emat + c_3 \bmat{rr} 1 & 1 \\ 1 & 1 \emat = \bmat{rr} 1 & 4 \\ 2 & 1 \emat ?$$ Rewriting the left-hand side gives $$\bmat{rr} c_2+c_3 & c_1+c_3 \\ -c_1+c_3 & c_2+c_3 \emat = \bmat{rr} 1 & 4 \\ 2 & 1 \emat$$ and this is equivalent to the system \begin{aligned} \phantom{-c_1 + {}} c_2 + c_3 &= 1 \\ \ph c_1 \phantom{{}+c_2} + c_3 &= 4 \\ -c_1 \phantom{{}+c_2} + c_3 &= 2 \\ \phantom{-c_1 + {}} c_2 + c_3 &= 1 \\ \end{aligned} and we can use row reduction to determine that there is a solution, and to find it if desired: $c_1 = 1, c_2 = -2, c_3 = 3$, so $A_1 - 2A_2 + 3A_3 = B$.

This works exactly as if we had written the matrices as column vectors and asked the same question.
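That observation suggests a quick check in code: flatten each $2 \times 2$ matrix into a length-4 vector and verify the solution found by row reduction (a sketch, not part of the text):

```python
# Example 3.16(a): each 2x2 matrix is flattened row by row into a vector,
# turning the matrix question into the familiar vector question.
A1 = [0, 1, -1, 0]
A2 = [1, 0, 0, 1]
A3 = [1, 1, 1, 1]
B  = [1, 4, 2, 1]

c1, c2, c3 = 1, -2, 3   # the solution found by row reduction
combo = [c1*x + c2*y + c3*z for x, y, z in zip(A1, A2, A3)]
print(combo == B)  # True
```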

## More review of Lecture 13:

### Matrix multiplication

Definition: If $A$ is $m \times \red{n}$ and $B$ is $\red{n} \times r$, then the product $C = AB$ is the $m \times r$ matrix whose $i,j$ entry is $$\kern-6ex c_{ij} = a_{i\red{1}} b_{\red{1}j} + a_{i\red{2}} b_{\red{2}j} + \cdots + a_{i\red{n}} b_{\red{n}j} = \sum_{\red{k}=1}^{n} a_{i\red{k}} b_{\red{k}j} .$$ This is the dot product of the $i$th row of $A$ with the $j$th column of $B$.

$$\mystack{A}{m \times n} \ \ \mystack{B}{n \times r} \mystack{=}{\strut} \mystack{AB}{m \times r}$$
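The definition can be coded directly as a sum over $k$ for each $(i,j)$ entry. A Python sketch (`mat_mult` is my own name):

```python
def mat_mult(A, B):
    """C = AB with c_ij = sum_k a_ik * b_kj; needs cols(A) == rows(B)."""
    m, n, r = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "inner sizes must match"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(r)]
            for i in range(m)]

A = [[1, 2, 3], [4, 5, 6]]       # 2 x 3
B = [[1, 0], [0, 1], [1, 1]]     # 3 x 2
print(mat_mult(A, B))            # [[4, 5], [10, 11]], a 2 x 2 matrix
```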

### Powers

If $A$ is not square, then the product $AA$ isn't defined, so $A^2$ doesn't make sense in general. But if $A$ is $n \times n$ (square), then it makes sense to define the power $$A^k = AA\cdots A \quad\text{with $k$ factors}.$$

We write $A^1 = A$ and $A^0 = I_n$.

We will see in a moment that $(AB)C = A(BC)$, so the expression for $A^k$ is unambiguous. And it follows that $$A^r A^s = A^{r+s} \qquad\text{and}\qquad (A^r)^s = A^{rs}$$ for all nonnegative integers $r$ and $s$.
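A repeated-multiplication sketch of $A^k$, checking $A^r A^s = A^{r+s}$ on a concrete matrix (function names are my own):

```python
def mat_mult(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_pow(A, k):
    """A^k for a square A, with A^0 = I_n."""
    n = len(A)
    P = [[int(i == j) for j in range(n)] for i in range(n)]  # I_n
    for _ in range(k):
        P = mat_mult(P, A)
    return P

A = [[1, 1], [0, 1]]
print(mat_mult(mat_pow(A, 2), mat_pow(A, 3)) == mat_pow(A, 5))  # True
```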

## New material: Section 3.2: Matrix Algebra (continued)

### Properties of Matrix Multiplication and Powers

This is new ground, as you can't multiply vectors.

For the most part, matrix multiplication behaves like multiplication of real numbers, but there are several differences:

Example 3.13 on whiteboard: Powers of $$B = \bmat{rr} 0 & -1 \\ 1 & 0 \emat$$
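For reference after the whiteboard computation, a quick numeric check (not in the text): this $B$ rotates the plane by $90^\circ$, so its powers cycle with period 4.

```python
def mat_mult(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

B = [[0, -1], [1, 0]]
B2 = mat_mult(B, B)
B4 = mat_mult(B2, B2)
print(B2)  # [[-1, 0], [0, -1]]  i.e. -I
print(B4)  # [[1, 0], [0, 1]]    i.e. I, so B^5 = B, and so on
```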

Example on whiteboard: Tell me the entries of two $2 \times 2$ matrices $A$ and $B$, and let's compute $AB$ and $BA$.
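Whatever entries get suggested, the computation will almost always show $AB \ne BA$. One concrete instance (matrices chosen by me, not from the lecture):

```python
def mat_mult(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(mat_mult(A, B))  # [[2, 1], [4, 3]]
print(mat_mult(B, A))  # [[3, 4], [1, 2]]  -- different, so AB != BA
```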

But most expected properties do hold:

Theorem 3.3: Let $A$, $B$ and $C$ be matrices of the appropriate sizes, and let $k$ be a scalar. Then:

- (a) $A(BC) = (AB)C$ (associativity)
- (b) $A(B + C) = AB + AC$ (left distributivity)
- (c) $(A+B)C = AC + BC$ (right distributivity)
- (d) $k(AB) = (kA)B = A(kB)$ (no cool name)
- (e) $I_m A = A = A I_n$ if $A$ is $m \times n$ (identity)

The text proves (b) and half of (e). (c) and the other half of (e) are the same, with right and left reversed.

Proof of (d): \kern-8ex \begin{aligned} (k(AB))_{ij} &= k (AB)_{ij} = k (\row_i(A) \cdot \col_j(B)) \\ &= (k \, \row_i(A)) \cdot \col_j(B) = \row_i(kA) \cdot \col_j(B) \\ &= ((kA)B)_{ij} \end{aligned} so $k(AB) = (kA)B$. The other part of (d) is similar.$\quad\Box$

Proof of (a): \kern-8ex %\small \begin{aligned} ((AB)C)_{ij} &= \sum_k (AB)_{ik} C_{kj} = \sum_k \sum_l A_{il} B_{lk} C_{kj} \\ &= \sum_l \sum_k A_{il} B_{lk} C_{kj} = \sum_l A_{il} (BC)_{lj} = (A(BC))_{ij} \end{aligned} so $A(BC) = (AB)C$.$\quad\Box$

Example on board: $A I = A$.

Example on board: Solve $$2 ( X - A) + (A + B)(B + I) = O$$ for $X$ in terms of $A$ and $B$.

Example 3.20: If $A$ and $B$ are square matrices of the same size, is $(A+B)^2 = A^2 + 2 AB + B^2$? On board.
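A numeric check of what the board computation will show: expanding with Theorem 3.3 gives $(A+B)^2 = A^2 + AB + BA + B^2$, and this equals $A^2 + 2AB + B^2$ only when $AB = BA$. A sketch (matrices chosen by me):

```python
def mult(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def smul(c, A):
    return [[c * x for x in row] for row in A]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]

lhs = mult(add(A, B), add(A, B))                                      # (A+B)^2
mid = add(add(mult(A, A), mult(A, B)), add(mult(B, A), mult(B, B)))   # A^2+AB+BA+B^2
rhs = add(add(mult(A, A), smul(2, mult(A, B))), mult(B, B))           # A^2+2AB+B^2
print(lhs == mid)  # True
print(lhs == rhs)  # False, since AB != BA for these matrices
```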

Note: Theorem 3.3 shows that a scalar matrix $kI_n$ commutes with every $n \times n$ matrix $A$. So $$\kern-8ex (A + kI_n)^2 = A^2 + 2 A (k I_n) + (k I_n)^2 = \query{A^2 + 2 k A + k^2 I_n}$$ ($I_n$ is like the number $1$.)
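A quick check of this identity on a concrete matrix (my choice of $A$ and $k$):

```python
def mult(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def smul(c, A):
    return [[c * x for x in row] for row in A]

A = [[1, 2], [3, 4]]
k = 2
I = [[1, 0], [0, 1]]

lhs = mult(add(A, smul(k, I)), add(A, smul(k, I)))          # (A + kI)^2
rhs = add(add(mult(A, A), smul(2 * k, A)), smul(k * k, I))  # A^2 + 2kA + k^2 I
print(lhs == rhs)  # True
```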

Note: The non-commutativity of matrices is directly related to quantum mechanics. Observables in quantum mechanics are described by matrices, and if the matrices don't commute, then you can't know both quantities at the same time! If time, mention $\frac{d}{dx}$ and multiplication by $x$.

On Friday: more from Sections 3.1 and 3.2: Transpose, symmetric matrices, partitioned matrices.