Continue **reading** Section 4.2 for next class.
Work through recommended homework questions.

**Tutorials:** Quiz next week: 3.7, 4.1 and some of 4.2.

**Office hour:** Monday, 1:30-2:30, MC103B.

**Help Centers:** Monday-Friday 2:30-6:30 in MC 106.

**Definition:** Let $A$ be an $n \times n$ matrix.
A scalar $\lambda$ (lambda) is called an **eigenvalue** of $A$ if
there is a nonzero vector $\vx$ such that $A \vx = \lambda \vx$.
Such a vector $\vx$ is called an **eigenvector** of $A$ corresponding to $\lambda$.

**Question:** Why do we only consider *square* matrices here?

**Example A:** Since
$$
\bmat{rr} 1 & 2 \\ 2 & -2 \emat \coll 2 1 = \coll 4 2 = 2 \coll 2 1 ,
$$
we see that $2$ is an eigenvalue of $\bmat{rr} 1 & 2 \\ 2 & -2 \emat$
with eigenvector $\coll 2 1$.

In general, the eigenvectors for a given eigenvalue $\lambda$ are the nonzero solutions to $(A - \lambda I) \vx = \vec 0$.
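As an aside (not part of the course materials), this characterization is easy to check numerically. Here is a short Python/NumPy sketch that recovers an eigenvector of the matrix from Example A from the null space of $A - \lambda I$:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, -2.0]])
lam = 2.0

# Eigenvectors for lam are the nonzero solutions of (A - lam I) x = 0,
# i.e. the nonzero vectors in null(A - lam I).
M = A - lam * np.eye(2)

# Read off the null space from the SVD: the right-singular vectors
# belonging to (numerically) zero singular values span null(M).
_, s, Vt = np.linalg.svd(M)
x = Vt[s < 1e-10][0]

print(np.allclose(A @ x, lam * x))   # True: the eigenvector equation holds
```

The vector found this way is a unit multiple of $\coll 2 1$, consistent with Example A.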

**Definition:** Let $A$ be an $n \times n$ matrix and let $\lambda$ be
an eigenvalue of $A$. The collection of all eigenvectors corresponding
to $\lambda$, together with the zero vector, is a subspace called the
**eigenspace** of $\lambda$ and is denoted $E_\lambda$. In other words,
$$ E_\lambda = \null(A - \lambda I) . $$

We worked out many examples, and used an applet to understand the geometry.

Given a specific number $\lambda$, we know how to check whether $\lambda$ is an eigenvalue: we check whether $A - \lambda I$ has a nontrivial null space. (And we can find the eigenvectors by finding the null space.)

By the fundamental theorem of invertible matrices, $A - \lambda I$ has a nontrivial null space if and only if it is not invertible. For $2 \times 2$ matrices, we can check invertibility using the determinant!
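For instance, here is a small NumPy sketch (an illustration, not part of the course) that tests candidate values of $\lambda$ against the matrix from Example A using the determinant:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, -2.0]])

def is_eigenvalue(A, lam, tol=1e-10):
    """lam is an eigenvalue iff A - lam I is not invertible,
    i.e. iff det(A - lam I) = 0."""
    n = A.shape[0]
    return abs(np.linalg.det(A - lam * np.eye(n))) < tol

print(is_eigenvalue(A, 2))   # True  (as in Example A)
print(is_eigenvalue(A, 1))   # False
```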

**Example:** Find all eigenvalues of $A = \bmat{rr} 1 & 2 \\ 2 & -2 \emat$.

**Solution:** We need to find all $\lambda$ such that $\det(A-\lambda I) = 0$.
$$
\kern-6ex
\begin{aligned}
\det(A-\lambda I) &= \det \bmat{cc} 1-\lambda & 2 \\ 2 & -2-\lambda \emat \\
&= (1-\lambda)(-2-\lambda)-4 = \lambda^2 + \lambda - 6 ,
\end{aligned}
$$
so we need to solve the quadratic equation $\lambda^2 + \lambda - 6 = 0$.
This can be factored as $(\lambda - 2)(\lambda + 3) = 0$, and so
$\lambda = 2$ or $\lambda = -3$ are the eigenvalues.
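As a quick sanity check (a NumPy sketch, not part of the course), the roots of the characteristic polynomial agree with the eigenvalues NumPy computes directly:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, -2.0]])

# Roots of the characteristic polynomial lambda^2 + lambda - 6:
roots = np.roots([1.0, 1.0, -6.0])

# NumPy can also compute the eigenvalues directly:
eigs = np.linalg.eigvals(A)

print(sorted(roots.real), sorted(eigs.real))   # both give -3 and 2
```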

So now we know how to handle the $2 \times 2$ case. To handle larger matrices, we need to learn about their determinants, which is Section 4.2.

For a $3 \times 3$ matrix $A$, we define
$$
\kern-9ex
\det A = \bdmat{rrr}
\red{a_{11}} & \red{a_{12}} & \red{a_{13}} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\edmat = \red{a_{11}} \bdmat{rr} a_{22} & a_{23} \\ a_{32} & a_{33} \edmat
- \red{a_{12}} \bdmat{rr} a_{21} & a_{23} \\ a_{31} & a_{33} \edmat
+ \red{a_{13}} \bdmat{rr} a_{21} & a_{22} \\ a_{31} & a_{32} \edmat
$$
If we write $A_{ij}$ for the matrix obtained from $A$ by deleting the
$i$th row and the $j$th column, then this can be written
$$
\kern-9ex
\det A = a_{11} \det A_{11} - a_{12} \det A_{12} + a_{13} \det A_{13}
= \sum_{j=1}^{3} (-1)^{1+j} \, a_{1j} \det A_{1j}
$$
We call $\det A_{ij}$ the **$(i,j)$-minor** of $A$.

**Example:** On board.

Example 4.9 in the book shows another method, which doesn't generalize to larger matrices.

**Definition:** Let $A = [a_{ij}]$ be an $n \times n$ matrix.
Then the **determinant** of $A$ is the scalar
$$
\kern-9ex
\begin{aligned}
\det A = |A| &= a_{11} \det A_{11} - a_{12} \det A_{12} + \cdots + (-1)^{1+n} a_{1n} \det A_{1n} \\
&= \sum_{j=1}^{n} (-1)^{1+j} \, a_{1j} \det A_{1j} .
\end{aligned}
$$

This is a recursive definition!
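The recursion can be transcribed almost directly into code. Here is a minimal Python sketch (for illustration only; the amount of work grows like $n!$, so this is not a practical algorithm):

```python
def det_cofactor(A):
    """Determinant by cofactor expansion along the first row,
    following the recursive definition above."""
    n = len(A)
    if n == 1:                      # base case: det [a] = a
        return A[0][0]
    total = 0
    for j in range(n):
        # The minor A_1j: delete row 1 and column j+1
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det_cofactor(minor)
    return total

print(det_cofactor([[1, 2], [2, -2]]))   # -6, matching ad - bc
```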

**Example:**
$A = \bdmat{rrrr} 2 & 0 & 1 & 0 \\ 3 & 1 & 0 & 0 \\ 1 & 0 & 2 & 3 \\ 2 & 0 & 4 & 5 \edmat$,
on board.

The computation can be very long if there aren't many zeros! We'll learn some better methods.

Note that if we define the determinant of a $1 \times 1$ matrix $A = [a]$ to be $a$, then the general definition works in the $2 \times 2$ case as well. So, in this context, $|a| = a$ (not the absolute value!).

It will make the notation simpler if we define the **$(i,j)$-cofactor** of $A$
to be
$$
C_{ij} = (-1)^{i+j} \det A_{ij} .
$$
Then the definition above says
$$
\det A = \sum_{j=1}^{n} \, a_{1j} C_{1j} .
$$
This is called the **cofactor expansion along the first row**.
It turns out that *any* row or column works!

**Theorem 4.1 (The Laplace Expansion Theorem):**
Let $A$ be any $n \times n$ matrix. Then for each $i$ we have
$$
\kern-6ex
\det A = a_{i1} C_{i1} + a_{i2} C_{i2} + \cdots + a_{in} C_{in}
= \sum_{j=1}^{n} \, a_{ij} C_{ij}
$$
(**cofactor expansion along the $i$th row**).
And for each $j$ we have
$$
\kern-6ex
\det A = a_{1j} C_{1j} + a_{2j} C_{2j} + \cdots + a_{nj} C_{nj}
= \sum_{i=1}^{n} \, a_{ij} C_{ij}
$$
(**cofactor expansion along the $j$th column**).
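We can illustrate the theorem numerically (a NumPy sketch, not part of the course) on the $4 \times 4$ matrix from the earlier example:

```python
import numpy as np

def expand_row(A, i):
    """det(A) via cofactor expansion along row i (0-based)."""
    n = A.shape[0]
    return sum((-1) ** (i + j) * A[i, j]
               * np.linalg.det(np.delete(np.delete(A, i, 0), j, 1))
               for j in range(n))

def expand_col(A, j):
    """det(A) via cofactor expansion along column j (0-based)."""
    n = A.shape[0]
    return sum((-1) ** (i + j) * A[i, j]
               * np.linalg.det(np.delete(np.delete(A, i, 0), j, 1))
               for i in range(n))

A = np.array([[2., 0., 1., 0.],
              [3., 1., 0., 0.],
              [1., 0., 2., 3.],
              [2., 0., 4., 5.]])

# Every row and every column gives the same answer:
print([round(expand_row(A, i)) for i in range(4)])
print([round(expand_col(A, j)) for j in range(4)])
```

Note how cheap the expansion along the second column is: only one entry is nonzero.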

The book proves this result at the end of this section, but we won't cover the proof.

The signs in the cofactor expansion form a checkerboard pattern: $$ \bmat{ccccc} + & - & + & - & \cdots \\ - & + & - & + & \cdots \\ + & - & + & - & \cdots \\ - & + & - & + & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \emat $$

**Example:** Redo the previous $4 \times 4$ example, saving
work by expanding along the second column. On board.
Note that the $+/-$ pattern for the resulting $3 \times 3$ determinant
starts over from its own top-left corner; it is not inherited from the
entries' positions in the original matrix.

**Example:** A $4 \times 4$ triangular matrix, on board.

A **triangular** matrix is a square matrix whose entries are all zero
below the diagonal (**upper triangular**) or all zero above the
diagonal (**lower triangular**).

**Theorem 4.2:** If $A$ is triangular, then $\det A$ is the product
of the diagonal entries.
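A quick numerical check of Theorem 4.2 (a NumPy sketch; the matrix below is a made-up example):

```python
import numpy as np

# An upper triangular matrix: all zeros below the diagonal.
T = np.array([[2., 5., -1., 3.],
              [0., 3.,  4., 1.],
              [0., 0., -1., 7.],
              [0., 0.,  0., 4.]])

# Theorem 4.2: the determinant is the product of the diagonal entries.
print(np.prod(np.diag(T)), np.linalg.det(T))   # both are -24 (up to rounding)
```

To see why the theorem holds, expand repeatedly along the first column: at each stage only the diagonal entry survives.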

So how do we do better? As always, we turn to row reduction! The following properties are what we need:

**Theorem 4.3:** Let $A$ be a square matrix.

a. If $A$ has a zero row, then $\det A = 0$.

**b.** If $B$ is obtained from $A$ by interchanging two rows,
then $\det B = - \det A$.

c. If $A$ has two identical rows, then $\det A = 0$.

**d.** If $B$ is obtained from $A$ by multiplying a row of $A$ by $k$,
then $\det B = k \det A$.

e. If $A$, $B$ and $C$ are identical in all rows except the $i$th row,
and the $i$th row of $C$ is the sum of the $i$th rows of $A$ and $B$,
then $\det C = \det A + \det B$.

**f.** If $B$ is obtained from $A$ by adding a multiple of one row to another,
then $\det B = \det A$.

All of the above statements are true with rows replaced by columns.

Explain verbally, making use of:
$$
\kern-6ex
\det A = a_{i1} C_{i1} + a_{i2} C_{i2} + \cdots + a_{in} C_{in}
= \sum_{j=1}^{n} \, a_{ij} C_{ij}
$$
The following will help explain how (f) follows from (c), (d) and (e):
$$
\kern-8ex
A = \collll {\vr_1}{\vr_2}{\vr_3}{\vr_4},\quad
B = \collll {\vr_1}{5 \vr_4}{\vr_3}{\vr_4},\quad
B' = \collll {\vr_1}{\vr_4}{\vr_3}{\vr_4},\quad
C = \ccollll {\vr_1}{\vr_2 + 5 \vr_4}{\vr_3}{\vr_4}
$$
$$
\kern-9ex
\det C = \det A + \det B = \det A + 5 \det B' = \det A + 5 (0) = \det A
$$
Here $\det B' = 0$ by (c), since $B'$ has two identical rows.

The bold statements are the ones that are useful for understanding how row operations change the determinant.
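Properties (b), (d) and (f) are easy to see in action numerically. A NumPy sketch (the $3 \times 3$ matrix is a made-up example with nonzero determinant):

```python
import numpy as np

A = np.array([[1., 2., 0.],
              [3., 1., 4.],
              [2., 5., 1.]])
d = np.linalg.det(A)

# (b) Interchanging two rows flips the sign:
B = A[[1, 0, 2]]
print(np.isclose(np.linalg.det(B), -d))        # True

# (d) Multiplying a row by k multiplies the determinant by k:
C = A.copy(); C[2] *= 5
print(np.isclose(np.linalg.det(C), 5 * d))     # True

# (f) Adding a multiple of one row to another changes nothing:
D = A.copy(); D[1] += 7 * A[0]
print(np.isclose(np.linalg.det(D), d))         # True
```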

**Example:** Use row operations to compute $\det A$ by reducing
to triangular form, where
$A = \bmat{rrrr} 2 & 4 & 6 & 8 \\ 1 & 4 & 1 & 2 \\ 2 & 2 & 12 & 8 \\ 1 & 2 & 3 & 9 \emat$.
On board.
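The procedure in this example can be sketched in code (a NumPy illustration; the course works this out by hand). Each row operation's effect on the determinant is tracked using Theorem 4.3, and Theorem 4.2 finishes the job:

```python
import numpy as np

def det_by_row_reduction(A):
    """Reduce A to triangular form, tracking how each row operation
    changes the determinant (Theorem 4.3), then apply Theorem 4.2."""
    A = A.astype(float).copy()
    n = A.shape[0]
    sign = 1.0
    for k in range(n):
        # Find a nonzero pivot; a row interchange flips the sign (b).
        p = next((r for r in range(k, n) if A[r, k] != 0), None)
        if p is None:
            return 0.0               # zero pivot column: determinant is 0
        if p != k:
            A[[k, p]] = A[[p, k]]
            sign = -sign
        # Adding multiples of the pivot row leaves det unchanged (f).
        for r in range(k + 1, n):
            A[r] -= (A[r, k] / A[k, k]) * A[k]
    return sign * np.prod(np.diag(A))

A = np.array([[2., 4., 6., 8.],
              [1., 4., 1., 2.],
              [2., 2., 12., 8.],
              [1., 2., 3., 9.]])
print(det_by_row_reduction(A), np.linalg.det(A))   # the two values agree
```

Only additions of row multiples and (possibly) row swaps are used here, so no scaling factors need to be undone; by hand it is often convenient to also factor constants out of rows using (d).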