Continue **reading** Section 3.5, start Section 3.6.
Work through recommended homework questions.

**Tutorials:** Quiz 4 this week covers Sections 3.2, 3.3 and
the beginning of Section 3.5 (up to and including Example 3.41).

**Office hour:** today, 12:30-1:30, MC103B.

**Help Centers:** Monday-Friday 2:30-6:30 in MC 106.

**Definition:** A **subspace** of $\R^n$ is any collection $S$ of
vectors in $\R^n$ such that:

1. The zero vector $\vec 0$ is in $S$.

2. $S$ is **closed under addition**:
If $\vu$ and $\vv$ are in $S$, then $\vu + \vv$ is in $S$.

3. $S$ is **closed under scalar multiplication**:
If $\vu$ is in $S$ and $c$ is any scalar, then $c \vu$ is in $S$.

Conditions (2) and (3) together are the same as saying that $S$ is
**closed under linear combinations**.

**Example:** $\R^n$ is a subspace of $\R^n$.
Also, $S = \{ \vec 0 \}$ is a subspace of $\R^n$.

A line or plane through the origin in $\R^3$ is a subspace.

On the other hand, a plane **not** through the origin is not a subspace.
It of course fails (1), but the other conditions fail as well: the sum of
two vectors in the plane lies on a parallel plane farther from the origin,
and scaling a vector in the plane also moves it off the plane.
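This failure is easy to check numerically. Here is a small sanity check (a Python sketch, not part of the text; the helper functions are ours) comparing the plane $x + y + z = 0$, which is a subspace, with the shifted plane $x + y + z = 1$:

```python
# Points on the plane x + y + z = d, represented as 3-tuples.
def on_plane(v, d):
    return sum(v) == d

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

u, v = (1, 2, -3), (4, -4, 0)
assert on_plane(u, 0) and on_plane(v, 0)
assert on_plane(add(u, v), 0)        # sum stays on x + y + z = 0

p, q = (1, 0, 0), (0, 1, 0)
assert on_plane(p, 1) and on_plane(q, 1)
assert not on_plane(add(p, q), 1)    # (1, 1, 0) has sum 2: closure fails
```

Scalar multiplication fails on the shifted plane for the same reason: doubling a point on $x + y + z = 1$ gives a point on $x + y + z = 2$.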

**Theorem 3.19:** Let $\vv_1, \vv_2, \ldots, \vv_k$ be vectors in $\R^n$.
Then $\span(\vv_1, \ldots, \vv_k)$ is a subspace of $\R^n$.

**Theorem 3.21:** Let $A$ be an $m \times n$ matrix and let $N$ be the
set of solutions of the homogeneous system $A \vx = \vec 0$.
Then $N$ is a subspace of $\R^n$.

**Aside:** At this point, the book states **Theorem 3.22**, which
says that every linear system has no solutions, exactly one solution, or
infinitely many solutions, and gives a proof of this. We already know this is true,
using Theorem 2.2 from Section 2.2 (see Lecture 9).
The proof given there is in a sense better, since it doesn't rely on
knowing anything about row echelon form, but I won't use class time
to cover it.

Spans and null spaces are the *two main* sources of subspaces.

**Definition:** Let $A$ be an $m \times n$ matrix.

1. The **row space** of $A$ is the subspace $\row(A)$ of $\R^n$ spanned
by the rows of $A$.

2. The **column space** of $A$ is the subspace $\col(A)$ of $\R^m$ spanned
by the columns of $A$.

3. The **null space** of $A$ is the subspace $\null(A)$ of $\R^n$
consisting of the solutions to the system $A \vx = \vec 0$.

**Example:** The column space of $A = \bmat{rr} 1 & 2 \\ 3 & 4 \emat$
is $\span(\coll 1 3, \coll 2 4)$, which we saw is all of $\R^2$.
We also saw that the row space of $A$ is $\R^2$.
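This span claim can be double-checked by computing a rank (a sympy sketch, our illustration, not part of the text): the columns of a $2 \times 2$ matrix span $\R^2$ exactly when the rank is $2$, and the same goes for the rows.

```python
from sympy import Matrix

A = Matrix([[1, 2], [3, 4]])
# Rank 2 means the two columns are linearly independent,
# so they span all of R^2; transposing checks the rows.
assert A.rank() == 2
assert A.T.rank() == 2
```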

**Example:** The column space of $A = \bmat{rr} 1 & 2 \\ 3 & 4 \\ 5 & 6 \emat$
is the span of the two columns, which is a subspace of $\R^3$.
Since the columns are linearly independent, this is a plane through the origin in $\R^3$.

**Definition:** A **basis** for a subspace $S$ of $\R^n$ is a
set of vectors $\vv_1, \ldots, \vv_k$ such that:

1. $S = \span(\vv_1, \ldots, \vv_k)$, and

2. $\vv_1, \ldots, \vv_k$ are linearly independent.

Condition (2) ensures that none of the vectors is redundant, so we aren't being wasteful. Giving a basis for a subspace is a good way to "describe" it.

**Example 3.42:** The standard unit vectors $\ve_1, \ldots, \ve_n$
in $\R^n$ are linearly independent and span $\R^n$, so they form a
basis of $\R^n$ called the **standard basis**.

**Example:**
We saw above that $\coll 1 3$ and $\coll 2 4$ span $\R^2$.
They are also linearly independent, so they are a basis for $\R^2$.

Note that $\coll 1 0$ and $\coll 0 1$ are another basis for $\R^2$. A subspace will in general have many bases, but we'll see soon that they all have the same number of vectors! (Grammar: one basis, two bases.)

**Example:** Let $\cP$ be the plane through the origin with
direction vectors $\colll 1 3 5$ and $\colll 2 4 6$. Then $\cP$
is a subspace of $\R^3$ and these two vectors are a basis for $\cP$.

**Example:** Find a basis for
$S = \span(\colll 3 0 2, \colll {-2} 1 1, \colll 1 1 3)$.

**Solution:** Here one can spot that the third vector is the sum of the first two, so it is redundant:
$S = \span(\colll 3 0 2, \colll {-2} 1 1)$.
These two vectors are linearly independent, since neither is a scalar multiple of the other, so they form a basis for $S$.

In more complicated situations, there are two ways to find a basis of the span of a set of vectors. The first way uses the following result:

**Theorem 3.20:**
Let $A$ and $B$ be row equivalent matrices.
Then $\row(A) = \row(B)$.

**Proof:**
Suppose $B$ is obtained from $A$ by performing elementary
row operations. Each operation replaces one row with a linear combination
of the rows of the matrix it is applied to.
So every row of $B$ is a linear combination of the rows of $A$.
So $\row(B) \subseteq \row(A)$.

On the other hand, each row operation is reversible, so reversing the above argument gives that $\row(A) \subseteq \row(B)$. Therefore, $\row(A) = \row(B). \quad \Box$

This will be useful, because it is easy to understand the row space of a matrix in row echelon form.

**Example:** Let's redo the above example. Consider the matrix
$$
A = \bmat{rrr} 3 & 0 & 2 \\ -2 & 1 & 1 \\ 1 & 1 & 3 \emat
$$
whose rows are the given vectors. So $S = \row(A)$.

Row reduction produces the matrix
$$
B = \bmat{rrr} 1 & 0 & 2/3 \\ 0 & 1 & 7/3 \\ 0 & 0 & 0 \emat
$$
which is in reduced row echelon form. By Theorem 3.20, $S = \row(B)$.
But the first two rows clearly give a basis for $\row(B)$, so another
answer to the question is $\ccolll 1 0 {2/3}$ and $\ccolll 0 1 {7/3}$.

**Theorem:** If $R$ is a matrix in row echelon form, then the
nonzero rows of $R$ form a basis for $\row(R)$.

**Example:** Let
$$
R = \bmat{rrrr} 1 & 2 & 3 & 4 \\ 0 & 5 & 6 & 7 \\ 0 & 0 & 0 & 8 \\ 0&0&0&0 \emat
= \collll {\va_1} {\va_2} {\va_3} {\va_4}
$$
$\row(R)$ is the span of the nonzero rows, since zero rows don't contribute.
So we just need to see that the nonzero rows are linearly independent.
If we had $c_1 \va_1 + c_2 \va_2 + c_3 \va_3 = \vec 0$,
then $c_1 = 0$, by looking at the first component.
So $5 c_2 = 0$, by looking at the second component.
And so $8 c_3 = 0$, by looking at the fourth component.
So $c_1 = c_2 = c_3 = 0$.

The same argument works in general, by looking at the pivot columns, and this proves the Theorem.

This gives rise to the **row method** for finding a basis for
a subspace $S$ spanned by some vectors $\vv_1, \ldots, \vv_k$:

1. Form the matrix $A$ whose rows are $\vv_1, \ldots, \vv_k$, so $S = \row(A)$.

2. Reduce $A$ to row echelon form $R$.

3. The nonzero rows of $R$ will be a basis of $S = \row(A) = \row(R)$.
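The three steps above can be sketched in a few lines of Python using sympy (the helper name `row_basis` is ours, not from the text):

```python
from sympy import Matrix, Rational

def row_basis(vectors):
    """Basis for span(vectors) via the row method: rref, keep nonzero rows."""
    A = Matrix([list(v) for v in vectors])   # the vectors become the rows of A
    R = A.rref()[0]                          # reduced row echelon form
    return [list(R.row(i)) for i in range(R.rows) if any(R.row(i))]

# The example from these notes: S = span((3,0,2), (-2,1,1), (1,1,3))
basis = row_basis([(3, 0, 2), (-2, 1, 1), (1, 1, 3)])
assert basis == [[1, 0, Rational(2, 3)], [0, 1, Rational(7, 3)]]
```

Applied to the earlier example, it returns the same basis found by hand: $\ccolll 1 0 {2/3}$ and $\ccolll 0 1 {7/3}$.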

Notice that the vectors you get are usually different from the vectors you started with. Given $S = \span(\vv_1, \ldots, \vv_k)$, one can always find a basis for $S$ which just omits some of the given vectors. We'll explain this next.

Suppose we form a matrix $A$ whose __columns__ are $\vv_1, \ldots, \vv_k$.
A nonzero solution to the system $A \vx = \vec 0$ is exactly a dependency
relationship between the given vectors.
Also, recall that if $R$ is row equivalent to $A$, then $R \vx = \vec 0$ has the
same solutions as $A \vx = \vec 0$.
This means that the columns of $R$ have *the same* dependency relationships
as the columns of $A$.

**Example 3.47:** Find a basis for the column space of
$$
A = \bmat{rrrrr}
1 & 1 & 3 & 1 & 6 \\
2 & -1 & 0 & 1 & -1 \\
-3 & 2 & 1 & -2 & 1 \\
4 & 1 & 6 & 1 & 3
\emat
$$
**Solution:** The reduced row echelon form is
$$
R = \bmat{rrrrr}
1 & 0 & 1 & 0 & -1 \\
0 & 1 & 2 & 0 & 3 \\
0 & 0 & 0 & 1 & 4 \\
0 & 0 & 0 & 0 & 0
\emat
$$
Write $\vr_i$ for the columns of $R$ and $\va_i$ for the columns of $A$.
You can see immediately that $\vr_3 = \vr_1 + 2 \vr_2$
and $\vr_5 = -\vr_1 + 3\vr_2 + 4\vr_4$.
So $\col(R) = \span(\vr_1, \vr_2, \vr_4)$, and these three are linearly
independent since they are standard unit vectors.

It follows that the columns of $A$ have the same dependency relationships: $\va_3 = \va_1 + 2 \va_2$ and $\va_5 = -\va_1 + 3\va_2 + 4\va_4$. Also, $\va_1$, $\va_2$ and $\va_4$ must be linearly independent. So a basis for $\col(A)$ is given by $\va_1$, $\va_2$ and $\va_4$.

Note that these are the **columns corresponding to the leading 1's** in $R$!

**Warning:** Elementary row operations change the column space!
So $\col(A) \neq \col(R)$.
So while $\vr_1$, $\vr_2$ and $\vr_4$ are a basis for $\col(R)$,
they are not a solution to the question asked.
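This column method can be checked with sympy (an illustration, not part of the text; `rref` returns both $R$ and the tuple of pivot column indices, i.e. the columns with leading 1's):

```python
from sympy import Matrix

A = Matrix([[1, 1, 3, 1, 6],
            [2, -1, 0, 1, -1],
            [-3, 2, 1, -2, 1],
            [4, 1, 6, 1, 3]])
R, pivots = A.rref()
assert pivots == (0, 1, 3)       # leading 1's in columns 1, 2 and 4

# The pivot columns of A (not of R!) form a basis for col(A):
basis = [A.col(j) for j in pivots]

# The dependency relations read off from R hold for the columns of A:
assert A.col(2) == A.col(0) + 2 * A.col(1)
assert A.col(4) == -A.col(0) + 3 * A.col(1) + 4 * A.col(3)

# But row operations change the column space: here R's first column
# is a standard unit vector while A's is not.
assert R.col(0) != A.col(0)
```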

We already saw that from $R$ we can read off a basis of $\row(A)$. Since $\row(A) = \row(R)$, a basis for $\row(A)$ consists of the nonzero rows of $R$.

The other kind of subspace that arises a lot is the **null space** of a matrix $A$,
the subspace of solutions to the homogeneous system $A \vx = \vec 0$.
We learned in Chapter 2 how to find a basis for this subspace, even though
we didn't use this terminology.

**Example 3.48:** Find a basis for the null space of the $4 \times 5$ matrix $A$ above.

**Solution:** The reduced row echelon form of $[A \mid \vec 0\,]$ is
$$
[R \mid \vec 0\,] = \bmat{rrrrr|r}
1 & 0 & 1 & 0 & -1 & 0 \\
0 & 1 & 2 & 0 & 3 & 0 \\
0 & 0 & 0 & 1 & 4 & 0 \\
0 & 0 & 0 & 0 & 0 & 0
\emat
$$
We see that $x_3$ and $x_5$ are free variables, so we let $x_3 = s$ and $x_5 = t$
and use back substitution to find that
$$
\vx = \colllll {x_1} {x_2} {x_3} {x_4} {x_5}
= s \colllll {-1} {-2} 1 0 0 + t \colllll 1 {-3} 0 {-4} 1 \qqtext{(See text.)}
$$
Therefore, the two column vectors shown form a basis for the null space.

The vectors that arise in this way will always be linearly independent: each parameter appears with coefficient $1$ in the component of its own free variable and $0$ in the components of the other free variables, so if a linear combination of these vectors is $\vec 0$, then every parameter must be zero.
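sympy's `nullspace` follows the same recipe (one basis vector per free variable, that variable set to $1$), so it can be used to check the answer (an illustration, not part of the text):

```python
from sympy import Matrix

A = Matrix([[1, 1, 3, 1, 6],
            [2, -1, 0, 1, -1],
            [-3, 2, 1, -2, 1],
            [4, 1, 6, 1, 3]])
basis = A.nullspace()   # one basis vector per free variable (x3 and x5)
assert basis == [Matrix([-1, -2, 1, 0, 0]), Matrix([1, -3, 0, -4, 1])]

# Each basis vector really solves A x = 0:
for v in basis:
    assert A * v == Matrix([0, 0, 0, 0])
```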

To summarize, given a matrix $A$:

1. Find the reduced row echelon form $R$ of $A$.

2. The nonzero rows of $R$ form a basis for $\row(A) = \row(R)$.

3. The columns of $A$ that correspond to the columns of $R$ with leading 1's
form a basis for $\col(A)$.

4. Use back substitution to solve $R \vx = \vec 0$; the vectors that
arise are a basis for $\null(A) = \null(R)$.

You just need to do row reduction *once* to answer all three questions!
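That point can be seen in one short sympy sketch (an illustration with our own variable names), reusing the matrix from Examples 3.47 and 3.48:

```python
from sympy import Matrix

A = Matrix([[1, 1, 3, 1, 6],
            [2, -1, 0, 1, -1],
            [-3, 2, 1, -2, 1],
            [4, 1, 6, 1, 3]])
R, pivots = A.rref()                           # row reduce once...

row_basis = [R.row(i) for i in range(R.rows) if any(R.row(i))]
col_basis = [A.col(j) for j in pivots]         # columns of A, not of R!
null_basis = A.nullspace()                     # one vector per free variable

# The counts fit together: pivot columns + free columns = all columns.
assert len(row_basis) == len(col_basis) == 3
assert len(null_basis) == A.cols - len(pivots)   # 5 - 3 = 2
```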

We have seen two ways to compute a basis for the span of a set of vectors: make the vectors the rows of a matrix, or make them the columns. The column method produces a basis consisting of vectors from the original set, while the row method usually does not. Both require about the same amount of work: one row reduction.

Similarly, if asked to find a basis for $\row(A)$, one could use the column method on $A^T$.
