## Math 1600B Lecture 4, Section 2, 13 Jan 2014

### Announcements:

Read Section 1.3 for next class. Work through recommended homework questions.

Tutorials start this week, and include a quiz covering Sections 1.1, 1.2 as well as the code vectors part of 1.4. It does not cover Section 1.3 or the Exploration after Section 1.2.
The quizzes last 20 minutes, and are at the end of the tutorial, so you have time for questions at the beginning.
Questions are similar to homework questions, but may be slightly different. There will be two true/false questions, for which you must explain your answers.
You must write in the tutorial you are registered in.
Different sections have different quizzes, but it is still considered an academic offense to share information about quizzes.

Office hour: today, 1:30-2:30, MC103B.
My office hour on Wednesday is cancelled this week.

Lecture notes (this page) available from course web page.

(Actually, a quiz this week and a midterm in 6 1/2 weeks...)

### Section 1.2: Length and Angle: The Dot Product

Definition: The dot product or scalar product of vectors $\vu$ and $\vv$ in $\R^n$ is the real number defined by $$\vu \cdot \vv := u_1 v_1 + \cdots + u_n v_n .$$

This has familiar properties; see Theorem 1.2.

Definition: The length or norm of $\vv$ is the scalar $\|\vv\|$ defined by $$\|\vv\| := \sqrt{\vv \cdot \vv} = \sqrt{v_1^2 + \cdots + v_n^2} .$$ A vector of length 1 is called a unit vector.

Theorem 1.5: The Triangle Inequality: For all $\vu$ and $\vv$ in $\R^n$, $$\| \vu + \vv \| \leq \| \vu \| + \| \vv \| .$$

We define the distance between vectors $\vu$ and $\vv$ by the formula $$d(\vu, \vv) := \| \vu - \vv \| = \sqrt{(u_1 - v_1)^2 + \cdots + (u_n - v_n)^2}.$$
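These three definitions translate directly into code. A short sketch in Python (the function names `dot`, `norm`, and `dist` are just for illustration, not from the text):

```python
import math

def dot(u, v):
    """Dot product u . v = u1*v1 + ... + un*vn."""
    return sum(ui * vi for ui, vi in zip(u, v))

def norm(v):
    """Length ||v|| = sqrt(v . v)."""
    return math.sqrt(dot(v, v))

def dist(u, v):
    """Distance d(u, v) = ||u - v||."""
    return norm([ui - vi for ui, vi in zip(u, v)])

# [3, 4] has length 5; its distance from the origin is the same number.
print(norm([3, 4]))          # 5.0
print(dist([3, 4], [0, 0]))  # 5.0
```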

### Angles from dot product

Theorem 1.4: The Cauchy-Schwarz Inequality: For all $\vu$ and $\vv$ in $\R^n$, $$| \vu \cdot \vv | \leq \| \vu \| \, \| \vv \| .$$

We can therefore use the dot product to define the angle between two vectors $\vu$ and $\vv$ in $\R^n$ by the formula $$\cos \theta = \frac{\vu \cdot \vv}{\| \vu \| \, \| \vv \|}, \quad \text{i.e.,} \quad \theta := \arccos \left( \frac{\vu \cdot \vv}{\| \vu \| \, \| \vv \|} \right),$$ where we choose $0 \leq \theta \leq 180^\circ$. This makes sense because, by the Cauchy-Schwarz Inequality, the fraction is between $-1$ and $1$, which is exactly the domain of $\arccos$.
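The angle formula can be checked numerically. A sketch (the function name `angle` is just for illustration; it returns degrees, matching the convention $0 \leq \theta \leq 180^\circ$ above):

```python
import math

def angle(u, v):
    """Angle between u and v in degrees, via theta = arccos(u.v / (||u|| ||v||))."""
    dot_uv = sum(ui * vi for ui, vi in zip(u, v))
    norm_u = math.sqrt(sum(ui * ui for ui in u))
    norm_v = math.sqrt(sum(vi * vi for vi in v))
    return math.degrees(math.acos(dot_uv / (norm_u * norm_v)))

print(angle([1, 0], [0, 1]))  # ~90: perpendicular axis vectors
print(angle([1, 1], [1, 0]))  # ~45: diagonal against the x-axis
```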

The formula can also be written $$\vu \cdot \vv = \| \vu \| \, \| \vv \| \, \cos \theta .$$

### Orthogonal Vectors

How can we tell whether two vectors are orthogonal / perpendicular?

Example: If $\vu = [1, 2, 3]$ and $\vv = [1, 1, -1]$ in $\R^3$, then $\vu \cdot \vv = 1 \cdot 1 + 2 \cdot 1 + 3 \cdot (-1) = 1 + 2 - 3 = 0$, so $\vu$ and $\vv$ are orthogonal.

Also, $\vu = [1, 2, 3]$ and $\vv = [1, 1, 1]$ in $\Z_3^3$ are orthogonal, since $\vu \cdot \vv = 1 + 2 + 3 = 6 = 0 \pmod{3}$.

Pythagorean theorem in $\R^n$: If $\vu$ and $\vv$ are orthogonal, then $$\| \vu + \vv \|^2 = \| \vu \|^2 + \| \vv \|^2 .$$
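Both examples above and the Pythagorean identity can be verified with a few lines of Python (a sketch; `dot` is just the component-wise dot product):

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

u, v = [1, 2, 3], [1, 1, -1]
print(dot(u, v))  # 0, so u and v are orthogonal in R^3

# The Z_3^3 example: reduce the dot product modulo 3.
print(dot([1, 2, 3], [1, 1, 1]) % 3)  # 6 mod 3 = 0, so orthogonal in Z_3^3

# Pythagorean theorem: ||u + v||^2 = ||u||^2 + ||v||^2 for orthogonal u, v.
w = [ui + vi for ui, vi in zip(u, v)]
print(dot(w, w), dot(u, u) + dot(v, v))  # both are 17
```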

Explain on board, using Theorem 1.2.

### Projections

Use board to derive the formula for the projection of $\vv$ onto $\vu$: $$\proj_\vu(\vv) = \frac{\vu \cdot \vv}{\vu \cdot \vu} \, \vu .$$

The applet we saw before is useful for understanding projections as well. Java version.

Example: If $\vu = [-1, 1, 0]$ and $\vv = [1,2,3]$ then \kern-4ex \begin{aligned} \proj_\vu(\vv) = \frac{\vu \cdot \vv}{\vu \cdot \vu} \vu &= \frac{-1+2+0}{1+1+0} [-1, 1, 0] \\ &= \frac{1}{2} [-1,1,0] = [-\!\frac{1}{2}, \frac{1}{2}, 0] \end{aligned}
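The worked example can be checked exactly with Python's standard `fractions` module, which avoids floating-point rounding (the function name `proj` is just for illustration):

```python
from fractions import Fraction

def proj(u, v):
    """Projection of v onto u: ((u.v)/(u.u)) * u, with exact rational arithmetic."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    c = Fraction(dot(u, v), dot(u, u))  # the scalar (u.v)/(u.u)
    return [c * ui for ui in u]

print(proj([-1, 1, 0], [1, 2, 3]))  # [-1/2, 1/2, 0] as Fractions, matching the example
```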

### Questions

True/false: If $\vu$, $\vv$ and $\vw$ are vectors in $\R^n$ such that $\vu \cdot \vv = \vu \cdot \vw$ and $\vu \neq \vec 0$, then $\vv = \vw$.

True/false: If $\vu$ is orthogonal to both $\vv$ and $\vw$, then $\vu$ is orthogonal to $2 \vv + 3 \vw$.

You answer true only if a statement is always true. You justify this answer by giving a general explanation of why it is always true, not just an example where it happens to be true.

You answer false if a statement can in some case be false. You justify this answer by giving an explicit example where the statement is false.

Question: Suppose I tell you that $\vu \cdot \vv = 1/2$ and $\vu \cdot \vw = -1$. What is $\vu \cdot (2 \vv + 3 \vw)$?

Question: Does $\proj_{\vu}(\vv)$ always point in the same direction as $\vu$?

### Section 1.4: Applications: Code Vectors (we aren't covering force vectors)

We're going to study a way to encode data that allows us to detect transmission errors. Used on CDs, UPC codes, ISBN numbers, credit card numbers, etc.

Example 1.37: Suppose we want to send the four commands "forward", "back", "left" and "right" as a sequence of 0s and 1s. We could use the following code: $$\small\kern-8ex \textrm{forward} = [0,0], \quad \textrm{back} = [0,1], \quad \textrm{left} = [1,0], \quad \textrm{right} = [1,1].$$ But if there is an error in our transmission, the Mars rover will get the wrong message and will drive off of a cliff, wasting billions of dollars of taxpayer money (but making for some good NASA jokes).

Here's a more clever code: $$\small\kern-8ex \textrm{forward} = [0,0,0], \quad \textrm{back} = [0,1,1], \quad \textrm{left} = [1,0,1], \quad \textrm{right} = [1,1,0].$$ If any single bit (binary digit, a 0 or a 1) is flipped during transmission, the Mars rover will notice the error, since all of the code vectors have an even number of 1s. It could then ask for retransmission of the command.

This is called an error-detecting code. Note that it is formed by adding a bit to the end of each of the original code vectors so that the total number of 1s is even.

In vector notation, we replace a vector $\vb = [b_1, b_2, \ldots, b_n]$ with the vector $\vv = [b_1, b_2, \ldots, b_n, d]$ such that $\vec 1 \cdot \vv = 0 \pmod{2}$, where $\vec 1 = [1, 1, \ldots, 1]$.
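In code, appending and checking the parity bit looks like this (a sketch; the function names are illustrative):

```python
def add_parity_bit(b):
    """Append a check bit d so that [b1, ..., bn, d] has an even number of 1s."""
    return b + [sum(b) % 2]

def is_valid(v):
    """A received word is valid when 1 . v = 0 (mod 2)."""
    return sum(v) % 2 == 0

print(add_parity_bit([0, 1]))  # [0, 1, 1], the "back" command from the example
print(is_valid([0, 1, 1]))     # True; flipping any single bit would make it False
```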

Exactly the same idea works for vectors in $\Z_3^n$; see Example 1.39 in the text.

Note: One problem with the above scheme is that transposition errors are not detected: if we want to send $[0, 1, 1]$ but the first two bits are exchanged, the rover receives $[1, 0, 1]$, which is also a valid command. We'll see codes that can detect transpositions.

Example 1.40 (UPC Codes): The Universal Product Code (bar code) on a product is a vector in $\Z_{10}^{12}$, such as $$\vu = [6,7,1,8,6,0,0,1,3,6,2,4].$$ Instead of using $\vec 1$ as the check vector, UPC uses $$\vc = [ 3, 1, 3, 1, 3, 1, 3, 1, 3, 1, 3, 1] .$$ The last digit is chosen so that $\vc \cdot \vu = 0 \pmod{10}$.

For example, if we didn't know the last digit of $\vu$, we could compute $$\vc \cdot [6,7,1,8,6,0,0,1,3,6,2,d] = \cdots = 6 + d \pmod{10}$$ and so we would find that we need to take $d = 4$, since $6 + 4 = 0 \pmod{10}$.
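The same computation in Python (a sketch; the function name `upc_check_digit` is just for illustration):

```python
def upc_check_digit(digits):
    """Given the first 11 UPC digits, return d so that c . [digits, d] = 0 (mod 10)."""
    c = [3, 1] * 6  # check vector [3,1,3,1,...]; its last entry, 1, multiplies d
    partial = sum(ci * di for ci, di in zip(c, digits))  # zip stops after 11 terms
    return (-partial) % 10  # choose d to cancel the partial sum mod 10

print(upc_check_digit([6, 7, 1, 8, 6, 0, 0, 1, 3, 6, 2]))  # 4, as in the example
```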

This detects any single error. The pattern in $\vc$ was chosen so that it detects many transpositions, but it doesn't detect when digits whose difference is 5 are transposed. For example, $3 \cdot 5 + 1 \cdot 0 = 15$ and $3 \cdot 0 + 1 \cdot 5 = 5$, and these are the same modulo $10$.

Example 1.41 (ISBN Codes): ISBN codes use vectors in $\Z_{11}^{10}$. The check vector is $\vc = [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]$. Because 11 is a prime number, this code detects all single errors and all single transposition errors.

Summary: To create a code, you choose $m$ (which determines the allowed digits), $n$ (the number of digits in a code word), and a check vector $\vc \in \Z_m^n$. Then the valid words $\vv$ are those with $\vc \cdot \vv = 0$. If $\vc$ ends in a $1$, then you can always choose the last digit of $\vv$ to make it valid.

Note: This kind of code can only reliably detect one error, but more sophisticated codes can detect multiple errors. There are even error-correcting codes, which can correct multiple errors in a transmission without needing it to be resent. In fact, you can drill small holes in a CD, and it will still play the entire content perfectly.

Question: The Dan code uses vectors in $\Z_4^3$ with check vector $\vc = [3,2,1]$. Find the check digit $d$ in the code word $\vv = [2, 2, d]$.

Solution: We compute \kern-4ex \begin{aligned} \vc \cdot \vv = [3,2,1]\cdot [2,2,d] &= 3 \cdot 2 + 2 \cdot 2 + 1 \cdot d \\ &= 10 + d = 2 + d \pmod{4} \end{aligned} To make $\vc \cdot \vv = 0 \pmod{4}$, we choose $d = 2$.
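The summary above can be packaged as one general routine covering the Dan code and UPC alike (a sketch; `check_digit` is an illustrative name, and it assumes the check vector ends in $1$ so the last digit can always be solved for):

```python
def check_digit(word, c, m):
    """Return d so that c . (word + [d]) = 0 (mod m); assumes c ends in 1."""
    partial = sum(ci * wi for ci, wi in zip(c, word))  # uses all but the last entry of c
    return (-partial) % m

# Dan code: vectors in Z_4^3 with check vector [3, 2, 1].
print(check_digit([2, 2], [3, 2, 1], 4))  # 2

# UPC fits the same pattern: m = 10, c = [3,1,3,1,...,3,1].
print(check_digit([6, 7, 1, 8, 6, 0, 0, 1, 3, 6, 2], [3, 1] * 6, 10))  # 4
```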

This is the end of the material for quiz 1.