
Section 7.1 Inner Products

A general vector space need not have any relevant geometry, and in most of our work up to this point, geometric notions did not play a central role. In this chapter, however, we will begin to take advantage of the geometry present in some vector spaces.

Subsection 7.1.1 The Dot Product

In Euclidean geometry, we are introduced to the dot product quite early. The dot product in \(\rr^n\) is essential to our understanding of length and distance.

Definition 7.1.1.

For two vectors \(\bfx, \bfy \in \rr^n\text{,}\) we have the dot product of \(\bfx\) and \(\bfy\) given by
\begin{equation*} \bfx \cdot \bfy = \sum_{i=1}^n x_iy_i\text{,} \end{equation*}
where \(\bfx = [x_i]\) and \(\bfy = [y_i]\text{.}\)

Example 7.1.2.

Suppose that \(\bfx\) and \(\bfy\) are the following two vectors in \(\rr^3\text{:}\)
\begin{equation*} \bfx = \begin{bmatrix} -1 \\ 2 \\ -2 \end{bmatrix}, \hspace{12pt} \bfy = \begin{bmatrix} 0 \\ 1 \\ -3 \end{bmatrix}\text{.} \end{equation*}
Then \(\bfx \cdot \bfy = (-1)(0) + 2(1) + (-2)(-3) = 8\text{.}\)

Note 7.1.3.

Now that we have some facility with matrix multiplication, the observant reader will notice that \(\bfx \cdot \bfy = \bfy^T \bfx\text{.}\)
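As a quick numerical check, the calculation in Example 7.1.2 and the identity \(\bfx \cdot \bfy = \bfy^T \bfx\) can both be verified with a short script (sketched here in Python with NumPy, which this text does not otherwise assume):

```python
import numpy as np

# The vectors from Example 7.1.2
x = np.array([-1, 2, -2])
y = np.array([0, 1, -3])

# The dot product as a sum of componentwise products
dot = np.dot(x, y)        # (-1)(0) + (2)(1) + (-2)(-3) = 8

# The same value, computed as the matrix product y^T x
dot_via_matmul = y.T @ x
```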

Definition 7.1.4.

The length or norm of a vector \(\bfx \in \rr^n\) is the nonnegative scalar \(\vnorm{\bfx}\) defined by
\begin{equation*} \vnorm{\bfx} = \sqrt{\bfx \cdot \bfx} = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}\text{.} \end{equation*}

Example 7.1.5.

If \(\bfx \in \rr^3\) is the same as in Example 7.1.2, then
\begin{equation*} \vnorm{\bfx} = \sqrt{(-1)^2 + 2^2 + (-2)^2} = \sqrt{9} = 3\text{.} \end{equation*}
Of special relevance for us is the fact that the dot product gives us a notion of angles and perpendicularity.

Definition 7.1.6.

Two vectors \(\bfx\) and \(\bfy\) in \(\rr^n\) are orthogonal if \(\bfx \cdot \bfy = 0\text{.}\)

Note 7.1.7.

The word “orthogonal” is another way of saying “perpendicular,” but “orthogonal” is used much more frequently in linear algebra.

Example 7.1.8.

Let \(\bfu\text{,}\) \(\bfv\text{,}\) and \(\bfw\) be the following vectors in \(\rr^2\text{:}\)
\begin{equation*} \bfu = \begin{bmatrix} -2 \\ 3 \end{bmatrix}, \hspace{6pt} \bfv = \begin{bmatrix} 1 \\ 4 \end{bmatrix}, \hspace{6pt} \bfw = \begin{bmatrix} 6 \\ 4 \end{bmatrix}\text{.} \end{equation*}
We can see that \(\bfu\) and \(\bfv\) are not orthogonal, since \(\bfu \cdot \bfv = 10\text{.}\) However, \(\bfu\) and \(\bfw\) are orthogonal, as \(\bfu \cdot \bfw = 0\text{.}\)
As this chapter continues, the reader will see just how important orthogonality is. For now, we note that all of the vectors in the standard basis of \(\rr^n\text{,}\) \(\bfe_1, \ldots, \bfe_n\text{,}\) are orthogonal to each other. That is, \(\bfe_i \cdot \bfe_j = 0\) whenever \(i \neq j\text{.}\)
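These computations are easy to confirm numerically; the sketch below (in Python with NumPy, an assumption on tooling rather than part of the text) checks the norm from Example 7.1.5, the orthogonality claims of Example 7.1.8, and the pairwise orthogonality of the standard basis of \(\rr^3\):

```python
import numpy as np

x = np.array([-1, 2, -2])   # the vector from Examples 7.1.2 and 7.1.5
u = np.array([-2, 3])
v = np.array([1, 4])
w = np.array([6, 4])

norm_x = np.sqrt(np.dot(x, x))   # sqrt(1 + 4 + 4) = 3

uv = np.dot(u, v)   # 10, so u and v are not orthogonal
uw = np.dot(u, w)   # 0, so u and w are orthogonal

# The rows of the identity matrix are the standard basis vectors e_1, e_2, e_3
e = np.eye(3)
pairwise_orthogonal = all(
    np.dot(e[i], e[j]) == 0 for i in range(3) for j in range(3) if i != j
)
```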
A consequence of this last fact is stated in the following proposition.

Proposition 7.1.9.

Let \(\bfv = [v_i]\) be a vector in \(\rr^n\text{.}\) Then \(\bfv \cdot \bfe_i = v_i\) for each \(i\) with \(1 \le i \le n\text{.}\)

Proof.

By definition of the dot product, we have
\begin{equation*} \bfv \cdot \bfe_i = \sum_{j=1}^n v_j (\bfe_i)_j = v_i\text{,} \end{equation*}
since the only nonzero entry of \(\bfe_i\) is \((\bfe_i)_i = 1\text{.}\)
This has been a very brief review/introduction to the dot product. As we generalize this function in what follows, we will remind the reader of important facts and properties as we need them.

Subsection 7.1.2 The Inner Product

In the same way that vectors in \(\rr^n\) gave us the intuition to consider a general vector space, the dot product in \(\rr^n\) points us toward a more general function on vector spaces. Our generalization of the dot product is called the inner product.

Note 7.1.10.

Before the next definition, we need a quick reminder. For a complex number \(z = a + bi\text{,}\) recall that the complex conjugate of \(z\) is defined by \(\overline{z} = a-bi\text{.}\) This will be used in the following definition.

Definition 7.1.11.

Let \(V\) be a vector space over a field \(\ff\text{,}\) where \(\ff\) is either \(\rr\) or \(\cc\text{.}\) An inner product on \(V\) is a function that associates to each pair of vectors \(\bfu\) and \(\bfv\) in \(V\) an element \(\lla \bfu, \bfv \rra\) of the field \(\ff\text{,}\) satisfying all of the following axioms. For all \(\bfu\text{,}\) \(\bfv\text{,}\) and \(\bfw\) in \(V\text{,}\) and all \(c \in \ff\text{:}\)
  1. \(\lla \bfu, \bfv \rra = \overline{\lla \bfv, \bfu \rra}\text{;}\)
  2. \(\lla \bfu + \bfv, \bfw \rra = \lla \bfu, \bfw \rra + \lla \bfv, \bfw \rra\text{;}\)
  3. \(\lla c\bfu, \bfv \rra = c\lla \bfu, \bfv \rra\text{;}\) and
  4. \(\lla \bfu, \bfu \rra \ge 0\text{,}\) and \(\lla \bfu, \bfu \rra = 0\) if and only if \(\bfu = \bfo\text{.}\)
A vector space together with an inner product is called an inner product space.

Note 7.1.12.

If the field we have in mind is \(\rr\) instead of \(\cc\text{,}\) then the first property listed in the definition is just \(\lla \bfu, \bfv \rra = \lla \bfv, \bfu \rra\text{.}\) (If \(x \in \rr\text{,}\) then \(\overline{x}=x\text{.}\)) Also, if our field is \(\cc\text{,}\) we still require \(\lla \bfu, \bfu \rra\) to be a real number, as this is implicit in the fourth property where \(\lla \bfu, \bfu \rra \ge 0\text{.}\)
Before we introduce examples, we should address why the only fields we allow for inner product spaces are \(\rr\) and \(\cc\text{.}\) The inner product requires that a notion of order be present in the field over which the vector space is defined. This is inherent in the fourth property listed in the definition of an inner product, where we must have \(\lla \bfu, \bfu \rra \ge 0\) for all \(\bfu \in V\text{.}\) We do not have this sort of ordering in a finite field like \(\ff_5\text{.}\)
As we discuss in Appendix A, each field \(\ff_p\) is really a set of equivalence classes of \(\zz\) under the equivalence relation of congruence mod \(p\text{.}\) So when we write \(2\) as an element of \(\ff_5\text{,}\) we're referring to \([2]\text{,}\) the equivalence class of all integers congruent to 2 mod 5. And although our convention is to use the integers \(0, 1, \ldots, p-1\) as the equivalence class representatives for the elements of \(\ff_p\text{,}\) this is not a requirement. So, 7 and 12 and \(-3\) could all be used as representatives of \([2] \in \ff_5\text{.}\) This means that we cannot in any coherent way say that \([2] \in \ff_5\) is “greater than or equal to 0.” (Since \(0\) in \(\ff_5\) is \([0]\text{,}\) what we mean is that “\([2] \ge [0]\)” has no meaning.) Because of this lack of ordering, finite fields do not have the geometric properties that we require for an inner product space. We must bid a fond farewell to these dear friends for now, knowing that we will cross paths with them again in our mathematical futures.

Example 7.1.13.

All real vector spaces \(\rr^n\) with the dot product are inner product spaces. (Once again, we would be particularly bad at generalizing if the motivating case were not an example of the general situation!)

Example 7.1.14.

For vectors \(\bfu, \bfv \in \cc^n\text{,}\) the standard inner product is defined by
\begin{equation*} \lla \bfu, \bfv \rra = \sum_{i=1}^n u_i\overline{v_i}\text{,} \end{equation*}
where \(\bfu = [u_i]\) and \(\bfv = [v_i]\text{.}\)
As an example calculation, we consider the following two vectors in \(\cc^2\text{:}\)
\begin{equation*} \bfu = \begin{bmatrix} 1 + i \\ -2i \end{bmatrix}, \hspace{6pt} \bfv = \begin{bmatrix} 2-i \\ 3+4i \end{bmatrix}\text{.} \end{equation*}
Then we have
\begin{align*} \lla \bfu, \bfv \rra \amp = (1+i)\overline{(2-i)} + (-2i)\overline{(3+4i)}\\ \amp = (1+i)(2+i) + (-2i)(3-4i)\\ \amp = -7-3i\text{.} \end{align*}
We will leave for the exercises the proof that the inner product axioms hold for this function.
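The conjugation in the second slot is easy to get backwards, so a numerical check is worthwhile; the following sketch (Python with NumPy, assumed tooling) reproduces the calculation above:

```python
import numpy as np

u = np.array([1 + 1j, -2j])
v = np.array([2 - 1j, 3 + 4j])

# <u, v> = sum_i u_i * conj(v_i), the standard inner product on C^n
ip_uv = np.sum(u * np.conj(v))   # expect -7 - 3i
```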

Example 7.1.15.

Let \(C([0,1])\) denote the vector space of continuous real-valued functions on the interval \([0,1]\text{.}\) (See Example 2.3.5 for a discussion of vector spaces like this one.) We can study an inner product on this space defined by the following:
\begin{equation*} \lla f,g \rra = \int_0^1 f(x)g(x)\; dx\text{.} \end{equation*}
Again, we provide an example of a calculation. If \(f(x) = 2x\) and \(g(x) = x^2-4\text{,}\) then
\begin{equation*} \lla f,g \rra = \int_0^1 2x(x^2-4)\; dx = \int_0^1 (2x^3 - 8x)\; dx = -\frac{7}{2}\text{.} \end{equation*}
Proving that the inner product axioms hold requires recalling a few facts from calculus. We leave this to the exercises.
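The integral above can be checked symbolically; here is a sketch using Python with SymPy (an assumption on tooling; any computer algebra system would do):

```python
from sympy import symbols, integrate, Rational

x = symbols('x')
f = 2 * x
g = x**2 - 4

# <f, g> = integral of f(x) g(x) over [0, 1]
ip_fg = integrate(f * g, (x, 0, 1))   # expect -7/2
```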

Example 7.1.16.

We consider an inner product on \(P_2\text{,}\) the vector space of all real-valued polynomials of degree at most two. For \(p, q \in P_2\text{,}\) we define the function
\begin{equation*} \ip{p, q} = p(0)q(0) + p(1)q(1) + p(2)q(2)\text{.} \end{equation*}
To become familiar with this function, we can calculate the inner product of \(p = t-2t^2\) and \(q = 3 + 4t\text{.}\) Calculating \(\ip{p,q}\) only involves evaluating these polynomials at \(t=0\text{,}\) \(t=1\text{,}\) and \(t=2\) and then finding the sum of the products. We find that
\begin{align*} \ip{p,q} \amp = p(0)q(0) + p(1)q(1) + p(2)q(2)\\ \amp = (0)(3) + (-1)(7) + (-6)(11) = -73\text{.} \end{align*}
The first inner product property holds since multiplication in the real numbers is commutative. The second and third properties hold by the definitions of vector addition and scalar multiplication in \(P_2\text{.}\) The first part of the fourth property holds because a sum of squared real numbers must always be non-negative. The final part of the fourth property holds by an important fact about polynomials: any polynomial of degree \(n\) which has \(n+1\) zeros must be the zero polynomial. (This is why we must use three evaluation points for this function to be an inner product on \(P_2\text{.}\))
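Since this inner product only involves evaluation, it is straightforward to compute mechanically; here is a minimal sketch in Python (representing polynomials as callables, a choice made purely for illustration):

```python
# Inner product on P_2 defined by evaluation at t = 0, 1, and 2
def ip(p, q):
    return sum(p(t) * q(t) for t in (0, 1, 2))

p = lambda t: t - 2 * t**2   # p = t - 2t^2
q = lambda t: 3 + 4 * t      # q = 3 + 4t

value = ip(p, q)   # expect -73
```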
The following properties flow fairly quickly from the definition of an inner product.

Proof.

We will prove the second property and leave the others for the exercises. Using the first and third axioms from the definition of the inner product, we have
\begin{equation*} \lla \bfu, c\bfv \rra = \overline{\lla c\bfv, \bfu \rra} = \overline{c\lla \bfv, \bfu \rra} = \overline{c} \overline{\lla \bfv, \bfu \rra } = \overline{c} \lla \bfu, \bfv \rra\text{.} \end{equation*}
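We can spot-check the identity \(\lla \bfu, c\bfv \rra = \overline{c} \lla \bfu, \bfv \rra\) for the standard inner product on \(\cc^2\) (a numerical sketch in Python with NumPy; the particular vectors and scalar are arbitrary choices):

```python
import numpy as np

def ip(u, v):
    # Standard inner product on C^n: sum_i u_i * conj(v_i)
    return np.sum(u * np.conj(v))

u = np.array([1 + 1j, -2j])
v = np.array([2 - 1j, 3 + 4j])
c = 2 - 3j

lhs = ip(u, c * v)            # <u, cv>
rhs = np.conj(c) * ip(u, v)   # conj(c) <u, v>
```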
The presence of an inner product gives us a good way to define the length of a vector.

Definition 7.1.18.

Let \(V\) be an inner product space and let \(\bfv \in V\text{.}\) Then the norm of \(\bfv\) is
\begin{equation*} \vnorm{\bfv} = \sqrt{\ip{\bfv, \bfv}}\text{.} \end{equation*}
If \(\vnorm{\bfv} = 1\text{,}\) then \(\bfv\) is called a unit vector.
In the following examples we calculate the norm of a few vectors in different vector spaces.

Example 7.1.19.

We consider the following vector in \(\cc^3\text{:}\)
\begin{equation*} \bfv = \begin{bmatrix} 2 + 4i \\ -2 + 4i \\ 2i \end{bmatrix}\text{.} \end{equation*}
Using the standard inner product on \(\cc^3\text{,}\) we have
\begin{equation*} \vnorm{\bfv} = \sqrt{(4+16) + (4+16) +4} = \sqrt{44}\text{.} \end{equation*}

Example 7.1.20.

Returning to the vector space \(C([0,1])\) with the inner product defined in Example 7.1.15, we can find the norm of \(f(x)=2+x\text{:}\)
\begin{equation*} \int_0^1 (2+x)^2\; dx = \int_0^1 (4+4x+x^2)\; dx = \frac{19}{3}\text{.} \end{equation*}
This means that \(\vnorm{f} = \sqrt{\frac{19}{3}}\text{.}\)
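The same symbolic check as before works here (again sketched with Python and SymPy, an assumption on tooling):

```python
from sympy import symbols, integrate, sqrt, Rational

x = symbols('x')
f = 2 + x

# ||f||^2 = <f, f> = integral of f(x)^2 over [0, 1]
norm_squared = integrate(f**2, (x, 0, 1))   # expect 19/3
norm_f = sqrt(norm_squared)
```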
Using the definition of the norm, we can examine what happens to the “length” of a vector when it is multiplied by a scalar:
\begin{equation*} \vnorm{c\bfv} = \sqrt{\ip{c\bfv, c\bfv}} = \sqrt{|c|^2\ip{\bfv, \bfv}} = |c| \vnorm{\bfv}\text{.} \end{equation*}
(Note that when \(\cc\) is our field, \(|c|\) for a scalar \(c = a + bi\) is \(|c| = \sqrt{a^2+b^2}\text{.}\)) From this calculation we can see that when a vector is multiplied by a scalar, its length is multiplied by the absolute value of that scalar. (We can make the most geometric sense of this when \(\rr\) is our field and when \(c\) is positive.)

Example 7.1.21.

Often we will want a unit vector that points in the same direction as a given vector. We accomplish this by dividing a vector by its length in order to form a vector of length 1.
If we consider the vector \(\bfv = \begin{bmatrix} -1 \\ 4 \end{bmatrix}\) in \(\rr^2\) with the dot product, then we have
\begin{equation*} \vnorm{\bfv} = \sqrt{1 + 16} = \sqrt{17}\text{.} \end{equation*}
Therefore, a unit vector in the direction of \(\bfv\) would be
\begin{equation*} \frac{1}{\sqrt{17}} \bfv = \begin{bmatrix} -\frac{1}{\sqrt{17}} \\[6pt] \frac{4}{\sqrt{17}} \end{bmatrix}\text{.} \end{equation*}
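Normalizing a vector is a one-line computation; this sketch (Python with NumPy) reproduces the example and confirms the result has length 1:

```python
import numpy as np

v = np.array([-1.0, 4.0])

# Divide v by its length to get a unit vector in the same direction
unit = v / np.linalg.norm(v)

length = np.linalg.norm(unit)   # should be 1, up to floating-point error
```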

Subsection 7.1.3 Orthogonality

In the same way that we used the dot product to define orthogonality in \(\rr^n\text{,}\) we can now extend that definition to our more general setting.

Definition 7.1.22.

Two vectors \(\bfu\) and \(\bfv\) in an inner product space \(V\) are orthogonal if \(\ip{\bfu, \bfv} = 0\text{.}\) A set of vectors \(\{\bfv_1, \ldots, \bfv_n \}\) is orthogonal if \(\ip{\bfv_i, \bfv_j} = 0\) whenever \(i \neq j\text{.}\)
One of the ways that orthogonality is used is through the following result.

Proposition 7.1.23.

If \(\{\bfv_1, \ldots, \bfv_n\}\) is an orthogonal set of nonzero vectors in an inner product space \(V\text{,}\) then \(\{\bfv_1, \ldots, \bfv_n\}\) is linearly independent.

Proof.

Let \(V' = \{\bfv_1, \ldots, \bfv_n\}\) be an orthogonal set of vectors in \(V\text{.}\) Suppose that
\begin{equation*} c_1 \bfv_1 + \cdots + c_n\bfv_n = \bfo \end{equation*}
for some scalars \(c_1, \ldots, c_n \in \ff\text{.}\) We want to show that all of the scalars must be zero. For each \(k\text{,}\) we have
\begin{equation*} 0 = \ip{\bfo, \bfv_k} = \ip{\sum_{i=1}^n c_i\bfv_i, \bfv_k} = \sum_{i=1}^n c_i \ip{\bfv_i, \bfv_k} = c_k \ip{\bfv_k, \bfv_k} = c_k \vnorm{\bfv_k}^2\text{.} \end{equation*}
Since \(c_k \vnorm{\bfv_k}^2 = 0\) but \(\bfv_k \neq \bfo\text{,}\) we know that \(\vnorm{\bfv_k}^2 \neq 0\text{,}\) so \(c_k = 0\text{.}\) This is true for each \(k\text{,}\) \(1 \le k \le n\text{,}\) so \(V'\) is linearly independent.
The next result is sometimes referred to as the Pythagorean Theorem for general inner product spaces. When there are only two orthogonal vectors, the reader will recognize the reference to the Pythagorean Theorem.

Theorem 7.1.24.

If \(\{\bfv_1, \ldots, \bfv_n\}\) is an orthogonal set of vectors in an inner product space \(V\text{,}\) then
\begin{equation*} \vnorm{\bfv_1 + \cdots + \bfv_n}^2 = \vnorm{\bfv_1}^2 + \cdots + \vnorm{\bfv_n}^2\text{.} \end{equation*}

Subsection 7.1.4 Results for Inner Product Spaces

The property of orthogonality is so powerful that we will occasionally want to call upon it even when it is not already on the scene.

Lemma 7.1.25.

Let \(V\) be an inner product space over \(\ff\text{,}\) and let \(\bfu, \bfv \in V\text{.}\) Then there exist a scalar \(c \in \ff\) and a vector \(\bfw \in V\) with \(\ip{\bfv, \bfw} = 0\) such that
\begin{equation*} \bfu = c\bfv + \bfw\text{.} \tag{7.1} \end{equation*}

Proof.

If \(\bfv = \bfo\text{,}\) then we can take \(\bfw = \bfu\) and \(c = 1\text{,}\) as every vector is orthogonal to \(\bfo\text{.}\) So, we now suppose that \(\bfv \neq \bfo\text{.}\)
If there exists \(c \in \ff\) such that \(\bfu = c\bfv + \bfw\) with \(\bfw\) orthogonal to \(\bfv\text{,}\) then we must have
\begin{equation*} \ip{\bfu, \bfv} = \ip{c\bfv + \bfw, \bfv} = c\ip{\bfv,\bfv} + \ip{\bfw, \bfv} = c \vnorm{\bfv}^2\text{.} \end{equation*}
This shows that the only possibility for \(c\) is \(c = \frac{\ip{\bfu, \bfv}}{\vnorm{\bfv}^2}\text{.}\)
Once \(c\) has been determined, the choice of \(\bfw\) is determined by (7.1): we must have \(\bfw = \bfu - c\bfv\text{.}\) Now it is easy to check that, with these values, we indeed have \(\ip{\bfv,\bfw}=0\) and that (7.1) holds.

Example 7.1.26.

We consider two vectors in \(\rr^2\) to understand the relationship in this lemma:
\begin{equation*} \bfu = \begin{bmatrix} 2 \\ 3 \end{bmatrix}, \hspace{12pt} \bfv = \begin{bmatrix} -1 \\ 2 \end{bmatrix}\text{.} \end{equation*}
The lemma specifies our calculations:
\begin{equation*} c = \frac{\ip{\bfu,\bfv}}{\vnorm{\bfv}^2} = \frac{4}{5}, \hspace{6pt} \bfw = \bfu - c\bfv = \begin{bmatrix} \frac{14}{5} \\[6pt] \frac{7}{5} \end{bmatrix}\text{.} \end{equation*}
We will verify the properties specified by the lemma. Since
\begin{equation*} \bfv \cdot \bfw = -\frac{14}{5} + \frac{14}{5} = 0\text{,} \end{equation*}
we have \(\ip{\bfv,\bfw}=0\text{.}\) Additionally, since
\begin{equation*} c\bfv + \bfw = \begin{bmatrix} -\frac{4}{5} \\[6pt] \frac{8}{5} \end{bmatrix} + \begin{bmatrix} \frac{14}{5} \\[6pt] \frac{7}{5} \end{bmatrix} = \begin{bmatrix} 2 \\ 3 \end{bmatrix}\text{,} \end{equation*}
we see that \(\bfu = c\bfv + \bfw\text{.}\)
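The decomposition in this example follows the formulas from the lemma directly; a sketch in Python with NumPy:

```python
import numpy as np

u = np.array([2.0, 3.0])
v = np.array([-1.0, 2.0])

# c = <u, v> / ||v||^2, and w = u - c v
c = np.dot(u, v) / np.dot(v, v)   # 4/5
w = u - c * v                     # (14/5, 7/5)

orthogonality = np.dot(v, w)      # should be 0
recombined = c * v + w            # should equal u
```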
There are two famous results which involve the norm in an inner product space, the Cauchy-Schwarz inequality and the triangle inequality. We present them without proof.
We end this section with one final example of an inner product space.

Example 7.1.29.

We consider the vector space \(\rr^2\) with a modified inner product:
\begin{equation*} \ip{\bfu,\bfv} = 2u_1v_1 + u_2v_2\text{.} \end{equation*}
The only change from the dot product in \(\rr^2\) is the coefficient 2 on the first term. It is not difficult to verify that this is an inner product.
Since an inner product provides a way to measure distance and length (as well as angles), it is instructive to consider how this inner product changes our experience of \(\rr^2\text{.}\) Just to take one example, if we think of the “unit circle” as the collection of all unit vectors in \(\rr^2\text{,}\) then using this inner product we no longer have a circle but an ellipse. The semi-axes of this ellipse would be \(\frac{1}{\sqrt{2}}\) horizontally and 1 vertically.
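We can confirm the claim about the unit vectors numerically: under this inner product, the vectors \((1/\sqrt{2}, 0)\) and \((0, 1)\) should both have norm 1. A sketch in Python with NumPy:

```python
import numpy as np

def ip(u, v):
    # The modified inner product on R^2: 2 u1 v1 + u2 v2
    return 2 * u[0] * v[0] + u[1] * v[1]

def norm(u):
    return np.sqrt(ip(u, u))

a = np.array([1 / np.sqrt(2), 0.0])   # horizontal semi-axis of the ellipse
b = np.array([0.0, 1.0])              # vertical semi-axis of the ellipse

norm_a = norm(a)   # both norms should be 1 (up to floating-point error)
norm_b = norm(b)
```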

Reading Questions 7.1.5 Reading Questions

1.

Consider Example 7.1.29 and the inner product on \(\rr^2\) defined there.
  1. If \(\bfu = \begin{bmatrix} 2 \\ 3 \end{bmatrix}\text{,}\) calculate \(\vnorm{\bfu}\text{.}\)
  2. Describe all of the vectors in \(\rr^2\) which are orthogonal to \(\bfu\) using this inner product. All of these vectors fall on a line through the origin; what is that line?

2.

Consider the following function on \(P_1\text{.}\) For polynomials \(p\) and \(q\text{,}\) define \(\ip{p,q}\) by
\begin{equation*} \ip{p,q} = p(0)q(0) - p(1)q(1)\text{.} \end{equation*}
Explain why this function is not an inner product on \(P_1\text{.}\) (You must show why one of the inner product axioms fails, and to do this you should use an example.)

Exercises 7.1.6 Exercises

1.

Consider the following inner product on \(P_2\text{.}\) For \(p, q \in P_2\text{,}\)
\begin{equation*} \ip{p,q} = p(-1)q(-1) + p(0)q(0) + p(1)q(1)\text{.} \end{equation*}
(You do not need to prove that this is an inner product.)
  1. Calculate \(\ip{p,q}\) where \(p = 3-t\) and \(q = 2+2t^2\text{.}\)
  2. Find a nonzero vector \(r \in P_2\) which is orthogonal to the vector \(p\) from part (a).
  3. Calculate \(\vnorm{p}\) and \(\vnorm{q}\) for \(p,q\) from part (a).

2.

Use Proposition 7.1.23 to prove that the following set of vectors in \(\cc^3\) is linearly independent: \(\{\bfv_1, \bfv_2, \bfv_3 \}\text{,}\) where
\begin{equation*} \bfv_1 = \begin{bmatrix} 1 \\ 1 \\ -1 \end{bmatrix}, \hspace{6pt} \bfv_2 = \begin{bmatrix} 1-i \\ 3+2i \\ 4+i \end{bmatrix}, \hspace{6pt} \bfv_3 = \begin{bmatrix} 8+38i \\ 5-25i \\ 13+13i \end{bmatrix}\text{.} \end{equation*}
Answer.
Calculation shows that the set \(\{\bfv_1, \bfv_2,\bfv_3\}\) is orthogonal, so it is linearly independent by Proposition 7.1.23.
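The calculation referred to in this answer can be carried out as follows (a Python/NumPy sketch; all three pairwise inner products should vanish):

```python
import numpy as np

v1 = np.array([1, 1, -1], dtype=complex)
v2 = np.array([1 - 1j, 3 + 2j, 4 + 1j])
v3 = np.array([8 + 38j, 5 - 25j, 13 + 13j])

def ip(u, v):
    # Standard inner product on C^3: sum_i u_i * conj(v_i)
    return np.sum(u * np.conj(v))

pairwise = [ip(v1, v2), ip(v1, v3), ip(v2, v3)]   # all should be 0
```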

3.

Consider the following inner product on \(\rr^2\text{:}\)
\begin{equation*} \ip{\bfu,\bfv} = u_1v_1 + 3u_2v_2\text{.} \end{equation*}
  1. Give an example of two vectors in \(\rr^2\) which are orthogonal with respect to the dot product but which are not orthogonal with respect to this inner product.
  2. Give an example of two vectors in \(\rr^2\) which are orthogonal with respect to this inner product but which are not orthogonal with respect to the dot product.
Answer.
  1. We let \(\bfu\) and \(\bfv\) be the following vectors:
    \begin{equation*} \bfu = \begin{bmatrix} 1 \\ 2 \end{bmatrix}, \hspace{6pt} \text{and} \hspace{6pt} \bfv = \begin{bmatrix} -2 \\ 1 \end{bmatrix}\text{.} \end{equation*}
    Then it is easy to see that \(\bfu \cdot \bfv = 0\text{,}\) but \(\ip{\bfu, \bfv} = 4\text{.}\)
  2. We let \(\bfu\) and \(\bfv\) be the following vectors:
    \begin{equation*} \bfu = \begin{bmatrix} 1 \\ 2 \end{bmatrix}, \hspace{6pt} \text{and} \hspace{6pt} \bfv = \begin{bmatrix} -6 \\ 1 \end{bmatrix}\text{.} \end{equation*}
    Then it is easy to see that \(\ip{\bfu, \bfv} = 0\text{,}\) but \(\bfu \cdot \bfv = -4\text{.}\)

4.

Let \(A\) be the following matrix over \(\rr\text{:}\)
\begin{equation*} A = \begin{bmatrix} 1 \amp 1 \\ 3 \amp 0 \end{bmatrix}\text{.} \end{equation*}
Define a function on \(\rr^2\) by
\begin{equation*} \ip{\bfu, \bfv} = (A\bfu) \cdot (A\bfv)\text{,} \end{equation*}
where the right side of the equals sign uses the standard dot product in \(\rr^2\text{.}\) (This function defines an inner product, but you do not need to prove this right now.)
  1. Let \(\bfu\) and \(\bfv\) be the following vectors:
    \begin{equation*} \bfu = \begin{bmatrix} 1 \\ -2 \end{bmatrix}, \hspace{6pt} \text{and} \hspace{6pt} \bfv = \begin{bmatrix} 2 \\ 4 \end{bmatrix}\text{.} \end{equation*}
    Calculate \(\ip{\bfu, \bfv}\) using this inner product.
  2. Calculate \(\vnorm{\bfu}\) and \(\vnorm{\bfv}\) for the vectors \(\bfu\) and \(\bfv\) given in part a.
  3. Find a vector \(\bfw \in \rr^2\) which is orthogonal to the vector \(\bfu\) (given in part a.) with respect to this inner product.

5.

Define the following function on \(M_2(\rr)\text{:}\)
\begin{equation*} \ip{A,B} = \tr(A^TB)\text{.} \end{equation*}
(This function defines an inner product, but you do not need to prove this right now.)
  1. Let \(A\) and \(B\) be the following matrices:
    \begin{equation*} A = \begin{bmatrix} 2 \amp 0 \\ 1 \amp -3 \end{bmatrix}, \hspace{6pt} \text{and} \hspace{6pt} B = \begin{bmatrix} -4 \amp 1 \\ 1 \amp 2 \end{bmatrix}\text{.} \end{equation*}
    Calculate \(\ip{A, B}\) using this inner product.
  2. Calculate \(\vnorm{A}\) and \(\vnorm{B}\) for the matrices \(A\) and \(B\) given in part a.
  3. Find a matrix \(C \in M_2(\rr)\) which is orthogonal to the matrix \(A\) (given in part a.) with respect to this inner product.

6.

Consider the following function defined on \(M_2(\rr)\text{:}\)
\begin{equation*} \ip{A,B} = \det(A) \cdot \det(B)\text{.} \end{equation*}
Show that this function is not an inner product.
Answer.
Let \(A = \begin{bmatrix} 1 \amp 1 \\ 2 \amp 2 \end{bmatrix}\text{.}\) It is fairly easy to see that \(\det(A) = 0\text{,}\) so we have
\begin{equation*} \ip{A, A} = 0 \cdot 0 = 0\text{.} \end{equation*}
However, since \(A\) is not the zero matrix (i.e., the zero vector for the vector space \(M_2(\rr)\)), the fourth axiom of the inner product does not hold for this function.

7.

Consider the following function defined on \(P_2\text{:}\)
\begin{equation*} \ip{p,q} = p(-1)q(-1) + p(2)q(2)\text{.} \end{equation*}
Show that this function is not an inner product.

Writing Exercises

8.
Consider the following function defined on \(\rr^2\text{:}\)
\begin{equation*} \ip{\bfu,\bfv} = u_1v_2 + u_2v_1\text{.} \end{equation*}
Prove or disprove: this function is an inner product.
Solution.
This function is not an inner product. Consider the vector \(\bfu = \begin{bmatrix} 1 \\ 0 \end{bmatrix}\text{.}\) We have
\begin{equation*} \ip{\bfu, \bfu} = (1)(0) + (0)(1) = 0\text{.} \end{equation*}
However, since \(\bfu\) is not the zero vector in \(\rr^2\text{,}\) the fourth axiom of the inner product does not hold.
12.
Prove that the inner product defined in Example 7.1.15 is an inner product.
Solution.
We will show that all four of the inner product axioms hold. For the first axiom, we note that everything here is real-valued, so we do not need to worry about any complex conjugates. Since the order of the functions within a definite integral can be switched, we have
\begin{align*} \ip{g, f} \amp = \int_0^1 g(x)f(x)\;dx\\ \amp = \int_0^1 f(x)g(x)\;dx = \ip{f,g}\text{.} \end{align*}
This shows that the first axiom holds.
For the second axiom, we let \(f, g, h \in C([0,1])\text{.}\) The definite integral is linear with respect to the sum of functions, so we have
\begin{align*} \ip{f+g,h} \amp = \int_0^1 (f(x)+g(x))h(x)\;dx\\ \amp = \int_0^1 (f(x)h(x) + g(x)h(x))\;dx\\ \amp = \int_0^1 f(x)h(x)\;dx + \int_0^1 g(x)h(x)\;dx\\ \amp = \ip{f,h} + \ip{g,h}\text{.} \end{align*}
This proves that the second axiom holds.
Let \(f,g \in C([0,1])\) and let \(c \in \rr\text{.}\) The definite integral is linear with respect to scalar multiplication by a real number, so we have
\begin{align*} \ip{cf,g} \amp = \int_0^1 (cf(x))g(x)\;dx\\ \amp = c \int_0^1 f(x)g(x)\;dx\\ \amp = c \ip{f,g}\text{.} \end{align*}
This proves that the third axiom holds.
Let \(f\in C([0,1])\text{.}\) We observe that \(f(x)^2\) is a function with values that are always non-negative (since each value of this function is a real number squared). Since the definite integral can be interpreted as calculating signed area between the graph of a function and the \(x\)-axis, we know that
\begin{equation*} \ip{f,f} = \int_0^1 f(x)^2\;dx \ge 0\text{.} \end{equation*}
Finally, using this same signed area interpretation of the definite integral, the only way a continuous non-negative function can produce a zero value for the definite integral is if the function is identically zero: if \(f(x_0)^2 > 0\) at some point \(x_0\text{,}\) then by continuity \(f(x)^2\) is positive on an interval around \(x_0\text{,}\) which forces the integral to be positive. This means that if \(\ip{f,f}=0\text{,}\) we must have \(f(x)^2 = 0\) and therefore \(f(x) = 0\) for all \(x\text{.}\) This proves that the fourth axiom holds.
13.
Suppose that \(V\) is a vector space, \(W\) is an inner product space, and that \(T \in L(V,W)\) is injective. For \(\bfv_1, \bfv_2 \in V\text{,}\) define \(\ip{\bfv_1,\bfv_2}_T\) by
\begin{equation*} \ip{\bfv_1,\bfv_2}_T = \ip{T(\bfv_1),T(\bfv_2)}\text{,} \end{equation*}
where the right-hand side is the inner product on \(W\text{.}\) Prove that this defines an inner product on \(V\text{.}\)