
Section 5.5 Coordinates

We have recently shown (Corollary 5.3.13) that all \(n\)-dimensional vector spaces over \(\ff\) are isomorphic to \(\ff^n\text{.}\) In this section we explore the vast implications of this isomorphism.

Subsection 5.5.1 Coordinates of Vectors

If \(V\) is a finite-dimensional vector space over \(\ff\text{,}\) then it has a basis \(\mcb\text{.}\) We have seen (Theorem 5.2.11) that each vector in \(V\) then has a unique representation as a linear combination of these basis vectors. In the definition that follows, we focus on the coefficients in these linear combinations.

Definition 5.5.1.

The coordinates of a vector \(\bfv \in V\) with respect to a basis \(\mcb = \{\bfv_1, \ldots, \bfv_n\}\) are the unique scalars \(c_1, \ldots, c_n\) such that
\begin{equation*} \bfv = c_1 \bfv_1 + \cdots + c_n\bfv_n\text{.} \end{equation*}
The coordinate vector of \(\bfv\) with respect to \(\mcb\) is the vector \([\bfv]_{\mcb} \in \ff^n\text{,}\)
\begin{equation*} [\bfv]_{\mcb} = \begin{bmatrix} c_1 \\ \vdots \\ c_n \end{bmatrix}\text{.} \end{equation*}
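Concretely, when \(V = \rr^n\text{,}\) finding the coordinates \(c_1, \ldots, c_n\) means solving the linear system whose coefficient matrix has the basis vectors as its columns. Here is a minimal Python sketch of that computation, using the data of Exercise 3 below:

```python
import numpy as np

# Basis vectors of B (from Exercise 3) stored as the columns of a matrix
B = np.array([[-2,  7,  2],
              [ 5,  4, -7],
              [ 3, -4,  2]])
v = np.array([16, 5, -16])

# [v]_B is the unique solution c of the system B c = v
c = np.linalg.solve(B, v)
print(c)   # [-2.  2. -1.]
```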

Note 5.5.2.

When the basis \(\mcb\) we are using is unambiguous, we may drop a bit of the cumbersome terminology contained in the phrase β€œcoordinate vector of \(\bfv\) with respect to \(\mcb\)” and simply refer to the β€œcoordinate vector of \(\bfv\text{.}\)”
This process of assigning to a vector \(\bfv \in V\) a vector \([\bfv]_{\mcb} \in \ff^n\) is sometimes called a coordinate mapping, and it defines a function \(V \to \ff^n\text{.}\) This function is actually an isomorphism of vector spaces.

Theorem 5.5.3.

Let \(V\) be an \(n\)-dimensional vector space over \(\ff\) with basis \(\mcb\text{.}\) Then the coordinate mapping \(C_{\mcb}: V \to \ff^n\) defined by \(C_{\mcb}(\bfv) = [\bfv]_{\mcb}\) is an isomorphism.

Proof.

The function \(C_{\mcb}\) is a linear transformation. (We ask the reader to verify this in the exercises.) We note that \(C_{\mcb}\) maps the basis vectors in \(\mcb\) to the standard basis in \(\ff^n\text{.}\) So, by Theorem 5.3.11, \(C_{\mcb}\) is an isomorphism.
The existence of coordinate vectors means that just about everything for finite-dimensional vector spaces can be accomplished with vectors and matrices over \(\ff\text{.}\) We explore this in the following examples.

Example 5.5.4.

Let \(\mcb = \{ 1, t, t^2 \}\) be the standard basis of the vector space \(P_2\text{.}\) If \(p_1\) and \(p_2\) are
\begin{equation*} p_1 = 2 - t + 4t^2 \hspace{6pt} \text{and} \hspace{6pt} p_2 = -3t^2 + 10\text{,} \end{equation*}
then the coordinate vectors of \(p_1\) and \(p_2\) are
\begin{equation*} [p_1]_{\mcb} = \begin{bmatrix} 2 \\ -1 \\ 4 \end{bmatrix} \hspace{6pt} \text{and} \hspace{6pt} [p_2]_{\mcb} = \begin{bmatrix} 10 \\ 0 \\ -3 \end{bmatrix}\text{.} \end{equation*}
Note that the order of the coordinates matters: the terms of \(p_2\) had to be reordered (in increasing powers of \(t\)) before its coefficients were recorded in the coordinate vector.
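This reordering is easy to automate when polynomials are stored symbolically. Here is a minimal sketch using sympy (the helper name coord_vector is ours, not the text's):

```python
import sympy as sp

t = sp.symbols('t')

def coord_vector(p, deg):
    """Coordinates of p with respect to the standard basis {1, t, ..., t^deg}."""
    poly = sp.Poly(p, t)
    # coeff_monomial returns 0 for absent terms, so gaps are handled automatically
    return [poly.coeff_monomial(t**k) for k in range(deg + 1)]

print(coord_vector(2 - t + 4*t**2, 2))   # [2, -1, 4]
print(coord_vector(-3*t**2 + 10, 2))     # [10, 0, -3]
```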

Example 5.5.5.

Within \(\ff_5^3\text{,}\) consider \(W = \spn\{\bfv_1, \bfv_2 \}\text{,}\) where
\begin{equation*} \bfv_1 = \begin{bmatrix} 2 \\ 3 \\ 1 \end{bmatrix} \hspace{6pt} \text{and} \hspace{6pt} \bfv_2 = \begin{bmatrix} 1 \\ 0 \\ 3 \end{bmatrix}\text{.} \end{equation*}
Since neither of these vectors is a scalar multiple of the other, \(\mcb = \{\bfv_1, \bfv_2 \}\) is a linearly independent set and therefore a basis for \(W\text{.}\) If we let \(\bfv_3\) be
\begin{equation*} \bfv_3 = \begin{bmatrix} 0 \\ 3 \\ 0 \end{bmatrix}\text{,} \end{equation*}
we can verify that \(\bfv_3 \in W\) by row-reducing the appropriate matrix:
\begin{equation*} \begin{bmatrix} 2 \amp 1 \amp 0 \\ 3 \amp 0 \amp 3 \\ 1 \amp 3 \amp 0 \end{bmatrix} \sim \begin{bmatrix} 1 \amp 0 \amp 1 \\ 0 \amp 1 \amp 3 \\ 0 \amp 0 \amp 0 \end{bmatrix} \text{.} \end{equation*}
Since there is no pivot in the final column, we see that \(\bfv_3 \in W\text{.}\) Further, we can write down the coordinate vector of \(\bfv_3\) with respect to \(\mcb\) by studying this row-reduced matrix. We see that
\begin{equation*} [\bfv_3]_{\mcb} = \begin{bmatrix} 1 \\ 3 \end{bmatrix}\text{.} \end{equation*}
It may seem strange for a vector in the three-dimensional space \(\ff_5^3\) to have a coordinate vector with only two entries, but this is due to the fact that \(W\) is two-dimensional. (It has a basis of only two vectors!) The coordinate mapping in this case says that \(W\) is isomorphic to \(\ff_5^2\text{,}\) and this is why the coordinate vector for any vector in \(W\) has only two entries.
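We can double-check this coordinate vector by computing the linear combination directly in \(\ff_5\text{:}\)
\begin{equation*} 1\bfv_1 + 3\bfv_2 = \begin{bmatrix} 2 \\ 3 \\ 1 \end{bmatrix} + \begin{bmatrix} 3 \\ 0 \\ 9 \end{bmatrix} = \begin{bmatrix} 5 \\ 3 \\ 10 \end{bmatrix} = \begin{bmatrix} 0 \\ 3 \\ 0 \end{bmatrix} = \bfv_3\text{,} \end{equation*}
since \(5 = 0\) and \(10 = 0\) in \(\ff_5\text{.}\)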
There are some consequences of Theorem 5.5.3 that we want to spell out explicitly because of their usefulness. The proof of the following proposition can be found as part of the proof of Theorem 5.3.11.

Proposition 5.5.6.

Let \(V\) be a vector space over \(\ff\) with basis \(\mcb = \{\bfv_1, \ldots, \bfv_n\}\text{.}\) Then a set \(\{\bfu_1, \ldots, \bfu_k\}\) of vectors in \(V\) is linearly independent if and only if \(\{[\bfu_1]_{\mcb}, \ldots, [\bfu_k]_{\mcb}\}\) is linearly independent in \(\ff^n\text{,}\) and \(\{\bfu_1, \ldots, \bfu_k\}\) spans \(V\) if and only if \(\{[\bfu_1]_{\mcb}, \ldots, [\bfu_k]_{\mcb}\}\) spans \(\ff^n\text{.}\)
Hopefully the reader can now see exactly how helpful the coordinate mapping isomorphism is. The following example should help to connect the dots.

Example 5.5.7.

Consider the set of vectors \(Y = \{p_1, p_2, p_3 \}\) in \(P_3\text{,}\) where
\begin{align*} p_1 \amp = 1-t-3t^2+2t^3\\ p_2 \amp = -5 +4t +2t^2 - t^3\\ p_3 \amp = 1 + 3t +4t^2 - 3t^3\text{.} \end{align*}
With respect to the standard basis \(\mcb\) of \(P_3\text{,}\) these are the coordinate vectors:
\begin{equation*} [p_1]_{\mcb} = \begin{bmatrix} 1 \\ -1 \\ -3 \\ 2 \end{bmatrix}, \hspace{6pt} [p_2]_{\mcb} = \begin{bmatrix} -5 \\ 4 \\ 2 \\ -1 \end{bmatrix}, \hspace{6pt} [p_3]_{\mcb} = \begin{bmatrix} 1 \\ 3 \\ 4 \\ -3 \end{bmatrix}\text{.} \end{equation*}
By row-reducing the matrix which has these coordinate vectors as its columns, we can see that the set of coordinate vectors \(\{[p_1]_{\mcb}, [p_2]_{\mcb}, [p_3]_{\mcb} \}\) is linearly independent in \(\rr^4\text{:}\)
\begin{equation*} \begin{bmatrix} 1 \amp -5 \amp 1 \\ -1 \amp 4 \amp 3 \\ -3 \amp 2 \amp 4 \\ 2 \amp -1 \amp -3 \end{bmatrix} \sim \begin{bmatrix} 1 \amp 0 \amp 0 \\ 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 1 \\ 0 \amp 0 \amp 0 \end{bmatrix}\text{.} \end{equation*}
Using Proposition 5.5.6, we conclude that the set \(Y\) is linearly independent in \(P_3\text{.}\)
For dimension reasons, we already knew that the set \(Y\) cannot span \(P_3\text{;}\) this row-reduced matrix confirms it. Since the matrix does not have a pivot in every row, the set of coordinate vectors does not span \(\rr^4\text{,}\) and this means that \(Y\) does not span \(P_3\text{.}\)
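As a quick computational check, here is a sketch using sympy (not part of the text's toolkit) that confirms both conclusions:

```python
from sympy import Matrix

# Columns are the coordinate vectors of p1, p2, p3 with respect to the
# standard basis of P_3
M = Matrix([[ 1, -5,  1],
            [-1,  4,  3],
            [-3,  2,  4],
            [ 2, -1, -3]])

rref, pivots = M.rref()
print(pivots)    # (0, 1, 2): a pivot in every column, so the columns are
                 # linearly independent
print(M.rank())  # 3, which is less than 4, so the columns do not span R^4
```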

Subsection 5.5.2 Coordinates and Linear Transformations

Back in Section 3.2, we showed how every linear transformation \(\ff^n \to \ff^m\) could be realized as multiplication by a matrix over \(\ff\text{.}\) We now bring that understanding into contact with coordinate vectors. While not every linear transformation between vector spaces is multiplication by a matrix, every linear transformation between finite-dimensional vector spaces can be represented as multiplication by a matrix when considering the relevant coordinate vectors.

Definition 5.5.8.

Let \(V\) and \(W\) be \(n\)- and \(m\)-dimensional vector spaces over \(\ff\text{,}\) respectively, and let \(T:V \to W\) be a linear transformation. Further, suppose that \(\mcb = \{ \bfv_1, \ldots, \bfv_n \}\) is a basis for \(V\) and \(\mcc = \{\bfw_1, \ldots, \bfw_m \}\) is a basis for \(W\text{.}\) If, for each \(j\text{,}\) \(1 \le j \le n\text{,}\) we have \(a_{1j}, \ldots, a_{mj}\) as the coordinates of \(T(\bfv_j)\) with respect to \(\mcc\text{,}\) then the matrix of \(T\) with respect to \(\mcb\) and \(\mcc\) is the matrix \(A = [a_{ij}]\text{.}\) (In other words, column \(j\) of this matrix is the coordinate vector \([T(\bfv_j)]_{\mcc}\text{.}\)) We denote this matrix as \([T]_{\mcb,\mcc}\text{.}\)
When \(V = W\) and \(\mcb = \mcc\text{,}\) then we use the notation \([T]_{\mcb}\) and refer to the matrix of \(T\) with respect to \(\mcb\).
Finally, when the basis/bases we are using are unambiguous, we may refer to \([T]_{\mcb}\) or \([T]_{\mcb,\mcc}\) as the coordinate matrix of \(T\text{.}\)
The point of this rather long (and cumbersome!) definition is that we can represent a linear transformation \(T\) as multiplication by a matrix. That’s what the following proposition shows.

Proposition 5.5.9.

Let \(V\) and \(W\) be finite-dimensional vector spaces over \(\ff\) with bases \(\mcb\) and \(\mcc\text{,}\) respectively, let \(T: V \to W\) be a linear transformation, and let \(A = [T]_{\mcb,\mcc}\text{.}\) Then, for every \(\bfv \in V\text{,}\)
\begin{equation*} [T(\bfv)]_{\mcc} = A[\bfv]_{\mcb}\text{.} \end{equation*}

Proof.

Let the bases \(\mcb\) and \(\mcc\) be \(\mcb = \{\bfv_1, \ldots, \bfv_n \}\) and \(\mcc = \{ \bfw_1, \ldots, \bfw_m \}\text{.}\) For \(\bfv \in V\text{,}\) suppose that
\begin{equation*} \bfv = c_1 \bfv_1 + \cdots + c_n\bfv_n\text{,} \end{equation*}
or, in other words,
\begin{equation*} [\bfv]_{\mcb} = \begin{bmatrix} c_1 \\ \vdots \\ c_n \end{bmatrix} \text{.} \end{equation*}
We also assume that, for each \(j\text{,}\) \(1 \le j \le n\text{,}\) the coordinates of \(T(\bfv_j)\) with respect to \(\mcc\) are \(a_{1j}, \ldots, a_{mj}\text{.}\)
Then, using the linearity of \(T\text{,}\) we have
\begin{equation*} T(\bfv) = \sum_{j=1}^n c_jT(\bfv_j) = \sum_{j=1}^n c_j \left( \sum_{i=1}^m a_{ij}\bfw_i \right) = \sum_{i=1}^m \left( \sum_{j=1}^n a_{ij}c_j \right) \bfw_i\text{.} \end{equation*}
This says that the \(i\)th coordinate of \(T(\bfv)\) with respect to \(\mcc\) is \(\sum a_{ij}c_j\text{,}\) which is the same as the \(i\)th entry of \(A [\bfv]_{\mcb}\text{.}\)

Note 5.5.10.

According to this proposition, here is the way to realize a linear transformation as a matrix. Form \([T]_{\mcb, \mcc}\) by calculating the coordinate vector \([T(\bfv_j)]_{\mcc}\) for every vector \(\bfv_j \in \mcb\text{.}\) Then, to use this matrix to determine what happens to a vector \(\bfv \in V\text{,}\) find the coordinate vector \([\bfv]_{\mcb}\text{.}\) After multiplying this vector by \([T]_{\mcb, \mcc}\text{,}\) the result will be the coordinate vector \([T(\bfv)]_{\mcc}\text{.}\) In order to recover the value of \(T(\bfv)\text{,}\) use the basis vectors in \(\mcc\) and this coordinate vector to find the correct linear combination.
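The first step of this recipe, building \([T]_{\mcb, \mcc}\) column by column, can be written generically. The sketch below is ours, not the text's; it assumes vectors are stored so that a coordinate map for \(\mcc\) is available, and it rebuilds the differentiation matrix that appears in Example 5.5.13 below:

```python
import numpy as np

def coordinate_matrix(T, basis_B, coords_C):
    # Column j of [T]_{B,C} is the coordinate vector [T(v_j)]_C
    return np.column_stack([coords_C(T(v)) for v in basis_B])

# Polynomials in P_n stored as coefficient arrays in increasing powers of t
def D(p):
    # Derivative: a_0 + a_1 t + ... + a_n t^n maps to a_1 + 2 a_2 t + ...
    return np.array([k * p[k] for k in range(1, len(p))])

basis_P3 = [np.eye(4, dtype=int)[j] for j in range(4)]  # 1, t, t^2, t^3
coords_P2 = lambda q: q  # for the standard basis, coefficients are coordinates

print(coordinate_matrix(D, basis_P3, coords_P2))
# [[0 1 0 0]
#  [0 0 2 0]
#  [0 0 0 3]]
```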

Note 5.5.11.

There are several linear transformations involved in Proposition 5.5.9: the transformation \(T\text{,}\) multiplication by the matrix \(A\text{,}\) and the two coordinate mappings. Because it is easy to confuse the roles of these transformations, mathematicians frequently employ a tool called a commutative diagram to keep their ideas and symbols organized.
We will produce here the diagram that summarizes the conclusion of Proposition 5.5.9.
A rectangular commutative diagram. The upper left corner contains \(V\) with a horizontal arrow to the upper right corner containing \(W\text{;}\) this arrow is labeled with \(T\text{.}\) An arrow labeled \(M_{\mathcal{B}}\) is drawn from the \(V\) in the upper left to \(\ff^n\) in the lower left corner. An arrow labeled \(M_{\mathcal{C}}\) is drawn from the \(W\) in the upper right to \(\ff^m\) in the lower right. An arrow labeled \(T_{A}\) is drawn from the lower left to the lower right.
Figure 5.5.12. A commutative diagram illustrating the relationship between a linear transformation and multiplication by a matrix.
We have labeled the coordinate mappings from \(V\) to \(\ff^n\) and from \(W\) to \(\ff^m\) by \(M_{\mcb}\) and \(M_{\mcc}\text{,}\) respectively. Further, we have labeled the linear transformation which is multiplication by \(A\text{,}\) from \(\ff^n\) to \(\ff^m\text{,}\) with \(T_A\text{.}\)
The proposition claims that this diagram commutes. In other words, for any vector \(\bfv \in V\text{,}\) going around the rectangle in either of the possible directions (right then down or down then right) will produce the same result. This would mean that \(M_{\mcc}(T(\bfv)) = T_A(M_{\mcb}(\bfv))\text{.}\) With different notation, this is exactly the conclusion of Proposition 5.5.9.

Example 5.5.13.

Let \(D: P_3 \to P_2\) be the differentiation function. (We proved that a very similar function was a linear transformation in Example 3.1.3.) Let \(\mcb\) be the standard basis for \(P_3\text{,}\) and let \(\mcc\) be the standard basis for \(P_2\text{.}\) Here we calculate the coordinate vectors for the derivative of each of the polynomials in \(\mcb\text{:}\)
\begin{equation*} [D(1)]_{\mcc} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}, \hspace{6pt} [D(t)]_{\mcc} = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \hspace{6pt} [D(t^2)]_{\mcc} = \begin{bmatrix} 0 \\ 2 \\ 0 \end{bmatrix}, \hspace{6pt} [D(t^3)]_{\mcc} = \begin{bmatrix} 0 \\ 0 \\ 3 \end{bmatrix}\text{.} \end{equation*}
These coordinate vectors form the columns of the matrix \([D]_{\mcb,\mcc}\text{.}\)
We will now use this matrix to carry out the action of \(D\text{.}\) Let’s take the derivative of \(p = -2 - 4t - t^2 - t^3\text{.}\) Since the coordinate vector of \(p\) with respect to \(\mcb\) is
\begin{equation*} [p]_{\mcb} = \begin{bmatrix} -2 \\ -4 \\ -1 \\ -1 \end{bmatrix}\text{,} \end{equation*}
we can multiply this vector by \([D]_{\mcb,\mcc}\) to get \([D(p)]_{\mcc}\text{:}\)
\begin{equation*} [D(p)]_{\mcc} = \begin{bmatrix} 0 \amp 1 \amp 0 \amp 0 \\ 0 \amp 0 \amp 2 \amp 0 \\ 0 \amp 0 \amp 0 \amp 3 \end{bmatrix} \begin{bmatrix} -2 \\ -4 \\ -1 \\ -1 \end{bmatrix} = \begin{bmatrix} -4 \\ -2 \\ -3 \end{bmatrix}\text{.} \end{equation*}
This tells us that the coordinates for \(D(p)\) with respect to \(\mcc\) are \(-4\text{,}\) \(-2\text{,}\) and \(-3\text{.}\) In other words,
\begin{equation*} D(p) = -4(1) - 2(t) - 3(t^2)\text{,} \end{equation*}
and this matches what we know to be the derivative of \(p\text{.}\)
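The same computation is immediate with numpy; a minimal sketch (variable names ours):

```python
import numpy as np

# [D]_{B,C} from above, and [p]_B for p = -2 - 4t - t^2 - t^3
D = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3]])
p = np.array([-2, -4, -1, -1])

print(D @ p)   # [-4 -2 -3], the coordinates of D(p) = -4 - 2t - 3t^2
```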

Example 5.5.14.

We consider a linear transformation \(T: \rr^3 \to P_2\) defined by
\begin{equation*} T \left( \begin{bmatrix} a \\ b \\ c \end{bmatrix} \right) = (a+2b) + (-3a+4b-c)t + (2a - 4c)t^2\text{.} \end{equation*}
We let \(\mcb\) be the standard basis for \(\rr^3\) and \(\mcc\) be the standard basis for \(P_2\text{.}\) We now write \([T(\bfe_i)]_{\mcc}\) for each \(\bfe_i \in \mcb\text{:}\)
\begin{equation*} [T(\bfe_1)]_{\mcc} = \begin{bmatrix} 1 \\ -3 \\ 2 \end{bmatrix}, \hspace{6pt} [T(\bfe_2)]_{\mcc} = \begin{bmatrix} 2 \\ 4 \\ 0 \end{bmatrix}, \hspace{6pt} [T(\bfe_3)]_{\mcc} = \begin{bmatrix} 0 \\ -1 \\ -4 \end{bmatrix}\text{.} \end{equation*}
These coordinate vectors make up the columns of the matrix \([T]_{\mcb,\mcc}\text{.}\) If we wanted to calculate \(T(\bfv)\text{,}\) where
\begin{equation*} \bfv = \begin{bmatrix} 1 \\ -3 \\ 4 \end{bmatrix}\text{,} \end{equation*}
we could do so using coordinate vectors and the matrix \([T]_{\mcb, \mcc}\text{.}\) Since the coordinate vector of \(\bfv\) with respect to \(\mcb\) is fairly obviousβ€”it is \(\bfv\) itselfβ€”we can proceed with this calculation:
\begin{equation*} [T(\bfv)]_{\mcc} = [T]_{\mcb,\mcc} [\bfv]_{\mcb} = \begin{bmatrix} 1 \amp 2 \amp 0 \\ -3 \amp 4 \amp -1 \\ 2 \amp 0 \amp -4 \end{bmatrix} \begin{bmatrix} 1 \\ -3 \\ 4 \end{bmatrix} = \begin{bmatrix} -5 \\ -19 \\ -14 \end{bmatrix}\text{.} \end{equation*}
This tells us that \(T(\bfv) = -5 - 19t - 14t^2\text{.}\)
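As a sanity check, the sketch below (names ours) computes the coordinates of \(T(\bfv)\) two ways: directly from the defining formula, and via the coordinate matrix.

```python
import numpy as np

T_mat = np.array([[ 1,  2,  0],
                  [-3,  4, -1],
                  [ 2,  0, -4]])   # [T]_{B,C}
a, b, c = 1, -3, 4                 # entries of v

direct = np.array([a + 2*b, -3*a + 4*b - c, 2*a - 4*c])
via_matrix = T_mat @ np.array([a, b, c])

print(direct)                               # [ -5 -19 -14]
print(np.array_equal(direct, via_matrix))   # True
```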
We will end this section with two results related to coordinate matrices. This first result says that the composition of linear transformations matches the multiplication of matrices in the expected way.

Proposition 5.5.15.

Let \(T: U \to V\) and \(S: V \to W\) be linear transformations between finite-dimensional vector spaces over \(\ff\text{,}\) and let \(\mcb\text{,}\) \(\mcc\text{,}\) and \(\mathcal{D}\) be bases for \(U\text{,}\) \(V\text{,}\) and \(W\text{,}\) respectively. Then
\begin{equation*} [ST]_{\mcb,\mathcal{D}} = [S]_{\mcc,\mathcal{D}} [T]_{\mcb,\mcc}\text{.} \end{equation*}

Proof.

We assume that \(\mcb = \{\bfu_1,\ldots,\bfu_p\}\text{,}\) \(\mcc = \{\bfv_1,\ldots,\bfv_n\}\text{,}\) and \({\mathcal{D} = \{\bfw_1,\ldots,\bfw_m\}}\text{.}\) Further, we let \(A = [S]_{\mcc,\mathcal{D}}\text{,}\) \(B = [T]_{\mcb,\mcc}\text{,}\) and \(C = [ST]_{\mcb, \mathcal{D}}\text{.}\) We want to show that \(AB = C\text{.}\)
The definition of \(A\) tells us that, for each \(k=1,\ldots,n\text{,}\) the \(k\)th column of \(A\) is the coordinate vector of \(S(\bfv_k)\) with respect to \(\mathcal{D}\text{.}\) So
\begin{equation*} S(\bfv_k) = \sum_{i=1}^m a_{ik}\bfw_i\text{.} \end{equation*}
Also, for each \(j=1,\ldots,p\text{,}\)
\begin{equation*} T(\bfu_j) = \sum_{k=1}^n b_{kj}\bfv_k\text{.} \end{equation*}
Using the linearity of these transformations, we have
\begin{align*} (ST)(\bfu_j) \amp = \sum_{k=1}^n b_{kj} S(\bfv_k)\\ \amp = \sum_{k=1}^n b_{kj} \left(\sum_{i=1}^m a_{ik}\bfw_i \right) \\ \amp = \sum_{i=1}^m \left( \sum_{k=1}^n a_{ik}b_{kj} \right) \bfw_i\text{.} \end{align*}
This means that
\begin{equation*} c_{ij} = \sum_{k=1}^n a_{ik}b_{kj} \end{equation*}
for all \(i\) and \(j\text{.}\) Since this is how the \((i,j)\)-entry of the matrix product \(AB\) is formed, this proves that \(C=AB\text{,}\) as desired.
This proof was taken from Meckes and Meckes, Linear Algebra (2018), pages 193-194.
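To see the proposition in action, take the derivative \(D: P_3 \to P_2\) from Example 5.5.13 and the multiplication-by-\(t\) map \(T: P_2 \to P_3\) from Exercise 10, with standard bases throughout. A sketch (matrices computed by hand, names ours):

```python
import numpy as np

# [D]: differentiation P_3 -> P_2; columns are [D(1)], [D(t)], [D(t^2)], [D(t^3)]
D = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3]])

# [T]: multiplication by t, P_2 -> P_3, since T(1) = t, T(t) = t^2, T(t^2) = t^3
T = np.array([[0, 0, 0],
              [1, 0, 0],
              [0, 1, 0],
              [0, 0, 1]])

# The proposition predicts that D @ T is the coordinate matrix of DT.
print(D @ T)
# [[1 0 0]
#  [0 2 0]
#  [0 0 3]]
# Check: (DT)(p) = (t p)' = p + t p', so DT(1) = 1, DT(t) = 2t, DT(t^2) = 3t^2,
# whose coordinate vectors are exactly these columns.
```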
This final result states that the invertibility of a linear transformation and the invertibility of its coordinate matrix are tied together in the predictable way.

Proposition 5.5.16.

Let \(T: V \to W\) be a linear transformation between \(n\)-dimensional vector spaces over \(\ff\text{,}\) and let \(\mcb\) and \(\mcc\) be bases for \(V\) and \(W\text{,}\) respectively. Then \(T\) is invertible if and only if \([T]_{\mcb,\mcc}\) is invertible, in which case \([T^{-1}]_{\mcc,\mcb} = \left([T]_{\mcb,\mcc}\right)^{-1}\text{.}\)

Reading Questions 5.5.3

1.

Let \(\bfv_1 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}\) and \(\bfv_2 = \begin{bmatrix} -1 \\ 1 \end{bmatrix}\text{.}\)
  1. The set \(\mcb = \{\bfv_1, \bfv_2 \}\) is a basis for \(\rr^2\text{.}\) Without doing any calculations, explain why this is so. (I’m not looking for the definition of a basis; I want an explanation of why this set satisfies that definition.)
  2. Let \(\bfw = \begin{bmatrix} 0 \\ 4 \end{bmatrix}\text{.}\) What is the coordinate vector of \(\bfw\) with respect to \(\mcb\text{?}\)

2.

Let \(T:P_2 \to P_2\) be the following function:
\begin{equation*} T(p) = p(-1) + p(0)t + p(1)t^2\text{.} \end{equation*}
Let \(\mcb\) be the standard basis for \(P_2\text{.}\)
  1. Find the coordinate matrix \([T]_{\mcb}\) for \(T\text{.}\)
  2. Use this coordinate matrix to calculate \(T(q)\text{,}\) if
    \begin{equation*} q = -3 -5t + 3t^2\text{.} \end{equation*}

Exercises 5.5.4

1.

For the given basis \(\mcb\) of \(\rr^2\) and the given coordinate vector \([\bfv]_{\mcb}\text{,}\) find \(\bfv\text{.}\)
  1. \([\bfv]_{\mcb} = \begin{bmatrix} -1 \\ -2 \end{bmatrix}\text{,}\) \(\mcb = \left\{ \begin{bmatrix} 2 \\ 5 \end{bmatrix}, \begin{bmatrix} -3 \\ 1 \end{bmatrix} \right\}\)
  2. \([\bfv]_{\mcb} = \begin{bmatrix} 3 \\ -4 \end{bmatrix}\text{,}\) \(\mcb = \left\{ \begin{bmatrix} -1 \\ 2 \end{bmatrix}, \begin{bmatrix} 4 \\ 2 \end{bmatrix} \right\}\)
Answer.
  1. \(\displaystyle \bfv = \begin{bmatrix} 4 \\ -7 \end{bmatrix}\)
  2. \(\displaystyle \bfv = \begin{bmatrix} -19 \\ -2 \end{bmatrix}\)

2.

For the basis \(\mcb = \{ p_1, p_2, p_3 \}\) of \(P_2\) and the coordinate vector \([p]_{\mcb}\text{,}\) find \(p\) if
\begin{equation*} p_1 = 2 - 4t^2, \hspace{6pt} p_2 = -1 - t, \hspace{6pt} p_3 = 3t + 2t^2 \end{equation*}
and
\begin{equation*} [p]_{\mcb} = \begin{bmatrix} -2 \\ 0 \\ 5 \end{bmatrix}\text{.} \end{equation*}

3.

Find the coordinate vectors \([\bfv]_{\mcb}\) for each of the following vectors \(\bfv\) with respect to the basis \(\mcb = \{ \bfv_1, \bfv_2, \bfv_3 \}\) of \(\rr^3\text{,}\) if
\begin{equation*} \bfv_1 = \begin{bmatrix} -2 \\ 5 \\ 3 \end{bmatrix}, \hspace{6pt} \bfv_2 = \begin{bmatrix} 7 \\ 4 \\ -4 \end{bmatrix}, \hspace{6pt} \bfv_3 = \begin{bmatrix} 2 \\ -7 \\ 2 \end{bmatrix}\text{.} \end{equation*}
  1. \(\displaystyle \bfv = \begin{bmatrix} 16 \\ 5 \\ -16 \end{bmatrix} \)
  2. \(\displaystyle \bfv = \begin{bmatrix} 6 \\ -23 \\ 11 \end{bmatrix} \)
Answer.
  1. \(\displaystyle [\bfv]_{\mcb} = \begin{bmatrix} -2 \\ 2 \\ -1 \end{bmatrix}\)
  2. \(\displaystyle [\bfv]_{\mcb} = \begin{bmatrix} 1 \\ 0 \\ 4 \end{bmatrix}\)

4.

Find the coordinate vectors \([p]_{\mcb}\) for each of the following polynomials \(p\) with respect to the basis \(\mcb = \{ p_1, p_2, p_3 \}\) of \(P_2\text{,}\) if
\begin{equation*} p_1 = 8 + 4t - 4t^2, \hspace{6pt} p_2 = 5 + 8t + 3t^2, \hspace{6pt} p_3 = -6 - 2t - 5t^2\text{.} \end{equation*}
  1. \(\displaystyle p = -2 + 2t - 23t^2\)
  2. \(\displaystyle p = 23 + 28t + 5t^2\)

5.

Use coordinate vectors to test the linear independence of the following sets of polynomials in \(P_3\text{.}\)
  1. \(\{p_1, p_2, p_3 \}\) if
    \begin{align*} p_1 \amp = -6 + 7t + 6t^2 + 3t^3\\ p_2 \amp = 2t - 4t^2 + 7t^3\\ p_3 \amp = 2 + 6t - t^2 - 5t^3 \end{align*}
  2. \(\{p_1, p_2, p_3 \}\) if
    \begin{align*} p_1 \amp = 6 + 7t -t^2 - 2t^3\\ p_2 \amp = -5 - 7t - 6t^2 + 8t^3\\ p_3 \amp = 7 + 7t - 8t^2 + 4t^3 \end{align*}

6.

Use coordinate vectors to test whether the following sets of vectors span \(P_2\text{.}\)
  1. \(\{p_1, p_2, p_3, p_4 \}\) if
    \begin{align*} p_1 \amp = -4 + t + t^2\\ p_2 \amp = 3 + 5t + t^2\\ p_3 \amp = -2 -4t + 2t^2\\ p_4 \amp = 2 - 4t - t^2 \end{align*}
  2. \(\{p_1, p_2, p_3, p_4 \}\) if
    \begin{align*} p_1 \amp = 4 + 6t + 5t^2\\ p_2 \amp = -3 t^2\\ p_3 \amp = 4 + 6t - 4t^2\\ p_4 \amp = 8 +12t + t^2 \end{align*}
Answer.
  1. This set of vectors spans \(P_2\text{.}\)
  2. This set of vectors does not span \(P_2\text{.}\)

7.

Let \(T: P_2 \to \rr^2\) be the linear transformation
\begin{equation*} T(p) = \begin{bmatrix} p(0) + p(1) \\ p(1) - p(2) \end{bmatrix} \text{.} \end{equation*}
Let \(\mcb\) be the standard basis for \(P_2\) and let \(\mce\) be the standard basis for \(\rr^2\text{.}\)
  1. Find the coordinate matrix \([T]_{\mcb, \mce}\text{.}\)
  2. Use this coordinate matrix to calculate \(T(-10 + 3t^2)\text{.}\)
Answer.
  1. \(\displaystyle [T]_{\mcb,\mce} = \begin{bmatrix} 2 \amp 1 \amp 1 \\ 0 \amp -1 \amp -3 \end{bmatrix}\)
  2. \(\displaystyle T(-10+3t^2) = [T]_{\mcb,\mce} [p]_{\mcb} = \begin{bmatrix} -17 \\ -9 \end{bmatrix}\)

8.

Let \(T: \rr^3 \to P_2\) be the linear transformation
\begin{equation*} T \left( \begin{bmatrix} a \\ b \\ c \end{bmatrix} \right) = (2a-b) + (b-3c)t + (a-b+c)t^2\text{.} \end{equation*}
Let \(\mce\) be the standard basis for \(\rr^3\) and let \(\mcb\) be the standard basis for \(P_2\text{.}\)
  1. Find the coordinate matrix \([T]_{\mce, \mcb}\text{.}\)
  2. Use this coordinate matrix to calculate \(T(\bfv)\) for
    \begin{equation*} \bfv = \begin{bmatrix} 2 \\ -2 \\ 3 \end{bmatrix} \text{.} \end{equation*}

9.

Let \(T: P_2 \to P_2\) be the linear transformation
\begin{equation*} T(p) = p' + p(1)t^2\text{.} \end{equation*}
Let \(\mcb\) be the standard basis for \(P_2\text{.}\)
  1. Choose a basis \(\mcc\) for \(P_2\) which is not the standard basis. Prove that your set of polynomials is a basis.
  2. Find the coordinate matrix \([T]_{\mcc, \mcb}\text{.}\)
  3. Use this coordinate matrix to calculate \(T(2 + t - 4t^2)\text{.}\)

10.

Let \(D:P_3 \to P_2\) be the derivative and let \(T:P_2 \to P_3\) be the linear transformation which is multiplication by \(t\text{.}\) Let \(\mcb\) be the standard basis for \(P_2\) and let \(\mcc\) be the standard basis for \(P_3\text{.}\)
  1. Find the coordinate matrix \([T]_{\mcb, \mcc}\text{.}\)
  2. Find the coordinate matrix \([DT]_{\mcb}\text{.}\)
  3. Find the coordinate matrix \([TD]_{\mcc}\text{.}\)

11.

Consider the plane \(P\) in \(\rr^3\) defined by \(x-2y+3z = 0\text{.}\)
  1. Find a basis for \(P\text{.}\)
  2. Determine whether each of the following vectors is in \(P\text{,}\) and for each one that is, find its coordinate vector in terms of the basis you gave in part a.
    1. \(\displaystyle \bfv_1 = (1,-1,-1)\)
    2. \(\displaystyle \bfv_2 = (2,3,1)\)
    3. \(\displaystyle \bfv_3 = (5,-2,-3)\)

Writing Exercises

13.
Without using Theorem 5.3.11, prove that the coordinate mapping in Theorem 5.5.3 is injective.
Solution.
Let \(V\) be an \(n\)-dimensional vector space over \(\ff\text{,}\) and let
\begin{equation*} \mcb = \{\bfv_1,\ldots,\bfv_n\} \end{equation*}
be a basis for \(V\text{.}\) Let \(C_{\mcb}:V \to \ff^n\) be the coordinate mapping. We will prove that \(C_{\mcb}\) is injective by showing that it has a trivial kernel.
Suppose that \(\bfv \in \ker(C_{\mcb})\text{.}\) This means that \(C_{\mcb}(\bfv) = \bfo \in \ff^n\text{,}\) so \([\bfv]_{\mcb} = \bfo\text{.}\) Since this is the coordinate vector of \(\bfv\text{,}\) this tells us that
\begin{equation*} \bfv = 0 \bfv_1 + \cdots + 0\bfv_n\text{.} \end{equation*}
Thus \(\bfv = \bfo \in V\text{,}\) and therefore \(\ker(C_{\mcb}) = \{\bfo\}\text{.}\) This proves that the coordinate mapping is injective.
15.
Let \(T:V \to W\) be a linear transformation between finite-dimensional vector spaces, and let \(\mcb\) and \(\mcc\) be bases for \(V\) and \(W\text{,}\) respectively. Prove that \(\rank(T)\) (the rank of \(T\) as a linear transformation) is the same as \(\rank([T]_{\mcb,\mcc})\) (the rank of the coordinate matrix of \(T\)).