Section 6.1 Eigenvalues and Eigenvectors

For linear transformations \(T:V \to W\text{,}\) there isn’t often a connection between \(\bfv \in V\) and \(T(\bfv) \in W\) that is easy to describe. These vectors, after all, live in different vector spaces, so they need not have any obvious relationship to each other. When \(W=V\text{,}\) we sometimes have a different story to tell (for some vectors).
Suppose that \(V\) is a vector space with \(U\) as a subspace. If \(T:V \to V\) is a linear transformation, then we can ask whether or not \(T(U) \subseteq U\text{.}\) If \(T(U) \subseteq U\text{,}\) then \(U\) is what is known as an invariant subspace under \(T\text{.}\) Though \(T\) may have all sorts of actions on \(V\) outside of \(U\text{,}\) when \(T\) is applied to vectors in \(U\) they are sent to other vectors in \(U\text{.}\)
This section is concerned with the simplest version of an invariant subspace. We will study (nonzero) vectors \(\bfv \in V\) for which \({T(\bfv) \in \spn\{\bfv\}}\text{;}\) for such a vector, \(\spn\{\bfv\}\) is a one-dimensional invariant subspace of \(V\) under \(T\text{.}\)

Subsection 6.1.1 Defining Eigenvalues and Eigenvectors

We first define the sorts of vectors we alluded to in the previous paragraphs.

Definition 6.1.1.

Let \(V\) be a vector space over \(\ff\text{,}\) and let \(T \in L(V)\text{.}\) A nonzero vector \(\bfv \in V\) is an eigenvector for \(T\) if \(T(\bfv) = \lambda \bfv\) for some \(\lambda \in \ff\text{.}\) A scalar \(\lambda\) is called an eigenvalue of \(T\) if there is a nontrivial solution to the equation \(T(\bfx) = \lambda \bfx\text{.}\) Such a solution is called an eigenvector for \(T\) corresponding to \(\lambda\).
If \(A \in M_n(\ff)\text{,}\) the eigenvectors and eigenvalues of \(A\) are the eigenvectors and eigenvalues of the transformation \(T \in L(\ff^n)\) defined by \(T(\bfx) = A\bfx\text{.}\)
Informally, eigenvectors for \(T\) are nonzero vectors on which \(T\) acts by scalar multiplication. The next example shows that even when \(T\) has eigenvectors, not every vector in \(V\) enjoys this special property.

Example 6.1.2.

When we are given a matrix \(A\) and a vector \(\bfv\text{,}\) it is easy to determine whether or not \(\bfv\) is an eigenvector for \(A\text{.}\) Consider the following:
\begin{equation*} A = \begin{bmatrix} 3 \amp 0 \\ 7 \amp -1 \end{bmatrix}, \hspace{6pt} \bfu = \begin{bmatrix} 4 \\ 7 \end{bmatrix}, \hspace{6pt} \bfv = \begin{bmatrix} -2 \\ 1 \end{bmatrix}\text{.} \end{equation*}
We take the product \(A\bfu\text{,}\)
\begin{equation*} \begin{bmatrix} 3 \amp 0 \\ 7 \amp -1 \end{bmatrix} \begin{bmatrix} 4 \\ 7 \end{bmatrix} = \begin{bmatrix} 12 \\ 21 \end{bmatrix}\text{.} \end{equation*}
Since \(A\bfu = 3 \bfu\text{,}\) \(\bfu\) is an eigenvector for \(A\) with eigenvalue \(3\text{.}\) Also, since
\begin{equation*} A \bfv = \begin{bmatrix} 3 \amp 0 \\ 7 \amp -1 \end{bmatrix} \begin{bmatrix} -2 \\ 1 \end{bmatrix} = \begin{bmatrix} -6 \\ -15 \end{bmatrix}\text{,} \end{equation*}
we can see that \(\bfv\) is not an eigenvector for \(A\text{,}\) because \(A\bfv\) is not a scalar multiple of \(\bfv\text{.}\)
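As a quick numerical companion to this example, here is a short NumPy sketch (not part of the text's method; the helper is_eigenvector is a name we introduce here) that tests whether \(A\bfx\) is a scalar multiple of \(\bfx\text{:}\)

import numpy as np

A = np.array([[3.0, 0.0], [7.0, -1.0]])
u = np.array([4.0, 7.0])
v = np.array([-2.0, 1.0])

def is_eigenvector(A, x, tol=1e-10):
    # x is assumed nonzero; compare A @ x against the only candidate multiple of x
    Ax = A @ x
    i = np.argmax(np.abs(x))      # index of a nonzero component of x
    lam = Ax[i] / x[i]            # the only possible eigenvalue for x
    return bool(np.allclose(Ax, lam * x, atol=tol)), lam

print(is_eigenvector(A, u))   # (True, 3.0): u is an eigenvector with eigenvalue 3
print(is_eigenvector(A, v))   # (False, 3.0): Av = (-6, -15) is not parallel to v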
When \(\bfv\) is an eigenvector of \(T\text{,}\) applying \(T\) may change the length of \(\bfv\text{,}\) but it will not change the direction of \(\bfv\text{.}\) (For this to be accurate, we must count “pointing in the exact opposite direction” as being the same direction.) This is a simplification, of course, because not every vector space has a neat geometric interpretation.

Example 6.1.3.

Let \(T:P_2 \to P_2\) be the following linear transformation:
\begin{equation*} T(a + bt + ct^2) = (4a-b+6c) + (2a+b+6c)t + (2a-b+8c)t^2\text{.} \end{equation*}
If \(p = 1 + t + t^2\text{,}\) it is not difficult to check that
\begin{equation*} T(p) = 9 + 9t + 9t^2 = 9p\text{.} \end{equation*}
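Indeed, substituting \(a = b = c = 1\) into the formula for \(T\) gives
\begin{equation*} T(1 + t + t^2) = (4 - 1 + 6) + (2 + 1 + 6)t + (2 - 1 + 8)t^2 = 9 + 9t + 9t^2\text{.} \end{equation*}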
Therefore, \(p\) is an eigenvector for \(T\) with eigenvalue \(9\text{.}\)

Example 6.1.4.

Let \(T:\rr^2 \to \rr^2\) be the linear transformation which is counterclockwise rotation about the origin by an angle of \(\theta\text{.}\) We can see that \(T\) will have an eigenvector if and only if \(\theta\) is an integer multiple of \(\pi\) radians. If \(\theta\) is an even integer multiple of \(\pi\text{,}\) then every nonzero vector in \(\rr^2\) is an eigenvector for \(T\) with eigenvalue \(1\text{,}\) and if \(\theta\) is an odd integer multiple of \(\pi\text{,}\) then every nonzero vector in \(\rr^2\) is an eigenvector for \(T\) with eigenvalue \(-1\text{.}\)
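For concreteness (the following matrix is the standard one for rotation, though it does not appear elsewhere in this section), \(T\) is multiplication by
\begin{equation*} \begin{bmatrix} \cos\theta \amp -\sin\theta \\ \sin\theta \amp \cos\theta \end{bmatrix}\text{.} \end{equation*}
When \(\theta = \pi/2\text{,}\) for example, this matrix sends \(\begin{bmatrix} 1 \\ 0 \end{bmatrix}\) to \(\begin{bmatrix} 0 \\ 1 \end{bmatrix}\text{,}\) which is not a scalar multiple of \(\begin{bmatrix} 1 \\ 0 \end{bmatrix}\text{.}\)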
We take a slightly different approach in our next example. Instead of verifying that a vector is an eigenvector, we provide the eigenvalue and then search for the eigenvector(s).

Example 6.1.5.

We consider the matrix \(A\) from Example 6.1.2. Let’s show that \(-1\) is an eigenvalue of \(A\) and find the corresponding eigenvectors.
We know that \(-1\) is an eigenvalue of \(A\) if the equation \(A\bfx = -\bfx\) has a nontrivial solution for some \(\bfx \in \rr^2\text{.}\) This is equivalent to saying that the equation \(A\bfx + \bfx = \bfo\) has a nontrivial solution. We can also view \(\bfx\) as \(I\bfx\text{,}\) so if \(-1\) is an eigenvalue of \(A\text{,}\) there is a nonzero vector \(\bfx\) which satisfies
\begin{equation} (A + I)\bfx = \bfo\text{.}\tag{6.1} \end{equation}
Viewed from the correct angle, we have reduced this problem to finding the null space of a matrix.
We will calculate \(A + I\text{:}\)
\begin{equation*} A + I = \begin{bmatrix} 3 \amp 0 \\ 7 \amp -1 \end{bmatrix} + \begin{bmatrix} 1 \amp 0 \\ 0 \amp 1 \end{bmatrix} = \begin{bmatrix} 4 \amp 0 \\ 7 \amp 0 \end{bmatrix}\text{.} \end{equation*}
We can see that the columns of \((A+I)\) are linearly dependent, so we know that (6.1) has nontrivial solutions. This proves that \(-1\) is an eigenvalue of \(A\text{.}\)
In order to find the eigenvectors of \(A\) that correspond to \(\lambda = -1\text{,}\) we describe the null space of the appropriate matrix. We row-reduce \((A + I)\text{:}\)
\begin{equation*} \begin{bmatrix} 4 \amp 0 \\ 7 \amp 0 \end{bmatrix} \sim \begin{bmatrix} 1 \amp 0 \\ 0 \amp 0 \end{bmatrix}\text{.} \end{equation*}
This shows that every eigenvector of \(A\) corresponding to \(\lambda = -1\) has the form \(x_2 \begin{bmatrix} 0 \\ 1 \end{bmatrix}\) with \(x_2 \neq 0\text{.}\) The vigilant reader can check that, for example, \(A \begin{bmatrix} 0 \\ 3 \end{bmatrix} = \begin{bmatrix} 0 \\ -3 \end{bmatrix}\text{.}\)
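Readers who want to verify this computationally can use SciPy (a sketch, assuming NumPy and SciPy are available; the scaling and sign of the basis vector may differ):

import numpy as np
from scipy.linalg import null_space

A = np.array([[3.0, 0.0], [7.0, -1.0]])
N = null_space(A + np.eye(2))   # orthonormal basis for the null space of A + I
print(N)                        # one column, proportional to (0, 1)

x = 3 * N[:, 0]                 # any nonzero multiple is an eigenvector
print(A @ x)                    # equals -x, confirming the eigenvalue -1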
The process we undertook in the previous example shows that eigenvectors corresponding to a specific eigenvalue almost never come alone: any nonzero scalar multiple of such an eigenvector is again an eigenvector. In fact, once the zero vector is included, the collection of these vectors forms an entire subspace.

Theorem 6.1.6.

Let \(V\) be a vector space over \(\ff\text{,}\) let \(T \in L(V)\text{,}\) and let \(\lambda \in \ff\) be an eigenvalue of \(T\text{.}\) Then the set consisting of the zero vector together with all eigenvectors of \(T\) corresponding to \(\lambda\) is a subspace of \(V\text{.}\)

Proof.

We note that a nonzero vector \(\bfv \in V\) is an eigenvector for \(T\) corresponding to \(\lambda\) if and only if \(T(\bfv) = \lambda \bfv\text{.}\) The vector \(\bfv\) satisfies this equation if and only if \(T(\bfv) - \lambda I(\bfv) = \bfo\text{,}\) which is true exactly when \((T - \lambda I) \bfv = \bfo\text{.}\) This shows that a nonzero \(\bfv\) is an eigenvector for \(T\) corresponding to \(\lambda\) if and only if \(\bfv \in \kerr(T- \lambda I)\text{.}\)
Since we already know (Theorem 3.4.2) that the kernel of a linear transformation is a subspace, this completes the proof of this theorem.

Note 6.1.7.

The awkwardness in the statement of this theorem regarding the zero vector is only present because the zero vector (by definition) cannot be an eigenvector.
This previous theorem justifies the following definition.

Definition 6.1.8.

Let \(V\) be a vector space and let \(T \in L(V)\text{.}\) If \(\lambda \in \ff\) is an eigenvalue of \(T\text{,}\) then the eigenspace of \(T\) corresponding to \(\lambda\) is the subspace of \(V\) defined by
\begin{equation*} \mathrm{eig}_{\lambda}(T) = \{ \bfv \in V \mid T(\bfv) = \lambda \bfv \}\text{.} \end{equation*}
We may refer to the eigenspace corresponding to \(\lambda\) as the \(\lambda\)-eigenspace.
In the following example, we will calculate the eigenspace corresponding to an eigenvalue.

Example 6.1.9.

We consider the following matrix \(A\text{:}\)
\begin{equation*} A = \begin{bmatrix} 4.5 \amp -2.5 \amp -2.5 \\ 2.5 \amp -0.5 \amp -2.5 \\ 5 \amp -5 \amp -3 \end{bmatrix}\text{.} \end{equation*}
Let \(T \in L(\rr^3)\) be multiplication by \(A\text{.}\) If we know that \(\lambda = 2\) is an eigenvalue for \(A\text{,}\) we can calculate a basis for \(\mathrm{eig}_2(T)\text{.}\)
We need to form the matrix \(A - 2I\) and then find the RREF:
\begin{equation*} A-2I = \begin{bmatrix} 2.5 \amp -2.5 \amp -2.5 \\ 2.5 \amp -2.5 \amp -2.5 \\ 5 \amp -5 \amp -5 \end{bmatrix} \sim \begin{bmatrix} 1 \amp -1 \amp -1 \\ 0 \amp 0 \amp 0 \\ 0 \amp 0 \amp 0 \end{bmatrix}\text{.} \end{equation*}
The presence of free variables here confirms that \(2\) is an eigenvalue of \(A\text{.}\) If \(\bfx \in \nll(A-2I)\text{,}\) then
\begin{equation*} \bfx = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} x_2 + x_3 \\ x_2 \\ x_3 \end{bmatrix} = x_2 \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} + x_3 \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}\text{.} \end{equation*}
From this calculation we can see that \(\mathrm{eig}_2(T)\) is two-dimensional, with basis \(\mcb = \{\bfv_1, \bfv_2 \}\text{,}\) where
\begin{equation*} \bfv_1 = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \hspace{6pt} \text{and} \hspace{6pt} \bfv_2 = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}\text{.} \end{equation*}
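A short NumPy check (an illustration, not part of the derivation above) confirms that both basis vectors behave as claimed:

import numpy as np

A = np.array([[4.5, -2.5, -2.5],
              [2.5, -0.5, -2.5],
              [5.0, -5.0, -3.0]])
v1 = np.array([1.0, 1.0, 0.0])
v2 = np.array([1.0, 0.0, 1.0])

for v in (v1, v2):
    print(A @ v, 2 * v)   # the two vectors in each printed pair agree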

Subsection 6.1.2 Results About Eigenvalues and Eigenvectors

In general, the eigenvalues of a linear transformation are not easy to spot. There are some situations, however, in which we can identify eigenvalues at a glance.

Theorem 6.1.10.

Let \(A \in M_n(\ff)\) be a triangular matrix. Then the eigenvalues of \(A\) are the entries on the main diagonal of \(A\text{.}\)

Proof.

Suppose \(A \in M_n(\ff)\) is an upper triangular matrix. Then \(A-\lambda I\) is
\begin{equation*} A - \lambda I = \begin{bmatrix} a_{11} - \lambda \amp a_{12} \amp a_{13} \amp \cdots \amp a_{1n} \\ 0 \amp a_{22} - \lambda \amp a_{23} \amp \cdots \amp a_{2n} \\ 0 \amp 0 \amp a_{33} - \lambda \amp \cdots \amp a_{3n} \\ \vdots \amp \vdots \amp \vdots \amp \ddots \amp \vdots \\ 0 \amp 0 \amp 0 \amp \cdots \amp a_{nn} - \lambda \end{bmatrix} \text{.} \end{equation*}
We can see that \(\lambda\) is an eigenvalue for \(A\) if and only if \(\nll(A-\lambda I) \neq \{ \bfo \}\text{,}\) and this happens if and only if \(A - \lambda I\) has at least one non-pivot column. Because \(A\) (and therefore \(A-\lambda I\)) is upper triangular, \(A - \lambda I\) has at least one non-pivot column if and only if at least one of the entries on the main diagonal of \(A - \lambda I\) is zero. This happens if and only if \(\lambda\) equals one of the entries on the main diagonal of \(A\text{.}\)
We have saved the case of a lower triangular matrix for the exercises.
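Over \(\rr\text{,}\) the theorem is easy to illustrate numerically (a sketch, not part of the proof):

import numpy as np

A = np.triu(np.random.rand(4, 4))   # a random upper triangular matrix
print(np.sort(np.linalg.eigvals(A)))
print(np.sort(np.diag(A)))          # the same numbers: eigenvalues = diagonal entries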

Example 6.1.11.

If \(A \in M_3(\ff_5)\) is given by
\begin{equation*} A = \begin{bmatrix} 4 \amp 0 \amp 3 \\ 0 \amp 2 \amp 3 \\ 0 \amp 0 \amp 1 \end{bmatrix}\text{,} \end{equation*}
the eigenvalues of \(A\) are \(4\text{,}\) \(2\text{,}\) and \(1\text{.}\) The reader might use this opportunity to find the associated eigenvectors/eigenspaces!
Of all possible scalars \(\lambda \in \ff\text{,}\) it is especially noteworthy when \(\lambda = 0\) is an eigenvalue for \(T \in L(V)\text{.}\) In this situation, there is a nonzero vector \(\bfx\) such that \(T(\bfx) = \bfo\text{.}\) In other words, \(\bfx\) is a nonzero vector in \(\kerr(T)\text{.}\)
This short argument establishes a connection between the injectivity of \(T\) and the presence of \(\lambda = 0\) as an eigenvalue for \(T\text{.}\) Because of previous results, we have the following theorem. (We leave the proofs of both statements in this theorem as exercises so the reader might get practice connecting the logic of results in various chapters of the book.)

Theorem 6.1.12.

Let \(V\) be a vector space and let \(T \in L(V)\text{.}\) Then \(T\) is injective if and only if \(0\) is not an eigenvalue of \(T\text{.}\) Similarly, a matrix \(A \in M_n(\ff)\) is invertible if and only if \(0\) is not an eigenvalue of \(A\text{.}\)
The final result in this section will be useful later, but we have all of the tools we need to prove it now.

Theorem 6.1.13.

Let \(V\) be a vector space over \(\ff\) and let \(T \in L(V)\text{.}\) If \(\bfv_1, \ldots, \bfv_k\) are eigenvectors of \(T\) corresponding to distinct eigenvalues \(\lambda_1, \ldots, \lambda_k\text{,}\) then the set \(\{ \bfv_1, \ldots, \bfv_k \}\) is linearly independent.

Proof.

We will argue by contradiction. Suppose that the set \(\{ \bfv_1, \ldots, \bfv_k \}\) is linearly dependent. Since \(\bfv_1 \neq \bfo\) (because eigenvectors cannot be \(\bfo\)), we can apply the Linear Dependence Lemma (Theorem 5.1.19). Therefore, there is some \(j \ge 2\) such that \(\bfv_j \in \spn\{ \bfv_1, \ldots, \bfv_{j-1} \}\text{.}\) There may be multiple subscripts \(j\) for which this is true; we use the smallest such \(j\text{.}\) We therefore have
\begin{equation} \bfv_j = a_1\bfv_1 + \cdots + a_{j-1}\bfv_{j-1}\text{,}\tag{6.2} \end{equation}
for scalars \(a_i \in \ff\text{.}\)
We now apply \(T\) to both sides of this equation and use the eigenvector assumptions (as well as the linearity of \(T\)) to get
\begin{align} T(\bfv_j) \amp = T(a_1\bfv_1 + \cdots + a_{j-1}\bfv_{j-1})\notag\\ T(\bfv_j) \amp = a_1T(\bfv_1) + \cdots + a_{j-1}T(\bfv_{j-1})\notag\\ \lambda_j\bfv_j \amp = a_1\lambda_1\bfv_1 + \cdots + a_{j-1}\lambda_{j-1}\bfv_{j-1}\text{.}\tag{6.3} \end{align}
If we multiply both sides of (6.2) by \(\lambda_j\) and subtract the result from (6.3), we get
\begin{equation*} \bfo = a_1(\lambda_1 - \lambda_j)\bfv_1 + \cdots + a_{j-1}(\lambda_{j-1} - \lambda_j)\bfv_{j-1}\text{.} \end{equation*}
Since \(\{ \bfv_1, \ldots, \bfv_{j-1} \}\) is linearly independent by assumption, we must have \(a_i(\lambda_i - \lambda_j) = 0\) for each \(i\text{,}\) \(1 \le i \le j-1\text{.}\) But we assumed that the eigenvalues are all distinct, so this means that \(\lambda_i - \lambda_j \neq 0\) for all \(i\text{,}\) and therefore we must have \(a_i = 0\) for all \(i\text{.}\) But this implies, from (6.2), that \(\bfv_j = \bfo\text{,}\) which is a contradiction as \(\bfo\) cannot be an eigenvector.
This contradiction proves that \(\{ \bfv_1, \ldots, \bfv_k \}\) must be linearly independent.
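For a numerical illustration of this theorem (an aside, reusing the matrix from Example 6.1.2, whose eigenvalues \(3\) and \(-1\) are distinct):

import numpy as np

A = np.array([[3.0, 0.0], [7.0, -1.0]])
vals, vecs = np.linalg.eig(A)         # eigenvalues and unit eigenvectors
print(vals)                           # 3 and -1, in some order
print(np.linalg.matrix_rank(vecs))    # 2: the eigenvectors are linearly independent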

Reading Questions 6.1.3 Reading Questions

1.

Consider the following matrix \(A \in M_2(\rr)\) and the vectors \(\bfu, \bfv \in \rr^2\text{:}\)
\begin{equation*} A = \begin{bmatrix} -1 \amp 2 \\ 3 \amp 4 \end{bmatrix}, \hspace{6pt} \bfu = \begin{bmatrix} 1 \\ 5 \end{bmatrix}, \hspace{6pt} \bfv = \begin{bmatrix} 3 \\ 9 \end{bmatrix}\text{.} \end{equation*}
  1. Is \(\bfu\) an eigenvector for \(A\text{?}\) How do you know?
  2. Is \(\bfv\) an eigenvector for \(A\text{?}\) How do you know?

2.

Consider the matrix \(A\) from the previous reading question. Show that \(-2\) is an eigenvalue of \(A\) and find the corresponding eigenvectors. Follow Example 6.1.5.

Exercises 6.1.4 Exercises

1.

Let \(A \in M_2(\rr)\) be the matrix
\begin{equation*} A = \begin{bmatrix} -12 \amp -14 \\ 7 \amp 9 \end{bmatrix}\text{.} \end{equation*}
  1. Is \(\bfu = \begin{bmatrix} -2 \\ 1 \end{bmatrix}\) an eigenvector for \(A\text{?}\) If so, find the eigenvalue.
  2. Is \(\bfv = \begin{bmatrix} 2 \\ -4 \end{bmatrix}\) an eigenvector for \(A\text{?}\) If so, find the eigenvalue.

2.

Let \(A \in M_3(\rr)\) be the matrix
\begin{equation*} A = \begin{bmatrix} 1 \amp -3 \amp -1 \\ 2 \amp 6 \amp 1 \\ -4 \amp -10 \amp -1 \end{bmatrix}\text{.} \end{equation*}
  1. Is \(4\) an eigenvalue for \(A\text{?}\) If so, find at least one eigenvector.
  2. Is \(3\) an eigenvalue for \(A\text{?}\) If so, find at least one eigenvector.

3.

Let \(A \in M_3(\rr)\) be the following matrix
\begin{equation*} A = \begin{bmatrix} -8 \amp 20 \amp 10 \\ 4 \amp -6 \amp -4 \\ -12 \amp 24 \amp 14 \end{bmatrix} \text{.} \end{equation*}
  1. Show that \(\lambda = -4\) is an eigenvalue for \(A\) and find a basis for \(\mathrm{eig}_{-4}(A)\text{.}\)
  2. Show that \(\lambda = 2\) is an eigenvalue for \(A\) and find a basis for \(\mathrm{eig}_{2}(A)\text{.}\)
Answer.
  1. The RREF of \(A + 4I\) has one non-pivot column, so \(\lambda = -4\) is an eigenvalue for \(A\text{.}\) A basis for \(\mathrm{eig}_{-4}(A)\) is
    \begin{equation*} \left\{ \begin{bmatrix} 5 \\ -2 \\ 6 \end{bmatrix} \right\}\text{.} \end{equation*}
  2. The RREF of \(A - 2I\) has two non-pivot columns, so \(\lambda = 2\) is an eigenvalue for \(A\text{.}\) A basis for \(\mathrm{eig}_{2}(A)\) is
    \begin{equation*} \left\{ \begin{bmatrix} 2 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} \right\}\text{.} \end{equation*}
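A quick NumPy sanity check of this answer (an illustration; the exercise itself asks for the row reduction):

import numpy as np

A = np.array([[ -8.0, 20.0, 10.0],
              [  4.0, -6.0, -4.0],
              [-12.0, 24.0, 14.0]])
print(A @ np.array([5.0, -2.0, 6.0]))   # -4 times (5, -2, 6)
print(A @ np.array([2.0, 1.0, 0.0]))    #  2 times (2, 1, 0)
print(A @ np.array([1.0, 0.0, 1.0]))    #  2 times (1, 0, 1)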

4.

Let \(A \in M_3(\rr)\) be the following matrix
\begin{equation*} A = \begin{bmatrix} -3 \amp 1 \amp -1 \\ 8 \amp 4 \amp 1 \\ 7 \amp 7 \amp -2 \end{bmatrix} \text{.} \end{equation*}
  1. Show that \(\lambda = 5\) is an eigenvalue for \(A\) and find a basis for \(\mathrm{eig}_{5}(A)\text{.}\)
  2. Show that \(\lambda = -2\) is an eigenvalue for \(A\) and find a basis for \(\mathrm{eig}_{-2}(A)\text{.}\)

5.

Let \(A \in M_3(\ff_5)\) be the following matrix
\begin{equation*} A = \begin{bmatrix} 4 \amp 0 \amp 0 \\ 4 \amp 0 \amp 3 \\ 1 \amp 4 \amp 1 \end{bmatrix}\text{.} \end{equation*}
  1. Show that \(\lambda = 4\) is an eigenvalue for \(A\) and find a basis for \(\mathrm{eig}_{4}(A)\text{.}\)
  2. Show that \(\lambda = 2\) is an eigenvalue for \(A\) and find a basis for \(\mathrm{eig}_{2}(A)\text{.}\)
Answer.
  1. The RREF of \(A - 4I\) has two non-pivot columns, so \(\lambda = 4\) is an eigenvalue for \(A\text{.}\) A basis for \(\mathrm{eig}_{4}(A)\) is
    \begin{equation*} \left\{ \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 3 \\ 0 \\ 1 \end{bmatrix} \right\}\text{.} \end{equation*}
  2. The RREF of \(A - 2I\) has one non-pivot column, so \(\lambda = 2\) is an eigenvalue for \(A\text{.}\) A basis for \(\mathrm{eig}_{2}(A)\) is
    \begin{equation*} \left\{ \begin{bmatrix} 0 \\ 4 \\ 1 \end{bmatrix} \right\}\text{.} \end{equation*}

6.

Let \(A \in M_3(\rr)\) be the following matrix:
\begin{equation*} A = \begin{bmatrix} 1 \amp 3 \amp 5 \\ 1 \amp 3 \amp 5 \\ 1 \amp 3 \amp 5 \end{bmatrix}\text{.} \end{equation*}
Find one eigenvalue of \(A\) without any calculation. Explain your reasoning.

7.

Let \(T:\rr^2 \to \rr^2\) be the linear transformation which is orthogonal projection onto the line \(y=5x\text{.}\) Find all eigenvalues and all eigenvectors of this transformation.

8.

Let \(T:\rr^3 \to \rr^3\) be the linear transformation which is reflection across the \(xy\)-plane. Find all eigenvalues and all eigenvectors of this transformation.

9.

Let \(A \in M_n(\rr)\) be the matrix with the number 1 in every entry. Find all eigenvalues and eigenvectors for \(A\text{.}\)
  1. Explain why \(A\) is not invertible. Based on this fact, use a result from this section to find an eigenvalue for \(A\text{.}\)
  2. Find the eigenvectors for \(A\) that correspond to the eigenvalue found in part a.
  3. There is one more eigenvalue of \(A\text{.}\) Consider the effect multiplication by \(A\) has on a vector \(\bfv \in \rr^n\text{.}\) Using your answer, determine the remaining eigenvalue and eigenvector(s) for \(A\text{.}\) (Some students may find it easier to locate the eigenvector first; some may not!)
Answer.
Since \(A\) is clearly not invertible (its columns are linearly dependent), it has 0 as an eigenvalue by Theorem 6.1.12. A basis for \(\mathrm{eig}_0(A)\) is
\begin{equation*} \left\{ \begin{bmatrix} -1 \\ 1 \\ 0 \\ \vdots \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} -1 \\ 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \ldots, \begin{bmatrix} -1 \\ 0 \\ \vdots \\ 0 \\ 0 \\ 1 \end{bmatrix} \right\}\text{.} \end{equation*}
The dimension of \(\mathrm{eig}_0(A)\) is \(n-1\text{.}\)
The other eigenvalue is \(\lambda = n\text{,}\) and a basis for \(\mathrm{eig}_n(A)\) is
\begin{equation*} \left\{ \begin{bmatrix} 1 \\ \vdots \\ 1 \end{bmatrix} \right\}\text{.} \end{equation*}
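These claims are easy to check numerically for a specific size (a sketch with \(n = 4\text{,}\) chosen only for illustration):

import numpy as np

n = 4
A = np.ones((n, n))                          # the all-ones matrix
print(A @ np.ones(n))                        # n times the all-ones vector
print(A @ np.array([-1.0, 1.0, 0.0, 0.0]))   # the zero vector: an eigenvector for 0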

Writing Exercises

10.
Prove that if \(A^2\) is the zero matrix, then the only eigenvalue of \(A\) is 0.
Solution.
Suppose that \(\lambda\) is an eigenvalue of \(A\text{.}\) Then there exists a nonzero vector \(\bfv\) such that \(A\bfv = \lambda\bfv\text{.}\) But then
\begin{equation*} A^2\bfv = A(\lambda\bfv) = \lambda \cdot A\bfv = \lambda^2\bfv\text{.} \end{equation*}
If \(A^2 = 0\text{,}\) then \(A^2\bfv = \bfo\text{,}\) so we have \(\lambda^2\bfv = \bfo\text{.}\) But since \(\bfv\) is nonzero, and since \(\ff\) is a field, we must have \(\lambda = 0\text{.}\) Therefore, the only eigenvalue of \(A\) is 0.
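A concrete instance (our own example, not taken from the exercise): the matrix below squares to zero, and its only eigenvalue is 0.

import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # A is nilpotent: A @ A = 0
print(A @ A)                             # the zero matrix
print(np.linalg.eigvals(A))              # [0., 0.]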
11.
Prove that an \(n\times n\) matrix \(A\) can have at most \(n\) distinct eigenvalues.
Solution.
We will argue by contradiction. Suppose that an \(n\times n\) matrix \(A\) has \(n+1\) distinct eigenvalues. This means that there are corresponding eigenvectors \(\bfv_1,\ldots, \bfv_{n+1}\text{.}\) By Theorem 6.1.13, we know that the set \(S = \{\bfv_1,\ldots, \bfv_{n+1}\}\) is linearly independent. However, this gives us a linearly independent set of \(n+1\) vectors in \(\ff^n\text{,}\) which is a contradiction by Corollary 5.1.13. This proves that \(A\) can have at most \(n\) distinct eigenvalues.
12.
If \(\lambda\) is an eigenvalue for an invertible linear transformation \(T\text{,}\) prove that \(\lambda^{-1}\) is an eigenvalue for \(T^{-1}\text{.}\)
13.
  1. Let \(A\) be an \(n\times n\) matrix. Prove that \(\lambda\) is an eigenvalue for \(A\) if and only if \(\lambda\) is an eigenvalue for \(A^T\text{.}\) (Hint: Figure out how \(A - \lambda I\) and \(A^T - \lambda I\) are related.)
  2. Use part (a) to complete the proof of TheoremΒ 6.1.10 for lower triangular matrices.
14.
In a matrix, a row sum refers to the sum of all of the entries in a particular row.
Let \(A\) be an \(n\times n\) matrix where all of the row sums are equal to the same number \(k\text{.}\) Show that \(k\) is an eigenvalue of \(A\text{.}\) (Hint: Find an eigenvector.)
15.
Suppose that \(A \in M_n(\ff)\) and that for each \(j = 1, \ldots, n\text{,}\) \(\bfe_j\) is an eigenvector of \(A\text{.}\) Prove that \(A\) is a diagonal matrix.
16.
  1. Suppose that \(\lambda\) is an eigenvalue of \(T \in L(V)\) and that \(k \in \nn\text{.}\) Prove that \(\lambda^k\) is an eigenvalue of \(T^k\) and that \(\mathrm{eig}_{\lambda}(T) \subseteq \mathrm{eig}_{\lambda^k}(T^k)\text{.}\) (Here \(T^k\) means the composition of \(T\) with itself \(k\) times.)
  2. Give an example of a linear transformation \(T:V \to V\) with an eigenvalue \(\lambda\) such that \(\mathrm{eig}_{\lambda}(T) \neq \mathrm{eig}_{\lambda^2}(T^2)\text{.}\)