
Section 4.2 Properties of the Determinant

We have introduced the determinant, but we have not yet backed up our assertion that the determinant is useful or powerful. Our goal in this section is to establish just that. In particular, by the end of this section we will be able to conclude that the determinant gives a characterization of the invertibility of a square matrix.

Subsection 4.2.1 The Determinant and Elementary Row Operations

In this subsection we will discover how elementary row operations affect the determinant of a matrix. These will be essential facts for proving the big theorems of this chapter. We begin with a result that is obvious in light of Theorem 4.1.7.

Proposition 4.2.1.

If a matrix \(A \in M_n(\ff)\) contains a row in which every entry is zero, then \(\det(A)=0\text{.}\)

Proof.

To calculate \(\det(A)\text{,}\) we use cofactor expansion along the row of zeros. This immediately shows that \(\det(A)=0\text{.}\)

Note 4.2.2.

We observe that Proposition 4.2.1 is also true if the word “row” is replaced by “column” since a matrix and its transpose have equal determinants. The reader should consider each result in this section and reflect on whether the statement would still hold after making the same word exchange.
Now, we examine the effect of the switch elementary row operation.

Theorem 4.2.3.

Let \(n \ge 2\text{,}\) let \(A \in M_n(\ff)\text{,}\) and let \(B\) be the result of switching two rows of \(A\text{.}\) Then \(\det(B) = -\det(A)\text{.}\)

Proof.

We will proceed by induction on \(n\text{.}\) This result only makes sense for \(n \ge 2\text{,}\) and the base case of \(n=2\) was covered in Exercise 4.1.4.11.
We let \(k\) be an integer such that \(k \ge 2\) and we assume the result is true for all \(k\times k\) matrices. Let \(A\) be a \((k+1)\times (k+1)\) matrix and let \(B\) be the result of switching two rows in \(A\text{.}\) We want to show that \(\det(B)=-\det(A)\text{.}\)
Since \(k \ge 2\text{,}\) we have \(k+1 \ge 3\text{,}\) which means that we can calculate \(\det(B)\) by expansion along a row that is not involved in the row exchange. Suppose that \(B\) was produced by switching rows \(p\) and \(q\text{.}\) We will calculate \(\det(B)\) by expanding along row \(i\text{,}\) where \(i\) is distinct from both \(p\) and \(q\text{.}\) We have
\begin{equation*} \det(B) = \sum_{j=1}^{k+1} (-1)^{i+j}[B]_{ij} \det(B_{ij})\text{.} \end{equation*}
We note that since \(i\) will never be \(p\) or \(q\text{,}\) \([B]_{ij} = [A]_{ij}\) for all \(j\text{.}\) Additionally, for all \(j\text{,}\) \(B_{ij}\) can be obtained by performing a switch row operation on \(A_{ij}\text{.}\) This means that, by the inductive hypothesis, we have \(\det(B_{ij}) = -\det(A_{ij})\) for all \(j\) since these matrices are \(k\times k\text{.}\) So, we have
\begin{align*} \det(B) \amp = \sum_{j=1}^{k+1} (-1)^{i+j}[A]_{ij}(-1) \det(A_{ij}) \\ \amp = - \sum_{j=1}^{k+1} (-1)^{i+j}[A]_{ij} \det(A_{ij})\\ \amp = -\det(A)\text{.} \end{align*}
This completes the inductive step.
We have shown that the result holds for all \(n \ge 2\) by the Principle of Mathematical Induction.
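The sign change can be checked numerically. The following Python sketch is our own illustration (the recursive `det` helper, which mirrors the cofactor expansion formula used in the proof, and the sample matrix are not from the text):

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j+1:] for r in M[1:]])
               for j in range(len(M)))

A = [[3, 1, 4], [1, 5, 9], [2, 6, 5]]
B = [A[2], A[1], A[0]]   # switch rows 1 and 3 of A
print(det(A), det(B))    # -90 90
```

As the theorem predicts, the determinant of `B` is the negative of the determinant of `A`.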
The second elementary row operation we will consider is the scale operation. How is the determinant of a matrix affected if one row is multiplied by a non-zero element of the field?

Theorem 4.2.4.

Let \(A \in M_n(\ff)\) and let \(B\) be the result of multiplying one row of \(A\) by a non-zero \(c \in \ff\text{.}\) Then \(\det(B) = c\det(A)\text{.}\)

Proof.

We will not need induction for this argument. Suppose that \(B\) is formed by multiplying row \(i\) in \(A\) by \(c \in \ff\) where \(c \neq 0\text{.}\) We will calculate \(\det(B)\) by expanding along row \(i\text{.}\) Note that since row \(i\) is the only row affected by this operation, \(B_{ij}=A_{ij}\) for all \(1 \le j \le n\text{.}\) Additionally, we note that \([B]_{ij} = c[A]_{ij}\) for all \(1 \le j \le n\text{.}\) Now we have
\begin{align*} \det(B) \amp = \sum_{j=1}^n (-1)^{i+j}[B]_{ij} \det(B_{ij})\\ \amp = \sum_{j=1}^n (-1)^{i+j}c[A]_{ij} \det(A_{ij})\\ \amp = c\sum_{j=1}^n (-1)^{i+j}[A]_{ij} \det(A_{ij})\\ \amp = c\det(A)\text{.} \end{align*}
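A quick numerical check of the scale property, using a \(2\times 2\) matrix of our own choosing (the `det2` helper computes \(ad-bc\)):

```python
def det2(M):
    # determinant of a 2x2 matrix: ad - bc
    (a, b), (c, d) = M
    return a * d - b * c

A = [[3, 1], [4, 2]]
c = 5
B = [A[0], [c * x for x in A[1]]]  # scale row 2 of A by c
print(det2(A), det2(B))  # 2 10
```

Scaling one row by \(c=5\) multiplies the determinant by exactly \(5\text{.}\)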
We now present the result related to the remaining elementary row operation, the replace operation.

Theorem 4.2.5.

Let \(n \ge 2\text{,}\) let \(A \in M_n(\ff)\text{,}\) and let \(B\) be the result of replacing one row of \(A\) with the sum of that row and a scalar multiple of a different row of \(A\text{.}\) Then \(\det(B) = \det(A)\text{.}\)

Proof.

We proceed by induction on \(n\text{.}\) This result only makes sense when \(n \ge 2\text{,}\) and the base case of \(n=2\) is covered in Exercise 4.1.4.12.
We let \(k\) be an integer such that \(k \ge 2\) and we assume the result is true for all \(k\times k\) matrices. Let \(A\) be a \((k+1)\times (k+1)\) matrix and let \(B\) be the result of replacing row \(q\) in \(A\) with the sum of row \(q\) and \(c\) times row \(p\) in \(A\text{.}\) We want to show that \(\det(B)=\det(A)\text{.}\)
We observe that \(k+1 \ge 3\text{,}\) so we can calculate \(\det(B)\) by expanding along a row which is neither row \(p\) nor row \(q\text{;}\) we will call that row \(i\text{.}\) Since \(i \neq q\text{,}\) we have \([B]_{ij} = [A]_{ij}\) for all \(1 \le j \le k+1\text{.}\) Additionally, for each \(j\text{,}\) \(B_{ij}\) is a \(k\times k\) matrix which has been obtained from \(A_{ij}\) by a replace row operation. The inductive hypothesis then gives \(\det(B_{ij}) = \det(A_{ij})\) for all \(1 \le j \le k+1\text{.}\) Therefore, we have the following:
\begin{align*} \det(B) \amp = \sum_{j=1}^{k+1} (-1)^{i+j} [B]_{ij} \det(B_{ij})\\ \amp = \sum_{j=1}^{k+1} (-1)^{i+j} [A]_{ij} \det(A_{ij})\\ \amp = \det(A)\text{.} \end{align*}
This completes the inductive step.
We have shown that the result holds for all \(n \ge 2\) by the Principle of Mathematical Induction.
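The replace property can also be checked numerically; here is a short Python sketch of our own (the `det2` helper and the sample matrix are not from the text):

```python
def det2(M):
    # determinant of a 2x2 matrix: ad - bc
    (a, b), (c, d) = M
    return a * d - b * c

A = [[2, 3], [1, 4]]
# replace row 2 of A with (row 2) + 3 * (row 1)
B = [A[0], [x + 3 * y for x, y in zip(A[1], A[0])]]
print(det2(A), det2(B))  # 5 5
```

The replace operation leaves the determinant unchanged, as the theorem asserts.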
The following example shows how these three theorems can be used to calculate the determinant of a matrix using row reduction.

Example 4.2.6.

Let \(A\) be the following matrix:
\begin{equation*} A = \begin{bmatrix} 2 \amp 0 \amp -3 \\ 1 \amp -1 \amp 2 \\ -2 \amp 3 \amp 0 \end{bmatrix}\text{.} \end{equation*}
We will find \(\det(A)\) using row reduction. We first switch rows 1 and 2, which introduces a negative sign:
\begin{equation*} \det(A) = - \begin{vmatrix} 1 \amp -1 \amp 2 \\ 2 \amp 0 \amp -3 \\ -2 \amp 3 \amp 0 \end{vmatrix} \text{.} \end{equation*}
Once we reduce the matrix to a triangular form, we can use Proposition 4.1.13, so we do not need to reduce the matrix to RREF, only to REF. This means that the rest of the row reduction can be performed using only the replace operation, which does not change the determinant:
\begin{align*} \det(A) \amp = - \begin{vmatrix} 1 \amp -1 \amp 2 \\ 2 \amp 0 \amp -3 \\ 0 \amp 3 \amp -3 \end{vmatrix} = - \begin{vmatrix} 1 \amp -1 \amp 2 \\ 0 \amp 2 \amp -7 \\ 0 \amp 3 \amp -3 \end{vmatrix} = - \begin{vmatrix} 1 \amp -1 \amp 2 \\ 0 \amp 2 \amp -7 \\ 0 \amp 0 \amp \tfrac{15}{2} \end{vmatrix}\text{.} \end{align*}
We have reduced the matrix far enough so that we can calculate its determinant using the product of the entries along the main diagonal:
\begin{equation*} \det(A) = (-1)(1)(2)(\tfrac{15}{2}) = -15\text{.} \end{equation*}
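The row-reduction strategy of this example can be automated. The following Python sketch is our own illustration (the function name and structure are not from the text): it reduces a matrix to REF using only switch and replace operations, tracks sign flips from each switch, and multiplies the diagonal entries, using exact rational arithmetic.

```python
from fractions import Fraction

def det_by_row_reduction(rows):
    """Compute the determinant by reducing to REF with switch and
    replace operations only, tracking sign flips from each switch."""
    M = [[Fraction(x) for x in row] for row in rows]
    n, sign = len(M), 1
    for i in range(n):
        pivot = next((r for r in range(i, n) if M[r][i] != 0), None)
        if pivot is None:
            return Fraction(0)       # no pivot: the matrix is singular
        if pivot != i:
            M[i], M[pivot] = M[pivot], M[i]
            sign = -sign             # a switch flips the sign
        for r in range(i + 1, n):    # replace ops leave det unchanged
            factor = M[r][i] / M[i][i]
            M[r] = [a - factor * b for a, b in zip(M[r], M[i])]
    prod = Fraction(1)
    for i in range(n):
        prod *= M[i][i]              # product of the diagonal of the REF
    return sign * prod

A = [[2, 0, -3], [1, -1, 2], [-2, 3, 0]]  # the matrix from Example 4.2.6
print(det_by_row_reduction(A))  # -15
```

The result agrees with the hand computation above, even though the code happens to choose a different sequence of row operations.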

Subsection 4.2.2 Invertibility and the Determinant

We will use the results that have accumulated thus far in this section to prove two major results. First, we need to record an easy fact.

Lemma 4.2.7.

If \(I_n\) is the identity matrix in \(M_n(\ff)\text{,}\) then \(\det(I_n)=1\text{.}\)

Proof.

Since the identity matrix is, among other things, a triangular matrix, Proposition 4.1.13 applies. The entries along the main diagonal are all \(1\text{,}\) so \(\det(I_n)=1\text{.}\)
We will now apply this lemma to record the determinant of any elementary matrix.

Proposition 4.2.8.

Let \(E \in M_n(\ff)\) be an elementary matrix. If \(E\) performs a switch row operation, then \(\det(E)=-1\text{;}\) if \(E\) scales a row by a non-zero \(c \in \ff\text{,}\) then \(\det(E)=c\text{;}\) and if \(E\) performs a replace row operation, then \(\det(E)=1\text{.}\) In particular, \(\det(E) \neq 0\text{.}\)

Proof.

Every elementary matrix in \(M_n(\ff)\) is the result of performing a single elementary row operation on \(I_n\text{.}\) We have theorems in this section which tell us how these elementary row operations affect the determinant of a matrix, and since from Lemma 4.2.7 we know that \(\det(I_n)=1\text{,}\) we will be able to arrive at our result.
If \(E\) performs a switch row operation, then by Theorem 4.2.3 we have \(\det(E) = \det(EI_n) = -\det(I_n)=-1\text{.}\)
If \(E\) scales one row of a matrix by a non-zero \(c \in \ff\text{,}\) then by Theorem 4.2.4 we have \(\det(E) = \det(EI_n) = c\det(I_n) = c\text{.}\)
Finally, if \(E\) performs a replace row operation, then by Theorem 4.2.5 we have \(\det(E) = \det(EI_n) = \det(I_n) = 1\text{,}\) which completes the proof.
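We can check these three determinants concretely for \(3\times 3\) elementary matrices; this Python sketch is our own (the `det3` helper and the particular matrices are not from the text):

```python
def det3(M):
    # cofactor expansion along the first row of a 3x3 matrix
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

E_switch = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]   # switch rows 1 and 2 of I_3
E_scale = [[7, 0, 0], [0, 1, 0], [0, 0, 1]]    # scale row 1 of I_3 by c = 7
E_replace = [[1, 0, 0], [4, 1, 0], [0, 0, 1]]  # add 4 times row 1 to row 2
print(det3(E_switch), det3(E_scale), det3(E_replace))  # -1 7 1
```

The three outputs match the three cases of the proof: \(-1\text{,}\) \(c\text{,}\) and \(1\text{.}\)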

Example 4.2.9.

Sometimes, the easiest way to find a determinant by hand is to use a combination of cofactor expansion and row reduction techniques. Let \(A \in M_4(\rr)\) be the following matrix:
\begin{equation*} A = \begin{bmatrix} 0 \amp 1 \amp -1 \amp 2 \\ 1 \amp 3 \amp 0 \amp -2 \\ 2 \amp 4 \amp 1 \amp -1 \\ -2 \amp 0 \amp -1 \amp -3 \end{bmatrix}\text{.} \end{equation*}
To find \(\det(A)\text{,}\) we first use the replace row operation, using the \(1\) in the \((2,1)\) position to put zeros in the column below it:
\begin{equation*} A \sim \begin{bmatrix} 0 \amp 1 \amp -1 \amp 2 \\ 1 \amp 3 \amp 0 \amp -2 \\ 0 \amp -2 \amp 1 \amp 3 \\ 0 \amp 6 \amp -1 \amp -7 \end{bmatrix} = B\text{.} \end{equation*}
Since the replace row operation doesn't change the determinant, we have \(\det(A) = \det(B)\text{.}\) We now use cofactor expansion along the first column to calculate \(\det(B)\text{.}\) Since there is only one non-zero entry in that column, we have
\begin{equation*} \det(B) = - \begin{vmatrix} 1 \amp -1 \amp 2 \\ -2 \amp 1 \amp 3 \\ 6 \amp -1 \amp -7 \end{vmatrix} \text{.} \end{equation*}
We can now use the replace row operation three more times, to produce zeros in the \((2,1)\text{,}\) \((3,1)\text{,}\) and \((3,2)\) positions of this \(3\times 3\) matrix:
\begin{equation*} \det(B) = - \begin{vmatrix} 1 \amp -1 \amp 2 \\ 0 \amp -1 \amp 7 \\ 0 \amp 5 \amp -19 \end{vmatrix} = - \begin{vmatrix} 1 \amp -1 \amp 2 \\ 0 \amp -1 \amp 7 \\ 0 \amp 0 \amp 16 \end{vmatrix} \text{.} \end{equation*}
We now invoke Proposition 4.1.13 to see that \(\det(B) = -(-1)(16) = 16\text{.}\) Since \(\det(B)=\det(A)\text{,}\) we have \(\det(A)=16\text{.}\)
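As a cross-check of this example, a direct cofactor expansion gives the same answer; the recursive helper below is our own illustration, not the method used in the text:

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j+1:] for r in M[1:]])
               for j in range(len(M)))

# The matrix A from Example 4.2.9
A = [[0, 1, -1, 2],
     [1, 3, 0, -2],
     [2, 4, 1, -1],
     [-2, 0, -1, -3]]
print(det(A))  # 16
```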
In this next result, we use Proposition 4.2.8 to show that the determinant respects matrix multiplication, at least when one of the factors is an elementary matrix.

Theorem 4.2.10.

If \(E \in M_n(\ff)\) is an elementary matrix and \(A \in M_n(\ff)\text{,}\) then \(\det(EA) = \det(E)\det(A)\text{.}\)

Proof.

This argument uses Proposition 4.2.8 and requires three cases. If \(E\) performs a switch row operation, then we know from Theorem 4.2.3 that \(\det(EA) = -\det(A)\text{.}\) Since we now know that \(\det(E)=-1\text{,}\) we have
\begin{equation*} \det(EA) = -\det(A) = \det(E)\det(A)\text{.} \end{equation*}
If \(E\) performs a scale row operation, and if the scaling is by a non-zero \(c \in \ff\text{,}\) then we know from Theorem 4.2.4 that \(\det(EA) = c\det(A)\text{.}\) Since \(\det(E)=c\text{,}\) we have
\begin{equation*} \det(EA) = c\det(A) = \det(E)\det(A)\text{.} \end{equation*}
Finally, if \(E\) performs a replace row operation, then we know from Theorem 4.2.5 that \(\det(EA) = \det(A)\text{.}\) We know that \(\det(E)=1\text{,}\) so
\begin{equation*} \det(EA) = \det(A) = 1\cdot \det(A) = \det(E)\det(A)\text{.} \end{equation*}
Armed with this result, we can now prove one of the most useful facts about determinants.

Theorem 4.2.11.

A matrix \(A \in M_n(\ff)\) is invertible if and only if \(\det(A) \neq 0\text{.}\)

Proof.

For \(A \in M_n(\ff)\text{,}\) let \(B\in M_n(\ff)\) be the unique RREF of \(A\text{.}\) From Proposition 3.3.11, we know there exist elementary matrices \(E_1, \ldots, E_k\) such that
\begin{equation*} A = E_1 \cdots E_k B\text{.} \end{equation*}
We can apply Theorem 4.2.10 repeatedly to see that
\begin{equation*} \det(A) = \det(E_1 \cdots E_k B) = \det(E_1)\cdots \det(E_k)\det(B)\text{.} \end{equation*}
Since \(\det(E_i) \neq 0\) for each \(i\) by Proposition 4.2.8, we conclude that \(\det(A) \neq 0\) if and only if \(\det(B) \neq 0\text{.}\)
We now assume that \(A\) is invertible. Theorem 3.3.13 tells us that \(B = I_n\text{,}\) so \(\det(B) \neq 0\text{.}\) This proves one direction of the theorem.
We will prove the contrapositive of the other direction of the theorem. We now assume that \(A\) is not invertible, which (again by Theorem 3.3.13) means that \(B \neq I_n\text{.}\) Specifically, \(B\) must have fewer than \(n\) pivots, which means that \(B\) must have at least one row of zeros. By Proposition 4.2.1 we have \(\det(B)=0\text{.}\) Therefore, we must also have \(\det(A)=0\text{.}\)
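A small numerical illustration of this characterization, with matrices of our own choosing (the `det2` helper computes \(ad-bc\)):

```python
def det2(M):
    # determinant of a 2x2 matrix: ad - bc
    (a, b), (c, d) = M
    return a * d - b * c

A = [[1, 2], [2, 4]]   # second row is twice the first, so A is singular
B = [[1, 2], [3, 4]]
print(det2(A), det2(B))  # 0 -2
```

Here \(\det(A)=0\text{,}\) so \(A\) is not invertible, while \(\det(B)=-2 \neq 0\text{,}\) so \(B\) is invertible.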
If a casual math student spends some time away from linear algebra, this previous theorem might be the one and only fact they remember about the determinant. It is powerful and used frequently.

Example 4.2.12.

Using this theorem, if \(A \in M_3(\rr)\) is
\begin{equation*} A = \begin{bmatrix} 2 \amp -4 \amp 2 \\ 1 \amp 0 \amp 3 \\ 3.5 \amp 2 \amp 12.5 \end{bmatrix}\text{,} \end{equation*}
then we can say that \(A\) is not invertible since \(\det(A)=0\text{.}\)
We can also analyze the invertibility of matrices over other fields. Consider the matrix \(B \in M_3(\ff_5)\) given by
\begin{equation*} B = \begin{bmatrix} 3 \amp 4 \amp 1 \\ 1 \amp 4 \amp 0 \\ 1 \amp 2 \amp 4 \end{bmatrix}\text{.} \end{equation*}
We find that \(\det(B)=0\text{,}\) so \(B\) is not invertible. (If \(B\) were a matrix over \(\rr\text{,}\) we would have \(\det(B)=30\text{.}\) But this means that, in \(\ff_5\text{,}\) \(\det(B)=0\text{.}\))
Finally, we consider another matrix \(C \in M_3(\ff_5)\text{:}\)
\begin{equation*} C = \begin{bmatrix} 0 \amp 2 \amp 3 \\ 1 \amp 2 \amp 1 \\ 2 \amp 2 \amp 1 \end{bmatrix}\text{.} \end{equation*}
Since \(\det(C)=1\) in \(\ff_5\text{,}\) \(C\) is invertible.
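For matrices over \(\ff_5\) with entries written as integers, one can compute the determinant with integer arithmetic and then reduce modulo \(5\text{.}\) A short Python sketch of our own, using the matrices \(B\) and \(C\) above:

```python
def det3(M):
    # cofactor expansion along the first row of a 3x3 matrix
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# The matrices B and C from Example 4.2.12, entries viewed as integers
B = [[3, 4, 1], [1, 4, 0], [1, 2, 4]]
C = [[0, 2, 3], [1, 2, 1], [2, 2, 1]]
print(det3(B) % 5, det3(C) % 5)  # 0 1
```

The outputs confirm the example: \(B\) is not invertible over \(\ff_5\) and \(C\) is.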
We present one final, important result about determinants in the last theorem of this chapter.

Theorem 4.2.13.

If \(A, B \in M_n(\ff)\text{,}\) then \(\det(AB) = \det(A)\det(B)\text{.}\)

Proof.

We will prove this in two cases. First, if \(A\) is not invertible, then neither is \(AB\text{,}\) by Exercise 3.3.5.15. This means that \(\det(AB)=\det(A)\det(B)\) since, by Theorem 4.2.11, both sides of the equation are zero.
If \(A\) is invertible, then \(A\) is row equivalent to \(I_n\text{,}\) and there exist elementary matrices \(E_1, \ldots, E_k\) such that
\begin{equation*} A = E_1 \cdots E_kI_n = E_1 \cdots E_k \text{.} \end{equation*}
In the calculation that follows, we use this factorization as well as repeated application of Theorem 4.2.10. We first use Theorem 4.2.10 to peel the determinant of elementary matrices away from \(\det(B)\text{;}\) we then use the same result to put those determinants back together to form \(\det(A)\text{.}\) Here is the argument:
\begin{align*} \det(AB) \amp = \det(E_1\cdots E_kB)\\ \amp = \det(E_1) \det(E_2 \cdots E_kB)\\ \amp = \det(E_1) \det(E_2) \det(E_3 \cdots E_kB) = \cdots \\ \amp = \det(E_1) \cdots \det(E_k)\det(B)\\ \amp = \det(E_1E_2) \cdots \det(E_k)\det(B) = \cdots \\ \amp = \det(E_1 \cdots E_k) \det(B) = \det(A)\det(B)\text{.} \end{align*}
This completes the proof.
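The multiplicative property is easy to test numerically; this sketch is our own (the `det2` and `matmul` helpers and the sample matrices are not from the text):

```python
def det2(M):
    # determinant of a 2x2 matrix: ad - bc
    (a, b), (c, d) = M
    return a * d - b * c

def matmul(A, B):
    # entry (i, j) of AB is the dot product of row i of A and column j of B
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[2, 1], [5, 3]]
B = [[1, 4], [2, 0]]
print(det2(A), det2(B), det2(matmul(A, B)))  # 1 -8 -8
```

Here \(\det(AB) = -8 = (1)(-8) = \det(A)\det(B)\text{,}\) as the theorem guarantees.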
We take a step back for a moment to marvel at this theorem. We defined matrix multiplication in the context of the composition of linear transformations (see Subsection 3.2.2), and the calculations were quite involved. The definition of the determinant was also complicated, but in a different way, so the fact that these two notions fit together so nicely is worthy of our admiration.

Example 4.2.14.

In this example, we will verify Theorem 4.2.13 for specific matrices. Let \(A\) and \(B\) be the following matrices:
\begin{equation*} A = \begin{bmatrix} -1 \amp -2 \\ -3 \amp -4 \end{bmatrix} \hspace{6pt} \text{and} \hspace{6pt} B = \begin{bmatrix} -2 \amp 4 \\ 4 \amp 2 \end{bmatrix}\text{.} \end{equation*}
We calculate \(AB\) as
\begin{equation*} AB = \begin{bmatrix} -6 \amp -8 \\ -10 \amp -20 \end{bmatrix}\text{.} \end{equation*}
We see that \(\det(A)=-2\text{,}\) \(\det(B)=-20\text{,}\) and \(\det(AB)=40\text{,}\) so the relationship \(\det(AB)=\det(A)\det(B)\) holds.

Subsection 4.2.3 Proving Theorem 4.1.9

We will devote the final subsection to the proof of Theorem 4.1.9. The reader holding their breath since the statement of this theorem will soon be able to exhale.
Results about the multiplicative property of the determinant and the behavior of the transpose under matrix multiplication make this result easy to prove with but a single lemma.

Lemma 4.2.15.

If \(E \in M_n(\ff)\) is an elementary matrix, then \(\det(E^T) = \det(E)\text{.}\)

Proof.

We have three cases to consider, but two of the cases are immediate. If \(E\) is an elementary matrix that performs the scale or switch row operation, then \(E = E^T\text{,}\) so the result follows easily. (We ask the reader to prove this fact in Exercise 4.2.5.15.)
We now suppose that \(E\) performs the replace row operation. We assume that \(E\) performs the row operation of adding \(k\) times row \(i\) to row \(j\text{,}\) where \(i \neq j\text{.}\) This means that \(E\) is the matrix \(I_n\) with the extra feature of containing the entry \(k\) in position \((j,i)\text{.}\) From Proposition 4.2.8, we know that \(\det(E)=1\text{,}\) so we only need to prove that \(\det(E^T)=1\text{.}\)
The matrix \(E^T\) is \(I_n\) except for the fact that it contains the element \(k\) in position \((i,j)\text{.}\) We will calculate \(\det(E^T)\) by using cofactor expansion along row \(j\text{:}\)
\begin{equation*} \det(E^T) = \sum_{q=1}^n a_{jq}C_{jq}= 1\cdot C_{jj}\text{.} \end{equation*}
This calculation reduces to one term because there is only one nonzero element in row \(j\) of \(E^T\text{.}\) (Choosing row \(j\) for expansion means the element \(k\) in position \((i,j)\) is removed when calculating the determinant of the submatrix.) Since \((E^T)_{jj} = I_{n-1}\text{,}\) we have
\begin{equation*} \det(E^T) = (-1)^{j+j} \det((E^T)_{jj}) = (-1)^{2j} \det(I_{n-1}) = 1\cdot 1 = 1\text{.} \end{equation*}
Since \(\det(E^T)=1\) and \(\det(E)=1\text{,}\) this concludes our final case.
We are now ready for the long-promised proof of Theorem 4.1.9.

Proof of Theorem 4.1.9.

We assume that \(A \in M_n(\ff)\text{.}\) If \(A\) is not invertible, then \(A^T\) is also not invertible (see Exercise 3.3.5.9), meaning that both \(\det(A)=0\) and \(\det(A^T)=0\) by Theorem 4.2.11. This proves that \(\det(A) = \det(A^T)\text{.}\)
We now assume that \(A\) is invertible. Using Proposition 3.3.11 and Theorem 3.3.13, we know that there exist elementary matrices \(E_1, \ldots, E_k\) such that
\begin{equation} A = E_1 \cdots E_k I_n = E_1 \cdots E_k\text{.}\tag{4.1} \end{equation}
By repeated use of Theorem 4.2.10, we know that
\begin{equation*} \det(A) = \det(E_1) \cdots \det(E_k)\text{.} \end{equation*}
We can take the transpose of both sides of (4.1), and using Theorem 3.2.15 (part 4) repeatedly we have
\begin{equation*} A^T = (E_1 \cdots E_k)^T = E_k^T \cdots E_1^T\text{.} \end{equation*}
We again use Theorem 4.2.10 repeatedly (the transpose of an elementary matrix is an elementary matrix, see Exercise 3.3.5.10), and we have
\begin{equation*} \det(A^T) = \det(E_k^T) \cdots \det(E_1^T)\text{.} \end{equation*}
Finally, using Lemma 4.2.15 and the fact that multiplication within \(\ff\) is commutative, we conclude that \(\det(A) = \det(A^T)\text{.}\)
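A concrete check of \(\det(A) = \det(A^T)\text{,}\) with a sample matrix of our own (the `det3` helper computes a \(3\times 3\) determinant by cofactor expansion along the first row):

```python
def det3(M):
    # cofactor expansion along the first row of a 3x3 matrix
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[1, 4, 2], [0, 3, 5], [7, 1, 6]]
At = [list(col) for col in zip(*A)]  # the transpose of A
print(det3(A), det3(At))  # 111 111
```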

Reading Questions 4.2.4 Reading Questions

1.

Consider the following three matrices:
\begin{equation*} A = \begin{bmatrix} 3 \amp -1 \amp -2 \\ 1 \amp 2 \amp 0 \\ 1 \amp 1 \amp 2 \end{bmatrix}, \hspace{6pt} A_1 = \begin{bmatrix} 1 \amp 1 \amp 2 \\ 1 \amp 2 \amp 0 \\ 3 \amp -1 \amp -2 \end{bmatrix}, \hspace{6pt} A_2 = \begin{bmatrix} 3 \amp -1 \amp -2 \\ 1 \amp 2 \amp 0 \\ 0 \amp -1 \amp 2 \end{bmatrix}\text{.} \end{equation*}
  1. Calculate \(\det(A)\) using cofactor expansion along some row or column. Show your work.
  2. The matrix \(A_1\) was obtained from \(A\) by a single elementary row operation. Which one?
  3. Knowing \(\det(A)\) and given your answer to (b), what do you predict \(\det(A_1)\) to be? (Consult Theorem 4.2.3.)
  4. Calculate \(\det(A_1)\) using cofactor expansion along some row or column. Show your work.
  5. The matrix \(A_2\) was obtained from \(A\) by a single elementary row operation. Which one?
  6. Knowing \(\det(A)\) and given your answer to (e), what do you predict \(\det(A_2)\) to be? (Consult Theorem 4.2.5.)
  7. Calculate \(\det(A_2)\) using cofactor expansion along some row or column. Show your work.

2.

Verify Theorem 4.2.13 for the following two matrices \(A\) and \(B\text{:}\)
\begin{equation*} A = \begin{bmatrix} 4 \amp 1 \\ 5 \amp 2 \end{bmatrix} \hspace{6pt} \text{and} \hspace{6pt} B = \begin{bmatrix} -1 \amp 2 \\ 1 \amp -3 \end{bmatrix}\text{.} \end{equation*}
(You should follow Example 4.2.14.)

Exercises 4.2.5 Exercises

1.

Find the determinant of the matrix using row reduction.
  1. \(\displaystyle A = \begin{bmatrix} 1 \amp 2 \amp -1 \\ 2 \amp -4 \amp -2 \\ -4 \amp -3 \amp 2 \end{bmatrix}\)
  2. \(\displaystyle A = \begin{bmatrix} -1 \amp -2 \amp 0 \amp 3 \\ -2 \amp -2 \amp 0 \amp -2 \\ 0 \amp 2 \amp 1 \amp 0 \\ 3 \amp 8 \amp 3 \amp 7 \end{bmatrix}\)

3.

Find the determinant using a combination of row reduction and cofactor expansion:
\begin{equation*} A = \begin{bmatrix} 2 \amp 1 \amp -3 \amp 1 \\ 4 \amp 3 \amp -1 \amp 0 \\ 0 \amp -1 \amp 3 \amp -1 \\ -2 \amp 1 \amp 2 \amp 1 \end{bmatrix}\text{.} \end{equation*}

4.

Find the determinant using a combination of row reduction and cofactor expansion:
\begin{equation*} A = \begin{bmatrix} -1 \amp 2 \amp 1 \amp 4 \\ 3 \amp -4 \amp 1 \amp -3 \\ 4 \amp -10 \amp -1 \amp 0 \\ -1 \amp 4 \amp 2 \amp 3 \end{bmatrix}\text{.} \end{equation*}

5.

Use the determinant to determine whether or not the matrix is invertible. (Note that not all fields are \(\rr\text{!}\))
  1. \(A \in M_3(\ff_3)\text{,}\) \(A = \begin{bmatrix} 2 \amp 0 \amp 1 \\ 0 \amp 0 \amp 2 \\ 2 \amp 2 \amp 0 \end{bmatrix}\)
  2. \(A \in M_3(\rr)\text{,}\) \(A = \begin{bmatrix} -3 \amp -1 \amp -1 \\ 0 \amp -3 \amp -3 \\ 2 \amp -3 \amp 3 \end{bmatrix}\)
  3. \(A \in M_3(\ff_5)\text{,}\) \(A = \begin{bmatrix} 3 \amp 1 \amp 0 \\ 0 \amp 3 \amp 1 \\ 4 \amp 1 \amp 3 \end{bmatrix}\)
  4. \(A \in M_2(\cc)\text{,}\) \(A = \begin{bmatrix} 2+i \amp 2-3i \\ 4-i \amp -2+4i \end{bmatrix}\)
  5. \(A \in M_3(\cc)\text{,}\) \(A = \begin{bmatrix} 0 \amp 3-2i \amp -2-4i \\ -2 \amp 2+4i \amp 0 \\ 3+i \amp -1+i \amp 0 \end{bmatrix}\)

6.

Calculate \(\det(A^3)\) if
\begin{equation*} A = \begin{bmatrix} 2 \amp 1 \amp 0 \\ 0 \amp 1 \amp 1 \\ 1 \amp 1 \amp 2 \end{bmatrix}\text{.} \end{equation*}

7.

Construct an invertible matrix \(A \in M_3(\rr)\text{.}\) For each entry of \(A\text{,}\) compute the corresponding cofactor. Create a new \(3\times 3\) matrix with these cofactors in the same position as the entry of \(A\) on which they were based; call this matrix \(C\text{.}\) Calculate \(AC^T\text{.}\) What do you observe?

Writing Exercises

8.
Suppose that \(A\) is a square matrix with two identical columns. Prove that \(\det(A)=0\text{.}\)
Solution.
If \(A\) has identical columns, then \(A^T\) has identical rows. We can use an elementary row operation to add \(-1\) times one of these rows to the other, producing a row of zeros in this matrix we will call \(B\text{.}\) Since we used the replace row operation to go from \(A^T\) to \(B\text{,}\) we have \(\det(A^T) = \det(B)\) by Theorem 4.2.5. Since \(B\) has a row of zeros, we know that \(\det(B) = 0\) by Proposition 4.2.1. This means that \(\det(A^T)=0\text{,}\) and since we have \(\det(A)=\det(A^T)\) by Theorem 4.1.9, this means that \(\det(A)=0\text{,}\) as desired.
9.
Suppose that \(A \in M_n(\ff)\) is invertible. Prove that \(\det(A^{-1})= \dfrac{1}{\det(A)}\text{.}\)
11.
Suppose that \(A, B \in M_n(\ff)\text{.}\) Show that \(\det(AB)=\det(BA)\) regardless of whether or not \(AB=BA\text{.}\)
Solution.
If \(A, B \in M_n(\ff)\text{,}\) we have
\begin{equation*} \det(AB) = \det(A)\det(B) = \det(B)\det(A) = \det(BA)\text{.} \end{equation*}
This string of equations uses Theorem 4.2.13 twice as well as the fact that the determinant of a matrix is an element of \(\ff\text{,}\) and elements of \(\ff\) commute via multiplication.
12.
Let \(A \in M_n(\ff)\) and let \(k \in \ff\text{.}\) Find a formula for \(\det(kA)\) and prove that your formula is correct.
13.
  1. Verify that \(\det(A) = \det(B) + \det(C)\) where \(A\text{,}\) \(B\text{,}\) and \(C\) are
    \begin{equation*} A = \begin{bmatrix} a+e \amp b+f \\ c \amp d \end{bmatrix}, \hspace{6pt} B = \begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix}, \hspace{6pt} C = \begin{bmatrix} e \amp f \\ c \amp d \end{bmatrix}\text{.} \end{equation*}
  2. Let \(A\) and \(B\) be
    \begin{equation*} A = \begin{bmatrix} 1 \amp 0 \\ 0 \amp 1 \end{bmatrix}, \hspace{12pt} B = \begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix} \text{.} \end{equation*}
    Show that \(\det(A+B) = \det(A) + \det(B)\) if and only if \(a+d=0\text{.}\)
  3. Provide an example where \(A, B \in M_3(\rr)\) to prove that \({\det(A+B) = \det(A) + \det(B)}\) is not always true.
14.
Consider the following matrix (called a Vandermonde matrix):
\begin{equation*} V = \begin{bmatrix} 1 \amp a \amp a^2 \\ 1 \amp b \amp b^2 \\ 1 \amp c \amp c^2 \end{bmatrix}\text{.} \end{equation*}
  1. Use row operations to explain why \(\det(V) = (b-a)(c-a)(c-b)\text{.}\)
  2. Explain why \(V\) is invertible if and only if \(a\text{,}\) \(b\text{,}\) and \(c\) are all distinct real numbers.
15.
Suppose that \(\ff\) is a field and that \(E \in M_n(\ff)\) is an elementary matrix which performs the scale or switch row operation. Prove that \(E\) is a symmetric matrix.