If
\(A\) is invertible, then by
Lemma 3.3.12 the equation
\(A\bfx = \bfb\) has a unique solution for every
\(\bfb \in \ff^n\text{.}\) Now
Theorem 1.3.6 tells us that the RREF of
\(A\) has a pivot in each of its
\(n\) columns. Since
\(A\) is square, these \(n\) pivots occupy \(n\) distinct rows, so the RREF has a pivot in each row as well; since the only \(n\times n\) matrix in RREF with a pivot in every row and every column is the identity, the RREF of
\(A\) must be
\(I_n\text{.}\)
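For a concrete illustration of this direction, take for instance
\begin{equation*}
A = \begin{bmatrix} 1 & 6 \\ 0 & 3 \end{bmatrix}\text{,}
\end{equation*}
which is invertible with
\begin{equation*}
A^{-1} = \begin{bmatrix} 1 & -2 \\ 0 & 1/3 \end{bmatrix}\text{.}
\end{equation*}
Adding \(-2\) times the second row to the first row and then multiplying the second row by \(\tfrac{1}{3}\) reduces \(A\) to \(I_2\text{,}\) so its RREF is indeed \(I_2\text{.}\)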
Conversely, suppose that
\(A\) is row equivalent to
\(I_n\text{.}\) By
Proposition 3.3.11, there exist elementary matrices
\(E_1,\ldots,E_k\) such that
\begin{equation}
A = E_1\cdots E_kI_n\text{.}\tag{3.7}
\end{equation}
This means that \(A = E_1\cdots E_k\text{,}\) and since elementary matrices are invertible and a product of invertible matrices is invertible, this proves that \(A\) is invertible.
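For instance, for the matrix
\begin{equation*}
A = \begin{bmatrix} 1 & 6 \\ 0 & 3 \end{bmatrix}
\end{equation*}
from the earlier illustration, we may take \(k = 2\) with the elementary matrices
\begin{equation*}
E_1 = \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}\text{,} \qquad
E_2 = \begin{bmatrix} 1 & 0 \\ 0 & 3 \end{bmatrix}\text{,}
\end{equation*}
since
\begin{equation*}
E_1E_2I_2 = \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & 3 \end{bmatrix} = \begin{bmatrix} 1 & 6 \\ 0 & 3 \end{bmatrix} = A\text{.}
\end{equation*}
(These particular matrices are one possible choice; the factorization in (3.7) is not unique.)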
If we multiply both sides of
(3.7) on the left by
\((E_1\cdots E_k)^{-1} = E_k^{-1}\cdots E_1^{-1}\text{,}\) we get
\begin{equation*}
E_k^{-1}\cdots E_1^{-1}A = I_n\text{,}
\end{equation*}
which exhibits a sequence of elementary row operations (left multiplication by the elementary matrices \(E_1^{-1},\ldots,E_k^{-1}\text{,}\) in that order) that transforms \(A\) into \(I_n\text{.}\) On the other hand, if we take the equation \(A = E_1\cdots E_k\) from the previous paragraph and invert both sides, we get
\begin{equation*}
A^{-1} = E_k^{-1}\cdots E_1^{-1}\text{,}
\end{equation*}
which, since multiplying by \(I_n\) changes nothing, we may rewrite as
\begin{equation*}
A^{-1} = E_k^{-1}\cdots E_1^{-1}I_n\text{.}
\end{equation*}
In other words, the same sequence of row operations that reduces \(A\) to \(I_n\text{,}\) when applied to \(I_n\text{,}\) produces \(A^{-1}\text{.}\) This establishes the final claim in the theorem.
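Continuing the illustration with \(A = \begin{bmatrix} 1 & 6 \\ 0 & 3 \end{bmatrix}\) and the elementary matrices \(E_1, E_2\) chosen above, we have
\begin{equation*}
A^{-1} = E_2^{-1}E_1^{-1}I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1/3 \end{bmatrix}\begin{bmatrix} 1 & -2 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & -2 \\ 0 & 1/3 \end{bmatrix}\text{.}
\end{equation*}
The two row operations encoded by \(E_1^{-1}\) and \(E_2^{-1}\) (adding \(-2\) times the second row to the first row, then multiplying the second row by \(\tfrac{1}{3}\)) reduce \(A\) to \(I_2\text{;}\) applied instead to \(I_2\text{,}\) they produce exactly the matrix \(A^{-1}\) displayed above, as one can check by computing \(AA^{-1}\text{.}\)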