Reversing the order of multiplication does not change eigen-spaces (for non-zero eigen-values)

2022-12-30

This is a proof I found while testing one of the examination questions for a course for which I serve as a teaching assistant.

Statement

Suppose \(A\) is a \(4\times3\) matrix and \(B\) a \(3\times 4\) matrix with real coefficients such that

\[ AB = \begin{pmatrix} 1 & -2 & -1 & -1 \\ 0 & 3 & 0 & 0 \\ 0 & 0 & 3 & 0 \\ -2 & -2 & -1 & 2 \\ \end{pmatrix}. \]

Then \(BA = 3I_{3\times 3}\), where \(I_{3\times 3}\) is the \(3\times 3\) identity matrix.

The proof the teacher had in mind

The teacher wanted the student to notice that, if the statement is true, then by the associativity of matrix products we would have the equality \[ABAB = A (BA) B = 3AB.\] So the first step is to multiply the given matrix \(AB\) by itself and verify that the product is indeed three times \(AB\).
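(This multiplication can be checked by hand, or numerically; here is a minimal sketch with NumPy, which is not part of the original argument.)

```python
import numpy as np

# The given matrix AB from the statement.
AB = np.array([[ 1, -2, -1, -1],
               [ 0,  3,  0,  0],
               [ 0,  0,  3,  0],
               [-2, -2, -1,  2]])

# Verify that (AB)(AB) = 3 AB.
print(np.array_equal(AB @ AB, 3 * AB))  # prints True
```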

The next step is to verify that \(B\) is surjective and \(A\) is injective: since \(\operatorname{rank}(AB) \le \min(\operatorname{rank} A, \operatorname{rank} B) \le 3\) and the given matrix \(AB\) has rank \(3\), both \(A\) and \(B\) have rank \(3\), so the \(4\times 3\) matrix \(A\) is injective and the \(3\times 4\) matrix \(B\) is surjective. We can then cancel the left \(A\) and the right \(B\) in the above equation to obtain \(BA = 3I_{3\times 3}\).
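(Again, a minimal numerical sketch of this rank check with NumPy, not part of the original argument.)

```python
import numpy as np

AB = np.array([[ 1, -2, -1, -1],
               [ 0,  3,  0,  0],
               [ 0,  0,  3,  0],
               [-2, -2, -1,  2]])

# rank(AB) = 3 forces rank(A) = rank(B) = 3, so the 4x3 matrix A is
# injective and the 3x4 matrix B is surjective.
print(np.linalg.matrix_rank(AB))  # prints 3
```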

An alternative proof

I found an alternative proof, which is the main subject of this post.

First we prove a lemma.

Lemma

Let \(F\) be a field, \(V_1, V_2\) vector spaces over \(F\). Let \(T_1: V_1\rightarrow V_2\) and \(T_2: V_2\rightarrow V_1\) be linear transformations.

For any \(\lambda\in F\), and for \(i=1,2\), define \(E_{i,\lambda}\) as the subspace of \(V_i\) consisting of elements \(x\in V_i\) such that \(T_j \circ T_i (x)=\lambda x\), where \(j = \begin{cases} 1 & i=2\\ 2 & i = 1\end{cases}\).

If \(\lambda \neq 0\) in \(F\), then \(E_{1,\lambda}\cong E_{2,\lambda}\).

Let \(\lambda\in F\) be non-zero in the following.

Define a linear transformation \[f:E_{1,\lambda} \rightarrow E_{2,\lambda}\] by sending \(v\in E_{1,\lambda}\) to \(T_1(v)\). We first verify that \(T_1(v)\in E_{2,\lambda}\).

Since \(v\in E_{1,\lambda}\), we know \(T_2 \circ T_1 (v) = \lambda v\), and hence \(T_1 \circ T_2 (T_1 (v)) = \lambda T_1 (v)\). This shows that \(T_1(v)\in E_{2,\lambda}\).

Since \(T_1\) is a linear transformation, so is \(f\).

Moreover, if \(f(v) = 0\), then \(T_1(v)=0\). But \(v\in E_{1,\lambda}\) implies that \[v = \frac1\lambda \cdot \lambda v = \frac1\lambda T_2 (T_1 (v)) = \frac1\lambda \cdot 0 = 0.\] Therefore \(f\) is an injective linear transformation.

Similarly, we have an injective linear transformation \(g: E_{2,\lambda} \rightarrow E_{1,\lambda}\) sending \(v\in E_{2,\lambda}\) to \(T_2 (v)\).

Then we see that \[f\circ g(v) = T_1 \circ T_2 (v) = \lambda v,\,\forall v\in E_{2,\lambda}.\] Since \(\lambda\neq 0\), every \(v\in E_{2,\lambda}\) equals \(f\left(\frac1\lambda g(v)\right)\), and hence \(f\) is also surjective. Therefore \(f\) is an isomorphism from \(E_{1,\lambda}\) to \(E_{2,\lambda}\).
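(One consequence of the lemma, easy to test numerically, is that \(T_1\circ T_2\) and \(T_2\circ T_1\) have the same non-zero eigenvalues. Below is a minimal sketch with NumPy, using random matrices in place of \(A\) and \(B\); the names are my own, not part of the original post.)

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))   # plays the role of T_1 : V_1 -> V_2
B = rng.standard_normal((3, 4))   # plays the role of T_2 : V_2 -> V_1

ev_AB = np.linalg.eigvals(A @ B)  # four eigenvalues, one of them ~ 0
ev_BA = np.linalg.eigvals(B @ A)  # three eigenvalues

# The non-zero eigenvalues of AB and BA coincide.
nonzero_AB = ev_AB[np.abs(ev_AB) > 1e-9]
print(np.allclose(np.sort_complex(nonzero_AB), np.sort_complex(ev_BA)))  # True
```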

Proof

We can use the above lemma to prove our statement easily.

Apply the lemma with \(V_1 = \mathbb{R}^3\), \(V_2 = \mathbb{R}^4\), \(T_1 = A\), \(T_2 = B\), and \(\lambda = 3\): it tells us that the kernel of \(AB-3I_{4\times 4}\) is isomorphic to the kernel of \(BA-3I_{3\times 3}\).

By a simple calculation, we see that the kernel of \(AB-3I_{4\times 4}\) has dimension \(3\). Hence the kernel of \(BA-3I_{3\times 3}\) also has dimension \(3\). Since \(BA-3I_{3\times 3}\) is a \(3\times 3\) matrix, if its kernel has dimension \(3\), it is the zero matrix. Therefore \(BA = 3I_{3\times 3}\) indeed.
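(The dimension count can also be checked by machine; a minimal NumPy sketch using the rank–nullity theorem, not part of the original argument.)

```python
import numpy as np

AB = np.array([[ 1, -2, -1, -1],
               [ 0,  3,  0,  0],
               [ 0,  0,  3,  0],
               [-2, -2, -1,  2]])

# dim ker(AB - 3I) = 4 - rank(AB - 3I) by rank-nullity.
print(4 - np.linalg.matrix_rank(AB - 3 * np.eye(4)))  # prints 3
```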

Remarks

The second proof, albeit longer, is more general and, in my eyes, more beautiful. For example, it does not assume that the vector spaces are finite-dimensional.

All original content is licensed under the free copyleft license CC BY-SA.

Author: JSDurand

Email: durand@jsdurand.xyz

Date: 2022-12-30 Ven 13:00:00 CST

