# Finite unions of subspaces

2021-12-06

This post collects some proofs of the following statement.

## Theorem

Let \(V\) be a vector space over a field \(F\) with infinitely many elements, and let \(V_1, \ldots, V_n\) be finitely many proper subspaces. Then \(\bigcup_{i=1}^n V_i \ne V\).

## Induction

This is the first proof I have ever seen of this statement.

It proceeds by induction:

If \(n=1\), the statement is obvious: a proper subspace is not all of \(V\).

Suppose it holds for \(n-1\).

Now let the notation be as in the statement. If \(V_1 \subseteq \bigcup_{i=2}^{n} V_i \), then we are reduced to the \(n-1\) case with \(V_2, \ldots, V_n\). So we may assume that there exists \(x\in V_1 \setminus \bigcup_{i=2}^n V_i\).

Similarly, we may assume that some \(V_i\) satisfies \(V_i \not\subseteq V_1\) (otherwise the union is just \(V_1\), a proper subspace). By renumbering if necessary, we can assume that \(i=2\). Let \(y\in V_2 \setminus V_1\).

Since \(F\) is infinite and \(y \ne 0\), the set \( \{ x + \lambda y \mid \lambda \in F \} \) is infinite. So if the union of the \(V_i\) were the whole of \(V\), then by the pigeonhole principle there would exist \(\lambda \ne \mu\) in \(F\) such that \( x + \lambda y \in V_j\) and \( x + \mu y \in V_j \) for some \(j \in \{1,\ldots,n\}\).

This means \( (\lambda - \mu ) \cdot y \in V_j\), and hence \(y\in V_j\); it follows that \(x\in V_j\) as well. But this is impossible: \(x\in V_j\) forces \(j=1\) by our choice of \(x\), while \(y\notin V_1\).

This contradiction proves the statement.
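As a sanity check, the moving point \(x + \lambda y\) from the proof can be traced in a toy example over \(\mathbb Q\) (the three lines and the choices of \(x\) and \(y\) below are my own illustrative assumptions, not part of the proof):

```python
from fractions import Fraction

# Toy instance over Q: can three lines cover Q^2?  (The lines and the
# vectors x, y are illustrative choices, not data from the proof.)
lines = [(1, 0), (0, 1), (1, 1)]  # direction vectors spanning V_1, V_2, V_3

def on_line(v, d):
    # v lies on the line spanned by d iff the 2x2 determinant vanishes
    return v[0] * d[1] - v[1] * d[0] == 0

x = (Fraction(1), Fraction(0))  # x in V_1 but in neither V_2 nor V_3
y = (Fraction(0), Fraction(1))  # y in V_2 but not in V_1

# Walk along x + lam*y: the proof shows these infinitely many points
# cannot all be absorbed by the three lines, so one escapes quickly.
for lam in map(Fraction, range(4)):
    p = (x[0] + lam * y[0], x[1] + lam * y[1])
    if not any(on_line(p, d) for d in lines):
        break
print(p, "lies outside the union of the three lines")
```

Here the first two values \(\lambda = 0, 1\) land on \(V_1\) and \(V_3\) respectively, and \(\lambda = 2\) already gives a point outside all three lines.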

## Vandermonde matrix

This is quite a surprising proof: I had never thought that Vandermonde matrices could play a role here!

First, since \(F\) has infinitely many elements, we can choose pairwise distinct elements \(a_1, a_2, a_3, \ldots\) in \(F\); for example, if \(F\) has characteristic \(0\), we can simply take \(a_i := i\) for all \(i\in \mathbb N_{\geq1}\).

For this proof, assume \(V\) is finite-dimensional, and let \(v_1, \ldots, v_n\) be a basis for \(V\) (here \(n\) denotes \(\dim V\); by a slight abuse of notation it need not equal the number of subspaces).

For every \(i = 1, 2, \ldots\), define \[ \alpha_i := v_1 + a_i v_2 + \cdots + {a_i}^{n-1} v_n. \]

Then for any \(n\) indices \(i_1 \lt \cdots \lt i_n\), the matrix

\[ \begin{bmatrix} 1 & a_{i_1} & \cdots & {a_{i_1}}^{n-1} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & a_{i_n} & \cdots & {a_{i_n}}^{n-1} \end{bmatrix} \]

has determinant \(\prod_{1\leq a \lt b \leq n} (a_{i_b} - a_{i_a}) \ne 0\), and hence the vectors \(\alpha_{i_1}, \ldots, \alpha_{i_n}\) form a basis for \(V\).

This shows that every proper subspace of \(V\) contains at most \(n-1\) of these \(\alpha_i\): a subspace containing \(n\) of them would contain a basis of \(V\). Therefore the union of finitely many proper subspaces contains only finitely many of the \(\alpha_i\). Consequently there are infinitely many \(\alpha_i\) not contained in the union.
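A quick computational check of the Vandermonde step, in a hypothetical \(3\)-dimensional \(V\) over \(\mathbb Q\) with \(a_i = i\) (the determinant routine is a generic exact-arithmetic sketch, not something from the post):

```python
from fractions import Fraction
from itertools import combinations

d = 3  # dimension of the toy V over Q, with basis v_1, v_2, v_3
a = [Fraction(i) for i in range(1, 8)]  # pairwise distinct scalars a_i

# Coordinate row of alpha_i in the basis: (1, a_i, ..., a_i^(d-1))
rows = [[ai ** k for k in range(d)] for ai in a]

def det(m):
    # Exact determinant over Q by Gaussian elimination with row swaps.
    m = [row[:] for row in m]
    sign, n = 1, len(m)
    for c in range(n):
        piv = next((r for r in range(c, n) if m[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            sign = -sign
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n):
                m[r][k] -= f * m[c][k]
    p = Fraction(sign)
    for i in range(n):
        p *= m[i][i]
    return p

# Any d of the alpha_i have a nonzero Vandermonde determinant, hence
# form a basis; so a proper subspace holds at most d-1 of them.
assert all(det(list(rs)) != 0 for rs in combinations(rows, d))
print("every", d, "of the", len(a), "vectors alpha_i are linearly independent")
```

For instance, the first three rows give the determinant \((2-1)(3-1)(3-2) = 2\), matching the product formula.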

## Polynomials

I had the idea for this proof earlier, but I was thinking of using schemes, which did not turn out to be practical. That idea can, however, be modified to give the following proof.

Suppose \(V\) has finite dimension \(n\), so it has a basis, say \(\beta_1, \ldots, \beta_n\). A proper subspace then has a basis as well, say \(\gamma_1, \ldots, \gamma_m\), where \(m\lt n\) is the dimension of the subspace, and we can find linearly independent vectors \(\gamma_{m+1}, \ldots, \gamma_n\) such that \(\gamma_1, \ldots, \gamma_n\) is a basis for \(V\).

We can express \(\beta_i\) as a linear combination \(\sum_{j=1}^n A_{ij} \gamma_j\) of the \(\gamma_j\). The subspace consists exactly of the linear combinations of \(\gamma_1, \ldots, \gamma_m\), which are exactly the vectors \(\sum_{i=1}^n a_i \beta_i\) such that \[\sum_{i=1}^n a_i A_{ij} = 0,\quad \forall j = m+1,\ldots,n.\] So a subspace consists of the points whose coordinates satisfy a finite set of linear equations.
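This coordinate description can be verified on a small example. Below, \(W\) is a hypothetical plane in \(\mathbb Q^3\) (the \(\gamma_j\) are made-up data), the matrix \(A\) expressing the standard basis \(\beta_i\) in the \(\gamma_j\) is computed by exact elimination, and membership in \(W\) reduces to a single linear equation on coordinates, as claimed:

```python
from fractions import Fraction as Fr

# Toy setup in Q^3: W is spanned by gamma_1, gamma_2, completed to a
# basis of V by gamma_3 (all three vectors are illustrative choices).
g = [[Fr(1), Fr(1), Fr(0)],   # gamma_1
     [Fr(0), Fr(1), Fr(1)],   # gamma_2
     [Fr(0), Fr(0), Fr(1)]]   # gamma_3

def solve3(cols, b):
    # Solve M x = b, where M has the vectors in `cols` as columns,
    # by exact Gauss-Jordan elimination over Q.
    m = [[cols[j][i] for j in range(3)] + [b[i]] for i in range(3)]
    for c in range(3):
        piv = next(r for r in range(c, 3) if m[r][c] != 0)
        m[c], m[piv] = m[piv], m[c]
        m[c] = [x / m[c][c] for x in m[c]]
        for r in range(3):
            if r != c and m[r][c] != 0:
                m[r] = [x - m[r][c] * y for x, y in zip(m[r], m[c])]
    return [m[i][3] for i in range(3)]

# Row i of A expresses the standard basis vector beta_i in the gammas.
e = [[Fr(int(i == j)) for j in range(3)] for i in range(3)]
A = [solve3(g, e[i]) for i in range(3)]

def in_W(v):
    # v = sum_i a_i beta_i lies in W iff sum_i a_i A[i][2] = 0,
    # i.e. iff its gamma_3 coordinate vanishes.
    return sum(v[i] * A[i][2] for i in range(3)) == 0

assert in_W([Fr(1), Fr(2), Fr(1)])      # gamma_1 + gamma_2, so in W
assert not in_W([Fr(0), Fr(0), Fr(1)])  # gamma_3 itself, outside W
print("membership in W is one linear equation on coordinates")
```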

Moreover, if \(m\lt n-1\), then the given subspace is contained in the (still proper) subspace generated by \(\gamma_1, \ldots, \gamma_{n-1}\), so we may assume that \(m=n-1\), i.e. that each subspace under consideration is given by a single linear equation.

Then the union of finitely many such subspaces (each of codimension \(1\)) consists of the points whose coordinates satisfy the product of the corresponding linear equations. Thus if a union of finitely many subspaces were the whole vector space \(V\), then every point of \(V\) would satisfy the product of the linear equations. But this is impossible: the product of linear forms is a nonzero polynomial, and a nonzero polynomial over an infinite field cannot vanish identically; this follows by induction on the number of variables from the fact that a nonzero one-variable polynomial of degree \(d\) has at most \(d\) roots, while the field \(F\) has infinitely many elements.
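One concrete way to exhibit a point outside such a union (a variant of the argument above, not verbatim from it): restrict the product of linear forms to the curve \(t \mapsto (1, t, t^2)\). Each nonzero form restricts to a nonzero polynomial in \(t\), because \(1, t, t^2\) are linearly independent, so the product has only finitely many roots along the curve. The hyperplane normals below are made-up illustrative data:

```python
from fractions import Fraction

# Hyperplanes in Q^3, each the kernel of a nonzero linear form
# recorded by its normal vector (illustrative, made-up data).
normals = [(1, 0, 0), (0, 1, -1), (1, 1, 1), (2, -3, 5)]

def product_of_forms(v):
    # The union of the hyperplanes is exactly the zero set of this product.
    p = Fraction(1)
    for n in normals:
        p *= sum(Fraction(a) * x for a, x in zip(n, v))
    return p

# Along the curve t -> (1, t, t^2) each form becomes a nonzero
# polynomial of degree <= 2, so the product has at most
# 2 * len(normals) roots there; scanning one more value must succeed.
for t in map(Fraction, range(2 * len(normals) + 1)):
    v = (Fraction(1), t, t * t)
    if product_of_forms(v) != 0:
        break
print(v, "lies outside all", len(normals), "hyperplanes")
```

With these normals, \(t = 0\) and \(t = 1\) land on the second hyperplane, and \(t = 2\) already gives a point off the whole union.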

## Others

If I find other proofs, I will add them here.