When I was an undergraduate, I used to think that Gaussian elimination was a tiresome statement of the obvious which necessitated a tedious proof by cases, and that a way to solve linear equations by rote was of interest only to schoolchildren and computer scientists. And yet **how wrong I was**. It turns out that, secretly, Gaussian elimination is a method for constructing a cellular decomposition of the Grassmannian ...

Thrilled? Intrigued? *No?!* Would it help if I mentioned that the Grassmannian is a generalization of projective space?^{1}

Oookay, maybe there's a little more explaining to do than I anticipated. So, choose two integers *n* and *k* with *0 ≤ k ≤ n*. Let *V* be an *n*-dimensional (real) vector space, and consider **the set of all** *k*-dimensional subspaces of *V*. There's an intuitively obvious way to make this set into a topological space—basically, you can define a metric where two subspaces are within ε of each other if you can convert some orthonormal basis of one into a basis of the other by moving each basis vector by at most ε—but I'll assume you either know enough to work out the details yourself or, more likely, don't care. In either case, the resulting space is called the **(real)** *Grassmannian*, after the mathematician-physicist Hermann Günther Grassmann, and denoted *G(n,k)*.

At this point, I hear my audience asking sceptically: how the hell am I meant to imagine that? And there, dear reader, you walked right into my next point: mathematicians can't imagine it either, which is *exactly* why we need a cellular decomposition.

A *cellular decomposition* is simply a way to understand a topological space by dividing it up into disks of various dimensions. (If you can't imagine *n*-dimensional disks, apparently, you're on your own.) A rule for decomposing any Grassmannian in this way was discovered by Hermann Schubert: that is, he found a way to write down a list of how many disks of each dimension to use, together with rules for how you should glue them together to reconstruct the Grassmannian after you had cut them out of *n*-dimensional paper with your *(n+1)*-dimensional scissors. His method was based on Gaussian elimination!

His idea is simply this: choose, once and for all, a canonical basis for *V*. Then given a *k*-dimensional subspace of *V*, choose any basis for the subspace and write it as a *k × n* matrix of row vectors. Apply Gaussian elimination to write this matrix in row echelon form and take note of just one thing: **which columns the pivots** *(i.e., the leading ones; see Aresds' excellent writeup)* **are in**. (Different bases of the same subspace always yield the same pivot columns, so this is well defined.)
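If you'd like to experiment, here's a minimal sketch in Python (the function name is my own invention): it runs exact Gaussian elimination using `fractions.Fraction` from the standard library and reports the 1-based pivot columns of a list of row vectors.

```python
from fractions import Fraction

def pivot_columns(rows):
    """Row-reduce a list of row vectors (exact arithmetic) and return
    the 1-based pivot columns, which depend only on the row space."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivots, r = [], 0                       # r = next pivot row to fill
    for c in range(len(m[0])):              # scan columns left to right
        # find a row at or below r with a nonzero entry in column c
        hit = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if hit is None:
            continue                        # no pivot in this column
        m[r], m[hit] = m[hit], m[r]         # swap it into position
        p = m[r][c]
        m[r] = [x / p for x in m[r]]        # scale so the pivot is 1
        for i in range(len(m)):             # clear the rest of column c
            if i != r and m[i][c] != 0:
                f = m[i][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        pivots.append(c + 1)
        r += 1
    return pivots

# A basis for a 3-dimensional subspace of R^8, written as rows:
basis = [[0, 1, 2, 0, 0, 3, 0, 1],
         [0, 0, 0, 1, 0, 4, 1, 0],
         [0, 1, 2, 1, 1, 7, 1, 1]]
print(pivot_columns(basis))   # -> [2, 4, 5]
```

Feeding it a different basis of the same subspace (say, the same rows in another order, or sums of them) produces the same list of pivot columns.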

If we number the columns from 1 to *n*, this means that to each *k*-dimensional subspace we've associated a list of *k* column numbers. Now we simply define a *Schubert cell* to consist of all the subspaces with the same column numbers. Thus the Grassmannian is partitioned into ^{n}C_{k} disjoint cells, where ^{n}C_{k} denotes the binomial coefficient counting the number of ways to choose *k* distinct column numbers from 1, ..., *n*.
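You can check the count in a couple of lines of standard-library Python, enumerating the possible pivot-column sets with `itertools.combinations`:

```python
from itertools import combinations
from math import comb

n, k = 8, 3
# Each Schubert cell is labelled by a set of k pivot columns from 1..n.
cells = list(combinations(range(1, n + 1), k))
print(len(cells), comb(n, k))   # -> 56 56
```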

Of course chopping a big space up into lots of little spaces is all very well, but it wouldn't be much help if the little spaces were just as intractable as the big one was. The beauty of this construction is that each Schubert cell is topologically equivalent to an *r*-dimensional disk (i.e., a filled-in sphere) for some *r*. To determine *r*, recall that each set of pivot column numbers, hence each Schubert cell, corresponds to a matrix shape. Here we want the *reduced* row echelon form, in which each pivot column is cleared out above its leading 1 as well as below: the reduced form is the unique matrix representing a given subspace, so its starred entries are genuine free parameters. For the pivot columns (2, 4, 5) with *n* = 8, the shape is:

[ 0 1 * 0 0 * * * ]
[ 0 0 0 1 0 * * * ]
[ 0 0 0 0 1 * * * ]

Then *r* is the number of degrees of freedom, which is simply the number of stars (arbitrary entries) in the matrix shape. For example, in the matrix above (which corresponds to the Schubert cell (2,4,5) in *G*(8,3)), *r* would be 10.
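If you'd rather let a computer count the stars, here's a short Python sketch (the function is my own): in the reduced row echelon shape, the pivot in column *j* of a given row leaves *n − j* entries to its right, of which the later pivots' columns are forced to be 0, so only the rest are free.

```python
from itertools import combinations

def cell_dimension(n, pivots):
    """Number of free (starred) entries in the reduced row echelon
    shape with the given increasing, 1-based pivot columns."""
    k = len(pivots)
    # i is 0-based, so row i has k - i - 1 later pivots to skip over.
    return sum(n - j - (k - i - 1) for i, j in enumerate(pivots))

print(cell_dimension(8, (2, 4, 5)))   # -> 10

# Sanity check: the biggest cell has dimension k(n-k) = 15,
# which is the dimension of G(8,3) itself.
print(max(cell_dimension(8, p)
          for p in combinations(range(1, 9), 3)))   # -> 15
```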

Suppose *k* is 1, so we're looking at one-dimensional subspaces of a vector space. Then each subspace is spanned by just one vector, so our shape matrix is only one row deep and the ^{n}C_{1} = n possibilities for row echelon form are just:

[ 1 * * ... * * * ],
[ 0 1 * ... * * * ],
[ 0 0 1 ... * * * ],
down to
[ 0 0 0 ... 0 0 1 ].

(Which of these does a given subspace correspond to? It's the one with the same number of leading zeros as any nonzero element of the subspace.) Counting the stars we notice that the cells have respective dimensions *n-1, n-2, ..., 0*.

So we deduce that *G(n,1)*, which is also known as *(n-1)*-dimensional projective space, can be written as the union of *n* disks, one of each dimension. In fact it's even easier: these constructions are compatible for different values of *n*, so to make an *n*-dimensional projective space all you have to do is to take an *(n-1)*-dimensional projective space, which you've constructed by induction, and glue an *n*-dimensional disk onto it.
This construction is precisely equivalent to the one Noether explains under projective space, in which the *n*-dimensional disk is viewed as the “finite part” and the *(n-1)*-dimensional projective space is the “hyperplane at infinity”.
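If you like one-line sanity checks, the *k* = 1 cell dimensions fall straight out of the star count: the shape with its pivot in column *j* has *n − j* stars after the leading 1.

```python
# G(n,1) = P^(n-1): one cell of each dimension n-1, n-2, ..., 0.
n = 5
dims = [n - j for j in range(1, n + 1)]
print(dims)   # -> [4, 3, 2, 1, 0]
```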

^{1}Surely not *infinite-dimensional* projective space, you ask? And indeed, it can be if you want it to be, but for the moment let's stick with plain vanilla *n*-dimensional projective space.

This is a perfect example of how, as you continue to study mathematics, you begin to understand old theorems in new ways, often using concepts you simply didn't have the vocabulary to express when you first knew the theorem. And of course, it doesn't stop there.

Because, you see, deep down, the Grassmannian isn't a mere topological space at all: it's a scheme *G(n,k)* in the sense of Grothendieck, and the Grassmannian over a field (such as the reals) is a special case of the Grassmannian over an arbitrary scheme *S*, which you can construct as the pull-back *G(n,k) × S* ...

*[Next week, same time, same place: “Why Jordan normal form is really just a parametrization of the Hilbert scheme”]*