When working with a linear map, it is often convenient to make use of the isomorphism between such an operator and its matrix representation. However, any given vector space has many bases, and in general a change of basis alters the matrix representing a given operator. Hence, it is useful both to examine invariants of a map (properties, such as the determinant or trace, which do not change regardless of the basis) and to come up with canonical forms for representing a map by a matrix. The best-case (i.e., simplest) scenario is when a linear map is diagonalisable; that is, we can find a basis with respect to which the matrix only has entries on the leading diagonal. Unfortunately, such a decomposition is not always possible: it is a special case of the more general Jordan canonical form (the case with 1×1 Jordan blocks).

In general, the most we can do is decompose a linear operator into a collection of smaller, simpler operators which together describe how it acts. More formally, for α:V→V, where V is a finite-dimensional vector space over any field, the aim is to decompose V as a direct sum of α-invariant subspaces. (A subspace W is α-invariant if α(w)∈W for every w∈W.)

The primary decomposition theorem states that this decomposition is determined by the minimal polynomial of α :

*Let α:V→V be a linear operator whose minimal polynomial factorises into monic, coprime polynomials*

m_{α}(t) = p_{1}(t)p_{2}(t).

Then

V = W_{1}⊕W_{2},

where the W_{i} are α-invariant subspaces such that p_{i} is the minimal polynomial of the restriction α|_{W_{i}}.
Repeated application of this result (i.e., fully factorising m_{α} into pairwise coprime factors) gives a more general version: if m_{α}(t)=p_{1}(t)...p_{k}(t) as described, then

V = W_{1}⊕...⊕W_{k},

with α-invariant subspaces W_{i} and corresponding minimal polynomials p_{i}.
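As a concrete illustration, the sketch below (using the sympy library, which is my choice of tool rather than anything the text prescribes) takes an operator on Q³ whose minimal polynomial is (t−2)²(t−3) and checks that the kernels of the two coprime factors, evaluated at the matrix, decompose the space:

```python
from sympy import Matrix, eye

# An operator on Q^3 whose minimal polynomial is m(t) = (t-2)^2 (t-3),
# with coprime factors p1(t) = (t-2)^2 and p2(t) = t-3:
A = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 3]])
I = eye(3)

W1 = ((A - 2*I)**2).nullspace()   # basis of W1 = ker p1(A)
W2 = (A - 3*I).nullspace()        # basis of W2 = ker p2(A)

# dim W1 + dim W2 = dim V, and the combined basis has full rank,
# so V = W1 (+) W2:
B = Matrix.hstack(*(W1 + W2))
print(len(W1), len(W2), B.rank())   # 2 1 3
```

The two kernel bases together span the whole space, confirming the direct-sum decomposition in this example.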

It may now be apparent that diagonalisation is the special case in which each minimal polynomial consists of a single linear factor (t-λ_{i}), so that each W_{i} is an eigenspace; i.e. if m_{α}(t)=(t-λ_{1})...(t-λ_{k}) for distinct λ_{i}, then α is diagonalisable with the λ_{i} as the diagonal entries.
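A small sketch of this special case (again in sympy, an assumed tool): an operator whose minimal polynomial is a product of two distinct monic linear factors is diagonalisable, with those roots as the diagonal entries.

```python
from sympy import Matrix, eye, zeros

# A 2x2 operator whose minimal polynomial is (t-1)(t-4): two distinct,
# monic, coprime linear factors.
A = Matrix([[2, 1],
            [2, 3]])
I = eye(2)

# (A - 1*I)(A - 4*I) = 0 confirms that (t-1)(t-4) annihilates A, and no
# degree-1 polynomial can (A is not a scalar multiple of I):
assert (A - I) * (A - 4*I) == zeros(2, 2)

# Each W_i = ker(A - lambda_i I) is an eigenspace; a basis of eigenvectors
# diagonalises A with the eigenvalues on the diagonal:
P, D = A.diagonalize()
assert P.inv() * A * P == D
print(sorted([D[0, 0], D[1, 1]]))   # [1, 4]
```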

**Proof of the Primary Decomposition Theorem**

The theorem makes three assertions: that we can construct α-invariant subspaces W_{i} from the p_{i}; that V is the direct sum of these W_{i}; and that p_{i} is the minimal polynomial of the restriction α|_{W_{i}}.

For the first, a result about invariant subspaces is needed:

*Lemma*: if α,β:V→V are linear maps such that αβ = βα, then ker β is α-invariant.

*Proof*: Take w∈ker β; we need to show that α(w) is also in ker β. Now β(α(w)) = α(β(w)) by assumption, and α(β(w)) = α(__0__) since w∈ker β, which is __0__ since α is a linear map. But if β(α(w)) = __0__ then α(w)∈ker β, hence ker β is α-invariant.
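The lemma is easy to check numerically. In the sketch below (sympy, my choice of tool), β is a polynomial in α, so the two automatically commute:

```python
from sympy import Matrix

# alpha is an arbitrary operator; beta = (alpha - I)^2 is a polynomial in
# alpha, so the two commute.
alpha = Matrix([[1, 1, 0],
                [0, 1, 0],
                [0, 0, 2]])
beta = (alpha - Matrix.eye(3))**2
assert alpha * beta == beta * alpha

# ker beta is alpha-invariant: beta kills alpha(w) for every w in ker beta.
for w in beta.nullspace():
    assert beta * (alpha * w) == Matrix.zeros(3, 1)
```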

Given this result, we now take W_{i} = ker p_{i}(α). Since p_{i}(α) is a polynomial in α, it commutes with α: p_{i}(α)α = αp_{i}(α). By the lemma, W_{i} = ker p_{i}(α) is therefore α-invariant.

We now seek to show that (i) V = W_{1} + W_{2}, and (ii) W_{1}∩W_{2} = {__0__} (that is, V decomposes as a direct sum of the W_{i}'s.)

Since the p_{i} are coprime, the extended Euclidean algorithm for polynomials gives polynomials q_{i} such that p_{1}(t)q_{1}(t) + p_{2}(t)q_{2}(t) = 1.

So for any v ∈V, consider w_{1}=p_{2}(α)q_{2}(α)v and w_{2}=p_{1}(α)q_{1}(α)v. Then v= w_{1} + w_{2} by the above identity. We can confirm that w_{1}∈W_{1}: p_{1}(α)w_{1}=m_{α}(α)q_{2}(α)v = 0. Similarly, w_{2}∈W_{2}. So we have (i).

For (ii), let v ∈ W_{1}∩W_{2}. Then

v = id(v) = q_{1}(α)p_{1}(α)v + q_{2}(α)p_{2}(α)v = __0__,

since p_{1}(α)v = __0__ (as v∈W_{1}) and p_{2}(α)v = __0__ (as v∈W_{2}). So W_{1}∩W_{2} = {__0__}.
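The whole construction can be traced through on a small example. The sketch below (sympy again, an assumed tool; its `gcdex` function performs the polynomial extended Euclidean algorithm) builds the coefficients q_{i} and checks that p_{2}(α)q_{2}(α) and p_{1}(α)q_{1}(α) really do send v to its W_{1}- and W_{2}-components:

```python
from sympy import Matrix, eye, symbols, gcdex, Poly

t = symbols('t')
A = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 3]])

# Coprime factors of the minimal polynomial m(t) = (t-2)^2 (t-3):
p1 = (t - 2)**2
p2 = t - 3

# Extended Euclid: q1*p1 + q2*p2 = 1 since the factors are coprime.
q1, q2, h = gcdex(p1, p2, t)
assert h == 1

def at(poly, M):
    """Evaluate a polynomial in t at the matrix M (Horner's scheme)."""
    R = Matrix.zeros(*M.shape)
    for c in Poly(poly, t).all_coeffs():
        R = R * M + c * eye(M.shape[0])
    return R

# The maps from the proof: E1 sends v to w1, E2 sends v to w2.
E1 = at(p2 * q2, A)
E2 = at(p1 * q1, A)

v = Matrix([1, 1, 1])
w1, w2 = E1 * v, E2 * v
assert w1 + w2 == v                          # v = w1 + w2
assert at(p1, A) * w1 == Matrix.zeros(3, 1)  # w1 in W1 = ker p1(A)
assert at(p2, A) * w2 == Matrix.zeros(3, 1)  # w2 in W2 = ker p2(A)
```

Note that E1 + E2 is the identity matrix, mirroring the identity p_{1}q_{1} + p_{2}q_{2} = 1 applied at α.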

Finally, for the claimed minimal polynomials. Let m_{i} be the minimal polynomial of α|_{W_{i}}. Since W_{i} = ker p_{i}(α), we have p_{i}(α|_{W_{i}}) = 0, so m_{i} divides p_{i}; in particular, deg m_{i} ≤ deg p_{i}. This holds for each i. However, because V = W_{1}⊕W_{2}, the minimal polynomial of α is the least common multiple of the minimal polynomials of its restrictions:

p_{1}(t)p_{2}(t) = m_{α}(t) = lcm{m_{1}(t), m_{2}(t)},

so we obtain

deg p_{1} + deg p_{2} = deg m_{α} ≤ deg m_{1} + deg m_{2}.

It follows that deg p_{i} = deg m_{i} for each i, and since m_{i} divides p_{i} and both are monic, m_{i} = p_{i}. The proof is complete.