Proof by Descent:

Suppose that a prime number does have a rational square root: that is, p being a prime, we can write sqrt(p) = a / b for natural numbers a and b. Squaring both sides yields p = a^2 / b^2, so that a^2 = p*b^2. We now see that p divides a^2.

But when a prime number divides the product of two numbers, it must divide at least one of them. Fortunately, a^2 = a*a, so we can simply submit that p divides a. Which means, by definition, that we can write a = p*a1. So we have a^2 = (p*a1)^2 = p*b^2. Dividing through by p, p*a1^2 = b^2.

Clearly, we have the same situation we just dealt with: p divides b^2, so p divides b, so b = p*b1, and we have p*a1^2 = (p*b1)^2. Dividing through by p, a1^2 = p*b1^2.

But this is a solution of the same form we had earlier, except that a1 < a and b1 < b. The solution's existence implies an infinitude of smaller and smaller solutions, and since the natural numbers are bounded below by 1, we eventually reach a nonsense solution with some a_k < 1, a contradiction. So the premise that there are natural numbers a and b such that sqrt(p) = a / b is false, and the square root of a prime is irrational.
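The descent argument above can be sanity-checked by machine. Here is a brute-force sketch (an illustration, not a proof — the function name and search bound are made up for this example) confirming that a^2 = p*b^2 has no small solutions when p is prime, but does when p is a perfect square:

```python
from math import isqrt

def has_rational_sqrt(p, bound=10_000):
    """Search for natural numbers a, b <= bound with a^2 == p * b^2."""
    for b in range(1, bound + 1):
        a = isqrt(p * b * b)          # floor(b * sqrt(p)), the only candidate a
        if a * a == p * b * b:
            return True
    return False
```

For p = 2 or p = 7 the search comes up empty, as the proof demands; for p = 9 it succeeds immediately with a = 3, b = 1.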

Ok, proof by contradiction: here we go.

Let p be any prime number.

Assume √p is a rational number. √p can therefore be written as a fraction a/b, where a and b are coprime integers. (That it can be written as a fraction follows from the definition of rational, but to choose a and b to be coprime we require the fundamental theorem of arithmetic. If you don't know a proof of this, I suggest you read the node for the sake of thoroughness.)

√p = a/b


Ok, next stage (this isn't related to the first bit). Take the largest integer less than √p and call this number c. (E.g. √5 is approximately 2.236, so c would be 2.)


Now b×√p = a is an integer and b×c is an integer, so the difference, let's call it d = b×√p − b×c, must also be an integer. The next step is to multiply this result by √p.


d×√p = b×p − b×c×√p

Now b×p is an integer and c×(b×√p) is also an integer. Therefore d×√p is an integer.

d is positive and less than b, since d = b×(√p − c) and 0 < √p − c < 1. But in choosing a and b to be coprime we ensured that b was the smallest integer which, when multiplied by √p, gives an integer.

Voilà, the contradiction!

This proof not only covers primes but extends to all integers with non-integral square roots. (If p is a perfect square then (√p − c) is zero and the rest of the proof goes down the tube, unsurprisingly.)
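The engine of this descent is numerical as well as arithmetical: for non-square p the factor (√p − c) lies strictly between 0 and 1, so d = b×(√p − c) really is smaller than b, while for a perfect square the factor is exactly 0. A quick sketch (the helper name is made up for this illustration):

```python
from math import isqrt, sqrt

def shrink_factor(p):
    """sqrt(p) - c, where c is the largest integer not exceeding sqrt(p)."""
    c = isqrt(p)                  # the c of the proof above
    return sqrt(p) - c            # strictly in (0, 1) unless p is a square
```

For p = 5 this gives about 0.236; for p = 9 it gives exactly 0, which is where the proof "goes down the tube" for perfect squares.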

The proof that the square root of any prime number is irrational is easy using prime decomposition. We use proof by contradiction: suppose that sqrt(n) = a/b is rational. Then

n = a^2/b^2

for some a and b. Now write a and b as products of prime factors, and cancel any common factors. Then we have

n = (p1^2×p2^2×···)/(q1^2×q2^2×···)

where p1, p2, …, q1, q2, … are primes and pj≠qk for all j, k. (This is just a fancy way of saying there are no common factors.)

But n is an integer. Therefore the denominator q1^2×q2^2×··· is equal to unity. If it were not, then the numerator would have a common factor with the denominator, since it would be an integer multiple of the denominator; but we already cancelled all common factors.

Therefore, if sqrt(n) is rational, n is a product of squares of integers, and not a prime. In fact we have proved a stronger statement: the square root of any non-square number is irrational. Thus, the square root of any integer is either an integer, or irrational.
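The stronger statement — sqrt(n) is rational iff every prime in n's decomposition appears to an even power, i.e. iff n is a perfect square — is easy to check by machine. A self-contained sketch using trial division (the function names are invented for this example):

```python
from math import isqrt

def factorize(n):
    """Prime factorization of n >= 1 as a dict {prime: exponent}."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def sqrt_is_rational(n):
    """By the argument above: rational iff every exponent is even."""
    return all(e % 2 == 0 for e in factorize(n).values())
```

As a cross-check, sqrt_is_rational(n) should agree with the direct test isqrt(n)**2 == n for every n.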

The original statement has a slightly more elegant proof using the fundamental theorem of arithmetic, to be precise, the fact that the prime decomposition is unique. If sqrt(n) is rational, then

n×b^2 = a^2.

The prime decomposition is unique, therefore the number of prime factors (counted with multiplicity) must be the same on both sides. Both a^2 and b^2, being squares, have an even number of prime factors. But n is prime. Therefore the LHS has an odd number of prime factors, and we have a contradiction. This also proves that the square root of any integer with an odd number of prime factors is irrational.
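The parity argument becomes concrete if you count prime factors with multiplicity (the number theorists' Omega function). A small sketch — omega() is a made-up helper for this illustration:

```python
def omega(n):
    """Number of prime factors of n, counted with multiplicity."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:         # strip out each factor of d
            count += 1
            n //= d
        d += 1
    if n > 1:                     # whatever remains is a prime
        count += 1
    return count
```

Every square a^2 has even omega, so n×b^2 has the same parity as omega(n); when n is prime that parity is odd, and a^2's is even, which is exactly the contradiction.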
According to Excursions in Mathematics by Ogilvie (Dover), Joseph Louis Lagrange (1736-1813) expressed the proof of the irrationality of the square root of 2 in a handy sentence:

It (square root of 2) cannot be found in fractions, for if you take a fraction reduced to its lowest terms, the square of that fraction will again be a fraction reduced to its lowest terms and consequently cannot be equal to the whole number 2.
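Lagrange's sentence translates directly into code: Python's Fraction type keeps fractions in lowest terms, and squaring a reduced fraction leaves it reduced, since gcd(a, b) = 1 implies gcd(a^2, b^2) = 1. A tiny sketch (the choice of 3/2 is just an arbitrary example):

```python
from fractions import Fraction
from math import gcd

f = Fraction(3, 2)                # a fraction reduced to its lowest terms
sq = f * f                        # its square, Fraction(9, 4), still reduced
```

Since sq keeps a denominator greater than 1, it "cannot be equal to the whole number 2", just as the quote says.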

I think that is interesting and related to the other proofs presented.
