A hidden variables theory of quantum mechanics is one which does not take the statistical predictions of the quantum formalism as the ultimate truth about reality. That is, it is an interpretation of the mathematics of quantum physics which does not necessarily go along with the traditional Copenhagen interpretation in viewing nature as inherently random, and the process of wave-function collapse as indescribable beyond what our existing equations tell us.

You see, quantum mechanics - one of the cornerstones of modern physics - makes only statistical predictions about experiments. This is in contrast to Newtonian physics, which in principle makes definite predictions about where an object will be at any point in the future if it is fed sufficiently precise specifications of the object's initial position and velocity, and all forces acting on it. Quantum physics generally only assigns probabilities to each of a range of possible outcomes - Heisenberg uncertainty apparently makes it impossible to make any more confident prediction, even if we were to know everything it is possible to know about a given experimental setup. The Copenhagen interpretation, as set out by Niels Bohr and his contemporaries, holds that this will always be true; that reality is fundamentally random, and the uncertainty in the position and momentum of a particle is as real as the position and momentum themselves. We can only say where a particle is at the cost of being able to say how fast it is moving, if Bohr was correct, because at a fundamental level a particle does not possess a well-defined position and momentum simultaneously.
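To put a number on that limit: in its standard textbook form, the Heisenberg uncertainty principle bounds the product of the spreads in position and momentum from below,

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
```

where Δx and Δp are the standard deviations of position and momentum and ℏ is the reduced Planck constant - so the more tightly one is pinned down, the more the other must spread.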

It is not hard to see why this idea of reality was met with some resistance; the clockwork universe of Newton and Laplace was deeply ingrained in public consciousness by this point, and the idea that the universe was fundamentally random was one that many found difficult to accept - hence Albert Einstein's famous hope that God does not play dice. Cause and effect were such useful ideas that people tended to assume - not unreasonably - that they applied to everything at all times. The idea that a particle doesn't possess an exact position or momentum also came as a surprise to many, although of course we are used to objects taking up a certain amount of space (making it impossible to assign an exact position to the whole thing) and to different parts of an object travelling at different speeds - so this is not as counter-intuitive as is sometimes suggested. It does, however, make it clear that it is misleading to think of electrons and the like as point particles, unless they are constantly being buffeted every which way by unseen, interacting waves. It seems sensible to think of a particle as being spread out in space in various shapes, although precisely how remains somewhat obscure.

In 1932 John von Neumann produced a proof which was supposed to show that a hidden variables theory of quantum mechanics was impossible; that there couldn't possibly be a deeper layer of reality hiding behind the statistical predictions of quantum physics, that no further explanation would be possible. This more or less put an end to the debate over hidden variables theories for many years; the only serious proposition for a hidden variables theory, formulated by Louis de Broglie, had in any case been trashed already by de Broglie's fellow physicists (perhaps unfairly) when he presented it at the 1927 Solvay Conference.

Von Neumann's 'impossibility proof', it turned out, was deeply flawed. The details are rather technical - presumably, if they were easy to understand then more people would have spotted them sooner - but briefly, the flaw has to do with commutativity: the property of a mathematical operation which means that the order of its operands makes no difference. For example, addition and multiplication are commutative operations, because a+b=b+a and a×b=b×a; subtraction and division are not commutative, because in general a−b≠b−a and a÷b≠b÷a. In quantum mechanics the average (expectation) value of a sum of observables is always the sum of their average values, even when the observables do not commute; von Neumann assumed that any hidden variables theory would have to preserve this additivity not just on average but in each individual case. Had he stopped to check whether this condition applied to de Broglie's theory, he would have quickly found that it did not. In fact, the assumption is quite wrong-headed, and as a consequence the 'proof' as a whole shouldn't be seen as any evidence against hidden variables theories at all; however, mud sticks, and even now it is sometimes presented as such.
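To see why the individual-case assumption is hopeless, consider the standard counterexample (essentially the one Bell later made famous), using the spin components of a spin-½ particle:

```latex
% Quantum averages are additive even for non-commuting observables:
\langle \sigma_x + \sigma_y \rangle \;=\; \langle \sigma_x \rangle + \langle \sigma_y \rangle .
% But individual outcomes cannot be: a measurement of \sigma_x or of \sigma_y
% yields \pm 1, while a measurement of \sigma_x + \sigma_y yields one of its
% eigenvalues \pm\sqrt{2} \; - \; and no combination \pm 1 \pm 1 equals \pm\sqrt{2}.
```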

In 1935, two things happened which are significant in the history of hidden variables theories: Firstly, Grete Hermann, a German mathematician, produced a paper pointing out exactly what was wrong with von Neumann's alleged proof; unfortunately, nobody seems to have noticed. Secondly, Albert Einstein, together with two younger colleagues, Boris Podolsky and Nathan Rosen, presented what is often called the Einstein-Podolsky-Rosen Paradox. They intended this as a proof of the existence of hidden variables, although thirty years later John Stewart Bell turned it around and used it as an argument for non-locality, which is the way it is most often thought of today.

The essence of EPR's argument is relatively straightforward: They pointed out that the mathematics of quantum theory predicts that under certain conditions, when two particles leave a common starting point together, neither particle's individual momentum is known, but the sum of their momenta is. Since you know the sum of their momenta, measuring one gives you a precise figure for the other. Now, recall that according to orthodox interpretations of quantum mechanics a particle like the second particle here does not possess a well-defined momentum until it is observed. Since we are not directly observing the particle, but rather a different particle with which it once interacted, one of two things must be happening here: Either the measurement we make on the first particle somehow instantaneously imparts a well-defined momentum to the other, however far away it is; or else the particle possesses a well-defined momentum after all, in which case the Copenhagen interpretation is wrong.
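In quantum-mechanical language, the reason both facts can be sharp at once is that the total momentum of the pair commutes with their relative position, so a state can be a simultaneous eigenstate of both; knowing the total makes the second momentum certain the moment the first is measured:

```latex
[\,\hat{x}_1 - \hat{x}_2,\; \hat{p}_1 + \hat{p}_2\,] = 0,
\qquad
\hat{p}_1 + \hat{p}_2 = P
\;\Longrightarrow\;
p_2 = P - p_1 \ \text{with certainty, once } p_1 \text{ is measured.}
```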

The first possibility implies what Einstein called spooky action at a distance; faster-than-light interactions of any sort are extremely hard to reconcile with the theory of relativity, and the idea of an 'instantaneous' interaction between two widely separated particles is not well-defined given the relativity of simultaneity - and besides all this, nobody could explain what could possibly be mediating this interaction. Unsurprisingly then, Einstein and his colleagues opted for the alternative conclusion: that quantum theory as it stood was not complete, and hidden variables were needed to flesh it out.

By this point Einstein was no longer held in quite the reverent awe he had once enjoyed; his resistance to various aspects of quantum physics - a field he had helped to found with his 1905 paper on the photo-electric effect - had led many of his younger colleagues to think of him as a little old-fashioned. So when the mighty Niels Bohr launched his blistering attack on the EPR paper, most people were willing to accept that he had defeated the older man's arguments, even if they had no idea what Bohr himself was talking about. Bohr's argument followed from his idealistic view of quantum physics: he had long placed the observer at the centre, essentially building his own reality by taking measurements, and from that point of view perhaps it isn't a problem for the state of a particle to change, or come into existence, as a result of measurements taking place elsewhere, since the thing being measured isn't really real up until that point anyway.

His exact argument was highly abstract, and I am certainly not the only one who has looked at it and found themselves with very little idea of what he was trying to say or how he thought he had defeated the arguments of Einstein and co; but it hardly mattered. Bohr was Bohr, by far the most important quantum physicist in the world, and he must have known what he was talking about; and besides, hadn't von Neumann already proven three years previously that a hidden variables theory of quantum physics could never work?

With the EPR paper swept from view, and the Hermann refutation of the von Neumann 'proof' universally ignored, nothing much was heard about hidden variables in quantum mechanics until 1952, when David Bohm produced an interpretation based on 'pilot waves', building on de Broglie's earlier work. In the Bohm picture, electrons, photons and the rest really are point particles with well-defined position, momentum and so on; however, their movement is controlled by a quantum potential pervading the whole of space, with each particle having its own 'pilot wave' capable of producing interference, diffraction and so on in the way we expect from waves.
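For a single particle the guidance rule is simple enough to simulate directly. The following Python sketch is my own toy illustration (not anybody's production code, and the packet widths, slit positions and step sizes are arbitrary choices): it takes a superposition of two free Gaussian wavepackets as a crude stand-in for a double slit, moves each particle deterministically according to the guidance equation v = (ℏ/m) Im(ψ′/ψ), and recovers interference fringes from an ensemble of perfectly definite trajectories:

```python
import numpy as np

HBAR, M = 1.0, 1.0  # natural units for this toy model

def psi(x, t):
    """Superposition of two spreading free Gaussian wavepackets (a toy 'double slit')."""
    def packet(x0):
        s = 1.0 + 1j * HBAR * t / (2.0 * M)   # complex width of a free Gaussian
        return np.exp(-(x - x0) ** 2 / (4.0 * s)) / np.sqrt(s)
    return packet(-5.0) + packet(5.0)

def velocity(x, t, dx=1e-5):
    """de Broglie-Bohm guidance equation: v = (hbar/m) * Im(psi'/psi)."""
    dpsi = (psi(x + dx, t) - psi(x - dx, t)) / (2.0 * dx)
    v = (HBAR / M) * np.imag(dpsi / psi(x, t))
    return np.clip(v, -50.0, 50.0)  # crude guard against blow-ups near nodes

# Start the ensemble distributed as |psi|^2 at t=0 (two Gaussian humps),
# then integrate each deterministic trajectory with a simple Euler step.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-5.0, 1.0, 500), rng.normal(5.0, 1.0, 500)])
dt = 0.01
for step in range(2000):
    x += velocity(x, step * dt) * dt

# The final positions bunch into interference fringes, even though no particle
# ever did anything but follow its own well-defined path.
counts, edges = np.histogram(x, bins=40, range=(-40, 40))
for count, left in zip(counts, edges):
    print(f"{left:7.1f} {'#' * (count // 4)}")
```

Note that the fringes only come out right because the starting positions are distributed according to |ψ|² - an assumption that turns out to matter a great deal in Valentini's work, discussed below.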

Bohm's theory is explicitly non-local: something which affects a particle will influence its pilot wave, and whatever it interacts with, throughout the whole of space. As has already been noted, it is difficult to reconcile any faster-than-light (non-local) interaction with Relativity; this was the principal objection made against Bohm's theory, besides vague suggestions that it was 'metaphysical'. The non-locality of the theory was seen as a real problem, and the theory did not receive widespread acceptance; however, nobody could deny that it seemed to work perfectly well (indeed, it precisely reproduces the predictions of standard quantum mechanics), and non-local or not, it provided a clear counter-example to von Neumann's misguided impossibility proof.

The Bohm interpretation might not have been the most elegant of theories, but it got the young physicist John Stewart Bell thinking; he saw that it stood in stark defiance of von Neumann's 'proof'. What, Bell wondered, had von Neumann got wrong that everybody had missed? Before long Bell found where the problem lay and produced the definitive refutation of von Neumann; then he got to work on an impossibility proof of his own - one which, unlike von Neumann's, did not purport to rule out theories that demonstrably existed, only a restricted class of them. Bell's Theorem - the details of which need not concern us much here - showed (if correct) that no theory of quantum mechanics could be both realistic - in the sense that particles possess precise properties at all times, as is the general aim of hidden variables theories - and local, in the sense that faster-than-light interactions are forbidden. Bell published his famous inequality in 1964; his demolition of von Neumann's arguments (completed first) followed in 1966.
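For the curious, the form of Bell's result most often tested in the laboratory is the CHSH inequality (a 1969 refinement by Clauser, Horne, Shimony and Holt): writing E(a,b) for the correlation between outcomes under detector settings a and b, any local realistic theory obeys the first bound, while quantum mechanics can reach the second:

```latex
|S| \;=\; \bigl|E(a,b) - E(a,b') + E(a',b) + E(a',b')\bigr| \;\le\; 2
\quad\text{(local realism)},
\qquad
|S|_{\mathrm{QM}} \;\le\; 2\sqrt{2}.
```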

Bell's results opened the way for hidden-variables theories as never before; people started taking David Bohm's theory rather more seriously than they had, now that they knew its existence wasn't the logical impossibility they had once supposed. Bohm's theory remains the most successful hidden variables theory of quantum physics; indeed, it seems to be the only hidden variables theory which has achieved more than fringe recognition. Bell's results achieved widespread but not universal acceptance; a number of people have argued that they fail to rule out local hidden variables theories, while others have argued that any interpretation of quantum mechanics must be non-local whether it invokes hidden variables or not. I have written a bit about this controversy in the quantum non-locality node (as well as covering some of the same ground as this writeup).

Although it remains a rather small minority of physicists who consider this area worthy of much thought, a number of researchers have been working on taking Bohm's ideas further. Gerard 't Hooft, co-winner of the 1999 Nobel Prize in Physics for his work on the electroweak force, has argued it is perfectly possible that a deterministic framework lies behind seemingly stochastic quantum interactions, causing apparent randomness in the same way we are used to seeing apparent randomness produced in the macroscopic world, by large numbers of more-or-less unknowable factors working together in concert. 't Hooft suspects that we will never have any direct experimental evidence for or against determinism underlying the statistical predictions of quantum mechanics.
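As a loose analogy - mine, not a piece of 't Hooft's actual model, and with an arbitrary choice of map, threshold and seed - here is how easily deterministic dynamics can manufacture statistics that look random, using the chaotic logistic map in Python:

```python
# A deterministic process producing statistics that look random: the logistic
# map at r = 4 is fully chaotic, and coarse-graining its orbit into 0s and 1s
# yields a bit stream that passes simple randomness checks, even though every
# bit is fixed in advance by the initial condition.
def logistic_bits(x0: float, n: int) -> list[int]:
    x, bits = x0, []
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)           # deterministic update, no randomness anywhere
        bits.append(1 if x > 0.5 else 0)  # coarse-grained 'observation'
    return bits

bits = logistic_bits(0.123456789, 100_000)
ones = sum(bits)
pairs = sum(b1 == b2 for b1, b2 in zip(bits, bits[1:]))
print(f"fraction of ones:   {ones / len(bits):.3f}")   # close to 0.5
print(f"adjacent agreement: {pairs / (len(bits) - 1):.3f}")  # also close to 0.5
```

Every bit here is fixed by the starting value, yet the stream is statistically indistinguishable from coin flips for most practical purposes - which is the flavour of explanation 't Hooft has in mind for quantum randomness.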

On the other hand, Antony Valentini of Imperial College London has developed an extension of Bohm's deterministic theory which might be testable in experiments in the not-too-distant future. Valentini's idea is that the 'quantum noise' which (among other things) appears to render quantum non-locality useless for faster-than-light communication is not an inherent property of matter but rather a state, the result of the statistical behaviour of ensembles of particles. If he's right then it is possible that in the early universe the quantum noise may not have been sufficient to prevent non-local communication of information; this might help explain why such far-apart regions of the universe display such similar energy densities. If this is the case, then Heisenberg uncertainty would have been a weaker constraint in the early universe, and we could see this reflected in the distribution of the cosmic microwave background radiation.
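In Bohmian language, Valentini's proposal (as I understand his published work) is that the Born rule is an equilibrium condition rather than a law of nature: an ensemble whose distribution of positions ρ matches |ψ|² is in 'quantum equilibrium', and his 'subquantum H-theorem' argues that other distributions tend to relax towards it in a coarse-grained sense:

```latex
\rho(x,t) = |\psi(x,t)|^2 \quad\text{(quantum equilibrium, i.e. the Born rule)},
\qquad
H(t) = \int \rho \,\ln\frac{\rho}{|\psi|^2}\, dx \;\longrightarrow\; 0
\quad\text{(coarse-grained relaxation; } H \ge 0 \text{, with equality only at equilibrium).}
```

Only matter that is still out of equilibrium, with ρ ≠ |ψ|², would show the anomalies described below.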

It is also possible that some matter which hasn't interacted with any other particles may have survived in a non-quantum state. Such matter would be susceptible to having its position pinned down more tightly than the Uncertainty Principle would usually allow, and because its interactions are subject to less noise it could theoretically be used to eavesdrop on quantum cryptography communications channels which would normally be totally secure, as well as making possible massively more powerful quantum computers and perhaps - most radically of all - making it possible to exploit quantum non-locality for communication. This would probably lead to causal paradoxes unless - as John Stewart Bell believed - Einstein was wrong and there is in fact 'one true frame of reference' relative to which the time dilation and Lorentz-Fitzgerald contraction associated with special relativity happen. This has not been ruled out by any experiment, but it would invalidate the central insight on which Einstein's relativity is based, namely the application of the principle of relativity to the electromagnetic field and gravity, which would make the success of his theories somewhat mysterious.

It is anyone's guess whether such matter is possible, whether in practice it could do all the things it has been suggested it could do, and how common it might be if it does exist - Valentini puts it forward as a candidate for at least some of the dark matter that astronomers believe makes up most of the mass of the universe. Whatever the case, the idea of an ontological interpretation of quantum physics - one which at least tries to describe what is going on while an electron wave or a photon is travelling, and what is involved in the process of wave function collapse - has not gone away, despite being pronounced dead on many occasions, and it continues to spawn interesting new theories about the nature of reality. It remains to be seen whether the Copenhagen interpretation - or rather the Copenhagen non-interpretation, the denial that it is possible to meaningfully interpret quantum physics - will prevail in the long run.


Mostly based on research I did for my BSc dissertation on Quantum Entanglement and Causality, from which my writeups on quantum non-locality and quantum entanglement are adapted. The full text of the dissertation (partially re-written for a somewhat less technical audience) can be found at http://oolong.co.uk/Causality.html; its original bibliography, annotated, is at http://oolong.co.uk/Bibliography.htm
Information on 't Hooft and Valentini theories from New Scientist, June 29, 2002.
Gerard 't Hooft's piece "How Does God Play Dice? (Pre-)Determinism at the Planck Scale" is at http://www.arxiv.org/abs/hep-th/0104219
Antony Valentini's abstracts are at http://arXiv.org/find/quant-ph/1/au:+valentini/0/1/0/past/0/1
