is better described as the size of the smallest algorithm needed to compute a given number. The minimization is essential: for each number being measured, the shortest possible program that produces it is the one that counts. It's interesting because it describes the complexity of a number in a way that's completely independent of the number itself. Since it doesn't use any external measurements on the number, it's much less prone to miscalculation than pattern-based methods of determining complexity. Also, a number doesn't even have to be computed in full to reason about its Kolmogorov complexity, so it's useful for comparing the complexities of huge numbers.
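Kolmogorov complexity itself is uncomputable, but it can be bounded from above: any compressed form of a number, plus the decompressor, is a valid description of it. As a rough sketch, a general-purpose compressor (here zlib, standing in for "smallest algorithm" and giving only an upper bound) already separates patterned data from incompressible data:

```python
import os
import zlib

def description_upper_bound(data: bytes) -> int:
    """Length of one valid description of `data`: its zlib-compressed form.

    True Kolmogorov complexity is uncomputable; a compressor only gives
    an upper bound, but the bound is enough to tell regular data apart
    from statistically random data.
    """
    return len(zlib.compress(data, 9))

patterned = b"ab" * 500          # highly regular: compresses to a few dozen bytes
random_ish = os.urandom(1000)    # statistically random: essentially incompressible

print(description_upper_bound(patterned))
print(description_upper_bound(random_ish))
```

The gap between the two printed sizes is the whole point: the regular string has a short description, while the random bytes have no description much shorter than themselves.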
An often-cited example is the comparison of a million digits of pi with a million statistically random digits. Pi has low Kolmogorov complexity because it can be specified by a short algorithm, whether the Gauss-Legendre algorithm or one of the newer digit-extraction formulas. At the other end of the spectrum, the million random digits can't be derived any way other than by specifying them in their entirety. The algorithm that computes pi doesn't change in size no matter how many digits are needed: it has constant Kolmogorov complexity. Since the "algorithm" that produces the random digits is, in fact, the digits themselves, it grows with each additional digit -- a linearly increasing Kolmogorov complexity. From this comparison we can tell that even though pi looks imposingly random to most methods of measuring complexity, it's far less complex than a truly random number.
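The asymmetry is easy to see concretely: a fixed-size program can emit as many digits of pi as you ask for, while a program for random digits must carry the digits themselves. Here is a sketch of the pi side, using Machin's formula with big-integer arithmetic (one of many possible short programs -- the choice of formula doesn't matter, only that the program's size is independent of how many digits are requested):

```python
def arctan_inv(x: int, prec: int) -> int:
    """arctan(1/x) in fixed point, scaled by 10**prec (Taylor series)."""
    total = term = 10**prec // x
    n = 3
    while term:
        term //= x * x
        # series: 1/x - 1/(3x^3) + 1/(5x^5) - ...
        total += term // n if n % 4 == 1 else -(term // n)
        n += 2
    return total

def pi_digits(ndigits: int) -> str:
    """First `ndigits` decimal digits of pi, e.g. '3141592653'."""
    prec = ndigits + 10  # guard digits to absorb truncation error
    # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
    pi_fixed = 16 * arctan_inv(5, prec) - 4 * arctan_inv(239, prec)
    return str(pi_fixed)[:ndigits]
```

This source stays the same few hundred bytes whether you ask for ten digits or a million; a literal table of random digits necessarily grows by one character per digit.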
Finally, modern measurements of Kolmogorov complexity don't necessarily use Turing machines at all. Kolmogorov worked with universal Turing machines because they are the common denominator of computation, but other ways of encoding algorithms, such as combinatory logic, are perfectly reasonable and have been used successfully for the purpose.
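As a toy illustration of the programs-as-terms idea (a sketch only -- real measurements use carefully chosen binary encodings, not this ad-hoc one): any computable function can be written with just the S and K combinators, and a term's leaf count can then serve as its "program length."

```python
# Terms are the strings "S", "K", atoms like "x", or application pairs (f, a).
# Reduction rules:  K x y -> x      S x y z -> x z (y z)

def step(t):
    """One leftmost reduction step, or None if t is already in normal form."""
    if isinstance(t, tuple):
        f, a = t
        if isinstance(f, tuple) and f[0] == "K":              # (K x) y -> x
            return f[1]
        if isinstance(f, tuple) and isinstance(f[0], tuple) \
                and f[0][0] == "S":                           # ((S x) y) z -> (x z)(y z)
            x, y, z = f[0][1], f[1], a
            return ((x, z), (y, z))
        r = step(f)
        if r is not None:
            return (r, a)
        r = step(a)
        if r is not None:
            return (f, r)
    return None

def normalize(t, limit=10_000):
    """Reduce to normal form, with a step limit since reduction may diverge."""
    for _ in range(limit):
        nt = step(t)
        if nt is None:
            return t
        t = nt
    raise RuntimeError("step limit exceeded")

def size(t):
    """Program length: the number of combinator/atom leaves in the term."""
    return 1 if isinstance(t, str) else size(t[0]) + size(t[1])

# The identity function as S K K: a three-symbol program.
I = (("S", "K"), "K")
print(normalize((I, "x")))  # reduces to "x"
print(size(I))              # 3
```

Under an encoding like this, the complexity of an object would be the size of the smallest term that reduces to its representation -- the same definition as before, with combinator terms in place of Turing machine descriptions.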