In a computer science context, "complexity" usually refers to the "cost" of an algorithm or data structure.
There are two main forms of complexity: time complexity (how long the algorithm takes to run) and space complexity (how much memory it needs).
Furthermore, attention is usually focused on the asymptotic complexity as a function of the number n of objects handled, for large n. Constant factors are by and large ignored, because they vary with the execution environment (mainly the hardware), not with the problem. But if the time your algorithm takes grows exponentially with n, then you are fucked for large n, no matter how fast your hardware is.
This is usually expressed using big-O notation.
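A quick sketch of why constant factors stop mattering: below, the step counts are made up for illustration, with the linear algorithm handicapped by a huge constant factor of 1000 (as if it ran on much slower hardware). The exponential algorithm still loses badly once n gets big.

```python
def linear_steps(n, c=1000):
    # a linear-time algorithm burdened with a large constant factor c
    return c * n

def exponential_steps(n):
    # an exponential-time algorithm with no constant factor at all
    return 2 ** n

for n in (10, 20, 30, 40):
    print(n, linear_steps(n), exponential_steps(n))
```

At n = 10 the exponential algorithm is cheaper (1024 vs 10000 steps), but by n = 40 it needs over a trillion steps while the linear one needs 40000. No hardware speedup rescues that.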
Finally, there are different measures of complexity:
- worst case complexity: you assume that the input always makes your algorithm perform at its worst. This is the most common form, because of Finagle's Law.
- average case complexity: you assume that your input is "average", which in most cases is interpreted as random. The problem is that this interpretation is often wrong.
- amortized complexity: a mix of the above, where you prove that the worst case cannot happen many times in a row, and instead use the average complexity over a sequence containing the maximum possible number of worst cases.