Why This Matters
Big-O is the language algorithm engineers use to compare designs without running them. Two implementations of the same problem can differ by a constant factor on a benchmark and still differ by an order of magnitude on inputs the benchmark never reached. Big-O is the tool that catches that.
The notation is widely abused. Every interview asks for it; few candidates can state the formal definition. The mistakes compound: people quote O(n log n) for an algorithm that is in fact O(n²) in the worst case, then act surprised when production traffic finds the worst case.
Formal Definition
A function f(n) is in O(g(n)) if there exist positive constants c and n₀ such that for all n ≥ n₀, f(n) ≤ c · g(n).
In words: past some input size n₀, f(n) is at most a constant multiple of g(n). The notation is an upper bound on growth rate, not an exact rate. An algorithm that runs in O(n) time is also in O(n log n), in O(n²), and in O(2ⁿ) — Big-O does not promise tightness.
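The definition can be checked numerically for a concrete pair of functions. A minimal sketch, assuming a hypothetical f(n) = 3n + 10 with witness constants c = 4 and n₀ = 10 (these particular values are illustrative, not from the text above):

```python
# Hypothetical example: claim that f(n) = 3n + 10 is in O(n).
# Witness constants (assumed for this sketch): c = 4, n0 = 10.
def f(n):
    return 3 * n + 10

def g(n):
    return n

c, n0 = 4, 10

# Past n0, f(n) never exceeds c * g(n) -- exactly the Big-O condition.
assert all(f(n) <= c * g(n) for n in range(n0, 100_000))

# Below n0 the inequality can fail (f(5) = 25 > 4 * 5 = 20); that is
# why the definition only demands it for n >= n0.
assert f(5) > c * g(5)
```

A finite check like this is evidence, not a proof; the definition quantifies over all n ≥ n₀, which here follows from 3n + 10 ≤ 4n whenever n ≥ 10.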
Worked Examples
Constant time, O(1). Accessing an array element by index. The cost does not depend on the array's length.
Logarithmic, O(log n). Binary search on a sorted array. Each comparison halves the search space, so the depth of the recursion is O(log n).
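The halving is easy to make visible with a loop-iteration counter. A sketch (not a library routine) of iterative binary search:

```python
def binary_search(arr, target):
    """Return (index, steps); index is -1 if target is absent.
    Each iteration halves [lo, hi], so steps <= floor(log2 n) + 1."""
    lo, hi = 0, len(arr) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid, steps
        if arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

# One million sorted elements: at most 20 iterations, since log2(10^6) < 20.
index, steps = binary_search(list(range(1_000_000)), 765_432)
assert index == 765_432 and steps <= 20
```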
Linear, O(n). Computing the sum of an array. One pass, one operation per element.
Linearithmic, O(n log n). Merge sort. The recursion tree has O(log n) levels and each level does O(n) work to merge.
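A top-down sketch of merge sort; this slicing version trades the in-place variant's constant factors for clarity:

```python
def merge_sort(a):
    # Split in half: the recursion is log2(n) levels deep, and the
    # merges on each level touch every element once, O(n) per level.
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:   # <= keeps equal keys in order (stable)
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    # At most one of these tails is non-empty.
    return merged + left[i:] + right[j:]
```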
Quadratic, O(n²). A naive nested loop comparing all pairs. The number of pairs is n(n − 1)/2, which is O(n²).
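The pair count can be confirmed directly; a throwaway sketch with a hypothetical `count_pairs` helper:

```python
def count_pairs(n):
    # The naive all-pairs loop: j starts past i, so each unordered
    # pair (i, j) is visited exactly once.
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            count += 1
    return count

# Matches the closed form n(n - 1)/2.
assert count_pairs(100) == 100 * 99 // 2  # 4950 pairs
```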
Exponential, O(2ⁿ). Enumerating all subsets of an n-element set. The set has 2ⁿ subsets, so any algorithm that materializes them all takes at least Ω(2ⁿ) time.
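One way to sketch the enumeration is with the standard library's `itertools.combinations`, grouping subsets by size; just writing the output down costs 2ⁿ steps:

```python
from itertools import combinations

def all_subsets(items):
    # Every subset of items, grouped by size r = 0..n.
    # The output has 2**len(items) entries, so the work is
    # exponential no matter how the enumeration is organized.
    return [set(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

assert len(all_subsets([1, 2, 3])) == 2 ** 3  # 8 subsets
```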
Big-O vs Big-Omega vs Big-Theta
Three related notations describe asymptotic growth at different bounds:
- O(g(n)) means f(n) is at most a constant times g(n) asymptotically. Upper bound.
- Ω(g(n)) means f(n) is at least a constant times g(n) asymptotically. Lower bound.
- Θ(g(n)) means f(n) is both O(g(n)) and Ω(g(n)). Exact rate up to constants.
Saying merge sort is "O(n log n)" is correct but loose. Saying merge sort is "Θ(n log n)" is the tighter and more useful claim: it both upper- and lower-bounds the cost. Use Θ when you can prove it; use O when you only have the upper bound.
Common Mistakes
Confusing average case with worst case. Quicksort with a random pivot is O(n log n) in expectation. Its worst case is Θ(n²). "Quicksort is O(n log n)" is true on average and false in the worst case. Both facts matter; know which you are quoting.
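The gap is easy to observe with a comparison counter. A toy sketch (out-of-place partitioning, distinct keys assumed, counting roughly one comparison per element per partition step), where `pivot_first=True` models a naive deterministic first-element pivot:

```python
import random

def quicksort_comparisons(a, pivot_first):
    # Returns an estimated comparison count: len(a) per partition step,
    # plus the two recursive calls on the partitions.
    if len(a) <= 1:
        return 0
    pivot = a[0] if pivot_first else random.choice(a)
    less = [x for x in a if x < pivot]
    greater = [x for x in a if x > pivot]
    return (len(a)
            + quicksort_comparisons(less, pivot_first)
            + quicksort_comparisons(greater, pivot_first))

data = list(range(500))                        # already sorted: the trap input
worst = quicksort_comparisons(data, True)      # first-element pivot
expected = quicksort_comparisons(data, False)  # random pivot
assert worst > 100_000      # ~n^2/2 comparisons on sorted input
assert expected < worst     # random pivot stays near 2n ln n here
```

On sorted input the first-element pivot leaves one partition empty every time, so the recursion degenerates to depth n and the counts sum to n + (n−1) + … + 2.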
Dropping constants when they matter. Big-O hides constants by definition. For two algorithms in O(n), one with constant 3 and one with constant 30, the second is 10× slower at every input size. On a hot path, that constant decides whether your service meets its SLO. Big-O is for asymptotic comparison, not for picking between constant-factor variants.
Conflating time and space. Many algorithms have different time and space complexity. Hash-table lookup is O(1) expected time and O(n) space; merge sort is O(n log n) time and O(n) space. State which dimension you are bounding.
Ignoring the input distribution. "O(n²) in the worst case" is a statement about adversarial inputs. If your traffic is structured (sorted arrays, sparse graphs, low entropy), the worst case may never occur. The right answer depends on what you actually run, not what an adversary could pick.
References
- Cormen, Leiserson, Rivest, Stein. Introduction to Algorithms (4th ed., 2022). Ch. 3 — Growth of Functions.
- Knuth. The Art of Computer Programming, Volume 1: Fundamental Algorithms (3rd ed.). §1.2.11.
Next Topics
- Complexity analysis — time, space, worst case, average case, and amortized cost.
- Big-O, Big-Omega, and Big-Theta — upper, lower, and tight bounds.
- Arrays — the base random-access data structure.
- Arrays and pointers — the simplest concrete data structure and the source of random access.
- Binary search — the canonical algorithm and a trap-filled implementation exercise.