Discrete Complexity
Discrete complexity theory is the quantitatively refined counterpart to discrete computability theory and is usually based on the Turing machine model. Decision problems (i.e. languages) $L\subseteq\{0,1\}^*$ are generally considered efficiently solvable if some Turing machine $\mathcal{M}$ decides them in polynomial time: there exist $k,C\in\mathbb{N}$ such that, on every input $\bar x\in\{0,1\}^n$ of length $|\bar x|=n$, $\mathcal{M}$ performs no more than $C+C\cdot n^k$ steps before reporting (correctly) whether $\bar x$ belongs to $L$ or not. Note that asymptotic growth here is measured with respect to the length of the input as the (only) parameter.

The class of languages thus decidable in polynomial time is called $\mathcal{P}$ and has proven a reasonable notion of efficiency: first, because most practically efficient algorithms run in polynomial time whereas those considered inefficient do not; and second, because it is closed under composition---if the output of one polynomial-time Turing machine is fed into another, the combination still runs (more slowly, yet) in polynomial time.

Many problems for which no efficient algorithm is known ask, given one object, for the existence of another, a so-called witness:
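The closure under composition can be sketched quantitatively: in one step a Turing machine writes at most one symbol, so the output length $m$ of the first machine is bounded by its running time, and the second machine's time bound then composes into a polynomial in $n$. Writing $C_1,k_1$ and $C_2,k_2$ for the two machines' constants:

```latex
% Output length of the first machine is bounded by its running time:
%   m <= C_1 + C_1 n^{k_1}.
% Substituting into the second machine's bound gives a total of
\[
  T_{\mathrm{total}}(n)
  \;\le\; \underbrace{C_1 + C_1 n^{k_1}}_{\text{machine 1}}
        + \underbrace{C_2 + C_2\bigl(C_1 + C_1 n^{k_1}\bigr)^{k_2}}_{\text{machine 2}}
  \;\le\; C + C\cdot n^{k_1 k_2}
\]
% for suitable C (depending on C_1, C_2, k_2), so the composition again
% meets the definition of polynomial time with exponent k = k_1 k_2.
```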
- SAT asks, given a Boolean formula, whether it admits a satisfying assignment.
- Hamilton Circuit asks, given a graph, whether it admits a tour traversing each node precisely once.
- Clique asks, given a graph $G$ and an integer $k$, whether there exist $k$ vertices in $G$ mutually joined by edges.
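The common structure of these problems can be illustrated with SAT: while no polynomial-time algorithm is known, a candidate witness (a truth assignment) is easy to verify, and exhaustive search over all $2^n$ assignments decides the problem in exponential time. A minimal sketch (the CNF encoding as signed integers is a common convention, assumed here, not fixed by the text):

```python
from itertools import product

def brute_force_sat(clauses, n):
    """Decide satisfiability of a CNF formula over variables 1..n by
    trying all 2^n assignments. Each clause is a list of nonzero ints;
    literal i means x_i, literal -i means "not x_i".
    Verifying one candidate witness takes polynomial time; the
    exponential cost comes solely from the number of candidates."""
    for bits in product([False, True], repeat=n):
        # bits[i-1] is the truth value assigned to variable x_i
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True   # found a satisfying assignment (a witness)
    return False          # all 2^n assignments fail

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
print(brute_force_sat([[1, 2], [-1, 2], [-2, 3]], 3))  # True
# x1 and (not x1) is unsatisfiable
print(brute_force_sat([[1], [-1]], 1))                 # False
```

The same search-over-witnesses pattern applies to Hamilton Circuit (try all vertex orderings) and Clique (try all $k$-subsets of vertices); only the polynomial-time verification step differs.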