Any description of this sort is necessarily highly biased and reflects the personal tastes of its authors. With this disclaimer in mind, here is a miscellaneous, unsorted collection of concepts and ideas that we consider some of the milestones in the development of our discipline.
- The focus of the field shifted from the (relatively well-understood) notion of "computation" to the (much more elusive) notion of "efficient computation". The fundamental notion of NP-completeness was formulated, and its near-universal impact was gradually understood. Long-term goals, such as the P vs NP question, were set.
- The theory of algorithms was developed, with a fundamental focus on asymptotic and worst-case analysis. Numerous techniques of growing mathematical sophistication were invented to efficiently manipulate information in various contexts and to efficiently solve major computational problems.
- A variety of computational models, designed to explain and sometimes anticipate existing computer systems and practical problems, were developed and studied. Among them are parallel and distributed models, asynchronous and fault-tolerant computation, and on-line algorithms with competitive analysis.
- Randomness was introduced as a key tool and resource, and it revolutionized the theory of algorithms. In many cases, probabilistic algorithms and protocols can achieve goals that are impossible deterministically; in other cases, they enable much more efficient solutions than any known deterministic ones (see the sketch after this list). Following this, a series of derandomization techniques was developed to convert probabilistic algorithms into deterministic ones in broad classes of settings.
- The concept of randomness also turned out to be extremely useful in more theoretical (complexity-theoretic) studies; we list here only three of the most prominent examples.
- The emergence of the (complexity-based) notion of a one-way function, together with the essential use of randomness, has led to the development of modern cryptography. What seemed to many at first like mental games of little use, such as trying to play poker without cards, turned into a powerful theory, and from it into practical systems of major economic importance, forming a basis for the use of computers and networks today (a toy commitment scheme in this spirit is sketched after this list).
- Many new models arose from the study of interacting adversaries, and these models became objects of independent importance. Probabilistic proof systems, in their many variants (zero-knowledge, Arthur-Merlin, multi-prover, and probabilistically checkable proofs), have enriched to a tremendous extent our understanding of basic complexity classes that have nothing to do with randomness and interaction, such as space-bounded computation or approximate solutions to optimization problems.
- Another theory that arose initially from the combination of one-way functions and randomness is the theory of pseudo-randomness. The gradual revelation of the intimate connection between computational difficulty and pseudo-randomness has brought us closer to understanding the power of randomness in various computational settings, as well as (surprisingly) in purely information-theoretic settings. See also the Special Year on Computational Complexity 2000-2001 topic page.
- Complexity theory, which attempts to classify problems according to their computational difficulty, has integrated many of the ideas above and is now much more mature and intricate than it was 25 years ago. There have been many nontrivial successes in proving lower bounds for restricted models of computation. Moreover, the failure to do so for general models (towards the P vs NP question) has led to the introspective field of proof complexity, which tries to quantify what constitutes a difficult proof. Many exciting results and new connections to complexity theory have been discovered in the last few years. See also the Special Year on Computational Complexity 2000-2001 topic page.
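
To make the point about randomized efficiency concrete, here is a minimal sketch (our own illustration, not taken from the essay) of Freivalds' classical algorithm for verifying a matrix product: each randomized trial costs O(n^2) arithmetic operations, whereas the obvious deterministic check recomputes the product in O(n^3) time. The function name and the use of 0/1 random vectors are just one standard presentation.

```python
import random

def freivalds(A, B, C, trials=20):
    """Randomized check of whether A * B == C for n x n matrices.

    Each trial multiplies by a random 0/1 vector, costing O(n^2) time,
    versus O(n^3) for recomputing A * B directly. A correct product is
    never rejected; an incorrect one passes a single trial with
    probability at most 1/2, so `trials` rounds give error <= 2**-trials.
    """
    n = len(A)

    def mat_vec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        # Compare A (B r) with C r; both sides cost only O(n^2).
        if mat_vec(A, mat_vec(B, r)) != mat_vec(C, r):
            return False      # witness found: definitely A * B != C
    return True               # A * B == C with high probability


# Tiny usage example
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]      # the true product A * B
print(freivalds(A, B, C))     # True
C[0][0] += 1                  # corrupt one entry
print(freivalds(A, B, C))     # False with overwhelming probability
```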
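
In the same illustrative spirit, the "poker without cards" remark can be made concrete with a toy bit-commitment scheme used for coin flipping over a network, in the spirit of Blum's "coin flipping by telephone". The sketch below is our own and treats SHA-256 as a hiding and binding commitment, which is a heuristic assumption rather than the one-way-function-based constructions the theory actually provides.

```python
import hashlib
import secrets

def commit(bit):
    """Commit to a bit: publish the hash, keep the opening secret.

    Heuristic assumption: SHA-256 over (random nonce || bit) is hiding
    (the hash reveals nothing about the bit) and binding (the committer
    cannot later open it to the other bit).
    """
    nonce = secrets.token_bytes(32)
    commitment = hashlib.sha256(nonce + bytes([bit])).hexdigest()
    return commitment, nonce            # commitment is public, nonce stays private

def verify(commitment, nonce, bit):
    """Check that (nonce, bit) opens the earlier commitment."""
    return hashlib.sha256(nonce + bytes([bit])).hexdigest() == commitment

# Coin flipping "over the telephone": Alice commits to a bit, Bob then
# announces his bit in the clear, and the shared coin is their XOR.
alice_bit = secrets.randbelow(2)
c, nonce = commit(alice_bit)            # Alice sends only c
bob_bit = secrets.randbelow(2)          # Bob replies after seeing c
assert verify(c, nonce, alice_bit)      # Alice opens; Bob checks
print("shared coin:", alice_bit ^ bob_bit)
```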
Some related external links
- O. Goldreich, A. Wigderson. Theory of Computation: A Scientific Perspective, an essay.
- A. Razborov. Theoretical Computer Science: a mathematician's look (Russian), full version of an essay written for Computerra.


