Computer Science/Discrete Mathematics Seminar I
Topic: On the possibility of an instance-based complexity theory
Date: Monday, April 15
Time/Room: 11:00am - 12:00pm / Simonyi Hall 101
Worst-case analysis of algorithms has been the central method of theoretical computer science for the last 60 years, leading to great discoveries in algorithm design and a beautiful theory of computational hardness. However, worst-case analysis can sometimes be too rigid, leading to a discrepancy between the "real world" complexity of a problem and its theoretical analysis, and failing to shed light on theoretically fascinating questions arising in connection with statistical physics, machine learning, and other areas. On the other hand, average-case complexity comes with its own limitations, in particular a strong dependence on the modelling choice for the distribution over inputs. In this talk, I will discuss what I hope can be a promising research direction: investigating an instance-based theory of computational complexity for optimization problems. In particular, we will discuss which properties can and cannot be satisfied by a measure of per-instance complexity. While we do have some preliminary observations, most of this research direction is completely unexplored, and even formulating the right questions, let alone answering them, is still work in progress. I am looking forward to feedback and discussion with the audience.