COMPUTER SCIENCE/DISCRETE MATH II
Topic:       Algorithms vs. Hardness
Affiliation: Microsoft Research India
Date:        Tuesday, March 9
Time/Room:   10:30am - 12:30pm, Room S-101
This talk will be concerned with how well we can approximate NP-hard problems. One of the most successful algorithmic strategies, from an upper-bound perspective, is to write a polynomial-time computable relaxation of an NP-hard problem and then present a "rounding" algorithm that converts a solution of the relaxation into a solution of the original problem. To prove a hardness-of-approximation result, on the other hand, one often gives a PCP-type reduction from some hard problem, where the key is to design a "dictatorship test". Until recently, these seemed to be two distinct facets of the problem. That distinction is now blurring: we are beginning to understand general recipes for using the output of algorithmic relaxations to construct dictatorship tests, and for using the "decoding" algorithm in the "soundness" analysis of these tests to obtain a rounding algorithm for the relaxation itself. This viewpoint has led to the discovery of new (optimal) algorithms and reductions for several important problems, such as Max-Cut and Vertex Cover, for which elegant but seemingly unrelated algorithms and hardness reductions were previously known, and it points to a promising future for approximability.

In this talk I will demonstrate this emerging duality between algorithms and hardness. The only prerequisites are knowledge of the Central Limit Theorem and some interest in NP-hardness. The talk is partially based on a result of Raghavendra for Max-Cut and on joint work with Kumar, Manokaran and Tulsiani on Vertex Cover.
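To make the "relax and round" strategy concrete, the sketch below illustrates the classic random-hyperplane rounding step for Max-Cut (in the style of Goemans-Williamson). It is not from the talk: it assumes we are already handed unit vectors for each vertex (as an SDP solver for the relaxation would produce), and the toy graph and hand-picked vectors are purely illustrative.

```python
# A minimal sketch of random-hyperplane rounding for Max-Cut.
# Assumes unit vectors v[i] per vertex, e.g. from a solved SDP
# relaxation; here they are hand-picked for a toy instance.
import random

def round_hyperplane(vectors, edges, trials=100, seed=0):
    """Round SDP vectors to a cut: draw a random hyperplane through
    the origin and split vertices by the sign of the dot product
    with its normal; keep the best cut over several trials."""
    rng = random.Random(seed)
    dim = len(vectors[0])
    best_cut, best_side = -1, None
    for _ in range(trials):
        # Random Gaussian normal direction for the hyperplane.
        g = [rng.gauss(0, 1) for _ in range(dim)]
        side = [sum(gi * vi for gi, vi in zip(g, v)) >= 0 for v in vectors]
        cut = sum(1 for u, w in edges if side[u] != side[w])
        if cut > best_cut:
            best_cut, best_side = cut, side
    return best_cut, best_side

# Toy example: a 4-cycle. The "SDP solution" is chosen by hand so
# that adjacent vertices get opposite vectors (the cycle is bipartite,
# so every edge can be cut).
vectors = [(1.0, 0.0), (-1.0, 0.0), (1.0, 0.0), (-1.0, 0.0)]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
cut, side = round_hyperplane(vectors, edges)
```

For general graphs and genuine SDP solutions, this rounding achieves the well-known ~0.878 approximation guarantee in expectation; the point of the sketch is only the shape of the paradigm the abstract describes: solve a relaxation, then round its geometric output to a combinatorial solution.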