Monday, 24 September 2001
Michael Elkin, Weizmann Institute, Israel
(1+\epsilon, \beta)-spanner constructions for general graphs
Abstract:
Every n-vertex graph admits a (1+\epsilon, \beta)-spanner with O(n^{1+\delta}) edges. It follows that for any constants \epsilon, \delta > 0 there exists a constant \beta(\epsilon, \delta) such that for any n-vertex graph G = (V,E) there exists an efficiently constructible subgraph (V,H) with O(n^{1+\delta}) edges such that d_H(u,w) \le (1+\epsilon) d_G(u,w) for every pair of vertices u,w \in V with d_G(u,w) \ge \beta(\epsilon, \delta).
The talk is based on two papers. The first, joint with David Peleg, appeared in STOC 2001. The second is to appear in PODC 2001.
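The guarantee above is easy to state operationally. The following sketch (my own illustration, not from the talk; all names are mine) checks, by BFS on small unweighted graphs, whether a subgraph H satisfies d_H(u,w) <= (1+eps) d_G(u,w) for every pair at distance at least beta in G:

```python
from collections import deque

def bfs_dist(adj, src, n):
    """Unweighted single-source distances; unreachable vertices get None."""
    dist = [None] * n
    dist[src] = 0
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] is None:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def is_spanner(n, edges_G, edges_H, eps, beta):
    """Check d_H(u,w) <= (1+eps)*d_G(u,w) for all pairs with d_G(u,w) >= beta."""
    def adjacency(edges):
        adj = [[] for _ in range(n)]
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)
        return adj
    aG, aH = adjacency(edges_G), adjacency(edges_H)
    for u in range(n):
        dG, dH = bfs_dist(aG, u, n), bfs_dist(aH, u, n)
        for w in range(n):
            if dG[w] is not None and dG[w] >= beta:
                if dH[w] is None or dH[w] > (1 + eps) * dG[w]:
                    return False
    return True
```

For instance, the path obtained by deleting one edge of the 6-cycle is a (1.5, 3)-spanner of the cycle but not a (1.5, 2)-spanner, since pairs at distance 2 across the deleted edge stretch to distance 4.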
Monday, 1 October 2001
Ronitt Rubinfeld, NEC
What can we do in sublinear time?
Abstract:
We have long considered showing the existence of a linear time algorithm
for a problem to be the gold standard of achievement. Indeed, it is hard
to imagine doing any better than that, since for any nontrivial problem, any
algorithm must consider all of the input in order to make a decision.
However, as extremely large data sets grow more prevalent in a wide variety
of settings, it is natural to wonder what one can do in *sublinear* time.
In fact, there has been a lot of recent interest in this direction.
This talk will contain a brief, nonexhaustive survey of the types of problems
that can be solved in sublinear time. Since any sublinear time algorithm
can only view a small portion of the input, the problems that are solvable
must necessarily be approximate in some sense. For example, we will see
that there are classical optimization problems whose values can be
approximated in sublinear time. In addition, property testing, an alternative
notion of approximation for decision problems, has been applied to give
sublinear algorithms for a wide variety of problems.
I will spend the rest of the talk describing joint work with Bernard Chazelle
and Luca Trevisan, in which we consider sublinear time algorithms for
approximating the weight of a minimum spanning tree. We give a
probabilistic algorithm for estimating the weight of a minimum spanning
tree in time O(dw log(w)), where d is the average degree of the graph and w
is a bound on the ratio of the maximum weight edge to the minimum weight edge.
In particular, for constant degree graphs in which the ratio w is constant,
our running time will also be constant. We also give a nearly matching lower
bound.
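The estimator rests on an exact identity: for a connected graph with integer weights in {1, ..., w}, the MST weight equals n - w + the sum over i < w of the number of connected components among edges of weight at most i. The sublinear algorithm approximates each component count by sampling; the sketch below (my own illustration, with names of my choosing) computes the identity exactly:

```python
def mst_weight_via_components(n, edges, w):
    """MST weight of a connected graph with integer weights in {1,...,w}:
    weight = n - w + sum_{i=1}^{w-1} c_i, where c_i is the number of
    connected components of the subgraph using only edges of weight <= i.
    (Exact version of the identity behind the sublinear estimator, which
    instead approximates each c_i by sampling.)"""
    def num_components(max_w):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x
        comps = n
        for u, v, wt in edges:
            if wt <= max_w:
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv
                    comps -= 1
        return comps
    return n - w + sum(num_components(i) for i in range(1, w))
```

On a triangle with weights 1, 1, 2 the formula gives 3 - 2 + 1 = 2, the weight of the two unit edges, matching the MST.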
Tuesday, 2 October 2001
Boaz Barak, Weizmann Institute, Israel
How to go beyond the black-box simulation barrier
Abstract:
The simulation paradigm is central to cryptography. A simulator is an
algorithm that tries to simulate the interaction of the adversary with an
honest party, without knowing the private input of this honest party. Almost
all known simulators use the adversary's algorithm as a black box. We present
the first constructions of non-black-box simulators. Using these new
non-black-box techniques we obtain several results that were previously proven
to be impossible to obtain using black-box simulation.
Specifically, assuming the existence of collision-resistant hash functions, we
construct a new zero-knowledge argument system for NP that satisfies the
following properties:
1. This system has a constant number of rounds with negligible soundness error.
2. It remains zero-knowledge even when composed concurrently n times, where n
is the security parameter.
(Obtaining 1 and 2 simultaneously has been proven to be impossible to achieve
using black-box simulation.)
3. It is an Arthur-Merlin (public coins) protocol.
(Obtaining 1 and 3 simultaneously has also been proven impossible to achieve
using black-box simulation.)
4. It has a simulator that runs in strict polynomial time, rather than expected
polynomial time.
(All previously known constant-round negligible-error zero-knowledge arguments
utilized expected polynomial-time simulators.)
Monday, 8 October 2001
Ehud Friedgut, The Hebrew University
What makes a graph a Ramsey graph?
An analysis via random graphs, a threshold phenomenon, and
Szemeredi
(hypergraph) regularity
Abstract:
When is it true that every Blue/Red coloring of the edges of a
graph contains a monochromatic triangle? We will show that, in a well
defined sense, in most graphs the above property depends on the number of
triangles per edge in the graph. An interesting ingredient in the proof
is a generalization of Szemeredi's regularity lemma to a hypergraph setting.
Joint work with V. Rodl, A. Rucinski and P. Tetali.
Tuesday, 9 October 2001
Michael Capalbo, IAS
Explicit unique-neighbor expanders
Abstract:
A bounded-degree bipartite graph \Gamma = (I,I',E), where |I'| is
a constant fraction smaller than |I|, is a unique-neighbor expander if, for
each subset S of I that is not too large, there are many vertices in I' that
are adjacent to exactly one vertex in S. Unique-neighbor expanders are
interesting for two reasons. First, they are important for several problems
in theoretical computer science, such as sorting and routing. Second, the explicit
construction of an infinite family of unique-neighbor expanders has been a
long-standing open question that, for a long time, has been just out of reach
of our most powerful methods of analyzing expanders.
We answer this open question by presenting the first known explicit
construction of an infinite family of unique-neighbor expanders. En route, we
introduce a new graph product and use it to develop new methods of
constructing and analyzing expanders.
Monday, 22 October 2001
Amit Sahai, Princeton University
Zero knowledge and chosen-ciphertext security
Abstract:
Zero-knowledge proofs, introduced by Goldwasser, Micali, and Rackoff,
are fascinating constructs in which one party (the "prover") convinces
another party (the "verifier") that some assertion is true, without
revealing anything else to the verifier.
In this talk, we present a connection between the theory of zero-knowledge
proofs and one of the classical notions in cryptography, encryption. In
particular, we introduce the notion of simulation-sound zero knowledge,
and show how the non-interactive form of this notion can be used to
achieve the strongest definition of security for encryption, namely
adaptive chosen-ciphertext security, in a simple manner. We also present
constructions of simulation-sound non-interactive zero-knowledge proofs
for all NP languages, and discuss other applications of this notion.
Tuesday, 23 October 2001
Irit Dinur, IAS
On the hardness of approximating vertex cover
Abstract:
The Minimum Vertex Cover problem is to find the smallest set of vertices
that touches all edges in a given graph. We show that it is NP-hard to
approximate this problem to within a factor of > 4/3, improving on the
previously known factor of 7/6 due to Hastad, and taking us one step closer
to the factor of 2-o(1); within a factor of 2, the problem is easy to
approximate.
We will present a two-step graph reduction that establishes our result.
In our proof we visit two areas in combinatorics. The first is that of
influences of variables on Boolean functions, applying Friedgut's Lemma
to "list-decode" any monotone function on {0,1}^n. We also venture into
the field of intersecting families of subsets. Specifically, we are
interested in Erdos-Ko-Rado type bounds on the maximal size of such
subsets.
Based on joint work with Muli Safra.
Wednesday, 24 October 2001
David Galvin, Rutgers University
Phase transition in the hard-core model on Z^d
Abstract:
We show that for large dimension d, the hard-core model on the integer
lattice Z^d with all activities equal to lambda exhibits a phase transition
when lambda >= C (log^3(d)/d)^(1/4).
More specifically:
Let B = [-N,N]^d be a large box in the lattice. We say a subset I of the
lattice is independent if no two points of I are adjacent (at distance 1).
For lambda > 0, we consider the probability distribution on the independent
subsets of B where each such I has probability proportional to lambda^|I|.
We define the distance between any two points in the lattice to be the L^1
distance; thus each point is either even (lies at even distance from the
origin) or odd. Let p_e (resp. p_o) be the probability that I contains the
origin, conditioned on the event that I contains all the even (resp. odd)
points of the boundary of B.
We show that for N large and for lambda in the range above, these quantities
are very different; namely, p_e is very close to lambda/(1 + lambda),
whereas p_o = o(1/d).
This is joint work with Jeff Kahn.
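The distribution defined above can be sampled on a small finite patch by standard Glauber dynamics (a textbook method, not the subject of the talk; the grid size, free boundary conditions, and all names here are my own choices for illustration), whose stationary distribution gives each independent set I weight proportional to lambda^|I|:

```python
import random

def glauber_hardcore(n, lam, steps, rng):
    """Glauber (heat-bath) dynamics for the hard-core model on an n x n
    patch of Z^2 with free boundary: repeatedly pick a uniform site and
    resample its occupancy; a site may be occupied (with probability
    lam/(1+lam)) only if no neighbor is occupied."""
    occupied = set()
    sites = [(x, y) for x in range(n) for y in range(n)]
    def neighbors(s):
        x, y = s
        return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    for _ in range(steps):
        s = rng.choice(sites)
        occupied.discard(s)            # resample s from scratch
        if all(nb not in occupied for nb in neighbors(s)) and \
           rng.random() < lam / (1 + lam):
            occupied.add(s)
    return occupied
```

By construction every configuration visited is an independent set; larger lambda biases the chain toward denser sets, which is where the even/odd symmetry breaking described above sets in.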
Monday, 29 October 2001
Andrew V. Goldberg, Research Fellow, InterTrust STAR Lab
A Practical Shortest Path Algorithm with Linear Average Time
Abstract:
The problem of finding shortest paths in a directed graph with nonnegative
arc lengths is important both in theory and in practice. Many algorithms
and data structures have been developed for this problem, bringing the
worst-case time bound close to linear. However, no linear-time algorithm
for this problem is currently known.
We show that a natural modification of an algorithm of Denardo and Fox
yields an algorithm that runs in expected linear time on arbitrary
graphs with a uniform arc length distribution. The modification
is based on a relaxation of Dijkstra's selection rule.
The resulting algorithm is practical. If the input lengths are not
too big and intermediate distances fit into 64 bits, the running time
of our implementation of the algorithm is within a small constant
factor of that of breadth-first search on the same graph. It follows
that for such problems, algorithmic improvements cannot lead to large
improvements in practical performance.
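The talk's algorithm relaxes Dijkstra's exact-minimum selection rule via Denardo-Fox-style buckets. As a simpler relative of that idea (this is Dial's classical single-level bucket scheme, not the algorithm of the talk; names are mine), scanning buckets in order of tentative distance replaces the priority queue when arc lengths are small nonnegative integers:

```python
def dial_shortest_paths(n, adj, src, max_len):
    """Single-level bucket Dijkstra (Dial's algorithm) for nonnegative
    integer arc lengths at most max_len. adj[u] = list of (v, length)."""
    INF = float('inf')
    dist = [INF] * n
    dist[src] = 0
    nbuckets = max_len * (n - 1) + 1     # shortest distances cannot exceed this
    buckets = [[] for _ in range(nbuckets)]
    buckets[0].append(src)
    for d in range(nbuckets):            # scan buckets in increasing distance
        for u in buckets[d]:
            if dist[u] != d:             # stale entry: u settled at a smaller d
                continue
            for v, l in adj[u]:
                if d + l < dist[v]:
                    dist[v] = d + l
                    buckets[d + l].append(v)
    return dist
```

Each vertex is settled when its bucket is reached, so the total work is linear in the number of arcs plus the number of buckets scanned.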
Monday, 5 November 2001
Andrew Yao, Princeton University
How to Generate a Random Polygon
Abstract:
The generation of random geometric objects is
of much practical and theoretical interest. In contrast
to the generation of numerical distributions,
its theoretical development is still at an early stage.
Even for the simplest geometric objects, the design of
efficient algorithms is often nontrivial.
In this talk we propose a scale-invariant model for
generating random polygons. In particular, we give
a polynomial-time algorithm for generating a random convex
polygon in this model. This is joint work with Dan Greene.
Tuesday, November 6, 2001
Oded Regev, IAS
The Unsplittable Flow Problem
Abstract:
We present the first strongly polynomial algorithms with the best
approximation ratio for all three variants of the unsplittable flow
problem (UFP). In this problem we are given a (possibly directed)
capacitated graph with n vertices and m edges, and a set of terminal
pairs each with its own demand and profit.
The objective is to connect a subset of the terminal pairs, each by
a single flow path, so as to maximize the total profit of the satisfied
terminal pairs subject to the capacity constraints.
Joint work with Yossi Azar.
Monday, November 12, 2001
Amit Chakrabarti, Princeton University
Informational Complexity and the Direct Sum Problem for Simultaneous Message Complexity
Abstract:
Given m copies of the same problem, does it take m
times the amount of resources to solve these m problems? This is the direct sum
problem, a fundamental question that has been studied in many computational
models. We study this question in the simultaneous message (SM) model of
communication introduced by Yao.
The equality problem for n-bit strings is well known
to have SM complexity $\Theta(\sqrt n)$. We prove that solving m copies of the
problem has complexity $\Omega(m\sqrt n)$, the best possible lower bound. We
also prove some extensions and generalizations. The results are proven using a new notion that we call
informational complexity, which is related to SM complexity and has nice direct
sum properties.
This is joint work with Yaoyun Shi, Anthony Wirth and
Andrew Yao.
Tuesday, November 13, 2001
Clifford Smyth, IAS
Reimer's Inequality and Rudich's Conjecture
Abstract:
We prove a 1984 conjecture of Rudich, which arose from his work on reducibility
among cryptographic primitives. Roughly, the conjecture states that if a family
of subcubes of the discrete cube is a near-partition, then there is a significant
dependence among the family. The subcubes are viewed as events where a point is
selected from the cube according to a product distribution. To prove this we use
a correlation inequality that is in some sense "dual" to Reimer's inequality
(a.k.a. the van den Berg-Kesten conjecture). We also use this inequality to
prove an upper bound on the approximate decision tree complexity of a Boolean
function that is quadratic in its approximate certificate complexity, answering
a question of Tardos, 1989.
This is joint work with Jeff Kahn and Mike Saks.
Monday, November 19, 2001
Tim Gowers, Cambridge University and Princeton University
A Tower-type Lower Bound for Szemer\'edi's Regularity Lemma
Abstract:
Szemer\'edi's regularity lemma is one of the most useful tools
in extremal graph theory, but unfortunately the proof gives a
very weak bound, which limits the use of the theorem to statements
of a qualitative kind (at least in terms of some of the parameters).
In this talk, I shall show that the weak bound of the proof is in
fact of the right form.
Tuesday, November 20, 2001
Andris Ambainis, IAS
Quantum Coin Flipping
Abstract:
Alice and Bob want to flip a coin (generate a random bit)
but they do not trust one another. They want to have a protocol that
1) produces each of two results (0 and 1) with probability 1/2 each
if both Alice and Bob follow the protocol
2) if one of them follows the protocol but the other does not,
the person who follows the protocol is guaranteed that each
outcome has probability at most 1/2 + epsilon.
This can be done classically with cryptographic assumptions
(such as one-way functions) but is impossible classically if
the cheating party has unlimited computational power. Our
goal is to construct quantum protocols that are secure
against cheaters with unlimited computational power.
First, I will show a simple quantum protocol for coin
flipping in which no cheating party can achieve one outcome
(0 or 1) with probability more than 0.75. This is the best
provable result known. I will also show that the new protocol
is optimal for a restricted class of protocols.
Second, I will show a general lower bound on how good protocols
restricted to k messages between Alice and Bob can be.
This bound implies that, to decrease the bias, one
needs to increase the number of messages (rounds), not just
exchange a lot of qubits in few rounds.
Monday, November 26, 2001
Sergei Konyagin, IAS
Estimates for L_p Norms of Exponential Sums
Abstract:
Let $e(u)=e^{2\pi iu}$, let $\{b_1,\dots,b_N\}$ be a strictly convex sequence
of distinct integers (namely, $b_{j+1}-2b_j+b_{j-1}>0$ for $j=2,\dots,N-1$),
and let $f(u)=\sum_{j=1}^N e(b_ju)$. For $p\ge1$ define the $L_p$-norm of $f$ as
$$\|f\|_p^p=\int_0^1|f(u)|^pdu.$$
It is easy to show that $\|f\|_p^p\le N^{p-1}$ for $p\ge2$. We prove that
for an even $p=2r>2$ there is a better estimate (provided that $N$ is large)
$$\|f\|_p^p\le C_p N^{p-2+2^{1-r}}.$$
The proof for $p=4$ will be presented. In this case $\|f\|_p^p$ is equal to
the number of solutions of the equation $b_i+b_j=b_l+b_m$. The latter problem
has a combinatorial nature and can be investigated, as suggested by Elekes,
Nathanson and Ruzsa, by an elegant use of the Szemerédi--Trotter
theorem.
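The p = 4 case is purely combinatorial: expanding the integral, $\|f\|_4^4$ counts ordered quadruples with $b_i+b_j=b_l+b_m$, which equals the sum of the squares of the multiplicities of the pairwise sums. A quick check of this identity (my own illustration; the function name is mine):

```python
from collections import Counter

def l4_norm_4th_power(b):
    """||f||_4^4 for f(u) = sum_j e(b_j u): the number of ordered quadruples
    (i, j, l, m) with b_i + b_j = b_l + b_m.  Counted via the histogram of
    ordered pairwise sums: sum over s of (#pairs summing to s)^2."""
    sums = Counter(x + y for x in b for y in b)
    return sum(c * c for c in sums.values())
```

For the convex sequence {1, 2, 4} this gives 15, agreeing with a brute-force enumeration of all quadruples; convexity keeps the pair sums spread out, which is what drives the improved bound.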
Monday, December 3, 2001
Amin Shokrollahi, Digital Fountain
Capacity Achieving Sequences
Abstract:
Low-density parity-check codes are a class of linear error-correcting codes
based on sparse bipartite graphs. One of the most intriguing aspects of
low-density parity-check codes is that they are naturally equipped with very
efficient decoding algorithms. The main problem is the design of the codes
in such a way that the natural decoding algorithms can decode many errors.
In particular, one of the most interesting questions for the asymptotic
theory is the design of bipartite graphs such that the decoding
algorithms achieve the Shannon capacity of the underlying channel in the
limit.
Based on evidence gathered from Tornado codes, introduced by Luby et al. in
1997, researchers believe that the right degree distribution of the nodes
in the underlying bipartite graphs is sufficient to produce codes that
achieve capacity. We will give a brief survey of this interesting problem
and exemplify the ideas for the case of the erasure channel.
Monday, December 10, 2001
Nicolas Sourlas, ENS, Paris
On Statistical Mechanics and Capacity Approaching Codes
Abstract:
I will first briefly review the relation between error-correction
codes and certain mathematical models of spin glasses.
Then I will show how the recently discovered (or rediscovered)
capacity approaching codes (turbo codes and low density parity
check codes) can be analysed using statistical mechanics. It is
possible to show, using statistical mechanics, that these codes
allow error-free communication for signal-to-noise ratios above
a certain threshold. This threshold, which corresponds to a phase
transition in the spin model, depends on the particular code, and
can be computed analytically in many cases.
Tuesday, December 11, 2001
Avi Wigderson, IAS
Arithmetic Complexity - A Survey
Abstract:
I will describe the state of the art on the computational complexity of
natural polynomials, e.g. symmetric functions, determinant, matrix multiplication
etc. The model is arithmetic circuits over infinite fields (which should (?)
be easier to handle than finite fields or the Boolean ring). I'll present
some techniques, results and open problems.
Monday, December 17, 2001
Ilan Newman, NEC
Testing Monotonicity of Functions and Ruzsa-Szemerédi Type Graphs
Abstract:
I will talk about results from recent work with Eldar Fischer, Eric Lehman, Sofya
Raskhodnikova, Ronitt Rubinfeld and Alex Samorodnitsky. The major points are:
We show that in its most general setting, testing that Boolean functions are close
to monotone is equivalent, with respect to the number of required queries, to several
other testing problems in logic and graph theory. These problems include testing
that a Boolean assignment of variables is close to an assignment that satisfies a
specific 2-CNF formula, testing that a set of vertices is close to one that is a
vertex cover of a specific graph, and testing that a set of vertices is close to a clique.
We then show that there exist partial orders for which monotonicity testing cannot
be performed with fewer than Omega(log n/log log n) queries (where n is the poset size).
To prove such lower bounds we construct graphs with 2n vertices that can be partitioned
into n^{1/log log n} induced matchings of size n/3.
Tuesday, December 18, 2001
Avi Wigderson, IAS
Do Algorithms need Randomness? A Survey
Abstract:
I plan to survey the main ideas and best results so far in
removing randomness from probabilistic algorithms under various
complexity assumptions. I'll speculate on the chances we have
to remove these assumptions altogether in the near future.
Monday, January 14, 2002
Vladlen Koltun, Tel Aviv University
Recent Developments Concerning Vertical Decompositions
Abstract:
The vertical decomposition is a widely used tool in computational geometry.
It is especially important in the context of the design of efficient geometric
algorithms, for problems like point location, range searching, ray shooting,
robot motion planning and more. The results we present have immediate algorithmic
applications to all of these problems.
We present new combinatorial bounds on the complexity of the vertical decomposition
of algebraic surfaces in four and higher dimensions. In particular, we show that an
arrangement of fixed-degree algebraic surfaces in four dimensions can be decomposed
into approximately O(n^4) cells that have constant complexity. This improves the best
previously known bound by a near-linear factor. We also discuss the special case of
linear arrangements, for which sharper results are obtained.
Tuesday, January 15, 2002
Richard Beigel, IAS
Learning a Hidden Matching
Abstract:
We consider the problem of learning a matching (i.e., a graph in which all vertices
have degree 0 or 1) in a model where the only allowed operation is to query whether
a set of vertices induces an edge. This is motivated by a problem that arises in
molecular biology. In the deterministic nonadaptive setting, we have a
$\frac{1}{2}{n \choose 2} + O(n)$ upper bound and a nearly matching $\tilde\Omega(n^2)$ lower bound.
In contrast, if we allow randomness then we obtain (by a randomized, nonadaptive algorithm)
a much lower $O(n \log{n})$ upper bound, which is best possible (even for randomized
fully adaptive algorithms).
In this talk I will describe the biological background for this problem, prove some
of the results above, and discuss some related ongoing research.
Joint work with Noga Alon, Simon Kasif, and Steven Rudich.
Monday, January 21, 2002
Avner Magen, NEC Research Institute
Dimensionality Reductions in Euclidean Space that Preserve Volumes, and Applications
Abstract:
A classical result of W. Johnson and J. Lindenstrauss states that any $n$-point subset
of a Euclidean space can be projected onto a subspace of dimension $O(\log n/\epsilon^2)$
without distorting distances by more than a multiplicative factor of $1+\epsilon$.
Here we show a natural extension of that result, where one considers the distortion of volumes
and related geometric features. We show that $n$ points in Euclidean space can be embedded
into a $t$-dimensional Euclidean space, with $t = O(k (\max\{k,1/\epsilon\})^2 \log n)$, so that no
set of size $s \le k$ changes its volume by more than $(1+\epsilon)^{s-1}$. Moreover, under this
embedding, distances of points from affine hulls of sets of at most $k-1$ points in the space do
not change by more than a factor of $1+\epsilon$. Our method can be applied to many problems of a
high-dimensional nature, such as Projective Clustering and Approximate Nearest Affine Neighbor Search.
In particular, it yields the first polylogarithmic query time approximation algorithm for the latter.
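The classical Johnson-Lindenstrauss projection underlying this line of work can be sketched with a random Gaussian matrix (a standard textbook construction, not the volume-preserving embedding of the talk, and all names below are mine): with entries N(0, 1/t), pairwise distances are preserved up to 1 +/- epsilon with high probability once t = O(log n / epsilon^2).

```python
import math, random

def jl_project(points, t, rng):
    """Project points in R^d to R^t via a matrix with i.i.d. N(0, 1/t)
    entries -- the classical Johnson-Lindenstrauss construction.  The
    volume-preserving extension discussed in the talk requires a larger
    target dimension t."""
    d = len(points[0])
    R = [[rng.gauss(0.0, 1.0 / math.sqrt(t)) for _ in range(d)]
         for _ in range(t)]                      # t x d random matrix
    return [[sum(R[i][k] * p[k] for k in range(d)) for i in range(t)]
            for p in points]
```

The map is linear, so the origin is fixed exactly; the probabilistic guarantee applies to the squared norms of difference vectors.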
Monday, January 21, 2002
Xiaodong Sun, Rutgers University
Space Lower Bounds for Distance Approximation in the Data Stream Model
Abstract:
We consider the problem of approximating the distance of two d-dimensional vectors x and y in
the data stream model. In this model, the 2d coordinates are presented as a ``stream'' of data
in some arbitrary order, where each data item includes the index and value of some coordinate and
a bit that identifies the vector (x or y) to which it belongs. The goal is to minimize the amount
of memory needed to approximate the distance. For the case of L^p-distance with p in [1,2], there
are good approximation algorithms that run in polylogarithmic space in d (here we assume that each
coordinate is an integer with O(log d) bits). However, it is still open whether such algorithms
exist for any p>2. Here we prove that they do not exist for p>3. In particular, we prove an
approximation-space tradeoff for approximating the maximum coordinate of the difference of two vectors.
We show that for c>0, any randomized algorithm that approximates the maximum distance of two length-d vectors
within a factor of d^c requires Omega(d^{1-6c}) space. As a consequence we show that for p>3/(1-6c),
any randomized algorithm that approximates the L^p distance of two length-d vectors within a factor of d^c requires
Omega(d^{1-3/p-6c}) space. The lower bound follows from a lower bound on the two-party one-round
communication complexity of this problem. This lower bound is proved using a combination of information
theory and Fourier analysis.
This is joint work with Michael Saks.
Monday, January 28, 2002
Alex Lubotzky, Hebrew University of Jerusalem
Property 'Tau' and its Applications in Combinatorics, Computational Group Theory and Geometry
Abstract:
Property 'tau' (which is related to Kazhdan Property T in
representation theory) has been found to have surprising
applications to several areas of mathematics and computer science.
We will introduce and review this property and discuss some of
its (old and new) applications, including to:
(1) explicit constructions of expanding graphs (à la Margulis);
(2) diameters of finite simple groups;
(3) efficient methods of generating random elements in
finite groups (joint work with I. Pak); and
(4) the Thurston conjecture on non-vanishing of the first Betti number
of finite covers of hyperbolic manifolds.
Tuesday, January 29, 2002
Benjamin Sudakov, IAS
Vertex List Coloring by Semirandom Method
Abstract:
The semirandom method (the Rödl nibble) is a general approach to proving the existence of
an object by generating it through many iterations, applying probabilistic reasoning at each step.
One area of Combinatorics where the semirandom method has had the greatest impact is graph coloring.
In fact, many of the strongest results in graph coloring over the past decade are examples of this method.
In this talk we give a detailed illustration of how the semirandom method works by proving a result about vertex list coloring.
Joint work with Bruce Reed.
Monday, February 4, 2002
Pavel Pudlak, Mathematical Institute, Academy of Sciences of the Czech Republic, Prague, Czech Republic
An Application of Hindman's Theorem to a Problem on Communication Complexity
Abstract:
We consider the $k$-party communication complexity of the problem of determining whether a word $w$ is of the form
$w_0a_1w_1a_2\dots w_{k-1}a_kw_k$, for fixed letters $a_1,\dots,a_k$. Using the well-known theorem of Hindman
(a Ramsey type result about finite subsets of natural numbers) we prove that for $k=4$ and $5$, the communication
complexity of the problem increases with the length of the word $w$.
Tuesday, February 5, 2002
Noga Alon, Tel Aviv University and IAS
Polynomials in Discrete Mathematics (A survey lecture)
Abstract:
Elementary properties of polynomials can be very powerful in the study of various combinatorial problems.
I will illustrate this fact by discussing various representative examples in Combinatorial Geometry, in
Information Theory, in Additive Number Theory and in Graph Theory.
Thursday, February 7, 2002
Mikkel Thorup, AT & T Research
Compact Oracles for Reachability and Approximate Distances in Planar Digraphs
Abstract:
It is shown that a planar digraph can be preprocessed in near-linear time, producing a near-linear
space distance oracle that can answer reachability queries in constant time. The oracle can be distributed
as an $O(\log n)$ space label for each vertex and then we can determine if one vertex can reach another
considering their two labels only. The approach generalizes to approximate distances in weighted planar
digraphs where we can then get a $(1+\eps)$ approximation distance in $O(\log\log \Delta+1/\eps)$ time
where $\Delta$ is the longest finite distance in the graph and weights are assumed to be non-negative integers.
Our scheme can be extended to find and route along the shortest dipaths. Our technique is based on a novel
{\em dipath decomposition of planar digraphs\/} that instead of using the standard separator with $O(\sqrt{n})$
vertices, in effect finds a separator using a constant number of dipaths.
Monday, February 11, 2002
Vojta Rodl, Emory University
Independent Deuber Sets in Graphs on the Natural Numbers
Abstract:
Extending theorems of Schur and van der Waerden, Rado characterized, in 1933, all partition regular systems
of equations, that is, those systems of linear equations Ax=0 for which a monochromatic solution can be found
under any finite partition of the integers. In the talk we discuss a common generalization of Rado's Theorem,
Deuber's extension of it, and Ramsey's Theorem.
Joint work with D. S. Gunderson, I. Leader and H. J. Proemel.
Monday, February 18, 2002
Benny Sudakov, Princeton University and IAS
Probabilistic Method and Ramsey Theory
Abstract:
"Ramsey Theory" refers to a large body of deep results in mathematics concerning partitions of large collections.
Its underlying philosophy is captured succinctly by the statement that "in a large system, complete disorder is impossible".
Since the publication of the seminal paper of Ramsey in 1930, this subject has grown with increasing vitality, and is
currently among the most active areas in Combinatorics. One of the most important factors in the development of Ramsey
Theory was the successful application of the so-called "Probabilistic Method".
In this talk I will describe some classical results of Ramsey Theory together with recent progress on an old
question of Erdos about Ramsey numbers of sparse graphs. This result was obtained using probabilistic arguments.
This is joint work with A. Kostochka.
Tuesday, February 19, 2002
Oded Regev, IAS
Quantum Computation and Lattice Problems
Abstract:
We present the first explicit connection between quantum computation and lattice problems. Namely, we show an
algorithm that solves the $\tilde{\Theta}(n^3)$-unique-SVP under the assumption that a certain quantum problem
has a solution. Moreover, using quantum reductions, we prove an average-case to worst-case connection which is
better than the known results.
Monday, February 25, 2002
Michael Krivelevich, Tel Aviv University
Turan Numbers of Bipartite Graphs and Related Questions
Abstract:
We obtain several new results about the maximum possible number of edges in graphs with a given number of
vertices that contain no copy of a given bipartite graph. The results are proven by applying (some variations of)
a simple yet amazingly powerful double-counting argument, which has been used by several researchers,
including Rodl, Kostochka, Gowers and Sudakov.
Joint work with Noga Alon and Benny Sudakov.
Tuesday, February 26, 2002
Andris Ambainis, IAS
Quantum Computing I (Survey Lecture)
Abstract:
I will introduce the model of quantum computing and show several quantum algorithms that outperform classical ones:
- Deutsch's algorithm for computing the parity of two bits with one query,
- Simon's algorithm for finding a hidden vector exponentially faster than any classical algorithm,
- Shor's polynomial time quantum algorithm for factoring.
Then, I will describe the general hidden subgroup problem. Its abelian case is solvable in quantum polynomial
time and includes Simon's and Shor's problems as particular cases. The non-abelian case is open, and solving it
would imply a polynomial time quantum algorithm for graph isomorphism.
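The first item on the list is small enough to simulate directly. The sketch below (my own state-vector simulation, with names of my choosing) runs Deutsch's algorithm on the four-dimensional state space |x>|y> and recovers f(0) XOR f(1) from a single oracle application:

```python
import math

H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]   # Hadamard gate

def kron(A, B):
    """Kronecker product of two matrices."""
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

def apply(M, v):
    """Matrix-vector product."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def deutsch(f):
    """Deutsch's algorithm: one oracle query determines f(0) XOR f(1).
    Basis state |x>|y> is stored at index 2*x + y."""
    I2 = [[1, 0], [0, 1]]
    v = [0.0, 1.0, 0.0, 0.0]                  # start in |0>|1>
    v = apply(kron(H, H), v)
    # oracle U_f: |x>|y> -> |x>|y XOR f(x)>   (the single query)
    U = [[0.0] * 4 for _ in range(4)]
    for x in range(2):
        for y in range(2):
            U[2 * x + (y ^ f(x))][2 * x + y] = 1.0
    v = apply(U, v)
    v = apply(kron(H, I2), v)                 # Hadamard on the query qubit
    p1 = v[2] ** 2 + v[3] ** 2                # Pr[first qubit measures 1]
    return 1 if p1 > 0.5 else 0
```

Phase kickback makes the measurement deterministic: the first qubit ends in exactly the state |f(0) XOR f(1)>.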
Monday, March 4, 2002
Shmuel Safra, Tel Aviv University
Putting a Junta to the Test
Abstract:
We show a property tester for whether a Boolean function is a k-junta: a randomized procedure that
probes f at poly(k/epsilon) points and accepts any function that depends on at most k variables, while,
on the other hand, rejecting w.h.p. any function which is not epsilon-close to such a function.
Joint work with Eldar Fischer and Guy Kindler.
Monday, March 11, 2002
Van H. Vu, University of California, San Diego
Generating Random Regular Graphs
Abstract:
A random regular graph of degree d on n points is obtained by sampling with the uniform distribution from
the set of all simple regular graphs of degree d on n points. This is one of the most popular random graph
models and has many properties of crucial interest in computer science (for instance, random regular graphs
are very good expanders).
How to generate a random regular graph is a question of principal importance from both the theoretical
and practical points of view. For the case where d is a constant, the configuration model provides an
effective method. But little is known when d goes to infinity together with n.
In this talk, we shall present a nearly quadratic algorithm for the case d < n^{1/3}. The analysis of
this algorithm is a new application of our recent project on concentration.
Tuesday, March 12, 2002
Andris Ambainis, IAS
Quantum Computing, Part II
Abstract:
I will continue my first lecture, describing Shor's polynomial
time quantum algorithm for factoring. Then, I will describe the
general hidden subgroup problem, which includes factoring, discrete
log and other problems as special cases. The abelian case of
this problem is solvable in quantum polynomial time. The nonabelian
case is open and solving it would imply a polynomial time quantum
algorithm for graph isomorphism.
Monday, March 18, 2002
Yi Zhao, Rutgers University
Graph Tiling Problems
Abstract:
Given two graphs $G$ and $H$, an $H$-matching of $G$ is a subgraph of
$G$ consisting of vertex-disjoint copies of $H$. For an $r$-chromatic
graph $H$ on $h$ vertices, we write $u=u(H)$ for the smallest
color-class size in any $r$-coloring of $H$. The critical chromatic
number of $H$ is the number $\chi_{cr}(H)=(r-1)h/(h-u)$. A conjecture
of Koml\'{o}s states that for every graph $H$, there is a constant $K$
such that if $G$ is any $n$-vertex graph of minimum degree at least
$(1-(1/\chi_{cr}(H)))n$, then $G$ contains an $H$-matching that covers
all but $K$ vertices of $G$. We prove the conjecture for sufficiently
large values of $n$.
Joint work with Ali Shokoufandeh
Tuesday, March 19, 2002
Andris Ambainis, IAS
Quantum Computing, Part III
Abstract:
Grover's search is a quantum algorithm for solving a general exhaustive
search problem with N possibilities in \sqrt{N} quantum steps.
This algorithm gives a quadratic speedup over the best
classical algorithm for many search problems, including
NP-complete ones.
I will describe the algorithm and show two intuitive visualizations
of why it works. Then, I will prove a lower bound which implies that
Grover's algorithm is optimal in the black box model.
Note: The lecture is independent from "Quantum Computing II".
The only background needed to understand it is the basic model
of quantum computing (quantum states, unitary transformations and
measurements) which I described in my first lecture.
Monday, March 25, 2002
Penny Haxell, University of Waterloo and Bell Labs
Transversals in Graphs
Abstract:
Let G be a graph whose vertex set is partitioned into classes
V_1, V_2, ..., V_t. Under what conditions can one guarantee the
existence of an independent set \{v_1, ...,v_t\} in G such that
v_i lies in V_i for each i? Several results of this general type will
be outlined, some of which have topological proofs as well as
combinatorial proofs. Links to other problems such as hypergraph
matching, list colouring and strong colouring will also be discussed.
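The object in question is easy to state in code. A brute-force check (exponential in the number of classes, purely illustrative) makes the definition concrete:

```python
from itertools import product

def independent_transversal(classes, edges):
    """Search for an independent set containing exactly one vertex from
    each class, by trying every choice. Exponential in the number of
    classes; meant only to illustrate the definition."""
    E = {frozenset(e) for e in edges}
    for choice in product(*classes):
        if all(frozenset((u, v)) not in E
               for i, u in enumerate(choice)
               for v in choice[i + 1:]):
            return list(choice)
    return None

# classes {0,1}, {2,3}, {4,5} with a few edges across them
classes = [[0, 1], [2, 3], [4, 5]]
edges = [(0, 2), (0, 4), (1, 3)]
assert independent_transversal(classes, edges) == [0, 3, 5]
```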
Tuesday, March 26, 2002
Alexander Razborov, IAS
Proof Complexity, I (A Survey Lecture)
Abstract:
(Propositional) proof complexity deals with the following basic
question: how hard is it to prove a given theorem of propositional logic
using arguments from a prescribed class? Since propositional formulas can
easily encode virtually any statement about discrete objects, this area is
tightly connected with such diverse disciplines as computational
complexity, mathematical logic, algebra, program verification,
automated theorem proving, etc. In these lectures I will give a (biased)
sample of concepts and results in propositional proof complexity that
look the most appealing from the mathematical point of view.
Monday, April 1, 2002
Kobbi Nissim, DIMACS and NEC
Communication Complexity and Secure Function Evaluation
Abstract:
A secure function evaluation protocol allows two parties to jointly
compute a function $f(x,y)$ of their (private) inputs in a manner not
leaking more information than necessary. The problem was first considered
by Yao, who presented the 'millionaires problem', where two players want
to compare their values but reveal no other information.
A major result in this field is that any function that can be computed
using polynomial resources can be computed *securely* using polynomial
resources (where 'resources' refers to communication and computation).
This result follows from a general transformation from a circuit
computing $f$ to a secure protocol that evaluates $f$. Although the
resources used by protocols resulting from this transformation are
polynomial in the circuit size, they are in general much higher than
those required for an insecure computation of $f$.
We suggest two new methodologies for the design of efficient secure
protocols that differ with respect to their underlying computational
models. One methodology utilizes the communication complexity tree (or
branching program) representation of $f$. The second utilizes the
circuit computing $f$, enhanced with look-up tables, as its underlying
computational model.
During the talk I will present some of the many applications of these
methodologies, in particular a simple protocol for the 'millionaires
problem' that is more efficient than previously known ones. I will also
present some related open problems.
Joint work with Moni Naor.
Tuesday, April 2, 2002
Alexander Razborov, IAS
Proof Complexity, II (A Survey Lecture)
Abstract:
Time permitting, I hope to do the following two topics:
(a) Algebraic proof systems: the case of binomial ideals (cont'd)
(b) Width-size relation for Resolution.
Monday, April 8, 2002
Alexei Kitaev, California Institute of Technology
Quantum Interactive Proofs
Abstract:
An interactive proof is a game between two players in which they exchange
a certain number of messages. One player, called the verifier, performs
polynomial computation and determines the result ("accept" or "reject")
according to a given protocol. The other player, the prover, has unlimited
computational power and tries to maximize the verifier's probability of
acceptance. I will consider a quantum analog of such games. The class QIP of
problems defined by such games lies between PSPACE and EXP. Any protocol
that admits two-sided error can be reduced to a protocol with one-sided
error. Any protocol with a polynomial number of messages can be simulated
by a protocol with just 3 messages.
Tuesday, April 9, 2002
Omer Reingold, IAS
Randomness Conductors (A Survey Lecture)
Abstract:
I will discuss several fundamental combinatorial objects such as
expander graphs, randomness extractors and universal hash functions.
In all of these objects, a certain guarantee on the input
"entropy" is converted to a guarantee on the output "entropy".
In this sense, all of these functions can be viewed as instances of
a more general object that we call "randomness conductors".
Nevertheless, as these objects were originally defined for different
sets of applications and motivations, what is exactly meant by "entropy"
and "guarantees" in the above description can vary quite a bit.
Monday, April 15, 2002
Tibor Szabo, ETH Zurich
Unique Sink Orientation of Cubes
Abstract:
Unique sink orientation of cubes is an abstraction of several
geometric optimization problems, like linear programming and
smallest enclosing ball. Similar generalizations were
widely studied in order to obtain efficient combinatorial
algorithms in the {\em unit cost model}.
An orientation of the edges in the graph of the $n$-dimensional
hypercube is called {\em unique sink orientation} (or {\em USO}) if
all the faces have exactly one sink (a vertex with no outgoing
edges). In the {\em vertex-oracle} model the orientation is given
implicitly: we can access an arbitrary vertex of the cube, for which
the oracle reveals the orientations of the incident edges. This
operation is called {\em vertex evaluation}. We are interested in
minimizing the number of evaluations needed to find the global sink
of the cube.
While the current upper and lower bounds are almost as far apart as
they could be, we still believe USOs do have a strong enough
structure to be interesting. In the talk I'll try to convince you
about this. (Joint work with Ingo Schurr and Emo Welzl.)
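To make the vertex-oracle model concrete, here is a toy sketch (my own illustration, not the speakers' algorithm). The "uniform" orientation, in which every edge points toward the endpoint whose coordinate in that dimension is 0, is a USO, and naive path-following reaches its sink while counting vertex evaluations.

```python
def outmap_uniform(v, n):
    """Vertex oracle for the 'uniform' USO of the n-cube: the edge in
    dimension i leaves v exactly when bit i of v is 1 (it points toward
    the endpoint with a 0 there). Returns the set of outgoing dimensions."""
    return {i for i in range(n) if (v >> i) & 1}

def find_sink(outmap, n, start=0):
    """Follow an arbitrary outgoing edge until none remain; return the
    sink and the number of vertex evaluations used. Fine for acyclic
    USOs; the point of the talk is to do much better in general."""
    v, evals = start, 0
    while True:
        out = outmap(v, n)
        evals += 1
        if not out:
            return v, evals
        v ^= 1 << min(out)    # walk along one outgoing edge

sink, evals = find_sink(outmap_uniform, 4, start=0b1011)
assert sink == 0 and evals == 4
```

Here the global sink is the all-zeros vertex, and path-following needs one evaluation per 1-bit of the start vertex, plus one for the sink itself.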
Monday, April 22, 2002
William Gasarch, University of Maryland
Fault Diagnosis: What if the Number of Faulty Processors is more than Half
Abstract:
Consider the following problem:
There are $n$ processors of which $\le b$ are faulty.
You can have one processor test another and report back
the results. Good processors report correct results.
Faulty processors report either correct or incorrect
results (they can be malicious). We want to classify
the processors into good and faulty ones, with a minimum
number of tests.
If $b<n/2$ then this problem has been well studied.
What if $b\ge n/2$? It is well known that in this case
you CANNOT classify.
But here is the new question:
What if you only want to find a (small) set of possible
classifications, one of which is correct?
How many tests does this take?
We obtain upper and lower bounds on this problem
that are fairly close: $\Theta(n^2/(n-b))$.
We also discuss variants of the problem.
This is joint work with
Adam Bartgeil, Alex Chan, and Frank Stephan
Tuesday, April 23, 2002
Jozsef Balogh, IAS
Monotone Graph Properties
Abstract:
A graph property $P$ is called {\em monotone} if, whenever a graph
has property $P$, so does each of its subgraphs. For example,
the property that a graph is {\em $H$-free}, i.e., it contains
no subgraph isomorphic to a fixed so-called forbidden
subgraph $H$, is monotone. We measure the {\em size} of a property
$P$ by counting the number of graphs with property $P$ on $n$
labeled vertices.
Tur\'an's theorem provides an upper bound of roughly
$(1-1/p){n \choose 2}$ on the number of edges of a
$K_{p+1}$-free graph with $n$ vertices. Clearly, every subgraph
of a $K_{p+1}$-free graph is $K_{p+1}$-free, which gives
a lower bound on the size of this property. Erd\H os, Frankl,
R\"odl, Pr\"omel, Steger, and others proved that this bound is
not far from being tight, and generalized their result in
various directions.
In a recent paper, joint with Bollob\'as and Simonovits, we
managed to establish, among other results, the following
theorem: for every monotone property $P$, there is an integer $k$ and a
positive constant $c$ such that the size of $P$ is between
$2^{(1-1/k){n\choose 2}}$ and $2^{(1-1/k){n\choose 2}+n^{2-c}}$.
We applied probabilistic arguments and Szemer\'edi's
Regularity Lemma. We also present several open problems.
Joint work with B. Bollob\'as and M. Simonovits
Friday, April 26, 2002
Simon Kasif, Boston University
Active Learning of Biological Systems
Abstract:
In this talk I will describe the application of computational learning
and computer science algorithms to reverse engineering, predictive
modeling and analysis of biological systems.
Our research focuses on genes since genes and the post-translational
interactions of their products provide the basic scripts for biological
cell behavior. Our research involves new approaches for comparing
the genomic content of human and mouse genomes, creating graphical
probabilistic models of apoptosis (programmed cell death) pathways in
order to interpret gene expression data, creating reliable functional
assignments to newly sequenced genes, and identifying conserved
functional circuits in metabolic pathways.
In many cases the models we construct rely on probabilistic automata,
circuits and graphs (networks). While these approaches are rather
abstract, they allow us to incorporate direct measurements obtained in
biological experiments into the interpretation process to obtain high
accuracy predictions.
Tuesday, April 30, 2002
Michael Capalbo, IAS
Explicit Unique-Neighbor Expanders
Abstract:
An expander graph is a graph $\Gamma$ in which, for every not too large
subset $S$ of vertices of $\Gamma$, there are a large number of vertices
outside of $S$ that are adjacent to at least one vertex in
$S$. We say that $\Gamma$ is also a unique-neighbor expander if there
are a large number of vertices outside of $S$ that are adjacent to
exactly one vertex in $S$. Unique-neighbor expanders are very useful
for various simple distributed algorithms. However, their construction
has been a long-standing open question. Here we answer this
long-standing open question by presenting explicit constructions of
infinite families of 6-regular, 4-regular, and 3-regular
unique-neighbor expanders.
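The unique-neighbor condition is easy to state in code. This small checker (illustrative only) counts, for a given set S, the vertices outside S with exactly one neighbor inside S:

```python
from collections import Counter

def unique_neighbors(adj, S):
    """Count vertices outside S adjacent to exactly one vertex of S.
    adj maps each vertex to its neighbor list."""
    S = set(S)
    hits = Counter(w for v in S for w in adj[v] if w not in S)
    return sum(1 for c in hits.values() if c == 1)

# a 6-cycle: the set {0, 3} has four outside vertices (1, 2, 4, 5),
# each adjacent to exactly one of 0 and 3
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
assert unique_neighbors(adj, {0, 3}) == 4
```

A unique-neighbor expander demands that this count stay large for every not-too-large S, which is strictly stronger than ordinary vertex expansion.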
Monday, May 6, 2002
Tal Malkin, AT&T Labs - Research
From Minicrypt to Cryptomania: Relationships Among the Fundamental Cryptographic Primitives
Abstract:
The cryptographic research of the last few decades has proposed
solutions to numerous cryptographic tasks, proving the correctness and
security of these solutions under a growing number of unproven
computational assumptions. Indeed, since the security of most
cryptographic tasks implies that P is not equal to NP, unconditional
proofs of security seem well beyond the reach of complexity theory
today. A major goal of cryptography is thus studying the minimal
assumptions necessary for securely implementing cryptographic
protocols, and establishing the relationships between different
primitives.
We study the relationships among some of the most fundamental
primitives and protocols in cryptography: public-key encryption,
oblivious transfer (which is equivalent to general secure multi-party
computation), key agreement, trapdoor functions, and trapdoor
permutations. Despite the importance of these primitives, little was
known about the relationships among them. We resolve all the missing
relationships with respect to black-box reductions. For example, we show
that public-key encryption and oblivious transfer are incomparable under
black-box reductions. Furthermore, relative to black-box reductions,
neither oblivious transfer nor 1:1 trapdoor functions imply trapdoor
permutations. Finally, we show that there is no black-box reduction from
trapdoor functions to public-key encryption. Our separation results
employ the oracle separation paradigm introduced by Impagliazzo and
Rudich, as well as a new model of separation.
Our results demonstrate that, borrowing from the terminology of
Impagliazzo, there are many possible worlds that lie between Minicrypt
(the world that contains only one-way functions and primitives implied
by them) and Cryptomania (the world that contains trapdoor permutations
and the rich cryptographic possibilities that they offer).
The talk is based on joint works with Y. Gertner, S. Kannan,
O. Reingold, and M. Viswanathan (FOCS 00), and with Y. Gertner
and O. Reingold (FOCS 01).
Tuesday, May 7, 2002
Irit Dinur, Oded Regev and Clifford Smyth, IAS
The Hardness of 3-Uniform Hypergraph Coloring
Abstract:
We prove that coloring a 3-uniform 2-colorable hypergraph with
any constant number of colors is NP-hard. The best known algorithm
[KNS] colors such a hypergraph using O(n^{1/5}) colors. Our result immediately
implies that for any constants k>2 and c_2>c_1>1, coloring
a k-uniform c_1-colorable hypergraph with c_2 colors is
NP-hard; leaving completely open only the k=2 graph case.
We are the first to obtain a hardness result for
approximately-coloring a 3-uniform hypergraph that is colorable
with a constant number of colors. For k\ge 4 such a result has
been shown by [GHS], who also discussed the inherent
difference between the k=3 case and k\ge 4.
Our proof presents a new connection between the Long-Code and the
Kneser graph, and relies on the high chromatic numbers of the
Kneser graph and the Schrijver graph.
Monday, May 13, 2002
Frederic Green, Clark University and CWI
The Correlation Between Parity and Quadratic Polynomials Mod 3
Abstract:
It is a challenging open problem to prove exponential lower bounds on
circuits consisting of a single threshold gate at the top, mod
gates in the middle layer, and small AND gates at the bottom. The
background and motivation behind this problem will be reviewed in
this talk. We also present new exponentially small upper bounds on
the correlation between parity and quadratic polynomials mod 3.
One corollary of this is that in order to compute parity, circuits
consisting of a threshold gate at the top, mod 3 gates in the
middle, and AND gates of fan-in two at the inputs must be of size
$2^{\Omega(n)}$. This is the first result of this type for general
mod subcircuits with ANDs of fan-in greater than 1. This yields an
exponential improvement over a recent result of Alon and Beigel.
The proof uses a novel inductive estimate of the relevant
exponential sums introduced by Cai, Green and Thierauf. The
exponential sum bounds are tight.
This result is to appear at Computational Complexity 2002.
Tuesday, May 14, 2002
Alexander Razborov, IAS
Quantum Communication Complexity of Symmetric Predicates
Abstract:
Alice holds an input $x\in X$, Bob holds $y\in Y$ ($X,Y$ finite sets),
and they exchange messages to evaluate a Boolean predicate $f: X\times
Y\rightarrow \{0,1\}$. They can employ the laws of quantum mechanics
and are allowed to send each other qubits instead of classical
bits. Additionally, a small output error is tolerated. How many qubits
should Alice and Bob exchange in the worst case to achieve their goal?
We determine this quantity up to a factor of $O(\log n)$ for any
predicate $f(x,y)$ when $x,y$ are subsets of an $n$-element set, and
$f(x,y)$ depends only on the size of $x\cap y$. In particular, we
prove an $\Omega(\sqrt n)$ lower bound on the complexity of the set
disjointness predicate.
Monday, June 3, 2002
Yaoyun Shi, California Institute of Technology
Polynomial Approximations and Quantum Lower Bounds
Abstract:
The approximation degree of a Boolean function is the smallest degree of
polynomials that approximate the function. It has been shown that the
approximation degree is a lower bound for the "black-box" quantum
complexity, a connection discovered by Beals, Buhrman, Cleve, Mosca,
and de Wolf [FOCS98]. Except for the case of symmetric Boolean functions,
proving approximation degree lower bounds remains a challenging task
in general.
In this talk I will use this "polynomial method" to prove a quantum
lower bound on the number of oracle accesses for solving the Collision
Problem, which is to find two distinct indices i and j such that f(i)=f(j),
where f is a two-to-one function of domain size n and is given as an oracle.
I will show that the technique of Aaronson [STOC02] can be extended to improve
his Omega(n^{1/5}) lower bound to Omega(n^{1/3}), which is asymptotically
tight. Our lower bound also implies an Omega(n^{2/3}) lower bound on the
number of queries to the input for Element Distinctness, which is to decide
whether or not n input numbers are all distinct. The previous best quantum lower
bound is Omega(sqrt(n)), by a simple reduction from Grover's search problem.
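For contrast with the quantum bounds, the classical version of the Collision Problem is straightforward. The tabulation sketch below (my own illustration) uses O(n) queries in the worst case; random sampling would find a collision in roughly sqrt(n) expected queries by the birthday bound.

```python
def find_collision(f, domain):
    """Classical collision search by tabulation: remember every value
    seen so far and stop at the first repeat. For a two-to-one f this
    uses at most n/2 + 1 queries."""
    seen = {}
    for i in domain:
        y = f(i)
        if y in seen:
            return seen[y], i    # two distinct indices with f(i) == f(j)
        seen[y] = i
    return None

f = lambda i: i // 2             # a two-to-one function on 0..9
i, j = find_collision(f, range(10))
assert i != j and f(i) == f(j)
```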
Monday, June 10, 2002
Amihood Amir, AT&T Research
Approximate Pattern Matching - The Hamming Distance Case
Abstract:
Many different meanings can be assigned to the words "approximation"
of pattern matching. One possibility is that of allowing "local"
errors.
Intuitively, we group under the label "local" errors that take place
in a bounded location, as opposed to changes that permeate the entire
data (e.g. scaling, rotation).
Specifically, consider the most limited local error -- the
mismatch. This error occurs in a single symbol and affects only its
location. In contrast, insertions and deletions have a global effect,
although the error itself is confined to a single location.
We discuss known techniques and upper bounds for matching with the
mismatch local error. We also present a new algorithm that finds in
a length-n text all locations where a length-m pattern matches with
at most k mismatches, in time O(n sqrt(k log k)).
The talk uses the approximate Hamming distance problem only as an
excuse for a quick review of some main techniques in the field. Some
of these may come in handy even if the listener is not (heaven forbid)
a researcher in pattern matching.
(The new result is joint with Moshe Lewenstein and Ely Porat.)
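As a baseline for the result above, here is the naive O(nm) mismatch-counting algorithm (an illustrative sketch, not the O(n sqrt(k log k)) method of the talk):

```python
def k_mismatch(text, pattern, k):
    """Report all alignments where pattern matches text with at most k
    mismatches, by direct Hamming-distance counting at each position."""
    n, m = len(text), len(pattern)
    out = []
    for i in range(n - m + 1):
        d = sum(1 for a, b in zip(text[i:i + m], pattern) if a != b)
        if d <= k:
            out.append(i)
    return out

assert k_mismatch("abracadabra", "abr", 0) == [0, 7]  # exact occurrences
assert k_mismatch("abracadabra", "abr", 1) == [0, 7]
```

The fast algorithms in this area replace the inner count with convolution (FFT) and combinatorial filtering tricks; the naive loop above is the benchmark they beat.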
Last Updated: June 4, 2002