
Complexity

Overview

Our recent work in complexity theory addresses problems in the following areas: communication complexity, computation over massive data sets, data structures, hardness of approximation, lattice problems, learning theory, coding theory, constraint satisfaction, and derandomization.

Communication complexity

Quantum vs. classical one-way communication. For the model of one-way communication complexity, Ziv Bar-Yossef, T.S. Jayram, and Iordanis Kerenidis have exhibited a problem for which there is a quantum protocol that is exponentially more powerful than any bounded-error randomized protocol. Specifically, they define the hidden matching problem: Alice gets as input an n-bit string x ∈ {0,1}^n, Bob gets a perfect matching M on the n coordinates of x, and Bob's goal is to output a tuple (i,j,b) such that the edge (i,j) ∈ M and b = x_i ⊕ x_j. It is shown that the quantum one-way communication complexity of this problem is O(log n), yet any randomized one-way protocol with bounded error must use Ω(sqrt{n}) bits of communication.

  • Z. Bar-Yossef, T. S. Jayram and I. Kerenidis. Exponential separation of quantum and classical one-way communication complexity. STOC, 2004; SIAM Journal on Computing, 38(1), 2008.
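
As a small concrete illustration of the hidden matching problem itself (a toy sketch of the input/output format, not the quantum or classical protocols from the paper; indices are 0-based here):

```python
import random

def hidden_matching_answer(x, matching):
    """Bob's task: output a tuple (i, j, b) with (i, j) an edge of his matching M
    and b = x_i XOR x_j. With direct access to x this is trivial; the result above
    says a classical one-way message from Alice must carry Omega(sqrt(n)) bits,
    while a quantum message of O(log n) qubits suffices."""
    i, j = random.choice(matching)
    return (i, j, x[i] ^ x[j])

x = [1, 0, 1, 1]                    # Alice's n-bit string (n = 4)
matching = [(0, 3), (1, 2)]         # Bob's perfect matching on the coordinates
print(hidden_matching_answer(x, matching))   # e.g. (0, 3, 0) since x_0 XOR x_3 = 0
```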

Hellinger strikes back: a note on the multi-party information complexity of AND. The AND problem on t bits is a promise decision problem where either at most one bit of the input is set to 1 (NO instance) or all t bits are set to 1 (YES instance). T.S. Jayram gave a new and simplified proof of an Ω(1/t) lower bound on the information complexity of AND in the number-in-hand model of communication. This has applications to proving space lower bounds on estimating the frequency moments and norms. The proof exploits the information geometry of communication protocols via Hellinger distance in a novel manner and avoids the analytic approach inherent in previous work.

  • T.S. Jayram. Hellinger Strikes Back: A Note on the Multi-Party Information Complexity of AND. RANDOM, 2009.
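
For reference (a standard definition, not specific to the paper): the Hellinger distance between distributions P and Q that drives such arguments is h(P,Q) = (1/√2)·||√P − √Q||₂, whose square equals 1 − ∑_z √(P(z)·Q(z)).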

Lower bounds for edit distance and product metrics via Poincaré-type inequalities. Alexandr Andoni (MIT), T.S. Jayram, and Mihai Patrascu show that any sketching protocol for edit distance achieving a constant approximation requires nearly logarithmic (in the strings' length) communication complexity. This is an exponential improvement over the previous lower bound. From a technical perspective, their contribution is a direct-sum theorem for sketching product metrics: for any metric X that requires sketches of size at least a sufficiently large constant, sketching the max-product metric ℓ_∞^d(X) (the d-fold product of X under the max norm) requires Ω(d) bits. The proof uses a novel technique for information complexity based on Poincaré-type inequalities and suggests an intimate connection between non-embeddability, sketching and communication complexity.

  • A. Andoni, T.S. Jayram, and M. Patrascu. Lower Bounds for Edit Distance and Product Metrics via Poincaré-Type Inequalities. Symposium on Discrete Algorithms (SODA), 2010 (to appear).

On the communication complexity of read-once AC0 formulae. T.S. Jayram, jointly with Swastik Kopparty (MIT) and Prasad Raghavendra (formerly at Univ. of Washington, now at Georgia Tech.), studied the 2-party randomized communication complexity of read-once AC0 formulae. For balanced AND-OR trees with n inputs and depth d, they show that the communication complexity is Ω(n/4^d), while for general AND-OR trees the communication complexity is n/2^{Ω(d log d)}. These results generalize the classical results on the communication complexity of set disjointness and the tribes functions. Their techniques build on and extend the information complexity methodology for proving lower bounds on randomized communication complexity.

  • T.S. Jayram, S. Kopparty and P. Raghavendra. On the Communication Complexity of Read-Once AC0 Formulae. CCC 2009.
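
A minimal sketch of the function class in question (a binary AND-OR tree with alternating levels is used for concreteness; in the communication problem the n leaf bits are split between the two players):

```python
def and_or_tree(bits, is_and=True):
    """Evaluate a balanced AND-OR tree with alternating levels over the leaf bits."""
    if len(bits) == 1:
        return bits[0]
    half = len(bits) // 2
    left = and_or_tree(bits[:half], not is_and)
    right = and_or_tree(bits[half:], not is_and)
    return (left & right) if is_and else (left | right)

# Depth 2 over 4 leaves: AND(OR(x1, x2), OR(x3, x4)); deeper trees generalize
# set disjointness and the tribes function mentioned above.
print(and_or_tree([0, 1, 1, 0]))   # 1
print(and_or_tree([0, 0, 1, 0]))   # 0
```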

Computational Models for Massive Data Sets

Sketching complexity of pattern matching. Ziv Bar-Yossef, T. S. Jayram, Robert Krauthgamer, and Ravi Kumar addressed the problems of pattern matching and approximate pattern matching in the sketching model. They showed it is impossible to compress the text into a small sketch and use only the sketch to decide whether a given pattern occurs in the text. They also proved a sketch size lower bound for approximate pattern matching, and showed it is tight up to a logarithmic factor.

  • Z. Bar-Yossef, T. S. Jayram, R. Krauthgamer and R. Kumar. The sketching complexity of pattern matching. RANDOM, 2004.

Lower bounds for randomized read/write stream algorithms. Motivated by the capabilities of modern storage architectures, Paul Beame (U. of Washington), T.S. Jayram, and Atri Rudra (U. of Washington) consider a generalization of the data-stream model in which the algorithm has sequential access to multiple streams and can also write onto streams. There is no limit on the size of the streams, but the number of passes made over the streams is restricted; as in the data-stream model, the amount of internal memory available to the algorithm is scarce. They resolve the main open problem from previous work: proving lower bounds in this model for algorithms that are allowed 2-sided error. Previously, such lower bounds were known only for deterministic and 1-sided error randomized algorithms. For the classical set disjointness problem (an invaluable reduction candidate for deriving lower bounds for many other problems involving data streams and other randomized models of computation), they show a near-linear lower bound on the internal memory used by any randomized algorithm with 2-sided error that makes o(log N / log log N) passes over the streams. This bound is almost optimal, since there is a simple algorithm that solves the problem using logarithmic memory when O(log N) passes over the streams are allowed (a toy sort-and-scan illustration follows the citation below).

Applications include near-linear internal-memory lower bounds for well-known problems in the literature: (1) approximately counting the number of distinct elements in the input (F0); (2) approximating the frequency of the mode of an input sequence; (3) computing the join of two relations; and (4) deciding whether some node of an XML document matches an XQuery (or XPath) query. Their techniques involve a novel direct-sum type of argument that yields lower bounds for many other problems, and their results asymptotically improve the previously known bounds even in the deterministic and 1-sided error models of computation.

  • P. Beame, T. S. Jayram, and A. Rudra. Lower bounds for randomized read/write stream algorithms. Symposium on Theory of Computing (STOC), 2007.
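
A toy illustration (Python lists standing in for streams, and not the paper's construction) of why read/write streams with O(log N) sequential passes and little working memory make set disjointness easy: merge-sort the tagged stream in about log N passes, then scan once for a value tagged by both parties.

```python
def merge(run_a, run_b):
    """Merge two sorted runs, reading each run sequentially (as a stream head would)."""
    out, i, j = [], 0, 0
    while i < len(run_a) and j < len(run_b):
        if run_a[i] <= run_b[j]:
            out.append(run_a[i]); i += 1
        else:
            out.append(run_b[j]); j += 1
    return out + run_a[i:] + run_b[j:]

def stream_sort(stream):
    """Bottom-up merge sort: each pass merges adjacent runs of length `run` into runs
    of length 2*run, writing them onto a fresh output stream; O(log N) passes."""
    data, run = list(stream), 1
    while run < len(data):
        nxt = []
        for i in range(0, len(data), 2 * run):
            nxt += merge(data[i:i + run], data[i + run:i + 2 * run])
        data, run = nxt, 2 * run
    return data

def disjoint(a_set, b_set):
    """Tag each element with its owner, sort, and scan for a value owned by both."""
    s = stream_sort([(x, 'A') for x in a_set] + [(y, 'B') for y in b_set])
    return not any(s[k][0] == s[k + 1][0] and s[k][1] != s[k + 1][1]
                   for k in range(len(s) - 1))

print(disjoint({1, 4, 7}, {2, 5, 8}))   # True
print(disjoint({1, 4, 7}, {4, 9}))      # False
```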

Streaming numerical linear algebra problems. Ken Clarkson and David Woodruff found one-pass algorithms for some approximate matrix operations, including the computation of matrix products and the estimation of best low-rank projections. These algorithms maintain small "sketches" of the input matrices; with high probability, the product of the sketches is close to the true matrix product. Specifically, the Frobenius norm of the error is proportional to the product of the Frobenius norms of the input matrices. This upper bound sharpens and simplifies prior work on this problem, and their matching lower bounds show that little improvement is possible.

  • K. Clarkson and D. Woodruff. Numerical Linear Algebra in the Streaming Model. STOC, 2009.
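
A minimal numerical illustration of the flavor of these results (a generic random-sign sketch applied on the left of both matrices; the paper's constructions and analysis are more refined): the product of the sketches approximates the true product, with Frobenius-norm error proportional to the product of the input Frobenius norms.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q, m = 2000, 30, 40, 200
A = rng.standard_normal((n, p))
B = rng.standard_normal((n, q))

# One shared random sign matrix, scaled so that E[(S A).T (S B)] = A.T B.
S = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
approx = (S @ A).T @ (S @ B)

err = np.linalg.norm(approx - A.T @ B, 'fro')
norm_product = np.linalg.norm(A, 'fro') * np.linalg.norm(B, 'fro')
print(err / norm_product)   # small, and shrinks roughly like 1/sqrt(m)
```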

Tight lower bounds for selection in randomly ordered streams. The model of data streams in which the data arrives in a random order arises in several scenarios. It is also motivated by the attempt to overcome the pessimistic lower bounds on space and/or number of passes that hold under the usual (adversarial) order. Recently there has been a resurgence of interest in this model, although random-order data streams were considered as early as the seminal paper on data streams by Munro and Paterson from FOCS 1978. In a joint SODA 2008 paper with Amit Chakrabarti (Dartmouth) and Mihai Pătraşcu (MIT), T.S. Jayram resolves an open question from that paper on computing the median, a fundamental aggregate in order statistics. In particular, the paper shows that any algorithm computing the median of a stream presented in random order using polylog(n) space requires Ω(log log n) passes, which is optimal.

  • A. Chakrabarti, T. S. Jayram and M. Pătraşcu. Tight lower bounds for selection in randomly ordered streams. Symposium on Discrete Algorithms (SODA), 2008.

The average-case complexity of counting distinct elements. The problem of probabilistically estimating the number of distinct elements in a data stream is one of the central problems in data stream computing. To estimate this quantity to within a relative error ε, it is known that if the stream is adversarially ordered, or even randomly ordered, any such algorithm must use Ω(1/ε^2) bits of space. This can be prohibitive in practice. In an attempt to bypass this lower bound, David Woodruff studied the average-case complexity of the problem when the streaming data comes from certain distributions. For some natural distributions corresponding to random uncorrelated data, he gives an algorithm whose space complexity beats the lower bound for adversarially and randomly ordered streams. Nevertheless, for other natural distributions, he shows that the Ω(1/ε^2) lower bound still holds when the data is drawn from such distributions.

  • D. Woodruff. The average-case complexity of counting distinct elements. ICDT, 2009.
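
As a concrete illustration of the F0 problem (a standard bottom-k estimator, not the algorithms analyzed in the paper): keep the k smallest distinct hash values; the relative error behaves like 1/√k, which is the usual source of the 1/ε^2 dependence.

```python
import hashlib

def hash01(item):
    """Hash an item to a pseudo-uniform value in [0, 1)."""
    digest = hashlib.sha256(str(item).encode()).digest()
    return int.from_bytes(digest[:8], 'big') / 2 ** 64

def f0_estimate(stream, k=256):
    """Bottom-k (KMV) estimator: if v is the k-th smallest distinct hash value seen,
    estimate the number of distinct elements as (k - 1) / v."""
    kept = set()
    for item in stream:
        v = hash01(item)
        if v not in kept:
            kept.add(v)
            if len(kept) > k:
                kept.remove(max(kept))
    if len(kept) < k:
        return len(kept)           # fewer than k distinct items: count is exact
    return (k - 1) / max(kept)

stream = [i % 1000 for i in range(100000)]   # 1000 distinct elements
print(f0_estimate(stream))                    # roughly 1000
```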

Learning models and complexity

Separating models of learning with faulty teachers. Vitaly Feldman and Shrenik Shah (Harvard) study the power of two models of faulty teachers in Valiant's PAC learning model and in Angluin's exact learning model. The first model they consider is learning from an incomplete membership oracle, introduced by Angluin and Slonim (1994); in this model, the answers to a random subset of the learner's membership queries may be missing. The second model is random persistent classification noise in membership queries, introduced by Goldman et al. (1993); here, the answers to a random subset of the learner's membership queries are flipped. They show that in both the PAC and the exact learning models the incomplete membership oracle is strictly stronger than the noisy membership oracle, under the assumption that the problem of PAC learning parities with random classification noise is intractable.

  • V. Feldman and S. Shah. Separating Models of Learning with Faulty Teachers. Theoretical Computer Science, 410(19), 2009 (special issue on ALT 2007).

Evolvability and learning. Leslie Valiant has recently introduced a framework for analyzing the capabilities and the limitations of the evolutionary process of random change guided by selection. In his framework, the process of acquiring a complex functionality is viewed as a substantially restricted form of learning of an unknown function from a certain set of functions. Valiant showed that classes of functions evolvable in his model are also learnable in the statistical query (SQ) model of Kearns, which allows only the gathering of statistics about the unknown function. Vitaly Feldman showed that evolvability is equivalent to learnability by a restricted form of statistical queries. Based on this equivalence, he proved that for any fixed distribution D over the domain, every class of functions learnable by SQs over D is evolvable over D. Previously, only the evolvability of monotone conjunctions of Boolean variables over the uniform distribution was known (Valiant, 2007). In addition, he gave a characterization of distribution-independent evolvability which shows that evolvability is strictly weaker than learning by statistical queries.

  • V. Feldman. Evolvability from Learning Algorithms. Symposium on Theory of Computing (STOC), 2008.
  • V. Feldman and L. Valiant. The Learning Power of Evolution. Conference on Learning Theory (COLT), 2008.

Power of membership queries in agnostic learning. Vitaly Feldman studied the properties of the agnostic learning framework of Haussler (1992) and Kearns, Schapire and Sellie (1992). In particular, he addressed the question: is there any situation in which membership queries are useful in agnostic learning? He showed that the answer is negative for distribution-independent agnostic learning and positive for agnostic learning with respect to a specific marginal distribution. Namely, he proves that any concept class learnable agnostically by a distribution-independent algorithm with access to membership queries is also learnable agnostically without membership queries. This resolves an open problem posed by Kearns et al. (1992). For agnostic learning with respect to the uniform distribution over {0,1}^n, he shows a concept class that is learnable with membership queries but computationally hard to learn from random examples alone (assuming that one-way functions exist).

  • V. Feldman. On The Power of Membership Queries in Agnostic Learning. Conference on Learning Theory (COLT), 2008; Journal of Machine Learning Research, 10(Feb), 2009.

Experience-induced neural circuits that achieve high capacity. Over a lifetime, the cortex performs a vast number of different cognitive actions, mostly dependent on past experience. Previously it has not been known how such capabilities can be reconciled, even in principle, with the known resource constraints on the cortex, such as low connectivity and low average synaptic strength. In joint work with Leslie Valiant, Vitaly Feldman described neural circuits and associated algorithms that respect the brain's most basic resource constraints and support the execution of large numbers of cognitive actions when presented with natural inputs. Their circuits simultaneously support a suite of four basic kinds of task that each require some circuit modification: hierarchical memory formation, pairwise association, supervised memorization, and inductive learning of threshold functions. The capacity of the circuits is established via experiments in which sequences of several thousand such actions are simulated by computer and the resulting circuits are tested for subsequent efficacy. Their underlying theory is apparently the only biologically plausible systems-level theory of learning and memory in cortex for which such a demonstration has been performed.

  • V. Feldman and L. Valiant. Experience-Induced Neural Circuits That Achieve High Capacity. Neural Computation (to appear), 2009.

Robustness of evolvability. A framework for analyzing the computational capabilities and the limitations of the evolutionary process of random change guided by selection was recently introduced by Valiant (2006). In his framework the process of acquiring a complex functionality is viewed as a constrained form of PAC learning. In addition to the basic definition, a number of natural variants of the evolvability model were introduced by Valiant, and several others have been suggested since then. Understanding the relative power of these variants, in terms of the efficiently evolvable function classes they define, is one of the main open problems regarding the model. Vitaly Feldman presented several results that collectively demonstrate that the notion of evolvability is robust to a variety of reasonable modifications of the model. His results show that the power of a model of evolvability essentially depends only on the fitness metric used in the model. In particular, he proves that the classes of functions evolvable in Valiant's original model are also evolvable with substantially weaker and simpler assumptions on the mutation algorithm and the selection rule, and therefore a wide range of models of evolution are equivalent. Another consequence of his results is that evolvability with the quadratic-loss fitness metric (or any other non-linear loss) is strictly stronger than evolvability with the linear-loss fitness metric used in Valiant's basic definition.

  • V. Feldman. Robustness of Evolvability. Conference on Learning Theory (COLT), 2009.

Characterization of statistical query learning with applications to evolvability. The statistical query (SQ) learning model of Kearns (1998) is a natural restriction of the PAC learning model in which a learning algorithm is allowed to obtain estimates of statistical properties of the examples but cannot see the examples themselves. Vitaly Feldman describes a new and simple characterization of the query complexity of learning in the SQ model. Unlike the previously known bounds on SQ learning (Blum et al., 1994; Bshouty and Feldman, 2002; Yang, 2005; Balcazar et al., 2007; Simon, 2007), his characterization preserves the accuracy and the efficiency of learning. The preservation of accuracy implies that it gives the first characterization of SQ learning in the agnostic learning framework of Haussler (1992) and Kearns et al. (1994). The preservation of efficiency allows him to derive a new technique for the design of evolutionary algorithms in Valiant's model of evolvability (2009). He uses this technique to demonstrate the existence of a large class of monotone evolutionary learning algorithms based on square-loss fitness estimation. These results differ significantly from the few known evolutionary algorithms and give evidence that evolvability in Valiant's model is a more versatile phenomenon than there had been previous reason to suspect.

  • V. Feldman. A Complete Characterization of Statistical Query Learning with Applications to Evolvability. FOCS, 2009 (to appear).
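
A minimal sketch of the oracle underlying the SQ model (a STAT(τ) oracle simulated from random labeled examples; the target function and query below are illustrative placeholders, not from the paper):

```python
import random

def sq_oracle(phi, labeled_examples, tau):
    """Answer a statistical query: return an estimate of E[phi(x, f(x))] that is
    within roughly tau of the truth, using O(1/tau^2) random labeled examples and
    never handing an individual example to the learner."""
    m = int(4 / tau ** 2)
    sample = random.choices(labeled_examples, k=m)
    return sum(phi(x, y) for x, y in sample) / m

# Hypothetical target f(x) = x[0] XOR x[1] over uniform 2-bit inputs.
data = []
for _ in range(100000):
    x = (random.randint(0, 1), random.randint(0, 1))
    data.append((x, x[0] ^ x[1]))

# Query: how often does the first input bit agree with the label? (True value: 0.5.)
print(sq_oracle(lambda x, y: 1.0 if x[0] == y else 0.0, data, tau=0.05))
```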

Hardness of agnostic learning of monomials by halfspaces. Vitaly Feldman, Venkatesan Guruswami (CMU), Prasad Raghavendra and Yi Wu (CMU) show that, given a distribution on labeled examples from the hypercube such that there exists a monomial (or conjunction) consistent with 99% of the examples, it is NP-hard to find a halfspace that is correct on 51% of the examples. In learning-theory terms, weak agnostic learning of monomials by halfspaces is NP-hard. This result bridges and subsumes two previous hardness results for proper learning of monomials and of halfspaces. As immediate corollaries, they obtain the first optimal hardness results for weak agnostic learning of decision lists and majorities. Their techniques are quite different from previous hardness proofs for learning: they use an invariance principle and sparse approximation of halfspaces from recent work on fooling halfspaces to give a new natural list decoding of a halfspace in the context of dictatorship tests/label cover reductions. In addition, unlike previous invariance-principle-based proofs, which are only known to give Unique Games hardness, they give a reduction from a smooth version of Label Cover that is known to be NP-hard.

  • V. Feldman, V. Guruswami, P. Raghavendra and Y. Wu. Agnostic Learning of Monomials by Halfspaces is Hard. FOCS, 2009 (to appear).

Data structures

The geometry of binary search trees. The splay conjecture is one of the longest-standing open problems in data structures (1983). It states that on any sequence of operations, splay trees are at most a constant factor worse than the best binary search tree designed with that sequence in mind. If true, this would mean that splay trees can be used as a universal optimal solution, without developing special-purpose binary search trees optimized for particular data. Together with Erik Demaine, Dion Harmon, John Iacono, and Daniel Kane, Mihai Patrascu gave an appealing geometric interpretation of binary search trees with several consequences: (1) previous lower bounds become easier to understand in this view; (2) a new lower bound is obtained that is at least as good as existing ones, and optimal within a natural class; (3) a previously proposed approximation algorithm that was conjectured to give a constant-factor approximation becomes an online binary search tree algorithm. This is the first alternative to splay trees that is conjectured to be optimal.

  • E. Demaine, D. Harmon, J. Iacono, D. Kane, and M. Pătraşcu. The Geometry of Binary Search Trees. Symposium on Discrete Algorithms (SODA), 2009.

The rank problem in succinct data structures. The rank problem is to preprocess an array A[1..n] of bits into a data structure that uses as close to n bits as possible and answers queries of the form rank(k) = ∑_{i=1..k} A[i]. The problem has been intensely studied, and features as a subroutine in a majority of succinct data structures. Mihai Patrascu showed that in the cell-probe model with w-bit cells, if rank queries take time t, the space of the data structure must be at least n + n/w^{O(t)} bits. This redundancy/query trade-off is essentially optimal, matching the author's upper bound from [FOCS'08].

  • M. Pătraşcu. A Lower Bound for Succinct Rank Queries. Symposium on Discrete Algorithms (SODA), 2010 (to appear).
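
A toy version of the data structure problem (one stored prefix sum per w-bit block, so the redundancy here is about (n/w)·log n bits; the succinct structures discussed above achieve the much smaller n/w^{O(t)} redundancy):

```python
def build_rank(bits, w=64):
    """Store the prefix sum of the bit array at every block boundary."""
    prefix = [0]
    for i in range(0, len(bits), w):
        prefix.append(prefix[-1] + sum(bits[i:i + w]))
    return prefix

def rank(bits, prefix, k, w=64):
    """rank(k) = A[1] + ... + A[k] (0-indexed here: sum of bits[0..k-1]);
    one stored counter plus a scan inside a single w-bit block."""
    block = k // w
    return prefix[block] + sum(bits[block * w:k])

bits = [1, 0, 1, 1, 0] * 100
prefix = build_rank(bits)
assert rank(bits, prefix, 7) == sum(bits[:7])
```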

Hardness of approximation

How hard is it to approximate the best Nash equilibrium? The quest for a PTAS for Nash equilibrium in a two-player game is a major open question in Algorithmic Game Theory. Elad Hazan and Robert Krauthgamer (Weizmann) show that the optimization version of the problem, namely finding in a two-player game an approximate equilibrium that achieves large social welfare, is unlikely to have a polynomial-time algorithm. Technically, their result is a reduction from a seemingly unrelated yet notoriously difficult problem in modern combinatorics: finding a planted (but hidden) clique in a random graph.

  • E. Hazan and R. Krauthgamer. How Hard is it to Approximate the Best Nash Equilibrium. Symposium on Discrete Algorithms (SODA), 2009.

Hardness of agnostic learning of monomials by halfspaces. This result of Vitaly Feldman, Venkatesan Guruswami (CMU), Prasad Raghavendra and Yi Wu (CMU), showing that weak agnostic learning of monomials by halfspaces is NP-hard, is described under "Learning models and complexity" above.

  • V. Feldman, V. Guruswami, P. Raghavendra and Y. Wu. Agnostic Learning of Monomials by Halfspaces is Hard. FOCS, 2009 (to appear).

Lattice problems

A conjecture about polynomial time computable lattice-lattice functions. Miklós Ajtai formulates a conjecture which describes all of the polynomial time computable (p.t.c.) functions f with domain(f) = lattice(n) and range(f) ⊂ lattice(m), where m ≤ n^c and lattice(n) is the set of lattices in ℝ^n with determinant 1. A former conjecture of the author, the 0-1-conjecture, states that all 0,1-valued p.t.c. functions defined on lattice(n) are constant almost everywhere. Since the n-dimensional lattices, as Abelian groups, are pairwise isomorphic, this conjecture says that the value of a p.t.c. 0,1-function may depend only on the algebraic structure of the lattice and not on the metric defined on it. The new conjecture generalizes this statement to functions whose value is also a lattice. A typical example is the function f(L) = L′, where L′ is the dual of L. When both the domain and the range of the functions are n-dimensional lattices with determinant 1 then, according to the conjecture, every p.t.c. function is either constant almost everywhere or identical almost everywhere to f(L) = AL or to f(L) = AL′, where A is a suitably chosen linear transformation with determinant one.

There is a striking analogy between the new conjecture and theorems describing the uniquely definable functions which assign to each n-dimensional vector space an m-dimensional vector space. These theorems are closely related to questions about the Axiom of Choice. In this analogy, polynomial time computation corresponds to definability (in set theory) and lattices correspond to finite-dimensional vector spaces.

  • M. Ajtai. A conjecture about polynomial time computable lattice-lattice functions. STOC, 2004.
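
For reference (a standard definition): the dual lattice appearing in the example f(L) = L′ is L′ = {y ∈ ℝ^n : ⟨x, y⟩ ∈ ℤ for all x ∈ L}; since det(L′) = 1/det(L), lattices of determinant 1 are indeed mapped to lattices of determinant 1, so this map fits the setting of the conjecture.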

Representing hard lattices with few bits. Miklós Ajtai presents a variant of the Ajtai-Dwork public-key cryptosystem where the size of the public key is only O(n log n) bits and the encrypted-text/clear-text ratio is also O(n log n). This holds under the assumption that all of the participants in the cryptosystem share O(n^2 log n) random bits, which have to be picked only once and which the users of the cryptosystem get, e.g., together with the software implementing the protocol. The public key is a random lattice with an n^c-unique nonzero shortest vector, where the constant c ≤ 1/2 can be picked arbitrarily close to 1/2, and the lattice is picked according to a distribution described in the paper. The author does not prove a worst-case/average-case equivalence, but the security of the system follows from the hardness of a randomized diophantine approximation problem related to a well-known theorem of Dirichlet.

  • M. Ajtai. Representing hard lattices with O(n log n) bits. STOC, 2005; Chicago Journal of Theoretical Computer Science, Vol. 2008(4).
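
For reference, the theorem of Dirichlet alluded to states, in its simplest one-dimensional form, that for every real α and every positive integer N there exist integers p and q with 1 ≤ q ≤ N and |qα − p| < 1/N.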

Coding theory

Index coding with side information. Consider the following coding problem, which is motivated by a problem of transmitting data over broadcast channels: a sender wishes to communicate an input x of length n over a broadcast channel with n receivers so that the i-th receiver Ri can recover the i-th bit of x. Each Ri has prior side information about x, induced by a directed graph G on n nodes; Ri knows the bits of x in the positions {j | (i,j) is an edge of G}. Encoding schemes that achieve this goal will be called INDEX codes with side information graph G. Ziv Bar-Yossef, Yitzhak Birk, T.S. Jayram, and Tomer Kol identify a measure on graphs, the minrank, which they conjecture to exactly characterize the minimum length of INDEX codes. They resolve the conjecture for certain natural classes of graphs. For arbitrary graphs, they show that the minrank bound is tight for both linear codes and certain classes of non-linear codes. For the general problem, they obtain a (weaker) lower bound that the length of an INDEX code for any graph G is at least the size of the maximum acyclic induced subgraph of G.

  • Z. Bar-Yossef, Y. Birk, T. S. Jayram and T. Kol. Index Coding with Side Information. FOCS, 2006.
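
A small worked example of index coding (the classic complete-graph case, which is not from the paper but illustrates the savings): when every receiver knows all bits except its own, the minrank is 1, and indeed broadcasting the single XOR of all bits suffices.

```python
from functools import reduce

x = [1, 0, 1, 1, 0]                         # the sender's input bits
broadcast = reduce(lambda a, b: a ^ b, x)   # a single transmitted bit: XOR of all of x

# Receiver i knows every bit except x[i] (side-information graph = complete graph)
# and recovers x[i] by XOR-ing its side information out of the broadcast bit.
for i in range(len(x)):
    side_info = (x[j] for j in range(len(x)) if j != i)
    recovered = reduce(lambda a, b: a ^ b, side_info, broadcast)
    assert recovered == x[i]
print("all", len(x), "bits recovered from a 1-bit broadcast")
```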

Corruption and recovery-efficient locally decodable codes. A locally decodable code (LDC) allows one to encode messages into codewords such that individual bits of the message can be recovered from a corrupted codeword y by probing only a few positions of y. This is in contrast to standard error-correcting codes which require first decoding the entire codeword, even if one is only interested in a single bit of the message. If the encoding is a linear transformation, the LDC is called linear. David Woodruff constructed the first non-linear LDCs, and showed that if only two positions in y are queried to recover message bits, then non-linear LDCs are provably shorter than linear LDCs, answering a question of Kerenidis and de Wolf. The work gives a general transformation from linear LDCs into non-linear LDCs which achieve the best known tradeoffs between the fraction of codeword bits that can be corrupted and the probability of recovering a bit of the message.

  • D. Woodruff. Corruption and Recovery-Efficient Locally Decodable Codes. APPROX-RANDOM, 2008.
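
For intuition about local decoding, here is the textbook 2-query linear LDC, the Hadamard code (an illustration of the notion only; the paper's contribution concerns non-linear codes): bit m_i is recovered by XOR-ing the codeword at two correlated random positions, each of which is individually uniform, so constant-fraction corruption is tolerated with good probability.

```python
import random

def hadamard_encode(m):
    """Position a (read as a bit vector) stores the inner product <m, a> mod 2."""
    n = len(m)
    return [sum(mi & ((a >> i) & 1) for i, mi in enumerate(m)) % 2
            for a in range(2 ** n)]

def decode_bit(codeword, n, i):
    """Two probes: <m, a> XOR <m, a XOR e_i> = m_i for any a."""
    a = random.randrange(2 ** n)
    return codeword[a] ^ codeword[a ^ (1 << i)]

m = [1, 0, 1, 1]
c = hadamard_encode(m)
print([decode_bit(c, len(m), i) for i in range(len(m))])   # [1, 0, 1, 1]
```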

Constraint Satisfaction

Computational and structural dichotomies in the connectivity of Boolean satisfiability problems. Recent work on heuristics for random Boolean satisfiability problems has centered around the structure and connectivity of the solution space. Motivated by this work, Parikshit Gopalan, Phokion Kolaitis, Elitza Maneva and Christos Papadimitriou study connectivity properties of the space of solutions of Boolean satisfiability problems, and establish both computational and structural dichotomies in Schaefer's framework. Their results assert that the intractable side of the computational dichotomies is PSPACE-complete, while the tractable side - which includes but is not limited to all problems with polynomial time algorithms for satisfiability - is in PTIME for the st-connectivity question, and in co-NP for the connectivity question. The diameter of components can be exponential for the PSPACE-complete cases, whereas in all other cases it is linear. The crux of these results is an expressibility theorem showing that a large class of Boolean constraint satisfaction problems can be reduced to each other in a way that preserves the structural properties of the solution space.

  • P. Gopalan, P. Kolaitis, E. Maneva and C. Papadimitriou. The Connectivity of Boolean Satisfiability: Computational and Structural Dichotomies. ICALP, 2006; SIAM Journal on Computing, 38(6), 2009.
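
A small sketch of the object being studied (a hypothetical 3-variable CNF; two solutions are adjacent when they differ in exactly one bit, and both the connectivity and st-connectivity questions concern this graph):

```python
from itertools import product

def solutions(clauses, n):
    """All satisfying assignments of a CNF; clauses are lists of signed 1-based literals."""
    return [a for a in product([0, 1], repeat=n)
            if all(any(a[abs(l) - 1] == (1 if l > 0 else 0) for l in cl) for cl in clauses)]

def is_connected(sols):
    """BFS in the solution graph, whose edges join solutions at Hamming distance 1."""
    if not sols:
        return True
    sols_set, frontier, seen = set(sols), [sols[0]], {sols[0]}
    while frontier:
        u = frontier.pop()
        for i in range(len(u)):
            v = u[:i] + (1 - u[i],) + u[i + 1:]
            if v in sols_set and v not in seen:
                seen.add(v)
                frontier.append(v)
    return len(seen) == len(sols_set)

# (x1 OR x2) AND (NOT x1 OR x3): is its solution space connected?
print(is_connected(solutions([[1, 2], [-1, 3]], n=3)))   # True
```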

On the possibility of better SAT algorithms. Mihai Patrascu and Ryan Williams investigate the prospects for an algorithm for Boolean CNF satisfiability (CNF-SAT) that runs in O(2^{δn}) time for some δ < 1; in such a case one says that CNF-SAT has an improved algorithm. Several plausible hypotheses are presented, and the authors prove that if any of them is true, then CNF-SAT has an improved algorithm. Other candidate hypotheses would imply that k-SAT can be solved in 2^{δn} time for a fixed δ < 1 that is independent of k. Such results would be breakthroughs in exact algorithms. The contrapositive statements show that if one assumes satisfiability does not admit improved algorithms, then many interesting lower bounds follow. The reductions presented provide tight connections between satisfiability and foundational problems in parameterized complexity, graph theory, communication complexity, and computational geometry.

  • M. Patrascu and R. Williams. On the Possibility of Better SAT Algorithms. Symposium on Discrete Algorithms (SODA), 2010 (to appear).

Derandomization

Optimal rank bounds for depth-3 identities. The problem of identity testing is a central problem in theoretical computer science: given a circuit that computes a polynomial P, check whether P is identically zero (has no non-zero coefficients). A major open problem is to obtain a deterministic black-box polynomial-time algorithm for this task. Such an algorithm cannot see the circuit and is only allowed to evaluate the polynomial P at various domain points. In recent years there has been progress on various restricted versions of this problem, but the depth-3 case, where one restricts the depth of the circuit to three, is still wide open. With Nitin Saxena, C. Seshadhri made progress on this case and presented almost efficient black-box algorithms. This work resolves a question posed by Dvir and Shpilka (2005) concerning the ranks of depth-3 identities. One of the interesting features of this result is a toolkit for studying these circuits that is of independent interest.

  • N. Saxena and C. Seshadhri. An Almost Optimal Rank Bound for Depth-3 Identities. CCC, 2009.
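
For contrast with the deterministic black-box goal, the standard randomized black-box test (Schwartz-Zippel, which is not the technique of the paper) simply evaluates P at random points: a nonzero polynomial of total degree d vanishes at a uniformly random point of S^n with probability at most d/|S|.

```python
import random

def probably_identically_zero(poly, num_vars, trials=20, prime=10 ** 9 + 7):
    """Randomized black-box identity test: evaluate the black-box polynomial at
    random points mod a large prime and report zero only if every evaluation is zero."""
    for _ in range(trials):
        point = [random.randrange(prime) for _ in range(num_vars)]
        if poly(point) % prime != 0:
            return False
    return True

# Hypothetical black box: P(x, y, z) = (x + y)(x - y) - x^2 + y^2, which is identically zero.
P = lambda v: (v[0] + v[1]) * (v[0] - v[1]) - v[0] ** 2 + v[1] ** 2
print(probably_identically_zero(P, num_vars=3))   # True
```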
