I am an Assistant Professor in the Computer Science department at Boston University and a faculty fellow at the Hariri Institute.

The best way to reach me is by email: aene *at* bu *dot* edu.

My research interests span theoretical computer science and algorithms, optimization, and theoretical machine learning. A major focus of my recent work is on submodular optimization and various aspects of both convex and non-convex optimization. I am also very interested in algorithm design for machine learning and data mining applications.

Fall 2020: CS 237: Probability in Computing

Spring 2020: CS 531: Advanced Optimization Algorithms

Spring 2019: CS 237: Probability in Computing

Fall 2018: CS 591 E1: Advanced Optimization Algorithms

Spring 2018: CS 237: Probability in Computing

Spring 2018: CS 591 E2: Convex Optimization Algorithms (co-taught with Lorenzo Orecchia)

Spring 2017: CS 237: Probability in Computing

Fall 2016: CS 591 E2: Optimization Methods and their Applications

University of Warwick, Term 2, 2015/16: CS 254: Algorithmic Graph Theory

University of Warwick, Term 1, 2015/16: CS 136: Discrete Mathematics and its Applications I

University of Warwick, Term 2, 2014/15: CS 131: Mathematics for Computer Science II

Adrian Vladu (postdoc, advised jointly with Lorenzo Orecchia)

Andrew Suh (PhD student)

**PhD:** Rafael da Ponte Barbosa (2014--2017)

**MSc:** Erasmo Tani (2017--2019), co-advised with Lorenzo Orecchia; Joao Marcal Ramos (2015/16); Fabian Bode (2015/16); Chu Chu (2015/16); James Soffe (2014/15)

**Undergraduate:** Rachel Durant (2015/16); Sara Jenkins (2014/15)

See **all publications**.
See **selected publications**.

**Copyright warning:** The copyright of each published paper
belongs to the respective publisher. The local copy is only for
non-commercial, personal use.

Adaptive and Universal Single-gradient Algorithms for Variational Inequalities (with Huy Lê Nguyễn). *Manuscript*, 2020.

[arXiv]

Adaptive Gradient Methods for Constrained Convex Optimization (with Huy Lê Nguyễn and Adrian Vladu). *Manuscript*, 2020.

[arXiv]

Team Formation: Striking a Balance between Coverage and Cost (with Sofia Maria Nikolakaki and Evimaria Terzi). *Manuscript*, 2020.

[arXiv]

Parallel Algorithm for Non-Monotone DR-Submodular Maximization (with Huy Lê Nguyễn). In Proceedings of the 37th International Conference on Machine Learning **(ICML)**, 2020.

[abstract][arXiv][video]

In this work, we give a new parallel algorithm for the problem of maximizing a non-monotone diminishing returns submodular function subject to a cardinality constraint. For any desired accuracy $\epsilon$, our algorithm achieves a $1/e-\epsilon$ approximation using $O(\log(n)\log(1/\epsilon)/\epsilon^3)$ parallel rounds of function evaluations. The approximation guarantee nearly matches the best approximation guarantee known for the problem in the sequential setting and the number of parallel rounds is nearly-optimal for any constant $\epsilon$. Previous algorithms achieve worse approximation guarantees using $\Omega(\log^2{n})$ parallel rounds. Our experimental evaluation suggests that our algorithm obtains solutions whose objective value nearly matches the value obtained by the state-of-the-art sequential algorithms, and it outperforms previous parallel algorithms in the number of parallel rounds, iterations, and solution quality.

Optimal Streaming Algorithms for Submodular Maximization with Cardinality Constraints
(with Naor Alaluf, Moran Feldman, Huy Lê Nguyễn and Andrew Suh). In Proceedings of the 47th International Colloquium on Automata, Languages and Programming **(ICALP)**, 2020.

[arXiv][video by my student Andrew]

**Note:** This paper is a merger of two preprints: arXiv:1906.11237 and arXiv:1911.12959v1

Node-Weighted Network Design in Planar and Minor-Closed Families of Graphs
(with Chandra Chekuri and Ali Vakilian).

**Note:** This is an update/extension to an ICALP 2012 paper with the same title. The ICALP paper was on EC-SNDP and this one handles Element SNDP.

[arXiv]

Improved Convergence for $\ell_\infty$ and $\ell_1$ Regression via Iteratively Reweighted Least Squares (with Adrian Vladu). In Proceedings of the 36th International Conference on Machine Learning **(ICML)**, 2019.

[abstract][arXiv][video by my postdoc Adrian]

The iteratively reweighted least squares method (IRLS) is a popular technique used in practice for solving regression problems. Various versions of this method have been proposed, but their theoretical analyses failed to capture the good practical performance.
In this paper, we propose a simple and natural version of IRLS for solving $\ell_\infty$ and $\ell_1$ regression, which provably converges to a $(1+\epsilon)$-approximate solution in $O(m^{1/3}\log(1/\epsilon)/\epsilon + \log(m/\epsilon)/\epsilon^2)$ iterations, where $m$ is the number of rows of the input matrix. Interestingly, this running time is independent of the conditioning of the input, and the dominant term of the running time depends only linearly on $\epsilon^{-1}$, despite the fact that the problem being solved is non-smooth and the algorithm does not use any regularization.
This improves upon the more complex algorithms of Chin et al. (ITCS '12), and Christiano et al. (STOC '11) by a factor of at least $1/\epsilon^{5/3}$, and yields a truly efficient natural algorithm for the slime mold dynamics (Straszak-Vishnoi, SODA '16, ITCS '16, ITCS '17).
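For intuition, the classical IRLS template for $\ell_1$ regression alternates between computing residuals and solving a reweighted least-squares problem. The sketch below is the textbook heuristic with a damping floor `delta` (a hypothetical parameter added here to avoid division by zero), not the provably convergent variant analyzed in the paper:

```python
import numpy as np

def irls_l1(A, b, iters=50, delta=1e-8):
    """Textbook IRLS heuristic for l1 regression: min ||Ax - b||_1.

    Each iteration solves a least-squares problem reweighted by the
    inverse absolute residuals; `delta` floors the residuals to avoid
    division by zero. This is the classical scheme, not the provably
    convergent variant analyzed in the paper.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]      # least-squares start
    for _ in range(iters):
        r = A @ x - b
        w = 1.0 / np.maximum(np.abs(r), delta)    # weights 1/|r_i|
        Aw = A * w[:, None]                       # row i scaled by w_i
        # normal equations of the weighted least-squares problem
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)
    return x
```

On a one-column system the $\ell_1$ minimizer is the median of $b$, which this iteration recovers even in the presence of a large outlier.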

Towards Nearly-linear Time Algorithms for Submodular Maximization with a Matroid Constraint (with Huy Lê Nguyễn). In Proceedings of the 46th International Colloquium on Automata, Languages and Programming **(ICALP)**, 2019.

[abstract][arXiv]

We consider fast algorithms for monotone submodular maximization subject to a matroid constraint. We assume that the matroid is given as input in an explicit form, and the goal is to obtain the best possible running times for important matroids. We develop a new algorithm for a general matroid constraint with a $1 - 1/e - \epsilon$ approximation that achieves a fast running time provided we have a fast data structure for maintaining a maximum weight base in the matroid through a sequence of decrease weight operations. We construct such data structures for graphic matroids and partition matroids, and we obtain the first algorithms for these classes of matroids that achieve a nearly-optimal, $1 - 1/e - \epsilon$ approximation, using a nearly-linear number of function evaluations and arithmetic operations.

A Nearly-linear Time Algorithm for Submodular Maximization with a Knapsack Constraint (with Huy Lê Nguyễn). In Proceedings of the 46th International Colloquium on Automata, Languages and Programming **(ICALP)**, 2019.

[abstract][arXiv]

We consider the problem of maximizing a monotone submodular function subject to a knapsack constraint. Our main contribution is an algorithm that achieves a nearly-optimal, $1 - 1/e - \epsilon$ approximation, using $(1/\epsilon)^{O(1/\epsilon^4)} n \log^2 n$ function evaluations and arithmetic operations. Our algorithm is impractical but theoretically interesting, since it overcomes a fundamental running time bottleneck of the multilinear extension relaxation framework. This is the main approach for obtaining nearly-optimal approximation guarantees for important classes of constraints but it leads to $\Omega(n^2)$ running times, since evaluating the multilinear extension is expensive. Our algorithm maintains a fractional solution with only a constant number of entries that are strictly fractional, which allows us to overcome this obstacle.

Submodular Maximization with Matroid and Packing Constraints in Parallel (with Huy Lê Nguyễn and Adrian Vladu). In Proceedings of the 51st ACM Symposium on Theory of Computing **(STOC)**, 2019.

[abstract][arXiv]

We consider the problem of maximizing the multilinear extension of a submodular function subject to a single matroid constraint or multiple packing constraints with a small number of adaptive rounds of evaluation queries.
We obtain the first algorithms with low adaptivity for submodular maximization with a matroid constraint. Our algorithms achieve a $1-1/e-\epsilon$ approximation for monotone functions and a $1/e-\epsilon$ approximation for non-monotone functions, which nearly matches the best guarantees known in the fully adaptive setting. The number of rounds of adaptivity is $O(\log^2{n}/\epsilon^3)$, which is an exponential speedup over the existing algorithms.
We obtain the first parallel algorithm for non-monotone submodular maximization subject to packing constraints. Our algorithm achieves a $1/e-\epsilon$ approximation using $O(\log(n/\epsilon) \log(1/\epsilon) \log(n+m)/ \epsilon^2)$ parallel rounds, which is again an exponential speedup in parallel time over the existing algorithms. For monotone functions, we obtain a $1-1/e-\epsilon$ approximation in $O(\log(n/\epsilon)\log(m)/\epsilon^2)$ parallel rounds. The number of parallel rounds of our algorithm matches that of the state of the art algorithm for solving packing LPs with a linear objective (Mahoney et al., 2016).
Our results apply more generally to the problem of maximizing a diminishing returns submodular (DR-submodular) function.

Submodular Maximization with Nearly-optimal Approximation and Adaptivity in Nearly-linear Time (with Huy Lê Nguyễn). In Proceedings of the 30th Annual ACM-SIAM Symposium on Discrete Algorithms **(SODA)**, 2019.

[abstract][arXiv]

In this paper, we study the tradeoff between the approximation guarantee and adaptivity for the problem of maximizing a monotone submodular function subject to a cardinality constraint. The adaptivity of an algorithm is the number of sequential rounds of queries it makes to the evaluation oracle of the function, where in every round the algorithm is allowed to make polynomially-many parallel queries. Adaptivity is an important consideration in settings where the objective function is estimated using samples and in applications where adaptivity is the main running time bottleneck. Previous algorithms achieving a nearly-optimal $1-1/e-\epsilon$ approximation require $\Omega(n)$ rounds of adaptivity. In this work, we give the first algorithm that achieves a $1-1/e-\epsilon$ approximation using $O(\ln(n)/\epsilon^2)$ rounds of adaptivity. The number of function evaluations and additional running time of the algorithm are $O(n\, \mathrm{poly}(\log n, 1/\epsilon))$.
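For contrast, the fully adaptive baseline is the classic greedy algorithm, whose adaptivity equals the cardinality bound $k$ because each selection depends on all previous ones. A minimal sketch (the function `f` here is assumed to take a list of elements and return a real value):

```python
def greedy_cardinality(f, elements, k):
    """Classic greedy for maximizing a monotone submodular f under a
    cardinality constraint (1 - 1/e guarantee). Every pick depends on
    the previous ones, so the adaptivity is Theta(k) -- the bottleneck
    that a low-adaptivity algorithm must avoid."""
    S = []
    for _ in range(k):
        # one adaptive round: all marginal-gain queries can go in parallel
        gains = {e: f(S + [e]) - f(S) for e in elements if e not in S}
        if not gains or max(gains.values()) <= 0:
            break
        S.append(max(gains, key=gains.get))
    return S
```

With a coverage objective (the value of a list of sets is the size of their union), greedy picks the largest set first and then the set with the largest marginal coverage.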

A Parallel Double Greedy Algorithm for Submodular Maximization
(with Huy Lê Nguyễn and Adrian Vladu). *Manuscript*, 2018.

[abstract][arXiv]

We study parallel algorithms for the problem of maximizing a non-negative submodular function. Our main result is an algorithm that achieves a nearly-optimal $1/2 - \epsilon$ approximation using $O(\log(1/\epsilon)/\epsilon)$ parallel rounds of function evaluations. Our algorithm is based on a continuous variant of the double greedy algorithm of Buchbinder et al. that achieves the optimal 1/2 approximation in the sequential setting. Our algorithm applies more generally to the problem of maximizing a continuous diminishing-returns (DR) function.
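For reference, the sequential double greedy of Buchbinder et al. that this continuous variant builds on can be sketched as follows (shown here in its deterministic form, which gives a 1/3 approximation; the randomized form attains the optimal 1/2 in expectation):

```python
def double_greedy(f, ground_set):
    """Deterministic double greedy of Buchbinder et al. for unconstrained
    non-negative submodular maximization (1/3 approximation; the
    randomized variant achieves the optimal 1/2 in expectation).

    f is a set function evaluated on Python sets."""
    X, Y = set(), set(ground_set)
    for u in ground_set:
        a = f(X | {u}) - f(X)   # marginal gain of adding u to X
        b = f(Y - {u}) - f(Y)   # marginal gain of removing u from Y
        if a >= b:
            X.add(u)            # commit u
        else:
            Y.remove(u)         # discard u
    return X                    # X == Y at termination
```

On the (submodular, non-monotone) cut function of a 4-cycle, this recovers a maximum cut.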

Constant Congestion Routing of Symmetric Demands in Planar Directed Graphs (with Chandra Chekuri and Marcin Pilipczuk). SIAM Journal on Discrete Mathematics **(SIDMA)**, 2018.

Mining Tours and Paths in Activity Networks (with Sofia Maria Nikolakaki, Charalampos Mavroforakis, and Evimaria Terzi). In Proceedings of The Web Conference **(WWW)**, 2018.

Online Buy-at-Bulk Network Design (with Deeparnab Chakrabarty, Ravishankar Krishnaswamy and Debmalya Panigrahi).
SIAM Journal on Computing **(SICOMP)**, 2018.

Decomposable Submodular Function Minimization: Discrete and Continuous (with Huy Lê Nguyễn and László Végh). In Proceedings of the 30th Advances in Neural Information Processing Systems **(NeurIPS)**, 2017. Spotlight presentation.

[abstract][pdf][arXiv]

This paper investigates connections between discrete and continuous approaches for decomposable submodular function minimization. We provide improved running time estimates for the state-of-the-art continuous algorithms for the problem using combinatorial arguments. We also provide a systematic experimental comparison of the two types of methods, based on a clear distinction between level-0 and level-1 algorithms.

Approximation Algorithms for Stochastic k-TSP
(with Viswanath Nagarajan and Rishi Saket).
In Proceedings of the 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science **(FSTTCS)**, 2017.

[arXiv]

Geometric Packing under Non-uniform Constraints (with Sariel Har-Peled and Benjamin Raichel).
SIAM Journal on Computing **(SICOMP)**, 2017.

Submodular Unsplittable Flow on Trees (with Anna Adamaszek, Parinya Chalermsook and Andreas Wiese).
Mathematical Programming Series B **(MAPR)**, 2017.

Constrained Submodular Maximization: Beyond 1/e (with Huy Lê Nguyễn). In Proceedings of the 57th Annual Symposium on Foundations of Computer Science
**(FOCS)**, 2016.

[abstract][arXiv]

In this work, we present a new algorithm for maximizing a non-monotone submodular function subject to a general constraint. Our algorithm finds an approximate fractional solution for maximizing the multilinear extension of the function over a down-closed polytope. The approximation guarantee is 0.372 and it is the first improvement over the 1/e approximation achieved by the unified Continuous Greedy algorithm [Feldman et al., FOCS 2011].

A New Framework for Distributed Submodular Maximization (with Rafael Barbosa, Huy Lê Nguyễn and Justin Ward).
In Proceedings of the 57th Annual Symposium on Foundations of Computer Science
**(FOCS)**, 2016.

[abstract][arXiv]

A wide variety of problems in machine learning, including exemplar clustering, document summarization, and sensor placement, can be cast as constrained submodular maximization problems. A lot of recent effort has been devoted to developing distributed algorithms for these problems. However, these results suffer from a high number of rounds, suboptimal approximation ratios, or both. We develop a framework for bringing existing algorithms in the sequential setting to the distributed setting, achieving near optimal approximation ratios for many settings in only a constant number of MapReduce rounds. Our techniques also give a fast sequential algorithm for non-monotone maximization subject to a matroid constraint.

On Approximating the Maximum Independent Set of Rectangles (with Julia Chuzhoy). In Proceedings of the 57th Annual Symposium on Foundations of Computer Science
**(FOCS)**, 2016.

[abstract][arXiv]

We study the Maximum Independent Set of Rectangles (MISR) problem: given a set of n axis-parallel rectangles, the goal is to find a largest-cardinality subset of the rectangles, such that no two of them overlap. MISR is a basic geometric optimization problem with many applications that has been studied extensively. Until recently, the best approximation algorithm for it achieved an O(loglog(n)) approximation factor. In a recent breakthrough, Adamaszek and Wiese provided a *quasi-polynomial* time approximation scheme: a (1-eps)-approximation algorithm with running time n^{O(poly(log n)/eps)}. Despite this result, obtaining a PTAS or even a polynomial-time constant-factor approximation remains a challenging open question. In this paper, we make progress towards this goal by providing an algorithm for MISR that achieves a (1 - eps)-approximation in time n^{O(poly(loglog(n) / eps))}. We introduce several new technical ideas that we hope will lead to further progress on this and related problems.

Constant Congestion Routing of Symmetric Demands in Planar Directed Graphs (with Chandra Chekuri and Marcin Pilipczuk). In Proceedings of the 43rd International Colloquium on Automata, Languages and Programming **(ICALP)**, 2016.

[pdf]

On Routing Disjoint Paths in Bounded Treewidth Graphs (with Matthias Mnich, Marcin Pilipczuk and Andrej Risteski). In Proceedings of the 15th Scandinavian Symposium and Workshops on Algorithm Theory **(SWAT)**, 2016.

[arXiv]

Routing under Balance (with Gary Miller, Jakub Pachocki and Aaron Sidford).
In Proceedings of the 48th ACM Symposium on Theory of Computing
**(STOC)**, 2016.

[abstract][arXiv]

We introduce the notion of *balance* for directed graphs: a weighted directed graph is $\alpha$-balanced if for every cut $S \subseteq V$, the total weight of edges going from $S$ to $V\setminus S$ is within factor $\alpha$ of the total weight of edges going from $V\setminus S$ to $S$. Several important families of graphs are nearly balanced, in particular, Eulerian graphs (with $\alpha = 1$) and residual graphs of $(1+\epsilon)$-approximate undirected maximum flows (with $\alpha=O(1/\epsilon)$).

We use the notion of balance to give a more fine-grained understanding of several well-studied routing questions that are considerably harder in directed graphs. We first revisit oblivious routings in directed graphs. Our main algorithmic result is an oblivious routing scheme for *single-source* instances that achieves an $O(\alpha \cdot \log^3 n / \log \log n)$ competitive ratio. In the process, we make several key technical contributions which may be of independent interest. In particular, we give an efficient algorithm for computing *low-radius decompositions* of directed graphs parameterized by balance. We also define and construct *low-stretch arborescences*, a new concept generalizing low-stretch spanning trees to directed graphs.

On the negative side, we present new lower bounds for oblivious routing problems on directed graphs. We show that the competitive ratio of oblivious routing algorithms for directed graphs has to be $\Omega(n)$ in general; this result improves upon the long-standing best known lower bound of $\Omega(\sqrt{n})$ (Hajiaghayi et al., 2007). We also show that the restriction to single-source instances is necessary by showing an $\Omega(\sqrt{n})$ lower bound for multiple-source oblivious routing in Eulerian graphs.

We also study the maximum flow problem in balanced directed graphs with arbitrary capacities. We develop an efficient algorithm that finds a $(1+\epsilon)$-approximate maximum flow in $\alpha$-balanced graphs in time $\tilde{O}(m \alpha^2 / \epsilon^2)$. We show that, using our approximate maximum flow algorithm, we can efficiently determine whether a given directed graph is $\alpha$-balanced. Additionally, we give applications to directed sparsest cut problems.
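To make the definition concrete, the balance parameter $\alpha$ of a small weighted digraph can be brute-forced over all cuts. This is exponential in the number of nodes and purely illustrative of the definition, not one of the paper's algorithms:

```python
from itertools import combinations

def balance(weights, nodes):
    """Brute-force the balance parameter alpha of a weighted digraph:
    the maximum over nonempty proper cuts S of w(S, V-S) / w(V-S, S).
    Exponential in |nodes|; for illustrating the definition only.

    weights: dict mapping a directed edge (u, v) to a nonnegative weight."""
    def w(A, B):
        return sum(wt for (u, v), wt in weights.items() if u in A and v in B)
    nodes, alpha = list(nodes), 1.0
    for r in range(1, len(nodes)):
        for cut in combinations(nodes, r):
            S = set(cut)
            out, back = w(S, set(nodes) - S), w(set(nodes) - S, S)
            if back == 0:
                if out == 0:
                    continue            # no edges cross this cut
                return float('inf')     # unbounded imbalance
            alpha = max(alpha, out / back)
    return alpha
```

As the abstract notes, an Eulerian graph such as a directed cycle is 1-balanced, while a two-node graph with unequal opposite weights has $\alpha$ equal to their ratio.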

Submodular Unsplittable Flow on Trees (with Anna Adamaszek, Parinya Chalermsook and Andreas Wiese).
In Proceedings of the 18th Conference on Integer Programming and Combinatorial Optimization
**(IPCO)**, 2016.

[pdf]

A Reduction for Optimizing Lattice Submodular Functions with Diminishing Returns (with Huy Lê Nguyễn). *Manuscript*, 2016.

[arXiv]

Online Buy-at-Bulk Network Design (with Deeparnab Chakrabarty, Ravishankar Krishnaswamy and Debmalya Panigrahi).
In Proceedings of the 56th Annual Symposium on Foundations of
Computer Science **(FOCS)**, 2015.

[abstract]
[arXiv]

We present the first non-trivial online algorithms for the non-uniform, multicommodity buy-at-bulk (MC-BB) network design problem in undirected and directed graphs. Our competitive ratios qualitatively match the best known approximation factors for the corresponding offline problems. The main engine for our results is an online reduction theorem of MC-BB problems to their single-sink (SS-BB) counterparts. We use the concept of junction-tree solutions (Chekuri et al., FOCS 2006) that play an important role in solving the offline versions of the problem via a greedy subroutine -- an inherently offline procedure. Our main technical contribution is in designing an online algorithm using only the existence of good junction-trees to reduce an MC-BB instance to multiple SS-BB sub-instances. Along the way, we also give the first non-trivial online node-weighted/directed single-sink buy-at-bulk algorithms. In addition to the new results, our generic reduction also yields new proofs of recent results for the online node-weighted Steiner forest and online group Steiner forest problems.

Random Coordinate Descent Methods for
Minimizing Decomposable Submodular Functions (with Huy Lê Nguyễn).
In Proceedings of the 32nd International Conference on Machine
Learning **(ICML)**, 2015.

[abstract]
[arXiv]

Submodular function minimization is a fundamental optimization problem that arises in several applications in machine learning and computer vision. The problem is known to be solvable in polynomial time, but general-purpose algorithms have high running times and are unsuitable for large-scale problems. Recent work has used convex optimization techniques to obtain very practical algorithms for minimizing functions that are sums of "simple" functions. In this paper, we use random coordinate descent methods to obtain algorithms with faster linear convergence rates and cheaper iteration costs. Compared to alternating projection methods, our algorithms do not rely on full-dimensional vector operations and they converge in significantly fewer iterations.

The Power of Randomization: Distributed Submodular Maximization on
Massive Datasets (with Rafael Barbosa, Huy Lê Nguyễn and Justin
Ward).
In Proceedings of the 32nd International Conference on Machine
Learning **(ICML)**, 2015.

[abstract]
[arXiv]

A wide variety of problems in machine learning, including exemplar clustering, document summarization, and sensor placement, can be cast as constrained submodular maximization problems. Unfortunately, the resulting submodular optimization problems are often too large to be solved on a single machine. We develop a simple distributed algorithm that is embarrassingly parallel and achieves provable, constant-factor, worst-case approximation guarantees. In our experiments, we demonstrate its efficiency on large problems with different kinds of constraints, with objective values always close to what is achievable in the centralized setting.
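The randomized two-round idea can be sketched as follows, simulating the machines locally. This is a simplified illustration of the random-partition approach (randomly partition the data, run greedy per machine, then run greedy over the pooled machine solutions), not the exact algorithm or analysis from the paper:

```python
import random

def greedy(f, elements, k):
    """Standard sequential greedy, run per machine and on the pooled data."""
    S = []
    for _ in range(k):
        gains = {e: f(S + [e]) - f(S) for e in elements if e not in S}
        if not gains or max(gains.values()) <= 0:
            break
        S.append(max(gains, key=gains.get))
    return S

def distributed_greedy(f, elements, k, machines=4, seed=0):
    """Two rounds: randomly partition the data, run greedy on every
    machine, then run greedy over the union of the machine solutions
    and return the best candidate seen."""
    rng = random.Random(seed)
    parts = [[] for _ in range(machines)]
    for e in elements:
        parts[rng.randrange(machines)].append(e)    # round 1: random partition
    local = [greedy(f, part, k) for part in parts]  # round 1: local greedy
    pooled = [e for sol in local for e in sol]      # ship solutions to one machine
    candidates = local + [greedy(f, pooled, k)]     # round 2: greedy on the union
    return max(candidates, key=f)
```

Each machine touches only its own shard, so the first round is embarrassingly parallel; only the small machine solutions are communicated.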

The All-or-Nothing Flow Problem in Directed Graphs with Symmetric Demand Pairs (with Chandra Chekuri).
In Mathematical Programming Series B **(MAPR)**, 2015.

From Graph to Hypergraph Multiway Partition: Is the Single
Threshold the Only Route? (with Huy Lê Nguyễn).
In Proceedings of the 22nd European Symposium on Algorithms
**(ESA)**, 2014.

[pdf]

Hardness of Submodular Cost Allocation: Lattice Matching and a Simplex Coloring Conjecture (with Jan Vondrák).
In Proceedings of the 17th International Workshop on
Approximation Algorithms for Combinatorial Optimization Problems
**(APPROX)**, 2014.

[pdf]

Improved Approximation Algorithms for Degree-bounded Network Design Problems with Node Connectivity Requirements (with Ali Vakilian).
In Proceedings of the 46th ACM Symposium on Theory of Computing
**(STOC)**, 2014.

[pdf]

The All-or-Nothing Flow Problem in Directed Graphs with Symmetric Demand Pairs (with Chandra Chekuri).
In Proceedings of the 16th Conference on Integer Programming and Combinatorial Optimization
**(IPCO)** 2014.

[pdf]

Poly-logarithmic Approximation for Maximum Node Disjoint Paths with Constant Congestion (with Chandra Chekuri).
In Proceedings of the 24th Annual ACM-SIAM Symposium on Discrete Algorithms **(SODA)**, 2013.

[pdf]

Local Distribution and the Symmetry Gap: Approximability of Multiway Partitioning Problems (with Jan Vondrák and Yi Wu).
In Proceedings of the 24th Annual ACM-SIAM Symposium on Discrete Algorithms **(SODA)**, 2013.

[arXiv]

Connected Domatic Packings in Node-capacitated Graphs (with Nitish Korula and Ali Vakilian).
*Manuscript*, 2013.

[pdf]

Fast Clustering with Lower Bounds: No Customer too Far, No Shop too Small (with Sariel Har-Peled and Benjamin Raichel).
*Manuscript*, 2013.

[arXiv]

Prize-collecting Survivable Network Design in Node-weighted Graphs (with Chandra Chekuri and Ali Vakilian).
In Proceedings of the 15th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems
**(APPROX)**, 2012.

[pdf]

Node-weighted Network Design in Planar and Minor-closed Families of Graphs (with Chandra Chekuri and Ali Vakilian).
In Proceedings of the 39th International Colloquium on Automata Languages and Programming **(ICALP)**, 2012.

[pdf]

Approximation Algorithms and Hardness of Integral Concurrent Flow (with Parinya Chalermsook, Julia Chuzhoy, and Shi Li).
In Proceedings of the 44th ACM Symposium on Theory of Computing **(STOC)**, 2012.

[pdf (extended abstract)]
[pdf (full version)]

Geometric Packing under Non-uniform Constraints (with Sariel Har-Peled and Benjamin Raichel).
In Proceedings of the 28th Annual ACM Symposium on Computational Geometry **(SoCG)**, 2012.

[arXiv]

Approximation Algorithms for Submodular Multiway Partition (with Chandra Chekuri).
In Proceedings of the 52nd Annual Symposium on Foundations of Computer Science **(FOCS)**, 2011.

[pdf]
[arXiv]

Fast Clustering using MapReduce (with
Sungjin Im and Benjamin Moseley).
In Proceedings of the 17th ACM SIGKDD international conference on
Knowledge discovery and data mining **(KDD)**, 2011. Oral
Presentation.

[arXiv]

Submodular Cost Allocation Problem and Applications
(with Chandra Chekuri).
In Proceedings of the 38th International Colloquium on Automata,
Languages and Programming **(ICALP)**, 2011.

[arXiv]

Prize-collecting Steiner Problems on Planar Graphs
(with Mohammad Hossein Bateni, Chandra Chekuri, MohammadTaghi
Hajiaghayi, Nitish Korula, and Dániel Marx).
In Proceedings of the 22nd Annual ACM-SIAM Symposium on Discrete
Algorithms **(SODA)**, 2011.

[pdf]

Unsplittable Flow in Paths and Trees, and Column-Restricted
Packing Integer Programs (with Chandra Chekuri and Nitish Korula).
In Proceedings of the 12th International Workshop on
Approximation Algorithms for Combinatorial Optimization Problems
**(APPROX)**, 2009.

[pdf]

Fast Exact and Heuristic Methods for Role
Minimization Problems
(with William Horne, Nikola Milosavljevic, Prasad Rao, Robert
Schreiber, Robert Tarjan).
In Proceedings of the 13th ACM Symposium on Access Control Models
and Technologies **(SACMAT)**, 2008.

[pdf]

**ICLR** 2020 (area chair), **SODA** 2020, **NeurIPS** 2020 (reviewer), **ICML** 2020 (area chair), **FOCS** 2019, **KDD** 2019, **ICML** 2019, **WWW** 2018, **APPROX** 2018, **ICML** 2018, **ICML** 2017, **KDD** 2017, **COLT** 2017, **ESA** 2017, **ICALP** 2017, **NeurIPS** 2016 (reviewer), **SODA** 2015, **WAOA** 2014, **APPROX** 2014.

Assistant Professor, Computer Science department, University of Warwick. I was also part of DIMAP and a faculty fellow at the Alan Turing Institute for Data Science. October 2014 - June 2016.

Postdoc, Center for Computational Intractability, Princeton University. September 2013 - September 2014.

Ph.D. student (advisor: Chandra Chekuri), Algorithms and Theory Group, Computer Science department, University of Illinois at Urbana-Champaign. August 2008 - July 2013. PhD Thesis.

Undergraduate student (advisor: Robert Tarjan), Princeton University. September 2004 - June 2008. Graduated magna cum laude with a B.S.E. degree in Computer Science.