
Convex Optimization Over Integer Points


Authors

Jiang, Haotian

Abstract

Many problems in discrete optimization can be succinctly encapsulated as the question of minimizing a convex function $f$, which captures the combinatorial structure of the problem, over integer points that are typically $\{0,1\}^n$, i.e., $\min_{x \in \{0,1\}^n} f(x)$. At a high level, this thesis focuses on the efficient solvability and approximability of this optimization problem, with the aim of uncovering general principles for understanding and utilizing the interplay between continuous and discrete optimization in the future development of algorithm design. In seminal work, Grötschel, Lovász, and Schrijver (Combinatorica'81, Prog. Comb. Optim.'84, Springer'88) identified a central condition for the efficient solvability of this optimization problem: that the minimizer of $f$ lies in $\{0,1\}^n$ itself. Under this condition, Grötschel, Lovász, and Schrijver designed a unified framework using the ellipsoid method to establish the polynomial solvability of a broad range of combinatorial optimization problems. When the integer minimizer condition fails, the problem typically becomes computationally intractable, as witnessed by the NP-hardness of the Integer Linear Programming problem. In this case, one has to resort to solving a convex relaxation, which is typically the linear relaxation $\min_{x \in [0,1]^n} f(x)$, and then rounding the fractional solution to an integer one, where the rounding error is classically known to be related to the discrepancy of the system (Lovász, Spencer, and Vesztergombi, Eur. J. Comb.'86). Over the past decade, following a breakthrough of Bansal (FOCS'10), there has been a burst of progress in developing efficient algorithms for fundamental discrepancy results that were once thought to be computationally intractable. This opens up the opportunity to develop a unified and systematic framework for rounding, and more broadly for algorithm design, through the lens of discrepancy theory.
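The relax-and-round pipeline described above can be illustrated with a minimal sketch. The objective $f(x) = \|Ax - b\|^2$, the instance sizes, and the projected-gradient solver below are all hypothetical stand-ins chosen for brevity, not anything specific to the thesis: we minimize over the box $[0,1]^n$ and then round, and the gap between the rounded point and the exact integer minimizer is exactly the rounding error that discrepancy theory controls.

```python
import itertools
import numpy as np

# Toy convex objective (illustrative, not from the thesis): f(x) = ||Ax - b||^2.
rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((4, n))
b = rng.standard_normal(4)

def f(x):
    return float(np.sum((A @ x - b) ** 2))

# Exact integer minimizer by brute force over {0,1}^n (viable only for tiny n).
x_int = min(itertools.product([0, 1], repeat=n), key=lambda x: f(np.array(x)))

# Linear relaxation: projected gradient descent over the box [0,1]^n.
x = np.full(n, 0.5)
for _ in range(2000):
    grad = 2 * A.T @ (A @ x - b)          # gradient of ||Ax - b||^2
    x = np.clip(x - 0.01 * grad, 0.0, 1.0)  # project back onto [0,1]^n

# Naive coordinate-wise rounding of the fractional solution; discrepancy
# theory bounds how much this rounding step can cost in the worst case.
x_rounded = np.round(x)
```

Since `x_int` is the true integer optimum, `f(x_rounded)` can never beat `f(x_int)`; smarter, discrepancy-based rounding aims to keep that gap small.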
In this thesis, we make substantial progress in both of the directions discussed above. The contributions of this thesis are summarized below: (1) Under the integer minimizer condition, we give a faster and unified algorithm for solving the problem $\min_{x \in \{0,1\}^n} f(x)$ based on a reduction to the Shortest Vector Problem in lattice theory, improving upon the classical work of Grötschel, Lovász, and Schrijver from the 1980s in its full generality. Consequently, we obtain the first sub-cubic strongly polynomial oracle complexity algorithms for Submodular Function Minimization in its 50 years of study. We complement our algorithms by proving stronger hardness results for Submodular Function Minimization. (2) We advance the frontier for central problems in discrepancy theory and obtain new applications to algorithm design. In particular, we prove the Matrix Spencer Conjecture, a generalization of the seminal result of Spencer (Trans. Am. Math. Soc.'85) in discrepancy theory, up to poly-logarithmic rank; we also obtain the first poly-logarithmic discrepancy algorithms for Online Discrepancy Minimization, complementing Spencer's classical result from over 40 years ago, which states that random coloring cannot be improved against adaptive adversaries (Spencer, J. Comb. Theory Ser. B'77). We further demonstrate applications of these results to the problems of Quantum Random Access Codes and Online Fair Allocation.
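The random-coloring baseline that the online discrepancy results are measured against can be sketched concretely. The set system, sizes, and seed below are hypothetical, chosen only to show the quantity being bounded: for an incidence matrix $M$ of a set system and a coloring $x \in \{-1,+1\}^n$, the discrepancy is $\max_i |(Mx)_i|$, and a uniformly random coloring achieves $O(\sqrt{n \log n})$, whereas Spencer's offline theorem guarantees a coloring of discrepancy $O(\sqrt{n})$.

```python
import numpy as np

# Hypothetical set system on n elements: rows of the 0/1 incidence
# matrix M are the sets; M[i, j] = 1 iff element j belongs to set i.
rng = np.random.default_rng(1)
n = 200
M = rng.integers(0, 2, size=(n, n))

# Uniformly random +-1 coloring of the elements.
x = rng.choice([-1, 1], size=n)

# Discrepancy of this coloring: the largest imbalance over all sets.
disc = int(np.max(np.abs(M @ x)))
```

For this instance size, a random coloring typically lands around $\sqrt{n \log n} \approx 33$; the offline and online algorithms discussed in the thesis are about doing provably better than this baseline.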

Description

Thesis (Ph.D.)--University of Washington, 2022

Keywords

convex optimization, discrepancy theory, integer minimizer, integer points, rounding, submodular function minimization, Computer science, Applied mathematics, Operations research
