
Edge-Weighted Online Bipartite Matching

Published: 17 November 2022

Abstract

Online bipartite matching is one of the most fundamental problems in the online algorithms literature. Karp, Vazirani, and Vazirani (STOC 1990) gave an elegant algorithm for unweighted bipartite matching that achieves an optimal competitive ratio of \(1-1/e\). Aggarwal et al. (SODA 2011) later generalized their algorithm and analysis to the vertex-weighted case. Little is known, however, about the most general edge-weighted problem aside from the trivial \(1/2\)-competitive greedy algorithm. In this article, we present the first online algorithm that breaks the long-standing \(1/2\) barrier and achieves a competitive ratio of at least 0.5086. In light of the hardness result of Kapralov, Post, and Vondrák (SODA 2013), which restricts beating a \(1/2\) competitive ratio for the more general monotone submodular welfare maximization problem, our result can be seen as strong evidence that edge-weighted bipartite matching is strictly easier than submodular welfare maximization in an online setting.
The main ingredient in our online matching algorithm is a novel subroutine called online correlated selection (OCS), which takes a sequence of pairs of vertices as input and selects one vertex from each pair. Instead of using a fresh random bit to choose a vertex from each pair, the OCS negatively correlates decisions across different pairs and provides a quantitative measure on the level of correlation. We believe our OCS technique is of independent interest and will find further applications in other online optimization problems.

1 Introduction

Matchings are fundamental graph-theoretic objects that play an indispensable role in combinatorial optimization. For decades, there have been tremendous ongoing efforts to design more efficient algorithms for finding maximum matchings in terms of their cardinality and, more generally, their total weight. In particular, matchings in bipartite graphs have found countless applications in settings where it is desirable to assign entities from one set to those in another, for example, matching students to schools, physicians to hospitals, computing tasks to servers, and impressions in online media to advertisers. Due to the enormous growth of matching markets in digital domains, efficient online matching algorithms have become increasingly important. For example, search engine companies have created opportunities for online matching algorithms to have a massive impact in multibillion-dollar advertising markets. Motivated by these applications, we consider the problem of matching a set of impressions that arrive one by one to a set of advertisers that is known in advance. When an impression arrives, its edges to the advertisers are revealed and an irrevocable decision must be made about which advertiser to assign the impression. Karp et al. [38] provided an elegant online algorithm called Ranking to find matchings in unweighted bipartite graphs with a competitive ratio of \(1-1/e\). They also proved that this is the best achievable competitive ratio. Aggarwal et al. [1] later generalized this algorithm to the vertex-weighted online bipartite matching problem and showed that a \(1-1/e\) competitive ratio is still attainable.
The edge-weighted case, however, is less understood. This is partly due to the fact that no competitive algorithm exists without an additional assumption. To see this, consider two instances of the edge-weighted problem, each with one advertiser and two impressions. The edge weight of the first impression is 1 in both instances, and the weight of the second impression is 0 in the first instance and W in the second instance, for some arbitrarily large W. An online algorithm cannot distinguish between the two instances when the first impression arrives. However, it has to decide whether to assign this impression to the advertiser. Not assigning it gives a competitive ratio of 0 in the first instance, and assigning it gives an arbitrarily small competitive ratio of \(1/W\) in the second. This problem cannot be tackled unless assigning both impressions to the advertiser is an option.
In display advertising, assigning more impressions to an advertiser than they paid for only makes them happier. In other words, we can assign multiple impressions to a given advertiser. However, instead of gaining the weights of all edges assigned to it, we gain only the maximum weight, that is, the objective equals the sum of the heaviest edge weight assigned to each advertiser. This is equivalent to allowing the advertiser to dispose of previously matched edges for free to make room for new, heavier edges. This assumption is known as the free disposal model. In the display advertising literature [14, 39], the free-disposal assumption is well received and widely used because of its natural economic interpretation. More generally, edge-weighted online bipartite matching in the free disposal model is a special case of monotone submodular welfare maximization, for which we can apply known \(1/2\)-competitive greedy algorithms [16, 41].

1.1 Our Contributions

Despite 30 years of research in online matching since the seminal work of Karp et al. [38] and more than a decade of efforts after Feldman et al. [14] introduced the free disposal model, deciding whether there exists an edge-weighted online bipartite matching algorithm that achieves a competitive ratio greater than \(1/2\) has remained a tantalizing open problem. This article presents a new online algorithm and answers the question affirmatively, breaking the long-standing \(1/2\) barrier (under free disposal).
Theorem 1.1.
There is a 0.5086-competitive algorithm for edge-weighted online bipartite matching.
Given the hardness result of Kapralov et al. [36] that restricts beating a \(1/2\) competitive ratio for monotone submodular welfare maximization assuming that \({\bf NP} \ne {\bf RP}\), our algorithm shows that edge-weighted bipartite matching is strictly easier than submodular welfare maximization in an online setting.
From here, we will use the more formal terminologies of offline and online vertices in a bipartite graph instead of advertisers and impressions. One of our main technical contributions is a novel algorithmic ingredient called online correlated selection (OCS), which is an online subroutine that takes a sequence of pairs of vertices as input and selects one vertex from each pair. Instead of using a fresh random bit to make each of its decisions, the OCS asks to what extent the decisions across different pairs can be negatively correlated and guarantees that a vertex appearing in k pairs is selected at least once with probability strictly greater than \(1 - 2^{-k}\). See Section 3 for a short introduction and Section 5 for the full details.
Given an OCS, we can achieve a better than \(1/2\) competitive ratio for unweighted online bipartite matching with the following (barely) randomized algorithm. For each online vertex, either pick a pair of offline neighbors and let the OCS select one or choose one offline neighbor deterministically. Concretely, among the neighbors that have not been matched deterministically, find the least-matched ones, that is, those that have appeared in the least number of pairs so far. Pick two if there are at least two; otherwise, choose one deterministically. We analyze this algorithm in Appendix A.
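For illustration, here is a minimal Python sketch of this two-choice greedy, assuming an OCS black box that exposes a select(i1, i2) method; the data structures and tie-breaking (lexicographic, via list order) are our own simplifications, and Appendix A gives the precise algorithm.

```python
from collections import defaultdict

def two_choice_greedy(online_vertices, graph, ocs):
    """graph[j] lists j's offline neighbors; ocs.select(i1, i2) returns one."""
    pair_count = defaultdict(int)   # times each offline vertex entered a pair
    deterministic = set()           # offline vertices matched deterministically
    matched = set()                 # all offline vertices matched so far
    for j in online_vertices:
        candidates = [i for i in graph[j] if i not in deterministic]
        if not candidates:
            continue                # j remains unmatched
        k = min(pair_count[i] for i in candidates)
        least = [i for i in candidates if pair_count[i] == k]
        if len(least) >= 2:         # randomized round: the OCS selects
            i1, i2 = least[0], least[1]
            pair_count[i1] += 1
            pair_count[i2] += 1
            matched.add(ocs.select(i1, i2))
        else:                       # deterministic round
            deterministic.add(least[0])
            matched.add(least[0])
    return matched
```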
Although the competitive ratio of the just described algorithm is far worse than the optimal \(1-1/e\) ratio by Karp et al. [38], it benefits from improved generalizability. To extend this algorithm to the edge-weighted problem, we need a reasonable notion of “least-matched” offline neighbors. Suppose that one neighbor’s heaviest edge weight is either 1 or 4 each with probability \(1/2\) (over the randomness of the algorithm), another neighbor’s heaviest edge is 2 with certainty, and their edge weights with the current online vertex are both 3. Which one is less matched? To tackle this, we use the online primal-dual framework for matching problems by Devanur et al. [9] along with an alternative formulation of the edge-weighted online bipartite matching problem by Devanur et al. [8]. In short, we account for the contribution of each offline vertex by weight levels. At each weight level, we consider the probability that the heaviest edge matched to the vertex has weight of at least this level. This is the complementary cumulative distribution function (CCDF) of the heaviest edge weight; hence, we call this the CCDF viewpoint. Then, for each offline neighbor, we utilize the dual variables to compute an offer at each weight level should the current online vertex be matched to it. The neighbor with the largest net offer aggregating over all weight levels is considered the “least matched.” We introduce the online primal-dual framework and the CCDF viewpoint in Section 2. We formally present our edge-weighted matching algorithm in Section 4, followed by its analysis. In Appendix B, we include hard instances showing that the competitive ratio of our algorithm is nearly tight.
We draw connections between the online primal-dual algorithm in this article and the original algorithm of Fahrbach and Zadimoghaddam [12] in Appendix C. The algorithms share two key ideas. First, the original algorithm uses a pairing mechanism to safely make adaptive decisions based on potential past assignments. This is the initial version of negative correlation, that is, the mechanism is a primitive OCS, and it is crucial for making their analysis of the conditional probabilities tractable. Second, neither algorithm is fully adaptive—both algorithms operate in the expected state over all of their possible branches instead of conditioning on the outcomes of their internal randomness. This greatly limits the amount of randomness in the algorithms and means that most of the variables are deterministic quantities governed solely by the input graph and arrival order of the online vertices.

1.2 Related Work

While the online weighted bipartite matching algorithms literature is extensive, most of these works achieve competitive ratios greater than \(1/2\) by assuming that offline vertices have large capacities or that some stochastic information about the online vertices is known in advance. In this section, we list the most relevant works and refer interested readers to the excellent survey of Mehta [44]. There have also been several significant advances in more general settings, including new arrival models and non-bipartite graphs [2, 17, 18, 24, 25, 29].
Large Capacities. The capacity of an offline vertex is the number of online vertices that can be assigned to it. Exploiting the large-capacity assumption to beat \(1/2\) dates back two decades to Kalyanasundaram and Pruhs [35]. Feldman et al. [14] provided a \((1-1/e)\)-competitive algorithm for Display Ads, which is equivalent to edge-weighted online bipartite matching assuming large capacities. Under analogous assumptions, the same competitive ratio was obtained for AdWords [6, 45], in which offline vertices have a budget constraint on the total weight that can be assigned to them rather than the number of impressions. From a theoretical point of view, one of the main goals in the online matching literature is to develop algorithms with a competitive ratio greater than \(1/2\) without making any assumption on the capacities of offline vertices.
Stochastic Arrivals. If we have knowledge about the arrival patterns of online vertices, we can often leverage this information to design better algorithms. Typical stochastic assumptions include assuming that the online vertices are drawn from some known or unknown distribution [4, 10, 15, 22, 26, 32, 37, 43] or that they arrive in a random order [7, 13, 20, 21, 28, 33, 42]. These works achieve a \(1-\varepsilon\) competitive ratio if the large-capacity assumption holds in addition to the stochastic assumptions, or at least \(1-1/e\) for arbitrary capacities. Hybrid models that mix adversarial and stochastic assumptions have also been studied and are known to be very powerful in practice [11, 46]. Korula et al. [40] showed that the greedy algorithm is 0.505-competitive for the more general problem of submodular welfare maximization if the online vertices arrive in a random order, without any assumption on the capacities. The analysis is later simplified and improved to 0.509-competitive by Buchbinder et al. [5]. The random order assumption is particularly justified because Kapralov et al. [36] proved that beating \(1/2\) for submodular welfare maximization in the oblivious adversary model implies that \({\bf NP} = {\bf RP}\).
Subsequent Work. Several articles featuring work done concurrently with the work in this article have explored the open problems in Section 6 about improving OCS. Gao et al. [19] showed that the optimal “level of negative correlation” of an OCS lies in the interval \([0.167, 0.25]\), improving on our bounds of \([0.1099, 1)\). Unlike the matching-based approach in this article, their OCS uses probabilistic automata to obtain stronger negative correlation. Using their 0.167-OCS, they provided a 0.519-competitive algorithm for edge-weighted online bipartite matching. Blanc and Charikar [3] showed that multiway OCS (i.e., considering more than two elements in each round) is strictly more powerful than two-way OCS, obtaining a 0.5368 competitive ratio through a simpler reduction from online matching. Among other novelties, they relaxed the distinction between consecutive and inconsecutive steps in our definition of OCS to partially ignore small gaps. Shin and An [47] constructed a three-way OCS composed of two-way OCS subroutines to provide a 0.513-competitive algorithm for edge-weighted online bipartite matching when combined with the 0.167-OCS of Gao et al. [19].
Huang et al. [31] generalized the OCS technique to break the \(1/2\) barrier for the AdWords problem, achieving a 0.5016-competitive algorithm for general bids without any stochastic assumptions.
Huang et al. [27] applied an OCS to online stochastic matching and improved the competitive ratios of unweighted and vertex-weighted matching to 0.716. Tang et al. [48] further used OCS to break the \(1-1/e\) barrier for online stochastic matching with non-IID online vertices.

2 Preliminaries

The edge-weighted online matching problem considers a bipartite graph \(G = (L, R, E)\), where L and R are the sets of vertices on the left-hand side (LHS) and right-hand side (RHS), respectively, and \(E \subseteq L \times R\) is the set of edges. Every edge \((i, j) \in E\) is associated with a nonnegative weight \(w_{ij} \ge 0\), and we can assume without loss of generality that this is a complete bipartite graph, that is, \(E = L \times R\), by assigning zero weights to the missing edges.
The vertices on the LHS are offline in that they are all known to the algorithm in advance. The vertices on the RHS, however, arrive online one at a time. When an online vertex \(j \in R\) arrives, its incident edges and their weights are revealed, and the algorithm must immediately and irrevocably decide how to match j. Each offline vertex can be matched any number of times; however, only the weight of its heaviest edge counts towards the objective. This is equivalent to allowing an offline vertex i that is currently matched to some online vertex j to be rematched to a new online vertex \(j^{\prime }\) with edge weight \(w_{ij^{\prime }} \gt w_{ij}\), disposing of vertex j and edge \((i, j)\) for free. This assumption is known as the free disposal model [14].
The goal is to maximize the total weight of the matching. A randomized algorithm is \(\Gamma\)-competitive if its expected objective value is at least \(\Gamma\) times the offline optimal, in hindsight, for any instance of edge-weighted online matching. We refer to \(0 \le \Gamma \le 1\) as the competitive ratio of the algorithm.

2.1 Complementary Cumulative Distribution Function Viewpoint

Next, we describe an alternative formulation of the edge-weighted online matching problem due to Devanur et al. [8]. This approach captures the contribution of each offline vertex \(i \in L\) to the objective in terms of the CCDF of the heaviest edge weight matched to i. We refer to this method as the CCDF viewpoint.
For any offline vertex \(i \in L\) and any weight level \(w \ge 0\), let \(y_i(w)\) be the CCDF of the weight of the heaviest edge matched to i, that is, the probability that the algorithm has matched at least one online vertex j to i such that \(w_{ij} \ge w\). It follows that \(y_i(w)\) is a non-increasing function of w that takes values between 0 and 1. Further observe that \(y_i(w)\) is a step function with polynomially many pieces, since the number of pieces is at most the number of incident edges. Hence, we will be able to maintain \(y_i(w)\) in polynomial time.
The expected weight of the heaviest edge matched to i then equals the area under \(y_i(w)\), that is,
\begin{equation} \int _0^\infty y_i(w) \mathop {}\!\mathrm{d}w \text{.} \end{equation}
(1)
This follows from the standard identity \(\mathbb {E}[X] = \int _0^\infty \Pr [X \ge w] \mathop {}\!\mathrm{d}w\) for a nonnegative random variable X, which expresses the expected value using only its (complementary) cumulative distribution function.
We illustrate this idea with an example in Figure 1. Suppose that an offline vertex i has four online neighbors \(j_1\), \(j_2\), \(j_3\), and \(j_4\) with edge weights \(w_1 \lt w_2 \lt w_3 \lt w_4\). Further, suppose that \(j_1\) is matched to i with certainty, whereas \(j_2\), \(j_3\), and \(j_4\) each have some probability of being matched to i. (The latter events may be correlated.) Next, suppose that a new neighbor arrives whose edge weight is also \(w_3\). The values of \(y_i(w)\) are then increased for \(w_1 \lt w \le w_3\) accordingly, and the total area of the shaded regions is the increment in the expected weight of the heaviest edge matched to vertex i.
Fig. 1. Complementary cumulative distribution function (CCDF) viewpoint. The first function is the CCDF of vertex i, and the second function demonstrates how the CCDF of vertex i is updated.
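To make Equation (1) concrete, the following Python snippet (ours, with hypothetical probabilities) integrates a step-function CCDF to recover the expected weight of the heaviest matched edge.

```python
def expected_heaviest_weight(levels):
    """Area under a step-function CCDF, as in Equation (1).

    `levels` lists pairs (w_t, p_t) with p_t = Pr[heaviest weight >= w_t],
    sorted by increasing w_t; the CCDF is constant on each (w_{t-1}, w_t],
    so the area is sum_t (w_t - w_{t-1}) * p_t.
    """
    total, prev_w = 0.0, 0.0
    for w, p in levels:
        total += (w - prev_w) * p
        prev_w = w
    return total

# In the spirit of Figure 1: j_1 is matched to i with certainty, so the CCDF
# is 1 up to w_1; higher levels hold with the stated (made-up) probabilities.
print(expected_heaviest_weight([(1.0, 1.0), (2.0, 0.6), (3.0, 0.5), (4.0, 0.25)]))
```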

2.2 Online Primal-Dual Framework

We analyze our algorithm using a linear program (LP) for edge-weighted matching under the online primal-dual framework. Consider the standard matching LP and its dual below. We interpret the primal variables \(x_{ij}\) as the probability that \((i, j)\) is the heaviest edge matched to vertex i.
\begin{align*} \!\!\!\!\!\ \text{maximize} \quad & \sum _{i \in L} \sum _{j \in R} w_{ij} x_{ij} \\ \text{subject to} \quad & \sum _{j \in R} x_{ij} \le 1 && \forall ~i \in L \\ & \sum _{i \in L} x_{ij} \le 1 && \forall ~j \in R \\ & x_{ij} \ge 0 && \forall ~i \in L, \forall ~j \in R \end{align*}
\begin{align*} \text{minimize} \quad & \sum _{i \in L} \alpha _i + \sum _{j \in R} \beta _j \\ \text{subject to} \quad & \alpha _i + \beta _j \ge w_{ij} && \forall ~i \in L, \forall ~j \in R \\ & \alpha _i \ge 0 && \forall ~i \in L \\ & \beta _j \ge 0 && \forall ~j \in R \end{align*}
Let \({\rm\small P}\) denote the primal objective. If \(x_{ij}\) is the probability that \((i, j)\) is the heaviest edge matched to i, then \({\rm\small P}\) equals the expected objective value of the algorithm. Let \({\rm\small D}\) denote the dual objective.
Online algorithms under the online primal-dual framework maintain not only a matching but also a dual assignment (not necessarily feasible) at all times subject to the conditions summarized in the following.
Lemma 2.1.
Suppose that an online algorithm simultaneously maintains primal and dual assignments such that for some constant \(0 \le \Gamma \le 1\), the following conditions hold at all times:
(1)
Approximate dual feasibility: For any \(i \in L\) and any \(j \in R\), we have that \(\alpha _i + \beta _j \ge \Gamma \cdot w_{ij}\).
(2)
Reverse weak duality: The objectives of the primal and dual assignments satisfy \({\rm\small P}\ge {\rm\small D}\).
Then, the algorithm is \(\Gamma\)-competitive.
Proof.
By the first condition, the values \(\Gamma ^{-1} \alpha _i\) and \(\Gamma ^{-1} \beta _j\) form a feasible dual assignment with objective value \(\Gamma ^{-1} {\rm\small D}\). By weak duality of linear programming, the objective of any feasible dual assignment upper bounds the optimal primal objective; hence, \({\rm\small D}\) is at least \(\Gamma\) times the optimal. Applying the second condition proves the lemma.□
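Spelling out the chain of inequalities in the proof, with OPT denoting the optimal objective value,
\begin{equation*} {\rm\small P}\ge {\rm\small D}= \Gamma \cdot \left(\Gamma ^{-1} {\rm\small D}\right) \ge \Gamma \cdot \textrm {OPT} \text{.} \end{equation*}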
Online Primal-Dual in the CCDF Viewpoint. In light of the CCDF viewpoint, for any offline vertex \(i \in L\) and any weight level \(w \gt 0\), we introduce and maintain new variables \(\alpha _i(w)\) that satisfy
\begin{equation} \alpha _i = \int _0^\infty \alpha _i(w) \mathop {}\!\mathrm{d}w \text{.} \end{equation}
(2)
Accordingly, we rephrase approximate dual feasibility in Lemma 2.1 in the CCDF viewpoint as
\begin{equation} \int _0^{\infty } \alpha _i(w) \mathop {}\!\mathrm{d}w + \beta _j \ge \Gamma \cdot w_{ij} \text{.} \end{equation}
(3)
Concretely, at each step of our primal-dual algorithm, \(\alpha _i(w)\) is a piecewise constant function with possible discontinuities at the weight levels \(w \in \lbrace w_{ij} : \text{online vertex $j$ has arrived}\rbrace\). Initially, all of the \(\alpha _i(w)\)s are the zero function. Then, as each online vertex \(j \in R\) arrives, if j is potentially matched to an offline candidate \(i \in L\), the function values of \(\alpha _{i}(w)\) are systematically increased according to the dual update rules in Section 4.1. In contrast, each dual variable \(\beta _j\) is a scalar value that is initialized to zero and increased only once during the algorithm, at the time when j arrives.

3 Online Correlated Selection: an Introduction

This section introduces our novel ingredient for online algorithms, which we believe to be widely applicable and of independent interest. To motivate this technique, consider the following thought experiment in the case of unweighted online matching, that is, \(w_{ij} \in \lbrace 0, 1\rbrace\) for any \(i \in L\) and any \(j \in R\).
Deterministic Greedy. We first recall why all deterministic greedy algorithms that match each online vertex to an unmatched offline neighbor are at most \(1/2\)-competitive. Consider an instance with a graph that has two offline and two online vertices. The first online vertex is adjacent to both offline vertices; the algorithm deterministically chooses one of them. The second online vertex, however, is adjacent to the previously matched vertex only.
Two-Choice Greedy with Independent Random Bits. We can avoid this problem by matching the first online vertex randomly, which improves the expected matching size from 1 to 1.5. In this spirit, consider the following two-choice greedy algorithm. When an online vertex arrives, identify its neighbors that are least likely to be matched (over the randomness in previous rounds). If there is more than one such neighbor, choose any two, for example, lexicographically, and match to one with a fresh random bit. Otherwise, match to the least-matched neighbor deterministically. We refer to the former as a randomized round and the latter as a deterministic round. Since each randomized round uses a fresh random bit, this is equivalent to matching to neighbors that have been chosen in the least number of randomized rounds and in no deterministic round. Unfortunately, this algorithm is also \(1/2\)-competitive due to upper triangular graphs. We defer this standard example to Appendix B.1.
Two-Choice Greedy with Perfect Negative Correlation. The last algorithm in this thought experiment is an imaginary variant of two-choice greedy that perfectly and negatively correlates the randomized rounds so that each offline vertex is matched with certainty after being a candidate in two randomized rounds. It is impossible to achieve such perfect negative correlation in the online setting in general (see Appendix B.3 for an explanation). Nevertheless, if we assume feasibility then this algorithm is \(5/9\)-competitive [30]. In fact, it is effectively the 2-matching algorithm of Kalyanasundaram and Pruhs [35] by having two copies of each online vertex and allowing offline vertices to be matched twice. This motivates the following question:
Can we use partial negative correlation to retain feasibility and break the \(1/2\) barrier?
We answer this affirmatively by introducing an algorithmic ingredient called online correlated selection (OCS), which allows us to quantify the amount of negative correlation between the randomized rounds. Appendix A provides an analysis of the two-choice greedy algorithm powered by OCS in the unweighted case. Section 4 generalizes this approach to edge-weighted online matching, giving us the first algorithm with a competitive ratio that is provably greater than \(1/2\).
Definition 3.1 (γ-semi-OCS)
Consider a set of ground elements. For any \(\gamma \in [0, 1]\), a γ-semi-OCS is an online algorithm that takes as input a sequence of pairs of elements, and selects one element per pair in an online fashion such that if an element has appeared in \(k \ge 1\) pairs, it is selected at least once with probability at least
\begin{equation*} 1 - 2^{-k} (1 - \gamma)^{k-1}\text{.} \end{equation*}
Using independent random bits is a 0-semi-OCS, and the perfect negative correlation in the thought experiment is a 1-semi-OCS, although it is typically infeasible. Our algorithms satisfy a stronger definition, which considers any collection of pairs containing an element i. This stronger definition is useful for generalizing to the edge-weighted bipartite matching problem.
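For instance, an element that appears in \(k = 2\) pairs is selected at least once with probability \(3/4\) under independent random bits, whereas a \(1/16\)-semi-OCS already guarantees
\begin{equation*} 1 - 2^{-2} \left(1 - \tfrac{1}{16}\right) = \tfrac{49}{64} \gt \tfrac{3}{4} \text{.} \end{equation*}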
In the following definition, a subsequence (not necessarily contiguous) of pairs containing element i is consecutive if it includes all of the pairs that contain element i between the first and last pair in the subsequence. Two subsequences of pairs are disjoint if no pair belongs to both of them. For example, consider the sequence \((\lbrace a,i\rbrace , \lbrace b,i\rbrace , \lbrace c,d\rbrace , \lbrace e,i\rbrace , \lbrace i,z\rbrace)\). The subsequences \((\lbrace a,i\rbrace ,\lbrace b,i\rbrace)\) and \((\lbrace i,z\rbrace)\) are consecutive and disjoint, but the subsequence \((\lbrace a,i\rbrace , \lbrace b,i\rbrace , \lbrace i,z\rbrace)\) is not consecutive because it does not include the pair \(\lbrace e,i\rbrace\). As a special case, a pair is consecutive to another pair if they share an element i and no other pairs between them contain element i.
Definition 3.2 (γ-OCS)
Consider a set of ground elements. For any \(\gamma \in [0, 1]\), a γ-OCS is an online algorithm that takes as input a sequence of pairs of elements and selects one per pair such that for any element i and any disjoint subsequences of \(k_1, k_2, \dots , k_m\) consecutive pairs containing i, i is selected in at least one of these pairs with probability at least
\begin{equation*} 1 - \prod _{\ell =1}^m 2^{-k_\ell } (1 - \gamma)^{k_\ell -1} ~. \end{equation*}
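For instance, in the example sequence above, the disjoint consecutive subsequences \((\lbrace a,i\rbrace , \lbrace b,i\rbrace)\) and \((\lbrace i,z\rbrace)\) give \(m = 2\), \(k_1 = 2\), and \(k_2 = 1\), so a γ-OCS must select i in at least one of these three pairs with probability at least
\begin{equation*} 1 - 2^{-2} \left(1 - \gamma \right) \cdot 2^{-1} = 1 - \tfrac{1}{8} \left(1 - \gamma \right) \text{.} \end{equation*}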
Theorem 3.3.
There exists a \(\frac{13\sqrt {13}-35}{108} \gt 0.1099\)-OCS.
We defer the design and analysis of the 0.1099-OCS to Section 5. Here, we describe a weaker \(1/16\)-OCS, which suffices for breaking the \(1/2\) barrier. We give the formal proof of the \(1/16\)-OCS construction in Section 5.1.
Proof Sketch of a \(1/16\)-OCS
Consider two sequences of independent random bits. The first sequence is used to construct a random matching among the pairs, where any two consecutive pairs (with respect to some common element) are matched with probability \(1/16\). Each pair is consecutive to at most four pairs, one before it and one after it for each of its two elements. For each pair, choose one of its consecutive pairs, each with probability \(1/4\). Two consecutive pairs are matched if they choose each other.
The second random sequence is used to select an element from each pair. For an unmatched pair, choose one of its elements with a fresh random bit. For any two matched pairs, use a fresh random bit to choose an element in the first pair. Then, make the opposite selection in the later pair (i.e., select the common element if it is not selected in the earlier pair and vice versa). Observe that even if two matched pairs are identical, there is no ambiguity in the opposite selection.
Next, fix any element i and any disjoint subsequences of \(k_1, k_2, \dots , k_m\) consecutive pairs that contain i. We bound the probability that i is never selected. If any two of these pairs are matched, i is selected once in the two pairs. Otherwise, the selections from the pairs are independent and the probability that i is never selected is \(\prod _{\ell =1}^m 2^{-k_\ell }\). Applying the law of total probability to the event that i is in a matched pair, it remains to upper bound the probability that no pairs are matched by \(\prod _{\ell =1}^m (1 - 1/16)^{k_\ell -1}\). Intuitively, this is because there are \(k_\ell -1\) choices of two consecutive pairs within the \(\ell\)-th subsequence, each of which is matched with probability \(1/16\). These events are negatively dependent; thus, the probability that none of them happens is upper bounded by the independent case.□
Readers familiar with the theory of negative association [34] can directly argue that the events in this proof sketch are negatively dependent, which then makes this a complete proof. Instead of using negative association, we present a proof of this warm-up case from first principles in Section 5.1. In the process, we will build up tools for constructing and analyzing the OCS in Theorem 3.3.

4 Edge-Weighted Online Matching

This section presents an online primal-dual algorithm for the edge-weighted online bipartite matching problem. The algorithm uses a γ-OCS as a black box, and its competitive ratio depends on the value of γ. For \(\gamma = 1/16\) (as sketched in Section 3), it is 0.505-competitive, and for \(\gamma \approx 0.1099\) (as in Theorem 3.3), it is 0.5086-competitive, proving our main result about edge-weighted online matching.

4.1 Online Primal-Dual Algorithm

The algorithm is similar to the two-choice greedy in the previous section. It maintains an OCS with the offline vertices as the ground elements. For each online vertex, the algorithm either (1) matches it deterministically to one offline neighbor, (2) chooses a pair of offline neighbors and matches to the one selected by the OCS, or (3) leaves it unmatched. We refer to the first case as a deterministic round, the second as a randomized round, and the third as an unmatched round.
How does the algorithm decide whether it is a randomized, deterministic, or unmatched round, and how does it choose the candidate offline vertices? We leverage the online primal-dual framework. When an online vertex j arrives, it calculates for every offline vertex i how much the dual variable \(\beta _j\) would gain if j is matched to i in a deterministic round, denoted as \(\Delta _i^D \beta _j\), and similarly \(\Delta _i^R \beta _j\) for a randomized round. Then, it finds \(i^*\) with the maximum \(\Delta _i^D \beta _j\) and \(i_1, i_2\) with the maximum \(\Delta _i^R \beta _j\). If both \(\Delta _{i_1}^R \beta _j + \Delta _{i_2}^R \beta _j\) and \(\Delta _{i^*}^D \beta _j\) are negative, it leaves j unmatched. If \(\Delta _{i_1}^R \beta _j + \Delta _{i_2}^R \beta _j\) is nonnegative and greater than \(\Delta _{i^*}^D \beta _j\), it matches j in a randomized round with \(i_1\) and \(i_2\) as the candidates using its OCS. Finally, if \(\Delta _{i^*}^D \beta _j\) is nonnegative and greater than \(\Delta _{i_1}^R \beta _j + \Delta _{i_2}^R \beta _j\), it matches j to \(i^*\) in a deterministic round. See Algorithm 1 for the formal definition.
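The following Python fragment (ours) sketches only this dispatch rule, assuming at least two offline vertices and helper functions delta_R and delta_D that evaluate \(\Delta _i^R \beta _j\) and \(\Delta _i^D \beta _j\) (Equations (5) and (6) below); tie-breaking here is arbitrary.

```python
def dispatch(j, offline, delta_R, delta_D, ocs):
    """Decide whether j's round is unmatched, randomized, or deterministic."""
    by_R = sorted(offline, key=lambda i: delta_R(i, j), reverse=True)
    i1, i2 = by_R[0], by_R[1]               # best pair for a randomized round
    i_star = max(offline, key=lambda i: delta_D(i, j))
    gain_R = delta_R(i1, j) + delta_R(i2, j)
    gain_D = delta_D(i_star, j)
    if gain_R < 0 and gain_D < 0:
        return None                         # unmatched round
    if gain_R >= gain_D:
        return ocs.select(i1, i2)           # randomized round via the OCS
    return i_star                           # deterministic round
```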
It remains to explain how \(\Delta _i^D \beta _j\) and \(\Delta _i^R \beta _j\) are calculated. For any offline vertex \(i \in L\) and any weight level \(w \gt 0\), let \(k_i(w)\) be the number of randomized rounds in which i has been chosen (as input to the OCS) and has edge weight at least w. The values of \(k_i(w)\) may change over time; thus, we consider these values at the beginning of each online round. The increments to the dual variables \(\alpha _{i}(w)\) and \(\beta _j\) depend on the values of \(k_i(w)\) via the following gain-sharing parameters, which we determine later using a factor-revealing LP to optimize the competitive ratio. The gain-sharing values are listed at the end of this section in Table 1.
Table 1. Approximately Optimal Solutions to the Factor-Revealing LP with \(\kappa = 3/2\) and \(k_{\max }= 8\)
\(a(k)\): Amortized increment in the dual variable \(\alpha _i(w)\) if i is chosen as one of the two candidates in a randomized round in which its edge weight is at least w and \(k_i(w) = k\).
\(b(k)\): Increment in the dual variable \(\beta _j\) due to an offline vertex i at weight level \(w \le w_{ij}\) if j is matched in a randomized round with i as one of the two candidates and \(k_i(w) = k\).
Note that these gain-sharing values \(a(k)\) and \(b(k)\) are instance independent (i.e., they do not depend on the underlying graph) and defined for all \(k \in \mathbb {Z}_{\ge 0}\). We interpret these parameters according to a gain-splitting rule. If i is one of the two candidates to be matched to j in a randomized round, the increase in the expected weight of the heaviest edge matched to i equals the integral of the increments in \(y_i(w)\) over \(0 \lt w \le w_{ij}\), which can be related to the values of the \(k_i(w)\)s. We then lower bound the gain due to the increment of \(y_i(w)\) using the definition of a γ-OCS and split the gain into two parts, \(a(k_i(w))\) and \(b(k_i(w))\). The former is assigned to \(\alpha _i(w)\) and the latter goes to \(\beta _j\).
In fact, we prove at the end of this subsection the following invariant about how the dual variables \(\alpha _i(w)\) are incremented:
\begin{equation} \alpha _i(w) \ge \sum _{0 \le \ell \lt k_i(w)} a(\ell) \text{.} \end{equation}
(4)
Next, define \(\Delta ^R_i \beta _j\) to be
\begin{equation} \Delta _i^R \beta _j \stackrel{\text{def}}{=}\int _0^{w_{ij}} b\left(k_i(w)\right) \mathop {}\!\mathrm{d}w - \frac{1}{2} \int _{w_{ij}}^{\infty } \sum _{0 \le \ell \lt k_i(w)} a(\ell) \mathop {}\!\mathrm{d}w \text{.} \end{equation}
(5)
We should think of \(\Delta _i^R \beta _j\) as the increase in the dual variable \(\beta _j\) due to offline vertex i, if i is chosen as one of the two candidates for j in a randomized round. The first term in Equation (5) follows from the interpretation of \(b(k)\) cited earlier (and would be the only term in the unweighted case). The second term is designed to cancel out the extra help we get from the \(\alpha _i(w)\)s at weight-levels \(w \gt w_{ij}\) in order to satisfy approximate dual feasibility for the edge \((i,j)\). Concretely, if j is matched in a randomized round to two candidates at least as good as i, our choice of \(b(k)\)’s ensures approximate dual feasibility between i and j (i.e., the following inequality holds):
\begin{equation*} \int _0^{\infty } \alpha _i(w) \mathop {}\!\mathrm{d}w + 2 \cdot \Delta _i^R \beta _j \ge \Gamma \cdot w_{ij} \text{.} \end{equation*}
Finally, for some \(1 \lt \kappa \lt 2\), define the value of \(\Delta _i^D \beta _j\) to be
\begin{equation} \Delta _i^D \beta _j \stackrel{\text{def}}{=}\kappa \cdot \Delta _i^R \beta _j = \kappa \int _0^{w_{ij}} b\left(k_i(w)\right) \mathop {}\!\mathrm{d}w - \frac{\kappa }{2} \int _{w_{ij}}^{\infty } \sum _{0 \le \ell \lt k_i(w)} a(\ell) \mathop {}\!\mathrm{d}w. \end{equation}
(6)
For concreteness, readers can assume that \(\kappa = 1.5\). The competitive ratio, however, is insensitive to the choice of \(\kappa\) as long as it is neither too close to 1 nor to 2. On one hand, \(\kappa \gt 1\) ensures that if the algorithm chooses a randomized round with offline vertex \(i_1\) and another vertex \(i_2\) as the candidates, the contribution from \(i_2\) to \(\beta _j\) must be at least a \(\kappa - 1\) fraction of what \(i_1\) offers. Otherwise, the algorithm would have preferred a deterministic round with \(i_1\) alone. On the other hand, we have \(\kappa \lt 2\) because otherwise a randomized round would always be inferior to a deterministic round. We further explain the definitions of \(\Delta _{i}^R \beta _j\) and \(\Delta _{i}^D \beta _j\) in Section 4.3 and demonstrate how their terms interact when proving that the dual assignments always satisfy approximate dual feasibility.
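As a concrete reading of Equations (5) and (6), the sketch below (ours) evaluates \(\Delta _i^R \beta _j\) when \(k_i(\cdot)\) is stored as a non-increasing step function; since \(k_i(w) = 0\) above the largest edge weight seen so far and the empty sum vanishes, the second integral is finite.

```python
def delta_R(w_ij, steps, a, b):
    """Equation (5). `steps` lists (w_t, k_t), sorted by w_t, meaning
    k_i(w) = k_t on (w_{t-1}, w_t] with w_0 = 0 and k_i(w) = 0 beyond the
    last breakpoint; a(k) and b(k) are the gain-sharing values."""
    gain, penalty, prev_w = 0.0, 0.0, 0.0
    for w, k in steps + [(w_ij, 0)]:        # sentinel covers any tail below w_ij
        lo, hi = prev_w, min(w, w_ij)
        if lo < hi:                         # part of (prev_w, w] below w_ij
            gain += (hi - lo) * b(k)
        if w > w_ij and k > 0:              # part above w_ij with k_i(w) >= 1
            penalty += (w - max(prev_w, w_ij)) * sum(a(l) for l in range(k))
        prev_w = w
    return gain - penalty / 2

def delta_D(w_ij, steps, a, b, kappa=1.5):
    return kappa * delta_R(w_ij, steps, a, b)   # Equation (6)
```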
Primal Increments. We have defined the primal algorithm and, implicitly, how the dual algorithm updates the \(\beta _j\)s. It remains to define the updates to the \(\alpha _i(w)\)s. First, we need to characterize the primal increment since the dual updates are driven by it. Recall that by the CCDF viewpoint:
\begin{equation*} {\rm\small P}= \sum _{i \in L} \int _0^\infty y_i(w) \mathop {}\!\mathrm{d}w \text{.} \end{equation*}
Since it is difficult to account for the exact CCDF \(y_i(w)\) due to complicated correlations in the selections, we instead consider a lower bound for it given by the γ-OCS. A critical observation here is that the decisions made by the primal-dual algorithm are deterministic except for the randomness in the OCS. In particular, its choices of \(i_1\), \(i_2\), \(i^*\) and the decisions about whether a round is unmatched, randomized, or deterministic are independent of the selections in the OCS and, therefore, deterministic quantities governed solely by the input graph and arrival order of the online vertices. Hence, we may view the sequence of pairs of candidates the OCS considers as fixed.
For any offline vertex i and weight level \(w \gt 0\), consider the randomized rounds in which i is a candidate and has edge weight at least w. Decompose these rounds into disjoint collections of, say, \(k_1, k_2, \dots , k_m\) consecutive rounds. We must consider this nontrivial partition into m subsequences, rather than one large subsequence, because i may have been chosen in intermediate randomized rounds with weight level less than w. This is precisely why we need the guarantee of a γ-OCS instead of a γ-semi-OCS. By Definition 3.2, vertex i is selected by the algorithm (either deterministically or in a randomized round by the γ-OCS) with probability at least
\begin{equation} \overline{y}_i(w) \stackrel{\text{def}}{=}{\left\lbrace \begin{array}{ll} 1 & \text{if $i$ has been matched in a deterministic round;} \\ 1 - \prod _{\ell =1}^m 2^{-k_\ell } \left(1 - \gamma \right)^{k_\ell -1} & \text{otherwise.} \end{array}\right.} \end{equation}
(7)
Accordingly, we will use the following surrogate primal objective:
\begin{equation*} \overline{{\rm\small P}} = \sum _{i \in L} \int _0^\infty \overline{y}_i(w) \mathop {}\!\mathrm{d}w \text{.} \end{equation*}
Lemma 4.1.
The primal objective is lower bounded by the surrogate, that is, \(\overline{{\rm\small P}} \le {\rm\small P}\).
It will often be more convenient to consider the following characterization of \(\overline{y}_i(w)\):
Initially, let \(\overline{y}_i(w) = 0\).
If i is matched in a deterministic round in which its edge weight is at least w, set \(\overline{y}_i(w) = 1\).
If i is chosen in a randomized round in which its edge weight is at least w, further consider \(w^{\prime }\), its edge weight in the previous randomized round involving i; let \(w^{\prime } = 0\) if this is the first randomized round involving i. Then, decrease the gap \(1 - \overline{y}_i(w)\) by a \(\frac{1}{2} (1 - \gamma)\) factor if \(w^{\prime } \ge w\), that is, if it is the second or later pair of a collection of consecutive pairs containing i with edge weight at least w; otherwise, decrease the gap by a \(\frac{1}{2}\) factor (see the sketch after this list). The missing \(1-\gamma\) factor in the second case corresponds to the \(-1\) in the exponent of \(1 - \gamma\) in Equation (7).
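A minimal sketch (ours) of this three-rule update, tracking the gap \(g = 1 - \overline{y}_i(w)\) for a single offline vertex i and weight level w:

```python
def update_gap(g, round_type, edge_weight, w, prev_weight, gamma):
    """One round's effect on g = 1 - ybar_i(w); `prev_weight` is i's edge
    weight in the previous randomized round involving i (0 if none)."""
    if edge_weight < w:
        return g                   # this round does not touch weight level w
    if round_type == "deterministic":
        return 0.0                 # ybar_i(w) jumps to 1
    # randomized round: consecutive at level w if and only if prev_weight >= w
    return g * (1 - gamma) / 2 if prev_weight >= w else g / 2
```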
Lemma 4.2.
For any offline vertex i and any weight level \(w \gt 0\), we have that
\begin{equation*} 1 - \overline{y}_i(w) \ge 2^{-k_i(w)} \left(1 - \gamma \right)^{\max \lbrace k_i(w)-1, 0\rbrace } \text{.} \end{equation*}
Proof.
Initially, \(1 - \overline{y}_i(w)\) equals 1. Then, by the characterization above, it is multiplied by \(1/2\) in the first randomized round involving i with edge weight at least w and by at least \(\frac{1}{2} \left(1 - \gamma \right)\) in each of the subsequent \(k_i(w) - 1\) such rounds.□
Recall that \(\overline{y}_i(w)\) increases to 1 in a deterministic round; Lemma 4.2 gives a lower bound for this increment.
Lemma 4.3.
For any offline vertex i and any weight level \(w \gt 0\), if i is matched in a deterministic round in which its edge weight is at least w, the increment in \(\overline{y}_i(w)\) is at least
\begin{equation*} 2^{-k_i(w)} \left(1 - \gamma \right)^{\max \lbrace k_i(w)-1,0\rbrace } \text{.} \end{equation*}
Lemma 4.4.
For any offline vertex i and any weight level \(w \gt 0\), if i is chosen as a candidate in a randomized round in which its edge weight is at least w, the increment in \(\overline{y}_i(w)\) is at least
\begin{equation*} 2^{-k_i(w)-1} \left(1 - \gamma \right)^{\max \lbrace k_i(w)-1, 0\rbrace } \text{.} \end{equation*}
Suppose further that vertex i’s edge weight is also at least w in the last randomized round involving i. Then, it follows that \(k_i(w) \ge 1\) and the increment in \(\overline{y}_i(w)\) is at least
\begin{equation*} 2^{-k_i(w)-1} \left(1 - \gamma \right)^{k_i(w)-1} \left(1 + \gamma \right). \end{equation*}
Proof.
By definition, \(1 - \overline{y}_i(w)\) is multiplied by a factor of either \(\frac{1}{2} (1 - \gamma)\) or \(\frac{1}{2}\) in a randomized round, depending on whether vertex i's edge weight was at least w the last time it was chosen in a randomized round. Therefore, the increment in \(\overline{y}_i(w)\) is either a \(\frac{1}{2} (1 + \gamma)\) fraction of \(1 - \overline{y}_i(w)\) or a \(\frac{1}{2}\) fraction. Putting this together with the lower bound for \(1 - \overline{y}_i(w)\) in Lemma 4.2 proves the lemma.□
Dual Updates to Online Vertices. Consider any online vertex \(j \in R\) at the time of its arrival. The dual variable \(\beta _j\) will only increase at the end of this round depending on the type of assignment. If j is left unmatched, then the value of \(\beta _j\) remains zero. If j is matched in a randomized round, set \(\beta _j = \Delta _{i_1}^R \beta _j + \Delta _{i_2}^R \beta _j\). Last, if j is matched in a deterministic round, set \(\beta _j = \Delta _{i^*}^D \beta _j\).
Dual Updates to Offline Vertices: Proof of Equation (4). Fix any offline vertex \(i \in L\). Suppose that i is matched in a deterministic round in which its edge weight is \(w_{ij}\). Then, for any weight level \(w \gt w_{ij}\), the value of \(k_i(w)\) stays the same. Thus, we leave \(\alpha _i(w)\) unchanged. On the other hand, for any weight level \(w \le w_{ij}\), the value of \(k_i(w)\) becomes \(\infty\) by definition. Therefore, to maintain the invariant in Equation (4), we increase \(\alpha _i(w)\) for each weight level \(w \le w_{ij}\) by
\begin{equation} \sum _{\ell = k_i(w)}^{\infty } a(\ell) \text{.} \end{equation}
(8)
The updates in randomized rounds are more subtle. Suppose that i is one of the two candidates in a randomized round in which its edge weight is \(w_{ij}\). Further, consider i’s edge weight the last time it was chosen in a randomized round, denoted as \(w^{\prime }\); let \(w^{\prime } = 0\) if this is the first randomized round involving vertex i. Then, \(w_{ij}\) and \(w^{\prime }\) partition the weight levels \(w \gt 0\) into up to three subsets, each of which requires a different update rule for \(\alpha _i(w)\). Concretely, the algorithm increases \(\alpha _i(w)\) by
\begin{equation} {\left\lbrace \begin{array}{ll} a\left(k_i(w)\right) & \mbox{if $0 \lt w \le w_{ij}$, and either $w \le w^{\prime }$ or $k_i(w) = 0$;} \\ a\left(k_i(w)\right) - 2^{-k_i(w)-1} \left(1 - \gamma \right)^{k_i(w)-1} \gamma & \text{if } w^{\prime } \lt w \le w_{ij} \textrm { and } k_i(w) \ge 1; \\ 2^{-k_i(w)-1} \left(1 - \gamma \right)^{k_i(w)-1} \gamma & \text{if } w \gt w_{ij} \textrm { and } k_i(w) \ge 1. \end{array}\right.} \end{equation}
(9)
The first case is straightforward—we simply increase \(\alpha _i(w)\) by \(a\left(k_i(w)\right)\) to maintain the invariant in Equation (4). Observe that this is the only case in the unweighted problem.
For a weight level w that falls into the second case (if there is any), the increment in \(\alpha _i(w)\) is smaller than the first case by \(2^{-k_i(w)-1} (1 - \gamma)^{k_i(w)-1} \gamma\). This is the difference between the lower bounds for the increments in \(\overline{y}_i(w)\) in Lemma 4.4 depending on whether i’s edge weight was at least w the last time it was chosen in a randomized round. Since the increase in the surrogate primal objective \(\overline{{\rm\small P}}\) due to vertex i and weight level w (when \(w^{\prime } \lt w\)) is less than the first case of Equation (9), we subtract this difference from the increment in \(\alpha _i(w)\) so that the update to \(\beta _j\) is unaffected.
How can we still maintain the invariant in Equation (4) given the subtraction in the second case? Observe that if the second case happens, the same weight level must fall into the third case in the previous randomized round involving i. Thus, an equal amount is prepaid to each \(\alpha _i(w)\) in the previous round. This give-and-take in the offline dual vertex updates becomes clear when we prove reverse weak duality in the next subsection.

4.2 Online Primal-Dual Analysis: Reverse Weak Duality

This subsection derives a set of sufficient conditions under which the increment in the surrogate primal \(\overline{{\rm\small P}}\) is at least that of the dual solution \({\rm\small D}\). Reverse weak duality then follows from \({\rm\small P}\ge \overline{{\rm\small P}} \ge {\rm\small D}\).
Deterministic Rounds. Suppose that j is matched to i in a deterministic round. Using the lower bound for the increase of \(\overline{{\rm\small P}}\) in Lemma 4.3, the increase of the \(\alpha _i(w)\)s in Equation (8), and an upper bound for \(\beta _j\) obtained by dropping the second term in Equation (6), we need
\begin{equation*} \int _0^{w_{ij}} \sum _{\ell = k_i(w)}^\infty a(\ell) \mathop {}\!\mathrm{d}w + \kappa \int _0^{w_{ij}} b\left(k_i(w)\right) \mathop {}\!\mathrm{d}w \le \int _0^{w_{ij}} 2^{-k_i(w)} \left(1-\gamma \right)^{\max \lbrace k_i(w)-1, 0\rbrace } \mathop {}\!\mathrm{d}w \text{.} \end{equation*}
We will ensure the inequality locally at every weight level; thus, it suffices to have
\begin{equation} \forall k \ge 0 \quad : \quad \sum _{\ell = k}^\infty a(\ell) + \kappa \cdot b(k) \le 2^{-k} \left(1-\gamma \right)^{\max \lbrace k-1, 0\rbrace } \text{.} \end{equation}
(10)
Randomized Rounds. Now, suppose that j is matched with candidates \(i_1, i_2\) in a randomized round. We show that the increment in \(\overline{{\rm\small P}}\) due to \(i_1\) is at least the increase in the \(\alpha _{i_1}(w)\)s plus its contribution to \(\beta _j\) (i.e., \(\Delta _{i_1}^R \beta _j\)). This also holds for \(i_2\) by symmetry; together, they prove reverse weak duality.
Let \(w_1\) be the edge weight of \(i_1\) in this round, and let \(w_1^{\prime }\) be its edge weight the last time it was chosen in a randomized round. Set \(w_1^{\prime } = 0\) if this has not happened. To simplify notation, we write i for \(i_1\) below. Then, \(w_1\) and \(w_1^{\prime }\) partition the weight levels \(w \gt 0\) into three subsets corresponding to the three cases for incrementing the dual variables \(\alpha _i(w)\) in a randomized round, as in Equation (9).
The first case is when \(0 \lt w \le w_1\), and either \(w \le w_1^{\prime }\) or \(k_i(w) = 0\). By Lemma 4.4, the increase in \(\overline{{\rm\small P}}\) due to vertex i at weight level w is at least
\begin{equation*} {\left\lbrace \begin{array}{ll} \frac{1}{2} & \text{if } k_{i}(w) = 0 ; \\ 2^{-k_{i}(w)-1} \left(1 - \gamma \right)^{k_{i}(w)-1} \left(1 + \gamma \right) & \text{if } k_{i}(w) \ge 1 \textrm { and } w \le \min \lbrace w_1, w_1^{\prime }\rbrace . \end{array}\right.} \end{equation*}
By the first case of Equation (9), the increase in \(\alpha _{i}(w)\) is \(a(k_{i}(w))\). Finally, the contribution to the first term of \(\beta _j = \Delta _{i}^R \beta _j + \Delta _{i_2}^R \beta _j\), at weight level w, in Equation (5) is \(b(k_{i}(w))\). Hence, it suffices to ensure that
\begin{equation} a(0) + b(0) \le \frac{1}{2} \quad \text{and}\quad \forall k \ge 1: a(k) + b(k) \le 2^{-k-1} \left(1 - \gamma \right)^{k-1} \left(1 + \gamma \right). \end{equation}
(11)
The second case is when \(w_1^{\prime } \lt w \le w_1\) and \(k_{i}(w) \ge 1\). By Lemma 4.4, the increment in \(\overline{{\rm\small P}}\) due to i at weight level w is at least \(2^{-k_{i}(w)-1} (1 - \gamma)^{k_{i}(w)-1}\). By the second case of Equation (9), the increase in \(\alpha _{i}(w)\) is \(a(k_{i}(w)) - 2^{-k_{i}(w)-1} (1 - \gamma)^{k_{i}(w)-1} \gamma\). Finally, the contribution to the first term of \(\beta _j\), at weight level w, is \(b(k_{i}(w))\). Hence, we need
\begin{equation*} a\left(k_{i}(w)\right) - 2^{-k_{i}(w)-1} \left(1 - \gamma \right)^{k_{i}(w)-1} \gamma + b\left(k_{i}(w)\right) \le 2^{-k_{i}(w)-1} \left(1 - \gamma \right)^{k_{i}(w)-1}. \end{equation*}
Rearranging the second term to the RHS gives us the same conditions as the second part of Equation (11).
The third case is when \(w \gt w_1\) and \(k_{i}(w) \ge 1\). The increment in \(\overline{{\rm\small P}}\) due to i at weight level w is 0. By the last case of Equation (9), the increase in \(\alpha _{i}(w)\) is \(2^{-k_{i}(w)-1} (1 - \gamma)^{k_{i}(w)-1} \gamma\). The negative contribution from the second term of \(\beta _j\), at weight level w, is \(\frac{1}{2} \sum _{0 \le \ell \lt k_{i}(w)} a(\ell)\). Hence, we need
\begin{equation*} 2^{-k_{i}(w)-1} \left(1 - \gamma \right)^{k_{i}(w)-1} \gamma - \frac{1}{2} \sum _{0 \le \ell \lt k_{i}(w)} a(\ell) \le 0 \text{.} \end{equation*}
The first term is decreasing in \(k_{i}(w)\) and the second is increasing (in absolute value). Thus, it suffices to consider \(k_{i}(w) = 1\):
\begin{equation} a(0) \ge \frac{\gamma }{2} \text{.} \end{equation}
(12)

4.3 Online Primal-Dual Analysis: Approximate Dual Feasibility

This subsection derives a set of conditions that are sufficient for approximate dual feasibility, that is, Equation (3). Start by fixing any \(i \in L\) and any \(j \in R\), as well as the values of the \(k_i(w)\)s when j arrives.
Boundary Condition at the Limit. First, it may be the case that \(k_i(w) = \infty\) for all \(0 \lt w \le w_{ij}\) and j is unmatched. This means that \(\beta _j = 0\) in this round. Thus, the contribution from the \(\alpha _i(w)\)s alone must ensure approximate dual feasibility. To do so, we will ensure that the value of \(\alpha _i(w)\) is at least \(\Gamma\) whenever \(k_i(w) = \infty\). By the invariant in Equation (4), it suffices to have
\begin{equation} \sum _{\ell = 0}^\infty a(\ell) \ge \Gamma \text{.} \end{equation}
(13)
Next, we consider five different cases that depend on whether the round of j is randomized, deterministic, or unmatched, and if i is chosen as a candidate. We first analyze the cases in which j is in a randomized round. Then, we show that the other cases require only weaker conditions.
Case 1: Round of j is randomized, i is not chosen. By definition, \(\beta _j = \Delta _{i_1}^R \beta _j + \Delta _{i_2}^R \beta _j\). Since i is not chosen, both terms on the RHS are at least \(\Delta _i^R \beta _j\). Using the definition of \(\Delta _i^R \beta _j\) in Equation (5) and lower bounding \(\alpha _i(w)\) by Equation (4), approximate dual feasibility in Equation (3) reduces to
\begin{equation*} \int _0^{w_{ij}} \sum _{0 \le \ell \lt k_i(w)} a(\ell) \mathop {}\!\mathrm{d}w + 2 \int _0^{w_{ij}} b\left(k_i(w)\right) \mathop {}\!\mathrm{d}w \ge \Gamma \cdot w_{ij} \text{.} \end{equation*}
We will again ensure this inequality at every weight level. Therefore, it suffices to have
\begin{equation} \forall k \ge 0 \quad : \qquad \sum _{0 \le \ell \lt k} a(\ell) + 2 \cdot b(k) \ge \Gamma \text{.} \end{equation}
(14)
Case 2: Round of j is randomized, i is chosen. By symmetry, suppose without loss of generality that \(i \leftarrow i_1\) and \(i_2\) is the other candidate. By definition, \(\beta _j = \Delta _{i}^R \beta _j + \Delta _{i_2}^R \beta _j\). Next, we derive a lower bound only in terms of \(\Delta _i^R \beta _j\). Since the algorithm does not choose a deterministic round with i alone, we have that \(\Delta _{i}^R \beta _j + \Delta _{i_2}^R \beta _j \ge \Delta _i^D \beta _j\). Further, we have that \(\Delta _i^D \beta _j = \kappa \cdot \Delta _i^R \beta _j\) by Equation (6). Combining these, we have that \(\beta _j \ge \kappa \cdot \Delta _i^R \beta _j\). Finally, by the definition of \(\Delta _i^R \beta _j\) in Equation (5), \(\beta _j\) is at least
\begin{equation*} \kappa \cdot \left(\int _0^{w_{ij}} b\big (k_i(w)\big) \mathop {}\!\mathrm{d}w - \frac{1}{2} \int _{w_{ij}}^\infty \sum _{0 \le \ell \lt k_i(w)} a(\ell) \mathop {}\!\mathrm{d}w \right) \text{.} \end{equation*}
Lower bounding the \(\alpha _i(w)\)s is more subtle. Recall that \(k_i(w)\) denotes the value at the beginning of the round when j arrives. Thus, the value of \(k_i(w)\) increases by 1 for any weight level \(0 \lt w \le w_{ij}\) and stays the same for any other weight level \(w \gt w_{ij}\). Therefore, the contribution of the \(\alpha _i(w)\)s to approximate dual feasibility is at least
\begin{equation*} \int _0^{w_{ij}} \sum _{0 \le \ell \le k_i(w)} a(\ell) \mathop {}\!\mathrm{d}w + \int _{w_{ij}}^\infty \sum _{0 \le \ell \lt k_i(w)} a(\ell) \mathop {}\!\mathrm{d}w \text{.} \end{equation*}
Finally, since \(\kappa \lt 2\), the net contribution from weight levels \(w \gt w_{ij}\) is nonnegative; thus, we can drop them. Then, approximate dual feasibility as in Equation (3) becomes
\begin{equation*} \int _0^{w_{ij}} \left(\sum _{0 \le \ell \le k_i(w)} a(\ell) + \kappa \cdot b\left(k_i(w)\right) \right) \mathop {}\!\mathrm{d}w \ge \Gamma \cdot w_{ij} \text{.} \end{equation*}
Thus, it suffices to ensure the inequality locally at every weight level:
\begin{equation} \forall k \ge 0 \quad : \qquad \sum _{0 \le \ell \le k} a(\ell) + \kappa \cdot b(k) \ge \Gamma \text{.} \end{equation}
(15)
There are two differences between Equations (14) and (15). First, the summation above includes \(\ell = k\). We can do this because i is one of the two candidates and, therefore, \(k_i(w)\) increases by 1 in the round of j for any weight level \(w \le w_{ij}\). Second, the \(\kappa\) coefficient for the second term is smaller.
Case 3: Round of j is deterministic, i is not chosen. By definition, \(\beta _j = \Delta _{i^*}^D \beta _j\). Next, we derive a lower bound in terms of \(\Delta _i^R \beta _j\). Since the algorithm does not choose a randomized round with i and \(i^*\) as the two candidates, we have that \(\Delta _{i^*}^D \beta _j \gt \Delta _{i^*}^R \beta _j + \Delta _i^R \beta _j\). By Equation (6) and \(\kappa \lt 2\), we have that \(\Delta _{i^*}^R \beta _j \gt \frac{1}{2} \cdot \Delta _{i^*}^D \beta _j\). Here, we use the fact that \(\Delta _{i^*}^D \beta _j \ge 0\), because \(i^*\) is chosen in a deterministic round. Putting this together gives us \(\beta _j = \Delta _{i^*}^D \beta _j \gt 2 \cdot \Delta _i^R \beta _j\), which is identical to the lower bound in the first case. Therefore, approximate dual feasibility is guaranteed by Equation (14).
Case 4: Round of j is deterministic, i is chosen. For any \(0 \lt w \le w_{ij}\), we have that \(k_i(w) = \infty\) after this round. Therefore, approximate dual feasibility follows from the contribution of the \(\alpha _i(w)\)s alone due to the invariant in Equation (4) and the boundary condition in Equation (13).
Case 5: Round of j is unmatched. By definition, \(\beta _j = 0\). Moreover, \(\Delta _i^D \beta _j \lt 0\) since the algorithm chooses to leave j unmatched, which further implies that \(\Delta _i^R \beta _j \lt 0\) by Equation (6). Therefore, we have that \(\beta _j \ge 2 \cdot \Delta _i^R \beta _j\), which is identical to the lower bound in the first case. Thus, approximate dual feasibility is guaranteed by Equation (14).

4.4 Optimizing the Gain-Sharing Parameters

To optimize the competitive ratio \(\Gamma\) in the above online primal-dual analysis, it remains to solve for the gain-sharing parameters \(a(k)\) and \(b(k)\) using the following LP:
\begin{align*} \textrm {maximize} \quad & \Gamma \\ \textrm {subject to} \quad & \text{Equations (10)-(15)} \end{align*}
We obtain a lower bound on the competitive ratio by solving a more restricted LP, which is finite. In particular, we set \(a(k) = b(k) = 0\) for all \(k \gt k_{\max }\) for some sufficiently large integer \(k_{\max }\) so that it becomes
\begin{align*} \textrm {maximize} \quad & \Gamma \\ \textrm {subject to} \quad & \sum _{k \le \ell \le k_{\max }} a(\ell) + \kappa \cdot b(k) \le 2^{-k} \left(1 - \gamma \right)^{\max \lbrace k-1,0\rbrace } && \forall ~0 \le k \le k_{\max }\\ & a(0) + b(0) \le \frac{1}{2} \\ & a(k) + b(k) \le 2^{-k-1} \left(1 - \gamma \right)^{k-1} \left(1 + \gamma \right) && \forall ~1 \le k \le k_{\max }\\ & a(0) \ge \frac{\gamma }{2} \\ & \sum _{0 \le \ell \le k_{\max }} a(\ell) \ge \Gamma \\ & \sum _{0 \le \ell \lt k} a(\ell) + 2 \cdot b(k) \ge \Gamma && \forall ~0 \le k \le k_{\max }\\ & \sum _{0 \le \ell \le k} a(\ell) + \kappa \cdot b(k) \ge \Gamma && \forall ~0 \le k \le k_{\max }\\ & a(k), b(k) \ge 0 && \forall ~0 \le k \le k_{\max } \end{align*}
We present an approximate solution to the finite LP in Table 1(a) with \(\gamma = 1/16\), \(\kappa = 3/2\), and \(k_{\max }= 8\), which gives \(\Gamma \gt 0.505\). We also tried different values of \(\kappa = 1 + \ell /16\), for \(0 \le \ell \le 16\). If \(\kappa = 1\) or \(\kappa = 2\), then \(\Gamma = 0.5\); if \(\kappa = 1 + 15/16\), then \(\Gamma \approx 0.5026\); for all other values of \(\kappa\) that are multiples of \(1/16\), we have \(\Gamma \gt 0.505\). Hence, the analysis is robust to the choice of \(\kappa\) so long as it is neither too close to 1 nor to 2. Furthermore, even for \(k_{\max }\) as small as 3 (and \(\gamma = \frac{13\sqrt {13}-35}{108}\)), we get a competitive ratio \(\Gamma \gt 0.504\), improving upon greedy. Table 1(b) presents an approximately optimal solution under the same setting except that we use the larger \(\gamma = \frac{13\sqrt {13}-35}{108} \gt 0.1099\) from Theorem 3.3, which leads to the improved competitive ratio \(\Gamma \gt 0.5086\).
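To make the factor-revealing LP concrete, the following is a minimal sketch (our code, not the authors'; the solver choice and variable layout are ours) that builds the finite LP displayed above and solves it with scipy.optimize.linprog. With \(\gamma = \frac{13\sqrt{13}-35}{108}\), \(\kappa = 3/2\), and \(k_{\max} = 8\), it should reproduce a value of \(\Gamma\) above 0.5086.

```python
# A sketch of the finite factor-revealing LP; variables are a(0..k_max),
# b(0..k_max), and Gamma, packed into one vector in that order.
import numpy as np
from scipy.optimize import linprog

def solve_edge_weighted_lp(gamma, kappa, k_max):
    n = k_max + 1
    G = 2 * n                        # index of Gamma in the variable vector
    rows, rhs = [], []

    def leq(r, bound):               # record one constraint r . x <= bound
        rows.append(r); rhs.append(bound)

    for k in range(n):               # sum_{l>=k} a(l) + kappa b(k) <= 2^-k (1-gamma)^max(k-1,0)
        r = np.zeros(G + 1); r[k:n] = 1.0; r[n + k] = kappa
        leq(r, 2.0 ** -k * (1 - gamma) ** max(k - 1, 0))
    r = np.zeros(G + 1); r[0] = r[n] = 1.0; leq(r, 0.5)          # a(0) + b(0) <= 1/2
    for k in range(1, n):            # a(k) + b(k) <= 2^{-k-1} (1-gamma)^{k-1} (1+gamma)
        r = np.zeros(G + 1); r[k] = r[n + k] = 1.0
        leq(r, 2.0 ** (-k - 1) * (1 - gamma) ** (k - 1) * (1 + gamma))
    r = np.zeros(G + 1); r[0] = -1.0; leq(r, -gamma / 2)         # a(0) >= gamma/2
    r = np.zeros(G + 1); r[:n] = -1.0; r[G] = 1.0; leq(r, 0.0)   # Gamma <= sum_l a(l)
    for k in range(n):               # Gamma <= sum_{l<k} a(l) + 2 b(k)
        r = np.zeros(G + 1); r[:k] = -1.0; r[n + k] = -2.0; r[G] = 1.0; leq(r, 0.0)
    for k in range(n):               # Gamma <= sum_{l<=k} a(l) + kappa b(k)
        r = np.zeros(G + 1); r[:k + 1] = -1.0; r[n + k] = -kappa; r[G] = 1.0; leq(r, 0.0)

    c = np.zeros(G + 1); c[G] = -1.0                             # maximize Gamma
    res = linprog(c, A_ub=np.vstack(rows), b_ub=rhs, bounds=[(0, None)] * (G + 1))
    return res.x[G]

gamma = (13 * 13 ** 0.5 - 35) / 108          # the OCS guarantee from Theorem 3.3
print(solve_edge_weighted_lp(gamma, kappa=1.5, k_max=8))   # expect a value above 0.5086
```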

5 Online Correlated Selection: In Detail

This section provides the formal description and analysis of the OCS used in Section 4. Section 5.1 introduces the basics of OCS and constructs a \(1/16\)-OCS, substantiating the sketch in Section 3. Section 5.2 then shows how to improve the design and analysis of the OCS to prove Theorem 3.3.

5.1 Warmup: Constructing a 1/16-OCS

Algorithm 2 presents the \(1/16\)-OCS. It maintains a state variable \(\tau _i\) for each element i. If the state \(\tau _i\) equals \(\textsf{selected}\) or \(\textsf{not selected}\), it reflects the selection in the last pair involving i and indicates that this information can be used in the next pair involving i. If the state \(\tau _i\) is \(\textsf{unknown}\), it means that the past selection result of element i cannot be used to determine the selections in future pairs.
For each pair of elements \(i_1\) and \(i_2\) in the sequence, the OCS first decides uniformly at random whether this pair is a sender or a receiver. If it is a sender, it uses a fresh random bit to select \(i_\ell\), \(\ell \in \lbrace 1, 2\rbrace\), for this pair. Then, it draws \(m \in \lbrace 1, 2\rbrace\) uniformly at random, sets \(\tau _{i_m}\) to reflect the selection in this round, and sets \(\tau _{i_{-m}}\) to \(\textsf{unknown}\), where \(-m\) is an abbreviation for \(3 - m\). That is, the OCS forwards the random selection in this round to subsequent rounds for only one of the two elements in the current pair, chosen uniformly at random.
If it is a receiver, on the other hand, the OCS seeks to use the previous selection result of the elements to determine its choice of \(i_\ell\). First, it draws \(m \in \lbrace 1, 2\rbrace\) uniformly at random and checks the state variable of \(i_m\). To achieve negative correlation, the OCS makes the opposite selection in this round whenever possible. If the state is \(\textsf{selected}\), indicating that \(i_m\) is selected in the last pair involving it, the OCS selects \(i_{-m}\) this time, and vice versa. If the state variable equals \(\textsf{unknown}\), the OCS uses a fresh random bit to select \(i_\ell\). In either case, it resets the states of \(i_1\) and \(i_2\) to \(\textsf{unknown}\).
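To make the procedure concrete, here is a short Python sketch of the warmup OCS as described above. This is our illustration, not the authors' implementation; Algorithm 2 in the article is the authoritative definition, and the function and state names are ours.

```python
# A sketch of the warmup 1/16-OCS: `pairs` is a list of 2-tuples of hashable
# ground elements; the function returns the selected element of each pair.
import random

SELECTED, NOT_SELECTED, UNKNOWN = "selected", "not selected", "unknown"

def warmup_ocs(pairs):
    state = {}                                   # tau_i; missing keys mean UNKNOWN
    selections = []
    for i1, i2 in pairs:
        pair = (i1, i2)
        if random.random() < 0.5:                # sender
            sel = random.choice(pair)            # fresh random bit
            m = random.randrange(2)              # forward the outcome for one element
            state[pair[m]] = SELECTED if pair[m] == sel else NOT_SELECTED
            state[pair[1 - m]] = UNKNOWN
        else:                                    # receiver
            m = random.randrange(2)
            tau = state.get(pair[m], UNKNOWN)
            if tau == SELECTED:                  # make the opposite choice
                sel = pair[1 - m]
            elif tau == NOT_SELECTED:
                sel = pair[m]
            else:
                sel = random.choice(pair)        # fresh random bit
            state[i1] = state[i2] = UNKNOWN      # receivers never forward
        selections.append(sel)
    return selections
```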
In fact, we will show a result stronger than what the definition of a \(1/16\)-OCS requires.
Lemma 5.1.
For any fixed sequence of pairs of elements, any fixed element i, and any integer \(k \ge 0\), Algorithm 2 ensures that after appearing in a collection of k consecutive pairs, i is selected at least once with probability at least \(1 - 2^{-k} \cdot f_k\), where \(f_k\) is defined recursively as
\begin{equation} f_k = {\left\lbrace \begin{array}{ll} 1 & \text{if $k = 0, 1$;} \\ f_{k-1} - \frac{1}{16} f_{k-2} & \text{if $k \ge 2$.} \end{array}\right.} \end{equation}
(16)
Lemma 5.1 implies that Algorithm 2 is a \(1/16\)-semi-OCS by considering the subsequence of all pairs involving element i because
\begin{equation*} f_k = f_{k-1} - \frac{1}{16} f_{k-2} \le \left(1-\frac{1}{16} \right) f_{k-1} \le \left(1 - \frac{1}{16} \right)^{k-1} f_1 = \left(1 - \frac{1}{16} \right)^{k-1} \text{.} \end{equation*}
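As a quick numerical sanity check (our code, not from the paper), one can tabulate the recurrence and the resulting upper bound \(2^{-k} \cdot f_k\) on the probability that i is never selected in k consecutive pairs:

```python
def never_selected_bounds(k_max=8):
    """Tabulate f_k from Equation (16) and the bound 2^-k * f_k on the
    probability that an element in k consecutive pairs is never selected."""
    f = [1.0, 1.0]
    for k in range(2, k_max + 1):
        f.append(f[k - 1] - f[k - 2] / 16)
    return [(k, f[k], 2.0 ** -k * f[k]) for k in range(k_max + 1)]

for k, fk, bound in never_selected_bounds():
    print(f"k={k}  f_k={fk:.6f}  Pr[never selected] <= {bound:.6f}")
```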
Let \(P^1 = \lbrace i_1^1, i_2^1\rbrace , P^2 = \lbrace i_1^2, i_2^2\rbrace , \dots , P^n = \lbrace i_1^n, i_2^n\rbrace\) be the sequence of pairs of ground elements. We start with a graph-theoretic interpretation of the OCS algorithm.
Ex-ante Dependence Graph. Consider a graph \(G^{ { ex-ante}} = (V, E^{ { ex-ante}})\) as follows, which we shall refer to as the ex-ante dependence graph. To make a distinction with the vertices and edges in the online matching problem, we shall refer to the vertices and edges in the dependence graph as nodes and arcs, respectively. Let there be a node for each pair of elements in the collection. We will refer to them as \(1 \le j \le n\), that is,
\begin{equation*} V = \big \lbrace j \in \mathbb {Z} : 1 \le j \le n \big \rbrace \text{.} \end{equation*}
Further, for any fixed element i in the ground set, let there be a directed arc from \(j_1\) to \(j_2\) for any two consecutive pairs \(j_1 \lt j_2\) involving i, that is,
\begin{equation*} E^{ { ex-ante}} = \big \lbrace (j_1, j_2)_i : j_1 \lt j_2 \text{ s.t. } i \in P^{j_1}, i \in P^{j_2}, \text{ and } \forall j_1 \lt t \lt j_2, i \notin P^t \big \rbrace \text{.} \end{equation*}
The subscript i helps to distinguish parallel arcs when the pairs \(j_1\) and \(j_2\) have the same two elements. See Figure 2(a) for an illustrative example of the ex-ante dependence graph.
Fig. 2.
Fig. 2. Example of dependence graphs with five ground elements and a sequence of seven pairs.
Each arc in the ex-ante dependence graph represents two pairs in the sequence in which the OCS could use the same random bit to select oppositely. By construction, there are at most two outgoing arcs and at most two incoming arcs for each node.
In particular, consider any arc \((j_1, j_2)_i\) in the ex-ante dependence graph, with i being the common element. If the randomness used by the OCS satisfies (1) pair \(j_1\) is a sender, (2) \(i_m = i\) in pair \(j_1\), (3) pair \(j_2\) is a receiver, and (4) \(i_m = i\) in pair \(j_2\), the selections in the two pairs would be perfectly negatively correlated in the sense that i is selected in exactly one of the two pairs. Each of these four events happens independently with probability \(1/2\). Hence, we achieve this perfect negative correlation with probability \(1/16\).
Ex-post Dependence Graph. The ex-post dependence graph \(G^{ { ex-post}} = (V, E^{ { ex-post}})\) is a subgraph of the ex-ante dependence graph that keeps the arcs corresponding to pairs that are perfectly negatively correlated given the realization of whether each step is a sender or a receiver and the value of m therein. Equivalently, the ex-post dependence graph is realized as follows. Over the randomness with which the OCS decides whether each step is a sender or a receiver, and the values of m, each node in the ex-ante dependence graph effectively picks at most one of its incident arcs, each with probability \(1/4\). An arc is realized in the ex-post graph if both incident nodes choose it. With this interpretation, we get that the ex-post graph is a matching. The OCS may be viewed as a randomized online algorithm that picks a matching in the ex-ante graph such that each arc in the ex-ante graph is chosen with probability lower bounded by a constant. See Figure 2(b) for an example.
Proof of Lemma 5.1.
Let \(j_1 \lt j_2 \lt \dots\) be the pairs involving a ground element i. We will use the element a and \(k = 4\) in Figure 2(c) as a running example, where \(j_1 = 1, j_2 = 3, j_3 = 5, j_4 = 7\) and the relevant arcs in the dependence graphs are \((1, 3)_a\), \((3, 5)_a\), \((3, 7)_c\), and \((5, 7)_a\).
If at least one of the arcs among \(j_1 \lt j_2 \lt \dots \lt j_k\) is realized in the ex-post dependence graph, element i must be selected at least once. This is because the randomness (related to the choice of \(\ell\) in the OCS) is perfectly negatively correlated in the two incident nodes of the arc and, thus, i is selected exactly once in these two steps. Importantly, this is true even if the arc is not due to element i. For example, given that the arc \((3, 7)_c\) is realized in Figure 2(c), element a must be selected exactly once in steps 3 and 7.
On the other hand, if none of these arcs is realized, then the random bits used in the k steps \(j_1 \lt j_2 \lt \dots \lt j_k\) are independent. For example, consider element a and \(k = 3\) in Figure 2(c). Element a is selected independently with probability \(1/2\) in steps \(j_1 = 1\), \(j_2 = 3\), and \(j_3 = 5\) given that neither \((1, 3)_a\) nor \((3, 5)_a\) is realized.
Importantly, even if some of these pairs are receivers in that the selections therein are based on the random bits realized earlier by some senders, from i’s point of view, they are still independent of the randomness in the other rounds that i is involved in. For example, from c’s point of view in Figure 2(c), even though the selection in step 2 is determined by the selection in step 1, it is independent of the selections in steps 3 and 7, which involve c.
Putting this together, the probability that i is never selected in steps \(j_1 \lt j_2 \lt \dots \lt j_k\) is equal to the probability that (1) none of the arcs among these steps is realized, times the probability that (2) none of the k independent random selections picks i. This follows from the law of total probability. The latter quantity equals \(2^{-k}\); thus, it remains to analyze the former. We shall upper bound it by the probability that none of the arcs \((j_1, j_2)_i, (j_2, j_3)_i, \dots , (j_{k-1}, j_k)_i\) is realized. We shall omit the subscript i in the rest of the proof for brevity. Denote this event as \(F_k\) and its probability as \(f_k\).
Trivially, we have that \(f_0 = f_1 = 1\). To prove that the recurrence in Equation (16) governs \(f_k\), further divide event \(F_k\) into two subevents. Let \(A_k\) be the event that none of the arcs \((j_1, j_2), (j_2, j_3), \dots , (j_{k-1}, j_k)\) is realized, and node \(j_k\) picks arc \((j_k, j_{k+1})\) in realizing the ex-post dependence graph. Let \(B_k\) be the event that none of the arcs is realized and node \(j_k\) does not pick arc \((j_k, j_{k+1})\). Let \(a_k\) and \(b_k\) be the probability of \(A_k\) and \(B_k\), respectively. We have that \(A_k\) and \(B_k\) form a partition of \(F_k\) and, thus,
\begin{equation*} f_k = a_k + b_k \text{.} \end{equation*}
If node \(j_k\) picks arc \((j_k, j_{k+1})\), which happens with probability \(1/4\), arc \((j_{k-1}, j_k)\) is not realized by definition regardless of the remaining randomness. Therefore, conditioned on the choice of \(j_k\), subevent \(A_k\) happens if and only if the choices made by steps \(j_1, j_2, \ldots , j_{k-1}\) are such that none of \((j_1, j_2), \dots , (j_{k-2}, j_{k-1})\) is realized, that is, when \(F_{k-1}\) happens. Hence,
\begin{equation*} a_k = \frac{1}{4} f_{k-1} \text{.} \end{equation*}
On the other hand, if pair \(j_k\) does not pick \((j_k, j_{k+1})\), there are two possibilities. The first case is when \(j_k\) picks \((j_{k-1}, j_k)\), which happens with probability \(1/4\). In this case, the choices made by \(j_1, \dots , j_{k-1}\) must be such that none of \((j_1, j_2), \dots , (j_{k-2}, j_{k-1})\) is realized, and \(j_{k-1}\) does not pick \((j_{k-1}, j_k)\), that is, \(B_{k-1}\) happens. The second case is when \(j_k\) picks neither \((j_{k-1}, j_k)\) nor \((j_k, j_{k+1})\), which happens with probability \(1/2\). In this case, the choices made by \(j_1, \dots , j_{k-1}\) must be such that none of \((j_1, j_2), \dots , (j_{k-2}, j_{k-1})\) is realized, that is, \(F_{k-1}\) happens. Putting this together, we have that
\begin{equation*} b_k = \frac{1}{4} b_{k-1} + \frac{1}{2} f_{k-1} \text{.} \end{equation*}
Eliminating the \(a_k\)s and \(b_k\)s with the above three equations, that is,
\begin{equation*} f_k = a_k + b_k = \tfrac{3}{4} f_{k-1} + \tfrac{1}{4} b_{k-1} = \tfrac{3}{4} f_{k-1} + \tfrac{1}{4} \left(f_{k-1} - \tfrac{1}{4} f_{k-2}\right) = f_{k-1} - \tfrac{1}{16} f_{k-2} \text{,} \end{equation*}
we get the recurrence in Equation (16).□
The same analysis generalizes to prove a stronger result, which implies a \(1/16\)-OCS.
Lemma 5.2.
For any fixed sequence of pairs of elements, any fixed element i, and any disjoint collections of \(k_1, k_2, \dots , k_m\) consecutive pairs involving i, Algorithm 2 ensures that i is selected in at least one of these pairs with probability at least
\begin{equation*} 1 - \prod _{\ell =1}^m 2^{-k_\ell } \cdot f_{k_\ell } \text{.} \end{equation*}
Proof.
Let \(j_1^\ell \lt j_2^\ell \lt \dots \lt j_{k_\ell }^\ell\) be the \(\ell\)-th subsequence of consecutive pairs involving element i for any \(1 \le \ell \le m\). The probability that i is never selected is equal to (1) the probability that none of the arcs among the steps in these collections is realized, times (2) the probability that all \(\sum _{\ell =1}^m k_\ell\) random bits are against i. The latter is \(\prod _{\ell =1}^m 2^{-k_\ell }\). We upper bound the former with the probability that for any \(1 \le \ell \le m\), none of the arcs \((j_1^\ell , j_2^\ell), \dots , (j_{k_\ell -1}^\ell , j_{k_\ell }^\ell)\) is realized. Finally, observe that these events are independent for different collections \(\ell\) because the event concerning each collection relies only on the randomness of the nodes in the collection. Hence, the former probability is at most \(\prod _{\ell =1}^m f_{k_\ell }\).□

5.2 Optimizing the OCS: Proof of Theorem 3.3

Similar to the warmup algorithm, we will realize the ex-post dependence graph by letting each node be either a sender or a receiver independently and randomly. The probability of letting a node be a sender, denoted as p, will be optimized later.
A sender uses a fresh random bit to select an element from the corresponding pair. Further, it randomly picks one of its two out-arcs in the ex-ante graph and sends its selection along that arc. Although the out-neighbors and out-arcs have yet to arrive, we can refer to the two out-arcs as the one due to the first element and the one due to the second element of the current pair, respectively. This is identical to the warmup case.
A receiver, on the other hand, adapts to the information it receives and makes the opposite selection. The improved OCS proactively checks both in-arcs of a receiver; in contrast, the warmup algorithm checks only one randomly chosen in-arc. Concretely, check both in-arcs in the ex-ante graph to see whether any in-neighbor is a sender who picks the arc between them. If both in-neighbors are senders and both pick the corresponding arcs, choose one of the two arcs uniformly at random. Add the chosen arc to the ex-post dependence graph. Suppose that a receiver j receives the selection made by a sender \(j^{\prime }\), sent along arc \((j^{\prime }, j)_i\). Then, select i in round j if it is not selected in round \(j^{\prime }\), and vice versa. See Algorithm 3 for a formal definition of the improved OCS.
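The following Python sketch illustrates one possible implementation of this improved OCS. It is our code; Algorithm 3 is the authoritative definition, and the bookkeeping via a per-element message table is our design choice.

```python
# A sketch of the improved OCS: each pair is a sender with probability p.
# `sent[i]` records whether the previous pair containing element i was a sender
# that picked its out-arc due to i, and if so whether that sender selected i.
import random

def improved_ocs(pairs, p=(5 - 13 ** 0.5) / 3):
    sent = {}                                    # i -> bool (did the sender select i?)
    selections = []
    for i1, i2 in pairs:
        pair = (i1, i2)
        if random.random() < p:                  # sender
            sel = random.choice(pair)            # fresh random bit
            ell = random.randrange(2)            # out-arc due to element pair[ell]
            sent.pop(pair[1 - ell], None)        # the other element forwards nothing
            sent[pair[ell]] = (sel == pair[ell])
        else:                                    # receiver: check both in-arcs
            msgs = [m for m in (0, 1) if pair[m] in sent]
            if msgs:
                m = random.choice(msgs)          # uniform tie-breaking if both send
                sel = pair[1 - m] if sent[pair[m]] else pair[m]   # oppose the sender
            else:
                sel = random.choice(pair)        # fresh random bit
            sent.pop(i1, None); sent.pop(i2, None)
        selections.append(sel)
    return selections
```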
Lemma 5.3.
For any fixed sequence of pairs of elements, any fixed element i, and any disjoint subsequences of \(k_1, k_2, \dots , k_m\) consecutive pairs involving i, Algorithm 3 ensures that i is selected in at least one of these pairs with probability at least
\begin{equation*} 1 - \prod _{\ell = 1}^m 2^{-k_\ell } \cdot g_{k_\ell }, \end{equation*}
where \(g_k\) is defined recursively as follows:
\begin{equation} g_k = {\left\lbrace \begin{array}{ll} 1 & \text{if $k = 0, 1$} ; \\ g_{k-1} - \frac{1}{8} p \left(1-p \right) \left(4 - p \right) \cdot g_{k-2} & \text{if $k \ge 2$}. \end{array}\right.} \end{equation}
(17)
Corollary 5.4.
Algorithm 3 is a \(\frac{1}{8} p (1-p) (4-p)\)-OCS.
To prove Theorem 3.3, let \(p = \frac{5-\sqrt {13}}{3}\) to maximize \(\frac{1}{8} p \big (1-p \big) \big (4-p \big) = \frac{13\sqrt {13}-35}{108} \gt 0.1099\).
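For completeness, this choice of p comes from a routine calculus step, which we include for the reader's convenience:
\begin{equation*} \frac{\mathrm{d}}{\mathrm{d}p}\, \frac{p(1-p)(4-p)}{8} = \frac{3p^2 - 10p + 4}{8} = 0 \quad \Longleftrightarrow \quad p = \frac{5 \pm \sqrt{13}}{3} \text{,} \end{equation*}
and only the root \(p = \frac{5-\sqrt{13}}{3} \approx 0.4648\) lies in \([0, 1]\); substituting it back gives \(\frac{1}{8} p (1-p) (4-p) = \frac{13\sqrt{13}-35}{108}\).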
Proof of Lemma 5.3. Let \(j^\ell _1 \lt j^\ell _2 \lt \dots \lt j^\ell _{k_\ell }\), \(1 \le \ell \le m\), be the subsequences of consecutive pairs that involve element i. The algorithm uses two kinds of independent random bits. The first kind is used to realize the ex-post dependence graph, that is, the random type of each pair, the random out-arc chosen by each sender, and the random in-neighbor of each receiver in the tie-breaking case. The second kind is the random selections by senders, and by receivers that fail to receive the selection of any sender. Importantly, the two kinds of randomness are independent.
Similar to the warmup case, we are interested in the event that there is no arc among these pairs in the ex-post dependence graph:
\begin{equation*} F = \big \lbrace \text{nodes } j^\ell _s, \text{ for } 1 \le \ell \le m \text{ and } 1 \le s \le k_\ell , \text{ are disjoint in } G^{ { ex-post}} \big \rbrace . \end{equation*}
If there is an arc between two pairs in the collections in the ex-post dependence graph, i is selected in exactly one of the two pairs. Otherwise, the selections in these pairs are independent. Hence, the probability that i is never selected is equal to the product of (1) the probability that the \(\sum _{\ell = 1}^m k_\ell\) nodes in the collections are disjoint in the ex-post dependence graph, and (2) the probability that none of the \(\sum _{\ell = 1}^m k_\ell\) independent random selections picks i. This follows from the law of total probability. The former quantity is \(\Pr (F)\), and the latter is equal to \(2^{-\sum _{\ell = 1}^m k_\ell }\). Putting this together, it equals
\begin{equation*} 2^{- \sum _{1 \le \ell \le m} k_\ell } \cdot \Pr (F). \end{equation*}
Therefore, it remains to show that
\begin{equation} \Pr (F) \le \prod _{\ell = 1}^m g_{k_\ell } \text{.} \end{equation}
(18)
Which arcs are we concerned about in the event F? Since these are subsequences of consecutive pairs involving element i, the arcs of the form \((j^\ell _s, j^\ell _{s+1})_i\) always exist in the ex-ante dependence graph. To characterize whether some of these arcs are realized in the ex-post graph, we need to further consider another set of arcs as follows.
For any \(1 \le \ell \le m\), consider the in-arcs of nodes \(j^\ell _1 \lt \dots \lt j^\ell _{k_\ell }\) in the ex-ante dependence graph due to the element other than i. Let them be \((\hat{j}^\ell _s, j^\ell _s)\) for \(1 \le s \le k_\ell\). We omit the subscript that denotes the common element in the two nodes, with the understanding that they are due to the element other than i in the round of \(j^\ell _s\). Then, an arc \((j^\ell _s, j^\ell _{s+1})_i\) is realized in the ex-post graph if:
(1)
Node \(j^\ell _s\) is a sender that picks arc \((j^\ell _s, j^\ell _{s+1})_i\);
(2)
Node \(j^\ell _{s+1}\) is a receiver;
(3)
Either node \(\hat{j}^\ell _{s+1}\) is a receiver, or it is a sender but does not choose arc \((\hat{j}^\ell _{s+1}, j^\ell _{s+1})\), or the tie-breaking by node \(j^\ell _{s+1}\) is in favor of \(j^\ell _s\).
Binding Case. First, suppose that all \(\hat{j}^\ell _s\)’s exist, and the \(j^\ell _s\)’s and \(\hat{j}^\ell _s\)’s are all distinct. This case is relatively easy to analyze because it suffices to consider arcs of the form \((j^\ell _s, j^\ell _{s+1})_i\); moreover, different subsequences of consecutive pairs depend on disjoint sets of random bits and, therefore, may be analyzed separately. This turns out to be the binding case of the analysis. We will analyze the binding case in Lemma 5.5 and show in Lemma 5.6 that this is the worst-case scenario that maximizes \(\Pr (F)\).
Lemma 5.5.
In the binding case, the probability of event F is:
\begin{equation*} \Pr (F) = \prod _{\ell = 1}^m g_{k_\ell } \text{.} \end{equation*}
Proof.
We start by formalizing the aforementioned implications of the assumption that all \(j^\ell _s\)s and \(\hat{j}^\ell _s\)s are distinct. First, two pairs in the collections are connected if and only if they are consecutive pairs in the same collection, for example, \(j^\ell _{s-1}\) and \(j^\ell _s\), and arc \((j^\ell _{s-1}, j^\ell _s)_i\) is realized. A pair \(j^\ell _s\) with \(s \gt 1\) cannot be the receiver of a sender other than \(j^\ell _{s-1}\) in the collections because the \(\hat{j}^\ell _s\)s are not in the collections by the assumption. Second, the realizations of these arcs in different collections are independent. The realization of arcs of the form \((j^\ell _{s-1}, j^\ell _s)_i\), for any fixed collection \(\ell\), depends only on the realization of the first kind of randomness related to nodes with superscript \(\ell\), that is, the \(j^\ell _s\)s and \(\hat{j}^\ell _s\)s.
Next, we focus on a fixed subsequence \(\ell\) and analyze the probability that no arc of the form \((j^\ell _s, j^\ell _{s+1})_i\), for \(1 \le s \lt k_\ell\), is realized. To simplify notation, we omit the superscripts and subscripts \(\ell\) and write \(j_1 \lt j_2 \lt \dots \lt j_k\) and \(\hat{j}_2, \hat{j}_3, \dots , \hat{j}_k\). Let \(G_k\) denote this event and \(g_k\) be its probability. Trivially, we have that \(g_0 = g_1 = 1\). It remains to show that \(g_k\) follows the recurrence in Equation (17).
We will do so by further considering an auxiliary subevent \(A_k\), which requires not only \(G_k\) to happen but also \(j_k\) to be a sender who picks the out-arc due to i. Let \(a_k\) denote its probability.
Auxiliary Event. If \(j_k\) is a sender who picks the out-arc due to i, which happens with probability \(p/2\), arc \((j_{k-1}, j_k)_i\) would not be realized regardless of the randomness of the other nodes in the collection. Therefore, under this condition, event \(A_k\) reduces to event \(G_{k-1}\).
\begin{equation*} a_k = \frac{p}{2} \cdot g_{k-1}. \end{equation*}
Main Event. If \(j_k\) is a sender, which happens with probability p, arc \((j_{k-1}, j_k)_i\) would not be realized regardless of the randomness of the other nodes in the collection. Therefore, under this condition, event \(G_k\) reduces to event \(G_{k-1}\). The contribution of this part to the probability of \(G_k\) is
\begin{equation*} p \cdot g_{k-1}. \end{equation*}
If \(j_k\) is a receiver (probability \(1-p\)) but \(\hat{j}_k\) is a sender who picks arc \((\hat{j}_k, j_k)\) (probability \(p/2\)), and the tie-breaking at \(j_k\) is in favor of \(\hat{j}_k\) (probability \(1/2\)), we still have that arc \((j_{k-1}, j_k)_i\) cannot be realized regardless of the randomness of the other nodes. The contribution of this part to the probability of \(G_k\) is
\begin{equation*} \frac{p(1-p)}{4} g_{k-1}. \end{equation*}
Otherwise, \(j_{k-1}\) must not be a sender who picks arc \((j_{k-1}, j_k)_i\), or else arc \((j_{k-1}, j_k)_i\) would be realized. Therefore, conditioned on being in this case, event \(G_k\) reduces to event \(G_{k-1} \setminus A_{k-1}\). The contribution of this part to the probability of \(G_k\) is
\begin{equation*} (1-p)\left(1-\frac{p}{4}\right) \left(g_{k-1} - a_{k-1} \right). \end{equation*}
Putting everything together, we have that
\begin{equation*} g_k = g_{k-1} - (1-p) \left(1-\frac{p}{4}\right) a_{k-1}. \end{equation*}
Eliminating the \(a_k\)s by combining the two equations, that is, substituting \(a_{k-1} = \frac{p}{2} \cdot g_{k-2}\) to obtain \(g_k = g_{k-1} - \frac{p}{2} (1-p) \left(1-\frac{p}{4}\right) g_{k-2} = g_{k-1} - \frac{1}{8} p (1-p)(4-p) \cdot g_{k-2}\), we get the recurrence in Equation (17).□
Lemma 5.6.
The probability of event F is maximized in the binding case.
Proof.
Here are the possible violations of the conditions of the binding case:
(1)
Some arc \((\hat{j}^\ell _s, j^\ell _s)\) may not exist, that is, the element other than i in pair \(j^\ell _s\) has its first appearance in pair \(j^\ell _s\).
(2)
There may be \(\ell , \ell ^{\prime }, s, s^{\prime }\) such that \(\hat{j}^\ell _s = j^{\ell ^{\prime }}_{s^{\prime }}\), that is, the element other than i in pair \(j^\ell _s\) is also an element in pair \(j^{\ell ^{\prime }}_{s^{\prime }}\), and in no other pairs in between.
(3)
There may be \(\ell , \ell ^{\prime }, s, s^{\prime }\) such that \(\hat{j}^\ell _s = \hat{j}^{\ell ^{\prime }}_{s^{\prime }}\).
We use a coupling argument to compare the probability of event F in a general case, potentially with some of these violations, with the probability in the binding case.
Type 1 Violation. Consider an instance almost identical to the one at hand except that we introduce a new node \(\hat{j}_s^\ell\) for such a violation. For example, let pair \(\hat{j}_s^\ell\) be at the beginning of the sequence, and let it contain the element other than i in pair \(j_s^\ell\) and a new dummy element that does not appear elsewhere. Further, couple the two instances by letting the common nodes realize identical random bits and by letting the new node draw fresh random bits. We claim that whenever event F happens in the original instance, it also happens in the new instance. If arc \((\hat{j}_s^\ell , j_s^\ell)\) is not realized, the rest of the arcs are realized identically in the two cases. Otherwise, having arc \((\hat{j}_s^\ell , j_s^\ell)\) may preclude arc \((j_{s-1}^\ell , j_s^\ell)_i\) from being realized, making event F more likely to happen in the new instance.
Type 2 Violation. Consider an instance almost identical to the one at hand except that we introduce a new node \(\hat{j}_s^\ell \ne j_{s^{\prime }}^{\ell ^{\prime }}\) for such a violation. For example, let \(\hat{j}_s^\ell\) be a pair arriving after \(j_{s^{\prime }}^{\ell ^{\prime }}\) and before \(j_s^\ell\) that involves the element other than i in these two pairs and a new dummy element that does not appear elsewhere. Further, couple the two instances by letting the common nodes realize identical random bits and by letting the new nodes draw fresh random bits. We claim that whenever event F happens in the original instance, it also happens in the new instance. Since F happens in the original instance, arc \((j_{s^{\prime }}^{\ell ^{\prime }}, j_s^\ell)\) is not realized. If further arc \((\hat{j}_s^\ell , j_s^\ell)\) is not realized, the rest of the arcs are realized identically in the two cases. Otherwise, having arc \((\hat{j}_s^\ell , j_s^\ell)\) may preclude arc \((j_{s-1}^\ell , j_s^\ell)_i\) from being realized, making event F more likely to happen in the new instance.
Type 3 Violation. Consider an instance almost identical to the one at hand except that we split the shared node into two nodes \(\hat{j}_s^\ell \ne \hat{j}_{s^{\prime }}^{\ell ^{\prime }}\) for such a violation. For example, let \(\hat{j}_s^\ell\) be a pair arriving right before \(j_s^\ell\) that involves the element other than i in pair \(j_s^\ell\) and a new dummy element that does not appear elsewhere. To avoid confusion in the following discussion, let \(\hat{j}\) be the shared node in the type 3 violation in the original instance, and let \(\hat{j}_s^\ell \ne \hat{j}_{s^{\prime }}^{\ell ^{\prime }}\) be the corresponding nodes in the new instance. Further, couple the two instances by letting nodes other than \(\hat{j}\), \(\hat{j}_s^\ell\), and \(\hat{j}_{s^{\prime }}^{\ell ^{\prime }}\) realize identical random bits. To define the coupling for these three nodes, we need some notation. We say that node \(j_s^\ell\) needs help if node \(j_{s-1}^\ell\) is a sender who picks arc \((j_{s-1}^\ell , j_s^\ell)_i\) and if node \(j_s^\ell\) is a receiver who breaks the tie against \(j_{s-1}^\ell\). For F to happen in this case, the in-neighbor of \(j_s^\ell\) due to the other element (\(\hat{j}\) in the original instance and \(\hat{j}_s^\ell\) in the new one) must be a sender who picks the arc between them. Define needing help similarly for node \(j_{s^{\prime }}^{\ell ^{\prime }}\). If \(j_s^\ell\) needs help but \(j_{s^{\prime }}^{\ell ^{\prime }}\) does not, let \(\hat{j}_s^\ell\) realize the same random bits as \(\hat{j}\) and let \(\hat{j}_{s^{\prime }}^{\ell ^{\prime }}\) draw fresh random bits, and vice versa. Otherwise, that is, if neither or both need help, let all three nodes draw independent random bits. Then, when at most one needs help, event F happens in the original instance if and only if it happens in the new instance, since the realization of the relevant arcs is identical. If both need help, on the other hand, F cannot happen in the original instance because at least one of \((j_{s-1}^\ell , j_{s}^\ell)_i\) or \((j_{s^{\prime }-1}^{\ell ^{\prime }}, j_{s^{\prime }}^{\ell ^{\prime }})_i\) would be realized.□

6 Conclusion

This article presents an online primal-dual algorithm for the edge-weighted bipartite matching problem that is 0.5086-competitive, resolving a long-standing open problem in the study of online algorithms. In particular, this work merges and refines the results of Fahrbach and Zadimoghaddam [12] and Huang and Tao [23, 30] to give a simpler algorithm under the online primal-dual framework. Our work initiates the study of online correlated selection (OCS), a key algorithmic ingredient for quantifying the level of negative correlation in online assignment problems. This new technique has already found applications in related online matching problems. Subsequent to our article, Huang et al. [31] generalized our OCS to obtain the first online algorithm that breaks the \(1/2\) barrier in the general AdWords problem, and Huang et al. [27] and Tang et al. [48] applied OCS to online stochastic matching.
Using independent random bits to make each selection gives a 0-OCS (no negative correlation), while an imaginary 1-OCS with perfect negative correlation results in infeasible assignments. Therefore, we aim to design an online matching algorithm using partial negative correlation. We first construct a \(1/16\)-OCS, and then we optimize this subroutine to obtain a 0.1099-OCS. Designing a γ-OCS with the largest possible γ is an interesting open problem, as it directly improves the competitive ratio of the edge-weighted online bipartite matching algorithm in this article. Subsequent to our work, Gao et al. [19] improved the lower and upper bounds of γ to 0.167 and 0.25, respectively.
Even if a 1-OCS did exist, the best competitive ratio that can be achieved using this approach (without any deviations) is at most \(5/9\), as shown by Huang and Tao [30]. Hence, we need fundamentally new ideas to come closer to the optimal \(1-1/e\) ratio in the unweighted and vertex-weighted cases. One idea is to consider an OCS that allows for more than two candidates in each round, which we call multiway OCS. Subsequent to our work, Gao et al. [19] gave a multiway semi-OCS, and Blanc and Charikar [3] and Shin and An [47] made progress on multiway OCS. Blanc and Charikar [3] used their 6-way OCS to get the current best 0.536-competitive ratio for edge-weighted online bipartite matching. Designing optimal multiway OCS is another interesting open problem for future work.

Footnotes

1
The argument in the text addresses only deterministic algorithms. To rule out randomized algorithms, consider n instances, each with one advertiser and n impressions. In instance i, the first i impressions have weights \(1, W, W^2, \dots , W^{i-1}\), whereas the remaining impressions have zero weights. The probability that an online algorithm assigns impression i to the advertiser must be consistent for instances i to n; denote this by \(p_i\). Since \(\sum _{i=1}^n p_i \le 1\), we have that \(p_{i^*} \le 1/n\) for some \(i^*\). The algorithm is then at most \(1/n\)-competitive for instance \(i^*\).

A Unweighted Online Matching

This section considers the unweighted case and shows that the two-choice greedy algorithm is strictly better than \(1/2\)-competitive when its randomized rounds use the OCS. Throughout this section, we write \(g_k\) for the sequence defined by the recurrence in Equation (17) with \(p = \frac{5-\sqrt{13}}{3}\), which satisfies \(g_k \le (1-\gamma)^{\max \lbrace k-1, 0\rbrace }\) for \(\gamma \gt 0.1099\).
Theorem A.1.
The two-choice greedy algorithm with the randomized rounds that use a 0.1099-OCS is at least 0.508-competitive for unweighted online bipartite matching.
Proof.
In the unweighted case, it suffices to consider a single weight-level \(w = 1\). Thus, for each offline vertex i, we write \(k_i = k_i(1)\) for brevity. We will maintain \(\overline{x}_i = 1 - 2^{-k_i} \cdot g_{k_i}\) for each offline vertex i, which, according to Lemma 5.3, lower bounds the probability that i is matched. Correspondingly, we maintain the following lower bound on the primal objective:
\begin{equation*} \overline{{\rm\small P}} = \sum _{i \in L} \overline{x}_i\text{.} \end{equation*}
To prove the stated competitive ratio, it suffices to explain how to maintain a dual assignment such that (1) the dual objective equals the lower bound of the primal objective, that is, \({\rm\small D}= \overline{{\rm\small P}}\), and (2) it is approximately feasible up to a \(\Gamma\) factor, that is, \(\alpha _i + \beta _j \ge \Gamma\) for every edge \((i, j) \in E\).□

Dual Updates.

The dual updates are based on a solution to a finite version of the following LP. All of the solution values are presented in Table 2 at the end of this section. The constraints that follow are simpler than in the more general edge-weighted case, but the competitive ratio we achieve is almost the same.
Table 2.
k    \(g_k\)         \(a(k)\)        \(b(k)\)
0    1.00000000    0.24550678    0.25449322
1    1.00000000    0.14574204    0.13173982
2    0.89007253    0.06613120    0.05886880
3    0.78014506    0.02907108    0.02580320
4    0.68230164    0.01273424    0.01126766
5    0.59654227    0.00559236    0.00490054
6    0.52153858    0.00248228    0.00210436
7    0.45596220    0.00114193    0.00086312
8    0.39863078    0.00058431    0.00029216
Table 2. An Approximately Optimal Solution to the Factor-Revealing Linear Program with \(k_{\max }= 8\) in the Unweighted Case
Lemma A.2.
The optimal value of the LP that follows is at least 0.508:
\begin{align} \text{maximize} \quad & \Gamma \nonumber \\ \text{subject to} \quad & a(k) + b(k) \le 2^{-k} \cdot g_k - 2^{-(k+1)} \cdot g_{k+1} & \forall ~k \ge 0 \end{align}
(19)
\begin{align} & \sum _{\ell = 0}^{k-1} a(\ell) + 2 \cdot b(k) \ge \Gamma & \forall ~k \ge 0 \end{align}
(20)
\begin{align} & b(k) \ge b(k+1) & \forall ~k \ge 0 \\ & a(k), b(k) \ge 0 & \forall ~k \ge 0 \nonumber \end{align}
(21)
Consider an online vertex \(j \in R\), and let \(k_{\min }= \min _{i \in N(j)} k_i\) denote the minimum value of \(k_i\) among offline neighbors i of vertex j. First, suppose that it is a randomized round. Recall that \(i_1\) and \(i_2\) denote the two candidate offline vertices shortlisted in round j. Then, we have that \(k_i = k_{\min }\) for both \(i \in \lbrace i_1, i_2\rbrace\). In the primal, \(\overline{x}_i\) increases by \(2^{-k_{\min }} \cdot g_{k_{\min }} - 2^{-(k_{\min }+1)} \cdot g_{k_{\min }+1}\) for both \(i \in \lbrace i_1, i_2\rbrace\). In the dual, increase \(\alpha _i\) by \(a(k_{\min })\) for both \(i \in \lbrace i_1, i_2\rbrace\) and let \(\beta _j = 2 \cdot b(k_{\min }),\) where each \(i \in \lbrace i_1, i_2\rbrace\) contributes \(b(k_{\min })\).
Next, suppose that it is a deterministic round. Recall that \(i^*\) denotes the offline vertex to which vertex j is matched deterministically. Then, \(\overline{x}_{i^*}\) increases by \(2^{-k_{\min }} \cdot g_{k_{\min }}\) in the primal. In the dual, increase \(\alpha _{i^*}\) by \(\sum _{\ell \ge k_{\min }} a(\ell)\) and let \(\beta _j = 2 \cdot b(k_{\min }+1)\). No update is needed in an unmatched round, as \(\overline{{\rm\small P}}\) remains the same.

Objective Comparisons.

Next, we show that the increment in the dual objective \({\rm\small D}\) is at most that in the lower bound of the primal objective \(\overline{{\rm\small P}}\). In a randomized round, this follows from Equation (19). In a deterministic round, it follows from the sequence of inequalities below:
\begin{align*} \sum _{\ell \ge k} a(\ell) + 2 \cdot b(k+1) & \le \sum _{\ell \ge k} a(\ell) + b(k) + b(k+1) && \text{(by Equation (21))} \\ & \le \sum _{\ell \ge k} \left(a(\ell) + b(\ell)\right) \\ & \le \sum _{\ell \ge k} \left(2^{-\ell } \cdot g_\ell - 2^{-(\ell +1)} \cdot g_{\ell +1} \right) && \text{(by Equation (19))} \\ & = 2^{-k} \cdot g_{k} \text{.} \end{align*}

Approximate Dual Feasibility.

We first summarize the following invariants, which follow from the definitions of the dual updates.
For any offline vertex \(i \in L\), \(\alpha _i = \sum _{\ell =0}^{k_i-1} a(\ell)\).
For any online vertex j, \(\beta _j = 2 \cdot b(k)\) if it is matched in a randomized round to neighbors with \(k_i = k\) or in a deterministic round to a neighbor with \(k_i = k-1\).
For any edge \((i, j) \in E\), consider the value of \(k_i\) at the time when j arrives. If \(k_i = \infty\), the value of \(\alpha _i\) alone ensures approximate dual feasibility because
\begin{align*} \alpha _i & = \sum _{\ell \ge 0} a(\ell) \\ & = \lim _{k \rightarrow \infty } \sum _{\ell = 0}^{k-1} a(\ell) \\ & \ge \Gamma - 2 \lim _{k \rightarrow \infty } b(k) && \text{(by Equation (20))} \\ & = \Gamma ~. && \text{(by Equation (19), whose RHS tends to 0)} \end{align*}
Otherwise, by the definition of the two-choice greedy algorithm, j is either matched in a randomized round to two vertices with \(k_{i^{\prime }} \le k_i\) or in a deterministic round to a vertex with \(k_{i^{\prime }} \lt k_i\). In both cases, we have that \(\beta _j \ge 2 \cdot b(k_i).\) Approximate dual feasibility now follows by \(\alpha _i = \sum _{\ell = 0}^{k_i-1} a(\ell)\) and Equation (20).
Proof of Lemma A.2.
Consider a restricted version of the LP that is finite. For some positive \(k_{\max }\), let \(a(k) = b(k) = 0\) for all \(k \gt k_{\max }\). Then, the linear program becomes
\begin{align*} \text{maximize} \quad & \Gamma \\ \text{subject to} \quad & a(k) + b(k) \le 2^{-k} \cdot g_k - 2^{-(k+1)} \cdot g_{k+1} & 0 \le k \le k_{\max }\quad & \text{(revised (19))} \\ & \sum _{\ell = 0}^{k-1} a(\ell) + 2 \cdot b(k) \ge \Gamma & 0 \le k \le k_{\max }\quad & \text{(revised (20))} \\ & \sum _{\ell = 0}^{k_{\max }} a(\ell) \ge \Gamma & & \text{(revised (20), boundary case)} \\ & b(k) \ge b(k+1) & 0 \le k \lt k_{\max }\quad & \text{(revised (21))} \\ & a(k), b(k) \ge 0 & 0 \le k \le k_{\max } \end{align*}
See Table 2 for an approximately optimal solution for the restricted LP with \(k_{\max }= 8\), which gives a competitive ratio of \(\Gamma \approx 0.508986\).□
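As with the edge-weighted case, the restricted LP is straightforward to solve numerically. The sketch below is our code, not the authors': it tabulates \(g_k\) from Equation (17) with the optimal p and mirrors the constraints above, and it should reproduce a value close to 0.509.

```python
# A sketch of the restricted unweighted LP; variables are a(0..k_max),
# b(0..k_max), and Gamma, packed into one vector in that order.
import numpy as np
from scipy.optimize import linprog

def solve_unweighted_lp(k_max=8):
    c_ocs = (13 * 13 ** 0.5 - 35) / 108          # 0.1099..., from Theorem 3.3
    g = [1.0, 1.0]
    for k in range(2, k_max + 2):                # g_k via Equation (17)
        g.append(g[k - 1] - c_ocs * g[k - 2])

    n = k_max + 1; G = 2 * n                     # G is the index of Gamma
    rows, rhs = [], []
    for k in range(n):                           # a(k) + b(k) <= 2^-k g_k - 2^-(k+1) g_{k+1}
        r = np.zeros(G + 1); r[k] = r[n + k] = 1.0
        rows.append(r); rhs.append(2.0 ** -k * g[k] - 2.0 ** -(k + 1) * g[k + 1])
    for k in range(n):                           # Gamma <= sum_{l<k} a(l) + 2 b(k)
        r = np.zeros(G + 1); r[:k] = -1.0; r[n + k] = -2.0; r[G] = 1.0
        rows.append(r); rhs.append(0.0)
    r = np.zeros(G + 1); r[:n] = -1.0; r[G] = 1.0
    rows.append(r); rhs.append(0.0)              # Gamma <= sum_l a(l), boundary case
    for k in range(n - 1):                       # b(k) >= b(k+1)
        r = np.zeros(G + 1); r[n + k] = -1.0; r[n + k + 1] = 1.0
        rows.append(r); rhs.append(0.0)

    obj = np.zeros(G + 1); obj[G] = -1.0         # maximize Gamma
    res = linprog(obj, A_ub=np.vstack(rows), b_ub=rhs, bounds=[(0, None)] * (G + 1))
    return res.x[G]

print(solve_unweighted_lp())                     # roughly 0.509, cf. Table 2
```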

B Hard Instances

This section presents two families of unweighted graphs that demonstrate some hardness results for the online matching algorithms considered in this article. We also give an example showing that the two-choice greedy with perfect negative correlation is infeasible in the online setting.

B.1 Upper Triangular Graphs

Consider a bipartite graph with n vertices on each side. Let each online vertex \(1 \le j \le n\) be incident to the offline vertices \(j \le i \le n\). Thus, the adjacency matrix (with online vertices as rows and offline vertices as columns) is an upper triangular matrix. This is a standard instance for showing hardness that dates back to Karp et al. [38].
Theorem B.1.
The two-choice greedy algorithm using independent random bits in different randomized rounds is only \((1/2 + o(1))\)-competitive.
Proof.
For ease of presentation, suppose that the algorithm chooses candidates in reverse lexicographical order. Consider an upper triangular graph with \(n = 3^k\) for some large positive integer k. First, observe that there is a perfect matching where the i-th online vertex is matched to the i-th offline vertex. Hence, the optimal value is n.
Next, consider the performance of the online algorithm. The first \(n/3 = 3^{k-1}\) vertices are matched to the last \(2/3\) fraction of the offline vertices in randomized rounds. That is, their correct neighbors in the perfect matching are left unmatched, while the other offline vertices are only half matched. Then, the first one-third of the remaining online vertices (i.e., \(1/3 \cdot 2n/3 = 2 \cdot 3^{k-2}\) in total) are matched to the last \((2/3)^2\) fraction of the offline vertices in randomized rounds. That is, their correct neighbors in the perfect matching are left matched by only half, while the correct neighbors of subsequent online vertices are now matched by three-quarters. The argument goes on recursively.
Therefore, omitting a lower-order term due to the last \(2^k = n^{\log _3 2}\) vertices on both sides, the expected size of the matching is
\begin{align*} \left(1 \cdot \frac{1}{3} + \frac{1}{2} \cdot \frac{2}{9} + \cdots + \left(\frac{1}{2}\right)^k \cdot \frac{2^k}{3^{k+1}} + \cdots \right) n & = \left(\frac{1}{3} + \frac{1}{9} + \cdots + \frac{1}{3^{k+1}} + \cdots \right) n \\ & = \frac{n}{2} ~. \end{align*}
Hence, the two-choice greedy algorithm is at best \((1/2 + o(1))\)-competitive.□
Theorem B.2.
The imaginary two-choice greedy algorithm with perfect negative correlation across different randomized rounds is only \(5/9\)-competitive.
Proof.
For ease of presentation, suppose that the algorithm chooses candidates in reverse lexicographical order. Consider an upper triangular graph with \(n = 9\). There are nine vertices on each side, denoted as \(i_1, i_2, \dots , i_9\) and \(j_1, j_2, \dots , j_9\), and a perfect matching with \(i_k\) matched to \(j_k\) for \(k = 1, 2, \dots , 9\). The first three online vertices, \(j_1\), \(j_2\), and \(j_3\), are connected to all offline vertices. After their arrivals, \(i_1\), \(i_2\), and \(i_3\) are unmatched while the remaining six offline vertices are matched by half. Then, the next two online vertices, \(j_4\) and \(j_5\), are connected to the last six offline vertices, that is, \(i_4\) to \(i_9\). After their arrival, \(i_4\) and \(i_5\) remain matched by half, while \(i_6\) to \(i_9\) are fully matched due to the perfect negative correlation. The remaining online vertices, \(j_6\) to \(j_9\), are connected only to \(i_6\) through \(i_9\), which are already fully matched, so they contribute nothing. Therefore, the algorithm finds a matching of size \(1/2 \cdot 2 + 1 \cdot 4 = 5\) in expectation, but the optimal matching has size 9. The competitive ratio is at most \(5/9\), establishing the claimed bound.□
While these hardness results crucially rely on the fact that the two-choice greedy algorithm breaks ties in a deterministic way, we note that the analysis can be extended to random tie-breaking algorithms by permuting the offline vertices.

B.2 Erdős–Rényi Upper Triangular Graphs

Consider the following random bipartite graph that has n vertices on each side. Each online vertex \(1 \le j \le n\) is incident to the offline vertex \(i = j\) with certainty. Each offline vertex i with \(j \lt i \le n\) is adjacent to online vertex j independently with probability p, where \(0 \lt p \lt 1\) is a parameter to be determined.
By considering the Erdős–Rényi variant of upper triangular graphs instead of the original ones, we ensure that, with high probability, any fixed offline vertex is paired with distinct partners across its randomized rounds. This is effectively the worst-case scenario in the analysis of the OCS algorithm in Section 5. Letting \(n = 2^{13}\) and \(p = 2^{-6}\), an empirical evaluation shows that our analysis for the combination of the two-choice greedy algorithm and an OCS is nearly optimal; a simulation sketch follows the two observations below.
Observation 1.
The two-choice greedy algorithm with the OCS in Algorithm 2 is at most 0.5057-competitive.
Observation 2.
The two-choice greedy algorithm with the OCS in Algorithm 3 is at most 0.51-competitive.
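For readers who wish to reproduce estimates of this kind, the sketch below is entirely our own: the article does not provide code, and both the tie-breaking rule of the two-choice greedy and the convention that a lone eligible candidate is matched deterministically are our assumptions. It runs the two-choice greedy with the warmup OCS of Algorithm 2 on the random graph above and averages the realized matching size over trials (the identity matching certifies that the offline optimum is n).

```python
# An illustrative simulation on Erdős–Rényi upper triangular graphs.
import numpy as np

def simulate(n=2 ** 13, p=2 ** -6, trials=5, seed=0):
    rng = np.random.default_rng(seed)
    ratios = []
    for _ in range(trials):
        k = np.zeros(n, dtype=int)        # randomized rounds per offline vertex
        det = np.zeros(n, dtype=bool)     # matched deterministically
        hit = np.zeros(n, dtype=bool)     # selected by the OCS at least once
        tau = {}                          # OCS state: i -> "did the sender select i?"
        for j in range(n):                # online vertices arrive in order
            nbrs = np.concatenate(([j], j + 1 + np.flatnonzero(rng.random(n - j - 1) < p)))
            elig = nbrs[~det[nbrs]]
            if elig.size == 0:
                continue                  # unmatched round
            if elig.size == 1:
                det[elig[0]] = True       # assumed: a lone candidate is matched deterministically
                continue
            two = elig[np.argsort(k[elig], kind="stable")[:2]]   # smallest k_i candidates
            i1, i2 = int(two[0]), int(two[1])
            k[i1] += 1; k[i2] += 1
            if rng.random() < 0.5:        # OCS sender: fresh bit, forward for one element
                sel = i1 if rng.random() < 0.5 else i2
                keep, drop = (i1, i2) if rng.random() < 0.5 else (i2, i1)
                tau.pop(drop, None); tau[keep] = (sel == keep)
            else:                         # OCS receiver: oppose a forwarded selection if any
                m = i1 if rng.random() < 0.5 else i2
                other = i2 if m == i1 else i1
                if m in tau:
                    sel = other if tau[m] else m
                else:
                    sel = i1 if rng.random() < 0.5 else i2
                tau.pop(i1, None); tau.pop(i2, None)
            hit[sel] = True
        ratios.append((det | hit).sum() / n)
    return float(np.mean(ratios))

print(simulate())    # we would expect a value in the vicinity of 0.50-0.51
```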

B.3 Infeasibility

Now, we show that two-choice greedy with perfect negative correlation is infeasible in the online setting. Here, perfect negative correlation means that if an offline vertex is a candidate in two randomized rounds (i.e., it is fed as input to the OCS twice), then it must be selected by the OCS at least once and, hence, is matched. Consider a graph with four offline vertices, denoted as 1 to 4. The first online vertex, denoted as 5, is connected to 1 and 2. The second online vertex, denoted as 6, is connected to 3 and 4. The third online vertex, denoted as 7, has two possibilities: it is connected with either 1 and 3 or 1 and 4. In the former case, the following pairs of edges have perfect negative correlations: \((1,5)\) and \((2,5)\), \((1,7)\) and \((3,7)\), \((1,5)\) and \((1,7)\), and \((3,6)\) and \((3,7)\). The first two pairs are due to having the same online vertex; the last two pairs are due to having the same offline vertex. Hence, we can deduce that \((2,5)\) and \((3,6)\) have perfect positive correlation. In the latter case, however, a similar argument gives that \((2,5)\) and \((3,6)\) have perfect negative correlation. An online algorithm cannot handle both cases simultaneously, since the correlation between \((2,5)\) and \((3,6)\) is determined before the arrival of vertex 7 in the online setting.

C Connections to the Original Algorithm

This section explains the connections between the online primal-dual algorithm in this article and the original algorithm by Fahrbach and Zadimoghaddam [12]. We start by briefly describing the algorithm in Algorithm 4, with minor modifications to make it consistent with the notations in this article. Then, we simplify the algorithm by considering the special case of unweighted online matching and present the result as Algorithm 5. Finally, we explain how Algorithm 5 in the unweighted case is effectively a two-choice greedy algorithm that implicitly uses the warmup \(1/16\)-OCS from Section 5.

C.1 Original Algorithm

The algorithm uses two parameters \(\varepsilon , \delta \gt 0\) that are later optimized in the analysis in [12]. These parameters are not necessary for explaining the connections between the two algorithms; thus, we omit their values. For each offline vertex \(i \in L\), the algorithm maintains the following state variables that express the behavior of the last randomized round involving i. First, it maintains a Boolean variable \(\textsf {active}(i)\) that indicates whether the realization of the last randomized selection involving vertex i can be adaptively used in the next randomized round in which i is involved. The goal here is to introduce negative correlation in the same way that the \(1/16\)-OCS does. Each offline vertex also maintains two state variables about the last randomized selection involving i: the corresponding online vertex \(\textsf {index}(i)\) and the other offline vertex \(\textsf {partner}(i)\) in the last randomized round. Finally, the realization of the last randomized selection is stored as \(\textsf {priority}(i)\). Informally in the notation of this article, \(\textsf {priority}(i) = 0\) corresponds to the case in which the last randomized round involving i is a receiver; otherwise (i.e., the sender case) \(\textsf {priority}(i)\) is 1 if i is selected the last time and is 2 if i is not selected.
For each online vertex \(j \in R\), the matching decision is made using two different quality measures for the offline vertices. For each offline vertex i, \(\textsf {gain}_{ij}\) denotes how much i’s heaviest edge weight would increase should j be matched to i. The first measure is the expectation of \(\textsf {gain}_{ij}\), which equals \(\int _0^{w_{ij}} (1 - y_i(w)) \mathop {}\!\mathrm{d}w\) in the CCDF viewpoint of this article. It also defines \(\textsf {adaptive_gain}_{ij}\), which captures the extra value of matching j to i due to the ability to make adaptive decisions based on the realization of the last randomized round involving i. Informally, this corresponds to the benefit of using the OCS for negative correlation in our primal-dual algorithm. The formula of \(\textsf {adaptive_gain}_{ij}\) is derived from the analysis in [12] and its interpretation is not necessary for understanding the connections between the two algorithms. We refer the reader to [12] for a more detailed explanation of Algorithm 4 in the general edge-weighted online matching problem.

C.2 Simplified and Symmetrized Algorithm for Unweighted Online Matching

We now focus on a simplified algorithm in the special case of unweighted online matching in order to better explain the connections to our primal-dual algorithm in this article. In this setting, \(w_{ij} \in \lbrace 0,1\rbrace\) for any \(i \in L\) and any \(j \in R\).
Simplifying Case 2. The case in which \(|B| \le 1\) in Algorithm 4 is significantly simpler in the unweighted case. Observe that for any offline vertex \(i \in L\), either \(w_{ij} = 0\), in which case we have \(\mathbb {E}[ \textsf {gain}_{ij} ] = 0\), or \(w_{ij} = 1\), in which case we have \(w_{ij} \gt w_{i,\textsf {index}(i)}-\delta M_j\). In other words, C is always an empty set. On the other hand, \(B^{\prime }\) is nonempty because the offline neighbor with the maximum value of \(\mathbb {E}[ \textsf {gain}_{ij} ]\) is always in the set. Further observe that \(B^{\prime }\) must be a singleton because \(B^{\prime }\) is a subset of B, which has at most one element since we are in the second case of the algorithm. Putting this all together, the algorithm always matches j to the unique element in \(B^{\prime }\). This corresponds to a deterministic round in the primal-dual algorithm in this article.
Simplifying Gains and Adaptive Gains. Recall that \(x_i\) denotes the probability that an offline vertex i is matched. Thus, the expected gain \(\mathbb {E}[ \textsf {gain}_{ij} ]\) in the unweighted case equals \(1 - x_i\). Observe that the expected gain is 0 if an offline vertex i has previously been matched in case 2 (i.e., a deterministic round).
Next, we consider the adaptive gain values. In the unweighted case, the second term in the formula for computing adaptive gains is always 0 for any offline neighbor i of the online vertex j, because both \(w_{i,\textsf {index}(i)}\) and \(w_{ij}\) are 1. The third term, on the other hand, equals 0 if i has never been in case 2, and otherwise equals \(\mathbb {E}[ \textsf {gain}_{i,\textsf {index}(i)}]\) due to the earlier discussion on the simplification of case 2 in the unweighted case. In other words, the adaptive gain can be simplified as \(\mathbb {E}[ \textsf {gain}_{i, \textsf {index}(i)} ]/36\) for any i that has not yet been matched deterministically and is 0 otherwise.
Simplifying the Candidate Set. Since both the gain and the adaptive gain values are 0 for any offline vertex that has been deterministically matched, they cannot appear in the candidate set B. Therefore, it suffices to consider j’s offline neighbors that have not yet been matched deterministically. For such vertices, the first condition of the candidate set B holds trivially because \(w_{ij} = 1\) and \(w_{i, \textsf {index}(i)} - \delta M_j = 1 - \delta M_j \lt 1\). In conclusion, it suffices to keep only the second condition.
Symmetrizing the Choice of \(\ell\). We observe that choosing \(\ell\) to maximize the adaptive gain has no significance in the analysis by the authors of [12]. The analysis therein distributes the benefit of making adaptive decisions with regard to \(i_\ell\) equally between \(i_1\) and \(i_2\), and for \(i_{-\ell }\) it merely needs its share to be at least half the benefit of making adaptive decisions with regard to \(i_{-\ell }\). To this end, we symmetrize the choice of \(\ell \in \lbrace 1, 2\rbrace\) to be uniformly at random. Doing so makes the connection to the algorithm in this article more apparent.
Optimizing the Efficiency of Adaptivity. Finally, we remove the condition on the value of adaptive gain being 0 in the if statement in the first case of the algorithm. This is driven by the observation that the online vertex j is matched randomly to \(i_1\) and \(i_2\) with equal probability whenever it holds, which is identical to the else case of the if statement. Moreover, the latter case allows us to store the realization of the random selection to be exploited adaptively later, whereas the former does not. Hence, other than making the algorithm closer to the OCS introduced in this article, this technical change also improves the efficiency of adaptivity in the original algorithm.
This simplified and symmetrized version of the original algorithm in the unweighted case is summarized as Algorithm 5.

C.3 Connections Between the Unweighted Algorithms

We focus on the randomized rounds to explain the connections to the warmup \(1/16\)-OCS in Section 5. If \(R \in [1/3, 2/3)\), it corresponds to a sender round in the OCS where \(i_1\) is selected. If \(R \in [2/3, 1)\), it corresponds to a sender round where \(i_2\) is selected. If \(R \in [0, 1/3)\), it corresponds to a receiver round. The choice of \(\ell\) corresponds to the choice of a random in-arc by a receiver in the OCS. The state variable \(\textsf {active}(i)\) ensures that each sender’s selection is adaptively used by at most one receiver. In the OCS, the sender randomly picks an out-arc as the potential receiver. In contrast, Algorithm 5 deterministically picks the out-neighbor that arrives earlier. The authors of [12] effectively use an amortization in the analysis to distribute the benefit between the two out-arcs.
In conclusion, aside from the different choices of constants and the use of an amortization in the analysis instead of a symmetrized algorithm, the 0.501-competitive algorithm in [12] implicitly contains the ideas behind the warmup \(1/16\)-OCS presented as Algorithm 2 in this article.

References

[1]
Gagan Aggarwal, Gagan Goel, Chinmay Karande, and Aranyak Mehta. 2011. Online vertex-weighted bipartite matching and single-bid budgeted allocations. In Proceedings of the 22nd Annual ACM-SIAM Symposium on Discrete Algorithms. SIAM, 1253–1264.
[2]
Itai Ashlagi, Maximilien Burq, Chinmoy Dutta, Patrick Jaillet, Amin Saberi, and Chris Sholley. 2019. Edge weighted online windowed matching. In Proceedings of the 20th ACM Conference on Economics and Computation. ACM, 729–742.
[3]
Guy Blanc and Moses Charikar. 2021. Multiway online correlated selection. In Proceedings of the 62nd Annual IEEE Symposium on Foundations of Computer Science. IEEE, 1277–1284.
[4]
Brian Brubach, Karthik A. Sankararaman, Aravind Srinivasan, and Pan Xu. 2020. Attenuate locally, win globally: Attenuation-based frameworks for online stochastic matching with timeouts. Algorithmica 82, 1 (2020), 64–87.
[5]
Niv Buchbinder, Moran Feldman, Yuval Filmus, and Mohit Garg. 2020. Online submodular maximization: Beating 1/2 made simple. Mathematical Programming 183, 1 (2020), 149–169.
[6]
Niv Buchbinder, Kamal Jain, and Joseph Seffi Naor. 2007. Online primal-dual algorithms for maximizing ad-auctions revenue. In Proceedings of the 15th Annual European Conference on Algorithms. Springer, 253–264.
[7]
Nikhil R. Devanur and Thomas P. Hayes. 2009. The adwords problem: Online keyword matching with budgeted bidders under random permutations. In Proceedings of the 10th ACM Conference on Electronic Commerce. ACM, 71–78.
[8]
Nikhil R. Devanur, Zhiyi Huang, Nitish Korula, Vahab S. Mirrokni, and Qiqi Yan. 2016. Whole-page optimization and submodular welfare maximization with online bidders. ACM Transactions on Economics and Computation 4, 3 (2016), 1–20.
[9]
Nikhil R. Devanur, Kamal Jain, and Robert D. Kleinberg. 2013. Randomized primal-dual analysis of ranking for online bipartite matching. In Proceedings of the 24th Annual ACM-SIAM Symposium on Discrete Algorithms. SIAM, 101–107.
[10]
Nikhil R. Devanur, Kamal Jain, Balasubramanian Sivan, and Christopher A. Wilkens. 2011. Near optimal online algorithms and fast approximation algorithms for resource allocation problems. In Proceedings of the 12th ACM Conference on Electronic Commerce. ACM, 29–38.
[11]
Hossein Esfandiari, Nitish Korula, and Vahab Mirrokni. 2015. Online allocation with traffic spikes: Mixing adversarial and stochastic models. In Proceedings of the 16th ACM Conference on Economics and Computation. 169–186.
[12]
Matthew Fahrbach and Morteza Zadimoghaddam. 2019. Online weighted matching: Breaking the \(\frac{1}{2}\) barrier. arXiv preprint arXiv:1704.05384v2.
[13]
Jon Feldman, Monika Henzinger, Nitish Korula, Vahab S. Mirrokni, and Cliff Stein. 2010. Online stochastic packing applied to display ad allocation. In Proceedings of the 18th Annual European Conference on Algorithms. Springer, 182–194.
[14]
Jon Feldman, Nitish Korula, Vahab S. Mirrokni, Shanmugavelayutham Muthukrishnan, and Martin Pál. 2009. Online ad assignment with free disposal. In International Workshop on Internet and Network Economics. Springer, 374–385.
[15]
Jon Feldman, Aranyak Mehta, Vahab S. Mirrokni, and Shan Muthukrishnan. 2009. Online stochastic matching: Beating \(1-1/e\). In Proceedings of the 50th Annual IEEE Symposium on Foundations of Computer Science. IEEE, 117–126.
[16]
Marshall L. Fisher, George L. Nemhauser, and Laurence A. Wolsey. 1978. An analysis of approximations for maximizing submodular set functions-II. In Polyhedral Combinatorics. Springer, 73–87.
[17]
Buddhima Gamlath, Sagar Kale, and Ola Svensson. 2019. Beating greedy for stochastic bipartite matching. In Proceedings of the 30th Annual ACM-SIAM Symposium on Discrete Algorithms. SIAM, 2841–2854.
[18]
Buddhima Gamlath, Michael Kapralov, Andreas Maggiori, Ola Svensson, and David Wajc. 2019. Online matching with general arrivals. In Proceedings of the 60th Annual IEEE Symposium on Foundations of Computer Science. IEEE, 26–37.
[19]
Ruiquan Gao, Zhongtian He, Zhiyi Huang, Zipei Nie, Bijun Yuan, and Yan Zhong. 2021. Improved online correlated selection. In Proceedings of the 62nd Annual IEEE Symposium on Foundations of Computer Science. IEEE, 1265–1276.
[20]
Gagan Goel and Aranyak Mehta. 2008. Online budgeted matching in random input models with applications to adwords. In Proceedings of the 19th Annual ACM-SIAM Symposium on Discrete Algorithms. SIAM, 982–991.
[21]
Guru Prashanth Guruganesh and Sahil Singla. 2017. Online matroid intersection: Beating half for random arrival. In International Conference on Integer Programming and Combinatorial Optimization. Springer, 241–253.
[22]
Bernhard Haeupler, Vahab S. Mirrokni, and Morteza Zadimoghaddam. 2011. Online stochastic weighted matching: Improved approximation algorithms. In International Workshop on Internet and Network Economics. Springer, 170–181.
[23]
Zhiyi Huang. 2019. Understanding Zadimoghaddam’s edge-weighted online matching algorithm: Weighted case. arXiv preprint arXiv:1910.03287 (2019).
[24]
Zhiyi Huang, Ning Kang, Zhihao Gavin Tang, Xiaowei Wu, Yuhao Zhang, and Xue Zhu. 2018. How to match when all vertices arrive online. In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing. ACM, 17–29.
[25]
Zhiyi Huang, Binghui Peng, Zhihao Gavin Tang, Runzhou Tao, Xiaowei Wu, and Yuhao Zhang. 2019. Tight competitive ratios of classic matching algorithms in the fully online model. In Proceedings of the 30th Annual ACM-SIAM Symposium on Discrete Algorithms. SIAM, 2875–2886.
[26]
Zhiyi Huang and Xinkai Shu. 2021. Online stochastic matching, Poisson arrivals, and the natural linear program. In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing. ACM, 682–693.
[27]
Zhiyi Huang, Xinkai Shu, and Shuyi Yan. 2022. The power of multiple choices in online stochastic matching. In Proceedings of the 53rd Annual ACM Symposium on Theory of Computing. ACM, 91–103.
[28]
Zhiyi Huang, Zhihao Gavin Tang, Xiaowei Wu, and Yuhao Zhang. 2019. Online vertex-weighted bipartite matching: Beating \(1-1/e\) with random arrivals. ACM Transactions on Algorithms 15, 3 (2019), 38.
[29]
Zhiyi Huang, Zhihao Gavin Tang, Xiaowei Wu, and Yuhao Zhang. 2020. Fully online matching II: Beating ranking and water-filling. In Proceedings of the 61st Annual IEEE Symposium on Foundations of Computer Science. IEEE, 1380–1391.
[30]
Zhiyi Huang and Runzhou Tao. 2019. Understanding Zadimoghaddam’s edge-weighted online matching algorithm: Unweighted case. arXiv preprint arXiv:1910.02569
[31]
Zhiyi Huang, Qiankun Zhang, and Yuhao Zhang. 2020. AdWords in a panorama. In Proceedings of the 61st Annual IEEE Symposium on Foundations of Computer Science. IEEE, 1416–1426.
[32]
Patrick Jaillet and Xin Lu. 2013. Online stochastic matching: New algorithms with better bounds. Mathematics of Operations Research 39, 3 (2013), 624–646.
[33]
Billy Jin and David P. Williamson. 2021. Improved analysis of RANKING for online vertex-weighted bipartite matching in the random order model. In International Conference on Web and Internet Economics. Springer, 207–225.
[34]
Kumar Joag-Dev and Frank Proschan. 1983. Negative association of random variables with applications. The Annals of Statistics (1983), 286–295.
[35]
Bala Kalyanasundaram and Kirk R. Pruhs. 2000. An optimal deterministic algorithm for online b-matching. Theoretical Computer Science 233, 1-2 (2000), 319–325.
[36]
Michael Kapralov, Ian Post, and Jan Vondrák. 2013. Online submodular welfare maximization: Greedy is optimal. In Proceedings of the 24th Annual ACM-SIAM Symposium on Discrete Algorithms. SIAM, 1216–1225.
[37]
Chinmay Karande, Aranyak Mehta, and Pushkar Tripathi. 2011. Online bipartite matching with unknown distributions. In Proceedings of the 43rd Annual ACM Symposium on Theory of Computing. ACM, 587–596.
[38]
Richard Karp, Umesh Vazirani, and Vijay Vazirani. 1990. An optimal algorithm for on-line bipartite matching. In Proceedings of the 22nd Annual ACM Symposium on Theory of Computing. ACM, 352–358.
[39]
Nitish Korula, Vahab S. Mirrokni, and Morteza Zadimoghaddam. 2013. Bicriteria online matching: Maximizing weight and cardinality. In International Conference on Web and Internet Economics. Springer, 305–318.
[40]
Nitish Korula, Vahab S. Mirrokni, and Morteza Zadimoghaddam. 2018. Online submodular welfare maximization: Greedy beats \(1/2\) in random order. SIAM Journal on Computing 47, 3 (2018), 1056–1086.
[41]
Benny Lehmann, Daniel Lehmann, and Noam Nisan. 2006. Combinatorial auctions with decreasing marginal utilities. Games and Economic Behavior 55, 2 (2006), 270–296.
[42]
Mohammad Mahdian and Qiqi Yan. 2011. Online bipartite matching with random arrivals: An approach based on strongly factor-revealing LPs. In Proceedings of the 43rd Annual ACM Symposium on Theory of Computing. ACM, 597–606.
[43]
Vahideh H. Manshadi, Shayan Oveis Gharan, and Amin Saberi. 2012. Online stochastic matching: Online actions based on offline statistics. Mathematics of Operations Research 37, 4 (2012), 559–573.
[44]
Aranyak Mehta. 2013. Online matching and ad allocation. Foundations and Trends in Theoretical Computer Science 8, 4 (2013), 265–368.
[45]
Aranyak Mehta, Amin Saberi, Umesh Vazirani, and Vijay Vazirani. 2005. AdWords and generalized on-line matching. In Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science. IEEE, 264–273.
[46]
Vahab S. Mirrokni, Shayan Oveis Gharan, and Morteza Zadimoghaddam. 2012. Simultaneous approximations for adversarial and stochastic online budgeted allocation. In Proceedings of the 23rd Annual ACM-SIAM Symposium on Discrete Algorithms. SIAM, 1690–1701.
[47]
Yongho Shin and Hyung-Chan An. 2021. Making three out of two: three-way online correlated selection. In Proceedings of the 32nd International Symposium on Algorithms and Computation. Schloss Dagstuhl, 49:1–49:17.
[48]
Zhihao Gavin Tang, Jinzhao Wu, and Hongxun Wu. 2022. (Fractional) online stochastic matching via fine-grained offline statistics. In Proceedings of the 53rd Annual ACM Symposium on Theory of Computing. ACM, 77–90.

    Published In

    Journal of the ACM, Volume 69, Issue 6 (December 2022), 302 pages.
    ISSN: 0004-5411
    EISSN: 1557-735X
    DOI: 10.1145/3570966
    This work is licensed under a Creative Commons Attribution 4.0 International License.

    Publisher

    Association for Computing Machinery, New York, NY, United States

    Publication History

    Published: 17 November 2022
    Online AM: 13 August 2022
    Accepted: 19 July 2022
    Revised: 10 March 2022
    Received: 11 February 2021

    Author Tags

    1. Factor-revealing linear program
    2. Free disposal
    3. Online bipartite matching
    4. Online correlated selection
    5. Primal-dual method

    Qualifiers

    • Research-article
    • Refereed

    Funding Sources

    • NSF
    • RGC
