Abstract
Imitating successful behavior is a natural and frequently applied approach when facing complex decision problems. In this paper, we design protocols for distributed latency minimization in atomic congestion games based on imitation. We propose to study concurrent dynamics that emerge when each agent samples another agent and possibly imitates this agent’s strategy if the anticipated latency gain is sufficiently large. Our focus is on convergence properties. We show convergence in a monotonic fashion to stable states, in which none of the agents can improve their latency by imitating others. As our main result, we show rapid convergence to approximate equilibria, in which only a small fraction of agents sustains a latency significantly above or below average. In this sense, imitation dynamics behave like a fully polynomial-time approximation scheme (FPTAS), and the convergence time depends only logarithmically on the number of agents. Imitation processes cannot discover unused strategies, and strategies may become extinct with non-zero probability. For singleton games, we show that the probability of this event occurring is negligible. Additionally, we prove that the social cost of a stable state reached by our dynamics is not much worse than that of an optimal state in singleton games with linear latency functions. We concentrate on the case of symmetric network congestion games, but our results do not use the network structure and continue to hold for general symmetric games. They even apply to asymmetric games when agents sample only within the set of agents that have the same strategy space. Finally, we discuss how the protocol can be extended such that, in the long run, the dynamics converge to a pure Nash equilibrium.
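To make the sampling-and-imitation rule described above concrete, the following Python sketch implements one concurrent round of such a dynamics in a singleton congestion game. It is an illustration under simplifying assumptions (singleton strategies, a migration probability proportional to the relative latency gain); the function names, latency functions, and parameters are hypothetical and not the exact protocol analyzed in the paper.

```python
import random
from typing import Callable, Dict, List

# A minimal sketch of one concurrent round of imitation dynamics in a singleton
# congestion game: every agent samples another agent uniformly at random and
# imitates its resource if the anticipated latency improvement is large enough.
# The migration rule (probability proportional to the relative gain) and the
# latency functions are illustrative placeholders, not the paper's protocol.
def imitation_round(
    strategy: List[int],                          # strategy[i] = resource chosen by agent i
    latency: Dict[int, Callable[[int], float]],   # latency[r](load) = latency of resource r
) -> List[int]:
    n = len(strategy)
    load = {r: 0 for r in latency}
    for r in strategy:
        load[r] += 1

    new_strategy = strategy[:]
    for i in range(n):                            # all agents act concurrently
        j = random.randrange(n)                   # sample another agent uniformly
        r_old, r_new = strategy[i], strategy[j]
        if r_old == r_new:
            continue
        current = latency[r_old](load[r_old])
        anticipated = latency[r_new](load[r_new] + 1)   # latency after joining r_new
        gain = current - anticipated
        if current > 0 and gain > 0 and random.random() < gain / current:
            new_strategy[i] = r_new               # imitate the sampled agent
    return new_strategy

# Example: 20 agents on two linear-latency resources, random initial assignment.
if __name__ == "__main__":
    lat = {0: lambda x: 1.0 * x, 1: lambda x: 2.0 * x}
    state = [random.randrange(2) for _ in range(20)]
    for _ in range(30):
        state = imitation_round(state, lat)
    print("final loads:", {r: state.count(r) for r in lat})
```

Note that if all agents start on the same resource, this sketch never moves anyone, reflecting the fact that imitation cannot discover unused strategies.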
Notes
We will restrict our attention to pure Nash equilibria throughout the paper.
References
Ackermann, H., Fischer, S., Hoefer, M., Schöngens, M.: Distributed algorithms for QoS load balancing. Distrib. Comput. 23(5–6), 321–330 (2011)
Ackermann, H., Röglin, H., Vöcking, B.: On the impact of combinatorial structure on congestion games. J. ACM 55(6), (2008)
Adolphs, C., Berenbrink, P.: Distributed selfish load balancing with weights and speeds. In: Proceedings of the 31st Symposium on Principles of Distributed Computing (PODC), pp. 135–144 (2012)
Awerbuch, B., Azar, Y., Epstein, A.: The price of routing unsplittable flow. SIAM J. Comput. 42(1), 160–177 (2013)
Awerbuch, B., Azar, Y., Epstein, A., Mirrokni, V., Skopalik, A.: Fast convergence to nearly optimal solutions in potential games. In: Proceedings of the 9th Conference on Electronic Commerce (EC), pp. 264–273 (2008)
Berenbrink, P., Friedetzky, T., Goldberg, L.A., Goldberg, P., Hu, Z., Martin, R.: Distributed selfish load balancing. SIAM J. Comput. 37(4), 1163–1181 (2007)
Berenbrink, P., Friedetzky, T., Hajirasouliha, I., Hu, Z.: Convergence to equilibria in distributed, selfish reallocation processes with weighted tasks. Algorithmica 62(3–4), 767–786 (2012)
Berenbrink, P., Hoefer, M., Sauerwald, T.: Distributed selfish load balancing on networks. In: Proceedings of the 22nd Symposium on Discrete Algorithms (SODA), pp. 1487–1497 (2011)
Blum, A., Even-Dar, E., Ligett, K.: Routing without regret: on convergence to Nash equilibria of regret-minimizing algorithms in routing games. Theory Comput. 6(1), 179–199 (2010)
Chien, S., Sinclair, A.: Convergence to approximate Nash equilibria in congestion games. Games Econom. Behav. 71(2), 315–327 (2011)
Christodoulou, G., Koutsoupias, E.: The price of anarchy in finite congestion games. In: Proceedings of the 37th Symposium on Theory of Computing (STOC), pp. 67–73 (2005)
Even-Dar, E., Kesselman, A., Mansour, Y.: Convergence time to Nash equilibria in load balancing. ACM Trans. Algorithms 3(3), (2007)
Even-Dar, E., Mansour, Y.: Fast convergence of selfish rerouting. In: Proceedings of the 16th Symposium on Discrete Algorithms (SODA), pp. 772–781 (2005)
Fabrikant, A., Papadimitriou, C., Talwar, K.: The complexity of pure Nash equilibria. In: Proceedings of the 36th Symposium on Theory of Computing (STOC), pp. 604–612 (2004)
Fanelli, A., Flammini, M., Moscardelli, L.: The speed of convergence in congestion games under best-response dynamics. ACM Trans. Algorithms 8(3), 25 (2012)
Fanelli, A., Moscardelli, L.: On best response dynamics in weighted congestion games with polynomial delays. Distrib. Comput. 24(5), 245–254 (2011)
Fanelli, A., Moscardelli, L., Skopalik, A.: On the impact of fair best response dynamics. In: Proceedings of the 37th International Symposium on Mathematical Foundations of Computer Science (MFCS), pp. 360–371 (2012)
Fischer, S., Mähönen, P., Schöngens, M., Vöcking, B.: Load balancing for dynamic spectrum assignment with local information for secondary users. In: Proceedings of the Symposium on Dynamic Spectrum Access Networks (DySPAN) (2008)
Fischer, S., Räcke, H., Vöcking, B.: Fast convergence to Wardrop equilibria by adaptive sampling methods. SIAM J. Comput. 39(8), 3700–3735 (2010)
Fischer, S., Vöcking, B.: Adaptive routing with stale information. Theor. Comput. Sci. 410(36), 3357–3371 (2008)
Fotakis, D., Kaporis, A., Spirakis, P.: Atomic congestion games: fast, myopic and concurrent. Theory Comput. Syst. 47(1), 38–49 (2010)
Fotakis, D., Kontogiannis, S., Spirakis, P.: Atomic congestion games among coalitions. ACM Trans. Algorithms 4(4), (2008)
Goldberg, P.: Bounds for the convergence rate of randomized local search in a multiplayer load-balancing game. In: Proceedings of the 23rd Symposium on Principles of Distributed Computing (PODC), pp. 131–140 (2004)
Hagerup, T., Rüb, C.: A guided tour of Chernoff bounds. Inf. Process. Lett. 33, 305–308 (1990)
Hofbauer, J., Sigmund, K.: Evolutionary Games and Population Dynamics. Cambridge University Press, Cambridge (1998)
Ieong, S., McGrew, R., Nudelman, E., Shoham, Y., Sun, Q.: Fast and compact: a simple class of congestion games. In: Proceedings of the 20th Conference on Artificial Intelligence (AAAI), pp. 489–494 (2005)
Kleinberg, R., Piliouras, G., Tardos, É.: Multiplicative updates outperform generic no-regret learning in congestion games. In: Proceedings of the 41st Symposium on Theory of Computing (STOC), pp. 533–542 (2009)
Kleinberg, R., Piliouras, G., Tardos, É.: Load balancing without regret in the bulletin board model. Distrib. Comput. 24(1), 21–29 (2011)
Koutsoupias, E., Papadimitriou, C.: Worst-case equilibria. Comput. Sci. Rev. 3(2), 65–69 (2009)
Rosenthal, R.: A class of games possessing pure-strategy Nash equilibria. Int. J. Game Theory 2, 65–67 (1973)
Roughgarden, T.: Intrinsic robustness of the price of anarchy. In: Proceedings of the 41st Symposium on Theory of Computing (STOC), pp. 513–522 (2009)
Skopalik, A., Vöcking, B.: Inapproximability of pure Nash equilibria. In: Proceedings of the 40th Symposium on Theory of Computing (STOC), pp. 355–364 (2008)
Vöcking, B.: Selfish load balancing. In: Nisan, N., Tardos, É., Roughgarden, T., Vazirani, V. (eds.) Algorithmic Game Theory, chapter 20. Cambridge University Press, Cambridge (2007)
Weibull, J.: Evolutionary Game Theory. MIT Press, Cambridge (1995)
Additional information
An extended abstract of this work appeared in the Proceedings of the 28th Symposium on Principles of Distributed Computing (PODC 2009). This work was in part supported by DFG through the Cluster of Excellence MMCI and the UMIC Research Centre at RWTH Aachen University, and by an NSERC grant. Part of this work was done while the authors were at RWTH Aachen University.
Appendix
Throughout the technical part of this paper, we apply the following Chernoff bounds.
Fact 7
(Chernoff, see [24]) Let \(X\) be a sum of independent Bernoulli variables. Then \( \mathbb {P}\left[ X \ge k\cdot \mathbb {E}\left[ X\right] \right] \le \mathrm {e}^{-\mathbb {E}\left[ X\right] \,k\cdot (\ln k - 1)} \) and, for \(k\ge 4 > \mathrm {e}^{4/3}\), \( \mathbb {P}\left[ X \ge k\cdot \mathbb {E}\left[ X\right] \right] \le \mathrm {e}^{-\frac{1}{4}\,\mathbb {E}\left[ X\right] \,k\,\ln k} \). Equivalently, for \(k\ge 4\,\mathbb {E}\left[ X\right] \), \( \mathbb {P}\left[ X \ge k\right] \le \mathrm {e}^{-\frac{1}{4}\,k\,\ln (k/\mathbb {E}\left[ X\right] )} \).
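As a quick numerical illustration (not part of the original paper), the following Python snippet compares the exact tail of a binomial distribution with the third bound in Fact 7; the parameters \(n\), \(p\), and \(k\) are arbitrary choices.

```python
import math

# Illustrative check of Fact 7: exact binomial tail P[X >= k*E[X]] versus the
# bound exp(-(1/4) * E[X] * k * ln k) for k >= 4. Parameters are arbitrary.
def binomial_upper_tail(n: int, p: float, threshold: float) -> float:
    """Exact P[X >= threshold] for X ~ Binomial(n, p)."""
    t = math.ceil(threshold)
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t, n + 1))

n, p, k = 200, 0.05, 4
mean = n * p                                   # E[X] = 10
tail = binomial_upper_tail(n, p, k * mean)
bound = math.exp(-0.25 * mean * k * math.log(k))
print(f"P[X >= {k} E[X]] = {tail:.3e} <= bound = {bound:.3e}")
```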
The following fact yields a linear approximation of the exponential function.
Fact 8
For any \(r>0\) and \(x\in [0,r]\), it holds that \((\mathrm {e}^{x} - 1) \le x\cdot \frac{\mathrm {e}^{r}-1}{r}\).
Proof
The function \(\mathrm {e}^{x}-1\) is convex and passes through the points \((0,0)\) and \((r,\mathrm {e}^{r}-1)\). By convexity, on \([0,r]\) it lies below the chord connecting these two points, and this chord is exactly the function \(x\cdot \frac{\mathrm {e}^{r}-1}{r}\). \(\square \)
Fact 9
It holds that
Proof
We have
\(\square \)
Fact 10
It holds that
Proof
We have
\(\square \)
Fact 11
For every \(c \in (0,1)\), it holds that
Fact 12
(Jensen’s Inequality) Let \(f :\mathbb {R}\rightarrow \mathbb {R}\) be a convex function, and let \(a_1,\ldots ,a_k,x_1,\ldots ,x_k \in \mathbb {R}\). Then
If \(f(x) = x^2\), then
Lemma 7
Let \(X_0,X_1,\ldots \) denote a sequence of non-negative random variables and assume that for all \(i\ge 0\)
and let \(\tau \) denote the first time \(t\) such that \(X_t=0\). Then,
The proof follows, e.g., from standard martingale arguments in combination with the optional stopping theorem and is omitted here.
Lemma 8
Let \(X_0,X_1,\ldots \) denote a sequence of non-negative random variables and assume that, for all \(i\ge 1\), \(\mathbb {E}\left[ X_i \mid X_{i-1} = x_{i-1}\right] \le x_{i-1} \cdot \alpha \) for some constant \(\alpha \in (0,1)\). Furthermore, fix some constant \(x^*\in (0,x_0]\) and let \(\tau \) be the random variable that describes the smallest \(t\) such that \(X_t \le x^*\). Then \(\mathbb {E}\left[ \tau \right] \le \frac{4\,(\ln (x_0) - \ln (x^*/2))}{1-\alpha }\).
Proof
Let us define \(\gamma = \frac{1}{1-\alpha }\) and an auxiliary random variable \(Y_t\) by \(Y_0 := X_{0}\), and, for any round \(t \ge 1\),
Then, for any \(t \ge 1\), it follows
We have, for \(\kappa = \gamma \cdot (\ln (x_0) - \ln (x^*/2))\),
Hence by Markov’s inequality,
We consider two cases.
Case 1: For all time steps \(t \in [0,\ldots ,\kappa ]\), we have \(Y_{t} = X_{t}\). Then, as seen above, \(X_{\kappa } \le x^*\) with probability at least 1/2.
Case 2: There exists a step \(t \in [1,\ldots ,\kappa ]\) such that \(Y_{t} \ne X_{t}\). Let \(t\) be the smallest time step with this property. Hence, \(Y_{t} \ne X_{t}\), but \(Y_{t-1} = X_{t-1}\). If \(Y_{t-1}=0\), then \(X_{t-1} = 0\). If \(Y_{t-1} \ne 0\), then, by the definition of \(Y_{t}\),
In all cases we have shown that, with probability at least 1/2, there exists a step \(t \in [0,\kappa ]\) such that \(X_{t} \le x^*\). If such a step does not exist, we simply repeat the analysis for the next \(\kappa \) steps. The probability that we still have not observed such a step decreases exponentially in the number of restarts: in expectation, at most \(\sum _{k=1}^{\infty } k/2^{k-1} = 4\) phases of \(\kappa \) steps are needed. Thus, \(\mathbb {E}\left[ \tau \right] \le 4\kappa \). This completes the proof of the lemma. \(\square \)
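To illustrate the statement of Lemma 8, the following Python sketch (not from the paper) simulates a hypothetical process satisfying the drift condition with \(\alpha = 3/4\) and compares the empirical average hitting time with the bound \(4\kappa \) from the proof; the multiplicative-noise process is an assumption made purely for the example.

```python
import math
import random

# Illustration of Lemma 8: a hypothetical process with
# E[X_t | X_{t-1}] = alpha * X_{t-1} is simulated, and the empirical average of
# the hitting time tau (first t with X_t <= x*) is compared with 4*kappa,
# where kappa = (ln(x_0) - ln(x*/2)) / (1 - alpha).
def hitting_time(x0: float, x_star: float, alpha: float) -> int:
    x, t = x0, 0
    while x > x_star:
        x *= random.uniform(0.0, 2.0 * alpha)   # multiplicative noise, mean alpha
        t += 1
    return t

x0, x_star, alpha = 1000.0, 1.0, 0.75
runs = 10_000
average = sum(hitting_time(x0, x_star, alpha) for _ in range(runs)) / runs
kappa = (math.log(x0) - math.log(x_star / 2)) / (1 - alpha)
print(f"average hitting time = {average:.1f}, bound 4*kappa = {4 * kappa:.1f}")
```

As expected, the empirical average stays well below the worst-case bound, since the bound of Lemma 8 only uses the drift condition and Markov's inequality.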