Influence Maximization in Dynamic Networks Using Reinforcement Learning

Published: 08 January 2024

Abstract

    Influence maximization (IM) has been widely studied in recent decades, aiming to maximize the spread of influence over networks. Although many works address static networks, far fewer studies tackle the IM problem for dynamic networks, which poses additional challenges: an IM method for such an environment should account for the network's dynamics and perform well across different network structures. Meeting these requirements demands more computation, so an IM approach must also be efficient enough to remain applicable as the network structure continually changes. This research proposes an IM method for dynamic networks based on deep Q-learning (DQL). To learn dynamic features from the network while retaining previously learned information, incremental and transfer learning are applied. Experiments substantiate the strong performance of the DQL methods and their superiority over the compared methods on the larger tested synthetic and real-world networks, with the incremental and transfer learning variants performing better on real-world networks.
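    The overall pipeline described above (learn a Q-function over the graph, then greedily select seed nodes with it) can be sketched in a few lines. This is only an illustrative sketch, not the paper's implementation: `q_value` is a hypothetical stand-in for a trained deep Q-network (here replaced by a degree heuristic), and the spread estimator is a plain Monte Carlo simulation of the independent cascade model.

```python
import random

def simulate_spread(adj, seeds, p=0.1, trials=200, seed=0):
    """Monte Carlo estimate of expected spread under the independent
    cascade model: each newly activated node tries once to activate
    each inactive neighbour with probability p."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        active = set(seeds)
        frontier = list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in adj.get(u, ()):
                    if v not in active and rng.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / trials

def greedy_q_seeds(adj, k, q_value):
    """Pick k seeds sequentially, each time taking the node whose
    estimated Q-value (expected marginal influence gain given the
    current seed set) is highest."""
    seeds = set()
    for _ in range(k):
        best = max((v for v in adj if v not in seeds),
                   key=lambda v: q_value(seeds, v))
        seeds.add(best)
    return seeds

# Toy graph; a degree heuristic stands in for a trained DQN.
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0, 4], 4: [3]}
seeds = greedy_q_seeds(adj, 1, lambda s, v: len(adj[v]))
print(seeds, simulate_spread(adj, seeds, p=1.0, trials=1))  # → {0} 5.0
```

    In the dynamic setting studied by the paper, the Q-function would be re-estimated (incrementally, or via transfer from an earlier snapshot) each time the network structure changes, rather than retrained from scratch.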



      Published In

      SN Computer Science  Volume 5, Issue 1
      Nov 2023
      2935 pages

      Publisher

      Springer-Verlag

      Berlin, Heidelberg

      Publication History

      Published: 08 January 2024
      Accepted: 25 October 2023
      Received: 23 March 2022

      Author Tags

      1. Influence maximization
      2. Dynamic networks
      3. Reinforcement learning
      4. Deep Q-learning
      5. Incremental learning
      6. Transfer learning

      Qualifiers

      • Research-article

      Funding Sources

      • European Union’s research and innovation program
      • RISE Academy of UCAJEDI Investments by the French National Research Agency (ANR)
      • The project of Inria - Nokia Bell Labs “Distributed Learning and Control for Network Analysis”
      • University of Klagenfurt
