Error-Bounded Online Trajectory Simplification with Multi-Agent Reinforcement Learning

Published: 14 August 2021

Abstract

Trajectory data has been widely used in various applications, including taxi services, traffic management, and mobility analysis. It is usually collected on the sensor side in real time and corresponds to a sequence of sampled points. Constrained by the storage and/or network bandwidth of a sensor, it is common to simplify raw trajectory data as it is collected by dropping some of the sampled points. Many algorithms have been proposed for the error-bounded online trajectory simplification (EB-OTS) problem, which is to drop as many points as possible subject to the constraint that the resulting error is bounded by a given error tolerance. Nevertheless, these existing algorithms rely on pre-defined rules for decision making during the simplification process, and there is no theoretical ground supporting their effectiveness. In this paper, we propose a multi-agent reinforcement learning method called MARL4TS for EB-OTS. MARL4TS involves two agents for different decision-making problems that arise during the simplification process. In addition, the objective of MARL4TS is equivalent to that of the EB-OTS problem, which provides theoretical grounding for its effectiveness. We conduct extensive experiments on real-world trajectory datasets, which verify that MARL4TS outperforms all existing algorithms in terms of effectiveness and provides competitive efficiency.
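
For concreteness, below is a minimal Python sketch of the EB-OTS setting described above: an online simplifier that receives points one at a time and decides whether each incoming point can be dropped while keeping the error within a tolerance. The synchronized Euclidean distance error measure, the opening-window-style keep/drop rule, and the names sed_error and simplify_online are assumptions made for illustration; the rule-based decision shown here is the kind of pre-defined heuristic the paper argues against, not the MARL4TS method itself, which instead lets learned agents make these decisions.

import math

# NOTE: illustrative sketch of the EB-OTS setting only, not the MARL4TS method.
# The error measure (synchronized Euclidean distance, SED) and the hand-crafted
# keep/drop rule below are assumptions for exposition; MARL4TS replaces this
# kind of pre-defined rule with decisions made by learned agents.

def sed_error(anchor, candidate, point):
    """SED of `point` (x, y, t) from the segment anchor -> candidate,
    interpolating the segment position at `point`'s timestamp."""
    (x1, y1, t1), (x2, y2, t2) = anchor, candidate
    x, y, t = point
    if t2 == t1:
        return math.hypot(x - x1, y - y1)
    r = (t - t1) / (t2 - t1)                       # temporal interpolation ratio
    xs = x1 + r * (x2 - x1)                        # synchronized x on the segment
    ys = y1 + r * (y2 - y1)                        # synchronized y on the segment
    return math.hypot(x - xs, y - ys)

def simplify_online(stream, epsilon):
    """One-pass, opening-window-style simplification: a point is dropped as
    long as every dropped point stays within `epsilon` of the open segment."""
    kept = []
    buffer = []                                    # points tentatively dropped
    anchor = None                                  # start of the open segment
    for p in stream:                               # points arrive one at a time
        if anchor is None:
            anchor = p
            kept.append(p)                         # always keep the first point
            continue
        # Decision point: can p be dropped, i.e. does the segment anchor -> p
        # still bound the error of all previously dropped points?
        if all(sed_error(anchor, p, q) <= epsilon for q in buffer):
            buffer.append(p)
        else:
            # buffer is non-empty here: an empty buffer always passes the check.
            last = buffer[-1]                      # close the segment at the
            kept.append(last)                      # last point known to be safe
            anchor, buffer = last, [p]
    if buffer:
        kept.append(buffer[-1])                    # always keep the final point
    return kept

# Toy usage: points are (x, y, t) tuples, epsilon is the error tolerance.
traj = [(0, 0, 0), (1, 0.1, 1), (2, -0.1, 2), (3, 2.0, 3), (4, 2.1, 4)]
print(simplify_online(traj, epsilon=0.5))          # keeps 4 of the 5 points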

Supplementary Material

MP4 File (errorbounded_online_trajectory_simplification_with-zheng_wang-cheng_long-38957925-vzAt.mp4)
Presentation video



      Information

      Published In

      KDD '21: Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining
      August 2021
      4259 pages
      ISBN:9781450383325
      DOI:10.1145/3447548

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. multi-agent reinforcement learning
      2. online trajectory simplification
      3. trajectory data

      Qualifiers

      • Research-article

      Conference

      KDD '21

      Acceptance Rates

      Overall Acceptance Rate 1,133 of 8,635 submissions, 13%

      Cited By

      • (2024) An Efficient and Distributed Framework for Real-Time Trajectory Stream Clustering. IEEE Transactions on Knowledge and Data Engineering, 36(5), 1857-1873. DOI: 10.1109/TKDE.2023.3312319. Online publication date: May 2024.
      • (2024) Deep Reinforcement Learning for Efficient IoT Data Compression in Smart Railroad Management. IEEE Internet of Things Journal, 11(15), 25494-25504. DOI: 10.1109/JIOT.2023.3348487. Online publication date: 1 August 2024.
      • (2024) Collectively Simplifying Trajectories in a Database: A Query Accuracy Driven Approach. 2024 IEEE 40th International Conference on Data Engineering (ICDE), 4383-4395. DOI: 10.1109/ICDE60146.2024.00334. Online publication date: 13 May 2024.
      • (2024) A survey on multi-agent reinforcement learning and its application. Journal of Automation and Intelligence, 3(2), 73-91. DOI: 10.1016/j.jai.2024.02.003. Online publication date: June 2024.
      • (2024) Safety: A spatial and feature mixed outlier detection method for big trajectory data. Information Processing & Management, 61(3), 103679. DOI: 10.1016/j.ipm.2024.103679. Online publication date: May 2024.
      • (2023) Batch Simplification Algorithm for Trajectories over Road Networks. ISPRS International Journal of Geo-Information, 12(10), 399. DOI: 10.3390/ijgi12100399. Online publication date: 30 September 2023.
      • (2023) A trajectory simplification algorithm based on motion trend and variable speed characteristics. International Conference on Images, Signals, and Computing (ICISC 2023). DOI: 10.1117/12.2691797. Online publication date: 21 August 2023.
      • (2023) A Lightweight Framework for Fast Trajectory Simplification. 2023 IEEE 39th International Conference on Data Engineering (ICDE), 2386-2399. DOI: 10.1109/ICDE55515.2023.00184. Online publication date: April 2023.
      • (2023) Online Anomalous Subtrajectory Detection on Road Networks with Deep Reinforcement Learning. 2023 IEEE 39th International Conference on Data Engineering (ICDE), 246-258. DOI: 10.1109/ICDE55515.2023.00026. Online publication date: April 2023.
