A review of methods to model route choice behavior of bicyclists: inverse reinforcement learning in spatial context and recursive logit

Published: 03 November 2020 Publication History

Abstract

Recursive logit, used for route choice modeling by the transportation research community, is a form of inverse reinforcement learning (IRL): the problem of learning an agent's objective by observing its behavior. By solving a large-scale system of linear equations, recursive logit estimates an optimal (negative) reward function in a computationally efficient way that scales to large networks and large numbers of observations. In this paper we review examples of IRL models applied to real-world travel trajectories and examine some of the challenges of using recursive logit to model bicycle route choice in the city center of Amsterdam.
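The "large-scale system of linear equations" the abstract refers to can be sketched on a toy example. In the link-based recursive logit model, the expected downstream utility V(k) of each state satisfies exp(V(k)) = Σ_a exp(v(a|k) + V(a)); substituting z = exp(V) turns this recursion into the linear system (I − M)z = b. The sketch below is illustrative only: the network, node labels, and deterministic utilities v(a|k) are invented, and it assumes the scale parameter is fixed at 1.

```python
import numpy as np

# Toy network: 4 nodes, destination d = 3. Entries of v are deterministic
# link utilities v(a|k) (negative travel costs); -inf marks "no edge".
# All numbers here are invented for illustration.
n, d = 4, 3
v = np.full((n, n), -np.inf)
v[0, 1] = -1.0
v[0, 2] = -1.5
v[1, 2] = -0.5
v[1, 3] = -2.0
v[2, 3] = -1.0

# M[k, a] = exp(v(a|k)) where an edge exists, 0 elsewhere.
M = np.where(np.isfinite(v), np.exp(v), 0.0)
M[d, :] = 0.0          # the destination is absorbing

b = np.zeros(n)
b[d] = 1.0             # boundary condition z(d) = 1, i.e. V(d) = 0

# Solve (I - M) z = b once, instead of iterating the Bellman recursion,
# then recover the value function V(k) = ln z(k).
z = np.linalg.solve(np.eye(n) - M, b)
V = np.log(z)

# Link choice probabilities: P(a | k) = exp(v(a|k) + V(a) - V(k)).
P = np.exp(v + V[None, :] - V[:, None]) * np.isfinite(v)
```

Each row of P sums to one for states with outgoing links, so the model yields link-by-link choice probabilities directly, without enumerating a choice set of paths; this is what makes the approach tractable for large networks.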


Cited By

  • Taste variation in environmental features of bicycle routes. In Proceedings of the 14th ACM SIGSPATIAL International Workshop on Computational Transportation Science (2021), 1--10. DOI: 10.1145/3486629.3490697. Online publication date: 2-Nov-2021.
  • The 3rd ACM SIGSPATIAL International Workshop on Geospatial Simulation. SIGSPATIAL Special 12, 3 (2021), 11--14. DOI: 10.1145/3447994.3448000. Online publication date: 25-Jan-2021.

      Published In

      GeoSim '20: Proceedings of the 3rd ACM SIGSPATIAL International Workshop on GeoSpatial Simulation
      November 2020
      70 pages
      ISBN:9781450381611
      DOI:10.1145/3423335

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Author Tags

      1. GPS trajectory
      2. bicycle route behavior
      3. dynamic discrete choice
      4. dynamic programming
      5. inverse reinforcement learning
      6. markov decision process
      7. maximum entropy
      8. recursive logit
      9. route choice modeling

      Qualifiers

      • Research-article

      Conference

SIGSPATIAL '20

      Acceptance Rates

      GeoSim '20 Paper Acceptance Rate 9 of 14 submissions, 64%;
      Overall Acceptance Rate 16 of 24 submissions, 67%

