Research article · Open access

SafeRoute: Learning to Navigate Streets Safely in an Urban Environment

Published: 27 September 2020

Abstract

Recent studies show that 85% of women have changed their travel routes to avoid harassment and assault. Despite this, current mapping tools do not empower users with the information they need to take charge of their personal safety. We propose SafeRoute, a novel solution to the problem of navigating cities while avoiding street harassment and crime. Unlike other street navigation applications, SafeRoute introduces a new type of path generation via deep reinforcement learning, which lets us optimize for multi-criteria path-finding and incorporate representation learning within our framework. Our agent learns to pick favorable streets, creating a safe and short path, guided by a reward function that incorporates both safety and efficiency. Given access to recent crime reports in many major cities, we train our model for experiments in Boston, New York, and San Francisco. We test our model on areas of these cities, specifically populated downtown regions with high foot traffic. We evaluate SafeRoute and improve over state-of-the-art methods by up to 17% in local average distance from crimes while decreasing path length by up to 7%.
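The abstract describes a reward function that trades off safety (distance from reported crimes) against efficiency (path length), but the paper's actual formulation is not given on this page. The following is a hypothetical, illustrative sketch of what such a per-step reward could look like; the function name, weights, and normalization constants are all assumptions, not the authors' method.

```python
def step_reward(edge_length_m, crime_distances_m,
                w_safety=0.5, w_efficiency=0.5):
    """Illustrative bi-criteria reward for choosing one street segment.

    Shorter edges and greater distance from reported crimes both score
    higher. All constants here are assumed for the sketch.
    """
    # Efficiency term: penalize long edges (normalized per 100 m of street).
    efficiency = -edge_length_m / 100.0

    # Safety term: mean distance to nearby crime reports, capped at 200 m
    # and scaled to [0, 1]; an empty report list counts as fully safe.
    if crime_distances_m:
        mean_dist = sum(crime_distances_m) / len(crime_distances_m)
        safety = min(mean_dist, 200.0) / 200.0
    else:
        safety = 1.0

    return w_safety * safety + w_efficiency * efficiency

# Example: a 120 m edge with crime reports 50 m and 150 m away.
r = step_reward(120.0, [50.0, 150.0])
```

In a policy-gradient setup such as REINFORCE (which the paper's citation of Williams 1992 suggests, though the exact training algorithm is not stated here), per-step rewards like this would be accumulated along the sampled path and used to weight the gradient of the action log-probabilities.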




Published In

ACM Transactions on Intelligent Systems and Technology, Volume 11, Issue 6
Survey Paper and Regular Paper
December 2020
237 pages
ISSN:2157-6904
EISSN:2157-6912
DOI:10.1145/3424135
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 27 September 2020
Accepted: 01 May 2020
Revised: 01 March 2020
Received: 01 January 2019
Published in TIST Volume 11, Issue 6


Author Tags

  1. Safe routing
  2. deep reinforcement learning
  3. multi-preference routing

Qualifiers

  • Research-article
  • Research
  • Refereed


Article Metrics

  • Downloads (last 12 months): 383
  • Downloads (last 6 weeks): 54
Reflects downloads up to 01 Jan 2025


Cited By

  • (2024)Understanding Pedestrians’ Perception of Safety and Safe Mobility PracticesProceedings of the 2024 CHI Conference on Human Factors in Computing Systems10.1145/3613904.3642896(1-17)Online publication date: 11-May-2024
  • (2023)Navigation in adversarial environments guided by PRA* and a local RL plannerProceedings of the Nineteenth AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment10.1609/aiide.v19i1.27530(343-351)Online publication date: 8-Oct-2023
  • (2023)MOOP: An Efficient Utility-Rich Route Planning Framework Over Two-Fold Time-Dependent Road NetworksIEEE Transactions on Emerging Topics in Computational Intelligence10.1109/TETCI.2023.32419307:5(1554-1570)Online publication date: Oct-2023
  • (2023)A reinforcement learning-based routing algorithm for large street networksInternational Journal of Geographical Information Science10.1080/13658816.2023.227997538:2(183-215)Online publication date: 11-Dec-2023
  • (2022)Risk-Aware Travel Path Planning Algorithm Based on Reinforcement Learning during COVID-19Sustainability10.3390/su14201336414:20(13364)Online publication date: 17-Oct-2022
  • (2022)cuRL: A Generic Framework for Bi-Criteria Optimum Path-Finding Based on Deep Reinforcement LearningIEEE Transactions on Intelligent Transportation Systems10.1109/TITS.2022.3219543(1-13)Online publication date: 2022
  • (2022)Travel Safe: A systematic review on Safe Route Guidance System2022 IEEE Conference on Interdisciplinary Approaches in Technology and Management for Social Innovation (IATMSI)10.1109/IATMSI56455.2022.10119408(1-6)Online publication date: 21-Dec-2022
  • (2022)Safe route-finding: A review of literature and future directionsAccident Analysis & Prevention10.1016/j.aap.2022.106816177(106816)Online publication date: Nov-2022
