
Coordinating randomized policies for increasing security of agent systems

Published in: Information Technology and Management

Abstract

We consider the problem of providing decision support to a patrolling or security service in an adversarial domain. The goal is to create patrols that achieve high coverage or reward while accounting for the presence of an adversary. We assume that the adversary can observe or learn the patrolling strategy and exploit this knowledge. We follow two different approaches, depending on what is known about the adversary. First, when there is no information about the adversary, we represent patrols as a Markov Decision Process (MDP) and identify randomized solutions that minimize the information available to the adversary; this leads to the algorithms CRLP and BRLP for policy randomization of MDPs. Second, when there is partial information about the adversary, we compute efficient patrols by solving Bayesian Stackelberg games, in which the leader first commits to a patrolling strategy and an adversary, drawn from possibly many adversary types, then selects its best response to that patrol. We provide two efficient mixed-integer programming (MIP) formulations, DOBSS and ASAP, to solve this NP-hard problem. Our experimental results show the efficiency of these algorithms and illustrate how these techniques yield optimal, secure patrolling policies. These models have been applied in practice: DOBSS is at the heart of the ARMOR system currently deployed at Los Angeles International Airport (LAX) to randomize checkpoints on the roadways entering the airport and canine patrol routes within the airport terminals.
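To make the Stackelberg commitment idea concrete, the sketch below solves a toy two-target patrolling game by brute force. The payoff numbers and the grid-search solver are illustrative assumptions, not the paper's DOBSS or ASAP formulations (which handle Bayesian adversary types via MIPs); this only shows the core mechanic the abstract describes: the defender commits to a randomized patrol, the adversary observes it and best-responds, and the optimal commitment equalizes the adversary's options rather than patrolling deterministically.

```python
from fractions import Fraction

# Hypothetical 2-target game (payoffs are illustrative, not from the paper).
# The defender patrols target A with probability p and target B with 1 - p;
# target A is the higher-value target. Payoffs indexed (patrolled, attacked).
DEF = {('A', 'A'): 1, ('A', 'B'): -1, ('B', 'A'): -2, ('B', 'B'): 1}
ATT = {('A', 'A'): -1, ('A', 'B'): 1, ('B', 'A'): 2, ('B', 'B'): -1}

def best_commitment(steps=100):
    """Grid search over the leader's mixed strategy p.

    For each candidate p, the follower (who observes p) attacks the
    target maximizing its own expected payoff; ties are broken in the
    leader's favor (strong Stackelberg equilibrium). Returns the best
    p and the leader's expected payoff there, as exact rationals.
    """
    best_p, best_val = None, None
    for i in range(steps + 1):
        p = Fraction(i, steps)
        mix = {'A': p, 'B': 1 - p}
        # Follower's expected payoff for each pure attack choice.
        att = {t: sum(mix[s] * ATT[(s, t)] for s in 'AB') for t in 'AB'}
        top = max(att.values())
        # Among the follower's best responses, take the leader-best one.
        val = max(sum(mix[s] * DEF[(s, t)] for s in 'AB')
                  for t in 'AB' if att[t] == top)
        if best_val is None or val > best_val:
            best_p, best_val = p, val
    return best_p, best_val

p, v = best_commitment()
print(p, v)  # 3/5 -1/5: patrol the high-value target 60% of the time
```

Any deterministic patrol would hand the adversary a guaranteed successful attack on the uncovered target; the optimal commitment here randomizes so that both attack options look equally (un)attractive, which is exactly why the observable patrols in the abstract must be randomized.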


Notes

  1. The ARMOR software has been developed in close interaction with the LAWA (Los Angeles World Airports) police and has been in use at LAX since August 2007.


Acknowledgments

This research was supported by the United States Department of Homeland Security through the Center for Risk and Economic Analysis of Terrorism Events (CREATE). This work was also supported in part by NSF grant no. IIS-0705587 and by the ISF.

Author information


Corresponding author

Correspondence to Fernando Ordóñez.


Cite this article

Paruchuri, P., Pearce, J.P., Marecki, J. et al. Coordinating randomized policies for increasing security of agent systems. Inf Technol Manag 10, 67–79 (2009). https://doi.org/10.1007/s10799-008-0047-9
