Hindsight Optimization for Hybrid State and Action MDPs

Authors

  • Aswin Raghavan, Oregon State University
  • Scott Sanner, University of Toronto
  • Roni Khardon, Tufts University
  • Prasad Tadepalli, Oregon State University
  • Alan Fern, Oregon State University

DOI:

https://doi.org/10.1609/aaai.v31i1.11056

Keywords:

Markov Decision Processes, Factored MDP, Hybrid MDP, Hindsight Optimization, Planning under Uncertainty

Abstract

Hybrid (mixed discrete and continuous) state and action Markov Decision Processes (HSA-MDPs) provide an expressive formalism for modeling stochastic and concurrent sequential decision-making problems. Existing solvers for HSA-MDPs are either limited to very restricted transition distributions, require knowledge of domain-specific basis functions to achieve good approximations, or do not scale. We explore a domain-independent approach based on the framework of hindsight optimization (HOP) for HSA-MDPs, which uses an upper bound on the finite-horizon action values for action selection. Our main contribution is a linear time reduction to a Mixed Integer Linear Program (MILP) that encodes the HOP objective, when the dynamics are specified as location-scale probability distributions parametrized by Piecewise Linear (PWL) functions of states and actions. In addition, we show how to use the same machinery to select actions based on a lower bound generated by straight-line plans. Our empirical results show that the HSA-HOP approach effectively scales to high-dimensional problems and outperforms baselines capable of scaling to such large hybrid MDPs.
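To make the HOP idea concrete: the upper bound comes from swapping the order of expectation and maximization, averaging over sampled determinized futures the value of the best plan for each future. The paper solves each determinized problem via its MILP encoding; the minimal Python sketch below instead uses brute-force search over a toy one-dimensional hybrid problem purely for illustration. All names, the dynamics, and the reward here are illustrative assumptions, not taken from the paper.

```python
import itertools
import random

H = 3                    # planning horizon (toy setting)
ACTIONS = [0.0, 1.0]     # toy discrete action set over a continuous state

def sample_future():
    """One determinized future: all exogenous noise fixed in advance."""
    return [random.expovariate(1.0) for _ in range(H)]  # e.g., random demands

def step(x, a, noise):
    """Toy hybrid dynamics: continuous state, additive action and noise."""
    x_next = x + a - noise
    reward = -abs(x_next)  # keep the level near zero
    return x_next, reward

def best_hindsight_value(x0, first_action, future):
    """Optimal value of the deterministic problem induced by `future`,
    with the first action fixed. (Brute force here; the paper encodes
    this maximization as a MILP for PWL location-scale dynamics.)"""
    best = float("-inf")
    for tail in itertools.product(ACTIONS, repeat=H - 1):
        x, total = x0, 0.0
        for a, d in zip((first_action,) + tail, future):
            x, r = step(x, a, d)
            total += r
        best = max(best, total)
    return best

def hop_q(x0, a, futures):
    """Monte-Carlo HOP estimate: expectation over futures of the
    max-over-plans value, an upper bound on the true Q-value."""
    return sum(best_hindsight_value(x0, a, f) for f in futures) / len(futures)

if __name__ == "__main__":
    random.seed(0)
    # Share the same sampled futures across actions (common random numbers)
    # so the HOP estimates are comparable for greedy action selection.
    futures = [sample_future() for _ in range(200)]
    x0 = 0.5
    for a in ACTIONS:
        print(f"action {a}: HOP value ~ {hop_q(x0, a, futures):.3f}")
```

Greedy action selection then picks the action with the highest HOP estimate at each step and replans; the lower-bound variant mentioned in the abstract would instead fix a single straight-line plan across all futures rather than optimizing each future separately.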

Published

2017-02-12

How to Cite

Raghavan, A., Sanner, S., Khardon, R., Tadepalli, P., & Fern, A. (2017). Hindsight Optimization for Hybrid State and Action MDPs. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.11056

Section

AAAI Technical Track: Reasoning under Uncertainty