An Application of Inverse Reinforcement Learning To Medical Records of Diabetes Treatment
1 Introduction
The process of medical treatment can be considered an interaction between doctors and patients. Doctors observe the state of a patient through various types of examination and accordingly choose an appropriate treatment, which in turn changes the patient's state. In particular, lifestyle-related chronic diseases such as diabetes mellitus, hypertension, hyperlipidemia, and vascular disorders progress slowly, and the interaction can last for several years.
Conventional statistical analysis of medical data mainly evaluates the impact of a single treatment or a single factor on the target events. Analyzing records of such long-term treatment to evaluate the costs and the benefits for the patient's quality of life remains under investigation.
2 Markov Decision Processes and Inverse Reinforcement Learning
A Markov decision process (MDP) is a common probabilistic generative model of time-series data with sequential decisions and rewards [7-9]. A stationary MDP is composed of four elements: S, A, T, and R. S is a set of states, A is a set of actions, T : S × A × S → [0, 1] is the set of state-transition probabilities, and R : S × A × S × ℝ → [0, 1] is the set of probabilities of receiving an immediate reward value for a state transition.
For each state s ∈ S, the expectation of the cumulative discounted reward under a policy π can be computed as

$$V^{\pi}(s) = E\left[\, \sum_{t=1}^{\infty} \gamma^{t} r_t \;\middle|\; s_0 = s, \pi \right],$$

where γ is the discount factor and r_t is the reward received at time t.
For a given MDP and policy π, we can evaluate the state values V^π(s) and the action values Q^π(s, a). We can also solve the MDP to obtain the optimal policy and evaluate the state and action values under the optimal policy using various types of RL algorithms.
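As an illustration of the evaluation step, the following is a minimal R sketch that computes V^π for a fixed deterministic policy by solving the linear system (I - gamma * P_pi) V = R_pi directly; it assumes a state-dependent reward vector and is our own example, not the authors' implementation.

```r
# Minimal sketch: exact policy evaluation for a finite MDP.
# T  : |S| x |A| x |S| array of transition probabilities
# R  : length-|S| reward vector (reward assumed to depend only on the state)
# pi : integer vector giving the action chosen in each state
evaluate_policy <- function(T, R, pi, gamma = 0.95) {
  nS <- dim(T)[1]
  # Transition matrix induced by the policy: P_pi[s, s'] = T[s, pi[s], s']
  P_pi <- t(sapply(seq_len(nS), function(s) T[s, pi[s], ]))
  # Solve (I - gamma * P_pi) V = R for the state values V^pi
  solve(diag(nS) - gamma * P_pi, R)
}
```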
The problem of IRL can be formulated as the problem of estimating the reward function of an expert agent who behaves optimally in an environment under that reward function [2]. Several IRL algorithms have been proposed so far. Ng and Russell proposed an algorithm using linear programming [2]. Abbeel and Ng used quadratic programming; they used the estimated reward function to mimic the behavior of the expert agent and called the learning process apprenticeship learning [3]. Ramachandran and Amir formulated the IRL problem in a Bayesian framework and proposed an algorithm using Markov chain Monte Carlo sampling [4]. Rothkopf and Dimitrakakis reformulated Bayesian IRL within the framework of preference elicitation [5], and also extended their method to the multitask-learning setting [6].
Among them, we exploited the Bayesian IRL algorithm proposed in [4] because of the simplicity of the method. In their formulation, based on the Markov assumption, the probability of generating a sequence of observations and actions O = {(s_1, a_1), (s_2, a_2), ..., (s_k, a_k)} under the reward function R is decomposed as

$$\Pr(O \mid R) = \prod_{i=1}^{k} \Pr((s_i, a_i) \mid R).$$
It is also assumed that the experts decide their actions according to the following Boltzmann distribution, derived from the action-value function Q*(s, a, R) under the optimal policy for R:

$$\Pr((s_i, a_i) \mid R) = \frac{1}{C_i} \exp\{\beta\, Q^{*}(s_i, a_i, R)\}.$$
Here β is a parameter that controls the variance of the experts' actions; when β is large, the variance becomes small. C_i is the normalization constant,

$$C_i = \sum_{s \in S,\, a \in A} \exp\{\beta\, Q^{*}(s, a, R)\}.$$
Combining this likelihood with a prior distribution Pr(R), the posterior probability of the reward function R given the observations O becomes

$$\Pr(R \mid O) = \frac{1}{Z} \exp\Bigl\{\beta \sum_{i} Q^{*}(s_i, a_i, R)\Bigr\} \Pr(R),$$

where Z is the normalization constant.
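To make the formulation concrete, the following R sketch evaluates the unnormalized log-posterior of a candidate reward vector. The function q_star (which must solve the MDP for the candidate reward and return the matrix Q*(s, a, R)), the observation format, and all names here are our own assumptions, not part of the original implementation.

```r
# Sketch: unnormalized log-posterior of a candidate reward vector R_vec.
# obs    : two-column matrix of observed (state, action) index pairs
# q_star : function returning the |S| x |A| optimal action-value matrix
#          Q*(s, a, R_vec) for the candidate reward
log_posterior <- function(R_vec, obs, q_star, beta,
                          log_prior = function(R) 0) {
  Q <- q_star(R_vec)
  logC <- log(sum(exp(beta * Q)))        # normalization constant of the paper
  loglik <- sum(beta * Q[obs] - logC)    # sum of log Pr((s_i, a_i) | R)
  loglik + log_prior(R_vec)
}
```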
3 Data
The data used in this study were extracted from the medical records of heart-disease patients cared for at the University of Tokyo Hospital. After obtaining approval for our research plan from the Institutional Review Board at the University of Tokyo and the National Institute of Advanced Industrial Science and Technology, we prepared the data for this study. First, we anonymized the data and then extracted the records to be analyzed. We focused on patients who periodically attended the hospital and underwent examinations and pharmaceutical treatments. The data recorded after January 1, 2000 were used.
In this study, we focus exclusively on the data related to diabetes, because diabetes is a typical chronic disease that frequently generates long-term treatment records. In addition, the disease is a very important factor in complications leading to heart disease, and thus it is worth analyzing.
Among the examinations in the data, we focused on the value of hemoglobin A1c (HbA1c) to simplify the analysis. The HbA1c value was discretized into three levels (normal, medium, and severe) according to two threshold values (6.0 and 8.0) suggested by a domain expert.
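A minimal sketch of this discretization in R follows; the function name, the level labels as strings, and the choice of which side of each threshold falls into which level are our assumptions.

```r
# Discretize raw HbA1c values (in %) into the three state levels, using the
# thresholds 6.0 and 8.0 suggested by the domain expert.
discretize_hba1c <- function(hba1c) {
  cut(hba1c, breaks = c(-Inf, 6.0, 8.0, Inf),
      labels = c("normal", "medium", "severe"), right = FALSE)
}

discretize_hba1c(c(5.4, 7.2, 9.1))  # -> normal, medium, severe
```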
For pharmaceutical treatments, we grouped the drugs according to their functions and identified the patterns of combinations of drug groups prescribed simultaneously. The number of identified combination patterns appearing in the data was 38. The names of the drug groups and their abbreviations are shown in Table 1.
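One simple way to build such combination labels, assuming a prescriptions table with one row per prescribed drug group per visit (the column names and labelling scheme are ours), is sketched below.

```r
# Sketch: label each visit with the combination of drug groups prescribed
# simultaneously, e.g. a visit prescribing both BG and SU becomes "BG+SU".
# prescriptions: data frame with columns visit_id and group.
action_patterns <- function(prescriptions) {
  tapply(prescriptions$group, prescriptions$visit_id,
         function(g) paste(sort(unique(g)), collapse = "+"))
}
```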
An event in a sequence is a visit of a patient to the hospital, i.e., a combination of an examination and a pharmaceutical treatment. Sometimes no drug is prescribed. The intervals between successive events are mostly stable (about 30 days) but sometimes vary. To collect periodic visit sequences, we cut a sequence at any point where the interval between successive visits exceeded 75 days. We then collected the sequences that were longer than 24 visits (about 2 years). The total number of extracted sequences was 801. The characteristics of the data are summarized in Table 2.
Table 2. Characteristics of the extracted data.

Property                       Value
Number of episodes             801
Total number of events         32,198
Maximum length of an episode   124
Minimum length of an episode   25
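The episode extraction described above can be sketched in R as follows; the input format (a vector of visit dates for one patient) and the function name are our assumptions.

```r
# Sketch: split a patient's visit dates at gaps longer than 75 days and
# keep the resulting segments that contain more than 24 visits.
extract_episodes <- function(visit_dates, max_gap = 75, min_visits = 24) {
  d <- sort(as.Date(visit_dates))
  gap <- c(0, diff(as.numeric(d)))       # days since the previous visit
  segment_id <- cumsum(gap > max_gap)    # start a new segment after a long gap
  episodes <- split(d, segment_id)
  Filter(function(ep) length(ep) > min_visits, episodes)
}
```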
4 Experiment
Using the episodes extracted from the records of diabetes treatment, we first estimated the state-transition probabilities T of the MDP and the policy of the doctors. The discretized HbA1c values constitute the state set S, and the combinations of medicines correspond to the action set A. In the probability estimation, we exploited Laplace smoothing to avoid problems caused by the sparseness of the data, and set the smoothing parameter to its default value of 1.
We then applied the IRL algorithm introduced in Section 2. Because we discretized the observation (state) values into three levels, the reward function R can be described by a three-dimensional vector r = (R_normal, R_medium, R_severe). We normalized the vector so that R_normal + R_medium + R_severe = 1. The remaining two parameters of the algorithm were set to 0.01 and 1, respectively. As the prior distribution for r, we used the uniform distribution over the simplex.
We implemented the whole program in the statistical computing language R. For the policy iteration step in the PolicyWalk algorithm, we used an R package for RL that we are currently developing.
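The sampling step can be sketched as follows. This is not the authors' package nor the exact grid-based PolicyWalk of [4]; it is a plain random-walk Metropolis sketch over the reward simplex that reuses the log_posterior function sketched in Section 2, with the reward vector parameterized by its first two components so that the sum-to-one constraint is respected.

```r
# Simplified sketch of the MCMC sampling over reward vectors on the simplex.
sample_rewards <- function(obs, q_star, beta, n_iter = 5000, step = 0.05) {
  r <- c(1, 1, 1) / 3                          # start at the simplex center
  lp <- log_posterior(r, obs, q_star, beta)
  samples <- matrix(NA_real_, n_iter, 3)
  for (i in seq_len(n_iter)) {
    prop12 <- r[1:2] + rnorm(2, sd = step)     # symmetric 2-D proposal
    prop <- c(prop12, 1 - sum(prop12))
    if (all(prop >= 0)) {                      # stay inside the simplex
      lp_prop <- log_posterior(prop, obs, q_star, beta)
      if (log(runif(1)) < lp_prop - lp) { r <- prop; lp <- lp_prop }
    }
    samples[i, ] <- r
  }
  samples
}
```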
As the result of the MCMC sampling, we obtained r = (0.01, 0.98, 0.01) with probability nearly equal to 1. Figure 1 shows a typical sampling sequence of R_medium. This means that the reward value for the medium state is the highest.
Fig. 1. A typical MCMC sampling sequence of R_medium (plotted against the sample index).

This result seems counter-intuitive, because the aim of treatment is usually considered to be moving the patient into the normal state. To confirm the result, we compared the log-likelihood values of the observations under the reward vectors (0.98, 0.1, 0.1), (0.1, 0.98, 0.1), and (0.1, 0.1, 0.98). They are -159878, -143568, and -162928, respectively. Hence, the reward vector of the obtained type attains the maximum likelihood.
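With the log_posterior sketch from Section 2 and a flat prior (so that it reduces to the log-likelihood), such a comparison might look like the following; obs_sa, q_star_hat, and the value of beta are placeholders rather than the actual quantities used in the experiment.

```r
# Compare the log-likelihoods of three candidate reward vectors.
candidates <- list(c(0.98, 0.1, 0.1), c(0.1, 0.98, 0.1), c(0.1, 0.1, 0.98))
sapply(candidates, log_posterior,
       obs = obs_sa, q_star = q_star_hat, beta = 1)  # beta is illustrative
```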
An interpretation of the result is that our data come from patients, many of whom were already in the medium state when they came to the hospital. The relative numbers of occurrences of the three states in the data are (0.178, 0.65, 0.172). Hence, keeping the patient's state at medium may be the best-effort target of the doctors. Other possible reasons for the unexpected result are as follows:
- the MDP model is too simple to model the decision-making process of doctors;
- the heterogeneity of doctors and patients is not properly considered.
We computed the optimal policy under the reward vector r = (0.01, 0.98, 0.01). The optimal actions for the three states are BG+SU, aGI+BG+SU+TDZ, and DPP4, respectively. We also evaluated the similarity between the doctors' policy and the optimal policy under the estimated reward function. The similarity is unfortunately not so high. This suggests that the assumption of a reward function depending purely on the patient's current state may be too simple.
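The text above does not specify how the similarity was measured; one simple possibility, given here only as an illustration, is the probability mass that the doctors' (stochastic) policy places on the optimal action, averaged over states and weighted by how often each state occurs in the data.

```r
# Sketch of one possible similarity measure between the doctors' policy and
# a deterministic optimal policy (the measure itself is our own choice).
# doctor_policy  : |S| x |A| matrix of estimated action probabilities
# optimal_action : integer vector of length |S| with the optimal action index
# state_freq     : relative frequency of each state in the data
policy_similarity <- function(doctor_policy, optimal_action, state_freq) {
  agree <- doctor_policy[cbind(seq_along(optimal_action), optimal_action)]
  sum(state_freq * agree)
}
```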
References
1. Asoh, H., Shiro, M., Akaho, S., Kamishima, T., Hasida, K., Aramaki, E., Kohro, T.: Modeling medical records of diabetes using Markov decision processes. In: Proceedings of the ICML 2013 Workshop on Role of Machine Learning for Transforming Healthcare (2013)
2. Ng, A.Y., Russell, S.: Algorithms for inverse reinforcement learning. In: Proceedings of the 17th International Conference on Machine Learning (ICML 2000), pp. 663-670 (2000)
3. Abbeel, P., Ng, A.Y.: Apprenticeship learning via inverse reinforcement learning. In: Proceedings of ICML 2004 (2004)
4. Ramachandran, D., Amir, E.: Bayesian inverse reinforcement learning. In: Proceedings of the 20th International Joint Conference on Artificial Intelligence (2007)
5. Rothkopf, C.A., Dimitrakakis, C.: Preference elicitation and inverse reinforcement learning. In: Proceedings of ECML-PKDD 2011 (2011)
6. Dimitrakakis, C., Rothkopf, C.A.: Bayesian multitask inverse reinforcement learning. arXiv:1106.3655v2 (2011)
7. Bellman, R.: A Markovian decision process. Journal of Mathematics and Mechanics 6(5), 679-684 (1957)
8. Puterman, M.L.: Markov Decision Processes. Wiley (1994)
9. Russell, S., Norvig, P.: Artificial Intelligence: A Modern Approach. 3rd edn. Pearson (2010)