Proceedings of the 2005 Winter Simulation Conference

M. E. Kuhl, N. M. Steiger, F. B. Armstrong, and J. A. Joines, eds.

SIMULATION OPTIMIZATION: A REVIEW, NEW DEVELOPMENTS, AND APPLICATIONS

Michael C. Fu
Smith School of Business
University of Maryland
College Park, MD 20742, U.S.A.

Fred W. Glover
Leeds School of Business
University of Colorado
Boulder, CO 80309, U.S.A.

Jay April
OptTek Systems, Inc.
1919 Seventh Street
Boulder, CO 80302, U.S.A.

ABSTRACT

We provide a descriptive review of the main approaches for carrying out simulation optimization, and sample some recent algorithmic and theoretical developments in simulation optimization research. Then we survey some of the software available for simulation languages and spreadsheets, and present several illustrative applications.

1 INTRODUCTION

The advances in computing power and memory over the last decade have opened up the possibility of optimizing simulation models. This development offers one of the most exciting opportunities in simulation, and there are plenty of interesting research problems in the field. The goals of this tutorial include the following:

•   to provide a general overview of the primary approaches found in the research literature, and include pointers/references to the state of the art,
•   to survey some of the commercial software, and
•   to illustrate the problems through examples and real-world applications.

The general optimization problem we consider is to find a setting of controllable parameters that minimizes a given objective function, i.e.,

    min_{θ ∈ Θ} J(θ),                                (1)

where θ ∈ Θ represents the (vector of) input variables, J(θ) is the (scalar) objective function, and Θ is the constraint set, which may be either explicitly given or implicitly defined.

The assumption in the simulation optimization setting is that J(θ) is not available directly, but must be estimated through simulation, e.g., the simulation output provides Ĵ(θ), a noisy estimate of J(θ). The most common form for J is an expectation, e.g.,

    J(θ) = E[L(θ, ω)],

where ω represents a sample path (simulation replication) and L is the sample performance measure. Although this form is fairly general (it includes probabilities by using indicator functions), it does exclude certain types of performance measures such as the median (and other quantiles) and the mode.

Real-World Example: Call Center Design
The state-of-the-art call centers (sometimes called contact centers) integrate traditional voice operations with both automated response systems (computer account access) and Internet (Web-based) services, often spread over multiple geographically separate sites. Most of these centers handle multiple sources of jobs, including voice, e-mail, fax, and interactive Web, each of which may require a different level of operator (call agent) training, as well as different priorities, e.g., voice almost always preempting any of the other contact types (except possibly interactive Web). There are also different types of jobs according to the service required, e.g., checking the status of an order versus placing a new order versus requesting live service help. Furthermore, because of individual customer segmentation, there are different classes of customers in terms of priority levels. Designing and operating such a call center includes such problems as selecting the number of operators at each skill level, and determining what routing algorithm and type of


queue discipline to use. A trade-off must be made between achieving a desired level of customer service and the cost of providing service. An objective function might incorporate costs associated with operations such as agent wages and network utilization, as well as customer service performance metrics such as the probability of waiting more than a certain amount of time. This is just one example of how simulation optimization can be applied to business process management. A similar design problem is considered later in the applications section for a hospital emergency room.

Toy Example: Single-Server Queue
Consider a first-come, first-served (FCFS) single-server queue. A well-studied optimization problem uses the following objective function (cf. Fu 1994):

    J(θ) = E[W(θ)] + c/θ,                            (2)

where W is the mean time spent in the system, θ is the mean service time of the server (so 1/θ corresponds to the server speed), and c is the cost factor for the server speed (i.e., a higher-skilled worker costs more). Since W is increasing in θ, the objective function quantifies the trade-off between customer service level and cost of providing service. This could be viewed as the simplest possible case of the call center design problem, where there is a single operator whose skill level must be selected. For the special M/M/1 queue in steady state, this problem is analytically tractable, and serves as a test case for optimization procedures.

Another Academic Example: Inventory Control
The objective is to minimize a total cost function consisting of ordering, holding, and backlogging or lost sales components. The ordering policy involves two parameters, s and S, corresponding to the re-order level and order-up-to level, respectively. When the inventory level falls below s, an order is placed for an amount that would bring the current level back up to S.

The input variables can be divided into two main types: qualitative and quantitative. In the call center example, both types are present, whereas in the two simpler examples, the input variables are quantitative. Quantitative variables are then either discrete or continuous. Many of the call center variables are inherently discrete, e.g., the number of operators, whereas in the single-server queue and inventory control examples, the input variables θ and (s, S) could be specified as either, depending on the particular problem setting or solution technique being applied. As in deterministic optimization, the approaches to solve these different types of problems can differ greatly.

To get an idea of the particular challenges facing simulation optimization, consider the problem of finding the value of θ that minimizes the objective function in (1) versus the problem of finding the minimum value of the objective function itself. In terms of our notation, this is the difference between finding a setting θ* that achieves the minimum in (1) versus finding the value of J(θ*). A key difference between deterministic optimization and stochastic optimization is that these two problems are not necessarily the same! In deterministic optimization, where J can be easily evaluated, the two problems are essentially identical; however, in the stochastic setting, one or the other may be easier. Furthermore, it may also be the case that only one or the other is the real goal of the modeling exercise. In some cases, selecting the best design is the goal, and the objective function is merely a means towards achieving this end, providing a way to measure the relative performance of each design, whereas the absolute value of the metric may have little meaning. In other cases, the objective function may have intrinsic meaning, e.g., costs or profits. And in yet other (less frequent) cases, estimating the optimal value itself is the primary goal. An example of this is the pricing of financial derivatives with early exercise opportunities (especially when done primarily for the sake of satisfying regulatory requirements of marking a portfolio to market).

To put it another way, simulation optimization has two major components that vie for computational resources: search and evaluation. How to balance the two, i.e., how to best allocate simulation replications, is a large challenge in making simulation optimization practical. To be concrete, one is choosing between more simulation replications to get better estimates versus more iterations of the optimization algorithm to more thoroughly explore the search space.

2 APPROACHES

We now briefly describe the main approaches in the simulation literature.

2.1 Ranking & Selection

In the setting where it is assumed that there is a fixed set of alternatives — so no search for new candidates is involved — the problem comes down to one of statistical inference, and ranking & selection procedures can be applied. Let the probability of correct selection be denoted by PCS, which we will not define formally here; intuitively, "correct selection" would mean selecting either the best solution or a solution within some prespecified tolerance of the best.

There are two main forms the resulting problem formulations can take:

(i)   minimize the number of simulation replications subject to the PCS exceeding a given level;
(ii)  maximize the PCS subject to a given simulation budget constraint.


In case (i), one ensures a level of correct selection, but has little control over how much computation this might entail. This is the traditional statistics ranking & selection approach, and in the simulation setting, Kim and Nelson (2005) overview the state of the art, where multiple comparison procedures can be used to provide valid confidence intervals as well; see also Goldsman and Nelson (1998). The books by Bechhofer, Santner, and Goldsman (1995) and Hochberg and Tamhane (1987) contain more general discussion of multiple comparison procedures outside of the simulation setting.

In case (ii), one tries to do the best within a specified computational limit, but a priori one may not have any idea how good the resulting solution will be. This formulation was coined the "optimal computing budget allocation" (OCBA) problem, as first proposed by Chen (1995). Subsequent and related work includes Chick and Inoue (2001ab), Chen et al. (2005), and Fu et al. (2005).

Ranking & selection procedures can also be used in the following ways relevant to simulation optimization: for screening a large set of alternatives, i.e., quickly (based on a relatively small number of replications) eliminating poor performers in order to get a more manageable set of alternatives; for comparing among candidate solutions in an iterative algorithm, e.g., deciding whether or not an improvement has been made; and for providing some statistical guidance in assessing the quality of the declared best solution versus all the solutions visited. The latter is what Boesel, Nelson, and Kim (2003) call "cleaning up" after simulation optimization.

The framework of ordinal optimization (Ho, Sreenivas, and Vakili 1992; Ho et al. 2000) might also be classified under ranking & selection. This approach is based on the observation that in most cases it is much easier to find an ordering among candidate solutions than to carry out the estimation procedure for each solution individually and then try to rank order the solutions. This can be especially true in the simulation setting, where the user has more control and can, for instance, use common random numbers to induce positive correlation between estimates of solution performance to dramatically reduce the number of simulation replications required to make a distinction. Intuitively, it is the difference between estimating J1 − J2 = E[L1 − L2] versus "estimating" P(J1 > J2), say using the simple mean based on n simulation replications. Estimating the former using the sample mean is governed by the Monte Carlo convergence rate of n^(−1/2), whereas deciding on the latter based on the sample mean has an exponential convergence rate. Using the theory of large deviations from probability, Dai and Chen (1997) explore this exponential rate of convergence in the discrete-event simulation context.

2.2 Response Surface Methodology

Response surface methodology (RSM) has its roots in statistical design of experiments, and its goal is to obtain an approximate functional relationship between the input variables and the output objective function. In design of experiments terminology, these are referred to as the factors and the response, respectively. RSM carried out on the entire domain of interest results in what is called a metamodel (see Barton 2005). The two most common ways of obtaining this representation are regression and neural networks. Once a metamodel is in hand, optimization can be carried out using deterministic optimization procedures. However, when optimization is the focus, a form of sequential RSM is usually employed (Kleijnen 1998), in which a local response surface representation is obtained that guides the sequential search. For example, linear regression could be used to obtain an estimate of the direction of steepest descent. This approach is model free, well established, and fairly straightforward to apply, but it is not implemented in any of the commercial packages. Until recently, SIMUL8 (<http://www.SIMUL8.com/optimiz1.htm>) had employed an optimization algorithm based on a form of sequential RSM using neural networks, but now uses OptQuest instead. The primary drawback seems to be the excessive use of simulation points in one area before exploring other parts of the search space. This can be especially exacerbated when the number of input variables is large. Recently, kriging has been proposed as a possibly more efficient way of carrying out this step (see van Beers and Kleijnen 2003). For more information on RSM procedures for simulation optimization, see Barton (2005) and Kleijnen (1998).

2.3 Gradient-Based Procedures

The gradient-based approach tries to mimic its counterpart in deterministic optimization. In the stochastic setting, the resulting procedures usually take the form of stochastic approximation (SA) algorithms; the book by Kushner and Yin (1997) contains a general discussion of SA outside of simulation. Specifically, given a current best setting of the input variables, a movement is made in the gradient direction, similar to sequential RSM. However, unlike sequential RSM procedures, SA algorithms can be shown to be provably convergent (asymptotically, usually to a local optimum) under appropriate conditions on the gradient estimator and step sizes, and they generally require far fewer simulations per iteration. Practically speaking, the key to making this approach successful is the quality of the gradient estimator. Fu (2005) surveys the main approaches available for coming up with gradient estimators that can be implemented in simulation. These include "brute-force" finite differences, simultaneous perturbations, perturbation analysis, the likelihood ratio/score function


method, and weak derivatives. For technical details on simultaneous perturbation stochastic approximation (SPSA), see Spall (1992), Fu and Hill (1997), Spall (2003), and <http://www.jhuapl.edu/spsa>. SPSA has two advantages over the other methods: it requires only two simulations per gradient estimate, regardless of the number of input variables, and it can treat the simulation model as a black box, i.e., no knowledge of the workings of the system is required (model free). A one-simulation version of SPSA is also available, but in practice it appears far noisier than the two-simulation version, even accounting for the effort being half as much.

An in-depth study of various gradient-based algorithms for the single-server M/M/1 queue example can be found in L'Ecuyer, Giroux, and Glynn (1994); see Fu (1994b) and Fu and Healy (1997) for the (s, S) inventory system. Kapuscinski and Tayur (1999) describe the use of perturbation analysis in a simulation optimization framework for inventory management of a capacitated production-inventory system. This approach was implemented on the worldwide supply chain of Caterpillar (its success reported in a Fortune magazine article by Philip Siekman, October 30, 2000: "New Victories in the Supply Chain Revolution"). The primary drawback of the gradient-based approach is that it is currently practical only for the continuous variable case, notwithstanding recent attempts to apply it to discrete-valued variables. Furthermore, estimating direct gradients may require knowledge of the underlying model, and the applicability of such estimators is often highly problem dependent.

2.4 Random Search

In contrast to gradient-based procedures, random search algorithms are targeted primarily at discrete input variable problems. They were first developed for deterministic optimization, but have been extended to the stochastic setting. Like gradient-based procedures, they proceed by moving iteratively from a current best setting of the input variables (candidate solution). Instead of using a gradient, however, the next move is probabilistically drawn from the "neighborhood" of the current best. For example, defining the neighborhood of a solution as all of the other solutions and drawing from a uniform distribution (assuming the number of feasible solutions is finite) would give a "pure" random search algorithm. Practically speaking, the success of a particular random search algorithm depends heavily on the defined neighborhood structure. Furthermore, in the stochastic setting, the estimation problem must also be incorporated into the algorithm. Thus, the two features that define an algorithm in the simulation optimization setting are

(a)  how the next candidate solution(s) is (are) chosen; and
(b)  how to determine which is the current best solution (which is not necessarily the current iterate).

The second feature (b) does not arise in the deterministic setting, because the current best (among visited solutions) is known with certainty, since the objective function values have no estimation noise. However, in the stochastic case, due to the noise in the objective function estimates, there are many possible choices, e.g., the current solution, the solution that has been visited the most often, or the solution that has the best sample mean thus far.

Like stochastic approximation algorithms, random search algorithms can generally be shown to be provably convergent (often to a global optimum). For more details on random search methods in simulation, see Andradóttir (2005); for a more general survey on discrete input simulation optimization problems, see Swisher et al. (2001). A recently proposed and very promising version of random search is Convergent Optimization via Most-Promising-Area Stochastic Search (COMPASS), introduced by Hong and Nelson (2005), which utilizes a unique neighborhood structure and results in an algorithm provably convergent to a locally optimal solution.

2.5 Sample Path Optimization

Sample path optimization (also known as stochastic counterpart or sample average approximation; see Rubinstein and Shapiro 1993) takes many simulations first, and then tries to optimize the resulting estimates. Specifically, if J̃_i denotes the estimate of J from the ith simulation replication, the sample mean over n replications is given by

    Ĵ_n(θ) = (1/n) Σ_{i=1}^{n} J̃_i(θ).

If each of the J̃_i are i.i.d. unbiased estimates of J, then by the strong law of large numbers,

    Ĵ_n(θ) → J(θ) with probability 1.

The approach then is to optimize, for a sufficiently large n, the deterministic function Ĵ_n, which approximates J. Its key feature, as Robinson (1996) advocates, is that "we can bring to bear the large and powerful array of deterministic [primarily continuous variable] optimization methods that have been developed in the last half-century. In particular, we can deal with problems in which the parameters θ might be subject to complicated constraints, and therefore in which gradient-step methods like stochastic approximation may have difficulty." In the simulation context, the method of common random numbers is used to provide the same sample paths for Ĵ_n(θ) over different values of θ. Furthermore, the availability of derivatives greatly enhances the effectiveness of the approach, as many nonlinear optimization packages require these.
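The sample path idea in Section 2.5 can be sketched on the single-server-queue objective (2): fixing the random number streams (common random numbers) makes Ĵ_n a deterministic function of θ, which can then be minimized with a standard deterministic routine — golden-section search here. The arrival rate λ = 0.5, cost factor c = 2, and all other settings are illustrative assumptions, not values from the paper.

```python
import math
import random

def sample_path_objective(theta, seeds, lam=0.5, c=2.0, num_customers=2000):
    """Deterministic approximation J_hat_n(theta): average the sample
    performance over a fixed set of replication seeds. Reusing the same
    seeds (and hence the same uniforms) for every theta is the method of
    common random numbers; service times are scaled by theta so that all
    thetas see coupled sample paths."""
    total = 0.0
    for seed in seeds:
        rng = random.Random(seed)
        wait, time_in_system = 0.0, 0.0
        for _ in range(num_customers):
            a = rng.expovariate(lam)      # interarrival time
            u = 1.0 - rng.random()        # uniform in (0, 1], shared across thetas
            s = -theta * math.log(u)      # exponential service time, mean theta
            wait = max(0.0, wait + s - a)
            time_in_system += wait + s
        total += time_in_system / num_customers + c / theta
    return total / len(seeds)

def golden_section_min(f, a, b, tol=1e-2):
    """Minimize a unimodal deterministic function on [a, b]."""
    phi = (math.sqrt(5) - 1) / 2
    x1, x2 = b - phi * (b - a), a + phi * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 < f2:
            b, x2, f2 = x2, x1, f1
            x1 = b - phi * (b - a)
            f1 = f(x1)
        else:
            a, x1, f1 = x1, x2, f2
            x2 = a + phi * (b - a)
            f2 = f(x2)
    return (a + b) / 2

seeds = range(20)
theta_star = golden_section_min(lambda t: sample_path_objective(t, seeds),
                                0.2, 1.8)
print(theta_star)
```

With these illustrative values the true minimizer of J(θ) = θ/(1 − λθ) + c/θ is θ ≈ 0.83, and the sample path estimate lands nearby; without common random numbers, Ĵ_n would jump erratically between nearby θ values and a deterministic line search of this kind would be unreliable.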
2.6 Metaheuristics

Metaheuristics are methods that guide other procedures (heuristic or truncated exact methods) to enable them to overcome the trap of local optimality for complex optimization problems. Four metaheuristics have primarily been applied with some success to simulation optimization: simulated annealing, genetic algorithms, tabu search, and scatter search (occasionally supplemented by a procedure such as neural networks in a forecasting or curve-fitting role). Of these, tabu search and scatter search have proved to be by far the most effective, and are at the core of the simulation optimization software that is now most widely used. We briefly sketch the nature of these two approaches below. The general metaheuristics framework of Ólafsson (2005), which looks very much like the deterministic version of random search, also contains discussion of the nested partitions method introduced by Shi and Ólafsson (2000ab) (see also Pinter 1996).

Tabu Search (TS) is distinguished by introducing adaptive memory into metaheuristic search, together with associated strategies for exploiting such memory, equipping it to penetrate complexities that often confound other approaches. Applications of TS span the realms of resource planning, telecommunications, VLSI design, financial analysis, space planning, energy, distribution, molecular engineering, logistics, pattern classification, flexible manufacturing, waste management, mineral exploration, biomedical analysis, environmental conservation, and scores of others. A partial indication of the rapid recent growth of TS applications is the fact that a Google search on "tabu search" returns more than 90,000 pages, a figure that has been growing exponentially over the past several years.

Adaptive memory in tabu search involves an attribute-based focus, and depends intimately on the elements of recency, frequency, quality, and influence. This catalog disguises a surprising range of alternatives, which arise by differentiating attribute classes over varying regions and spans of time. The TS notion of influence, for example, encompasses changes in structure, feasibility, and regionality, and the logical constructions used to interrelate these elements span multiple dimensions, involving distinctions between "sequential logic" and "event driven logic," giving rise to different kinds of memory structures.

The most comprehensive reference for tabu search and its applications is the book by Glover and Laguna (1997). A new book that gives more recent applications and pseudocode for creating various implementations is scheduled to appear in 2006.

Scatter Search (SS) has its foundations in proposals from the 1970s that also led to the emergence of tabu search, and the two methods are highly complementary and often used together. SS is an evolutionary (population-based) algorithm that constructs solutions by combining others. Scatter search is designed to operate on a set of points, called reference points, that constitute good solutions obtained from previous solution efforts. Notably, the basis for defining "good" includes special criteria such as diversity that purposefully go beyond the objective function value. The approach systematically generates combinations of the reference points to create new points, each of which is mapped into an associated feasible point. The combinations are generalized forms of linear combinations, accompanied by processes to adaptively enforce feasibility conditions, including those of discreteness.

The SS process is organized to (1) capture information not contained separately in the original points, (2) take advantage of auxiliary heuristic solution methods (to evaluate the combinations produced and to actively generate new points), and (3) make dedicated use of strategy instead of randomization to carry out component steps. SS basically consists of five methods: a diversification generation method, an improvement method (often consisting of tabu search), a reference set update method, a subset generation method, and a solution combination method. Applications of SS, like those of TS, have grown dramatically in recent years, and its use in simulation optimization has become the cornerstone of significant advances in the field. The most complete reference on SS is the book by Laguna and Martí (2002).

3 MODEL-BASED METHODS

An approach that looks promising and has just begun to be explored in the simulation optimization context is model-based methods. These are contrasted with what are called instance-based approaches, which generate new solutions based only on the current solution (or population of solutions) (cf. Dorigo and Stützle 2004, pp. 139-140). The metaheuristics described earlier generally fall into this latter category, with the exception of tabu search, because it uses memory. Model-based methods, on the other hand, are not dependent explicitly on any current set of solutions, but use a probability distribution on the space of solutions to provide an estimate of where the best solutions are located. The following are some examples:

•   Swarm Intelligence. This approach is perhaps best known under the name of "Ant Colony Optimization," because it uses ant behavior (group cooperation and use of pheromone updates and evaporation) as a paradigm for its probabilistic workings. Because there is memory involved in the mechanisms, like tabu search, it is not instance-based; see Dorigo and Stützle (2004) for more details.


Table 1: Optimization for Simulation: Commercial Software Packages

Optimization Package (simulation platform)                 Vendor (URL)                                    Primary Search Strategies
AutoStat (AutoMod)                                         AutoSimulations, Inc. (www.autosim.com)        evolutionary, genetic algorithms
Evolutionary Optimizer (Extend)                            Imagine That, Inc. (www.imaginethatinc.com)    evolutionary, genetic algorithms
OptQuest (Arena, Crystal Ball, ProModel, SIMUL8, et al.)   OptTek Systems, Inc. (www.opttek.com)          scatter search, tabu search, neural networks
RISKOptimizer (@RISK)                                      Palisade Corp. (www.palisade.com)              genetic algorithms
Optimizer (WITNESS)                                        Lanner Group, Inc. (www.lanner.com/corporate)  simulated annealing, tabu search

•   Estimation of Distribution Algorithms (EDAs). The goal of this approach is to progressively improve a probability distribution on the solution space based on samples generated from the current distribution. The crudest form of this would utilize all samples generated to a certain point, hence the use of memory, but in practical implementation, parameterization of the distribution is generally employed, and the parameters are updated based on the samples; see Larrañaga and Lozano (2002) for more details.
•   Cross-Entropy (CE) Method. This approach grew out of a procedure to find an optimal importance sampling measure by projecting a parameterized probability distribution, using cross entropy to measure the distance from the optimum measure. Like EDAs, samples are taken that are used to update the parameter values for the distribution. Taking the optimal measure as a point mass at the solution optimum of an optimization problem, the procedure can be applied in that context; see De Boer et al. (2005), Rubinstein and Kroese (2004), and <http://www.cemethod.org> for more details.
•   Model Reference Adaptive Search. As in EDAs, this approach updates a parameterized probability distribution, and like the CE method, it also uses the cross-entropy measure to project a parameterized distribution. However, the particular projection used relies on a stochastic sequence of reference distributions rather than a single fixed reference distribution (the final optimal measure) as in the CE method, and this results in very different performance in practice. Furthermore, stronger theoretical convergence results can be established; see Hu, Fu, and Marcus (2005abc) for details.

4 SOFTWARE

Table 1 surveys a few simulation optimization software packages (either plug-ins or integrated) currently available, and summarizes their search strategies. Comparing with Table 1 in Fu (2002), one observes that ProModel and SIMUL8 have both migrated to OptQuest from their previous simulation optimization packages (SimRunner and OPTIMIZ, respectively).

5 APPLICATIONS

Applications of optimization technology are quite diverse; they cover a broad surface of business activities. To illustrate, the user of simulation and other business or industry evaluation models may want to know:

•   What is the most effective factory layout?
•   What is the safest equipment replacement policy?
•   What is the most cost effective inventory policy?
•   What is the best workforce allocation?
•   What is the most productive operating schedule?
•   What is the best investment portfolio?

The answers to such questions require a painstaking examination of multiple scenarios, where each scenario in turn requires the implementation of an appropriate simulation or evaluation model to determine the consequences for costs, profits, and risks. The critical "missing component" is to disclose which decision scenarios are the ones that should be investigated – and still more completely, to identify good scenarios automatically by a search process designed to find the best set of decisions. This is the core problem for simulation optimization in a practical setting. The following


descriptions provide a sampling of uses of the technology that enable solutions to be identified efficiently.

5.1 Project Portfolio Management

For project portfolio management, OptFolio is a software tool being implemented in several markets including Petroleum and Energy, IT Governance, and Pharmaceuticals. The following example demonstrates the versatility of OptFolio as a simulation optimization tool.

Among many other types of initiatives, the Pharmaceutical Industry uses project portfolio optimization to manage investments in new drug development. A pharmaceutical company that is developing a new breakthrough drug is faced with the possibility that the drug may not do what it was intended to do, or may have serious side effects that make it commercially infeasible. Thus, these projects have a considerable degree of uncertainty related to the probability of success. Relatively recently, an options-pricing approach, called "real options," has been proposed to model such uncertainties. Initial feedback has indicated that an obstacle to its market penetration is that it is difficult to understand and use; furthermore, there are no research results that illustrate performance rivaling the algorithmic approach underlying OptFolio.

The following example is based on data provided by Decision Strategies, Inc., a consulting firm with numerous clients in the Pharmaceutical Industry. The data consist of twenty potential projects in drug development having rather long horizons – 5 to 20 years – and the pro-forma information is given as triangular distributions for both per-period net contribution and investment. The models use a probability of success index – from 0% to 100% – applied in such a way that, if the project fails during a simulation trial, then the investments are realized, but the net contribution of the project is not. In this way, the system can be used to model premature project terminations, providing a simple, understandable alternative to real options. In this example, we examined five cases, the first two representing approaches commonly used in practice that, as we have found, turn out to be significantly inferior to the ones using the OptFolio software.

Case 1: Simple Ranking of Projects
Projects were ranked in this approach according to a specific objective criterion, a process often adopted in currently available Project Portfolio Management tools in order to select projects under a budgetary constraint. In this case the following objective measure was selected: projects were added to the final portfolio as long as the budget constraint was not violated. This procedure resulted in a portfolio with the following statistics:

    μ_NPV = 7342,   σ_NPV = 2472,   q.05 = 3216,

where q.05 denotes the 5th percentile (quantile), i.e., P(NPV ≤ q_p) = p. In this case, 15 projects were selected in the final portfolio. What follows is a discussion of how using OptFolio can help improve these results.

Case 2: Traditional Markowitz Approach
The decision was to determine participation levels [0,1] in each project with the objective of maximizing the expected NPV of the portfolio while keeping the standard deviation of the NPV below a specified threshold of 1000. An investment budget was also imposed on the portfolio, where B_i denotes the budget in period i:

    Maximize μ_NPV subject to
    σ_NPV ≤ 1000,   B_1 ≤ 125,   B_2 ≤ 140,   B_3 ≤ 160.

This formulation resulted in a portfolio with the following statistics:

    μ_NPV = 4140,   σ_NPV = 1000,   q.05 = 2432.

We performed this traditional mean-variance case to provide a basis for comparison for the subsequent cases. An empirical histogram for the optimal portfolio is shown in Figure 1.

Figure 1: Mean-Variance Portfolio

Case 3: Risk Controlled by 5th Percentile
The decision was to determine participation levels [0,1] in
P V (Revenues) each project with the objective of maximizing the expected
R= .
P V (Expenses) NPV of the portfolio while keeping the 5th percentile of
NPV above the value determined in Case 2 (2432), keeping
Employing the customary design, the 20 projects were the same investment budget constraints on the portfolio.
ranked in descending order according to this measure, and

89
Fu, Glover, and April
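All of the portfolio statistics reported in these cases (μ_NPV, σ_NPV, q_.05) are sample estimates obtained by simulating the portfolio's NPV many times. The following minimal sketch shows how such estimates might be computed; the two projects, their triangular cash-flow parameters, and the discount rate are hypothetical stand-ins, since the Decision Strategies data and OptFolio's internals are not reproduced here.

```python
import random
import statistics

def simulate_npv(projects, rate=0.10, trials=10_000, seed=42):
    """Draw Monte Carlo NPV samples for a portfolio of risky projects.

    Each project is (p_success, contributions, investments); the cash-flow
    lists hold per-period triangular (low, mode, high) parameters.  If a
    project fails on a trial, its investments are still incurred but its
    net contributions are not (the success-index convention above).
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(trials):
        npv = 0.0
        for p_success, contribs, invests in projects:
            ok = rng.random() < p_success
            for t, ((cl, cm, ch), (il, im, ih)) in enumerate(zip(contribs, invests), 1):
                cash = -rng.triangular(il, ih, im)       # investment is always paid
                if ok:
                    cash += rng.triangular(cl, ch, cm)   # contribution only on success
                npv += cash / (1 + rate) ** t
        samples.append(npv)
    return samples

# Two hypothetical projects: (p_success, contribution params, investment params)
projects = [
    (0.7, [(50, 80, 120)] * 3, [(10, 15, 20)] * 3),
    (0.4, [(100, 160, 240)] * 3, [(30, 40, 60)] * 3),
]
samples = sorted(simulate_npv(projects))
mu = statistics.fmean(samples)
sigma = statistics.stdev(samples)
q05 = samples[int(0.05 * len(samples))]  # empirical 5th percentile
print(round(mu, 1), round(sigma, 1), round(q05, 1))
```

Sorting the samples once makes the empirical 5th percentile a simple index lookup; a production tool would use many more trials and attach confidence statements to each estimate.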

Maximize μ_NPV subject to
q_.05 ≥ 2432, B_1 ≤ 125, B_2 ≤ 140, B_3 ≤ 160.

This case replaces standard deviation with the 5th percentile for risk containment, which is an intuitive way to control catastrophic risk (Value at Risk, or VaR, in traditional finance terminology). The resulting portfolio has the following attributes:

μ_NPV = 7520, σ_NPV = 2550, q_.05 = 3294.

By using the 5th percentile as the measure of risk, we were able to almost double the expected return compared to the solution found in Case 2, and improved on the simple ranking solution. Additionally, as previously discussed, the 5th percentile provides a more intuitive measure of risk, i.e., there is a 95% chance that the portfolio will achieve a NPV of 3294 or higher. The NPV distribution is shown in Figure 2. It is interesting to note that this solution has more variability but is focused on the upside of the distribution. By focusing on the 5th percentile rather than standard deviation, a superior solution was created.

Figure 2: 5th Percentile Portfolio

Case 4: Maximizing Probability of Success
The decision was to determine participation levels [0,1] in each project with the objective of maximizing the probability of meeting or exceeding the mean NPV found in Case 2, keeping the same investment budget constraints on the portfolio:

Maximize Prob(NPV ≥ 4140) subject to
B_1 ≤ 125, B_2 ≤ 140, B_3 ≤ 160.

This case focuses on maximizing the chance of attaining a goal and essentially combines performance and risk containment into one metric. The resulting portfolio has the following attributes:

μ_NPV = 7461, σ_NPV = 2430, q_.05 = 3366.

This portfolio has a 91% chance of achieving or exceeding the NPV goal of 4140, representing a significant improvement over the Case 2 portfolio, where the probability was only 50%. The NPV distribution is shown in Figure 3.

Figure 3: 5th Percentile Portfolio

Case 5: All-or-Nothing
In many real-world settings, these types of projects have all-or-nothing participation levels, whereas in the Case 4 solution, most of the optimal participation levels found were fractional. Under the same investment budget constraints on the portfolio, Case 5 modified the Case 4 constraints to allow only 0 or 1 participation levels, i.e., a project must utilize 100% participation or be excluded from the portfolio:

Maximize Prob(NPV ≥ 4140) subject to
B_1 ≤ 125, B_2 ≤ 140, B_3 ≤ 160, Participations ∈ {0, 1}.

The resulting portfolio has the following attributes:

μ_NPV = 7472, σ_NPV = 2503, q_.05 = 3323.

In spite of the participation restriction, this portfolio also has a 91% chance of exceeding an NPV of 4140, and has a high expected return. In this case, as in Case 1, 15 out of the 20 projects were selected in the final portfolio, but the expected returns are higher. These cases illustrate the benefits of using alternative measures for risk. Not only are percentiles and probabilities more intuitive for the decision-maker, but they also produce solutions with better financial metrics. The OptFolio system can also be used to optimize performance metrics such as Internal Rate of Return (IRR) and Payback Period.

The illustrated analyses can be applied very effectively for complex, as well as simple, sets of projects, where different measures of risk and return can produce improvements over the traditional Markowitz (mean-variance) approach, as well as over simple project ranking approaches. The flexibility to choose various measures and statistics, both as objective performance measures and as constraints, is a major advantage of the simulation optimization approach.
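To make the optimization side concrete, here is a naive random-search sketch for the all-or-nothing setting of Case 5: sample 0/1 participation vectors, discard those that violate the budget, and keep the candidate with the highest estimated Prob(NPV ≥ goal). Everything here (the toy NPV model, project data, and budget) is hypothetical, and the metaheuristic search actually used by OptFolio/OptQuest is far more sophisticated than uniform sampling.

```python
import random

def estimate_prob(portfolio, projects, npv_goal, runs, rng):
    """Estimate Prob(portfolio NPV >= npv_goal) by simulation (toy model)."""
    hits = 0
    for _ in range(runs):
        npv = 0.0
        for picked, (p_success, mean_npv, spread) in zip(portfolio, projects):
            if picked:
                if rng.random() < p_success:
                    npv += rng.uniform(mean_npv - spread, mean_npv + spread)
                else:
                    npv -= spread  # failed project: investment lost, no contribution
        hits += npv >= npv_goal
    return hits / runs

def random_search(projects, costs, budget, npv_goal, iters=200, runs=100, seed=1):
    """Sample all-or-nothing participation vectors; keep the best feasible one."""
    rng = random.Random(seed)
    best, best_p = None, -1.0
    for _ in range(iters):
        cand = [rng.randint(0, 1) for _ in projects]
        if sum(c * x for c, x in zip(costs, cand)) > budget:
            continue  # violates the investment budget constraint
        p_hat = estimate_prob(cand, projects, npv_goal, runs, rng)
        if p_hat > best_p:
            best, best_p = cand, p_hat
    return best, best_p

# Hypothetical projects: (probability of success, mean NPV, spread)
projects = [(0.7, 60, 20), (0.5, 90, 40), (0.9, 30, 10), (0.6, 70, 30)]
costs = [25, 40, 10, 30]
best, best_p = random_search(projects, costs, budget=70, npv_goal=80)
print(best, round(best_p, 2))
```

Note that each candidate's objective is itself a noisy simulation estimate, which is exactly what distinguishes this loop from deterministic combinatorial search.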

As embedded in OptFolio, the user is given the ability to select better ways of modeling and controlling risk, while aligning the outcomes to specific corporate goals.

OptFolio also provides ways to define special relationships that often arise between and among projects. Correlations can be defined between the revenues and/or expenses of two projects. In addition, the user can define projects that are mutually exclusive, or dependent. For example, in some cases, selecting Project A implies selecting Project B; such a definition can easily be done in OptFolio.

Portfolio analysis tools are designed to aid senior management in the development and analysis of project portfolio strategies, by giving them the capability to assess the impact on the corporation of various investment decisions. To date, commercial portfolio optimization packages are relatively inflexible and are often not able to answer the key questions asked by senior management. As a result of the simulation optimization capabilities embodied in OptFolio, new techniques are made available that increase the flexibility of portfolio optimization tools and deepen the types of portfolio analysis that can be carried out.

5.2 Business Process Management

When changes are proposed to business processes in order to improve performance, important advantages can result from evaluating the projected improvements using simulation, and then determining an optimal set of changes using simulation optimization. In this case it becomes possible to examine and quantify the sensitivity of the ultimate objectives to the proposed changes, reducing the risk of actual implementation. Changes may entail adding, deleting, and modifying processes, process times, resources required, schedules, work rates within processes, skill levels, and budgets. Performance objectives may include throughput, costs, inventories, cycle times, resource and capital utilization, start-up times, cash flow, and waste. In the context of business process management and improvement, simulation can be thought of as a way to understand and communicate the uncertainty related to making the changes, while optimization provides the way to manage that uncertainty.

The following example is based on a model provided by CACI, and simulated on SIMPROCESS. Consider the operation of an emergency room (ER) in a hospital. Figure 4 shows a high-level view of the overall process, which begins when a patient arrives through the doors of the ER, and ends when a patient is either released from the ER or admitted into the hospital for further treatment. Upon arrival, patients sign in, receive an assessment of their condition, and are transferred to an ER. Depending on their assessment, patients then go through various alternatives involving a registration process and a treatment process, before being released or admitted into the hospital.

Patients arrive either on their own or in an ambulance, according to some arrival process. Arriving patients are classified into different levels, according to their condition, with Level 1 patients being more critical than Level 2 and Level 3 patients.

Level 1 patients are taken to an ER immediately upon arrival. Once in the room, they undergo their treatment. Finally, they complete the registration process before being either released or admitted into the hospital for further treatment.

Level 2 and Level 3 patients must first sign in with an Administrative Clerk. Their condition is then assessed by a Triage Nurse, and then they are taken to an ER. Once in the room, Level 2 and 3 patients must first complete their registration, then go on to receive their treatment, and, finally, they are either released or admitted into the hospital for further treatment.

After undergoing the various activities involved in registration and treatment, 90% of all patients are released from the ER, while the remaining 10% are admitted into the hospital for further treatment. The final release/hospital admission process consists of the following activities: 1. In case of release, either a nurse or a PCT fills out the release papers (whoever is available first). 2. In case of admission into the hospital, an Administrative Clerk fills out the patient's admission papers. The patient must then wait for a hospital bed to become available. The time until a bed is available is handled by an empirical probability distribution. Finally, the patient is transferred to the hospital bed.

The following illustrates a simple instance of this process that is actually taken from a real-world application. In this instance, due to cost and layout considerations, hospital administrators have determined that the staffing level must not exceed 7 nurses, 3 physicians, 4 PCTs, and 4 Administrative Clerks. Furthermore, the ER has 20 rooms available; however, using fewer rooms would be beneficial, since the additional space could be used more profitably by other departments in the hospital. The hospital wants to find the configuration of the above resources that minimizes the total asset cost. The asset cost includes the staff's hourly wages and the fixed cost of each ER used. We must also make sure that, on average, Level 1 patients do not spend more than 2.4 hours in the ER. This can be formulated as an optimization problem, as follows:

Minimize Expected Total Asset Cost
subject to the following constraints:
Average Level 1 Cycle Time ≤ 2.4 hours,
# Nurses ≤ 7,
# Physicians ≤ 3,
# PCTs ≤ 4,
# Admin. Clerks ≤ 4,
# ERs ≤ 20.

Figure 4: High-level Process View

This is a relatively unimposing problem in terms of size: five variables and six constraints. However, if we were to rely solely on simulation to solve this problem, even after the hospital administrators have narrowed down our choices to the above limits, we would have to perform 7 × 3 × 4 × 4 × 20 = 6,720 experiments. If we want a sample size of, say, at least 30 runs per trial solution in order to obtain the desired level of precision, then each experiment would take about 2 minutes, based on a Dell Dimension 8100 with a 1.7GHz Intel Pentium 4 processor. This means that a complete enumeration of all possible solutions would take approximately 13,440 minutes, or about 28 eight-hour working days. This is obviously too long a duration for finding a solution.

In order to solve this problem in a reasonable amount of time, we called upon the OptQuest® optimization technology integrated with SIMPROCESS. As a base case we used the upper resource limits provided by hospital administrators to get a reasonably good initial solution. This configuration yielded an Expected Total Asset Cost of $36,840, and a Level 1 patient cycle time of 1.91 hours.

Once we set up the problem in OptQuest, we ran it for 100 iterations (experiments), with 5 runs per iteration (each run simulates 5 days of the ER operation). Given these parameters, the best solution, found at iteration 21, was: 4 nurses, 2 physicians, 3 PCTs, 3 administrative clerks, and 12 ERs. The Expected Total Asset Cost for this configuration came out to $25,250 (a 31% improvement over the base case), and the average Level 1 patient cycle time was 2.17 hours. The time to run all 100 iterations was approximately 28 minutes.

After obtaining this solution, we redesigned some features of the current model to improve the cycle time of Level 1 patients even further. In the redesigned model, we assume that Level 1 patients can go through the treatment process and the registration process in parallel. That is, we assume that while the patient is undergoing treatment, the registration process is being done by a surrogate or whoever is accompanying the patient. If the patient's condition is very critical, then someone else can provide the registration data; however, if the patient's condition allows it, then the patient can provide the registration data during treatment. Figure 5 shows the model with this change. By optimizing the model that incorporates this change, we now obtain an average Level 1 patient cycle time of 1.98 hours (a 12% improvement).

Figure 5: Proposed Process

The new solution had 4 nurses, 2 physicians, 2 PCTs, 2 administrative clerks, and 9 ERs, yielding an Expected Total Asset Cost of $24,574, and an average Level 1 patient cycle time of 1.94 hours. By using simulation optimization, we were able to find a very high quality solution in less than 30 minutes.

6 CONCLUSIONS

In addition to chapters in the Handbooks in Operations Research and Management Science: Simulation volume cited already, more technical details on simulation optimization techniques can be found in the chapter by Andradóttir (1998) and the review paper by Fu (1994a), whereas the feature article by Fu (2002) explores deeper research versus practice issues. Previous volumes of these Winter Simulation Conference proceedings also provide good current sources (e.g., April et al. 2003, 2004). Other books that treat simulation optimization in some technical depth include Rubinstein and Shapiro (1993), Fu and Hu (1997), Pflug (1996), and Spall (2003).

Note that the "model" in model-based approaches is a probability distribution on the solution space, as opposed to modeling the response surface itself; the input variables are the same in both cases. Is there some way of combining the two approaches? One seeming advantage of the probabilistic approach is that it applies equally well to both the continuous and discrete cases.
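A minimal sketch of the model-based idea: maintain a probability distribution over the solution space (here, independent Bernoulli parameters over {0,1}^n, a natural exponential family), sample a population from it, and re-fit the model to the elite samples, in the spirit of the cross-entropy method. The toy noisy objective below stands in for a simulation output; none of this is claimed to be OptFolio's actual algorithm.

```python
import random

def cross_entropy_binary(n, score, iters=30, pop=100, elite_frac=0.1,
                         smooth=0.7, seed=0):
    """Model-based search on {0,1}^n: the 'model' is a vector of Bernoulli
    parameters, re-fit each iteration to the elite (best-scoring) samples."""
    rng = random.Random(seed)
    theta = [0.5] * n                 # initial sampling distribution
    n_elite = max(1, int(elite_frac * pop))
    for _ in range(iters):
        population = [[int(rng.random() < t) for t in theta] for _ in range(pop)]
        population.sort(key=score, reverse=True)
        elite = population[:n_elite]
        for i in range(n):
            # MLE of the Bernoulli parameter on the elite set, smoothed so
            # the model concentrates gradually rather than collapsing early
            freq = sum(x[i] for x in elite) / n_elite
            theta[i] = smooth * freq + (1 - smooth) * theta[i]
    return [int(t > 0.5) for t in theta]

# Toy noisy objective standing in for a simulation estimate of performance
noise = random.Random(123)
best = cross_entropy_binary(10, lambda x: sum(x) + noise.gauss(0, 0.1))
print(best)
```

The smoothing parameter is what lets a globally defined model still search in a localized manner, which is the point made in the surrounding discussion.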

In both cases, the key to practical implementation is parameterization! For example, neural networks and regression are used in the former case, whereas the natural exponential family works well in the latter case.

Relatively little research has been done on multi-response simulation optimization, or for that matter, with random constraints, i.e., where the constraints themselves must be estimated. Most of the commercial software packages, however, do allow multiple responses (combined by using a weighting) and explicit inequality constraints on output performance measures, but in the latter case there is usually no statistical estimate provided of how likely it is that the constraint is actually violated (just a confidence interval on the performance measure itself).

To summarize, here are some key issues in simulation optimization algorithms:

• neighborhood definition;
• mechanism for exploration/sampling (search), especially how previously generated (sampled) solutions are incorporated;
• determining which candidate solution(s) to declare the best (or "good"), and whether statistical statements support that choice;
• the computational burden of each function estimate (obtained through simulation replications) relative to search (the optimization algorithm).

The first two issues are not specific to the stochastic setting of simulation optimization, but their effectiveness depends intimately on the last issue. For example, defining the neighborhood as a very large region may lead to theoretical global convergence, but it may not lead to very efficient search, especially if simulation is expensive. The model-based algorithms allow a large neighborhood, but can also allow search in a localized manner by the way the model (probability distribution) is constructed and updated.

ACKNOWLEDGMENTS

Michael C. Fu was supported in part by the National Science Foundation under Grant DMI-0323220, and by the Air Force Office of Scientific Research under Grant FA95500410210.

REFERENCES

Andradóttir, S. 1998. Simulation optimization. Chapter 9 in Handbook of Simulation: Principles, Methodology, Advances, Applications, and Practice, ed. J. Banks. New York: John Wiley & Sons.
Andradóttir, S. 2005. An overview of simulation optimization via random search. Chapter 21 in Handbooks in Operations Research and Management Science: Simulation, eds. S. G. Henderson and B. L. Nelson. Elsevier.
April, J., F. Glover, J. P. Kelly, and M. Laguna. 2003. Practical introduction to simulation optimization. In Proceedings of the 2003 Winter Simulation Conference, eds. S. Chick, P. J. Sánchez, D. Ferrin, and D. J. Morrice, 71-78. Piscataway, New Jersey: Institute of Electrical and Electronics Engineers.
April, J., F. Glover, J. P. Kelly, and M. Laguna. 2004. New advances and applications for marrying simulation and optimization. In Proceedings of the 2004 Winter Simulation Conference, eds. R. G. Ingalls, M. D. Rossetti, J. S. Smith, and B. A. Peters, 255-260. Piscataway, New Jersey: Institute of Electrical and Electronics Engineers.
Barton, R. 2005. Response surface methodology. Chapter 19 in Handbooks in Operations Research and Management Science: Simulation, eds. S. G. Henderson and B. L. Nelson. Elsevier.
Bechhofer, R. E., T. J. Santner, and D. M. Goldsman. 1995. Design and Analysis of Experiments for Statistical Selection, Screening, and Multiple Comparisons. New York: John Wiley & Sons.
Boesel, J., B. L. Nelson, and S.-H. Kim. 2003. Using ranking and selection to 'clean up' after simulation optimization. Operations Research 51: 814-825.
Chen, C. H. 1995. An effective approach to smartly allocate computing budget for discrete event simulation. In Proceedings of the 34th IEEE Conference on Decision and Control, 2598-2605.
Chen, C. H., J. Lin, E. Yücesan, and S. E. Chick. 2000. Simulation budget allocation for further enhancing the efficiency of ordinal optimization. Discrete Event Dynamic Systems: Theory and Applications 10: 251-270.
Chen, C. H., E. Yücesan, L. Dai, and H. C. Chen. 2005. Efficient computation of optimal budget allocation for discrete event simulation experiment. IIE Transactions, forthcoming.
Chen, H. C., C. H. Chen, and E. Yücesan. 2000. Computing efforts allocation for ordinal optimization and discrete event simulation. IEEE Transactions on Automatic Control 45: 960-964.
Chick, S. E. and K. Inoue. 2001a. New two-stage and sequential procedures for selecting the best simulated system. Operations Research 49: 732-743.
Chick, S. E. and K. Inoue. 2001b. New procedures to select the best simulated system using common random numbers. Management Science 47: 1133-1149.
Dai, L. and C. Chen. 1997. Rate of convergence for ordinal comparison of dependent simulations in discrete event dynamic systems. Journal of Optimization Theory and Applications 94: 29-54.
De Boer, P.-T., D. P. Kroese, S. Mannor, and R. Y. Rubinstein. 2005. A tutorial on the cross-entropy method. Annals of Operations Research 134 (1): 19-67.
Dorigo, M. and T. Stützle. 2004. Ant Colony Optimization. Cambridge, MA: MIT Press.
Fu, M. C. 1994a. Optimization via simulation: A review. Annals of Operations Research 53: 199-248.
Fu, M. C. 1994b. Sample path derivatives for (s, S) inventory systems. Operations Research 42: 351-364.
Fu, M. C. 2002. Optimization for simulation: Theory vs. practice (feature article). INFORMS Journal on Computing 14 (3): 192-215.
Fu, M. C. 2005. Gradient estimation. Chapter 19 in Handbooks in Operations Research and Management Science: Simulation, eds. S. G. Henderson and B. L. Nelson. Elsevier.
Fu, M. C. and K. J. Healy. 1997. Techniques for simulation optimization: An experimental study on an (s, S) inventory system. IIE Transactions 29 (3): 191-199.
Fu, M. C. and S. D. Hill. 1997. Optimization of discrete event systems via simultaneous perturbation stochastic approximation. IIE Transactions 29 (3): 233-243.
Fu, M. C. and J. Q. Hu. 1997. Conditional Monte Carlo: Gradient Estimation and Optimization Applications. Boston: Kluwer Academic Publishers.
Fu, M. C., J. Q. Hu, C. H. Chen, and X. Xiong. 2005. Simulation allocation for determining the best design in the presence of correlated sampling. INFORMS Journal on Computing, forthcoming.
Glover, F. and M. Laguna. 1997. Tabu Search. Boston: Kluwer Academic Publishers.
Goldsman, D. and B. L. Nelson. 1998. Comparing systems via simulation. Chapter 8 in Handbook of Simulation: Principles, Methodology, Advances, Applications, and Practice, ed. J. Banks. New York: John Wiley & Sons.
Ho, Y. C., C. G. Cassandras, C. H. Chen, and L. Y. Dai. 2000. Ordinal optimization and simulation. Journal of the Operational Research Society 51: 490-500.
Ho, Y. C., R. Sreenivas, and P. Vakili. 1992. Ordinal optimization of DEDS. Discrete Event Dynamic Systems: Theory and Applications 2: 61-88.
Hochberg, Y. and A. C. Tamhane. 1987. Multiple Comparison Procedures. New York: John Wiley & Sons.
Hu, J., M. C. Fu, and S. I. Marcus. 2005a. A model reference adaptive search algorithm for global optimization. Operations Research, submitted. Also available at <http://techreports.isr.umd.edu/ARCHIVE/dsp_reportList.php?year=2005&center=ISR>.
Hu, J., M. C. Fu, and S. I. Marcus. 2005b. A model reference adaptive search algorithm for stochastic global optimization, submitted.
Hu, J., M. C. Fu, and S. I. Marcus. 2005c. Simulation optimization using model reference adaptive search. In Proceedings of the 2005 Winter Simulation Conference.
Kapuscinski, R. and S. R. Tayur. 1999. Optimal policies and simulation based optimization for capacitated production inventory systems. Chapter 2 in Quantitative Models for Supply Chain Management, eds. S. R. Tayur, R. Ganeshan, and M. J. Magazine. Boston: Kluwer Academic Publishers.
Kim, S.-H. and B. L. Nelson. 2005. Selecting the best system. Chapter 18 in Handbooks in Operations Research and Management Science: Simulation, eds. S. G. Henderson and B. L. Nelson. Elsevier.
Kleijnen, J. P. C. 1998. Experimental design for sensitivity analysis, optimization, and validation of simulation models. Chapter 6 in Handbook of Simulation: Principles, Methodology, Advances, Applications, and Practice, ed. J. Banks. New York: John Wiley & Sons.
Kushner, H. J. and G. G. Yin. 1997. Stochastic Approximation Algorithms and Applications. New York: Springer-Verlag.
Laguna, M. and R. Marti. 2002. Scatter Search. Boston: Kluwer Academic Publishers.
Larrañaga, P. and J. A. Lozano. 2002. Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation. Boston: Kluwer Academic Publishers.
L’Ecuyer, P., N. Giroux, and P. W. Glynn. 1994. Stochastic optimization by simulation: Numerical experiments with the M/M/1 queue in steady-state. Management Science 40: 1245-1261.
Ólafsson, S. 2005. Metaheuristics. Chapter 22 in Handbooks in Operations Research and Management Science: Simulation, eds. S. G. Henderson and B. L. Nelson. Elsevier.
Pflug, G. C. 1996. Optimization of Stochastic Models. Boston: Kluwer Academic Publishers.
Pinter, J. D. 1996. Global Optimization in Action. Boston: Kluwer Academic Publishers.
Rubinstein, R. Y. and A. Shapiro. 1993. Discrete Event Systems: Sensitivity Analysis and Stochastic Optimization by the Score Function Method. New York: John Wiley & Sons.
Rubinstein, R. Y. and D. P. Kroese. 2004. The Cross-Entropy Method: A Unified Approach to Combinatorial Optimization, Monte-Carlo Simulation, and Machine Learning. New York: Springer-Verlag.
Shi, L. and S. Ólafsson. 2000. Nested partitions method for global optimization. Operations Research 48: 390-407.
Spall, J. C. 1992. Multivariate stochastic approximation using a simultaneous perturbation gradient approximation. IEEE Transactions on Automatic Control 37 (3): 332-341.
Spall, J. C. 2003. Introduction to Stochastic Search and Optimization. New York: John Wiley & Sons.
Swisher, J. R., P. D. Hyden, S. H. Jacobson, and L. W. Schruben. 2001. A survey of recent advances in discrete input parameter discrete-event simulation optimization. IIE Transactions 36 (6): 591-600.
van Beers, W. C. M. and J. P. C. Kleijnen. 2003. Kriging for interpolation in random simulation. Journal of the Operational Research Society 54 (3): 255-262.

AUTHOR BIOGRAPHIES

MICHAEL C. FU is a Professor in the Robert H. Smith School of Business, with a joint appointment in the Institute for Systems Research and an affiliate appointment in the Department of Electrical and Computer Engineering, all at the University of Maryland. He received degrees in mathematics and EE/CS from MIT, and an M.S. and Ph.D. in applied mathematics from Harvard University. His research interests include simulation methodology and applied probability modeling, particularly with applications towards manufacturing and financial engineering. He teaches courses in simulation, stochastic modeling, computational finance, and supply chain logistics and operations management. In 1995 he received the Allen J. Krowe Award for Teaching Excellence, and was a University of Maryland Distinguished Scholar-Teacher for 2004–2005. He currently serves as Simulation Area Editor of Operations Research, and was co-Editor for a 2003 special issue on simulation optimization in the ACM Transactions on Modeling and Computer Simulation. He is co-author of the book Conditional Monte Carlo: Gradient Estimation and Optimization Applications, which received the INFORMS College on Simulation Outstanding Publication Award in 1998. His e-mail address is <mfu@rhsmith.umd.edu>.

FRED W. GLOVER is President of OptTek Systems, Inc., and is in charge of algorithmic design and strategic planning initiatives. He currently serves as Comcast Chaired Professor in Systems Science at the University of Colorado. He has authored or co-authored more than 340 published articles and seven books in the fields of mathematical optimization, computer science, and artificial intelligence, with particular emphasis on practical applications in industry and government. Dr. Glover is the recipient of the distinguished von Neumann Theory Prize, as well as of numerous other awards and honorary fellowships, including those from the American Association for the Advancement of Science, the NATO Division of Scientific Affairs, the Institute of Management Science, the Operations Research Society, the Decision Sciences Institute, the U.S. Defense Communications Agency, the Energy Research Institute, the American Assembly of Collegiate Schools of Business, Alpha Iota Delta, and the Miller Institute for Basic Research in Science. He also serves on advisory boards for numerous journals and professional organizations. His email address is <glover@OptTek.com>.

JAY APRIL is Chief Development Officer of OptTek Systems, Inc. He holds bachelors degrees in philosophy and aeronautical engineering, an MBA, and a Ph.D. in Business Administration (emphasis in operations research and economics). Dr. April has held several executive positions including VP of Business Development and CIO of EG&G subsidiaries, and Director of Business Development at Unisys Corporation. He also held the position of Professor at Colorado State University, heading the Laboratory of Information Sciences in Agriculture. His email address is <april@OptTek.com>.