Mark S Daskin
  • Industrial and Operations Engineering
    University of Michigan
    1205 Beal Avenue
    Ann Arbor, MI  48109
    USA
  • 1-734-764-9410
  • I am a faculty member in the Industrial and Operations Engineering Department of the University of Michigan. Prior t...
The first capacitated location-inventory model we introduce in this dissertation assigns each retailer to a single distribution center. We formulate this model as a nonlinear integer program in which the objective function is neither concave nor convex. Feasible solutions for this ...
This chapter begins with a basic taxonomy of facility location models. This is followed by the formulation of five classic facility location models: the set covering model, the maximum covering model, the p-median model, the fixed charge location model, and the p-center problem. Computational results on a new set-covering problem instance with 880 nodes representing 880 population centers in the contiguous United States are provided, and a few counter-intuitive results are outlined. This is followed by a state-of-the-art discussion of multi-objective problems in location analysis and the importance of multiple objectives in designing distribution networks. Models that integrate inventory planning into facility location modeling are then outlined. Finally, the chapter ends with a discussion of reliability in facility network planning.
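As an illustration of the set covering model named above (not the chapter's formulation or its 880-node instance; the candidate sites and coverage sets below are hypothetical), the standard greedy heuristic for choosing facilities can be sketched as:

```python
# Greedy heuristic for the set covering location problem (illustrative
# sketch only; candidate sites and coverage sets are hypothetical).
def greedy_set_cover(demands, coverage):
    """demands: set of demand nodes; coverage: dict site -> set of nodes covered."""
    uncovered = set(demands)
    opened = []
    while uncovered:
        # Open the candidate site covering the most still-uncovered nodes.
        site = max(coverage, key=lambda s: len(coverage[s] & uncovered))
        if not coverage[site] & uncovered:
            raise ValueError("some demand nodes cannot be covered")
        opened.append(site)
        uncovered -= coverage[site]
    return opened

# Tiny hypothetical instance: 5 demand nodes, 3 candidate sites.
sites = greedy_set_cover(
    demands={1, 2, 3, 4, 5},
    coverage={"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5}},
)
print(sites)  # ["A", "C"]
```

The greedy rule is not optimal in general, but it is a common starting point before solving the integer program exactly.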
Hard capacity constraints have been used for decades in facility location modelling and planning. However, such constraints are unrealistic as a variety of operational tools can be used to extend capacity in the short term. To address this, the Inventory-Modulated Capacitated Location Problem (IMCLP) uses inventory as a method of mitigating the hard capacity constraints, but enforces single sourcing. In this paper, we examine a cyclic, day-specific allocation approach to assigning demand sites to processing facilities in the IMCLP. This enables the model to develop a day-of-the-week allocation policy that considers day-to-day variations in the daily processing capacity levels of a set of candidate processing facilities and/or systematic day-to-day demand variations. We demonstrate that allowing demands at a particular site to be allocated to multiple processing facilities in such a manner can be a cost-effective operational tool.
In this paper, we present an extension of the classic p-median facility location model. The new formulation allows the user to trace the trade-off between the demand-weighted average distance (the traditional p-median objective) and the range in assigned demand. We extend the model to incorporate additional constraints that significantly reduce the computation time associated with the model. We also outline a genetic algorithm-based approach for solving the problem. The paper shows that significant reductions in the range in assigned demand are possible with relatively minor degradations in the average distance metric. The paper also shows that the genetic algorithm does very well at identifying the approximate trade-off curve. The model and algorithms were tested on real-life data-sets ranging in size from 33 nodes to 880 nodes.
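For any fixed assignment of demand nodes to open facilities, the two objectives traded off in this model can be computed directly. A minimal illustrative helper (the demands and distances below are hypothetical, not the paper's test instances):

```python
# Compute the two objectives traded off in the model, for a fixed
# assignment (hypothetical data; not the paper's 33- to 880-node sets).
def objectives(assignment, demand, dist):
    """assignment: node -> facility; demand: node -> weight;
    dist: (node, facility) -> distance."""
    total_w = sum(demand.values())
    avg_dist = sum(demand[i] * dist[i, assignment[i]] for i in demand) / total_w
    # Total demand assigned to each open facility.
    load = {}
    for i, j in assignment.items():
        load[j] = load.get(j, 0) + demand[i]
    demand_range = max(load.values()) - min(load.values())
    return avg_dist, demand_range

avg, rng = objectives(
    assignment={1: "A", 2: "A", 3: "B"},
    demand={1: 10, 2: 5, 3: 20},
    dist={(1, "A"): 2, (2, "A"): 4, (3, "B"): 1},
)
print(avg, rng)  # 1.714..., 5
```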
SERVICE SCIENCE. Mark S. Daskin, Department of Industrial and Operations Engineering, University of Michigan, Ann Arbor, MI. A John Wiley & Sons, Inc., publication.
This report is in microfiche form. Two models of supertanker lightering operations are developed. The first is a set of linked queueing models, while the second employs a five-dimensional state space to model the process using the theory of Markov processes. Both models estimate delays to supertankers and to lightering vessels as functions of the supertanker arrival rate, the number of lightering vessels employed, the lightering vessel load, discharge, and transit times, and the number of berths used for lightering. The models are compared, and the input assumptions and output predictions are tested against observed data. The use of the models as planning tools is illustrated.
We study a strategic facility location problem under uncertainty. The uncertainty associated with future events is modeled by defining alternative future scenarios with probabilities. We present a new model which minimizes the expected regret with respect to an endogenously selected subset of worst-case scenarios whose collective probability of occurrence is exactly 1-α. We demonstrate the effectiveness of this new approach by comparing it to the α-reliable p-median minimax regret model and by presenting computational results for large-scale problems. We also present a heuristic for the α-reliable p-median minimax regret model, which involves solving a series of α-reliable mean-excess regret sub-problems.
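For a fixed solution, the mean-excess-style regret measure described above can be evaluated by sorting scenarios by regret and taking the worst tail of probability mass exactly 1-α, splitting the marginal scenario if needed. A sketch with hypothetical regrets and probabilities (the paper optimizes this quantity endogenously; this only evaluates it):

```python
# Mean-excess (CVaR-style) regret of a fixed solution: the expected regret
# over the worst scenarios whose collective probability is exactly 1 - alpha.
# (Illustrative evaluation only; regrets and probabilities are hypothetical.)
def mean_excess_regret(regrets, probs, alpha):
    tail = 1.0 - alpha          # probability mass of the worst-case subset
    order = sorted(range(len(regrets)), key=lambda s: regrets[s], reverse=True)
    remaining, weighted = tail, 0.0
    for s in order:
        p = min(probs[s], remaining)   # split the marginal scenario if needed
        weighted += p * regrets[s]
        remaining -= p
        if remaining <= 1e-12:
            break
    return weighted / tail

# Four hypothetical scenarios, alpha = 0.8: the worst 0.2 of probability mass
# is all of scenario 1 (regret 100) plus half of scenario 2 (regret 40).
val = mean_excess_regret(regrets=[100, 40, 10, 0],
                         probs=[0.1, 0.2, 0.3, 0.4],
                         alpha=0.8)
print(val)  # ≈ 70.0: (0.1*100 + 0.1*40) / 0.2
```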
The major components of delay to rail cars in passing through yards are waiting for classification and connection to an appropriate outbound train. This paper proposes queuing models for each of these components which provide expressions for both the mean and variance of delay times. The models are then used in an example application to draw inferences regarding the effectiveness of alternative strategies for dispatching trains between yards.
We present the Stochastic R-Interdiction Median Problem with Fortification (S-RIMF). This model optimally allocates defensive resources among facilities to minimize the worst-case impact of an intentional disruption. Since the extent of terrorist attacks and malicious actions is uncertain, the problem deals with a random number of possible losses. A max-covering type formulation for the S-RIMF is developed. Since the problem size grows very rapidly with the problem inputs, we propose pre-processing techniques based on the computation of valid lower and upper bounds to expedite the solution of instances of realistic size. We also present heuristic approaches based on heuristic concentration-type rules. The heuristics are able to find an optimal solution for almost all problem instances considered. Extensive computational testing shows that both the optimal algorithm and the heuristics are very successful at solving the problem. A comparison of the results obtained by the two methods ...
Michael Lim (Department of Business Administration, University of Illinois, Urbana-Champaign, IL 61820, USA; mlim@illinois.edu) • Achal Bassamboo and Sunil Chopra (Department of Managerial Economics and Decision Sciences, Northwestern University, Evanston, IL 60208, USA; a-bassamboo@kellogg.northwestern.edu, s-chopra@kellogg.northwestern.edu) • Mark S. Daskin (Department of Industrial and Operations Engineering, University of Michigan, Ann Arbor, MI 48109, USA; msdaskin@umich.edu)
Background: As resident “index” procedures change in volume due to advances in technology or reliance on simulation, it may be difficult to ensure trainees meet case requirements. Training programs are in need of metrics to determine how many residents their institutional volume can support. Objective: As a case study of how such metrics can be applied, we evaluated a case distribution simulation model to examine program-level mediastinoscopy and endobronchial ultrasound (EBUS) volumes needed to train thoracic surgery residents. Methods: A computer model was created to simulate case distribution based on annual case volume, number of trainees, and rotation length. Single institutional case volume data (2011–2013) were applied, and 10 000 simulation years were run to predict the likelihood (95% confidence interval) of all residents (4 trainees) achieving board requirements for operative volume during a 2-year program. Results: The mean annual mediastinoscopy volume was 43. In a simul...
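The core Monte Carlo idea can be sketched in a few lines. This is a deliberately simplified stand-in, not the paper's model: here each case goes to a uniformly random trainee rather than following a rotation schedule, and the case requirement below is a made-up number, not a board requirement:

```python
import random

# Monte Carlo sketch of the case-distribution idea (assumed mechanics, not
# the paper's model): distribute the program's cases among trainees at
# random and estimate the chance that every trainee reaches a minimum.
def prob_all_meet(annual_volume, trainees, requirement, years=2,
                  runs=10_000, seed=0):
    rng = random.Random(seed)
    success = 0
    for _ in range(runs):
        counts = [0] * trainees
        for _ in range(annual_volume * years):
            # Simplified stand-in for the rotation schedule: each case goes
            # to a uniformly random trainee.
            counts[rng.randrange(trainees)] += 1
        if min(counts) >= requirement:
            success += 1
    return success / runs

# 43 cases/year (the mean volume reported above), 4 trainees, 2-year
# program, hypothetical requirement of 15 cases per trainee.
p = prob_all_meet(annual_volume=43, trainees=4, requirement=15)
print(p)  # estimated probability that all 4 trainees reach 15 cases
```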
To support transit agencies in the design and evaluation of more equitable and efficient fare structures, an optimization-based model system has been developed and implemented on a microcomputer. This system seeks distance-based fares of the form: ...
Many objectives have been proposed for optimization under uncertainty. The typical stochastic programming objective of minimizing expected cost may yield solutions that are inexpensive in the long run but perform poorly under certain realizations of the random data. On the other hand, the typical robust optimization objective of minimizing maximum cost or regret tends to be overly conservative, planning against a disastrous but unlikely scenario. In this paper, we present facility location models that combine the two objectives by minimizing the expected cost while bounding the relative regret in each scenario. In particular, the models seek the minimum-expected-cost solution that is p-robust; i.e., whose relative regret is no more than 100p% in each scenario.
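As the definition above suggests, p-robustness of a given solution is straightforward to verify once the scenario-optimal costs are known. A minimal illustrative check (the costs below are hypothetical, not from the paper):

```python
# Check whether a fixed solution is p-robust: its relative regret is at
# most p in every scenario. (Illustrative helper; costs are hypothetical.)
def is_p_robust(solution_costs, optimal_costs, p):
    """solution_costs[s]: cost of the fixed solution in scenario s;
    optimal_costs[s]: cost of the best solution for scenario s alone."""
    for cost, opt in zip(solution_costs, optimal_costs):
        relative_regret = (cost - opt) / opt
        if relative_regret > p:
            return False
    return True

# Relative regrets here are 10% and 25%, so the solution is
# 0.25-robust but not 0.20-robust.
ok = is_p_robust([110, 250], [100, 200], p=0.25)
bad = is_p_robust([110, 250], [100, 200], p=0.20)
print(ok, bad)  # True False
```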
In this paper, the 'partition and repeat' strategy (see McGinnis et al., 1992) is used for managing PCB assembly resources. Following McGinnis et al. (1992), a partition and repeat (PAR) strategy partitions the components required by the family into subsets such that the group has enough staging capacity for each subset. The group is configured to run each subset in turn, requiring the accumulation of all the partially completed PCBs in the family. The advantage of the PAR strategy is that it requires relatively few setups; the disadvantage is that it requires every PCB in the family to be accumulated as partially completed work in process.
We present a new formulation for assigning students to groups to maximize the diversity within each group. We compare its solution to that of the well-known linearized maximally diverse grouping problem. The new formulation minimizes similar student attributes within a group by penalizing the deviations from the target number of students with each attribute within each group. We apply the model to the task of assigning University of Michigan Engineering Global Leadership (EGL) Honors Program students to cultural families. The EGL program implemented the results of the model with minimal changes.
... They propose a staggered work shift schedule, each starting on the hour, to better match the ... Finally, there seems to have been relatively little work that cuts across the two primary ... dutyhours/dh index.asp 2. ACGME (2008) The ACGME's approach to limit resident duty hours ...
Increased nurse-to-patient ratios are associated with increased costs, but also with improved patient care and reduced nurse burnout rates. Thus, it is critical from a cost, patient safety, and nurse satisfaction perspective that nurses be utilized efficiently and effectively. To address this, we propose a stochastic programming formulation for nurse staffing that accounts for variability in the patient census and nurse absenteeism, day-to-day correlations among the patient census levels, and costs associated with three different classes of nursing personnel: unit, pool, and temporary nurses. The decisions to be made include: how many unit nurses to employ, how large a pool of cross-trained nurses to maintain, how to allocate the pool nurses on a daily basis, and how many temporary nurses to utilize daily. A genetic algorithm is developed to solve the resulting model. Preliminary results using data from a large university hospital suggest that the proposed model can save a four-unit pool hundreds of thousands of dollars annually compared with the crude heuristics the hospital currently employs.
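The unit-versus-temporary trade-off at the heart of such models has a newsvendor flavor: unit nurses are paid regardless of the census, while temporary nurses fill any shortfall at a premium. A deliberately simplified sketch (not the paper's stochastic program; the census distribution, wages, and 4:1 patient ratio below are all hypothetical):

```python
# Newsvendor-flavored sketch of the staffing trade-off (simplified, not the
# paper's model): choose the unit-nurse level minimizing expected daily cost
# when temporary nurses cover any shortfall at a premium.
def expected_cost(unit_nurses, census_dist, unit_wage, temp_wage, ratio=4):
    """census_dist: list of (patients, probability); one nurse per
    `ratio` patients (rounded up)."""
    cost = 0.0
    for patients, prob in census_dist:
        needed = -(-patients // ratio)          # ceiling division
        shortfall = max(0, needed - unit_nurses)
        cost += prob * (unit_nurses * unit_wage + shortfall * temp_wage)
    return cost

census = [(16, 0.5), (24, 0.3), (32, 0.2)]      # hypothetical daily census
best = min(range(4, 9), key=lambda u: expected_cost(u, census, 300, 800))
print(best)  # 6
```

With these numbers, staffing for the middle census level (6 unit nurses) beats both staffing for the minimum and staffing for the peak, which is exactly the balance the stochastic program formalizes.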
The centdian problem [2,3] seeks P points that minimize a convex combination of the median (average) and center (maximum) distance objectives. In this paper we outline an approach to finding all non-dominated points on the tradeoff between these two objectives, including those that are contained in the duality gap region (i.e., those that could not be found by minimizing a convex combination of the two objectives). We solve the problem assuming that facilities must be located on the nodes of the network. We first solve the P-center problem, yielding the smallest maximum distance within which all demands can be served by P facilities. We then solve the P-median problem, yielding the smallest average distance. The largest distance, Dc, for the P-median solution is computed. A suitably large endogenously determined constant is added to all distances greater than or equal to Dc. The solution to a new P-median problem with the modified distance matrix will utilize new facility locations, will have a maximum assigned distance strictly less than Dc, and will have a larger average distance. We then compute the maximum distance for the new solution and repeat the process until the maximum distance for the last solution found equals the objective function value for the P-center problem. The algorithm was tested on problems ranging in size from 49 nodes to 200 nodes and for values of P = 5, 10, 15, 20.
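The iterative scheme described above can be demonstrated end to end on a toy instance. The sketch below substitutes brute-force enumeration for the real P-median/P-center solvers the paper uses, and the five-node line instance is hypothetical:

```python
from itertools import combinations

# Brute-force sketch of the iterative tradeoff scheme described above
# (tiny hypothetical instance; the paper solves real P-median and P-center
# models, not an enumeration like this).
def solve_p_median(dist, p, penalize_at=None, big_m=10**6):
    """Try every p-subset of nodes as facility sites; distances at or above
    penalize_at are inflated by big_m so the next solution must avoid them."""
    nodes = range(len(dist))
    def d(i, j):
        v = dist[i][j]
        return v + big_m if penalize_at is not None and v >= penalize_at else v
    sites = min(combinations(nodes, p),
                key=lambda S: sum(min(d(i, j) for j in S) for i in nodes))
    assigned = [min(dist[i][j] for j in sites) for i in nodes]
    return sum(assigned), max(assigned)

# Five demand nodes on a line (hypothetical positions); P = 1.
pos = [0, 1, 2, 3, 10]
D = [[abs(a - b) for b in pos] for a in pos]

tradeoff, cutoff = [], None
while True:
    total, worst = solve_p_median(D, p=1, penalize_at=cutoff)
    if cutoff is not None and worst >= cutoff:
        break                     # maximum distance can no longer be reduced
    tradeoff.append((total, worst))
    cutoff = worst
print(tradeoff)  # [(12, 8), (13, 7)]: average rises as the maximum falls
```

Each pass trades a little average distance for a strictly smaller maximum distance, terminating once the center-optimal maximum is reached.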
... The Carbon Footprint of UHT Milk. Lejun Qi • Saif Benjaafar • Shaun Kennedy. Industrial and Systems Engineering, University of Minnesota, Minneapolis, MN 55455; saif@umn.edu ... (Thoma et al. 2009) and (McReynolds 2009). ...
The p-median problem is central to much of discrete location modeling and theory. While the p-median problem is NP-hard on a general graph, it can be solved in polynomial time on a tree. A linear time algorithm for the 1-median problem on a tree is described. We also present a classical formulation of the problem. Basic construction and improvement algorithms are outlined. Results from the literature using various metaheuristics including tabu search, heuristic concentration, genetic algorithms, and simulated annealing are summarized. A Lagrangian relaxation approach is presented and used for computational results on 40 classical test instances as well as a 500-node instance derived from the most populous counties in the contiguous United States. We conclude with a discussion of multi-objective extensions of the p-median problem.
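One classic linear-time approach to the 1-median problem on a tree is Goldman's leaf-trimming algorithm: repeatedly fold a light leaf's weight into its neighbor until some vertex carries at least half the total weight. A compact sketch (not tuned for strict linearity, and not necessarily the chapter's exact presentation; the tree below is a hypothetical example):

```python
# Goldman-style leaf trimming for the 1-median of a weighted tree: a vertex
# holding at least half the total weight is a 1-median; otherwise trim a
# leaf and fold its weight into its neighbor. (Standard algorithm; the
# example tree is hypothetical.)
def tree_1_median(weights, edges):
    """weights: vertex -> demand weight; edges: list of (u, v) tree edges."""
    w = dict(weights)
    adj = {v: set() for v in w}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    half = sum(w.values()) / 2
    while True:
        # A vertex with at least half the total weight is a 1-median.
        heavy = [v for v in adj if w[v] >= half]
        if heavy:
            return heavy[0]
        leaf = next(v for v in adj if len(adj[v]) == 1)
        (parent,) = adj[leaf]
        w[parent] += w[leaf]           # fold the leaf's demand into its neighbor
        adj[parent].remove(leaf)
        del adj[leaf], w[leaf]

# Path a-b-c-d with weights 1, 2, 3, 6 (total 12, half 6): d qualifies.
med = tree_1_median({"a": 1, "b": 2, "c": 3, "d": 6},
                    [("a", "b"), ("b", "c"), ("c", "d")])
print(med)  # d
```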
This paper presents an optimization-based model to compute least-cost-to-society strategies for technology deployment and retirement in the passenger vehicle and electric power generation sectors to meet greenhouse gas (GHG) reduction targets set by the Intergovernmental Panel on Climate Change (IPCC) through 2050. The model output provides a timeline and technology quantities to be deployed or retired early for years 2011 through 2050, as well as annual and total costs-to-society and GHG emissions. Model inputs include costs of deploying or retiring incumbent and elective GHG-reducing technologies, as well as numerous scenarios for energy prices and technology costs. In addition to constraints on GHG emissions and scenario constraints for retirement and market factors, the model framework provides the ability to investigate the effect of additional constraints such as renewable portfolio standards and increases in corporate average fuel economy. Ultimately, the framework is intended to support a broader policy discussion by quantitatively evaluating existing or proposed policy measures for any country or geographic region. The paper describes the model framework and its various components, along with its relevance and application in technology policy. It also presents the mathematical formulation of the linear programming model that runs at the core of the framework. Results are presented from application of the framework to the U.S. automotive market operating under IPCC GHG constraints to determine technology deployment and retirement trajectories for automotive technologies through 2050 under various future scenarios.

And 88 more