Inventory Accuracy
John R. English, Kansas State University; Long Yu, University of Arkansas; Manuel Rossetti, University of Arkansas; Nebil Buyurgan, University of Arkansas
Abstract

Inventory accuracy is a critical concern in most industrial environments. When on-hand inventories do not match recorded inventories, time is spent rectifying the observed discrepancies. These activities, which span corrections to the database through expedited replenishment, consume resources to rectify human error, yet they are necessary to satisfy customer expectations. If inventory accuracy is poor, tangible costs (e.g., the loss of customer goodwill) are realized. Cycle counting is a proven methodology used to monitor inventory accuracy on a continuous basis: items kept in inventory are counted periodically to ensure an accurate inventory, which amounts to 100% inspection of all stock keeping units maintained in inventory on a periodic basis. This work demonstrates the effectiveness of a statistical process control (SPC) approach to monitoring inventory accuracy as an alternative to cycle counting. The benefit of such an approach is that random samples are utilized in lieu of 100% inspection. In this research, we document the unique statistical properties of inventory scenarios found in large retail/warehousing environments. The robustness of SPC, specifically the p chart, in these environments is measured through computer simulation, and the resulting probabilities of Type I and Type II errors are presented.
1. Introduction
For operations in large warehousing/retail environments to be effective, the perpetual inventory needs to match the recorded inventory for all stock keeping units (SKUs) in an operational unit. In this work, the inventory accuracy of a given operational unit is the number of SKUs whose perpetual inventory matches the recorded level divided by the number of SKUs in the operational unit. If inventory accuracy is poor, or if the on-hand inventories fail to match recorded inventories in an excessive manner, personnel must spend time rectifying discrepancies, which may include wasting money on unnecessary replenishment and/or having insufficient stock to satisfy customer demand. Cycle counting is a proven methodology used to check and monitor inventory accuracy through continuous verification of inventory. Unlike a traditional physical inventory count, which stops the operation and counts all items at one time, cycle counting provides an online process for checking inventory accuracy. Cycle counting is repeated for each stock keeping unit (SKU) in inventory for each reporting period, and inventory records are updated as required. When cycle counting is implemented in a facility, all SKUs are counted periodically to ensure inventory matches the levels recorded in the company's inventory system. Brooks and Wilson [1] state that, through the proper use of cycle counting, inventory record accuracy above 95% can be consistently maintained. When the number of on-hand items for a given SKU does not match the recorded level in a company's inventory system, the inventory for that SKU is considered inaccurate or, in the quality domain, defective. It is well known that statistical process control (SPC) is an effective statistical approach used to monitor processes and improve quality through variation reduction. SPC requires periodic assessment of process stability, in view of its natural random behavior, based upon the observed outcome of a random sample.
It has the benefit of much reduced resource expenditures, as it depends upon random samples and not 100% inspection of all products produced. SPC is used for continuous process improvement, process monitoring, and decision making through ongoing statistical examination of data. Effective use of SPC results in improved product quality, reduced waste, improved productivity, and improved customer service. In essence, SPC is utilized to monitor the variability of a process, and embedded statistical tests are used to judge the stability, or stationarity, of the process. Improvements or degradation of process performance are observed and interpreted in a statistically valid manner. Minimizing the production of inferior quality goods is always of the highest priority. In SPC, one of the most popular approaches is to monitor the percent nonconforming from a process. For example, a manufacturing facility may desire to know the percent of nonconforming units coming off a production line. In lieu of inspecting each item produced, which can lead to very large inspection errors, a random sample of n items is selected from the production process and each item is examined to determine whether it meets quality requirements. The observed number of defective items is divided by the sample size to provide a point estimate of the percent nonconforming of the process. This scenario is perfectly suited for the application of the SPC technique known as the p chart. The p chart is used to monitor and control the percent nonconforming of such a
process. Historically, SPC and cycle counting have advanced in isolation from each other. If techniques can be developed that integrate the two methodologies, an effective approach can be realized for environments where the number of SKUs exceeds the practical limits of cycle counting. The goal of this research is to show that SPC can be applied to environments needing to monitor, and hopefully improve, inventory accuracy. It is the thesis of this work to present the p chart as an efficient alternative to cycle counting as a means of monitoring inventory accuracy. We refer to this application of the p chart to the inventory accuracy domain as perpetual inventory record sampling (PIRS). It is not uncommon in some organizations to find thousands of SKUs within an operational department and hundreds of thousands of SKUs representing the entire organization across operational departments. Such environments are common to large distributors and retailers, and in them management often has the impression that cycle counting is not feasible. When cycle counting, or 100% inspection, is not feasible, statistically valid sampling is an option that provides ongoing estimation of population parameters (e.g., inventory accuracy). PIRS is a viable alternative to cycle counting for large retailers, as it significantly reduces the number of SKUs inspected, provided the associated Type I and Type II error rates can be tolerated. Furthermore, it is argued that cycle counting can be excessively error-prone, as it necessarily requires 100% inspection. It is a well-accepted fact in the quality control domain that 100% inspection is risky in view of operator error. If one must depend upon 100% inspection or cycle counting, care should be taken in interpreting results, as operational errors are almost assured unless great lengths are taken to error-proof the inspection process.
A given department is made up of SKUs, each having different statistical properties, which violates the SPC assumption that all samples come from the same stationary process. In this paper, we present a robustness analysis of an SPC/PIRS approach to cycle counting. This research is unique in that the Type I and Type II error rates are examined when the population is represented by individual SKUs, each having a unique and stationary Bernoulli process that describes the likelihood that the perpetual inventory will match the recorded inventory level. The effectiveness of the SPC/PIRS approach is examined through a classic Type I and Type II error rate analysis, based upon computer simulation, with process parameters set at values often found in industry.
1.1 Related Work
This research requires the integration of two technical fields: statistical process control and cycle counting. In this section, we provide a brief overview of these two fields to support the utilization of SPC as an alternative approach to cycle counting. Montgomery [2] describes many ways to manage the quality of a process. The fundamental tools of SPC include histograms, check sheets, Pareto charts, cause and effect diagrams, defect concentration diagrams, scatter diagrams, and control charts. Shewhart [3] is credited as the inventor of the control chart and presented industry a tool
to monitor the quality of products or services without inspecting every product being produced. Most industrial engineering programs offer courses in this area and utilize texts such as Montgomery [2]. Two types of data are monitored in processes: variable and attribute. Variable data are measured on a continuous numerical scale (e.g., length, width, etc.). The Shewhart charts for these quality characteristics are called variable charts; commonly used control charts for variable data are the X-bar, R, and S charts. The simultaneous use of X-bar and R charts is the most common application of variable control charts. Many quality characteristics cannot be represented by a number. In such cases, the observed results are usually classified as a zero or a one (e.g., conforming/nonconforming, defective/not-defective, etc.). This type of quality characteristic is called an attribute. The most commonly used attribute charts include the p, np, c, and u charts. Regardless of the data monitored with the selected control chart, a random sample is selected from the process of concern and the calculated statistic (the percent nonconforming for the p chart, the sample mean for the X-bar chart, etc.) is compared to the prescribed upper control limit (UCL) and lower control limit (LCL). The control limits for a given statistic, say Y, are determined as follows:

UCL_Y = E[Y] + L*sqrt(Var[Y])    (1)
LCL_Y = E[Y] - L*sqrt(Var[Y])

where E[Y] and Var[Y] are the expected value and variance of the random variable Y, respectively. It is customary to set the value of L such that the Type I error is controlled; in most applications, L is set equal to three. When a data point falls outside the limits of a given control chart, it indicates an out-of-control condition. The average run length (ARL) is the average number of points needed to detect an out-of-control condition.
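To make Equations (1) and (2) concrete for the p chart, the limits specialize to p +/- L*sqrt(p(1-p)/n), and the in-control ARL follows by inverting the exact binomial probability that a plotted point falls outside those limits. The sketch below is illustrative (the p and n values are examples, not prescriptions):

```python
import math

def p_chart_limits(p, n, L=3):
    """Equation (1) for the p chart: E[p-hat] = p, Var[p-hat] = p(1-p)/n."""
    sigma = math.sqrt(p * (1 - p) / n)
    return max(0.0, p - L * sigma), p + L * sigma

def binom_cdf(k, n, p):
    """P(D <= k) for D ~ Binomial(n, p)."""
    return sum(math.comb(n, d) * p**d * (1 - p)**(n - d) for d in range(k + 1))

def in_control_arl(p, n, L=3):
    """Equation (2): ARL = 1/p', where p' is the exact probability that the
    plotted fraction falls outside the limits of a stable process."""
    lcl, ucl = p_chart_limits(p, n, L)
    p_signal = (binom_cdf(math.ceil(n * lcl) - 1, n, p)
                + 1 - binom_cdf(math.floor(n * ucl), n, p))
    return 1 / p_signal

# Example: inventory accuracy p = 0.5 monitored with samples of n = 169 SKUs
lcl, ucl = p_chart_limits(0.5, 169)   # roughly (0.385, 0.615)
```

Because the binomial is discrete, the effective Type I error of a 3-sigma p chart is usually somewhat below the normal-theory 0.0027, which is why the in-control ARL is computed from the exact tail probabilities rather than assumed.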
For control charts, the ARL can be calculated from

ARL = 1/p'    (2)

where p' is the probability that a point is plotted beyond the control limits. In other words, the in-control ARL for a stationary process is simply the inverse of the probability of a Type I error, alpha. ARL calculations become quite complicated, as documented in the literature. For example, Champ and Woodall [4] model control charts when runs rules are utilized, and English and Alam [5] present ARLs for control charts when data are serially correlated. Similarly, the out-of-control ARL is 1/[1 - P(Type II error)] and is the average number of points required to detect a process shift when one has occurred. Thus, large in-control ARLs and small out-of-control ARLs are desired. The ARL has been used in many research advances to evaluate the performance of most control charts. For example, Lucas and Saccucci [6] and Crowder [7] provide alternative means to compute the ARL of the EWMA control chart, while Champ and Woodall [4] utilize Markov processes to compute ARLs for the X-bar chart. Vardeman
and Ray [8] present tables of average run lengths for the exponential case and comment on an application of exponential CUSUM charts to controlling the intensity of a Poisson process. Many other examples are found in the literature. Inventory record accuracy is critical for the efficient operation of organizations. It is based on what is observed in the field versus what is recorded in the inventory records (Brooks and Wilson [1] and Plaff [9]). With the appropriate inventory system and efficient processes for inventory inspection, inventory accuracy can be monitored. An inventory record accuracy based on dollar value is not recommended, since dollar value biases the measure toward the more costly SKUs. Inventory record accuracy for a given department is computed as follows:

Accuracy = (Total number of accurate records / Number of records checked) x 100%    (3)
In effect, inventory accuracy for a given department made up of N SKUs is an attribute variable consisting of the percentage of accurate inventory records, and it is suited to the application of a p chart. Iglehart and Morey [10] attempt to select the type and frequency of counts, and to modify the predetermined stocking policy, so as to minimize the total cost per unit time subject to the probability of a warehouse denial between counts being below a prescribed level. That is, such a process relies heavily upon accurate inventory records being realized by the cycle counting technique. Carlson and Gilman [11] describe procedures that make use of cycle counting. Wilson [12] and Stahl [13] present cycle counting as a quality assurance process emphasizing the finding and correcting of errors. Bergman [14] illustrates a multiple-criteria weighting system by which each SKU is ranked according to common usage across the bill-of-material, lead time, method of issue, and number of issues. Flores and Whybark [15, 16] formally extend the ABC analysis based on usage and dollar values to include non-cost factors such as certainty of supply, impact of a stockout, and rate of obsolescence. Tersine [17] describes the goals of cycle counting as: (1) to identify the causes of errors, (2) to correct the conditions causing the errors, (3) to maintain a high level of inventory record accuracy, and (4) to provide a correct statement of assets. As reported in Brooks and Wilson [1], inventory accuracy can be maintained above 95% if cycle counting is properly implemented. Cycle counting is used to find the reasons leading to inventory inaccuracy rather than simply adjusting the records. In cycle counting, items are counted, and the errors and the reasons leading to them are recorded. As a result, inventory accuracy is continuously improved with the use of cycle counting.
Brooks and Wilson [1] recommend several cycle counting methodologies that might be implemented in a particular inventory process: control group cycle counting, random sample cycle counting, ABC cycle counting, and process control cycle counting. Cycle counting techniques require 100% inspection and are documented to maintain continuous inventory accuracy of 95%. Because 100% inspection is not feasible for many companies that carry a large volume of inventory, random samples are necessary. SPC, specifically the use of control charts, is known to be well suited to monitoring process performance based upon random samples, but the associated statistical errors can degrade the estimation of inventory accuracy, which could be an issue in using SPC to monitor inventory accuracy.
2. Process Model
It is necessary to define basic notation and our assumptions to describe the modeling details of this effort.
2.1 Notation
N = number of stock keeping units (SKUs) in an operational unit/department
n = sample size collected from a given operational unit
p = expected proportion of SKUs whose perpetual inventory matches the recorded inventory level (called inventory accuracy for this application)
p_i = Bernoulli rate for the inventory accuracy of SKU i; i.e., the probability that SKU i will be found accurate (the perpetual inventory matches the recorded inventory level)
delta_i = shift in p_i for SKU i
X_{i,j} = indicator variable taking the value 0 or 1 according to whether SKU i in sample j is inaccurate or accurate, respectively
n_j = jth sample of n units from a given operational unit
D_j = number of defective units in n_j
p-hat_j = (sum of X_{i,j} over the n_j sampled SKUs) / n_j, the estimate of p, the expected inventory accuracy, for an operational unit for sample j
V = variability in the p_i values for a given operational unit
2.2 Assumptions
1. The successive samples collected for an operational unit are selected at random from all SKUs.
2. The samples are collected without replacement.
3. p_i ~ U(p - V, p + V), where U denotes the continuous uniform distribution between p - V and p + V.
4. When shifted conditions are considered, all SKUs are shifted by the same amount, delta.
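Under these assumptions, one sample of the process can be sketched as follows (a minimal illustration; the function names and seed are our own, not part of the paper):

```python
import random

def make_rates(N, p, V, seed=1):
    """Assumption 3: each SKU gets its own accuracy rate
    p_i ~ Uniform(p - V, p + V)."""
    rng = random.Random(seed)
    return [rng.uniform(p - V, p + V) for _ in range(N)]

def sample_accuracy(rates, n, delta=0.0, rng=None):
    """Assumptions 1-2: n SKUs drawn at random without replacement;
    Assumption 4: all rates shifted by the same delta.
    Returns p-hat, the observed fraction of accurate records."""
    rng = rng or random.Random()
    chosen = rng.sample(rates, n)           # without replacement
    return sum(rng.random() < r + delta for r in chosen) / n
```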
2.3 Modeling Details
For this application, it is desired to assess the robustness of the p chart with 3-sigma limits (see Montgomery [2]) in detecting changes in inventory accuracy when the operational unit has hundreds to thousands of stock keeping units. It is unreasonable to assume that the inventory accuracy of every SKU follows the same Bernoulli process; as a result, the p_i for a given SKU is in fact a random variable. This application assumes that each p_i is a continuous uniform random variable between p - V and p + V. As the inventory accuracy of a given SKU i, p_i, shifts, the detection strength, or robustness, of the p chart must be measured in view of the variability of the p_i values. Since sampling is performed without replacement and the p_i values are random variables, the probability of detection cannot be determined exactly by analytical means; therefore, simulation is in order. For this application, we consider the following scenarios that can be found in industry today:

Table 1. Experimental Factors and Levels Considered

  Experimental Factor    Levels Considered
  N                      300; 6,000; 11,000
  p                      0.5; 0.75
  delta_i                no shift; 0.05; 0.10
  V                      0.05; 0.10; 0.15

The sample size, n, is determined in accordance with the approach described in Israel [18]. Specifically, the recommended sample size depends upon N and p. The resulting values of n considered for this work are shown in Table 2.

Table 2. n for given N and p values

  N         p      n
  300       0.5    169
  6,000     0.5    363
  11,000    0.5    371
  300       0.75   149
  6,000     0.75   277
  11,000    0.75   283
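Israel [18] tabulates sample sizes of the kind shown in Table 2. One common way to reproduce values in this neighborhood is Cochran's formula with a finite-population correction; the 95% confidence level (z = 1.96) and 5% precision below are our assumptions, and rounding conventions can move an entry by a unit or two:

```python
import math

def sample_size(N, p, e=0.05, z=1.96):
    """Cochran's n0 = z^2 * p * (1 - p) / e^2, corrected for a
    finite population of N SKUs."""
    n0 = z**2 * p * (1 - p) / e**2
    return math.ceil(n0 / (1 + (n0 - 1) / N))

# e.g. sample_size(300, 0.5) reproduces the 169 in Table 2
```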
The simulation is structured in accordance with the environment common to cycle counting. That is, N SKUs are contained within a given operational unit. For each SKU, an individual p_i is generated according to the specified levels of p and V. Each p_i is then shifted in accordance with the experimental setting of delta_i. The process is run until a point is plotted beyond the control limits of the p chart, using the values of p and n for the given experimental condition. The run length, RL, is recorded, and the process is run 1000 times using the previously generated p_i values; a resulting average run length is determined. This process is replicated 10 times in order to compute
the resulting 95% confidence interval of the mean ARL for a given experimental condition, which is inverted to provide an inferential estimate of the probability of either a Type I or Type II error. In total, 54 experimental conditions are considered. Considering the number of RLs in a given ARL and the number of replications, the findings reported are representative of 540,000 experiments. In the results section, we summarize our findings with respect to the impact that variability in the underlying Bernoulli processes has on the detection strength of the p chart used to track inventory accuracy. The levels considered are representative of values observed in some sectors of industry. The combined levels of the factors above yield 18 experimental conditions that give rise to the effective Type I error rates, or in-control ARLs, for this application of the p chart. That is, with the simulation of the experimental conditions described above, we effectively consider practical environments where the variability of the Bernoulli rate is mild (0.05), average (0.10), or severe (0.15). The remaining 36 scenarios represent shifted conditions where the Bernoulli rates are increased, or improved, by 0.05 and 0.10. The resulting probabilities provide great insight into the Type I and Type II errors.
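The experiment just described can be sketched as follows; this is an illustrative reconstruction under the stated assumptions, with run counts reduced from the paper's 1000 x 10 design:

```python
import math
import random
import statistics

def run_length(rates, n, p, delta=0.0, rng=None, cap=100000):
    """Number of samples until p-hat falls outside the 3-sigma p-chart
    limits built from the nominal p and n."""
    rng = rng or random.Random()
    sigma = math.sqrt(p * (1 - p) / n)
    lcl, ucl = p - 3 * sigma, p + 3 * sigma
    for t in range(1, cap + 1):
        chosen = rng.sample(rates, n)                        # without replacement
        phat = sum(rng.random() < r + delta for r in chosen) / n
        if phat < lcl or phat > ucl:
            return t
    return cap

def replicate_arl(N, p, V, n, delta=0.0, runs=100, seed=7):
    """One replication: average run length over `runs` runs for a fixed,
    randomly generated set of SKU rates."""
    rng = random.Random(seed)
    rates = [rng.uniform(p - V, p + V) for _ in range(N)]
    return statistics.mean(run_length(rates, n, p, delta, rng) for _ in range(runs))
```

Repeating `replicate_arl` with different seeds gives the 10 replication means from which the confidence interval on the ARL, and hence on the error probability, is formed.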
3. Results

3.1 Type I Errors
As described in the preceding section, 18 experimental conditions are modeled to examine the effective Type I error rates. For each of the 18 experimental conditions, we run the p chart to the out-of-control condition 1000 times and determine the resulting ARLs for 10 replications. The sample sizes utilized follow the recommendations of Israel [18]. The resulting 10 ARLs are used to determine 95% confidence intervals on the mean Type I error rate. Figures 1 through 6 present the upper and lower 95% confidence limits on the average probability of a Type I error (alpha) for the selected levels of variability. For each combination of N and n, the confidence intervals are plotted alongside the values that would hold if there were no variability; the no-variability values are simply calculated using the associated Binomial distribution.
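The interval computation can be sketched as follows; the ARL values below are illustrative placeholders (not the paper's data), and 2.262 is the 97.5th percentile of the t distribution with 9 degrees of freedom, matching the 10-replication design:

```python
import math
import statistics

def arl_ci95(arls):
    """Two-sided 95% t interval on the mean ARL; 2.262 = t(0.025, 9)
    for exactly 10 replications."""
    assert len(arls) == 10
    m = statistics.mean(arls)
    half = 2.262 * statistics.stdev(arls) / math.sqrt(len(arls))
    return m - half, m + half

# Illustrative replication means (not the paper's data)
arls = [480, 455, 470, 492, 461, 488, 475, 468, 483, 458]
lo, hi = arl_ci95(arls)
# Inverting the interval bounds the Type I error probability:
alpha_lo, alpha_hi = 1 / hi, 1 / lo
```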
[Figures 1-6: upper and lower 95% confidence limits (LCL, UCL) on the probability of a Type I error, roughly 0.0022 to 0.0034, plotted against variability for each combination of N and n.]
The 18 scenarios considered provide clear evidence that variability in the p_i values results in slight but statistically significant increases in the Type I error. This increase, though real, does not appear significant enough to disallow the use of p charts in this environment. As a result, as long as the variability in the p_i values across all SKUs within a department is small, the p chart may be used with reasonable assurance that the Type I error is very close to that of the classic application of the p chart, where the Bernoulli rates are constant.
3.2 Type II Errors
The remaining 36 scenarios provide an assessment of the Type II errors that would be realized in this application domain. The quality control literature often focuses upon the strength of the control chart in detecting out-of-control conditions. As a result, the confidence intervals on the average probability of detection (1 - beta), or power, are recorded for each of the shifted conditions outlined in Table 1. As in the Type I error analysis, we have computed the power of the p chart to detect the shifted condition if there were no variability across SKUs within an operational unit. As was the case for alpha, the 1 - beta calculation without variability is a simple application of the Binomial distribution. Figures 7-18 present the simulated results.
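The no-variability power values come from a direct Binomial calculation; a sketch (the shift is applied to the common rate p, with limits still built from the nominal p):

```python
import math

def binom_cdf(k, n, p):
    """P(D <= k) for D ~ Binomial(n, p)."""
    return sum(math.comb(n, d) * p**d * (1 - p)**(n - d) for d in range(k + 1))

def power_no_variability(p, n, delta, L=3):
    """1 - beta: probability that a single sample signals when every
    SKU's rate shifts from p to p + delta."""
    sigma = math.sqrt(p * (1 - p) / n)
    lcl, ucl = p - L * sigma, p + L * sigma
    shifted = p + delta
    below = binom_cdf(math.ceil(n * lcl) - 1, n, shifted) if lcl > 0 else 0.0
    above = 1 - binom_cdf(math.floor(n * ucl), n, shifted)
    return below + above
```

As expected, the larger shift (delta = 0.10) yields substantially higher power than the smaller one (delta = 0.05) for the same p and n.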
[Figures 7-18: upper and lower 95% confidence limits (LCL, UCL) on the probability of detection (1 - beta) plotted against variability (0.05, 0.10, 0.15) for each shifted condition, alongside the corresponding no-variability values.]
Figures 7-18 present our findings in view of Type II errors. It is evident that, for these types of changes, the power of the p chart is not impacted by the introduction of variability across SKUs in an operational unit. This observation provides motivation for the use of the PIRS/SPC approach to tracking inventory accuracy.
4. Summary
In this paper, we present the impact of nonstationarity, or variability, in the Bernoulli rate on the p chart, as implied by the utilization of the p chart to monitor the proportion of SKUs within an operational department whose perpetual inventory matches the recorded inventory at a point in time. For poor to fair inventory accuracy (50-75% accurate), it is shown that variability in the underlying rates for specific SKUs significantly increases the Type I error rate. In particular, we show that the probability of a Type I error is significantly greater when variability is introduced, although the increases do not appear to be practically impacting. Moreover, as the variability is increased further for a given inventory accuracy (p), department size (N), and sample size (n), there does not appear to be an additional increase in Type I errors. The results for the effective Type II errors indicate that there is no significant change in detection strength when variability across SKUs within a department is introduced. These findings provide sufficient motivation for trusting the SPC/PIRS approach to monitoring inventory accuracy.
As a rule of thumb, if this approach is utilized in industry to monitor perpetual inventory accuracy, care should be exercised in establishing the sample size. For smaller departments, say around 300 SKUs, sample sizes of 200 would be sufficient to judge changes in inventory accuracy. For larger departments (SKUs on the order of thousands), sample sizes of 300-400 are certainly justified.
References
[1] Brooks, R.B. and Wilson, L.W. (2004) Inventory Record Accuracy: Unleashing the Power of Cycle Counting, John Wiley & Sons.
[2] Montgomery, D.C. (2005) Applied Statistics and Probability for Engineers, 5th edition, John Wiley and Sons, New York.
[3] Shewhart, W.A. (1926) Quality Control Charts, Bell System Technical Journal, 593-603.
[4] Champ, C.W. and Woodall, W.H. (1987) Exact Results for Shewhart Control Charts with Supplementary Runs Rules, Technometrics, 29, 4, 393-399.
[5] English, J.R. and Alam, J. (2001) Modeling and Process Disturbance Detection of Autocorrelated Data, Nonlinear Analysis, 47, 2103-2111.
[6] Lucas, J.M. and Saccucci, M.S. (1990) Average Run Lengths for Exponentially Weighted Moving Average Control Schemes Using the Markov Chain Approach, Journal of Quality Technology, 22, 2, 154-162.
[7] Crowder, S.V. (1987) A Simple Method for Studying Run-Length Distributions of Exponentially Weighted Moving Average Charts, Technometrics, 29, 4, 401-407.
[8] Vardeman, S. and Ray, D. (1985) Average Run Lengths for CUSUM Schemes When Observations Are Exponentially Distributed, Technometrics, 27, 2, 145-150.
[9] Plaff, B. (1999) Count Your Parts: Improving Storeroom Accuracy for Maintenance Customers, IIE Solutions, 31, 12, 29-31.
[10] Iglehart, D. and Morey, R. (1972) Inventory Systems with Imperfect Asset Information, Management Science, 18, 8, B388-B394.
[11] Carlson, J. and Gilman, R. (1978) Inventory Decision Systems: Cycle Counting, Journal of Purchasing and Materials Management, Winter, 14, 4, 21-28.
[12] Wilson, J. (1995) Quality Control Methods in Cycle Counting for Record Accuracy Management, International Journal of Operations & Production Management, 15, 7, 27-39.
[13] Stahl, R.A. (1998) Cycle Counting: A Quality Assurance Process, Hospital Material Management Quarterly, 20, 2, 22-28.
[14] Bergman, R.P. (1988) A B Count Frequency Selection for Cycle Counting Supporting MRP II, CIM Review with APICS News, May, 35-36.
[15] Flores, B.E.
and Whybark, D.C. (1986) Multiple Criteria ABC Analysis, International Journal of Operations & Production Management, 6, 3, 38-46.
[16] Flores, B.E. and Whybark, D.C. (1987) Implementing Multiple Criteria ABC Analysis, Journal of Operations Management, 7, 1-2, 79-85.
[17] Tersine, R. (1994) Principles of Inventory and Materials Management, 4th edition, Prentice Hall, Englewood Cliffs, New Jersey.
[18] Israel, G.D. (2003) Determining Sample Size, PEOD6, University of Florida, Agricultural Education and Communication Department, Institute of Food and Agricultural Sciences, http://edis.ifas.ufl.edu.
Biographical Sketches
John R. English is Dean and the LeRoy C. Paslay Chair in Engineering of the College of Engineering at Kansas State University. Dr. English is also Professor of Industrial Engineering. He has BSEE and MSOR degrees from the University of Arkansas and a Ph.D. in Industrial Engineering and Management from Oklahoma State University. He has been on the faculty at both Texas A&M University and the University of Arkansas. At the University of Arkansas, he served as the Department Head of Industrial Engineering for over seven years. His research interests include all aspects of quality and reliability engineering, and he has numerous journal articles in these areas. He is a registered professional engineer in the state of Arkansas. He has served as General Chair and Chair of the Board of Directors for RAMS, Associate Editor for IEEE Transactions on Reliability, the VP for Systems Integration in IIE, the Sr. VP of Publications in IIE, and the Editor of the Focused Issue of IIE Transactions on Quality and Reliability Engineering. He is currently a member of the Administrative Committee of the Reliability Society of IEEE, a fellow of IIE, and a member of ASQ. Long Yu is an M.S. student in the Department of Industrial Engineering at the University of Arkansas. He holds a BS in Automatic Control from the Beijing Institute of Technology. His research interests reside in reliability and quality control. Manuel Rossetti is Associate Professor of Industrial Engineering. He received his Ph.D. in Industrial and Systems Engineering from The Ohio State University. Dr. Rossetti has published over 35 journal and conference articles in the areas of transportation, manufacturing, health care, and simulation, and he has obtained over $1.5 million in extramural research funding. His research interests include the design, analysis, and optimization of manufacturing, health care, and transportation systems using stochastic modeling, computer simulation, and artificial intelligence techniques.
He was selected as a Lilly Teaching Fellow in 1997/98 and has been nominated three times for outstanding teaching awards. He is currently serving as Departmental ABET Coordinator. He serves as an Associate Editor for the International Journal of Modeling and Simulation and is active in IIE, INFORMS, and ASEE. He served as co-editor for the WSC 2004 conference. Nebil Buyurgan is Assistant Professor of Industrial Engineering. He received his Ph.D. in Engineering Management with emphasis on manufacturing engineering from the University of Missouri-Rolla. After receiving his Ph.D. in 2004, he joined the Industrial Engineering department at the University of Arkansas. As the author or coauthor of over 20 technical papers, his research and teaching interests include network-centric manufacturing systems, Auto-ID technologies, and modeling and analysis of discrete event manufacturing systems. He is a member of IIE, SME, ASEE, and IEEE.