Time Series Analysis
Indecision and delays are the parents of failure. This site is intended to help managers and
administrators do a better job of anticipating, and hence a better job of managing
uncertainty, by using effective forecasting and other predictive techniques.
1. Introduction
2. Effective Modeling for Good Decision-Making
3. Balancing Success in Business
4. Modeling for Forecasting
5. Stationary Time Series
6. Statistics for Correlated Data
1. Introduction
2. Moving Averages and Weighted Moving Averages
3. Moving Averages with Trends
4. Exponential Smoothing Techniques
5. Exponentially Weighted Moving Average
6. Holt's Linear Exponential Smoothing Technique
7. The Holt-Winters' Forecasting Technique
8. Forecasting by the Z-Chart
9. Concluding Remarks
1. Box-Jenkins Methodology
2. Autoregressive Models
1. Adaptive Filtering
2. Hodrick-Prescott Filter
3. Kalman Filter
1. Neural Network
2. Modeling and Simulation
3. Probabilistic Models
4. Event History Analysis
5. Predicting Market Response
6. Prediction Interval for a Random Variable
7. Census II Method of Seasonal Analysis
8. Delphi Analysis
9. System Dynamics Modeling
10. Transfer Functions Methodology
11. Testing for and Estimation of Multiple Structural Changes
12. Combination of Forecasts
13. Measuring for Accuracy
1. Introduction
2. Modeling Financial Time Series and Econometrics
3. Econometrics and Time Series Models
4. Simultaneous Equations
5. Further Readings
Chapter 10: Economic Order and Production Quantity Models for Inventory Management
1. Introduction
2. Economic Order and Production Quantity for Inventory Control
3. Optimal Order Quantity Discounts
4. Finite Planning Horizon Inventory
5. Inventory Control with Uncertain Demand
6. Managing and Controlling Inventory
1. Markov Chains
2. Leontief's Input-Output Model
3. Risk as a Measuring Tool and Decision Criterion
4. Break-even and Cost Analyses
5. Modeling the Bidding Process
6. Product’s Life Cycle Analysis and Forecasting
Chapter 12: Learning and The Learning Curve
1. Introduction
2. Psychology of Learning
3. Modeling the Learning Curve
4. An Application
The ability to model and perform decision modeling and analysis is an essential feature of
many real-world applications ranging from emergency medical treatment in intensive
care units to military command and control systems. Existing formalisms and methods of
inference have not been effective in real-time applications where tradeoffs between
decision quality and computational tractability are essential. In practice, an effective
approach to time-critical dynamic decision modeling should provide explicit support for
the modeling of temporal processes and for dealing with time-critical situations.
One of the most essential elements of being a high-performing manager is the ability to
lead effectively one's own life, then to model those leadership skills for employees in the
organization. This site comprehensively covers theory and practice of most topics in
forecasting and economics. I believe such a comprehensive approach is necessary to fully
understand the subject. A central objective of the site is to unify the various forms of
business topics to link them closely to each other and to the supporting fields of statistics
and economics. Nevertheless, the topics and coverage do reflect choices about what is
important to understand for business decision making.
Almost all managerial decisions are based on forecasts. Every decision becomes
operational at some point in the future, so it should be based on forecasts of future
conditions.
Forecasts are needed throughout an organization -- and they should certainly not be
produced by an isolated group of forecasters. Neither is forecasting ever "finished".
Forecasts are needed continually, and as time moves on, the impact of the forecasts on
actual performance is measured; original forecasts are updated; and decisions are
modified, and so on.
For example, many inventory systems cater for uncertain demand. The inventory
parameters in these systems require estimates of the demand and forecast error
distributions. The two stages of these systems, forecasting and inventory control, are
often examined independently. Most studies tend to look at demand forecasting as if this
were an end in itself, or at stock control models as if there were no preceding stages of
computation. Nevertheless, it is important to understand the interaction between demand
forecasting and inventory control since this influences the performance of the inventory
system. This integrated process is shown in the following figure:
The decision-maker uses forecasting models to assist him or her in the decision-making
process. The decision-maker often uses the modeling process to investigate the impact
of different courses of action retrospectively; that is, "as if" the decision had already
been made under a course of action. That is why the sequence of steps in the modeling
process shown in the above figure must be considered in reverse order. For example, the output
(which is the result of the action) must be considered first.
There may also be sets of constraints that apply to each of these
components; therefore, they need not be treated separately.
Controlling the Decision Problem/Opportunity: Few problems in life, once solved, stay
that way. Changing conditions tend to un-solve problems that were previously solved,
and their solutions create new problems. One must identify and anticipate these new
problems.
Remember: If you cannot control it, then measure it in order to forecast or predict it.
Forecasting is a prediction of what will occur in the future, and it is an uncertain process.
Because of the uncertainty, the accuracy of a forecast is as important as the outcome
predicted by the forecast. This site presents a general overview of business forecasting
techniques as classified in the following figure:
Progressive Approach to Modeling: Modeling for decision making involves two distinct
parties: one is the decision-maker and the other is the model-builder, known as the
analyst. The analyst's role is to assist the decision-maker in his or her decision-making process.
Therefore, the analyst must be equipped with more than a set of analytical methods.
Integrating External Risks and Uncertainties: The mechanisms of thought are often
distributed over brain, body and world. At the heart of this view is the fact that where the
causal contribution of certain internal elements and the causal contribution of certain
external elements are equal in governing behavior, there is no good reason to count the
internal elements as proper parts of a cognitive system while denying that status to the
external elements.
The decision process is a platform for both the modeler and the decision maker to engage
with human-made climate change. This includes ontological, ethical, and historical
aspects of climate change, as well as other relevant questions.
Quantitative Decision Making: Schools of Business and Management are flourishing, with
more and more students taking up degree programs at all levels. In particular, there is a
growing market for conversion courses such as MSc in Business or Management and post-experience
courses such as MBAs. In general, a strong mathematical background is not a
prerequisite for admission to these programs. Perceptions of the content frequently focus
on well-understood functional areas such as Marketing, Human Resources, Accounting,
Strategy, and Production and Operations. A quantitative decision-making course such as this
one is an unfamiliar concept and is often considered too hard and too mathematical.
There is clearly an important role this course can play in contributing to a well-rounded
business management degree program specialized, for example, in finance.
Specialists in model building are often tempted to study a problem, and then go off in
isolation to develop an elaborate mathematical model for use by the manager (i.e., the
decision-maker). Unfortunately, the manager may not understand this model and may
either use it blindly or reject it entirely. The specialist may believe that the manager is too
ignorant and unsophisticated to appreciate the model, while the manager may believe that
the specialist lives in a dream world of unrealistic assumptions and irrelevant
mathematical language.
Such miscommunication can be avoided if the manager works with the specialist to
develop first a simple model that provides a crude but understandable analysis. After the
manager has built up confidence in this model, additional detail and sophistication can be
added, perhaps progressively only a bit at a time. This process requires an investment of
time on the part of the manager and sincere interest on the part of the specialist in solving
the manager's real problem, rather than in creating and trying to explain sophisticated
models. This progressive model building is often referred to as the bootstrapping
approach and is the most important factor in determining successful implementation of a
decision model. Moreover, the bootstrapping approach simplifies the otherwise difficult
tasks of model validation and verification.
Time series analysis has three goals: forecasting (also called predicting), modeling,
and characterization. What would be the logical order in which to tackle these three goals
such that one task leads to and/or justifies the others? Clearly, it depends on
what the prime objective is. Sometimes you wish to model in order to get better forecasts;
then the order is obvious. Sometimes you just want to understand and explain what is
going on; then modeling is again the key, though out-of-sample forecasting may be used
to test any model. Often modeling and forecasting proceed in an iterative way and there is
no 'logical order' in the broadest sense. You may model to get forecasts, which enable
better control, but iteration is again likely to be present, and there are sometimes special
approaches to control problems.
Outliers: One cannot, and should not, study time series data without being sensitive to
outliers. Outliers can be one-time outliers, seasonal pulses, a sequential set of outliers
with nearly the same magnitude and direction (a level shift), or local time trends. A pulse is
a difference of a step, while a step is a difference of a time trend. In order to assess or
declare "an unusual value", one must first establish "the expected or usual value". Time series
techniques extended for outlier detection, i.e., intervention variables such as pulses, seasonal
pulses, level shifts, and local time trends, can be useful in "data cleansing" or pre-filtering
of observations.
Further Readings:
Borovkov K., Elements of Stochastic Modeling, World Scientific Publishing, 2003.
Christoffersen P., Elements of Financial Risk Management, Academic Press, 2003.
Holton G., Value-at-Risk: Theory and Practice, Academic Press, 2003.
"Why are so many models designed and so few used?" is a question often discussed
within the Quantitative Modeling (QM) community. The formulation of the question
seems simple, but the concepts and theories that must be mobilized to give it an answer
are far more sophisticated. Would there be a selection process from "many models
designed" to "few models used" and, if so, which particular properties do the "happy few"
have? This site first analyzes the various definitions of "models" presented in the QM
literature and proposes a synthesis of the functions a model can handle. Then, the concept
of "implementation" is defined, and we progressively shift from a traditional "design then
implementation" standpoint to a more general theory of a model design/implementation,
seen as a cross-construction process between the model and the organization in which it
is implemented. Consequently, the organization is considered not as a simple context, but
as an active component in the design of models. This leads logically to six models of
model implementation: the technocratic model, the political model, the managerial
model, the self-learning model, the conquest model and the experimental model.
One must distinguish between descriptive and prescriptive models from the perspective of
a traditional analytical distinction between knowledge and action. The prescriptive
models are in fact the furthest points in a chain of cognitive, predictive, and decision-making
models.
Why modeling? The purpose of models is to aid in designing solutions. They are to
assist understanding of the problem and to aid deliberation and choice by allowing us to
evaluate the consequences of our actions before implementing them.
The principle of bounded rationality assumes that the decision maker is able to optimize
but only within the limits of his/her representation of the decision problem. Such a
requirement is fully compatible with many results in the psychology of memory: an
expert uses strategies compiled in the long-term memory and solves a decision problem
with the help of his/her short-term working memory.
Problem solving is decision making that may involve heuristics such as the satisficing
(satisfaction) principle and availability. It often involves global evaluations of alternatives that could
be supported by the short-term working memory and that should be compatible with
various kinds of attractiveness scales. Decision-making might be viewed as the
achievement of a more or less complex information process, anchored in the search
for a dominance structure: the decision maker updates his or her representation of the
problem with the goal of finding a case where one alternative dominates all the others,
for example in a mathematical approach based on dynamic systems.
Cognitive science provides us with the insight that a cognitive system, in general, is an
association of a physical working device that is environment sensitive through perception
and action, with a mind generating mental activities designed as operations,
representations, categorizations and/or programs leading to efficient problem-solving
strategies.
Mental activities act on the environment, which itself acts again on the system by way of
perceptions produced by representations.
Designing and implementing human-centered systems for planning, control, decision and
reasoning require studying the operational domains of a cognitive system in three
dimensions:
Validation and Verification: As part of the calibration process of a model, the modeler
must validate and verify the model. The term validation is applied to those processes
which seek to determine whether or not a model is correct with respect to the "real"
system. More prosaically, validation is concerned with the question "Are we building the
right system?" Verification, on the other hand, seeks to answer the question "Are we
building the system right?"
Without metrics, management can be a nebulous, if not impossible, exercise. How can we
tell if we have met our goals if we do not know what our goals are? How do we know if
our business strategies are effective if they have not been well defined? For example, one
needs a methodology for measuring success and setting goals from financial and
operational viewpoints. With those measures, any business can manage its strategic
vision and adjust it for any change. Setting a performance measure is a multi-perspective
process, drawing at least on the financial, customer, innovation and learning, and internal
business process viewpoints.
Each of the above four perspectives must be considered with respect to four parameters:
Further Readings:
Calabro L., On balance, Chief Financial Officer Magazine, February 1, 2001. Almost 10 years after developing the
balanced scorecard, authors Robert Kaplan and David Norton share what they've learned.
Craven B., and S. Islam, Optimization in Economics and Finance, Springer, 2005.
Kaplan R., and D. Norton, The balanced scorecard: Measures that drive performance, Harvard Business Review,
71, 1992.
There are two main approaches to forecasting. Either the estimate of future
value is based on an analysis of factors which are believed to influence
future values, i.e., the explanatory method, or else the prediction is based
on an inferred study of past general data behavior over time, i.e., the
extrapolation method. For example, the belief that the sale of doll clothing
will increase from current levels because of a recent advertising blitz
rather than proximity to Christmas illustrates the difference between the
two philosophies. It is possible that both approaches will lead to the
creation of accurate and useful forecasts, but it must be remembered that,
even for a modest degree of desired accuracy, the former method is often
more difficult to implement and validate than the latter approach.
The forecast errors plotted on this chart should not only remain within the
control limits, they should also show no obvious pattern collectively.
The data in the validation period are held out during parameter estimation.
One might also withhold these values during the forecasting analysis after
model selection, and then one-step-ahead forecasts are made.
A good model should have small error measures in both the estimation and
validation periods, compared to other models, and its validation period
statistics should be similar to its own estimation period statistics.
Holding data out for validation purposes is probably the single most
important diagnostic test of a model: it gives the best indication of the
accuracy that can be expected when forecasting the future. It is a rule-of-
thumb that one should hold out at least 20% of data for validation
purposes.
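As an illustration of this hold-out idea, the following sketch (plain NumPy, with an illustrative monthly series and a deliberately simple linear-trend model; both are assumptions, not part of the original text) reserves the last 20% of the observations for validation, fits on the estimation period only, and compares the error measures of the two periods.

```python
import numpy as np

# Illustrative monthly series; in practice substitute your own data.
y = np.array([112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118,
              115, 126, 141, 135, 125, 149, 170, 170, 158, 133, 114, 140])
t = np.arange(len(y))

n_hold = int(round(0.20 * len(y)))        # hold out about 20% for validation
t_est, y_est = t[:-n_hold], y[:-n_hold]   # estimation period
t_val, y_val = t[-n_hold:], y[-n_hold:]   # validation period

# Fit a simple linear trend on the estimation period only.
slope, intercept = np.polyfit(t_est, y_est, 1)

def errors(t_part, y_part):
    """Return (MAE, RMSE) of the fitted trend on the given sub-period."""
    resid = y_part - (intercept + slope * t_part)
    return np.mean(np.abs(resid)), np.sqrt(np.mean(resid ** 2))

print("estimation MAE, RMSE:", errors(t_est, y_est))
print("validation MAE, RMSE:", errors(t_val, y_val))
# A usable model shows validation errors of roughly the same size as the
# estimation errors; a large gap suggests over-fitting.
```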
You may like using the Time Series' Statistics JavaScript for computing
some of the essential statistics needed for a preliminary investigation of
your time series.
Let x̄ and S² denote the sample mean and sample variance of the n observations, and let A
denote the sum of the sample autocorrelations, A = Σ ρ j,x, over the lags considered. Then the
estimated variance of the sample mean for correlated data is:
[1 + 2A] S² / n,
where the factor [1 + 2A] adjusts the usual S²/n for the autocorrelation in the data.
As a good rule of thumb, the maximum lag for which autocorrelations are
computed should be approximately 2% of the number of realizations n,
although each ρ j,x could be tested to determine whether it is significantly
different from zero.
n = [1 + 2A] S² t² / (δ² mean²)
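Both formulas can be sketched in a few lines. The sketch below assumes the autocorrelations ρ j,x are estimated from the data, the maximum lag is taken as roughly 2% of n as suggested above, and t_crit and delta stand for the critical value and the desired relative precision of the mean; all variable names and the simulated series are illustrative.

```python
import numpy as np

def correlated_mean_stats(x, t_crit=2.0, delta=0.05):
    """Variance of the sample mean and required n for correlated data."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s2 = x.var(ddof=1)                       # sample variance S^2
    max_lag = max(1, int(0.02 * n))          # rule of thumb: about 2% of n
    xc = x - x.mean()
    denom = np.sum(xc ** 2)
    rho = [np.sum(xc[j:] * xc[:-j]) / denom for j in range(1, max_lag + 1)]
    A = sum(rho)                             # A = sum of autocorrelations
    var_mean = (1 + 2 * A) * s2 / n          # [1 + 2A] S^2 / n
    # Sample size so the mean is within delta * mean at the chosen t value:
    n_req = (1 + 2 * A) * s2 * t_crit ** 2 / (delta ** 2 * x.mean() ** 2)
    return var_mean, n_req

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=200)) + 50     # an artificially autocorrelated series
print(correlated_mean_stats(x))
```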
Once a model has been constructed and fitted to data, a sensitivity analysis
can be used to study many of its properties. In particular, the effects of
small changes in individual variables in the model can be evaluated. For
example, in the case of a model that describes and predicts interest rates,
one could measure the effect on a particular interest rate of a change in the
rate of inflation. This type of sensitivity study can be performed only if the
model is an explicit one.
Modeling the Causal Time Series: With multiple regression, we can use
more than one predictor. It is always best, however, to be parsimonious,
that is, to use as few predictor variables as necessary to get a
reasonably accurate forecast. Multiple regressions are best modeled with
commercial packages such as SAS or SPSS. The forecast takes the form:
Y = β0 + β1X1 + β2X2 + . . . + βnXn,
where β0 is the intercept, β1, β2, . . ., βn are the regression coefficients, and
X1, X2, . . ., Xn are the predictor (independent) variables.
Multiple regressions are used when two or more independent factors are
involved, and it is widely used for short to intermediate term forecasting.
They are used to assess which factors to include and which to exclude.
They can be used to develop alternate models with different factors.
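A minimal sketch of such a multiple-regression forecast by ordinary least squares is shown below; the predictors (advertising and price), the sales figures, and the future values plugged into the forecast are all hypothetical illustrations, not data from the text.

```python
import numpy as np

# Hypothetical data: sales explained by advertising spend and price.
advertising = np.array([10, 12, 15, 17, 20, 22, 25, 28], dtype=float)
price       = np.array([9.5, 9.4, 9.2, 9.0, 8.8, 8.7, 8.5, 8.3])
sales       = np.array([52, 57, 64, 69, 77, 81, 89, 95], dtype=float)

# Design matrix with a column of ones for the intercept.
X = np.column_stack([np.ones_like(advertising), advertising, price])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
b0, b1, b2 = coef
print(f"forecast form: sales = {b0:.2f} + {b1:.2f}*advertising + {b2:.2f}*price")

# One-period-ahead forecast for assumed future values of the predictors.
print("forecast:", round(b0 + b1 * 30 + b2 * 8.2, 1))
```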
Trend Analysis: Trend analysis uses linear and nonlinear regression with time as the
explanatory variable; it is used where the pattern over time has a long-term
trend. Unlike most time-series forecasting techniques, trend analysis
does not assume equally spaced time series.
In the absence of any "visible" trend, you may like performing the Test for
Randomness of Fluctuations, too.
A seasonal index indicates how much the average for a particular period tends
to be above (or below) the grand average. Therefore, to get an accurate
estimate of the seasonal index, we compute the average of the first period
of the cycle, the second period, etc., and divide each by the overall
average. The formula for computing seasonal factors is:
Si = Di / D,
where:
Si = the seasonal index for the ith period,
Di = the average value of the ith period,
D = the grand average,
i = the ith seasonal period of the cycle.
A seasonal index of 1.00 for a particular month indicates that the expected
value for that month is 1/12 of the overall average. A seasonal index of
1.25 indicates that the expected value for that month is 25% greater than
1/12 of the overall average. A seasonal index of 0.80 indicates that the
expected value for that month is 20% less than 1/12 of the overall average.
Year   Jan   Feb   Mar   Apr   May   Jun   Jul   Aug   Sep   Oct   Nov   Dec   Total
1      196   188   192   164   140   120   112   140   160   168   192   200   1972
2      200   188   192   164   140   122   132   144   176   168   196   194   2016
3      196   212   202   180   150   140   156   144   164   186   200   230   2160
4      242   240   196   220   200   192   176   184   204   228   250   260   2592
Mean:  208.6 207.0 192.6 182.0 157.6 143.6 144.0 153.0 177.6 187.6 209.6 221.0 2185
Index: 1.14  1.14  1.06  1.00  0.87  0.79  0.79  0.84  0.97  1.03  1.15  1.22  12
The first step in the seasonal forecast is to compute monthly indices
using the past four years of sales. For example, for January the index is the
January mean (208.6) divided by the grand monthly average (2185/12), as shown in the table.
Similar calculations are made for all other months, and the indices are
summarized in the last row of the above table. Notice that the monthly
indices add up to 12, which is the number of periods in a year for monthly data.
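The monthly indices can be reproduced directly from the four years of sales in the table above; the sketch below assumes only NumPy (small differences from the printed means and indices are due to rounding in the table).

```python
import numpy as np

# Four years of monthly sales from the table above (rows = years 1-4).
sales = np.array([
    [196, 188, 192, 164, 140, 120, 112, 140, 160, 168, 192, 200],
    [200, 188, 192, 164, 140, 122, 132, 144, 176, 168, 196, 194],
    [196, 212, 202, 180, 150, 140, 156, 144, 164, 186, 200, 230],
    [242, 240, 196, 220, 200, 192, 176, 184, 204, 228, 250, 260],
], dtype=float)

monthly_mean = sales.mean(axis=0)               # D_i, the average of each month
grand_average = monthly_mean.mean()             # D, the grand monthly average
seasonal_index = monthly_mean / grand_average   # S_i = D_i / D

print(np.round(monthly_mean, 1))
print(np.round(seasonal_index, 2))
print("indices sum to:", round(seasonal_index.sum(), 2))   # should be 12
```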
Y = 1684 + 200.4T,
The main question is whether this equation represents the trend.
Determination of the Annual Trend for the Numerical Example
Year No: Actual Sales Linear Regression Quadratic Regression
1 1972 1884 1981
2 2016 2085 1988
3 2160 2285 2188
4 2592 2486 2583
Predicted values using both the linear and the quadratic trends are
presented in the above table. Comparing the predicted values of the two
models with the actual data indicates that the quadratic trend is a much
better fit than the linear one, as is often expected.
We can now forecast the next annual sales, which corresponds to year 5,
or T = 5, in the above quadratic equation:
You might like to use the Seasonal Index JavaScript to check your hand
computation. As always you must first use Plot of the Time Series as a
tool for the initial characterization process.
For testing seasonality based on seasonal index, you may like to use the
Test for Seasonality JavaScript.
Trend Removal and Cyclical Analysis: The cycles can be easily studied
if the trend itself is removed. This is done by expressing each actual value
in the time series as a percentage of the calculated trend for the same date.
The resulting time series has no trend, but oscillates around a central value
of 100.
Decomposition Analysis: It is the pattern generated by the time series, and
not necessarily the individual data values, that offers useful information to the manager who is
an observer, a planner, or a controller of the system. Therefore,
decomposition analysis is used to identify several patterns that appear
simultaneously in a time series.
Xt = St ⋅ Tt ⋅ Ct ⋅ I
The first three components are deterministic which are called "Signals",
while the last component is a random variable, which is called "Noise". To
be able to make a proper forecast, we must know to what extent each
component is present in the data. Hence, to understand and measure these
components, the forecast procedure involves initially removing the
component effects from the data (decomposition). After the effects are
measured, making a forecast involves putting back the components on
forecast estimates (recomposition). The time series decomposition process
is depicted by the following flowchart:
Therein you will find a detailed workout numerical example in the context
of the sales time series which consists of all components including a cycle.
The forecast for time period t + 1 is the forecast for all future time periods.
However, this forecast is revised only when new data becomes available.
In order to capture the trend, we may use the Moving-Average with Trend
(MAT) method. The MAT method uses an adaptive linearization of the
trend by means of incorporating a combination of the local slopes of both
the original and the smoothed time series.
M(t) = Σ X(i) / n, where the sum runs over i from t - n + 1 to t;
i.e., M(t) is the moving-average smoothing of order n, which is a
positive odd integer ≥ 3.
To have a notion of F(t), notice that the inside bracket can be written as:
where:
Notice that the smoothed value becomes the forecast for period t + 1.
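A minimal sketch of the plain moving-average smoother M(t) described above, where the last smoothed value serves as the forecast for period t + 1, is shown below. The demand figures are hypothetical, and the sketch does not include the trend-adjustment term of the full MAT method.

```python
import numpy as np

def moving_average_forecast(x, n=3):
    """Return the moving-average series M(t) of order n and the t+1 forecast."""
    x = np.asarray(x, dtype=float)
    # M(t) = average of the n most recent observations X(t-n+1), ..., X(t)
    m = np.convolve(x, np.ones(n) / n, mode="valid")
    return m, m[-1]    # the last smoothed value is the next-period forecast

demand = [42, 40, 43, 40, 41, 39, 46, 44, 45, 38, 40, 47]   # hypothetical
smoothed, next_forecast = moving_average_forecast(demand, n=3)
print(np.round(smoothed, 2))
print("forecast for next period:", round(next_forecast, 2))
```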
Lt = α yt + (1 - α ) Ft
for the level and
Tt = β ( Lt - Lt-1 ) + (1 - β ) Tt-1
for the trend. We have two smoothing parameters α and β ; both must be
positive and less than one. Then the forecasting for k periods into the
future is:
Fn+k = Ln + k ⋅ Tn
Given that the level and trend remain unchanged, the initial (starting)
values are
F4 = L3 + T3, F3 = L2 + T2
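A compact sketch of Holt's updating equations above, assuming the one-step forecast F(t) = L(t-1) + T(t-1), simple starting values, and illustrative data and parameter values (α and β would normally be tuned on the estimation period):

```python
def holt_linear(y, alpha=0.3, beta=0.1, k=3):
    """Holt's linear exponential smoothing; returns k-step-ahead forecasts."""
    level, trend = y[0], y[1] - y[0]          # simple starting values
    for t in range(1, len(y)):
        forecast = level + trend              # F(t) = L(t-1) + T(t-1)
        new_level = alpha * y[t] + (1 - alpha) * forecast
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
    # Forecast k periods ahead: F(n+k) = L(n) + k * T(n)
    return [level + (j + 1) * trend for j in range(k)]

sales = [152, 158, 161, 170, 176, 181, 190, 197, 205, 212]   # hypothetical
print([round(f, 1) for f in holt_linear(sales, alpha=0.3, beta=0.1, k=3)])
```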
Tt = β ( Lt - Lt-1 ) + (1 - β ) Tt-1
St = γ St-s + (1- γ ) yt / Lt
To obtain starting values, one may use the first few years of data. For
example, for quarterly data, to estimate the level, one may use a centered
4-point moving average:
S7 = (y7 / L7 + y3 / L3 ) / 2
The data consist of the monthly sales for the first nine months of a particular year
together with the monthly sales for the previous year.
From the data in the above table, another table can be derived and is
shown as follows:
The first column in Table 18 relates to actual sales; the second to the
cumulative total, which is found by adding each month's sales to the total
of preceding sales. Thus, January 520 plus February 380 produces the
February cumulative total of 900; the March cumulative total is found by
adding the March sales of 480 to the previous cumulative total of 900 and
is, therefore, 1,380.
The 12-month moving total is found by adding the sales of the current month
to the total of the previous 12 months and then subtracting the corresponding
month of the previous year.
For example, the 12-month moving total for 2003 is 7,310 (see the first
table above). Add to this the January 2004 figure of 520, which gives 7,830;
subtract the corresponding month of the previous year, i.e., the January 2003 figure of
940, and the result is the January 2004 12-month moving total of 6,890.
The two groups of data, the cumulative totals and the 12-month moving totals
shown in the above table, are then plotted (A and B), each along a line that
continues its present trend to the end of the year, where they meet:
Forecasting by the Z-Chart
In the above figure, A and B represent the 12-month moving total and the
cumulative data, respectively, while their projections into the future are shown
by the dotted lines.
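The arithmetic behind the Z-chart, i.e., the cumulative totals and the 12-month moving total updated as "previous moving total plus current month minus the same month of last year", can be sketched as follows. The monthly figures used here are hypothetical, except that they are chosen to reproduce the 7,310 / 520 / 380 / 480 / 940 values quoted above.

```python
# Hypothetical monthly sales: previous year (2003) and current year to date (2004).
prev_year = [940, 820, 790, 650, 590, 520, 480, 470, 500, 540, 490, 520]   # sums to 7310
this_year = [520, 380, 480, 490, 500, 510, 530, 550, 560]                  # nine months

assert sum(prev_year) == 7310      # matches the 12-month moving total quoted above

cumulative, moving_12 = [], []
running, moving = 0, sum(prev_year)
for month, sales in enumerate(this_year):
    running += sales                       # cumulative total for the current year
    moving += sales - prev_year[month]     # add current month, drop same month last year
    cumulative.append(running)
    moving_12.append(moving)

print(cumulative)      # curve B of the Z-chart: 520, 900, 1380, ...
print(moving_12)       # curve A; first value: 7310 + 520 - 940 = 6890
```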
Since finding three optimal, or even near-optimal, parameters for the updating
equations is not an easy task, an alternative approach to the Holt-Winters
method is to deseasonalize the data and then use exponential smoothing.
Moreover, in some time series, seasonal variation is so strong that it obscures
any trends or cycles, which are very important for understanding the
process being observed. Smoothing can remove seasonality and make
long-term fluctuations in the series stand out more clearly. A simple way
of detecting a trend in seasonal data is to take averages over a certain period.
If these averages change with time, we can say that there is evidence of a
trend in the series.
Further Reading:
Yar M., and C. Chatfield, Prediction intervals for the Holt-Winters forecasting procedure,
International Journal of Forecasting, 6, 127-137, 1990.
Filtering Techniques: Often one must filter an entire time series, e.g., a financial time
series, with certain filter specifications to extract useful information via a
transfer function expression. The aim of a filter function is to filter a time
series in order to extract the useful information hidden in the data, such as a
cyclic component. The filter is a direct implementation of an input-output
function.
For the study of business cycles one uses not the smoothed series, but the
jagged series of residuals from it. H-P filtered data shows less fluctuation
than first-differenced data, since the H-P filter pays less attention to high
frequency movements. H-P filtered data also shows more serial correlation
than first-differenced data.
Further Readings:
Hamilton J, Time Series Analysis, Princeton University Press, 1994.
Harvey A., Forecasting, Structural Time Series Models and the Kalman Filter, Cambridge
University Press, 1991.
Mills T., The Econometric Modelling of Financial Time Series, Cambridge University Press, 1995.
Outlier Considerations: Outliers are a few observations that are not well
fitted by the "best" available model. In practice, any observation with a
standardized residual greater than 2.5 in absolute value is a candidate for
being an outlier. In such a case, one must first investigate the source of the data.
If there is doubt about the accuracy or veracity of the observation, then
it should be removed and the model refitted.
Whenever data levels are thought to be too high or too low for "business
as usual", we call such points outliers. A mathematical reason to adjust
for such occurrences is that the majority of forecasting techniques are based
on averaging. It is well known that arithmetic averages are very sensitive
to outlier values; therefore, some alteration should be made in the data
before continuing. One approach is to replace the outlier by the average of
the two sales levels for the periods that immediately precede and follow the
period in question, and put this number in place of the outlier.
This idea is useful if outliers occur in the middle or recent part of the data.
However, if outliers appear in the oldest part of the data, we may follow a
second alternative, which is to simply throw away the data up to and
including the outlier.
Further Readings:
Delbecq, A., Group Techniques for Program Planning, Scott Foresman, 1975.
Gardner H.S., Comparative Economic Systems, Thomson Publishing, 1997.
Hirsch M., S. Smale, and R. Devaney, Differential Equations, Dynamical Systems, and an
Introduction to Chaos, Academic Press, 2004.
Lofdahl C., Environmental Impacts of Globalization and Trade: A Systems Study, MIT Press,
2002.
Using any method for forecasting, one must use a performance measure to
assess the quality of the method. Mean Absolute Deviation (MAD) and
variance are the most useful measures. However, MAD does not lend
itself to making further inferences, whereas the standard error does. For error
analysis purposes, variance is preferred since variances of independent
(uncorrelated) errors are additive, whereas MAD is not additive.
Regression and Moving Average: When a time series is not a straight line,
one may use the moving average (MA) and break up the time series into
several intervals, each with a common straight line with positive trend, to achieve
linearity for the whole time series. The process involves a transformation
based on the slope and then a moving average within that interval. For most
business time series, one of the following transformations might be effective:
• slope/MA,
• log(slope),
• log(slope/MA),
• log(slope) - 2 log(MA).
Further Readings:
Armstrong J., (Ed.), Principles of Forecasting: A Handbook for Researchers and Practitioners,
Kluwer Academic Publishers, 2001.
Arsham H., Seasonal and cyclic forecasting in small firm, American Journal of Small Business,
9, 46-57, 1985.
Brown H., and R. Prescott, Applied Mixed Models in Medicine, Wiley, 1999.
Cromwell J., W. Labys, and M. Terraza, Univariate Tests for Time Series Models, Sage Pub.,
1994.
Ho S., M. Xie, and T. Goh, A comparative study of neural network and Box-Jenkins ARIMA
modeling in time series prediction, Computers & Industrial Engineering, 42, 371-375, 2002.
Kaiser R., and A. Maravall, Measuring Business Cycles in Economic Time Series, Springer, 2001.
Has a good coverage on Hodrick-Prescott Filter among other related topics.
Kedem B., K. Fokianos, Regression Models for Time Series Analysis, Wiley, 2002.
Kohzadi N., M. Boyd, B. Kermanshahi, and I. Kaastra , A comparison of artificial neural network
and time series models for forecasting commodity prices, Neurocomputing, 10, 169-181, 1996.
Krishnamoorthy K., and B. Moore, Combining information for prediction in linear regression,
Metrika, 56, 73-81, 2002.
Schittkowski K., Numerical Data Fitting in Dynamical Systems: A Practical Introduction with
Applications and Software, Kluwer Academic Publishers, 2002. Gives an overview of numerical
methods that are needed to compute parameters of a dynamical model by a least squares fit.
Introduction
The Standard Error of Estimate, i.e., the square root of the error mean square, is a
good indicator of the "quality" of a prediction model since it "adjusts" the
Mean Error Sum of Squares (MESS) for the number of predictors in the
model as follows:
Standard Error of Estimate = [SSE / (n - k - 1)]1/2,
where k is the number of predictors in the model.
If one keeps adding useless predictors to a model, the MESS will become
less and less stable. R-squared is also influenced by the range of your
dependent variable; so, if two models have the same residual mean square
but one model has a much wider range of values for the dependent
variable, that model will have the higher R-squared. This explains why
both models may nevertheless do equally well for prediction purposes.
You may like using the Regression Analysis with Diagnostic Tools
JavaScript to check your computations, and to perform some numerical
experimentation for a deeper understanding of these concepts.
Predictions by Regression
• x̄ = Σ x / n. This is just the mean of the x values.
• ȳ = Σ y / n. This is just the mean of the y values.
• Sxx = SSxx = Σ (x(i) - x̄)² = Σ x² - (Σ x)² / n
• Syy = SSyy = Σ (y(i) - ȳ)² = Σ y² - (Σ y)² / n
• Sxy = SSxy = Σ (x(i) - x̄)(y(i) - ȳ) = Σ x⋅y - (Σ x)(Σ y) / n
• Slope: m = SSxy / SSxx
• Intercept: b = ȳ - m ⋅ x̄
• y-predicted = yhat(i) = m⋅x(i) + b
• Residual(i) = Error(i) = y(i) - yhat(i)
• SSE = Sres = SSres = SSerrors = Σ [y(i) - yhat(i)]²
• Standard deviation of residuals = s = Sres = Serrors = [SSres / (n - 2)]1/2
• Standard error of the slope (m) = Sres / SSxx1/2
• Standard error of the intercept (b) = Sres [(SSxx + n⋅x̄²) / (n ⋅ SSxx)]1/2
Now the question is how we can best (i.e., in the least-squares sense) use the sample
information to estimate the unknown slope (m) and the intercept (b). The
first step in finding the least-squares line is to construct a sum-of-squares
table to find the sums of the x values (Σ x), the y values (Σ y), the squares of the
x values (Σ x²), the squares of the y values (Σ y²), and the cross-products of
the corresponding x and y values (Σ xy), as shown in the following table:
x y x² xy y²
2 2 4 4 4
3 5 9 15 25
4 7 16 28 49
5 10 25 50 100
6 11 36 66 121
SUM 20 35 90 163 299
To estimate the intercept of the least-squares line, use the fact that the graph
of the least-squares line always passes through the point (x̄, ȳ); therefore,
After estimating the slope and the intercept, the question is how we
determine statistically whether the model is good enough, say for prediction. The
standard error of the slope is Sm ≈ 0.19, so that:
tslope = m / Sm ≈ 2.3 / 0.19 ≈ 12,
which is large enough, indicating that the fitted model is a "good" one.
You may ask in what sense the least-squares line is the "best-fitting"
straight line to the 5 data points. The least-squares criterion chooses the line
that minimizes the sum of squared vertical deviations, i.e., residual = error
= y - yhat:
SSE = Σ (y - yhat)² = Σ (error)² = 1.1
The numerical value of SSE is obtained from the following computational
table for our numerical example.
Predictor x   y-predicted = -2.2 + 2.3x   Observed y   Error   Squared error
2              2.4                         2           -0.4     0.16
3              4.7                         5            0.3     0.09
4              7.0                         7            0        0
5              9.3                        10            0.7     0.49
6             11.6                        11           -0.6     0.36
                                                    Sum = 0   Sum = 1.1
Notice that this value of SSE agrees with the value computed directly from
the above table. The numerical value of SSE gives the estimate of the
variance of the errors: s² = SSE / (n - 2) = 1.1 / 3 ≈ 0.37.
As the last step in the model building, the following Analysis of Variance
(ANOVA) table is then constructed to assess the overall goodness-of-fit
using the F-statistic:
Source   DF   Sum of Squares   Mean Square   F Value   Prob > F
Notice also that there is a relationship between the two statistics that
assess the quality of the fitted line, namely the t-statistic of the slope and
the F-statistic in the ANOVA table. The relationship is:
t²slope = F
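The whole worked example can be verified with a few lines of NumPy, reproducing the fitted line -2.2 + 2.3x, SSE = 1.1, the t-statistic of the slope, and the identity t²(slope) = F (up to rounding):

```python
import numpy as np

x = np.array([2, 3, 4, 5, 6], dtype=float)
y = np.array([2, 5, 7, 10, 11], dtype=float)
n = len(x)

Sxx = np.sum((x - x.mean()) ** 2)
Sxy = np.sum((x - x.mean()) * (y - y.mean()))
m = Sxy / Sxx                      # slope     = 2.3
b = y.mean() - m * x.mean()        # intercept = -2.2

resid = y - (m * x + b)
SSE = np.sum(resid ** 2)           # 1.1
s = np.sqrt(SSE / (n - 2))         # standard deviation of the residuals
se_slope = s / np.sqrt(Sxx)
t_slope = m / se_slope             # about 12

SSR = np.sum((m * x + b - y.mean()) ** 2)
F = SSR / (SSE / (n - 2))          # ANOVA F statistic

print(m, b, round(SSE, 2), round(t_slope, 2), round(F, 2), round(t_slope ** 2, 2))
```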
Confidence Region for the Regression Line as a Whole: When the entire
line is of interest, a confidence region permits one to make simultaneous
confidence statements about estimates of Y for a number of values of the
predictor variable X. In order that the region adequately covers the range of
interest of the predictor variable X, the data size should usually be more than
10 pairs of observations.
In all cases the JavaScript provides the results for the nominal (x) values.
For other values of X, one may use computational methods directly, a
graphical method, or linear interpolation to obtain approximate
results. These approximations are in the safe direction, i.e., they are
slightly wider than the exact values.
A. Planning:
For moderate VIFs, say between 2 and 8, you might be able to come up with a
'good' model.
Inspect the rij's; one or two must be large. If all are small, perhaps the ranges of the X
variables are too small.
1. Collect data; check the quality of the data; plot; try models; check the regression
conditions.
2. Consult experts for criticism.
Plot the new variable and examine the same fitted model. A transformed predictor
variable may also be used.
3. Are goals met?
Have you found "the best" model?
You might like to use Regression Analysis with Diagnostic Tools in performing
regression analysis.
Transfer Functions Methodology
The tests for structural breaks that I have seen are designed to detect only one break in a
time series. This is true whether the break point is known or estimated using iterative
methods. For example, for testing any change in the level of the dependent series or in the model
specification, one may use an iterative test for detecting such points in time by incorporating
level-shift variables of the form (0,0,0,0,...,1,1,1,1,1) to account for a change in intercept.
Other causes are changes in variance and changes in parameters.
Further Reading:
Bai J., and P. Perron, Testing for and estimation of multiple structural changes, Econometrica, 66, 47-79, 1998.
Clements M., and D. Hendry, Forecasting Non-Stationary Economic Time Series, MIT Press, 1999.
Maddala G., and I-M. Kim, Unit Roots, Cointegration, and Structural Change, Cambridge Univ. Press, 1999. Chapter
13.
Tong H., Non-Linear Time Series: A Dynamical System Approach, Oxford University Press, 1995.
Box-Jenkins Methodology
Introduction
Forecasting Basics: The basic idea behind self-projecting time series forecasting models
is to find a mathematical formula that will approximately generate the historical patterns
in a time series.
Time Series: A time series is a set of numbers that measures the status of some activity
over time. It is the historical record of some activity, with measurements taken at equally
spaced intervals (monthly intervals being a slight exception, since months differ in length) with a
consistency in the activity and the method of measurement.
Approaches to Time Series Forecasting: There are two basic approaches to forecasting
time series: the self-projecting time series approach and the cause-and-effect approach. Cause-and-effect
methods attempt to forecast based on underlying series that are believed to cause
the behavior of the original series. The self-projecting approach uses only the time
series data of the activity to be forecast to generate forecasts. The latter approach is
typically less expensive to apply, requires far less data, and is useful for short- to
medium-term forecasting.
Box-Jenkins Methodology
Basic Model: With a stationary series in place, a basic model can now be
identified. Three basic models exist: AR (autoregressive), MA (moving
average), and a combined ARMA, in addition to the previously specified
RD (regular differencing). These comprise the available tools. When
regular differencing is applied together with AR and MA, they are
referred to as ARIMA, with the "I" indicating "integrated" and referencing
the differencing procedure.
Referring to the above chart, note that the variance of the errors of the
underlying model must be invariant, i.e., constant. This means that the
variance for each subgroup of data is the same and does not depend on the
level or the point in time. If this is violated, then one can remedy it by
stabilizing the variance. Make sure that there are no deterministic patterns
in the data. Also, one must not have any pulses or one-time unusual
values. Additionally, there should be no level or step shifts. Also, no
seasonal pulses should be present.
The reason for all of this is that if they do exist, then the sample
autocorrelation and partial autocorrelation will seem to imply ARIMA
structure. Also, the presence of these kinds of model components can
obfuscate or hide structure. For example, a single outlier or pulse can
create an effect where the structure is masked by the outlier.
ARMA (1, 0): The first model to be tested on the stationary series consists
solely of an autoregressive term with lag 1. The autocorrelation and partial
autocorrelation patterns are examined for significant autocorrelation in the
early terms, and to see whether the residual coefficients are uncorrelated;
that is, the values of the coefficients are zero within the 95% confidence limits and
without apparent pattern. When the fitted values are as close as possible to the
original series values, the sum of the squared residuals is
minimized, a technique called least-squares estimation. The residual mean
and the mean percent error should not be significantly nonzero.
Alternative models are examined comparing the progress of these factors,
favoring models which use as few parameters as possible. Correlation
between parameters should not be significantly large and confidence limits
should not include zero. When a satisfactory model has been established, a
forecast procedure is applied.
ARMA (2, 1): Absent a satisfactory ARMA (1, 0) condition with residual
coefficients approximating zero, the improved model identification
procedure now proceeds to examine the residual pattern when
autoregressive terms with order 1 and 2 are applied together with a
moving average term with an order of 1.
Forecasting with the Model: The model should be used for short-term and
intermediate-term forecasting. This can be achieved by updating it as new
data become available, in order to minimize the number of periods ahead
required of the forecast.
Autoregressive Models
where X(t-1) and Y(t-1) are the actual value (inputs) and the forecast
(outputs), respectively. These types of regressions are often referred to as
Distributed Lag Autoregressive Models, Geometric Distributed Lags, and
Adaptive Models in Expectation , among others.
The current value of the series is a linear combination of the p most recent
past values of itself plus an error term, which incorporates everything new
in the series at time t that is not explained by the past values. This is like a
multiple regression model, but regressed not on independent variables,
but on past values; hence the term "autoregressive" is used.
X(t) = a + b X(t-s) + ε t
and
X(t) = a + b X(t-s) + c X(t-2s) +ε t
Similarly, for AR(2), the behavior of the autocorrelations and the partial
autocorrelations are depicted below, respectively:
AR2 Autocorrelations and Partial Autocorrelations
Adjusting the Slope's Estimate for the Length of the Time Series: The
regression coefficient is a biased estimate, and in the case of AR(1) the bias
is -(1 + 3Φ1) / n, where n is the number of observations used to estimate the
parameters. Clearly, for large data sets this bias is negligible.
By constructing and studying the plot of the data, one notices that the
series drifts above and below a mean of about 50.6. By using the Time
Series Identification Process JavaScript, a glance at the autocorrelation
and the partial autocorrelation confirms that the series is indeed stationary,
and that a first-order (p = 1) autoregressive model is a good candidate.
X(t) = Φ 0 + Φ 1X(t-1) + ε t,
Stationary Condition: The AR(1) is stable if the slope is within the open
interval (-1, 1), that is:
| Φ 1| < 1
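A first-order autoregressive model can be fitted by ordinary least squares, regressing X(t) on X(t-1). The sketch below uses simulated data (with illustrative names and parameter values), checks the stationarity condition |Φ1| < 1, and applies the small-sample bias correction mentioned above.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulate an AR(1) series with mean about 50 (illustrative only).
phi_true, n = 0.6, 200
x = np.empty(n)
x[0] = 50.0
for t in range(1, n):
    x[t] = 20.0 + phi_true * x[t - 1] + rng.normal(scale=2.0)

# OLS fit of X(t) = phi0 + phi1 * X(t-1) + e(t)
X = np.column_stack([np.ones(n - 1), x[:-1]])
(phi0, phi1), *_ = np.linalg.lstsq(X, x[1:], rcond=None)

bias = -(1 + 3 * phi1) / n            # small-sample bias of the slope estimate
print(f"phi0 = {phi0:.2f}, phi1 = {phi1:.3f}, bias-adjusted phi1 = {phi1 - bias:.3f}")
print("stationary:", abs(phi1) < 1)   # AR(1) stability condition |phi1| < 1
```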
Further Reading:
Ashenfelter, et al., Statistics and Econometrics: Methods and Applications, Wiley, 2002.
Introduction
The five major economic sectors, as defined by economists, are
agriculture, construction, mining, manufacturing, and services. The first
four sectors concern goods, whose production has dominated the
world's economic activities. However, the fastest growing aspects of the
world's advanced economies are services: wholesale, retail, business,
professional, education, government, health care, finance, insurance, real
estate, transportation, telecommunications, and so on, which comprise the majority of
their gross national product and employ the majority of their workers. In
contrast to the production of goods, services are co-produced with the
customers. Additionally, services should be developed and delivered to
achieve maximum customer satisfaction at minimum cost. Indeed, services
provide an ideal setting for the appropriate application of systems theory,
which, as an interdisciplinary approach, can provide an integrating
framework for designing, refining, and operating services, as well as
significantly improving their productivity.
We are attempting to 'model' what the reality is so that we can predict it.
Statistical Modeling, in addition to being of central importance in
statistical decision making, is critical in any endeavor, since essentially
everything is a model of reality. As such, modeling has applications in
such disparate fields as marketing, finance, and organizational behavior.
Particularly compelling is econometric modeling, since, unlike most
disciplines (such as Normative Economics), econometrics deals only with
provable facts, not with beliefs and opinions.
Mean absolute error is a robust measure of error. However, one may also
use the sum of errors to compare the success of each forecasting model
relative to a baseline, such as a random walk model, which is usually used
in financial time series modeling.
Further Reading:
Franses Ph., and D. Van Dijk, Nonlinear Time Series Models in Empirical Finance, Cambridge
University Press, 2000.
Taylor S., Modelling Financial Time Series, Wiley, 1986.
Tsay R., Analysis of Financial Time Series, Wiley, 2001.
Simultaneous Equations
C = β1 + β2Y + ε
Y = C + I,
where C is consumption, I is (exogenous) investment, and Y is income. Substituting the
second equation into the first gives:
C = β1 + β2(C + I) + ε.
Hence
C = β1 / (1 - β2) + β2 I / (1 - β2) + ε / (1 - β2),
and
Y = β1 / (1 - β2) + I / (1 - β2) + ε / (1 - β2).
Now we are able to utilize least-squares regression (LSR) analysis to estimate these equations.
This is permissible because investment and the error term are uncorrelated,
by the fact that investment is exogenous. However, using the first
equation one obtains an estimated slope of β2 / (1 - β2), while the second
equation provides another estimate, of 1 / (1 - β2). Therefore, taking the
ratio of these reduced-form slopes provides an estimate of β2.
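Running the two reduced-form regressions and taking the slope ratio can be sketched in a few lines. The example below uses only a handful of rows from the table that follows, so the resulting estimate is purely illustrative and will differ from a full-sample fit.

```python
import numpy as np

# A few rows from the table below (C = consumption, I = investment, Y = income).
C = np.array([15024, 19813, 18367, 15786, 26387, 21478], dtype=float)
I = np.array([ 4749,  6787,  5174,  4017,  6540,  7923], dtype=float)
Y = np.array([19461, 26104, 24522, 20085, 32377, 30124], dtype=float)

def ols_slope(x, y):
    """Least-squares slope of y on x (with an intercept)."""
    return np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)

slope_C = ols_slope(I, C)      # estimates beta2 / (1 - beta2)
slope_Y = ols_slope(I, Y)      # estimates 1 / (1 - beta2)
beta2 = slope_C / slope_Y      # ratio of the reduced-form slopes
print(f"estimated beta2 (marginal propensity to consume) = {beta2:.3f}")
```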
An Application: The following table provides consumption (C), capital investment (I), and
domestic product income (Y), in US dollars, for 33 countries in 1999.
Country C I Y Country C I Y
Australia 15024 4749 19461 South Korea 4596 1448 6829
Austria 19813 6787 26104 Luxembourg 26400 9767 42650
Belgium 18367 5174 24522 Malaysia 1683 873 3268
Canada 15786 4017 20085 Mexico 3359 1056 4328
China-PR 446 293 768 Netherlands 17558 4865 24086
China-HK 17067 7262 24452 New Zealand 11236 2658 13992
Denmark 25199 6947 32769 Norway 23415 9221 32933
Finland 17991 4741 24952 Pakistan 389 79 463
France 19178 4622 24587 Philippines 760 176 868
Germany 20058 5716 26219 Portugal 8579 2644 9976
Greece 9991 2460 11551 Spain 11255 3415 14052
Iceland 25294 6706 30622 Sweden 20687 4487 26866
India 291 84 385 Switzerland 27648 7815 36864
Indonesia 351 216 613 Thailand 1226 479 1997
Ireland 13045 4791 20132 UK 19743 4316 23844
Italy 16134 4075 20580 USA 26387 6540 32377
Japan 21478 7923 30124
Further Readings:
Dominick, et al., Schaum's Outline of Statistics and Econometrics, McGraw-Hill, 2001.
Fair R., Specification, Estimation, and Analysis of Macroeconometric Models, Harvard
University Press, 1984.
Further Readings:
Ladiray D., and B. Quenneville, Seasonal Adjustment with the X-11 Method, Springer-Verlag,
2001.
The widely used statistical measures of error that can help you to identify
a method, or the optimum value of a parameter within a method, are:
Mean absolute error: The mean absolute error (MAE) is the
average of the absolute error values. The closer this value is to zero, the better the
forecast.
Mean squared error (MSE): The mean squared error is computed as the sum
(or average) of the squared error values. This is the most commonly used
lack-of-fit indicator in statistical fitting procedures. Compared to the
mean absolute error, this measure is very sensitive to outliers;
that is, unique or rare large error values will greatly inflate the MSE.
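Both measures are a one-liner each; a minimal sketch comparing a forecast series with the actuals (the numbers here are hypothetical):

```python
import numpy as np

actual   = np.array([120, 135, 128, 150, 162, 158], dtype=float)   # hypothetical
forecast = np.array([118, 130, 133, 146, 160, 165], dtype=float)

errors = actual - forecast
mae = np.mean(np.abs(errors))        # mean absolute error
mse = np.mean(errors ** 2)           # mean squared error (outlier-sensitive)
print(f"MAE = {mae:.2f}, MSE = {mse:.2f}")
```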
The manager must decide on the best age at which to replace the machine.
The computational aspects are arranged in the following table:
The plot of average cost against age indicates that it follows a
parabolic shape, as expected, with the least cost of $38,000 annually. This
corresponds to the decision of replacing the machine at the end of the third
year.
We know that a quadratic function best fits; we might use the
Quadratic Regression JavaScript to estimate its coefficients. The result is:
Average cost over the age = 3000(Age)² - 20200(Age) + 71600, for 1 ≤ Age ≤ 5.
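Evaluating that fitted quadratic at the integer ages confirms the decision; a small sketch (the coefficients are taken from the regression result above):

```python
ages = range(1, 6)
avg_cost = {age: 3000 * age ** 2 - 20200 * age + 71600 for age in ages}
for age, cost in avg_cost.items():
    print(age, cost)                       # 1: 54400, 2: 43200, 3: 38000, ...
best_age = min(avg_cost, key=avg_cost.get)
print("replace at the end of year", best_age, "with average cost", avg_cost[best_age])
```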
You may like to use Optimal Age for Equipment Replacement JavaScript
for checking your computation and perform some experiments for a
deeper understanding.
Further Reading
Waters D., A Practical Introduction to Management Science, Addison-Wesley, 1998.
Product name P1 P2 P3 P4 P5 P6 P7 P8 P9
Cost ($100) 24 25 30 4 6 10 15 20 22
Annual demand 3 2 2 8 7 30 20 6 4
Compute the annual use of each product in terms of dollar value, and then
sort the numerical results into decreasing order, as is shown in the
following table. The total annual use by value is 1064.
Working down the list in the table, determine the dollar % usage for each
item. This row exhibits the behavior of the cumulative distribution
function, where the change from one category to the next is determined.
Rank the items according to their dollar % usage in three classes: A = very
important, B = moderately important, and C = least important.
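The classification can be carried out in a few lines. The sketch below uses the cost and demand figures from the table above; the cumulative-percentage cut-offs used to assign classes A, B, and C are a common convention, not values prescribed by the text.

```python
products = ["P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8", "P9"]
cost     = [24, 25, 30, 4, 6, 10, 15, 20, 22]      # in $100, from the table above
demand   = [3, 2, 2, 8, 7, 30, 20, 6, 4]

usage = {p: c * d for p, c, d in zip(products, cost, demand)}   # annual use by value
total = sum(usage.values())                                     # 1064, as stated above

cumulative = 0.0
for p in sorted(usage, key=usage.get, reverse=True):
    share = 100 * usage[p] / total
    cumulative += share
    # Common (but arbitrary) convention: A up to ~70% of value, B up to ~90%, C the rest.
    cls = "A" if cumulative <= 70 else ("B" if cumulative <= 90 else "C")
    print(f"{p}: usage={usage[p]:4d}  %={share:5.1f}  cumulative%={cumulative:5.1f}  class={cls}")
```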
Further Reading
Koch R., The 80/20 Principle: The Secret to Success by Achieving More with Less, Doubleday,
1999.
Alternatively, one may plot net profit and find the optimal quantity where
it is at its maximum as depicted by the following figure:
Finally, one may plot the marginal benefit and marginal cost curves and
choosing the point where they cross, as shown in the following figure:
You might like to use Quadratic Regression JavaScript to estimate the cost
and the benefit functions based on a given data set.
Further Reading:
Brealey R., and S. Myers, Principles of Corporate Finance, McGraw, 2002.
Introduction
The third element is the most difficult to measure and is often handled by
establishing a "service level" policy; e.g., a certain percentage of demand
will be met from stock without delay.
Production control systems are commonly divided into push and pull
systems. In push systems, raw materials are introduced in the line and are
pushed from the first to the last work station. Production is determined by
forecasts in a production-planning center. One of the best-known push
systems is material requirement planning (MRP) and manufacturing
resources planning (MRPII), both developed in western countries.
Meanwhile, in pull systems production is generated by actual demands.
Demands work as a signal, which authorizes a station to produce. The
most well-known pull systems are Just in time (JIT) and Kanban
developed in Japan.
Both push and pull systems offer different advantages. Therefore, new
systems have been introduced that adopt the advantages of each, resulting
in hybrid (push-pull) control policies. The constant work-in-process
and the two-boundary control are the best-known hybrid systems with a
push-pull interface. In both systems, the last station provides an
authorization signal to the first one in order to start production, and
internally production is pushed from one station to another until the end of
the line as finished-goods inventory.
Inventory control decisions are both a problem and an opportunity for at least
three parties: the Production, Marketing, and Accounting departments. Inventory
control decision-making has an enormous impact on the productivity and
performance of many organizations, because it handles the total flow of
materials. Proper inventory control can minimize stockouts while reducing the
capital tied up by an organization. It also enables an organization to
purchase or produce a product in an economic quantity, thus minimizing the
overall cost of the product.
Further Readings:
Hopp W., and M. Spearman, Factory Physics. Examines operating policies and strategic
objectives within a factory.
Louis R., Integrating Kanban with MRPII: Automating a Pull System for Enhanced JIT Inventory
Management, Productivity Press, 1997. It describes an automated kanban principle that
integrates MRP into a powerful lean manufacturing system that substantially lowers inventory
levels and significantly reduces non-value-adding actions.
For every type of inventory model, the decision maker is concerned with
the main question: When should a replenishment order be placed? One
may review stock levels at fixed intervals or re-order when the stock falls
to a predetermined level, e.g., a fixed safety-stock level.
Keywords and Notations Often Used in the Modeling and Analysis Tools for Inventory Control
Demand rate x: a constant rate at which the product is withdrawn from inventory.
Ordering (set-up) cost C1: a fixed cost of placing an order, independent of the amount ordered.
Holding cost C2: this cost usually includes the lost investment income caused by having the asset
tied up in inventory. This is not a real cash flow, but it is an important component of
the cost of inventory. If P is the unit price of the product, this component of the cost is
often computed as iP, where i is a percentage that includes the opportunity cost, allocation
cost, insurance, etc.; it is a discount rate or interest rate used to compute the
inventory holding cost.
Backorder cost C4: this cost includes the expense for each backordered item. It might also include an
expense for each item proportional to the time the customer must wait.
Lead time L: the time interval between when an order is placed and when the inventory is
replenished.
The Classical EOQ Model: This is the simplest model, constructed under
the conditions that goods arrive the same day they are ordered and that no
shortages are allowed. Clearly, one must reorder when the inventory reaches 0, or,
considering a lead time L, when the inventory falls to L⋅x.
The following figure shows the change of the inventory level with time:
The figure shows time on the horizontal axis and inventory level on the
vertical axis. We begin at time 0 with an order arriving. The amount of the
order is the lot size, Q. The lot is delivered all at one time causing the
inventory to shoot from 0 to Q instantaneously. Material is withdrawn
from inventory at a constant demand rate, x, measured in units per time.
After the inventory is depleted, another order of size Q
arrives, and the cycle repeats.
Total Cost per unit time = ordering cost + holding cost:
Total Cost = C1x/Q + C2Q/2
The Optimal Ordering Quantity = Q* = (2xC1/C2)1/2, therefore,
The Optimal Reordering Cycle = T* = [2C1/(xC2)]1/2
Notice that one may incorporate the Lead Time (L), that is the time
interval between when an order is placed and when the inventory is
replenished.
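As a quick check of these formulas, here is a minimal Python sketch; the demand rate, ordering cost, and holding cost used in the example are hypothetical values chosen only for illustration.

import math

def eoq(x, C1, C2):
    # Classical EOQ: demand rate x (units per period), ordering cost C1 ($ per order),
    # holding cost C2 ($ per unit per period). Returns (Q*, T*, total cost per period).
    q_star = math.sqrt(2 * x * C1 / C2)        # optimal order quantity
    t_star = q_star / x                        # optimal reordering cycle
    cost = C1 * x / q_star + C2 * q_star / 2   # ordering + holding cost per period
    return q_star, t_star, cost

# Hypothetical example: x = 1200 units/year, C1 = $500 per order, C2 = $2 per unit per year
Q, T, TC = eoq(1200, 500, 2)
print(round(Q), round(T, 3), round(TC, 2))

With a lead time L, one would simply reorder when the stock falls to xL.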
Models with Shortages: When a customer seeks the product and finds the
inventory empty, the demand can be satisfied later when the product
becomes available. Often the customer receives some discount which is
included in the backorder cost.
Otherwise,
Q* = (2xC1/C2)^(1/2), with S* = 0.
Rather than arriving instantaneously, the lot is assumed to arrive
continuously at a production rate K. This situation arises when a
production process feeds the inventory and the process operates at a rate
K greater than the demand rate x.
where,
t1 = {[2xC1C2] / [C4K(K-x)(C2+C4)]}^(1/2),
and
t2 = {[2xC1C4] / [C2K(K-x)(C2+C4)]}^(1/2)
You may like using Inventory Control Models JavaScript for checking
your computation. You may also perform sensitivity analysis by means of
some numerical experimentation for a deeper understanding of the
managerial implications in dealing with uncertainties of the parameters of
each model.
Further Reading:
Zipkin P., Foundations of Inventory Management, McGraw-Hill, 2000.
We already know from our analysis of the "Simple EOQ" approach that
any fixed lot size will create "leftovers" which increase total cost
unnecessarily. A better approach is to order "whole periods worth" of
stock. But the question is: should you order one period's worth, or two, or
more? As usual, it depends. At first, increasing the buy quantity saves
money because order costs are reduced since fewer buys are made.
Eventually, though, large order quantities will begin to increase total costs
as holding costs rise.
Notice that this method assumes that the mean cost per period, ACi/i (the
cumulative lot cost ACi divided by the number of periods i covered by the
lot), initially decreases and then increases, never decreasing again as the
number of covered periods grows; but this is not always true. Moreover, the
solution is myopic, so it may leave only one, two, or a few periods for the
final batch even if the setup cost is high. However, extensive numerical
studies show that the results are usually within 1 or 2 percent of the optimum
(obtained by mixed-integer linear programming) if the horizon is not
extremely short.
Period    1    2    3    4    5    6    7    8    9   10   11   12
Demand  200  150  100   50   50  100  150  200  200  250  300  250
The ordering cost is $500, the unit price is $50 and the holding cost is $1
per unit per period. The main questions are the usual ones in general
inventory management, namely: What should the order quantity be? and
When should the orders be placed? The following table provides the detailed
computations of the Silver-Meal approach with the resulting near optimal
ordering strategy:
Period   Demand   Lot QTY   Holding Cost                        Lot Cost   Mean Cost
First Buy
  1        200       200    0                                      500        500
  2        150       350    150                                    650        325
  3        100       450    150+2(100)=350                         850        283
  4         50       500    150+200+3(50)=500                     1000        250
  5         50       550    150+200+150+4(50)=700                 1200        240
  6        100       650    150+200+150+200+5(100)=1200           1700        283
Second Buy
  6        100       100    0                                      500        500
  7        150       250    150                                    650        325
  8        200       450    150+2(200)=550                        1050        350
Third Buy
  8        200       200    0                                      500        500
  9        200       400    200                                    700        350
 10        250       650    200+2(250)=700                        1200        400
Fourth Buy
 10        250       250    0                                      500        500
 11        300       550    300                                    800        400
 12        250       800    300+2(250)=800                        1300        433
Fifth Buy
 12        250       250    0                                      500        500
Solution Summary
Period   Demand   Order QTY   Holding $   Ordering $   Period Cost
  1        200       550         350          500           850
  2        150         0         200            0           200
  3        100         0         100            0           100
  4         50         0          50            0            50
  5         50         0           0            0             0
  6        100       250         150          500           650
  7        150         0           0            0             0
  8        200       400         200          500           700
  9        200         0           0            0             0
 10        250       550         300          500           800
 11        300         0           0            0             0
 12        250       250           0          500           500
Total     2000      2000        1350         2500          3850
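The stopping rule used above (keep extending the lot while the mean cost per period covered decreases, and start a new buy when it rises) can be written as a short Python sketch; with the demand, ordering cost, and holding cost given above it reproduces the ordering plan in the summary table.

def silver_meal(demand, order_cost, holding_cost=1.0):
    # Silver-Meal heuristic: at each buy, keep adding future periods to the lot
    # while the mean cost per period covered keeps decreasing.
    n = len(demand)
    orders = [0] * n                     # order quantity placed at the start of each period
    t = 0
    while t < n:
        span = 1
        best_mean = float(order_cost)    # covering only period t incurs no holding cost
        while t + span < n:
            hold = sum(j * demand[t + j] for j in range(span + 1))
            mean = (order_cost + holding_cost * hold) / (span + 1)
            if mean > best_mean:
                break                    # mean cost per period started rising: stop
            best_mean = mean
            span += 1
        orders[t] = sum(demand[t:t + span])
        t += span
    return orders

demand = [200, 150, 100, 50, 50, 100, 150, 200, 200, 250, 300, 250]
print(silver_meal(demand, order_cost=500, holding_cost=1.0))
# [550, 0, 0, 0, 0, 250, 0, 400, 0, 550, 0, 250]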
Further Reading:
Zipkin P., Foundations of Inventory Management, McGraw-Hill, 2000.
where D is the daily order quantity, P is the unit profit on each item sold, and L is the loss on any
leftover item.
It can be shown that the optimal ordering quantity D* with the largest
expected daily profit is a function of the Empirical Cumulative
Distribution Function (ECDF), F(x). More specifically, the optimal
quantity is the smallest value D* for which F(D*) equals or exceeds the
ratio P/(P + L).
To verify this decision, one may use the following recursive formula in
computing:
The daily expected profit computed with this formula is recorded in the
last column of the above table; the optimal daily profit is $75.20.
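A minimal Python sketch of this critical-fractile rule is given below; since the original demand table is not reproduced here, the daily-demand observations, unit profit, and unit loss are hypothetical.

def optimal_order(demand_sample, unit_profit, unit_loss):
    # Smallest D* whose empirical CDF F(D*) reaches P/(P + L).
    ratio = unit_profit / (unit_profit + unit_loss)
    n = len(demand_sample)
    for d in sorted(set(demand_sample)):
        cdf = sum(1 for x in demand_sample if x <= d) / n   # empirical F(d)
        if cdf >= ratio:
            return d
    return max(demand_sample)

# Hypothetical observed daily demands, P = $5 profit per sale, L = $3 loss per leftover
sample = [10, 12, 12, 13, 14, 15, 15, 15, 16, 18]
print(optimal_order(sample, unit_profit=5, unit_loss=3))   # ratio 0.625 is first reached at 15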
Further Reading:
Silver E., D. Pyke, and R. Peterson, Inventory Management and Production Planning and
Scheduling, Wiley, 1998.
Primary levers for reducing each type of inventory:
• Cycle inventory:
o Streamline the ordering/production process.
o Increase repeatability.
• Safety stock inventory:
o Better timing of orders.
o Improve forecasts.
o Reduce supply uncertainties.
o Use capacity cushions instead.
• Anticipation inventory:
o Match the production rate with the demand rate.
o Use complementary products.
o Off-season promotions.
o Creative pricing.
• Pipeline inventory:
o Reduce lead time.
o Use more responsive suppliers.
o Decrease lot sizes when they affect lead times.
Cash Flow and Forecasting: Balance sheets and profit and loss
statements indicate the health of your business at the end of the financial
year. What they fail to show you is the timing of payments and receipts
and the importance of cash flow.
Your business can survive without cash for a short while but it will need to
be "liquid" to pay the bills as and when they arrive. A cash flow statement,
usually constructed over the course of a year, compares your cash position
at the end of the year to the position at the start, and the constant flow of
money into and out of the business over the course of that year.
The amount your business owes and is owed is covered in the profit and
loss statement; a cash flow statement deals only with the money
circulating in the business. It is a useful tool for establishing whether your
business is consuming cash or generating it.
Working Capital Cycle: Cash flows in a cycle into, around and out of a
business. It is the business's life blood and every manager's primary task is
to help keep it flowing and to use the cash flow to generate profits. If a
business is operating profitably, then it should, in theory, generate cash
surpluses. If it doesn't generate surpluses, the business will eventually run
out of cash and expire.
Each component of working capital, namely inventory, receivables, and
payables, has two dimensions: time and money. When it comes to
managing working capital, time is money. If you can get money to
move faster around the cycle (e.g., collect moneys due from debtors more
quickly) or reduce the amount of money tied up (e.g., reduce inventory
levels relative to sales), the business will generate more cash or it will need
to borrow less money to fund working capital. As a consequence, you
could reduce the cost of interest or you will have additional money
available to support additional sales growth. Similarly, if you can
negotiate improved terms with suppliers, e.g., longer credit or an
increased credit limit, you effectively create free finance to help fund
future sales.
The following are some of the main factors in managing a “good” cash
flow system:
Further Reading:
Schaeffer H., Essentials of Cash Flow, Wiley, 2002.
Silver E., D. Pyke, and R. Peterson, Inventory Management and Production Planning and
Scheduling, Wiley, 1998.
Simini J., Cash Flow Basics for Nonfinancial Managers, Wiley, 1990.
Marketing and Modeling Advertising Campaign
Selling Models
Selling focuses on the needs of the seller. Selling models are concerned with
the seller's need to convert the product into cash. One of the best-known
selling models is the advertising/sales response (ASR) model, which
assumes that the shape of the relationship between sales and advertising is
known.
The Vidale and Wolfe Model: Vidale and Wolfe developed a single-
equation model of sales response to advertising based on experimental
studies of advertising effectiveness. It describes sales behavior through time
at different levels of advertising expenditure for a firm, consistent
with their empirical observations.
This equation suggests that the increase in the rate of sales will
be greater the higher the sales response constant r, the lower the sales decay
constant λ, the higher the saturation level m, and the higher the advertising
expenditure.
The three parameters r, λ , and m are constant for a given product and
campaign.
A(t) = A for 0 ≤ t ≤ T, and A(t) = 0 for t > T.
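The response equation itself is not reproduced above; in its standard textbook form the Vidale-Wolfe model is dS/dt = r A(t) (m - S)/m - λ S, where S(t) is the sales rate. Assuming that form, the following Python sketch integrates the model for the rectangular advertising pulse A(t) defined above; all parameter values are hypothetical.

def vidale_wolfe(S0, r, lam, m, A, T_pulse, T_end, dt=0.01):
    # Euler integration of dS/dt = r*A(t)*(m - S)/m - lam*S with
    # A(t) = A on [0, T_pulse] and A(t) = 0 afterwards.
    S, t, path = S0, 0.0, []
    while t <= T_end:
        a = A if t <= T_pulse else 0.0
        S += dt * (r * a * (m - S) / m - lam * S)
        t += dt
        path.append((round(t, 2), round(S, 3)))
    return path

# Hypothetical campaign: r = 0.5, λ = 0.1, saturation m = 100, pulse A = 10 for 5 periods
trajectory = vidale_wolfe(S0=10.0, r=0.5, lam=0.1, m=100.0, A=10.0, T_pulse=5.0, T_end=20.0)
print(trajectory[-1])   # after the pulse ends, sales decay back toward zero at rate λ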
While many marketing researchers regard the ASR approach as an
established school in advertising modeling, they readily
admit that its most aggravating problem is the assumption about the shape of the
ASR function. Moreover, ASR models do not consider the needs and
motives leading to consumer behavior. It is well established that
marketing managers are concerned about delivering product benefit,
changing brand attitudes, and influencing consumer perceptions.
Marketing management realizes that advertising plans must be based on
the psychological and social forces that condition consumer behavior; that
is, what goes on inside the consumer's head.
Buying Models
Modeling Consumer Choice: When the modeler and the decision maker
come up with a good model of customer choice among discrete options,
they often implement that model directly. However, one might
take advantage of multi-method, object-oriented software (e.g.,
AnyLogic), so that the practical problem can be modeled at multiple levels of
aggregation. For example, the multinomial logit of discrete-choice
methods can be represented by object state-chart transitions (e.g., from
an "aware" state to a "buy" state), where the transition is the custom probability
function estimated by the discrete-choice method. Multi-level objects
representing subgroups easily represent nesting. Moreover, each object
can have multiple state-charts.
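As a small illustration of the discrete-choice component mentioned above, the following Python sketch computes multinomial-logit choice probabilities from brand utilities; the utility values are hypothetical, and the state-chart machinery of a tool such as AnyLogic is not reproduced here.

import math

def mnl_probabilities(utilities):
    # Multinomial logit: P(choose k) = exp(U_k) / sum over j of exp(U_j).
    exps = {brand: math.exp(u) for brand, u in utilities.items()}
    total = sum(exps.values())
    return {brand: e / total for brand, e in exps.items()}

# Hypothetical deterministic utilities for three competing brands
probs = mnl_probabilities({"brand_X": 1.2, "brand_Y": 0.8, "brand_Z": 0.3})
print(probs)   # such probabilities could drive an "aware" -> "buy" state transition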
Consumer Behavior
The structure of the decision process of a typical consumer concerning a
specific brand X contains three functional values, namely attitude A(t),
level of buying B(t), and communication C(t).
Where:
A(t) = the consumers' attitude toward the brand, which results from a
variety of complex interactions among factors, some of which are
indicated in the above figure.
1) The advertising rate is constant over time. It is clear that the return
on constant advertising diminishes with time and hence is not related
to the volume of sales; therefore further expenditures on advertising will
not bring about any substantial increase in sales revenues. The term
"advertising modeling" has been used to describe the decision process of
improving sales of a product or a service. A substantial expense in
marketing is advertising. The effect of repetitions of a stimulus
on the consumer's ability to recall the message is a major issue in learning
theory. It is well established that advertising must be continuous to keep
the message from being forgotten.
Internet Advertising
Most websites offer some kind of graphic or text advertising, and there are
a bewildering variety of mailing lists, newsletters, and regular mailings.
However, before deciding where to advertise, one must first ask why
advertise at all.
Banner Advertising: If you have spent any time surfing the Internet, you
have seen more than your fair share of banner ads. These small
rectangular advertisements appear on all sorts of Web pages and vary
considerably in appearance and subject matter, but they all share a basic
function: if you click on them, your Internet browser will take you to the
advertiser's Web site.
Over the past few years, most of us have heard about all the money being
made on the Internet. This new medium of education and entertainment
has revolutionized the economy and brought many people and many
companies a great deal of success. But where is all this money coming
from? There are a lot of ways Web sites make money, but one of the main
sources of revenue is advertising. And one of the most popular forms of
Internet advertising is the banner ad.
Enter numerical values for any two of the input cells (Total Cost, CPM,
Number of Exposures), then click Calculate to obtain the value of the third.
Like print ads, banner ads come in a variety of shapes and sizes with
different costs and effectiveness. The main factors are the total cost, the
cost per thousand impressions (CPM), and the number of ads shown, i.e., the
exposures. By entering two of these factors, the above JavaScript
calculates the numerical value of the third.
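The relationship behind such a calculator is presumably Total Cost = CPM × (Exposures / 1000); a minimal Python sketch that recovers whichever of the three quantities is missing is shown below, with hypothetical numbers.

def cpm_calculator(total_cost=None, cpm=None, exposures=None):
    # Given any two of total cost ($), CPM ($ per 1000 impressions) and exposures,
    # return the third, using total_cost = cpm * exposures / 1000.
    if total_cost is None:
        return cpm * exposures / 1000.0
    if cpm is None:
        return 1000.0 * total_cost / exposures
    if exposures is None:
        return 1000.0 * total_cost / cpm
    raise ValueError("leave exactly one argument as None")

print(cpm_calculator(cpm=25.0, exposures=40000))    # total cost = $1000
print(cpm_calculator(total_cost=1000.0, cpm=25.0))  # exposures = 40,000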
Suppose that a consumer has decided to shop around several retail stores
in an attempt to find a desired product or service. From his or her past
shopping experience, the shopper may know:
Although the model might include predictors from all four categories,
indicating that clickstream behavior is important when determining the
tendency to buy, one must determine the contribution in
predictive power of variables that were never used before in online
purchasing studies. Detailed clickstream variables are the most important
ones in classifying customers according to their online purchase behavior.
Therefore, a good model enables e-commerce retailers to capture an
elaborate list of customer information.
Concluding Remarks
Further Readings:
Arsham H., A Markovian model of consumer buying behavior and optimal advertising pulsing
policy, Computers and Operations Research, 20(1), 35-48, 1993.
Arsham H., Consumer buying behavior and optimal advertising strategy: The quadratic profit
function case, Computers and Operations Research, 15(2), 299-310, 1988.
Arsham H., A stochastic model of optimal advertising pulsing policy, Computers and
Operations Research, 14(3), 231-239, 1987.
Gao Y., (Ed.), Web Systems Design and Online Consumer Behavior, Idea Group Pub., Hershey
PA, 2005.
Wang Q., and Z. Wu, A duopolistic model of dynamic competitive advertising, European Journal
of Operational Research, 128(1), 213-226, 2001.
Markov Chains
Several of the most powerful analytic techniques with business
applications are based on the theory of Markov chains. A Markov chain is
a special case of a Markov process, which itself is a special case of a
random or stochastic process.
There are many kinds of random processes. Two of the most important
distinguishing characteristics of a random process are: (1) its state space,
or the set of values that the random variables of the process can have, and
(2) the nature of the indexing parameter. We can classify random
processes along each of these dimensions.
1. State Space:
o continuous-state: X(t) can take on any value over a
continuous interval or set of such intervals
o discrete-state: X(t) has only a finite or countable
number of possible values {x0, x1 … ,xi,..}
A discrete-state random process is also often called a chain.
2. Index Parameter (often it is time t):
o discrete-time: the permitted times at which changes in
value may occur are finite or countable; X(t) may be
represented as a set {Xi}
o continuous-time: changes may occur anywhere
within a finite or infinite interval or set of such intervals
What is the probability that the system is in the ith state at the nth
transition period?
To answer this question, we first define the state vector. For a Markov
chain with k states, the state vector for observation period n is the
column vector
x(n) = [x1, x2, ..., xk]T,
where xi = probability that the system is in the ith state at the time of
observation. Note that the sum of the entries of the state vector has to be
one. Any column vector x = [x1, x2, ..., xk]T whose entries are
nonnegative and sum to one can serve as a state vector. Suppose, for
example, that a four-state chain starts in state 1, so that the initial state
vector is
x(0) = [1, 0, 0, 0]T
In the next observation period, say the end of the first week, the state vector
will be
x(1) = Px(0) = [.25, .20, .25, .30]T
Similarly, we can find the state vectors for the 5th, 10th, 20th, 30th, and 50th
observation periods:
x(5) = P5x(0) = [.2495, .2634, .2339, .2532]T,
and x(10) = P10x(0), x(20) = P20x(0), x(30) = P30x(0), and x(50) = P50x(0)
all equal the same vector, [.2495, .2634, .2339, .2532]T, to four decimal places.
The same limiting result can be obtained by solving the linear system of
equations Π P = Π using this JavaScript. This suggests that the state vector
approaches a fixed vector as the number of observation periods
increases. This is not the case for every Markov chain. For example, if
P = [0 1; 1 0] (first row 0, 1; second row 1, 0), and x(0) = [1, 0]T,
then the successive state vectors alternate between [0, 1]T and [1, 0]T.
These computations indicate that this system oscillates and does not
approach any fixed vector.
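A minimal Python sketch of these matrix-power computations, using the two-state oscillating example just given (the four-state transition matrix of the earlier example is not reproduced in this excerpt, but the same functions apply to it):

def step(P, x):
    # One transition of the chain: returns Px, i.e. new_x[i] = sum over j of P[i][j]*x[j].
    return [sum(P[i][j] * x[j] for j in range(len(x))) for i in range(len(P))]

def state_after(P, x0, n):
    # State vector after n transitions: P^n x(0).
    x = list(x0)
    for _ in range(n):
        x = step(P, x)
    return x

P = [[0, 1],
     [1, 0]]          # the oscillating two-state chain from the text
x0 = [1, 0]
for n in range(4):
    print(n, state_after(P, x0, n))   # alternates between [1, 0] and [0, 1]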
You may like using the Matrix Multiplications and Markov Chains
Calculator-I JavaScript to check your computations and to perform some
numerical experiment for a deeper understanding of these concepts.
Further Reading:
Taylor H., and S. Karlin, An Introduction to Stochastic Modeling, Academic Press, 1994.
What production levels for the three industries balance the economy?
Solution: Write the equations that show the balancing of the production
and consumption industry by industry X = DX + E:
Production = Consumption (by A, by B, by C) + External demand:
Industry A: x1 = .10x1 + .43x2 + 20,000
Industry B: x2 = .15x1 + .37x3 + 30,000
Industry C: x3 = .23x1 + .03x2 + .02x3 + 25,000
Now solve this resulting system of equations for the output productions
Xi, i = 1, 2, 3.
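A minimal Python sketch that solves this balance condition, X = DX + E, i.e. (I - D)X = E, using the coefficients exactly as printed above:

import numpy as np

D = np.array([[0.10, 0.43, 0.00],    # consumption (input-output) coefficients
              [0.15, 0.00, 0.37],
              [0.23, 0.03, 0.02]])
E = np.array([20000.0, 30000.0, 25000.0])   # external demands

X = np.linalg.solve(np.eye(3) - D, E)       # production levels x1, x2, x3 for A, B, C
print(X)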
You may like using the Solving System of Equations Applied to Matrix
Inversion JavaScript to check your computations and performing some
numerical experiment for a deeper understanding of these concepts.
Further Reading:
Dietzenbacher E., and M. Lahr, (Eds.), Wassily Leontief and Input-Output Economics,
Cambridge University, 2003.
Many decisions involve trading money now for money in the future. Such
trades fall in the domain of financial economics. In many such cases, the
amount of money to be transferred in the future is uncertain. Financial
economists thus deal with both risk (i.e., uncertainty) and time, which are
discussed in the following two applications, respectively.
- Two Investments -
               Investment I          Investment II
             Payoff %   Prob.       Payoff %   Prob.
                 1       0.25           3       0.33
                 7       0.50           5       0.33
                12       0.25           8       0.34
Expected value is another name for the mean and (arithmetic) average.
The variance is not expressed in the same units as the expected value. So,
the variance is hard to understand and to explain as a result of the squared
term in its computation. This can be alleviated by working with the square
root of the variance, which is called the Standard (i.e., having the same
unit as the data have) Deviation:
Standard Deviation = σ = (Variance) ½
Both variance and standard deviation provide the same information and,
therefore, one can always be obtained from the other. In other words, the
process of computing standard deviation always involves computing the
variance. Since standard deviation is the square root of the variance, it is
always expressed in the same units as the expected value.
For the dynamic process, the Volatility as a measure for risk includes the
time period over which the standard deviation is computed. The Volatility
measure is defined as standard deviation divided by the square root of the
time duration.
Coefficient of Variation: CV = 100 |σ / expected value| %
You might like to use Multinomial for checking your computation and
performing computer-assisted experimentation.
You might like to use Performance Measures for Portfolios to check your
computations and to perform some numerical experimentation.
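A minimal Python sketch that computes the expected value, standard deviation, and coefficient of variation for the two investments tabulated above:

def summarize(payoffs, probs):
    # Expected value, standard deviation and coefficient of variation (%) of a
    # discrete payoff distribution.
    mean = sum(p * x for x, p in zip(payoffs, probs))
    var = sum(p * (x - mean) ** 2 for x, p in zip(payoffs, probs))
    sd = var ** 0.5
    return mean, sd, 100 * abs(sd / mean)

print(summarize([1, 7, 12], [0.25, 0.50, 0.25]))   # Investment I
print(summarize([3, 5, 8], [0.33, 0.33, 0.34]))    # Investment II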
As another application, consider an investment of $10,000 over a 4-year
period that returns R(t) at the end of year t, with the R(t) being statistically
independent and distributed as follows:
R(t) Probability
$2000 0.1
$3000 0.2
$4000 0.3
$5000 0.4
One may compute the expected return: E[R(t)] = 2000(0.1) + ... = $4000.
However the present worth, using the discount factor [(1+I)^n - 1]/[I(1+I)^n]
= 2.5887 with n = 4, for the investment is:
4000(2.5887) - 10000 = $354.80.
Not bad. However, one needs to know its associated risk. The variance of
R(t) is:
Var[R(t)] = E[R(t)^2] - {E[R(t)]}^2 = 17,000,000 - 16,000,000 = 1,000,000.
Therefore, its standard deviation is $1,000.
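A minimal Python sketch of these computations follows. The discount factor 2.5887 for n = 4 corresponds to an annual interest rate of about 20%; that rate is an assumption here, since it is not stated explicitly above.

returns = [2000, 3000, 4000, 5000]
probs = [0.1, 0.2, 0.3, 0.4]

expected = sum(r * p for r, p in zip(returns, probs))                   # $4,000
variance = sum(p * (r - expected) ** 2 for r, p in zip(returns, probs))
std_dev = variance ** 0.5                                               # $1,000

i, n = 0.20, 4                                         # assumed annual interest rate
factor = ((1 + i) ** n - 1) / (i * (1 + i) ** n)       # about 2.5887
present_worth = expected * factor - 10000              # about $354.80
print(expected, std_dev, round(factor, 4), round(present_worth, 2))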
Further Reading:
Elton E., Gruber, M., Brown S., and W. Goetzman, Modern Portfolio Theory and Investment
Analysis, John Wiley and Sons, Inc., New York, 2003.
Total Cost: the sum of the fixed cost and the total variable cost for any given
level of production.
Total Revenue: the product of forecasted unit sales and unit price.
BE = FC / (UP - VC)
where BE is the break-even quantity, FC is the fixed cost, UP is the unit
selling price, and VC is the variable cost per unit.
Therefore, the loss is reduced as output rises, and she breaks even at 600
sandwiches per month. Any output higher than this will generate a profit for Rachel.
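A minimal Python sketch of the break-even formula follows; the cost and price figures of the Rachel example are not given in this excerpt, so the numbers below are hypothetical values that happen to yield a break-even point of 600 units.

def break_even(fixed_cost, unit_price, unit_variable_cost):
    # Break-even quantity: BE = FC / (UP - VC).
    return fixed_cost / (unit_price - unit_variable_cost)

# Hypothetical sandwich stand: FC = $600 per month, UP = $3.00, VC = $2.00
print(break_even(600, 3.00, 2.00))   # 600 sandwiches per month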
To show this in a graph, plot the total costs and total revenue. It is also
normal to show the fixed cost. The horizontal axis measures the level of
output. At a certain level of output, the total cost and total revenue curves
will intersect. This highlights the break-even level of output.
The break-even level will depend on the fixed costs, the variable cost
per unit, and the selling price. The higher the fixed costs, the more units
will have to be sold to break even. The higher the selling price, the fewer
units need to be sold.
For some industries, such as the pharmaceutical industry, break even may
be at quite high levels of output. Once a new drug has been developed,
the actual production costs will be low; however, high volumes are needed
to cover the high initial research and development costs. This is one reason
why patents are needed in this industry. The airline and
telecommunications industries also have high fixed costs and need high
volumes of customers to begin to make profits. In industries where the
fixed costs are relatively small and the contribution on each unit is quite
high, break-even output will be much lower.
Can a firm reduce its break-even output? Not surprisingly, firms will
be eager to reduce their break even level of output, as this means they
have to sell less to become profitable. To reduce the break even level of
output a firm must do one or more of the following:
An order is received from a new customer who wants 300 units but would
only be willing to pay $100 for each unit. From the costing data in the
table above, we can calculate the average cost of each unit to be
$250,000/2,000 units = $125. Therefore, it would appear that accepting
the order would mean the firm loses $25 on each unit sold.
The order would, however, in fact add to the firm’s profits. The reason for
this is that the indirect costs are fixed over the range of output 0-2500
units. The only costs that would increase would be the direct cost of
production, i.e. labor, materials and other direct costs. The direct cost of
each unit can be found by dividing the total for direct costs by the level of
output. For example, the material cost for 2,000 units is $80,000. This
means that the material cost for each unit would be $80,000/2,000 = $40.
If we repeat this for labor and other direct costs then the cost of production
an extra unit would be as follows:
Each extra unit sold would, therefore, generate an extra $10 contribution
(selling price minus direct costs). Hence, accepting the order would actually
add $3,000 (300 units × $10 contribution) to the overall profits of the firm.
Provided the selling price exceeds the additional cost of making the
product, this contribution on each unit will add to profits.
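A minimal Python sketch of this incremental (contribution) reasoning follows; the material cost of $40 per unit comes from the text above, while the labor and other direct-cost figures are hypothetical placeholders, since the full costing table is not reproduced here.

material, labor, other_direct = 40.0, 35.0, 15.0        # labor and other_direct are assumed
direct_cost = material + labor + other_direct           # $90 of direct cost per unit (assumed)

offer_price = 100.0
order_size = 300

average_full_cost = 250000 / 2000                       # $125: the misleading full-cost view
contribution_per_unit = offer_price - direct_cost       # $10 per unit
extra_profit = order_size * contribution_per_unit       # $3,000 added to overall profit
print(average_full_cost, contribution_per_unit, extra_profit)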
Other issues concerned with accepting the order: It will also help the
firm to utilize any spare capacity that is currently lying idle. For example,
if a firm is renting a factory, then this will represent an indirect cost for the
firm. It does not matter how much of the factory is used, the rent will
remain the same.
By accepting this order the firm may also generate sales with new
customers or, via word-of-mouth, with other customers. The firm will
have to decide whether the attractions of extra orders and higher sales
outweigh the fact that these sales are at a lower selling price than normal.
It will want to avoid having too many of its sales at this discounted price,
as this lower price may start to be seen as normal. Customers already
paying the higher price may be unhappy and demand to be allowed to buy
at this lower price.
Although the lower price is above the marginal cost of production, it may
be that the firm does not cover its indirect and direct costs if too many are
sold at the low price. Though the contribution earned on these discounted
units is positive, sales still have to be high enough to provide enough unit
contributions to cover the indirect costs.
Contribution and full costing: When costing, a firm can use either
contribution (marginal) costing, whereby the fixed costs are kept separate,
or it can apportion overheads and use full costing. If the firm uses full
costing then it has to decide how the overheads are to be apportioned or
allocated to the different cost centers.
We can produce a costing statement that highlights the costs and revenues
that arise out of each profit center:
If a firm wishes to work out the profit made by each profit center then the
overheads will have to be allocated to each one. In the example below,
overheads are allocated equally:
It is worth noting that the firm's overall profit should not be any different
whether it uses contribution or full costing. All that changes is how it deals
with the costs: either apportioning them out to the cost or profit centers for
full costing, or deducting them in total from the total contribution of the
centers for contribution costing. If the indirect costs are allocated, the
decision about how to allocate them will affect the profit or loss of each
profit center, but it will not affect the overall profit of the firm.
In some ways these rules are no more or less accurate than dividing
the indirect costs equally, although they may appear intuitively
appealing and in some sense feel fairer. Consequences of unfair
overhead allocation: We can rationalize the reason chosen as the
basis of overhead allocation; however, we must realize that no method is
perfect. Costs that are apportioned require a method to be chosen
independently, precisely because there is no direct link between the cost
and the cost center. The method chosen can have unfortunate effects on
the organization as a whole. If the firm uses departments as cost centers
then it is possible that using absorption costing could lead to resentment
by staff. This can be illustrated through the following example.
Hopkinson Ltd. has decided to allocate fixed overheads using labor costs
as the basis of allocation. Fixed overheads for the organization total
$360,000 and will be allocated on the basis of labor costs (i.e. in the ratio
2:3:4) between the three branches.
                        A        B        C
                       ($)      ($)      ($)
Sales Revenue       165,000  240,000  300,000
Labor Costs          40,000   60,000   80,000
Materials Costs      20,000   30,000   40,000
Other Direct Costs   10,000   10,000   10,000
Allocating overheads in this way gives the result that branch B generates
the highest profit and branch C is the least profitable. The staff at branch C
may be labeled as poor performers. This could lead to demotivation,
rivalry between branches, and lower productivity. Staff at branch C may
also be worried that promotions or bonuses may not be available to them
because their branch rates lowest of the three. However, this result arises
only because the high fixed overheads were allocated in this way. If
we ignore the fixed costs and consider contribution only, the following
results occur:
                        A        B        C
                       ($)      ($)      ($)
Sales Revenue       165,000  240,000  300,000
Labor Costs          40,000   60,000   80,000
Materials Costs      20,000   30,000   40,000
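A minimal Python sketch comparing the two views for Hopkinson Ltd., using the figures above (the contribution calculation also subtracts the other direct costs of $10,000 per branch shown in the first table):

branches = {
    "A": dict(sales=165000, labor=40000, materials=20000, other=10000),
    "B": dict(sales=240000, labor=60000, materials=30000, other=10000),
    "C": dict(sales=300000, labor=80000, materials=40000, other=10000),
}
fixed_overheads = 360000
total_labor = sum(b["labor"] for b in branches.values())    # allocation base, ratio 2:3:4

for name, b in branches.items():
    contribution = b["sales"] - b["labor"] - b["materials"] - b["other"]
    allocated = fixed_overheads * b["labor"] / total_labor
    print(name, contribution, round(allocated), round(contribution - allocated))
# Contributions: A 95,000, B 140,000, C 170,000 -- C is the strongest branch.
# After allocation: A 15,000, B 20,000, C 10,000 -- C looks weakest only because
# it absorbs the largest share of the fixed overheads.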
The problems that can occur when allocating overheads can lead to
arguments between managers over how the overheads should be divided up. To
boost their particular division's performance, managers will be eager to
change to a method that shifts some of their indirect costs onto another
division.
In some ways, however, it does not matter what rules are used to allocate
indirect costs. Whichever rule is used is inaccurate (by definition, indirect
costs cannot clearly be associated with a particular cost center), but the
actual process of allocating overheads makes everyone aware of their
importance and of the need to monitor and control them. Furthermore,
provided the rules are not changed over time, managers will be able to
analyze the trend in profit figures for different departments, products, or
regions. A significant increase in indirect costs will decrease the profits of
all business units to some degree, regardless of how these costs are
allocated. If the indirect costs continue to rise, all the managers will be
able to notice this trend in their accounts.
In the following question, we will look at the costing data for Beynon's
Ltd., a small family chain of bakeries. The chain is owned and managed
as a family concern, and the father, James Beynon, has been convinced of
the merits of segmental reporting. He is worried because his youngest son,
whom he considers to be inexperienced in retail management, runs one of
the branches. Consider the following breakdown of costs:
The data above appear to confirm the father's belief that, in the
long-term interest of the firm, he may have to close down the Browndale
branch and concentrate his efforts on the other two branches. If we use
contribution costing, however, we see a different picture:
                       HIGHFIELDS  BROWNDALE  NORTON
                           ($)        ($)       ($)
Sales Revenue            22,000     17,000    26,000
Staffing Costs            7,000      8,000     9,000
Supplies                  5,000      4,000     6,000
Branch Running Costs      1,000      1,000     1,000
                       Total ($)
Overall Contribution     23,000
Indirect Costs           18,000
Profit                    5,000
You may like using the Break-Even Analysis and Costing Analysis
JavaScript to perform some sensitivity analysis on the parameters and to
investigate their impact on your decision making.
Further Reading:
Schweitzer M., E. Trossmann, and G. Lawson, Break-Even Analyses: Basic Model, Variants,
Extensions, Wiley, 1991.
Further Reading:
Varian H.R., Microeconomics Analysis, Norton, New York, 1992.
A product's life cycle is conventionally divided into four stages,
as depicted in the following figure:
Characteristics:
Type of Decisions:
• Delphi method
• historical analysis of comparable products
• input-output analysis
• panel consensus
• consumer survey
• market tests
Characteristics:
Type of Decisions:
• facilities expansion
• marketing strategies
• production planning
Characteristics:
Type of Decisions:
Characteristics:
• sales decline
• prices drop
• profits decline
They do not want to be taken by surprise and ruined. They are anxious to
learn in time when the turning points will come, because they plan to
arrange their business activities early enough so as not to be hurt by
them, or even to profit from them.
Further Readings:
Ross Sh., An Elementary Introduction to Mathematical Finance: Options and other Topics,
Cambridge University Press, 2002. It presents the Black-Scholes theory of options as well as
introducing such topics in finance as the time value of money, mean variance analysis, optimal
portfolio selection, and the capital assets pricing model.
Ulrich K., and S. Eppinger, Product Design and Development, McGraw-Hill, 2003.
Urban G., and J. Hauser, Design and Marketing Of New Products, Prentice Hall, 1993.
Zellner A., Statistics, Econometrics and Forecasting, Cambridge University Press, 2004.
Learning and The Learning Curve
To make it narrow, you must give plenty of training, and follow it up with
continuing floor support, help desk support, and other forms of just-in-
time support so that people can quickly get back to the point of
competence. If they stay in the valley of despair for too long, they will
lose hope and hate the new software and the people who made them
switch.
Success Characteristic:
Workers need to be trained in the new method, based on the fact that the
longer a person performs a task, the more quickly he or she can do it.
Training options include:
1. Learn-on-the-job approach:
o learn wrong method
o bother other operators, lower production
o anxiety
2. Simple written instructions: only good for very simple jobs
3. Pictorial instructions: "good pictures worth 1000 words"
4. Videotapes: dynamic rather than static
5. Physical training:
o real equipment or simulators, valid
o does not interrupt production
o monitor performance
o simulate emergencies
Modeling the Learning Curve: Learning curves are all about ongoing
improvement. Managers and researchers noticed, in field after field, from
aerospace to mining to manufacturing to writing, that stable processes
improve year after year rather than remain the same. Learning curves
describe these patterns of long-term improvement. Learning curves help
answer the following questions.
The learning curve was adapted from the historical observation that
individuals who perform repetitive tasks exhibit an improvement in
performance as the task is repeated a number of times.
With proper instruction and repetition, workers learn to perform their jobs
more efficiently and effectively and consequently, e.g., the direct labor
hours per unit of a product are reduced. This learning effect could have
resulted from better work methods, tools, product design, or supervision,
as well as from an individual’s learning the task.
• Log-Linear: y(t) = k t^b
• Stanford-B: y(t) = k (t + c)^b
• DeJong: y(t) = a + k t^b
• S-Curve: y(t) = a + k (t + c)^b
The Log-Linear equation is the simplest and most common equation and it
applies to a wide variety of processes. The Stanford-B equation is used to
model processes where experience carries over from one production run to
another, so workers start out more productively than the asymptote predicts.
The Stanford-B equation has been used to model airframe production and
mining. The DeJong equation is used to model processes where a portion
of the process cannot improve. The DeJong equation is often used in
factories where the assembly line ultimately limits improvement. The S-
Curve equation combines the Stanford-B and DeJong equations to model
processes where both experience carries over from one production run to
the next and a portion of the process cannot improve.
An Application: Because of the learning effect, the time required to
perform a task is reduced when the task is repeated. Applying this
principle, the time required to perform a task will decrease at a declining
rate as the cumulative number of repetitions increases. This reduction in time
follows the function y(t) = k t^b, where b = log(r)/log(2), i.e., 2^b = r, and r
is the learning rate (a positive number less than 1, with a lower rate implying
faster learning), and k is a constant.
For example, industrial engineers have observed that the learning rate
ranges from 70% to 95% in the manufacturing industry. An r = 80%
learning curve denotes a 20% reduction in the time with each doubling of
repetitions. An r = 100% curve would imply no improvement at all. For an
r = 80% learning curve, b = log(0.8)/log(2) = -0.3219.
Numerical Example: Consider the first column (the number of cycles) and the
third column (their cycle times) of the following data set:
b = -0.32
k = 10^1.08 = 12
y(t) = 12 t^(-0.32)
r = 2^b = 2^(-0.32) = 80%
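A minimal Python sketch of these learning-curve calculations, using the fitted values k = 12 and r = 80% from the example above (the underlying data set is not reproduced here):

import math

def learning_exponent(r):
    # b = log(r)/log(2), so that 2**b = r (the learning rate).
    return math.log(r) / math.log(2)

b = learning_exponent(0.80)
k = 12
print(round(b, 4))                 # about -0.3219
for t in (1, 2, 4, 8):
    y = k * t ** b                 # predicted cycle time after t repetitions
    print(t, round(y, 2))          # each doubling of repetitions cuts the time to about 80%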
Further Readings:
Dilworth J., Production and Operations Management: Manufacturing and Non-manufacturing,
Random House Business Division, 2003.
Krajewski L., and L. Ritzman, Operations Management: Strategy and Analysis, Addison-Wesley
Publishing Company, 2004.
Economics and finance use ratio analysis for comparison, and as a
measuring tool in the decision process, for the purpose of evaluating certain
aspects of a company's operations. The following are among the widely
used ratios:
Price Indices
Index numbers are used when one is trying to compare series of numbers
of vastly different size. It is a way to standardize the measurement of
numbers so that they are directly comparable.
The simplest and widely used measure of inflation is the Consumer Price
Index (CPI). To compute the price index, the cost of the market basket in
any period is divided by the cost of the market basket in the base period,
and the result is multiplied by 100.
                  Period 1                   Period 2
Items     Quantity (q1)  Price (p1)   Quantity (q2)  Price (p2)
Apples         10           $.20            8           $.25
Oranges         9           $.25           11           $.21
A better price index could be found by taking the geometric mean of the
two. To find the geometric mean, multiply the two together and then take
the square root. The result is called a Fisher Index.
In the USA, since January 1999, the geometric mean formula has been used to
calculate most basic indexes within the Consumer Price Index (CPI); in
other words, the prices within most item categories (e.g., apples) are
averaged using a geometric mean formula. This improvement moves the
CPI somewhat closer to a cost-of-living measure, as the geometric mean
formula allows for a modest amount of consumer substitution as relative
prices within item categories change.
Notice that, since the geometric mean formula is used only to average
prices within item categories, it does not account for consumer
substitution taking place between item categories. For example, if the
price of pork increases compared to those of other meats, shoppers might
shift their purchases away from pork to beef, poultry, or fish. The CPI
formula does not reflect this type of consumer response to changing
relative prices.
The following are some useful and widely used price indices, where pi is the
price per unit in period i, qi is the quantity produced in period i, Vi = pi qi is
the value of the qi units, and the subscript 1 indicates the reference (base)
period among the n periods.
Laspeyres' Index:
Lj = Σ(pj q1) / Σ(p1 q1), where the sums are taken over the items in the
basket; the base-period quantities q1 serve as the weights, so the index
compares the cost of the base-period basket at period-j prices with its cost
at base-period prices.
Paasche's Index:
Pj = Σ(pj qj) / Σ(p1 qj), where the sums are taken over the items in the
basket; the current-period quantities qj serve as the weights.
Fisher Index:
Fj = (Lj × Pj)^(1/2), the geometric mean of the Laspeyres and Paasche indexes.
For more economics and financial ratios and indices, visit the Index
Numbers and Ratios with Applications site
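A minimal Python sketch that evaluates these three indexes for the apples-and-oranges data above, using the standard base-weighted (Laspeyres) and current-weighted (Paasche) definitions:

p1 = {"apples": 0.20, "oranges": 0.25}   # base-period prices
q1 = {"apples": 10,   "oranges": 9}      # base-period quantities
p2 = {"apples": 0.25, "oranges": 0.21}   # period-2 prices
q2 = {"apples": 8,    "oranges": 11}     # period-2 quantities

items = p1.keys()
laspeyres = sum(p2[i] * q1[i] for i in items) / sum(p1[i] * q1[i] for i in items)
paasche = sum(p2[i] * q2[i] for i in items) / sum(p1[i] * q2[i] for i in items)
fisher = (laspeyres * paasche) ** 0.5

print(round(100 * laspeyres, 1))   # about 103.3
print(round(100 * paasche, 1))     # about 99.1
print(round(100 * fisher, 1))      # about 101.2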
Probabilistic Modeling: