
The Indian Institute of Planning & Management

Project Trimester
(Fall - Winter 2008-10)

OPERATIONAL RESEARCH

Submitted By: Group F4148

Ananth Kumar P.

ACKNOWLEDGEMENT
We would like to express our gratitude and thanks to Prof. Mashkur Zafar for his invaluable
co-operation, continued support and guidance, as well as his healthy criticism throughout this
work. His able guidance, encouragement and valuable suggestions helped us pass easily through
the most difficult periods of this project.

We would also like to thank the faculty and staff of IIPM, because without their help this project
would not have been completed.

INTRODUCTION
Operations Research is an interdisciplinary branch of applied mathematics that uses methods
such as mathematical modeling, statistics, and algorithms to arrive at optimal or near optimal
solutions to complex problems. It is typically concerned with determining the maxima (of profit,
assembly line performance, crop yield, bandwidth, etc.) or minima (of loss, risk, etc.) of some
objective function. Operations research helps management achieve its goals using scientific
methods. OR is the discipline of applying advanced analytical methods to help make better
decisions.

By using techniques such as mathematical modeling to analyze complex situations, operations
research gives executives the power to make more effective decisions and build more productive
systems based on:

 More complete data
 Consideration of all available options
 Careful predictions of outcomes and estimates of risk
 The latest decision tools and techniques

Basic OR concepts:-

1. Integer programming- Integer programming is linear programming with the added restriction
that some or all of the decision variables must take integer values. Such problems arise whenever
decisions are indivisible (for example, the number of machines to buy or workers to assign), and
they are generally harder to solve than ordinary linear programs because the feasible region is no
longer a continuous set.
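When the number of variables is tiny, an integer program can be solved by simple enumeration. The sketch below uses a 0/1 knapsack problem, a classic example of integrality restrictions; the item values, weights, and capacity are invented for illustration:

```python
from itertools import combinations

# Hypothetical data: values and weights of 4 items, knapsack capacity 10.
values = [10, 13, 7, 8]
weights = [4, 6, 3, 5]
capacity = 10

# Enumerate every 0/1 choice of items (feasible only for small instances).
best_value, best_subset = 0, ()
for r in range(len(values) + 1):
    for subset in combinations(range(len(values)), r):
        w = sum(weights[i] for i in subset)
        v = sum(values[i] for i in subset)
        if w <= capacity and v > best_value:
            best_value, best_subset = v, subset
```

For larger instances one would use branch-and-bound rather than enumeration, since the number of subsets grows exponentially.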

2. Linear Programming- A linear programming problem may be defined as the problem of
maximizing or minimizing a linear function subject to linear constraints. The constraints may be
equalities or inequalities. Here is a simple example.
Find numbers x1 and x2 that maximize the sum x1 + x2 subject to the constraints
x1 ≥ 0, x2 ≥ 0, and
x1 + 2x2 ≤ 4
4x1 + 2x2 ≤ 12
−x1 + x2 ≤ 1

In this problem there are two unknowns, and five constraints. All the constraints are inequalities
and they are all linear in the sense that each involves an inequality in some linear function of the
variables. The first two constraints, x1 ≥ 0 and x2 ≥ 0, are special. These are called non-
negativity constraints and are often found in linear programming problems. The other constraints
are then called the main constraints. The function to be maximized (or minimized) is called the
objective function. Here, the objective function is x1 + x2.
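As a sketch, this example LP can be solved numerically with SciPy's `linprog` routine (assuming SciPy is available; `linprog` minimises, so the objective is negated):

```python
from scipy.optimize import linprog

# Maximise x1 + x2 by minimising -(x1 + x2).
c = [-1, -1]
A_ub = [[1, 2],    # x1 + 2*x2 <= 4
        [4, 2],    # 4*x1 + 2*x2 <= 12
        [-1, 1]]   # -x1 + x2 <= 1
b_ub = [4, 12, 1]

# bounds encode the non-negativity constraints x1 >= 0, x2 >= 0.
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
x1, x2 = res.x
```

The optimum lies at the vertex where the first two main constraints intersect, x1 = 8/3, x2 = 2/3, giving an objective value of 10/3.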

3. Forecasting- Forecasting is the process of estimation in unknown situations. Prediction is a
similar, but more general, term. Both can refer to estimation of time series, cross-sectional or
longitudinal data. Usage can differ between areas of application: for example, in hydrology, the
terms "forecast" and "forecasting" are sometimes reserved for estimates of values at certain
specific future times, while the term "prediction" is used for more general estimates, such as the
number of times floods will occur over a long period. Risk and uncertainty are central to
forecasting and prediction. Forecasting is used in the practice of Customer Demand Planning in
everyday business forecasting for manufacturing companies. The discipline of demand planning,
also sometimes referred to as supply chain forecasting, embraces both statistical forecasting and
a consensus process.

Forecasting is commonly used in discussion of time-series data.
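As a minimal illustration of time-series forecasting (the sales figures below are invented, not from the text), a simple moving average uses the last few observations to estimate the next period:

```python
# Monthly sales (hypothetical data) and a 3-period moving-average forecast.
sales = [120, 130, 125, 140, 150, 145]
window = 3

# The forecast for the next period is the mean of the last `window` values.
forecast = sum(sales[-window:]) / window
```

More sophisticated methods (exponential smoothing, regression on time) follow the same pattern: summarise the history, then project it forward.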

4. Simulation- Simulation is the imitation of some real thing, state of affairs, or process. The act
of simulating something generally entails representing certain key characteristics or behaviours
of a selected physical or abstract system. Simulation is used in many contexts, including the
modeling of natural systems or human systems in order to gain insight into their functioning.
Other contexts include simulation of technology for performance optimization, safety
engineering, testing, training and education. Simulation can be used to show the eventual real
effects of alternative conditions and actions. Key issues in simulation include acquisition of
valid source information about the relevant selection of key characteristics and behaviours, the
use of simplifying approximations and assumptions within the simulation, and the fidelity and
validity of the simulation outcome.
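A standard toy illustration of simulation (not from the text) is Monte Carlo estimation: imitate many random trials and measure the aggregate outcome. Here random "dart throws" estimate the area of a quarter circle, and hence π:

```python
import random

# Monte Carlo sketch: throw darts uniformly into the unit square and count
# how many land inside the quarter circle x^2 + y^2 <= 1.
random.seed(42)          # fixed seed so the run is reproducible
n = 100_000
hits = sum(1 for _ in range(n)
           if random.random() ** 2 + random.random() ** 2 <= 1.0)

# The hit fraction estimates pi/4, the quarter circle's area.
pi_estimate = 4 * hits / n
```

The same idea scales to business systems: simulate arrivals, breakdowns, or demand many times and read off average performance under each alternative policy.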

5. Queuing- Queuing theory is the mathematical study of waiting lines (or queues). The theory
enables mathematical analysis of several related processes, including arriving at the (back of the)
queue, waiting in the queue (essentially a storage process), and being served by the server(s) at
the front of the queue. The theory permits the derivation and calculation of several performance
measures including the average waiting time in the queue or the system, the expected number
waiting or receiving service and the probability of encountering the system in certain states, such
as empty, full, having an available server or having to wait a certain time to be served.
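For the single-server M/M/1 queue, these performance measures have standard closed forms; the arrival and service rates below are illustrative assumptions, not from the text:

```python
# M/M/1 queue performance measures (standard formulas).
lam, mu = 2.0, 3.0            # arrival rate, service rate (per hour; assumed)
rho = lam / mu                # utilisation; must be < 1 for a stable queue

L  = rho / (1 - rho)          # expected number in the system
Lq = rho ** 2 / (1 - rho)     # expected number waiting in the queue
W  = 1 / (mu - lam)           # expected time in the system
Wq = rho / (mu - lam)         # expected waiting time in the queue
p0 = 1 - rho                  # probability of encountering the system empty
```

With these rates the server is busy two-thirds of the time, an arriving customer spends one hour in the system on average, and two-thirds of that hour is spent waiting.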

6. Game theory- Game theory is a branch of applied mathematics that is used in the social
sciences, most notably in economics, as well as in biology, engineering, political science,
international relations, computer science, and philosophy. Game theory attempts to
mathematically capture behavior in strategic situations, in which an individual's success in
making choices depends on the choices of others. While initially developed to analyze
competitions in which one individual does better at another's expense (zero sum games), it has
been expanded to treat a wide class of interactions, which are classified according to
several criteria. Today, "game theory is a sort of umbrella or 'unified field' theory for the rational
side of social science, where 'social' is interpreted broadly, to include human as well as non-
human players (computers, animals, plants)" (Aumann 1987).
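As a sketch of how such strategic situations are analysed numerically, a two-player zero-sum game can be solved by linear programming (assuming SciPy; the matching-pennies payoffs and the +2 shift are illustrative choices, not from the text):

```python
from scipy.optimize import linprog

# Row player's payoffs for matching pennies are [[1,-1],[-1,1]]; shift them
# by +2 so every entry is positive (a standard trick), then minimise sum(x)
# subject to A^T x >= 1, x >= 0. The game value is 1/sum(x) - shift and
# the optimal mixed strategy is x / sum(x).
shift = 2
A = [[1 + shift, -1 + shift],
     [-1 + shift, 1 + shift]]
n = len(A)

A_ub = [[-A[i][j] for i in range(n)] for j in range(n)]  # -A^T x <= -1
res = linprog([1] * n, A_ub=A_ub, b_ub=[-1] * n, bounds=[(0, None)] * n)

value = 1 / res.fun - shift
strategy = [xi / res.fun for xi in res.x]
```

For matching pennies the LP recovers the familiar answer: play each side with probability 1/2, for a game value of zero.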

Linear regression

In statistics, linear regression refers to any approach to modeling the relationship between one
or more variables denoted y and one or more variables denoted X, such that the model depends
linearly on the unknown parameters to be estimated from the data. Such a model is called a
"linear model". Most commonly, linear regression refers to a model in which the conditional
mean of y given the value of X is an affine function of X. Less commonly, linear regression could
refer to a model in which the median, or some other quantile, of the conditional distribution of y
given X is expressed as a linear function of X. Like all forms of regression analysis, linear
regression focuses on the conditional probability distribution of y given X, rather than on the
joint probability distribution of y and X, which is the domain of multivariate analysis.

Linear regression was the first type of regression analysis to be studied rigorously, and to be used
extensively in practical applications. This is because models which depend linearly on their
unknown parameters are easier to fit than models which are non-linearly related to their
parameters, and because the statistical properties of the resulting estimators are easier to
determine.
Multiple regression

The general purpose of multiple regression (the term was first used by Pearson, 1908) is to learn
more about the relationship between several independent or predictor variables and a dependent
or criterion variable. For example, a real estate agent might record for each listing the size of the
house (in square feet), the number of bedrooms, the average income in the respective
neighborhood according to census data, and a subjective rating of appeal of the house. Once this
information has been compiled for various houses it would be interesting to see whether and how
these measures relate to the price for which a house is sold. For example, you might learn that
the number of bedrooms is a better predictor of the price for which a house sells in a particular
neighborhood than how "pretty" the house is (subjective rating). You may also detect "outliers,"
that is, houses that should really sell for more, given their location and characteristics.

Personnel professionals customarily use multiple regression procedures to determine equitable
compensation. You can determine a number of factors or dimensions such as "amount of
responsibility" (Resp) or "number of people to supervise" (No_Super) that you believe
contribute to the value of a job. The personnel analyst then usually conducts a salary survey
among comparable companies in the market, recording the salaries and respective characteristics
(i.e., values on dimensions) for different positions. This information can be used in a multiple
regression analysis to build a regression equation of the form:

Salary = .5*Resp + .8*No_Super

Once this so-called regression line has been determined, the analyst can now easily construct a
graph of the expected (predicted) salaries and the actual salaries of job incumbents in his or her
company. Thus, the analyst is able to determine which position is underpaid (below the
regression line) or overpaid (above the regression line), or paid equitably.

In the social and natural sciences multiple regression procedures are very widely used in
research. In general, multiple regression allows the researcher to ask (and hopefully answer) the
general question "what is the best predictor of ...". For example, educational researchers might
want to learn what are the best predictors of success in high-school. Psychologists may want to
determine which personality variable best predicts social adjustment. Sociologists may want to
find out which of the multiple social indicators best predict whether or not a new immigrant
group will adapt and be absorbed into society.

Computational Approach
The general computational problem that needs to be solved in multiple regression
analysis is to fit a straight line to a number of points.

In the simplest case - one dependent and one independent variable - you can
visualize this in a scatterplot.

 Least Squares
 The Regression Equation
 Unique Prediction and Partial Correlation
 Predicted and Residual Scores
 Residual Variance and R-square
 Interpreting the Correlation Coefficient R

Least Squares
In the scatterplot, we have an independent or X variable, and a dependent or Y
variable. These variables may, for example, represent IQ (intelligence as measured
by a test) and school achievement (grade point average; GPA), respectively. Each
point in the plot represents one student, that is, the respective student's IQ and
GPA. The goal of linear regression procedures is to fit a line through the points.
Specifically, the program will compute a line so that the squared deviations of the
observed points from that line are minimized. Thus, this general procedure is
sometimes also referred to as least squares estimation. 

The Regression Equation


A line in a two dimensional or two-variable space is defined by the equation
Y=a+b*X; in full text: the Y variable can be expressed in terms of a constant (a)
and a slope (b) times the X variable. The constant is also referred to as the
intercept, and the slope as the regression coefficient or B coefficient. For example,
GPA may best be predicted as 1+.02*IQ. Thus, knowing that a student has an IQ
of 130 would lead us to predict that her GPA would be 3.6 (since, 1+.02*130=3.6).
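This fit can be sketched with the usual closed-form least-squares estimates, using points generated from the example line GPA = 1 + .02*IQ (assuming NumPy; the IQ values are invented):

```python
import numpy as np

# Invented (IQ, GPA) points lying exactly on the example line GPA = 1 + .02*IQ.
iq  = np.array([90.0, 100.0, 110.0, 120.0, 130.0])
gpa = 1 + 0.02 * iq

# Closed-form least squares: slope b = cov(X, Y) / var(X), intercept a = mean
# adjustment. (bias=True uses population moments, consistent with np.var.)
b = np.cov(iq, gpa, bias=True)[0, 1] / np.var(iq)
a = gpa.mean() - b * iq.mean()

predicted_gpa = a + b * 130   # prediction for a student with IQ 130
```

The recovered line is Y = 1 + .02*X, and the prediction for IQ 130 is 3.6, matching the text's arithmetic.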

For example, a two dimensional regression equation can be plotted together with
confidence intervals at different levels (90%, 95% and 99%).

In the multivariate case, when there is more than one independent variable, the
regression line cannot be visualized in the two dimensional space, but can be
computed just as easily. For example, if in addition to IQ we had additional
predictors of achievement (e.g., Motivation, Self-discipline) we could construct a
linear equation containing all those variables. In general then, multiple regression
procedures will estimate a linear equation of the form:

Y = a + b1*X1 + b2*X2 + ... + bp*Xp

Unique Prediction and Partial Correlation


Note that in this equation, the regression coefficients (or B coefficients) represent
the independent contributions of each independent variable to the prediction of the
dependent variable. Another way to express this fact is to say that, for example,
variable X1 is correlated with the Y variable, after controlling for all other
independent variables. This type of correlation is also referred to as a partial
correlation (this term was first used by Yule, 1907). Perhaps the following
example will clarify this issue. You would probably find a significant negative
correlation between hair length and height in the population (i.e., short people have
longer hair). At first this may seem odd; however, if we were to add the variable
Gender into the multiple regression equation, this correlation would probably
disappear. This is because women, on the average, have longer hair than men; they
also are shorter on the average than men. Thus, after we remove this gender
difference by entering Gender into the equation, the relationship between hair
length and height disappears because hair length does not make any unique
contribution to the prediction of height, above and beyond what it shares in the
prediction with variable Gender. Put another way, after controlling for the variable
Gender, the partial correlation between hair length and height is zero.
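This can be illustrated numerically: residualise both variables on Gender and correlate the residuals. The data below are fabricated so that hair length depends only on gender, never directly on height (assuming NumPy):

```python
import numpy as np

# Fabricated sample: gender drives both height and hair length, but hair
# length has no direct link to height.
rng = np.random.default_rng(0)
gender = np.repeat([0.0, 1.0], 50)                 # 0 = male, 1 = female
height = 178 - 13 * gender + rng.normal(0, 6, 100)
hair   = 10 + 25 * gender + rng.normal(0, 4, 100)

raw_r = np.corrcoef(hair, height)[0, 1]            # sizeable negative correlation

def residuals(y, x):
    """Residuals of y after regressing it on x (with an intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Partial correlation: correlate what is left of each variable after
# removing the part explained by gender.
partial_r = np.corrcoef(residuals(hair, gender),
                        residuals(height, gender))[0, 1]
```

The raw correlation is strongly negative, while the partial correlation hovers near zero, exactly as the hair-length example describes.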

Predicted and Residual Scores


The regression line expresses the best prediction of the dependent variable (Y),
given the independent variables (X). However, nature is rarely (if ever) perfectly
predictable, and usually there is substantial variation of the observed points around
the fitted regression line (as in the scatterplot shown earlier). The deviation of a
particular point from the regression line (its predicted value) is called the residual
value.

Residual Variance and R-square


The smaller the variability of the residual values around the regression line relative
to the overall variability, the better is our prediction. For example, if there is no
relationship between the X and Y variables, then the ratio of the residual variability
of the Y variable to the original variance is equal to 1.0. If X and Y are perfectly
related then there is no residual variance and the ratio of variance would be 0.0. In
most cases, the ratio would fall somewhere between these extremes, that is,
between 0.0 and 1.0. 1.0 minus this ratio is referred to as R-square or the
coefficient of determination. This value is immediately interpretable in the
following manner. If we have an R-square of 0.4 then we know that the variability
of the Y values around the regression line is 1-0.4 times the original variance; in
other words we have explained 40% of the original variability, and are left with
60% residual variability. Ideally, we would like to explain most if not all of the
original variability. The R-square value is an indicator of how well the model fits
the data (e.g., an R-square close to 1.0 indicates that we have accounted for almost
all of the variability with the variables specified in the model).
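The R-square arithmetic above is easy to check on small invented data (assuming NumPy; the observations and predictions below are illustrative):

```python
import numpy as np

# Invented observations and model predictions.
y      = np.array([3.0, 5.0, 7.0, 6.0, 9.0])
y_pred = np.array([3.5, 4.5, 6.5, 6.5, 9.0])

ss_res = np.sum((y - y_pred) ** 2)        # residual variability
ss_tot = np.sum((y - y.mean()) ** 2)      # original (total) variability

# R-square = 1 minus the ratio of residual to original variability.
r_square = 1 - ss_res / ss_tot
```

Here the ratio of residual to total variability is 0.05, so R-square is 0.95: the model accounts for 95% of the original variability and leaves 5% unexplained.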

Interpreting the Correlation Coefficient R


Customarily, the degree to which two or more predictors (independent or X
variables) are related to the dependent (Y) variable is expressed in the correlation
coefficient R, which is the square root of R-square. In multiple regression, R can
assume values between 0 and 1. To interpret the direction of the relationship
between variables, look at the signs (plus or minus) of the regression or B
coefficients. If a B coefficient is positive, then the relationship of this variable with
the dependent variable is positive (e.g., the greater the IQ the better the grade point
average); if the B coefficient is negative then the relationship is negative (e.g., the
lower the class size the better the average test scores). Of course, if the B
coefficient is equal to 0 then there is no relationship between the variables.

The Importance of Residual Analysis


Even though most assumptions of multiple regression cannot be tested explicitly,
gross violations can be detected and should be dealt with appropriately. In
particular, outliers (i.e., extreme cases) can seriously bias the results by "pulling"
or "pushing" the regression line in a particular direction, thereby leading to biased
regression coefficients. Often, excluding just a single extreme case can yield a
completely different set of results.

OPTIMISING PROBLEM

Service unit   Meals sold   Labour hours   Material dollars
unit 1         100          2              200
unit 2         100          4              150
unit 3         100          4              100
unit 4         100          6              100
unit 5         100          8              80
unit 6         100          10             50

Microsoft Excel 12.0 Answer Report
Worksheet: [optimising problem.xlsx]Sheet2
Report Created: 2/17/2010 10:36:58 AM

Target Cell (Max)
Cell    Name          Original Value   Final Value
$B$18   Maximise u1   1                1

Adjustable Cells
Cell    Name       Original Value   Final Value
$B$16   ratio u1   0.01             0.01
$C$16   ratio v1   0.166666667      0.166666667
$D$16   ratio v2   0.003333333      0.003333333

Constraints
Cell    Name         Cell Value     Formula        Status        Slack
$B$21   unit1 Lhs    0              $B$21<=$D$21   Binding       0
$B$22   unit2 Lhs    -0.166666667   $B$22<=$D$22   Not Binding   0.166666667
$B$23   unit3 Lhs    1.38778E-15    $B$23<=$D$23   Binding       0
$B$24   unit4 Lhs    -0.333333333   $B$24<=$D$24   Not Binding   0.333333333
$B$25   unit5 Lhs    -0.6           $B$25<=$D$25   Not Binding   0.6
$B$26   unit6 Lhs    -0.833333333   $B$26<=$D$26   Not Binding   0.833333333
$B$27   constr Lhs   1              $B$27=$D$27    Not Binding   0

Microsoft Excel 12.0 Sensitivity Report
Worksheet: [optimising problem.xlsx]Sheet2
Report Created: 2/17/2010 10:36:58 AM

Adjustable Cells
Cell    Name       Final Value   Reduced Cost   Objective Coefficient   Allowable Increase   Allowable Decrease
$B$16   ratio u1   0.01          0              100                     1E+30                100
$C$16   ratio v1   0.166666667   0              0                       0                    3
$D$16   ratio v2   0.003333333   0              0                       300                  0

Constraints
Cell    Name         Final Value    Shadow Price   Constraint R.H. Side   Allowable Increase   Allowable Decrease
$B$21   unit1 Lhs    0              0.384615385    0                      1                    5
$B$22   unit2 Lhs    -0.166666667   0              0                      1E+30                0.166666667
$B$23   unit3 Lhs    1.38778E-15    0              0                      0.2                  1
$B$24   unit4 Lhs    -0.333333333   0              0                      1E+30                0.333333333
$B$25   unit5 Lhs    -0.6           0              0                      1E+30                0.6
$B$26   unit6 Lhs    -0.833333333   0              0                      1E+30                0.833333333
$B$27   constr Lhs   1              1              1                      1E+30                1

Microsoft Excel 12.0 Limits Report
Worksheet: [optimising problem.xlsx]Limits Report 1
Report Created: 2/17/2010 10:36:58 AM

Target Cell
Cell    Name          Value
$B$18   Maximise u1   1

Adjustable Cells
Cell    Name       Value         Lower Limit   Target Result   Upper Limit   Target Result
$B$16   ratio u1   0.01          0             0               0.01          1
$C$16   ratio v1   0.166666667   0.166666667   1               0.166666667   1
$D$16   ratio v2   0.003333333   0.003333333   1               0.003333333   1

unit1

units    u1    v1    v2
unit1    100   -2    -200
unit2    100   -4    -150
unit3    100   -4    -100
unit4    100   -6    -100
unit5    100   -8    -80
unit6    100   -10   -50
constr   0     2     200

Model
Decision variables
         u1     v1         v2
ratio    0.01   0.166667   0.003333

Maximise   1

Constraint   Lhs             Rhs
unit1        0          <=   0
unit2        -0.16667   <=   0
unit3        1.39E-15   <=   0
unit4        -0.33333   <=   0
unit5        -0.6       <=   0
unit6        -0.83333   <=   0
constr       1          =    1
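The unit 1 model above appears to be the ratio (efficiency) form implied by the spreadsheet: maximise the weighted output 100·u1 subject to each unit's weighted output not exceeding its weighted input, with unit 1's weighted input normalised to 1. A sketch with SciPy's `linprog` (which minimises, so the objective is negated):

```python
from scipy.optimize import linprog

# Inputs per unit, taken from the data table: labour hours and material dollars.
hours   = [2, 4, 4, 6, 8, 10]
dollars = [200, 150, 100, 100, 80, 50]

# Decision variables: [u1, v1, v2] -- the output weight and two input weights.
# For every unit: 100*u1 - (hours*v1 + dollars*v2) <= 0.
A_ub = [[100, -h, -d] for h, d in zip(hours, dollars)]
b_ub = [0] * 6

# Normalisation: unit 1's weighted input equals 1 (2*v1 + 200*v2 = 1).
A_eq = [[0, hours[0], dollars[0]]]
b_eq = [1]

res = linprog([-100, 0, 0], A_ub=A_ub, b_ub=b_ub,
              A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 3)
efficiency = -res.fun   # 1.0 means unit 1 is rated fully efficient
```

The objective value 1 reproduces the Answer Report for Sheet2; the weights themselves need not be unique, which is why alternate (u1, v1, v2) combinations can achieve the same efficiency score.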
Microsoft Excel 12.0 Answer Report
Worksheet: [optimising problem.xlsx]unit3
Report Created: 2/17/2010 11:03:34 AM

Target Cell (Max)
Cell    Name          Original Value   Final Value
$B$18   Maximise u1   1                1

Adjustable Cells
Cell    Name       Original Value   Final Value
$B$16   ratio u1   0.01             0.01
$C$16   ratio v1   0.166666667      0.0625
$D$16   ratio v2   0.003333333      0.0075

Constraints
Cell    Name         Cell Value    Formula        Status        Slack
$B$21   unit1 Lhs    -0.625        $B$21<=$D$21   Not Binding   0.625
$B$22   unit2 Lhs    -0.375        $B$22<=$D$22   Not Binding   0.375
$B$23   unit3 Lhs    4.55191E-14   $B$23<=$D$23   Binding       0
$B$24   unit4 Lhs    -0.125        $B$24<=$D$24   Not Binding   0.125
$B$25   unit5 Lhs    -0.1          $B$25<=$D$25   Not Binding   0.1
$B$26   unit6 Lhs    1.36335E-13   $B$26<=$D$26   Binding       0
$B$27   constr Lhs   1             $B$27=$D$27    Not Binding   0
Microsoft Excel 12.0 Sensitivity Report
Worksheet: [optimising problem.xlsx]Sheet3
Report Created: 2/17/2010 10:52:48 AM

Adjustable Cells
Cell    Name       Final Value   Reduced Cost   Objective Coefficient   Allowable Increase   Allowable Decrease
$B$16   ratio u1   0.008888889   0              100                     1E+30                100
$C$16   ratio v1   0.055555556   0              0                       2                    7
$D$16   ratio v2   0.006666667   0              0                       116.6666667          33.33333333

Constraints
Cell    Name         Final Value    Shadow Price   Constraint R.H. Side   Allowable Increase   Allowable Decrease
$B$21   unit1 Lhs    -0.555555556   0              0                      1E+30                0.555555556
$B$22   unit2 Lhs    -0.333333333   0              0                      1E+30                0.333333333
$B$23   unit3 Lhs    3.63043E-14    0.777777778    0                      0.142857143          0.5
$B$24   unit4 Lhs    -0.111111111   0              0                      1E+30                0.111111111
$B$25   unit5 Lhs    -0.088888889   0              0                      1E+30                0.088888889
$B$26   unit6 Lhs    1.2701E-13     0.222222222    0                      0.153846154          0.625
$B$27   constr Lhs   1              0.888888889    1                      1E+30                1

Microsoft Excel 12.0 Limits Report
Worksheet: [optimising problem.xlsx]Limits Report 2
Report Created: 2/17/2010 10:52:48 AM

Target Cell
Cell    Name          Value
$B$18   Maximise u1   0.888888889

Adjustable Cells
Cell    Name       Value         Lower Limit   Target Result   Upper Limit   Target Result
$B$16   ratio u1   0.008888889   0             0               0.008888889   0.888888889
$C$16   ratio v1   0.055555556   0.055555556   0.888888889     0.055555556   0.888888889
$D$16   ratio v2   0.006666667   0.006666667   0.888888889     0.006666667   0.888888889

unit4

units    u1    v1    v2     Available
unit1    100   -2    -200   0
unit2    100   -4    -150   0
unit3    100   -4    -100   0
unit4    100   -6    -100   0
unit5    100   -8    -80    0
unit6    100   -10   -50    0
constr   0     6     100    1

Model
Decision variables
         u1         v1         v2
ratio    0.008889   0.055556   0.006667

Maximise   0.888889

Constraint   Lhs             Rhs
unit1        -0.55556   <=   0
unit2        -0.33333   <=   0
unit3        3.63E-14   <=   0
unit4        -0.11111   <=   0
unit5        -0.08889   <=   0
unit6        1.27E-13   <=   0
constr       1          =    1

unit6

units    u1    v1    v2     Available
unit1    100   -2    -200   0
unit2    100   -4    -150   0
unit3    100   -4    -100   0
unit4    100   -6    -100   0
unit5    100   -8    -80    0
unit6    100   -10   -50    0
constr   0     10    50     1

Model
Decision variables
         u1     v1       v2
ratio    0.01   0.0625   0.0075

Maximise   1

Constraint   Lhs           Rhs
unit1        -0.625   <=   0
unit2        -0.375   <=   0
unit3        0        <=   0
unit4        -0.125   <=   0
unit5        -0.1     <=   0
unit6        0        <=   0
constr       1        =    1
Linear Programming Problem

RMC
MATERIAL REQUIREMENTS

Material         Fuel Additive   Solvent Base   Amount Available
Material 1       0.4             0.5            20
Material 2       0               0.2            5
Material 3       0.6             0.3            21
Profit per ton   40              30

MODEL
Decision variables (tons produced): Fuel Additive 25, Solvent Base 20
Max total profit: 1600

Constraints (amount used, LHS)
Material 1   20   <=   20
Material 2   4    <=   5
Material 3   21   <=   21
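The RMC model above can be reproduced directly with SciPy's `linprog` (a sketch; `linprog` minimises, so the profits are negated):

```python
from scipy.optimize import linprog

# Maximise profit 40*F + 30*S over tons of fuel additive (F) and solvent base (S).
c = [-40, -30]                  # negated because linprog minimises
A_ub = [[0.4, 0.5],             # material 1: 0.4F + 0.5S <= 20
        [0.0, 0.2],             # material 2:        0.2S <= 5
        [0.6, 0.3]]             # material 3: 0.6F + 0.3S <= 21
b_ub = [20, 5, 21]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
tons_fuel, tons_solvent = res.x
max_profit = -res.fun
```

The solver lands on the plan in the model above: 25 tons of fuel additive and 20 tons of solvent base for a profit of 1600, with materials 1 and 3 binding and one ton of material 2 left over.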

Microsoft Excel 10.0 Answer Report
Worksheet: [Book1]RMC

Target Cell (Max)
Cell    Name                             Original Value   Final Value
$B$14   MAX TOTAL PROFIT FUEL ADDITIVE   0                1600

Adjustable Cells
Cell    Name                          Original Value   Final Value
$B$12   TONS PRODUCED FUEL ADDITIVE   0                25
$C$12   TONS PRODUCED SOLVENT BASE    0                20

Constraints
Cell    Name                           Cell Value   Formula        Status        Slack
$B$17   MATERIAL 1 AMOUNT USED (LHS)   20           $B$17<=$D$17   Binding       0
$B$18   MATERIAL 2 AMOUNT USED (LHS)   4            $B$18<=$D$18   Not Binding   1
$B$19   MATERIAL 3 AMOUNT USED (LHS)   21           $B$19<=$D$19   Binding       0

Microsoft Excel 10.0 Sensitivity Report
Worksheet: [Book1]RMC

Adjustable Cells
Cell    Name                          Final Value   Reduced Gradient
$B$12   TONS PRODUCED FUEL ADDITIVE   25            0
$C$12   TONS PRODUCED SOLVENT BASE    20            0

Constraints
Cell    Name                           Final Value   Lagrange Multiplier
$B$17   MATERIAL 1 AMOUNT USED (LHS)   20            33.33333366
$B$18   MATERIAL 2 AMOUNT USED (LHS)   4             0
$B$19   MATERIAL 3 AMOUNT USED (LHS)   21            44.44444213

Microsoft Excel 10.0 Limits Report
Worksheet: [Book1]Limits Report 1

Target Cell
Cell    Name                             Value
$B$14   MAX TOTAL PROFIT FUEL ADDITIVE   1600

Adjustable Cells
Cell    Name                          Value   Lower Limit   Target Result   Upper Limit   Target Result
$B$12   TONS PRODUCED FUEL ADDITIVE   25      0             600             25            1600
$C$12   TONS PRODUCED SOLVENT BASE    20      0             1000            20            1600
Microsoft Excel 10.0 Answer Report
Worksheet: [SHAKTI SOLVER.xls]RMC

Target Cell (Max)
Cell    Name
$B$14   MAX TOTAL PROFIT FUEL ADDITIVE

Adjustable Cells
Cell    Name
$B$12   TONS PRODUCED FUEL ADDITIVE
$C$12   TONS PRODUCED SOLVENT BASE

Constraints
Cell    Name
$B$17   MATERIAL 1 AMOUNT USED (LHS)
$B$18   MATERIAL 2 AMOUNT USED (LHS)
$B$19   MATERIAL 3 AMOUNT USED (LHS)

Microsoft Excel 10.0 Sensitivity Report
Worksheet: [SHAKTI SOLVER.xls]RMC

Adjustable Cells
Cell    Name                          Final Value   Reduced Cost   Objective Coefficient   Allowable Increase   Allowable Decrease
$B$12   TONS PRODUCED FUEL ADDITIVE   25            0              40                      20                   16
$C$12   TONS PRODUCED SOLVENT BASE    20            0              30                      20                   10

Constraints
Cell    Name                           Final Value   Shadow Price   Constraint R.H. Side   Allowable Increase   Allowable Decrease
$B$17   MATERIAL 1 AMOUNT USED (LHS)   20            33.33333333    20                     1.5                  1E+30
$B$18   MATERIAL 2 AMOUNT USED (LHS)   4             0              5                      1E+30                1
$B$19   MATERIAL 3 AMOUNT USED (LHS)   21            44.44444444    21                     1E+30                2.25

Microsoft Excel 10.0 Limits Report
Worksheet: [SHAKTI SOLVER.xls]RMC

Target Cell
Cell    Name                             Value
$B$14   MAX TOTAL PROFIT FUEL ADDITIVE   1600

Adjustable Cells
Cell    Name                          Value   Lower Limit   Target Result   Upper Limit   Target Result
$B$12   TONS PRODUCED FUEL ADDITIVE   25      #N/A          #N/A            25            1600
$C$12   TONS PRODUCED SOLVENT BASE    20      #N/A          #N/A            20            1600
Microsoft Excel 10.0 Answer Report
Worksheet: [SHAKTI SOLVER.xls]SHAKTI

Target Cell (Max)
Cell    Name                         Original Value   Final Value
$B$14   MAX TOTAL PROFIT AEROPLANE   0                816

Adjustable Cells
Cell    Name                       Original Value   Final Value
$B$12   UNITS PRODUCED AEROPLANE   0                84
$C$12   UNITS PRODUCED BOAT        0                44

Constraints
Cell    Name                             Cell Value   Formula        Status        Slack
$B$17   LABOUR AMOUNT USED (LHS)         560          $B$17<=$D$17   Binding       0
$B$18   MACHINE TIME AMOUNT USED (LHS)   300          $B$18<=$D$18   Binding       0
$B$19   P&V AMOUNT USED (LHS)            128          $B$19<=$D$19   Not Binding   12

Microsoft Excel 10.0 Sensitivity Report
Worksheet: [SHAKTI SOLVER.xls]SHAKTI

Adjustable Cells
(report truncated in the original)

MATERIAL REQUIREMENTS

Material         AEROPLANE   BOAT   AMOUNT AVAILABLE
LABOUR           3           7      560
MACHINE TIME     2           3      300
P&V              1           1      140
PROFIT PER TON   5           9

MODEL
Decision variables (units produced): AEROPLANE 84, BOAT 44
Max total profit: 816

Constraints (amount used, LHS)
LABOUR         560   <=   560
MACHINE TIME   300   <=   300
P&V            128   <=   140

Microsoft Excel 10.0 Limits Report
Worksheet: [SHAKTI SOLVER.xls]SHAKTI

Target Cell
Cell    Name                         Value
$B$14   MAX TOTAL PROFIT AEROPLANE   816

Adjustable Cells
Cell    Name                       Value   Lower Limit   Target Result   Upper Limit   Target Result
$B$12   UNITS PRODUCED AEROPLANE   84      0             396             84            816
$C$12   UNITS PRODUCED BOAT        44      0             420             44            816
ANOVA for Multiple Linear Regression

Multiple linear regression attempts to fit a regression line for a response variable using more
than one explanatory variable. The ANOVA calculations for multiple regression are nearly
identical to the calculations for simple linear regression, except that the degrees of freedom are
adjusted to reflect the number of explanatory variables.
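On invented data with two predictors (assuming NumPy), the degree-of-freedom bookkeeping and the resulting F statistic look like this:

```python
import numpy as np

# Invented data: n observations, p explanatory variables.
rng = np.random.default_rng(1)
n, p = 40, 2
X = rng.normal(size=(n, p))
y = 3 + 2 * X[:, 0] - X[:, 1] + rng.normal(0, 0.5, n)

# Ordinary least squares with an intercept column.
Xd = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
y_hat = Xd @ beta

# ANOVA decomposition: total = regression + residual sums of squares,
# with df_regression = p and df_residual = n - p - 1.
ss_total = np.sum((y - y.mean()) ** 2)
ss_regr  = np.sum((y_hat - y.mean()) ** 2)
ss_resid = np.sum((y - y_hat) ** 2)
F = (ss_regr / p) / (ss_resid / (n - p - 1))
```

The mean squares divide each sum of squares by its degrees of freedom, exactly as in the ANOVA table of the Sensex regression below.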

PROBLEM ON MULTIPLE REGRESSION

WITH THE HELP OF MULTIPLE REGRESSION, FINDING HOW SOME STOCKS REPRESENT
OR AFFECT THE WHOLE SENSEX AND HOW THEY ARE CORRELATED WITH THE SENSEX

STOCKS TAKEN

BHARTI
ACC
DR.REDDY
GAIL
HDFC
ICICI
NTPC
ONGC

USING ANOVA TABLE

NSE

Regression Statistics
Multiple R          0.985220985
R Square            0.970660389
Adjusted R Square   0.967488539
Standard Error      29.6098491
Observations        83

ANOVA
             df   SS            MS            F             Significance F
Regression    8   2146431.618   268303.9523   306.0234324   1.78981E-53
Residual     74   64878.99411   876.7431636
Total        82   2211310.612

               Coefficients   Standard Error   t Stat         P-value       Lower 95%      Upper 95%
Intercept      -546.2945726   131.7849181      -4.145349715   8.93226E-05   -808.88175     -283.7073952
X Variable 1   1.114234324    0.299353937      3.722130182    0.000382631   0.517758593    1.710710056
X Variable 2   0.572745909    0.121036434      4.73201243     1.04156E-05   0.331575554    0.813916264
X Variable 3   0.128491035    0.084734122      1.516402504    0.133678019   -0.040345386   0.297327455
X Variable 4   0.945501782    0.34220857       2.762940106    0.007222806   0.263636333    1.627367232
X Variable 5   0.768518765    0.083556033      9.197645467    7.04942E-14   0.602029738    0.935007792
X Variable 6   0.93653779     0.150406245      6.226721457    2.61521E-08   0.63684681     1.23622877
X Variable 7   4.621500229    0.622053595      7.429424516    1.54342E-10   3.382031403    5.860969055
X Variable 8   0.954455322    0.13087434       7.292914119    2.78481E-10   0.693682511    1.215228133

RESIDUAL OUTPUT

Observation   Predicted Y   Residuals
1             5115.222183   -60.97218315
2             5132.797616   -14.59761572
3             5160.798882   -51.9488818
4             5150.555855   -8.405855194
5             5130.44366    11.35633996
6             5082.788444   31.66155616
7             5014.392987   49.20701321
8             4960.041399   28.55860073
9             4987.948046   9.101953508
10            4975.206611   -4.306611363
11            4863.328653   -16.62865322
12            4754.340181   71.80981869
13            4712.275473   38.27452662
14            4717.179122   -5.479122389
15            4627.288945   -63.38894486
16            4754.737713   -43.93771318
17            4817.766718   -52.21671841
18            4842.926609   -46.77660918
19            4937.696254   -39.29625373
20            4891.353843   -9.653843379
21            5007.246688   -3.296687722
22            4952.197296   0.452703678
23            5011.135103   -12.1851025
24            5046.600162   11.44983804
25            5032.958252   29.2917481
26            5022.176287   32.52371298
27            4957.215754   31.78424624
28            5024.506399   27.94360137
29            5084.534894   19.01510571
30            5074.744814   15.80518644
31            5110.607757   -2.457756948
32            5013.716231   -8.166230864
33            4971.824415   -30.0744152
34            5070.088478   -37.38847815
35            5114.883366   7.116634072
36            5118.341922   4.908077907
37            5142.476979   -10.77697883
38            5085.909153   22.99084691
39            5070.147987   -3.447987065
40            5142.368344   5.581655819
41            5089.467715   22.53228528
42            5132.898224   1.75177592
43            5109.598408   7.701592208
44            5096.778803   8.921196647
45            4999.490117   33.55988269
46            5011.139714   30.91028603
47            5015.80112    25.94887984
48            4962.269493   25.43050658
49            4942.326617   10.27338334
50            4970.991623   14.85837706
51            5150.144433   -5.544433254
52            5183.777116   -5.377115778
53            5184.123126   3.826874414
54            5172.838831   -3.388831272
55            5201.879678   -0.829678467
56            5218.099193   14.10080684
57            5242.07799    35.82201027
58            5259.485804   22.31419562
59            5270.076133   -6.97613327
60            5262.027856   -17.27785594
61            5272.069373   -22.66937305
62            5175.769321   34.63067874
63            5197.160416   36.78958433
64            5240.13083    19.76917
65            5248.476949   3.723050887
66            5337.505593   -62.65559283
67            5283.343645   -57.69364493
68            5261.398897   -39.69889653
69            5120.628917   -26.47891704
70            5031.999825   4.000174789
71            5017.962647   -10.06264651
72            4854.132964   -1.032963651
73            4827.964804   39.28519614
74            4878.801972   3.248027785
75            4875.207301   24.49269864
76            4810.010263   20.08973675
77            4921.142913   10.70708655
78            4894.764863   -49.41486269
79            4752.31233    -33.66232972
80            4769.315595   -12.0655951
81            4772.431686   -12.03168634
82            4795.060601   -2.410601132
83            4766.046823   -8.846823128
CONCLUSION

During this project we came across several aspects of Operational Research. We got to know
about several techniques such as ANOVA, multiple regression, and solving LPPs in Excel. We
learned how correlation can be used to predict other variables, covered the basics of Operational
Research, saw how some stocks represent the whole Sensex, and gained a practical view of
Operational Research.

We also gained experience with software such as TORA and PH2STAT, which are nowadays
used to solve practical problems.
