Lecture 4 IMB 516


Industrial Analysis

IMB 516

Dr. Kobamelo Mashaba, Lecture 3


University of Botswana
Faculty of Engineering
Mechanical Department
email: mashabak@ub.bw
Tel: 355 4301
Experimental design

Experimental design (ED) is a strategy for planning, conducting, analysing, and interpreting experiments so that sound and valid conclusions can be drawn.

Experimental design is a very powerful problem-solving technique that assists industrial engineers in tackling quality control problems effectively and economically.

Experiments in companies are often conducted in a series of trials or tests that produce
quantifiable outcomes.

For continuous improvement in product/process quality, it is fundamental to understand the process behaviour, the amount of variability, and its impact on processes.

One of the common approaches employed by many engineers for experimentation in companies is one-variable-at-a-time (OVAT), where engineers vary one variable at a time while keeping all other variables in the experiment fixed.

This approach depends upon guesswork, luck, experience, and intuition for its
success. Moreover, this type of experimentation requires large resources to obtain
a limited amount of information about the process.

One-variable-at-a-time experiments are often unreliable, inefficient, and time-consuming, and may yield a false optimum condition for the process.
Experimental Design Goal

For the successful application of an industrially designed experiment, one generally requires planning skills, statistical skills, teamwork skills, and engineering skills.

Goal

In industry we conduct experiments to systematically investigate the process or product variables that affect product quality.

This is done by changing input variables and observing the response.


Experimental Studies

In experiments, some manipulation is usually attempted in order to see if the outcome is related to the factor being controlled.

❖ Twenty plots of carrots are grown in a field. Each plot is randomly allocated to one of five fertilizers, with four plots for each fertilizer. At the end of the experiment, the carrots from each plot are weighed. The yield of carrots with different fertilizers is being studied.

❖ People with a certain disease are randomly allocated to three different drugs. The drugs are being compared for their influence on the progress of the disease.

The goal of a study is to find out the relationships between certain explanatory
factors and response variables.
An experimental study aims to answer whether there is a cause-and-effect relationship between the explanatory factor and the response variable.

An observational study usually can only answer whether there is an association between the explanatory factor and the response variable. In general, external evidence is required to rule out possible alternative explanations for a cause-and-effect relationship.
Industrial Application of ED
❖ Reducing product and process design and development time.
❖ Studying the behaviour of a process over a wide range of operating conditions.
❖ Minimising the effects of variation in manufacturing conditions.
❖ Increasing process productivity by reducing scrap, rework, etc.
❖ Improving the yield and stability of an ongoing manufacturing process.
❖ Making products insensitive to environmental variations.
❖ Studying the relationship between a set of independent process variables and the response variable.
Three principles of experimental design

❖ Replication, to provide an estimate of experimental error;

❖ randomization, to ensure that this estimate is statistically valid; and

❖ local control, to reduce experimental error by making the experiment more efficient.
Replications

The number of replications (sample size) is the number of experimental units that receive each treatment. The sample size should be small enough that negligible treatment differences are not declared statistically significant, and large enough that meaningful treatment differences are declared statistically significant. Repeated measurements on the same experimental unit may or may not constitute true replications; treating dependent observations as if they were independent is one of the most common statistical errors found in the scientific literature.
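As a rough sketch of how the number of replicates might be chosen, the snippet below uses the standard normal-approximation sample-size formula for comparing two treatment means; the standard deviation, the meaningful difference delta, the significance level, and the power are all illustrative assumptions, not values from the lecture.

```python
# Rough sample-size sketch: choose the number of replicates per treatment so that a
# meaningful difference delta is likely to be declared significant (desired power),
# while the type I error rate alpha stays controlled. Two-sample z-test approximation.
import math
from scipy.stats import norm  # assumes scipy is available

def replicates_per_treatment(delta, sigma, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)       # two-sided critical value for alpha
    z_beta = norm.ppf(power)                # quantile corresponding to the desired power
    n = 2 * ((z_alpha + z_beta) * sigma / delta) ** 2
    return math.ceil(n)                     # round up to whole experimental units

# Illustrative numbers: detect a 2-unit shift when the unit-to-unit sigma is 3
print(replicates_per_treatment(delta=2.0, sigma=3.0))   # about 36 replicates per treatment
```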
Randomization

Randomization means the use of a random device to assign the treatments to the experimental units. Randomization prevents the introduction of systematic bias into the experiment and provides the link between the actual experiment and the statistical model that underlies the data analysis. Thus, randomization is essential to the valid use of statistical methods.
Completely randomized design

In a completely randomized design, treatment levels or combinations are assigned to experimental units at random. This is typically done by listing the treatment levels or treatment combinations and assigning a random number to each.
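A minimal sketch of this random assignment, using the carrot-plot example from the earlier slide (20 plots, 5 fertilizers, 4 plots each); the plot and fertilizer labels are placeholders.

```python
# Completely randomized design: assign 5 fertilizers to 20 carrot plots at random,
# with 4 plots (replicates) per fertilizer.
import random

fertilizers = ["F1", "F2", "F3", "F4", "F5"]
plots = list(range(1, 21))            # 20 experimental units

treatments = fertilizers * 4          # list every treatment the required number of times
random.shuffle(treatments)            # the random device doing the assignment

for plot, fertilizer in zip(plots, treatments):
    print(f"plot {plot:2d} -> {fertilizer}")
```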
Randomized block design

A randomized block design arranges experimental units into groups (blocks) that are similar to one another, and then applies a completely randomized design within each block.

The randomized block design takes account of known factors that affect the outcome/response but are not of primary interest.

Example: a diet experiment on people.

The sex of the patient is a blocking factor, accounting for treatment variability between males and females. This reduces sources of variability and thus leads to greater precision.
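The sketch below illustrates blocked randomization for a diet-type experiment: sex is the blocking factor and the treatments are randomized separately within each block. The diet names and subject labels are hypothetical.

```python
# Randomized block design: complete randomization of the diets within each block (sex),
# so every diet appears equally often among males and among females.
import random

diets = ["Diet A", "Diet B", "Diet C"]
blocks = {
    "male":   ["M1", "M2", "M3", "M4", "M5", "M6"],
    "female": ["F1", "F2", "F3", "F4", "F5", "F6"],
}

for block, subjects in blocks.items():
    assignment = diets * (len(subjects) // len(diets))   # balance diets within the block
    random.shuffle(assignment)                           # randomize inside the block only
    for subject, diet in zip(subjects, assignment):
        print(f"{block:6s} {subject} -> {diet}")
```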
Matched pairs design

Matched pairs design is an experimental design where pairs of participants are matched in terms of key variables, such as age and IQ. One member of each pair is then placed into the experimental group and the other member into the control group.

An individual case is considered matched in a sample when it possesses similar attributes to another case in the sample. Matching can be used to reduce or eliminate confounding within an experiment. When matching is utilized in a study, the researcher matches the attributes of a case with another case in the sample and applies a treatment and a control to each pair of matched individuals.
Matched pairs design

Example of a matched pairs design for a hypothetical medical experiment in which 1000 subjects each receive one of two treatments: a placebo or a cold vaccine. The 1000 subjects are grouped into 500 matched pairs. Each pair is matched on gender and age. For example, Pair 1 might be two women, both age 21; Pair 2 might be two men, both age 21; Pair 3 might be two women, both age 22; and so on.

For this hypothetical example, the matched pairs design is an improvement over a completely randomized design. Like the completely randomized design, the matched pairs design uses randomization to control for confounding.
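A small sketch of how the within-pair randomization might be carried out for this example; the three pairs shown are hypothetical, and a coin flip decides which member of each pair receives the vaccine.

```python
# Matched pairs design: each pair is matched on gender and age, and a coin flip
# assigns one member to the cold vaccine and the other to the placebo.
import random

pairs = [
    ("woman, age 21", "woman, age 21"),
    ("man, age 21",   "man, age 21"),
    ("woman, age 22", "woman, age 22"),
]

for i, (member_a, member_b) in enumerate(pairs, start=1):
    if random.random() < 0.5:                 # randomization within the pair
        vaccine, placebo = member_a, member_b
    else:
        vaccine, placebo = member_b, member_a
    print(f"Pair {i}: vaccine -> {vaccine}, placebo -> {placebo}")
```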
Type I and Type II Errors and Their Application

Type I and Type II errors are used in quality engineering and are related to hypothesis testing.

A Type I error is the rejection of a true null hypothesis; its probability is denoted alpha (α).

A Type II error is the failure to reject a false null hypothesis; its probability is denoted beta (β).
Experimental errors

Type I errors are also called:

1. Producer's risk
2. False alarm
3. α (alpha) error

Type II errors are also called:

1. Consumer's risk
2. Misdetection
3. β (beta) error
Calculations of type I and type II error

Example: A certain type of cold vaccine is known to be only 25% effective after a period of 2 years. To determine whether a new vaccine is superior in providing protection against the same virus for a longer period of time, 20 people are chosen at random and inoculated. If 9 or more of the people receiving the new vaccine surpass the 2-year period without contracting the virus, the new vaccine will be considered superior to the one presently in use.

Solution:

● H0: The new vaccine is equally effective after a period of 2 years as the one commonly used. Therefore p = 0.25.
● H1: The new vaccine is more effective after a period of 2 years than the one commonly used. Therefore p > 0.25.
Calculations of type I and type II error

A type I error will occur when 9 or more individuals surpass the 2-year period without contracting the virus while using a new vaccine that is in fact equivalent to the one in use.

α = P(type I error)
  = P(X ≥ 9 | p = 0.25)
  = ∑_{x=9}^{20} b(x; 20, 0.25)
  = 1 - ∑_{x=0}^{8} b(x; 20, 0.25)
  = 1 - 0.9591
  = 0.0409

The probability of committing a type II error is impossible to compute unless we have a specific alternative hypothesis. In our example we test the null hypothesis p = 0.25 against the alternative hypothesis p = 0.5.

β = P(type II error)
  = P(X < 9 | p = 0.5)
  = ∑_{x=0}^{8} b(x; 20, 0.5)
  = 0.2517

This binomial-sums method is a poor test procedure compared with the normal curve approximation presented next.
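The binomial sums above can be checked numerically; the sketch below assumes the scipy library is available.

```python
# Check of the binomial-sums calculation: n = 20 subjects, reject H0 when X >= 9.
from scipy.stats import binom

n = 20
alpha = 1 - binom.cdf(8, n, 0.25)   # P(X >= 9 | p = 0.25) = 1 - sum_{x=0}^{8} b(x; 20, 0.25)
beta = binom.cdf(8, n, 0.50)        # P(X < 9 | p = 0.50) = sum_{x=0}^{8} b(x; 20, 0.50)

print(f"alpha = {alpha:.4f}")       # about 0.0409
print(f"beta  = {beta:.4f}")        # about 0.2517
```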
Normal curve approximation of type I and type II error

For our vaccine problem, let us assume the critical value is 36.5 and a random sample of 100 is used; all values above 36.5 constitute the critical region, while all values below 36.5 fall in the acceptance region.

α = P(type I error) = P(X > 36.5 | H0 is true)

Under H0 (p = 0.25):
μ = np = 100*0.25 = 25
σ = √(npq) = √(100*0.25*0.75) = 4.33

Find the z value given by z = (critical value x - mean μ)/σ. The z value corresponding to x = 36.5 is z = (36.5 - 25)/4.33 = 2.656, so

α = P(Z > 2.656) = 1 - P(Z < 2.656) = 1 - 0.9961 = 0.0039

If H0 is false and the true value under H1 is p = 0.5, we can determine the probability of a type II error:

μ = np = 100*0.5 = 50
σ = √(npq) = √(100*0.5*0.5) = 5
z = (36.5 - 50)/5 = -2.7

β = P(type II error) = P(X < 36.5 | H1 is true) = P(Z < -2.7) = 0.0035
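The same normal-approximation figures can be reproduced with a few lines of code; this sketch assumes scipy is available and uses the exact z values rather than the rounded table values, so the results agree with the slide to within rounding.

```python
# Normal-curve approximation for the vaccine problem: n = 100, critical value x = 36.5.
from math import sqrt
from scipy.stats import norm

n, x_crit = 100, 36.5

# Type I error: distribution of X under H0 (p = 0.25)
mu0, sigma0 = n * 0.25, sqrt(n * 0.25 * 0.75)    # 25 and about 4.33
alpha = 1 - norm.cdf((x_crit - mu0) / sigma0)    # P(Z > 2.656)

# Type II error: distribution of X under H1 (p = 0.5)
mu1, sigma1 = n * 0.5, sqrt(n * 0.5 * 0.5)       # 50 and 5
beta = norm.cdf((x_crit - mu1) / sigma1)         # P(Z < -2.7)

print(f"alpha = {alpha:.4f}")                    # about 0.0040 (0.0039 with rounded table values)
print(f"beta  = {beta:.4f}")                     # about 0.0035
```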
Problems with OVAT/OFAT

Let us recall that OFAT means varying one factor at a time to evaluate the effects of various factors on the output.

Problems:

1. We are not able to know the interactions of the various factors with each other.
   ● An interaction means that the effect of one factor depends on the setting of another factor (a toy sketch illustrating this is given after this list).
2. We are not able to find the optimum settings of the factors that give the best outcome.
3. We gain only limited knowledge about the product and process performance.
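The toy sketch below makes point 1 concrete with a made-up response function in which factor A helps only when factor B is at its high setting: OFAT from the usual baseline stops at a suboptimal setting, while the 2^2 full factorial finds the best combination. All numbers are illustrative, not from the lecture.

```python
# Why OFAT misses interactions: hypothetical yield where the effect of A depends on B.
from itertools import product

def response(a, b):
    # made-up process yield; a and b are 0 (low) or 1 (high)
    return 60 + 5 * a + 3 * b + 12 * a * b

# OFAT from the baseline (a=0, b=0): vary one factor at a time
ofat_trials = [(0, 0), (1, 0), (0, 1)]
best_ofat = max(ofat_trials, key=lambda ab: response(*ab))

# Full factorial: all 2^2 = 4 combinations
factorial_trials = list(product([0, 1], repeat=2))
best_factorial = max(factorial_trials, key=lambda ab: response(*ab))

print("OFAT best      :", best_ofat, "yield =", response(*best_ofat))            # (1, 0), 65
print("Factorial best :", best_factorial, "yield =", response(*best_factorial))  # (1, 1), 80
```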
Terms and Concepts

Factor (or explanatory variable): It is an independent variable that may affect the response and of which different levels are used in an experiment.

Level: It is the setting or adjustment of a factor to a specific value during an experiment.

Response variable: It is an output variable that shows the observed result or value of an experimental treatment, which we want to optimize.

Effect: It is the relationship between a factor and a response variable.

Types of effects: 1. Main effect 2. Dispersion effect 3. Interaction effect.

Hw: Give a definition and an example of each type of effect.


Observed value: It is a particular value of a response variable determined as a result of a test or measurement.

Noise factor: It is an undesirable factor that is difficult or expensive to control as part of the standard conditions.

Experimental unit: It is the smallest entity receiving a particular treatment that yields a value of the response variable.

Treatment: It is a specific setting or combination of factor levels for an experimental unit.
Experimental error: It is the variation that occurs in the response variable beyond that attributable to the factors under study.

Experimental run: It is a single performance of the experiment for a specific set of treatment combinations.

n represents the number of experimental runs, n = L^F

Where:

● n = number of runs
● L = number of levels
● F = number of factors
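A quick check of the run-count formula (the helper name is hypothetical):

```python
# n = L^F: number of runs in a full factorial experiment
def number_of_runs(levels: int, factors: int) -> int:
    return levels ** factors

print(number_of_runs(2, 3))   # 2 levels, 3 factors -> 8 runs
print(number_of_runs(3, 2))   # 3 levels, 2 factors -> 9 runs
```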
Case study: Machining Problem
The machine operator can vary the feed, the speed, and the temperature of the cooling process.

The process engineer wants to find the settings that give the best surface finish.

The process engineer decided to test these factors using a full factorial experiment, to generate the maximum amount of process knowledge.

The feed, speed, and temperature are called factors or independent variables.

Other independent variables that cannot be manipulated, such as the hardness of the material and the humidity of the room, also affect the surface finish.

In this experiment we have 3 factors and 2 levels (low & high); therefore we have a total of 8 experimental runs (replicates).

The replicates are not the same due to experimental error:
1. Drift in the factor levels
2. Variation in the measurements
3. The existence of noise factors
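A sketch of the resulting 2^3 full factorial design matrix for the machining case study; the low/high labels are placeholders for the actual feed, speed, and coolant-temperature settings.

```python
# Full factorial design matrix for the machining case study: 3 factors at 2 levels
# each gives 2^3 = 8 experimental runs.
from itertools import product

factors = ["feed", "speed", "coolant_temp"]
levels = ["low", "high"]

print("run  " + "  ".join(f"{name:>12s}" for name in factors))
for run, combo in enumerate(product(levels, repeat=len(factors)), start=1):
    print(f"{run:3d}  " + "  ".join(f"{level:>12s}" for level in combo))
```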
