Statistical Tolerance Analysis
Basic Introduction
Fred Schenkelberg
Licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 license:
http://creativecommons.org/licenses/by-nc-nd/4.0
Feel free to email, share, and pass this ebook around the web
but please don’t alter any of its contents when you do.
Thanks!
www.accendoreliability.com
Contents
Objective
Interpretation of Histograms
PDF Construction as an Extension of a Histogram
Normal Distribution
Lognormal Distribution
Uniform Distribution
Triangle Distribution
Objective
Tolerances serve to communicate from the design team to the suppliers
and manufacturing teams the expected range of values for an element of a
product or system.
The longer answer involves the agreement between what is possible and
what is desired.
Width, length, weight, roughness, hardness, and any measure you deem
worth specifying will vary from one part to the next. Manufacturing
processes impart some amount of variation among each item produced. In
many cases the variation is acceptable for the intended function. In some
cases the variation is unacceptably large and leads to failures. When the
design does not account for the variation, holes will not align, components
will not fit, or performance will be poor.
Functional System
You may have noticed the difference between poor and well-crafted dresser
drawers: Poor drawers with poor alignment may bind when opening or
closing, whereas well-crafted drawers work as expected without binding
and do so smoothly.
Manufacturable
There are many ways to form parts: from casting to stamping and from
die cutting to hand cutting. Each process has inherent variability. When
the process is stable and the equipment well maintained, the parts will
reflect the inherent variability of the manufacturing process. Each type of
manufacturing process has limits on precision and, generally, the more
precise methods are more expensive.
Using only very tight tolerances may require expensive manufacturing
processes for the parts. If not every part's precision contributes to final
performance, then some, if not most, of the part tolerances may be less
stringent. This may allow less expensive part manufacturing methods and
still create a product with the desired performance.
Ideally, there would be no variation and the design nominal values would be
sufficient. Variation adds complexity to the design.
Ideally, the design would accommodate the entire range of variation that
arises with manufacturing of the parts. There would be no scrap or wasted
material. Variation happens, and designs that can accommodate the most
variation are good (for the supplier).
Ideally, the part variation is stable and predictable and within the needs
of the design. This doesn’t happen by chance. Communication between
the design team and the supplier has to be clear and complete concerning
design requirements and part variation. The two sides have to find a happy
balance that leads to an acceptable economic and quality solution.
In this ebook we will discuss three methods (worst case, root sum squared,
and Monte Carlo), with emphasis on the Monte Carlo method. Each method
has a role to play in most designs and
each has advantages. Of the three approaches, the Monte Carlo method
when done well provides an accurate analysis of the information you have
available. It provides a realistic estimate of the resulting assembly variation.
Being familiar with each method increases your capability to use the
appropriate tool as you engineer an appropriate design.
In the worst-case method you simply add the dimensions using the extreme
values for those dimensions. Thus, if a part is specified at 25 ± 0.1 mm,
then use either 25.1 or 24.9 mm, whichever leads to the most unfavorable
situation.
The actual range of variation should be the measured values from a stable
process. It may be based on vendor claims for process variation, industry
standards, or engineering judgment.
Simple Example
Let’s consider a stack of five plates and we want to estimate the combined
thickness. If each plate is 25 ± 0.1 mm, then the combined thickness will
be 5 times 25 mm, for 125 mm for the nominal thickness. The math for the
minimum and maximum is about as simple.
The tolerance is ± 0.1 mm; thus, combining five plates at maximum and
minimum tolerances provides a tolerance for five plates of ± 0.5 mm.
Thus, the stack of five plates will have a thickness of 125 ± 0.5 mm or a range
in thickness from 124.5 to 125.5 mm.
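The worst-case arithmetic above can be sketched in a few lines of Python (the helper function name is my own; the ebook works the numbers by hand):

```python
# Worst-case stack-up: tolerances add directly.
# worst_case_stack is a hypothetical helper, not from the ebook.
def worst_case_stack(nominal, tol, n):
    """Return (nominal, minimum, maximum) thickness for a stack of n identical parts."""
    total = n * nominal
    spread = n * tol
    return total, total - spread, total + spread

print(worst_case_stack(25.0, 0.1, 5))  # (125.0, 124.5, 125.5)
```

The same function works for any count of identical parts; for mixed parts, sum each part's nominal and tolerance separately.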
Worst-case tolerance analysis is quick and easy. You need just the
tolerances of the components involved. There is no need for distributions
or assumptions about distributions. We should have evidence that the
part tolerances are real though. If the plate is specified as 25 ± 0.1 mm,
then verify that the measured values actually fall within the range of the
tolerance.
If the design function and the manufacturing process work using the worst-
case tolerance analysis, then that is a safe way to set tolerances.
There are, however, cases where the tolerance stack is too large for the
design or assembly process. In that case, consider conducting the analysis
using the root sum squared or Monte Carlo methods.
This of course assumes the part dimensions are tightly grouped and within
the tolerance range.
In the RSS method, one assumes that the normal distribution describes
the variation of dimensions. The bell-shaped curve is symmetrical and fully
described with two parameters, the mean μ and the standard deviation σ.
The variances, not the standard deviations, are additive and provide an
estimate of the combined part variation. Adding the means and taking the
root sum square of the standard deviations provides an estimate of the
system mean and standard deviation:

\sigma_{sys} = \sqrt{\sum_{i=1}^{n} \sigma_i^2}
where σi is the standard deviation of the ith part and n is the number of parts
in the stack.
[Plot: the normal distribution, with regions marked from μ − 3σ to μ + 3σ showing 68.2%, 95.4%, and 99.7% coverage.]
The normal distribution has the property that approximately 68.2% of the
values fall within one standard deviation (1σ) of the mean. Likewise, 95.4%
fall within two standard deviations (2σ), and 99.7% fall within three standard
deviations (3σ). The plot above shows the probability for various regions
relative to the standard deviations away from the mean.
Simple Example
Using the same example as with the worst-case method, we have five plates,
each of which will have a different dimension. For any given set of five we
do not know the five individual dimensions, yet we can estimate what those
dimensions will be using statistics.
On average, the plates are 25 mm thick. Assuming that each part will be
slightly different than the average value and that the normal distribution
describes the variation, we then need to estimate the standard deviation of
the part thickness.
For this example let's measure 30 plates and calculate the standard
deviation. If we find that the standard deviation is 0.33 mm, we know that
most parts (about 99.7%) will have dimensions within ±0.99 mm (3σ) of the
mean, if the parts follow a normal distribution (how to check this assumption
will be discussed later). This is our estimate of how the part thickness
actually varies.
When you stack five blocks, the average thickness will be 5 times the mean
thickness, or 125 mm.
In this case we add the five variances, 0.33², and take the square root of that
sum:

\sigma_{sys} = \sqrt{\sum_{i=1}^{5} 0.33^2} = 0.7379
Since approximately 99.7% of the values are within ±3σ, the range of
combined thickness values for the stack of five plates should be within
125 ± 2.2137 mm (3 × 0.7379); most assemblies fall between 122.79 and 127.21 mm.
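The RSS arithmetic can be checked with a short Python sketch (Python is my choice here; the ebook works the numbers by hand):

```python
import math

# RSS stack-up: variances add, so the system standard deviation is
# the square root of the sum of the squared part standard deviations.
sigmas = [0.33] * 5                 # five plates, each with sigma = 0.33 mm
sigma_sys = math.sqrt(sum(s ** 2 for s in sigmas))
mean_sys = 5 * 25.0                 # means add directly

low = mean_sys - 3 * sigma_sys
high = mean_sys + 3 * sigma_sys
print(round(sigma_sys, 4), round(low, 2), round(high, 2))  # 0.7379 122.79 127.21
```

Note how the RSS range (±2.21 mm) is noticeably wider than the worst-case range would suggest only because the measured part variation (σ = 0.33) is much larger than the ±0.1 mm specification used earlier.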
To estimate the proportion of assemblies within a desired tolerance, evaluate
the standard normal CDF at z = tolerance / σsys, where the mean is that of the
combined means of the parts involved in the stack. In this example the
system mean is 125 mm.

The tolerance is the desired value. In this example let's assume that we
would like the total stack to be within 2 mm of the mean, or a tolerance of 2.

σsys is the standard deviation of the combined parts found by using the RSS
of the standard deviations of the parts involved.

We subtract 0.5 from the CDF value to find the one-sided probability of the
result being between the mean and the maximum value (mean plus tolerance),
and we multiply the resulting probability by 2 to find the chance that the
final assembly falls within the desired tolerance.
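This within-tolerance calculation can be done with the standard normal CDF; here is a sketch using Python's math.erf (my formulation of the calculation, not the ebook's):

```python
import math

def phi(z):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

sigma_sys = 0.7379   # system standard deviation from the RSS calculation
tolerance = 2.0      # desired: total stack within 125 +/- 2 mm

z = tolerance / sigma_sys
p_within = 2.0 * (phi(z) - 0.5)   # subtract 0.5, then double
print(round(p_within * 100, 2))   # 99.33
```

The complement (about 0.67%) is the fraction of assemblies expected to fall outside the ±2 mm tolerance.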
When gathering measurements is not feasible, then assuming that the parts
will have dimensions centered within the tolerance range and have ±3σ
across the tolerance range is a conservative starting assumption. Of course,
this implies that the part creation process is capable of creating 99.7% of
the parts within the tolerance specifications.
\sigma = \sqrt{\dfrac{\sum_{i=1}^{N} (x_i - \bar{x})^2}{N - 1}}
The Monte Carlo method uses the part variation information to build a
system of randomly selected parts and determine the system dimension. By
repeating the simulated assembly a sufficient number of times, the method
provides a set of assembly dimensions that we can then compare to the
system tolerances to estimate the number of systems within a specific range
or tolerance.
[Diagram: inputs x_1, x_2, …, x_n feed the transfer function y = f(x_1, x_2, …, x_n) to produce the system output y.]
The resulting distribution will be a normal distribution when all of the input
part dimensions are normally distributed. Also, if there are a large number
of parts in the stack, the result is also likely to be a normal distribution. For
a few parts with other than normal distribution of dimensions, the result will
likely not be normally distributed.
Then use the estimated standard deviation to estimate the number of runs m
needed to achieve a desired sampling confidence with 1% accuracy:
m = \left(\dfrac{Z_{\alpha/2}\,\sigma}{Er \cdot \mu}\right)^2

where Z_{α/2} is the standard normal quantile for the desired confidence
level, σ is the estimated standard deviation, μ is the estimated mean, and
Er is the desired relative accuracy (here 1%, or 0.01).
Simple Example
What is the appropriate tolerance for individual plates used in a stack of five
to achieve a combined thickness of 125 ± 3 mm?
Given the variation of the plate thicknesses, how many assemblies will have
thicknesses outside the desired tolerance range?
Step 2: Define the system and create a transfer function that defines how
the dimensions combine: y = f(x1, x2, …, xn).
Assuming no bow or warp in the plates, we find that the stack thickness of
five plates is simply the added thicknesses of each plate: y = x1 + x2 + x3 + x4 +
x5.
In this simple example, the five plates are the same and from the same
population having a mean value of 25 mm and standard deviation of 0.33
mm.
Thus you may require the use of a Monte Carlo package such as Crystal Ball
(an Excel add-in) or 3DCS Variation Analyst within SolidWorks.
For each of the five parts in the example, draw at random a thickness
value from the normal distribution with a mean of 25 mm and a standard
deviation of 0.33 mm. If you are using Excel the thickness for one plate
can be found with =NORMINV(RAND(),<mean value>,<standard deviation
value>). Do this for each part in the stack.
One run set of inputs may result in 25.540, 25.008, 24.565, 24.878, and
24.248 mm.
Use the input values in the transfer function to determine the run’s result.
In this example, we add the five thickness values to find the stack thickness.
The sum of the five thicknesses for one run in Step 5 results in 124.238 mm.
Step 7: Repeat Steps 5 and 6 for the required number of runs (Step 4) and
record each run’s result.
When all the input distributions are normal distributions, the resulting
distribution will also be normal. For the five plates the result is a normal
distribution with a mean of 125 mm and standard deviation of 0.7379 mm.
As with the RSS method we can then calculate the percentage of assemblies
that have dimensions within 125 ± 3 mm. We find that 99.33% fit within the
defined tolerance when each plate is 25 ± 0.99 mm.
If you assume or actually have part dimensions that can be described by the
normal distribution, then the Monte Carlo method results will equal the RSS
method results.
For a conservative estimate to use in the Monte Carlo method you should
assume a uniform distribution.
Vendor Data
Ask your vendors whether they have data or recommended tolerances for
their specific parts. Ask whether they have estimates for the distribution of
the resulting parts.
Measure Parts
When possible measure parts and create an estimate for the appropriate
distribution. Ideally, you should measure hundreds of parts, yet 30 parts will
provide a fairly accurate distribution estimate.
Plot the data using a histogram to get a sense of the appropriate distribution
to employ.
There are three basic types of histograms: counts, normalized counts, and
relative frequency. Let's explore the construction of each type using a set of
30 length measurements:
Lengths
4.5 3.4 4.3 4.6 3.8
4.5 4.3 4.5 4.0 4.5
3.9 4.2 4.6 4.4 4.0
4.6 4.3 4.8 4.4 4.0
4.1 4.7 3.9 4.5 3.7
4.2 4.6 5.2 4.2 5.0
Count Histogram
This basic form of histogram counts the number of values that fall within
bins. Bins are non-overlapping equal ranges along the measurement axis.
The bins may be assigned arbitrarily or by using a systematic rule (see Bin
Size Guidelines below).
Using the measurements above, you can determine the range. The highest
(longest) value is 5.2 and the lowest (shortest) value is 3.4. The difference
is the range. Divide the range by the desired number of bins, then round the
resulting value to a number that is convenient to work with. This result is
the bin width.
Set the first bin to include the lowest value. It is good practice to select a
starting point that creates bin boundaries that do not coincide with any of
the data points, thus avoiding any issues concerning to which bin a value
belongs.
Another common practice is to define the bin as including the first low end
of the bin range and not including the high end of the bin range (a low-open-
high-close chart).
The length data have a minimum of 3.4 and a maximum of 5.2 for a range of
5.2 − 3.4 = 1.8
Dividing 1.8 into 6 bins gives a bin size of 0.3 (which is also a rough estimate
of the standard deviation; the sample standard deviation is actually 0.387).
Starting the first bin at 3.35 and adding enough bins to span all the data
creates seven bins ranging from 3.35 to 5.45 in steps of 0.3.
For each bin, count the number of measurements that fall within that bin
range.
Then plot the counts per bin across the measurement range of values. Note
that the count is the frequency of values occurring in the bin interval.
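The bin counts can be reproduced with a few lines of Python (the binning code is my sketch; the data are the 30 lengths above):

```python
data = [4.5, 3.4, 4.3, 4.6, 3.8, 4.5, 4.3, 4.5, 4.0, 4.5,
        3.9, 4.2, 4.6, 4.4, 4.0, 4.6, 4.3, 4.8, 4.4, 4.0,
        4.1, 4.7, 3.9, 4.5, 3.7, 4.2, 4.6, 5.2, 4.2, 5.0]

# Seven bins of width 0.3 starting at 3.35; each bin is closed on the
# low end and open on the high end, and no data point sits on a boundary.
start, width = 3.35, 0.3
counts = [0] * 7
for x in data:
    counts[int((x - start) // width)] += 1
print(counts)  # [1, 4, 7, 10, 6, 1, 1]
```

Starting the bins at 3.35 rather than 3.4 is what keeps every measurement away from a bin boundary, as the text recommends.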
[Histogram of Lengths: seven bins of width 0.3 from 3.35 to 5.45, with counts on the vertical axis and length on the horizontal axis.]
Notice that the plot is roughly centered on 4.4, with a slight left skew (i.e.,
a slightly longer tail toward lower values).
The plot is close to being symmetrical, and altering the number of bins may
enhance or reduce the amount of skewness.
[Histogram of Lengths: the same data with more, narrower bins.]
[Histogram of Lengths: the same data with fewer, wider bins.]
Try a few bin counts to explore the data; there is no single correct way to
plot the data in a histogram. In general, try to set a bin count and size that
minimize the number of bins with one or no values, yet reveal the overall
shape of the data. The first plot with seven bins seems reasonable.
A histogram can show the normalized count or frequency where the count
in each bin is divided by the total number of observations.
[Normalized histogram of Lengths: percentage of total on the vertical axis, length on the horizontal axis.]
A histogram can show the relative frequency where the count in each bin is
divided by the total number of observations times the width of the bin. The
resulting histogram has an area equal to one.
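Both variants follow directly from the counts; a small Python sketch, using the seven-bin counts computed earlier in the chapter:

```python
counts = [1, 4, 7, 10, 6, 1, 1]   # seven bins of width 0.3 for the 30 lengths
n = sum(counts)
width = 0.3

normalized = [c / n for c in counts]           # fraction of the total per bin
relative = [c / (n * width) for c in counts]   # bar heights so total area is one

area = sum(r * width for r in relative)
print(round(sum(normalized), 6), round(area, 6))  # 1.0 1.0
```

The relative frequency scaling is what allows a PDF curve to be overlaid directly on the histogram later in this ebook.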
[Relative frequency histogram of Lengths: relative frequency on the vertical axis, length on the horizontal axis; the total area equals one.]
Interpretation of Histograms
It does take some experience and experimentation to fully interpret
histograms, yet the basic structure of a distribution is often evident. Here
are eight different shapes that histograms may take (with brief explanations
of each).
For a short-tailed distribution, the tails approach zero very quickly. Such
distributions commonly have a truncated (“sawed-off”) look. The classical
short-tailed distribution is the uniform (rectangular) distribution in which
the probability is constant over a given range and then drops to zero
everywhere else.
For a long-tailed distribution, the tails decline to zero very slowly and hence
one is apt to see probability a long way from the body of the distribution.
The optimal (unbiased and most precise) estimator for location for
the center of a distribution strongly depends on the tail length of the
distribution. The common choice of taking N observations and using
the calculated sample mean as the best estimate for the center of the
distribution is a good choice for the normal distribution (moderate tailed),
a poor choice for the uniform distribution (short tailed), and a horrible
choice for the Cauchy distribution (long tailed). Although for the normal
distribution the sample mean is as precise an estimator as we can get, for
the uniform and Cauchy distributions, the sample mean is not the best
estimator.
For a uniform distribution, the midrange (the average of the smallest and
largest observations) is the best estimator of location. For a Cauchy
distribution, the median is the best estimator of location.
The mode of a distribution is that value which most frequently occurs or has
the largest probability of occurrence. The sample mode occurs at the peak
of the histogram.
A unimodal distribution has a single peak, with the frequency declining as
one moves into the tails. The normal distribution is the classic example of a
unimodal distribution.
Consider now the histogram shown below, also illustrating data from a
bimodal (two-peak) distribution.
In contrast to the previous example, in this case the bimodality is not due
to an underlying deterministic model but due to a mixture of probability
models. In this case, each of the modes appears to have a rough bell-shaped
component.
One could easily imagine the above histogram being generated by a process
consisting of two normal distributions with the same standard deviation
but with two different locations (one centered at approximately 9.17 and the
other centered at approximately 9.26). If this is the case, then the research
challenge is to determine physically why there are two similar but separate
subprocesses.
As a second choice, one could conceptually argue that the mean (the point
on the horizontal axis where the distribution would balance) would serve
well as the typical value. As a third choice, others may argue that the
median (that value on the horizontal axis which has exactly 50% of the data
to the left and 50% to the right) would serve as a good typical value.
Because no single measure of “centerness” is best in all cases, the analyst
should report at least two (mean and median) and preferably all three (mean,
median, and mode) in summarizing and characterizing a dataset.
The issues for skewed left data are similar to those for skewed right data.

Apparent outliers deserve investigation; common causes include:
1. operator blunders,
2. equipment failures,
3. day-to-day effects,
4. batch-to-batch differences,
5. anomalous input conditions, and
6. warm-up effects.
[Histogram of Lengths: the seven-bin count histogram shown earlier, repeated for reference.]
The value of 3.5 falls at about the midpoint of the first bin, and 5.0 falls at
about the midpoint of another bin. If we assume that the values within a bin
are equally distributed, we may count 0.5 for each split bin. Adding a count
of 1 for the full bin above 5.0, we have approximately 2 of the 30 readings
outside the tolerance, or about 6.7%.
Checking the actual readings we find 2 of the 30 readings fall outside the
tolerance, 3.4 and 5.2.
The number of bins into which the data are to be separated is an arbitrary
choice. In general, between 5 and 20 bins is a good start. Experiment with
different bin counts as you explore the nature of the dataset.
There are a few published guidelines that provide a calculated starting bin
count.
Sturges' rule, in its two common forms:

n = 1 + \log_2(N)

n = 1 + 3.3 \log_{10}(N)

Rice's rule is to set the number of bins to twice the cube root of the number
of observations:

n = 2\sqrt[3]{N}
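For the 30 length measurements these rules give similar answers; a quick check in Python (rounding up to whole bins is my choice):

```python
import math

N = 30  # number of measurements

sturges_log2 = math.ceil(1 + math.log2(N))          # 1 + log2(N)
sturges_log10 = math.ceil(1 + 3.3 * math.log10(N))  # 1 + 3.3 log10(N)
rice = math.ceil(2 * N ** (1 / 3))                  # 2 * cube root of N
print(sturges_log2, sturges_log10, rice)  # 6 6 7
```

Both suggestions bracket the seven-bin histogram used in the example above.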
The PDF formula provides a convenient way to create random numbers for
use in Monte Carlo studies.
PDF Properties
For discrete data the probability function, p(x), has the following properties:

P[X = x] = p(x)

\sum_j p_j = 1

0 \le p(x) \le 1

For continuous data the probability density function, f(x), has the following
properties:

P[a \le X \le b] = \int_a^b f(x)\,dx

\int_{-\infty}^{\infty} f(x)\,dx = 1

f(x) \ge 0

(Unlike a probability, a density value may exceed 1.)
PDF Construction as an Extension of a Histogram
The histogram below shows the same 30 points used in the histogram
discussion. The red line is the PDF curve for the normal distribution with a
mean of 4.2 and a standard deviation of 0.3.
[Histogram of the 30 lengths with the fitted normal PDF (mean 4.2, standard deviation 0.3) overlaid in red; relative frequency on the vertical axis, length on the horizontal axis.]
The PDF doesn’t perfectly fit the histogram, even though the 30 data points
were generated from the normal distribution with a mean of 4.2 and a
standard deviation of 0.3.
Changing the number of bins to five allows the PDF curve to appear as a
better fit.
[The same histogram redrawn with five bins; the normal PDF appears to fit better.]
If the fit is close enough, the PDF function is easy to use to determine
probabilities of items outside a tolerance range.
The cumulative distribution function (CDF) is the integral of the PDF. It
ranges from zero to one and gives the area under the PDF, that is, the
cumulative probability from negative infinity up to the point of interest. We
will use the CDF as a means to create random numbers from a selection of
distributions within Excel.
Probability distributions also have two other functions, the hazard function
and the reliability function.
Normal Distribution
When to Use
The normal distribution works when the data are symmetrical and exhibit a
bell-shaped curve. It works when the process is not constrained or adjusted
(centered) regularly. It does not work well to describe items that have been
sorted into groups of different sizes or values.
f(x) = \dfrac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}
where μ is the location parameter, also known as the mean, and σ is the scale
parameter, also known as the standard deviation.
The following plot shows the normal distribution plotted with three different
mean values, 0, 5, and 12, each with a standard deviation of 1.5.
The following plot shows the normal distribution plotted with three different
standard deviation values, 1, 2, and 3, and a mean of 12.
When the data are normalized by subtracting the mean and dividing by the
standard deviation, the resulting distribution will have μ = 0 and σ = 1.
f(x) = \dfrac{1}{\sqrt{2\pi}}\, e^{-\frac{x^2}{2}}
CDF
F(x) = \dfrac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{x} e^{-\frac{(t-\mu)^2}{2\sigma^2}}\, dt
Note that the integral of the normal distribution PDF does not have a
closed-form solution and must be calculated numerically.
F(x) = \dfrac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-\frac{t^2}{2}}\, dt
We can use the data to estimate the mean and standard deviation, thus
estimating the normal distribution.
\hat{\mu} = \bar{x} = \dfrac{\sum_{i=1}^{n} x_i}{n}

\hat{\sigma} = s = \sqrt{\dfrac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n - 1}}
Note that most calculators and software packages calculate the standard
deviation by using the above formula for the sample standard deviation.
Some default to the population standard deviation, which implies that you
are calculating the parameter with all of the data in the population, which is
rarely the case. The population standard deviation is
\sigma = \sqrt{\dfrac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n}}
You would only use the population standard deviation formula when your
data include the entire population, which in practice arises mainly for small
populations. With more than 30 values, the difference between the two
formulas is minor. You should always check your calculator or formula to be
sure you are using the sample standard deviation when working with a
sample of data from a population.
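Python's statistics module exposes both formulas; here they are applied to the 30 length measurements from the histogram section (my example, not the ebook's):

```python
import statistics

lengths = [4.5, 3.4, 4.3, 4.6, 3.8, 4.5, 4.3, 4.5, 4.0, 4.5,
           3.9, 4.2, 4.6, 4.4, 4.0, 4.6, 4.3, 4.8, 4.4, 4.0,
           4.1, 4.7, 3.9, 4.5, 3.7, 4.2, 4.6, 5.2, 4.2, 5.0]

s = statistics.stdev(lengths)    # sample standard deviation (n - 1 divisor)
p = statistics.pstdev(lengths)   # population standard deviation (n divisor)
print(round(s, 3), round(p, 3))  # 0.387 0.381
```

As the text notes, with 30 values the two results differ only slightly, with the sample version always the larger of the two.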
=NORMINV(RAND(), μ, σ)
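The same draw can be made in Python by pushing a Uniform(0, 1) value through the inverse normal CDF, which is exactly what NORMINV does (statistics.NormalDist is in the standard library from Python 3.8):

```python
import random
from statistics import NormalDist

random.seed(2)  # fixed seed for a reproducible draw

# Equivalent of Excel's =NORMINV(RAND(), 25, 0.33):
plate = NormalDist(mu=25.0, sigma=0.33).inv_cdf(random.random())
print(round(plate, 3))  # one simulated plate thickness near 25 mm
```

Repeating the draw many times reproduces the Normal(25, 0.33) population, which is the basis of the Monte Carlo method described earlier.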
Lognormal Distribution
When to Use
All of the contributions to the variation are positive. Values that fit a
lognormal distribution are all positive. The distribution is defined from zero
to positive infinity.
Repair time, additive coating processes, and delay times are examples of
measures that will often fit a lognormal distribution well.
f(x) = \dfrac{1}{(x-\theta)\,\sigma\sqrt{2\pi}}\, e^{-\frac{\left(\ln\left(\frac{x-\theta}{m}\right)\right)^2}{2\sigma^2}}; \quad x > \theta;\; m, \sigma > 0
where σ is the standard deviation of the natural logarithm of the data values
(also referred to as the shape parameter), θ is the location parameter or the
offset or shift to the right for the distribution, and m is the median of the data
values (also known as the scale parameter).
The following plot shows the lognormal distribution plotted with three
different median values, 2, 5, and 12, each with a standard deviation of 1.
The following plot shows the lognormal distribution plotted with three
different standard deviation values, 0.5, 1, and 2, with a median of 12.
When θ = 0, there is no offset and the distribution is fully described with two
parameters.
f(x) = \dfrac{1}{x\,\sigma\sqrt{2\pi}}\, e^{-\frac{\left(\ln\left(\frac{x}{m}\right)\right)^2}{2\sigma^2}}; \quad x > 0;\; m, \sigma > 0
CDF
As noted before, the cumulative distribution function is the integral of the
PDF. For the lognormal distribution it is given by

F(x) = \int_{\theta}^{x} \dfrac{1}{(t-\theta)\,\sigma\sqrt{2\pi}}\, e^{-\frac{\left(\ln\left(\frac{t-\theta}{m}\right)\right)^2}{2\sigma^2}}\, dt; \quad x > \theta;\; m, \sigma > 0
Note that the integral of the lognormal distribution PDF does not have a
closed-form solution and must be calculated numerically.
F(x) = \int_{0}^{x} \dfrac{1}{t\,\sigma\sqrt{2\pi}}\, e^{-\frac{\left(\ln\left(\frac{t}{m}\right)\right)^2}{2\sigma^2}}\, dt; \quad x > 0;\; m, \sigma > 0
Estimating Parameters
The lognormal distribution has two parameters, μ and σ: the mean and
standard deviation of the natural logarithm of the data values. With this
parametrization μ = ln(m). Together μ and σ describe the distribution,
including the reliability function, defined as

R(t) = 1 - \Phi\!\left(\dfrac{\ln(t) - \mu}{\sigma}\right)
There is one difference though: You must first calculate the natural
logarithm of each data value.
Let’s say we have the time to failure times for four heater elements. We
know that the time to failure distribution is lognormal from previous work.
Calculate μ

In the table we have the time to failure data and the calculation of the
natural log of each data reading. To calculate μ we calculate the mean, or
average, of the four ln(time to failure) readings: μ = 24.7626 / 4 = 6.1907.
Calculate σ
The calculation of σ requires a little more math. The summation form of the
sample standard deviation uses the sum of the squared values and the
square of the summed values:
s = \sqrt{\dfrac{n \sum_{i=1}^{n} t_i^2 - \left(\sum_{i=1}^{n} t_i\right)^2}{n(n-1)}}
We need the sum of ln(Time to Failure) for the second summation term and
the sum of squares for the first summation term. Expanding the table to
make the calculations we find the two summation results.
s = \sqrt{\dfrac{4(153.5064) - 24.7626^2}{4(4-1)}} = 0.2642
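The arithmetic can be verified from the two sums alone (a Python check of the computation; the tiny difference from the text's 0.2642 comes from the rounding of the printed sums):

```python
import math

n = 4
sum_ln = 24.7626       # sum of ln(time to failure)
sum_ln_sq = 153.5064   # sum of squared ln(time to failure)

mu = sum_ln / n
s = math.sqrt((n * sum_ln_sq - sum_ln ** 2) / (n * (n - 1)))
print(round(mu, 3), round(s, 3))  # 6.191 0.264
```

These μ and σ values are exactly what =LOGINV(RAND(), μ, σ) needs to simulate new failure times from the fitted lognormal distribution.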
=LOGINV(RAND(), μ, σ)
Uniform Distribution
When to Use
Instead of using the tolerance or specification values as the limits of the
uniform distribution, you could use the observed range of values or the
worst-case range of values. The underlying assumption is that each part has
an equal chance of taking any value within the defined range, so the wider
the defined range, the lower the chance that a part takes any particular
value within it. Another assumption is that there is no chance of parts with
values falling outside the defined range.
f(x) = \begin{cases} \dfrac{1}{b-a} & a \le x \le b \\ 0 & \text{otherwise} \end{cases}
The following plot shows the uniform distribution plotted with three different
location parameters, 0, 2, and 4, each with a scale parameter of 1.
The following plot shows the uniform distribution plotted with three different
scale parameters, 1, 2, and 4, with a location parameter of 0.
f(x) = \begin{cases} 1 & 0 \le x \le 1 \\ 0 & \text{otherwise} \end{cases}
CDF
F(x) = \begin{cases} 0 & x < a \\ \dfrac{x-a}{b-a} & a \le x \le b \\ 1 & x > b \end{cases}

F(x) = \begin{cases} 0 & x < 0 \\ x & 0 \le x \le 1 \\ 1 & x > 1 \end{cases}
The uniform distribution parameters a and b may be the tolerance or
specification limits, or they may be the minimum and maximum observed
values. They may also represent the range of the measurements or be
estimated directly from a histogram or PDF plot.
=a+(b-a)*RAND()
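In Python the same draw is a scaled Uniform(0, 1) value, equivalent to random.uniform(a, b) (the range here is an assumed example, not from the ebook):

```python
import random

a, b = 24.9, 25.1   # e.g., a part specified as 25 +/- 0.1 mm
random.seed(4)

# Inverse of the uniform CDF F(x) = (x - a)/(b - a):
x = a + (b - a) * random.random()
print(round(x, 3))  # a value somewhere between 24.9 and 25.1
```

Every value in [a, b] is equally likely, which is what makes the uniform assumption conservative for Monte Carlo studies.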
Triangle Distribution
When to Use
You should use the triangle distribution when you know or suspect that the
distribution has a region of higher probability of occurrence than at the
edges of the region. It can be useful when you suspect a normal or lognormal
distribution yet do not have sufficient data to estimate its parameters. The
triangle distribution provides a conservative estimate for the normal or
lognormal distribution.
f(x) = \begin{cases} \dfrac{2(x-a)}{(b-a)(c-a)} & a \le x \le c \\[4pt] \dfrac{2(b-x)}{(b-a)(b-c)} & c < x \le b \\[4pt] 0 & \text{otherwise} \end{cases}
The following plot shows the triangle distribution plotted with three different
location parameters, 0, 2, and 4, each with a scale of 1 and the mode at the
midpoint of the range.

The following plot shows the triangle distribution plotted with three different
scale parameters, 1, 2, and 4, with a location parameter of 0 and the mode at
the midpoint of the range.
When the mode c is midway between the a and b values, the triangle
distribution is symmetric, as is the orange triangular distribution plot above.
CDF
F(x) = \begin{cases} 0 & x < a \\ \dfrac{(x-a)^2}{(b-a)(c-a)} & a \le x \le c \\[4pt] 1 - \dfrac{(b-x)^2}{(b-a)(b-c)} & c < x \le b \\[4pt] 1 & x > b \end{cases}
Design for Reliability Series 72 Accendo Reliability
Statistical Tolerance Analysis
When the mode is at the midpoint of the range, c = (a+b)/2, this simplifies to

F(x) = \begin{cases} 2\left(\dfrac{x-a}{b-a}\right)^2 & a \le x \le \dfrac{a+b}{2} \\[4pt] 1 - 2\left(\dfrac{b-x}{b-a}\right)^2 & \dfrac{a+b}{2} < x \le b \end{cases}
The triangle distribution parameters are defined at the endpoints a and
b. These may be the tolerance range or specification, or they may be the
minimum and maximum observed values. They may also represent the
range of the measurements or be estimated directly from a histogram or
PDF plot.
The mode parameter c is the most likely value. If you suspect that the
distribution is centered and symmetrical, then set c to (a + b)/2. If you
suspect that the distribution is not symmetrical, set c to the most likely
value.
For the symmetric case, with the mode at the midpoint of the range, a random
value can be drawn in Excel with

=a+(b-a)*(RAND()+RAND())/2

For the general case, first compute the CDF value at the mode,

=(c-a)/(b-a)

and use it to decide which branch of the inverse CDF applies.
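For the general (possibly asymmetric) triangle, the inverse-transform method uses the CDF value at the mode to pick a branch; a Python sketch (the standard library's random.triangular(low, high, mode) implements the same idea):

```python
import math
import random

def triangular_sample(a, b, c):
    """Inverse-transform draw from a triangle distribution on [a, b] with mode c."""
    u = random.random()
    f_c = (c - a) / (b - a)          # the CDF value at the mode
    if u < f_c:
        return a + math.sqrt(u * (b - a) * (c - a))
    return b - math.sqrt((1.0 - u) * (b - a) * (b - c))

random.seed(3)
draws = [triangular_sample(0.0, 4.0, 1.0) for _ in range(50_000)]
mean = sum(draws) / len(draws)
print(round(mean, 2))  # close to (0 + 4 + 1) / 3, about 1.67
```

The sample mean of many draws converges to (a + b + c)/3, the mean of the triangle distribution.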
One key aspect that reliability engineers should consider is the difficulty of
manufacturing parts within the specified tolerances. If parts are likely to
fall at or beyond those tolerances, premature failure or poor product
performance can result.