
Computers & Industrial Engineering 171 (2022) 108471

Optimal experimental designs for clustered read-out data of reliability tests via particle swarm optimization

Kangwon Seo a,b,∗, Wonjae Lee a

a Department of Industrial and Manufacturing Systems Engineering, University of Missouri, United States of America
b Department of Statistics, University of Missouri, United States of America

ARTICLE INFO

Keywords:
Accelerated life test
Clustered read-out data
Interval censoring
Particle swarm optimization
Optimal experimental design

ABSTRACT

Clustered data from a product's reliability experiments may arise from many sources of heterogeneity in practice, such as different test unit suppliers, pieces of equipment, and operators. It is of interest for reliability practitioners to investigate how to plan such experiments with the aim of achieving the greatest test efficiency. In this paper, we develop optimal experimental designs of accelerated life tests under the interval censoring scheme when the obtained read-out observations are expected to be correlated, and thereby clustered, induced by the effects of two suppliers. The optimality condition is sought such that the design produces the minimum prediction variance at the product's use condition. The correlated read-out data are modeled by a binomial generalized linear mixed model, and the Fisher information matrix is derived based on marginalized quantities in the covariance matrix. Particle swarm optimization is applied to search the design space and find the optimal solution with numerical stability under the complex nonlinear objective function. The optimal design shows balanced test unit allocations over the two suppliers for each test condition. However, it is also found that the degree of such balance can vary depending on the location of the test condition, which allows flexible test unit allocations for some test conditions when the numbers of test units from the suppliers are imbalanced.

∗ Correspondence to: E3437M Thomas & Nell Lafferre Hall, Columbia, MO 65211, United States of America.
E-mail addresses: seoka@missouri.edu (K. Seo), wlee@mail.missouri.edu (W. Lee).
https://doi.org/10.1016/j.cie.2022.108471
Received 24 February 2022; Received in revised form 11 July 2022; Accepted 12 July 2022; Available online 18 July 2022
0360-8352/© 2022 The Author(s). Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

1. Introduction

The interval censoring scheme is commonly used for products' reliability tests. It is particularly useful when continuous in situ monitoring of test units' failures is infeasible. Under the interval censoring scheme, a test is started with 𝑛 test units and failures are inspected only at some pre-determined time points 𝑡1, 𝑡2, …, 𝑡𝑘. Thus the exact failure times are never known; only the number of failures occurring in each interval is observed. Data collected through such a test are called read-out or grouped data (Tobias & Trindade, 2011).

An example of interval-censored reliability test data can be found in Malevich et al. (2021), where the performance of diamond impregnated drilling tools is assessed. In that test, inspections of the diamonds' wear-out are not possible while the tools are operating; broken diamonds can only be detected by analyzing the surface of the tool with a microscope at inspection times. Another example can be found in Sun et al. (2020), where the reliability of a servo turret, a sub-system of the lathe used in numerical control machine tools, is evaluated based on interval-censored data and exact failure data. For more examples of reliability tests with interval censoring, see Lawless (2011), Meeker and Escobar (2014), Nelson (2009), and Tobias and Trindade (2011).

Reliability tests can be expedited by setting some environmental conditions above the usual stress levels. Such experiments are called accelerated life tests (ALTs), which are widely adopted in manufacturing industries to collect a product's lifetime data in a relatively short time period. Once data are collected, the relationship between the stress conditions and the product lifetime distribution is established by a proper regression model, and the parameters of the model are estimated by, e.g., the method of maximum likelihood. The lifetime of the product at a usual stress condition is predicted by extrapolating the test results obtained at high stress levels to a normal usage stress level using the estimated regression model. It is common to use a single accelerating factor, such as temperature, or two factors at the same time, such as temperature and humidity. While the model described in this paper is flexible enough to accommodate fewer or more factors, we consider an ALT with two stress factors for the sake of illustration.

A poorly designed ALT is likely to waste a great deal of time and cost; to take full advantage of the test outcomes, an ALT must be planned prudently with specific objectives to achieve. In this paper, we aim to develop reliability tests under interval censoring that are optimally designed with respect to criteria measuring the statistical efficiency of expected test outcomes. Specifically, a test plan

with the minimum prediction variance at the usual stress condition of a product, called the 𝑈𝑐-optimal design by Yang and Pan (2013), is sought; this is a popularly used design criterion for reliability tests.

We are particularly interested in clustered test data that might be induced by heterogeneous sources of test units, operators, or test apparatus. Failure time data obtained under such test protocols may be correlated with each other, which should be taken into account in the experimental design before the test as well as in the statistical inference on the failure time distribution after the test. This paper describes the case where the test units put into a test are provided by two different suppliers. In such cases, read-out data collected from test units provided by the same supplier are correlated through a shared supplier effect, which is modeled in this article by the class of generalized linear mixed models (GLMMs) with a binomial response (McCulloch & Searle, 2001; Stroup, 2012).

The main quantities of interest to be optimized are the combinations of stress levels and the number of test units allocated to each stress condition. We are also interested in deciding how many test units, at each stress condition, should be assigned to each supplier. Intuitively, it seems ideal for the two suppliers to provide the same number of test units, but it turns out that this restriction can be relaxed depending on the stress condition.

The associated optimization problem is challenging. The decision variables include continuous quantities (stress conditions) and discrete quantities (numbers of test units), with constraints that make a solution a feasible test plan. The objective function is evaluated based on marginalized quantities that are approximated by a sampling method, and thus it is a stochastic quantity. In addition, numerical issues are likely to occur because of the matrix inversion operation. Based on our experience, traditional optimization routines are neither stable nor fast for this problem. Instead, particle swarm optimization (PSO) (Kennedy & Eberhart, 1995), a heuristic search algorithm, is utilized; it provides a numerically stable and relatively fast search and accommodates more general objective functions in finding the optimal design.

The major contributions of this study are summarized as follows.

• We develop a framework to create an interval-censored ALT plan with the minimum prediction variance at the product use condition when the observed failure time data are clustered.
• Clustered read-out data are modeled by a binomial GLMM, and the corresponding information matrix is derived by the quasi-likelihood approach.
• We demonstrate an effective PSO-based optimization process for an optimal experimental design with correlated non-normal data.

The remainder of the paper is organized as follows. In Section 2, related previous research is reviewed. In Section 3, the clustered read-out data are modeled by a binomial GLMM, and the optimization problem is defined. In Section 4, PSO is introduced and applied to our problem. Section 5 illustrates numerical examples with simulation and sensitivity studies. Lastly, Section 6 provides a conclusion.

2. Literature review

Optimal experimental designs for ALTs have been studied by many researchers. In Kim and Sung (2021), optimal ALT plans are developed under the assumptions of cyclic-stress loading, Type-I censoring, and a lognormal lifetime distribution. Seo and Pan (2018, 2022) developed optimal ALT plans for correlated failure time observations caused by multiple test chambers and suppliers under right-censoring test schemes. Zhao et al. (2020) proposed a new optimality criterion that minimizes the asymptotic variance of the predicted reliability evaluated at the mission ending time for accelerated reliability testing.

For interval censoring schemes, Wu et al. (2021) investigated the reliability sampling design, such as the minimum number of inspection intervals and sample sizes, based on the testing procedure for the lifetime performance index. Wu and Huang (2019) investigated inference and design problems related to multiple-stress accelerated life tests with progressive type-I interval censoring, where the optimal number of test units, number of inspections, and length of the inspection interval are determined under several optimality criteria with a cost constraint. Wu and Huang (2014) studied the optimal inspection interval length under progressive type-I interval censoring. They suggested two selection criteria: minimizing the asymptotic variance of the mean lifetime, and minimizing the determinant of the covariance matrix of the model parameter estimates. For more problems of determining optimal inspection times for interval censoring, see, e.g., Malevich and Müller (2019) and the references therein.

Lee and Pan (2012) and Yang and Pan (2013) developed the optimal design and statistical inference, respectively, of ALTs with interval censoring. However, their models are based on the assumption that all failure time data are independent of each other, and they may not be suitable for clustered data. Tse et al. (2008) investigated optimal and practical ALT plans under progressive Type-I interval censoring with random removals to obtain the smallest asymptotic variance of the estimated 𝑞th quantile.

Multiple methods have been proposed for the analysis of correlated failure time observations. León et al. (2007) show how to use Bayesian methods to make inferences from an ALT where the test units come from different groups, such as batches, and the group effect is random and significant both statistically and practically. Kensler et al. (2015) propose a nonlinear mixed model analysis for reliability experiments with random blocks and subsampling. In Seo and Pan (2016, 2017), correlated failure time observations obtained from a constant-stress ALT under the right censoring scheme and from a step-stress ALT, respectively, were modeled by Poisson GLMMs.

PSO has been successful in a wide range of optimization tasks, including optimal experimental design. In Hakamipour and Rezaei (2015), due to the nonlinearity and complexity of the objective function, PSO is applied to provide the optimal test plan for a step-stress ALT. Lukemire et al. (2020) suggested an effective and flexible PSO algorithm to find locally 𝐷-optimal designs for experiments with ordinal outcomes modeled by the cumulative logit link. Qiu et al. (2014) applied PSO in the biological sciences to find various types of optimal designs and compared its performance with the differential evolution algorithm. Qiu (2014) developed an ultra-dimensional PSO to find 𝐷-optimal designs for multi-variable exponential and Poisson regression models with several variables and all pairwise interactions. PSO has also been used by Lukemire et al. (2016) to search for locally 𝐷-optimal designs for generalized linear models with discrete and continuous factors and a binary outcome. They demonstrated two real applications identifying experimental designs with mixed factors, showing that the 𝐷-efficiencies of the designs found by PSO were better than those of the implemented designs. Shi et al. (2019) look for Bayesian optimal designs for exponential models using PSO, which is useful in HIV studies, and re-design a car-refueling study for a logistic model with ten factors and interaction terms. In addition, Chen et al. (2011) suggest variants of PSO for optimal experimental designs in linear and nonlinear models, showing that PSO techniques work well for a wide range of optimal experimental designs. Wong et al. (2015) propose a projection-based PSO method to efficiently find different types of optimal designs for mixture models with and without constraints on the components. Chen et al. (2015a) pursue minimax optimal designs and argue that PSO methods can easily generate a variety of minimax optimal designs.

3. Modeling

In this section, clustered read-out data from a reliability test with interval censoring are modeled. Test units are assumed to be provided by multiple heterogeneous suppliers, which is the source of data clustering. Suppose the following information is given for the test.

• The total number of test units is 𝑛, of which 𝑛1 are provided by supplier-1 and 𝑛2 by supplier-2. Suppose that supplier-to-supplier heterogeneity exists and that the observations on test units offered by the same supplier are correlated.
• All test units enter the test at time 0 and, during the test, are inspected at times 𝑡1 < 𝑡2 < ⋯ < 𝑡𝑘. Let 𝑡0 = 0 and 𝑡𝑘+1 = ∞. If a test unit has failed in the 𝑗th interval, then its failure time satisfies 𝑡𝑗−1 ≤ 𝑇 < 𝑡𝑗, 𝑗 = 1, 2, …, 𝑘 + 1.
• Two stress factors, temperature and relative humidity, are used to accelerate products' lives. To measure the effects of each stress factor and a possible supplier effect, the test is run at four different combinations of stress conditions.
• We want to determine the optimal design that leads to the minimum prediction variance at the product's use condition, which corresponds to the 𝑈𝑐-optimal design (Yang & Pan, 2013).

The test data obtained from the above protocol can be modeled by a binomial GLMM. Let 𝑍𝑠 ∼ 𝑁(0, 𝜎𝑧²) denote the supplier effect on the failure time data of test units provided by supplier-𝑠. For now, suppose the supplier effect is given as 𝑍𝑠 = 𝑧𝑠, 𝑠 = 1, 2. For test units 𝑖 = 1, …, 𝑛𝑠 from supplier-𝑠, let 𝑝𝑠𝑖𝑗 be the probability that the 𝑖th test unit, provided by supplier 𝑠, fails in the 𝑗th interval,

𝑝𝑠𝑖𝑗 = 𝑃(𝑡𝑗−1 ≤ 𝑇𝑖 < 𝑡𝑗), 𝑗 = 1, 2, …, 𝑘 + 1. (1)

In what follows, for the sake of notational simplicity, we omit the subscript 𝑠 from 𝑝𝑠𝑖𝑗 and other quantities. Let 𝜋𝑖𝑗 be the conditional probability that the 𝑖th test unit fails in the 𝑗th interval given that it survived to the last inspection time 𝑡𝑗−1,

𝜋𝑖𝑗 = 𝑃(𝑇𝑖 < 𝑡𝑗 | 𝑇𝑖 ≥ 𝑡𝑗−1) = 𝑃(𝑡𝑗−1 ≤ 𝑇𝑖 < 𝑡𝑗) ∕ 𝑃(𝑇𝑖 ≥ 𝑡𝑗−1) = 𝑝𝑖𝑗 ∕ 𝑅𝑖(𝑡𝑗−1), (2)

where 𝑅(⋅) is the reliability function. For the failure of the 𝑖th unit to occur in the 𝑗th interval, the unit must survive the 1st, 2nd, …, (𝑗 − 1)th intervals and then fail in the 𝑗th interval; thus 𝑝𝑖𝑗 = (1 − 𝜋𝑖1)(1 − 𝜋𝑖2)⋯(1 − 𝜋𝑖,𝑗−1)𝜋𝑖𝑗.

Let 𝑟𝑖𝑗 be an indicator of whether the 𝑖th test unit fails in the 𝑗th interval, and 𝑠𝑖𝑗 an indicator of whether the 𝑖th test unit survives until 𝑡𝑗. That is,

𝑟𝑖𝑗 = 1 if 𝑡𝑗−1 ≤ 𝑇𝑖 < 𝑡𝑗, and 0 otherwise, (3)
𝑠𝑖𝑗 = 1 if 𝑇𝑖 ≥ 𝑡𝑗, and 0 otherwise. (4)

Then 𝑠𝑖𝑗 = 𝑟𝑖,𝑗+1 + 𝑟𝑖,𝑗+2 + ⋯ + 𝑟𝑖,𝑘+1 for 𝑗 = 1, 2, …, 𝑘, and 𝑠𝑖𝑗 = 0 for 𝑗 = 𝑘 + 1. Fig. 1 illustrates the 𝑟𝑖𝑗's and 𝑠𝑖𝑗's when 𝑘 = 5 and the test unit failure is observed at 𝑡3.

The joint probability distribution of 𝐫𝑖 = (𝑟𝑖1, …, 𝑟𝑖𝑘) is 𝑀𝑢𝑙𝑡𝑖𝑛𝑜𝑚𝑖𝑎𝑙(1, 𝑝𝑖1, …, 𝑝𝑖𝑘). Therefore the likelihood of the 𝑖th test unit failing in the 𝑗th interval is

∏_{𝑗=1}^{𝑘+1} 𝑝𝑖𝑗^{𝑟𝑖𝑗} = ∏_{𝑗=1}^{𝑘+1} {(1 − 𝜋𝑖1)(1 − 𝜋𝑖2)⋯(1 − 𝜋𝑖,𝑗−1)𝜋𝑖𝑗}^{𝑟𝑖𝑗} = ∏_{𝑗=1}^{𝑘+1} 𝜋𝑖𝑗^{𝑟𝑖𝑗}(1 − 𝜋𝑖𝑗)^{𝑠𝑖𝑗}. (5)

For instance, the likelihood of the test unit in the example of Fig. 1 is 𝜋𝑖1^0(1 − 𝜋𝑖1)^1 × 𝜋𝑖2^0(1 − 𝜋𝑖2)^1 × 𝜋𝑖3^1(1 − 𝜋𝑖3)^0 × 𝜋𝑖4^0(1 − 𝜋𝑖4)^0 × 𝜋𝑖5^0(1 − 𝜋𝑖5)^0 × 𝜋𝑖6^0(1 − 𝜋𝑖6)^0 = (1 − 𝜋𝑖1)(1 − 𝜋𝑖2)𝜋𝑖3, which can be interpreted as ‘‘survives the first interval, survives the second interval, and fails in the third interval’’.

Proposition 1. The (𝑘 + 1)th factor in the last expression of Eq. (5) is void. That is, ∏_{𝑗=1}^{𝑘+1} 𝜋𝑖𝑗^{𝑟𝑖𝑗}(1 − 𝜋𝑖𝑗)^{𝑠𝑖𝑗} = ∏_{𝑗=1}^{𝑘} 𝜋𝑖𝑗^{𝑟𝑖𝑗}(1 − 𝜋𝑖𝑗)^{𝑠𝑖𝑗}.

The proposition follows from two cases. First, when 𝑇𝑖 < 𝑡𝑘, so that one of 𝑟𝑖𝑗, 𝑗 = 1, …, 𝑘, equals 1, we have 𝑟𝑖,𝑘+1 = 𝑠𝑖,𝑘+1 = 0, and therefore the (𝑘 + 1)th factor contributes nothing to the likelihood. The example illustrated in Fig. 1 corresponds to this case. Second, when 𝑇𝑖 ≥ 𝑡𝑘, that is, all 𝑟𝑖𝑗, 𝑗 = 1, …, 𝑘, are 0 and all 𝑠𝑖𝑗, 𝑗 = 1, …, 𝑘, are 1, Eq. (5) becomes (1 − 𝜋𝑖1)(1 − 𝜋𝑖2)⋯(1 − 𝜋𝑖𝑘)𝜋𝑖,𝑘+1, where 𝜋𝑖,𝑘+1 = 𝑃(𝑇𝑖 < ∞ | 𝑇𝑖 ≥ 𝑡𝑘) = 1. Therefore, in both cases the (𝑘 + 1)th factor can be omitted.

The likelihood of all the test units provided by supplier-𝑠 is therefore

∏_{𝑖=1}^{𝑛𝑠} ∏_{𝑗=1}^{𝑘} 𝜋𝑖𝑗^{𝑟𝑖𝑗}(1 − 𝜋𝑖𝑗)^{𝑠𝑖𝑗}. (6)

This quantity is identical, up to a constant factor, to the likelihood of 𝑛𝑠𝑘 binomial random variables, say 𝑌𝑖𝑗, with 𝑟𝑖𝑗 + 𝑠𝑖𝑗 trials, 𝑟𝑖𝑗 observed successes, and success probability 𝜋𝑖𝑗. That is, given the data (𝑟𝑖𝑗, 𝑠𝑖𝑗) and supplier effect 𝑧𝑠, the conditional distribution of 𝑌𝑖𝑗 is

𝑌𝑖𝑗 | 𝑟𝑖𝑗 + 𝑠𝑖𝑗, 𝑧𝑠 ∼ 𝐵𝑖𝑛𝑜𝑚𝑖𝑎𝑙(𝑟𝑖𝑗 + 𝑠𝑖𝑗, 𝜋𝑖𝑗), (7)

which we call the pseudo response variable. The observed data are 𝑌𝑖𝑗 = 𝑟𝑖𝑗 for 𝑖 = 1, …, 𝑛𝑠 and 𝑗 = 1, …, 𝑘. In addition, we observe from, e.g., Fig. 1 that 𝑟𝑖𝑗 + 𝑠𝑖𝑗 = 1 indicates 𝑇𝑖 ≥ 𝑡𝑗−1. Therefore 𝑟𝑖𝑗 + 𝑠𝑖𝑗 ∼ 𝐵𝑒𝑟𝑛𝑜𝑢𝑙𝑙𝑖(𝑅𝑖(𝑡𝑗−1)), and thus

E(𝑟𝑖𝑗 + 𝑠𝑖𝑗) = 𝑅𝑖(𝑡𝑗−1). (8)

Regression analysis of lifetimes involves specifying the distribution of a lifetime 𝑇 given a vector of covariates 𝐱 as well as the supplier effect 𝑧𝑠. In this research, the proportional hazards (PH) model is applied to the correlated ALT data. The hazard function, given 𝐱 and 𝑧𝑠, is of the form ℎ(𝑡 | 𝐱, 𝑧𝑠) = ℎ0(𝑡) exp(𝜷′𝐱 + 𝑧𝑠), where ℎ0(𝑡) is the baseline hazard function; it is the hazard function for an individual whose covariate vector 𝐱 and supplier effect 𝑧𝑠 are such that exp(𝜷′𝐱 + 𝑧𝑠) = 1, i.e., 𝐱 = 𝟎 and 𝑧𝑠 = 0. Note that no intercept term is included in 𝜷′𝐱, because it is subsumed in ℎ0(𝑡). Both 𝜷 and 𝐱 are 𝑝 × 1 vectors. Accordingly, 𝑅(𝑡 | 𝐱, 𝑧𝑠) = 𝑅0(𝑡)^{exp(𝜷′𝐱 + 𝑧𝑠)}, where 𝑅0(𝑡) is the baseline reliability function. Then we obtain

1 − 𝜋𝑖𝑗 = 𝑃(𝑇𝑖 ≥ 𝑡𝑗 | 𝑇𝑖 ≥ 𝑡𝑗−1) = 𝑅𝑖(𝑡𝑗) ∕ 𝑅𝑖(𝑡𝑗−1) = [𝑅0(𝑡𝑗) ∕ 𝑅0(𝑡𝑗−1)]^{exp(𝜷′𝐱 + 𝑧𝑠)}, (9)

which results in

log{− log(1 − 𝜋𝑖𝑗)} = 𝜷′𝐱 + 𝑧𝑠 + log{log[𝑅0(𝑡𝑗−1) ∕ 𝑅0(𝑡𝑗)]}, (10)

where the last term on the right-hand side does not involve the stress factors and depends only on the interval 𝑗. Eq. (10) is the complementary log–log link function of the binomial response 𝑌𝑖𝑗 | 𝑟𝑖𝑗 + 𝑠𝑖𝑗, 𝑧𝑠.

3.1. Weibull failure time distribution

The Weibull distribution is popularly used for lifetime data, for both empirical and theoretical reasons. First, the model is fairly flexible and has been found to provide a good description of many types of lifetime data, especially for manufactured items. The Weibull hazard can be monotone increasing, decreasing, or constant, depending on the shape parameter. Second, the Weibull distribution is one of the three limiting extreme-value distributions. For example, the strength of a long chain (or the failure time of a system in series) is equal to that

Fig. 1. An illustration of indicator variables with 𝑘 = 5 and the test unit failure occurred in the 3rd interval.

of the weakest link or component, and the limiting distribution of the minimum is a Weibull distribution in many cases (Lawless, 2011).

There are multiple ways to parameterize the pdf of the Weibull distribution in the literature. One is

𝑓(𝑡) = 𝜆𝛼𝑡^{𝛼−1}𝑒^{−𝜆𝑡^𝛼} = ℎ(𝑡)𝑅(𝑡), 𝑡 > 0, (11)

where 𝛼 is the shape parameter and 𝜆 is the scale parameter or the intrinsic failure rate; it is the failure rate of 𝑇^𝛼 given 𝛼, which is exponentially distributed. Here ℎ(𝑡) = 𝜆𝛼𝑡^{𝛼−1} is the hazard function and 𝑅(𝑡) = 𝑒^{−𝜆𝑡^𝛼} is the reliability function.

The log of a Weibull random variable follows the extreme value distribution with log 𝜆 as its location parameter. Therefore, for the Weibull distribution, the log of the scale parameter is often modeled as a linear function of covariates. That is,

log 𝜆𝑖𝑠 = 𝛽0 + 𝜷′𝐱𝑖 + 𝑧𝑠. (12)

Then we obtain 𝑅(𝑡 | 𝐱𝑖, 𝑧𝑠) = 𝑒^{−𝑡^𝛼 exp(𝛽0 + 𝜷′𝐱𝑖 + 𝑧𝑠)} and 𝑅0(𝑡) = 𝑒^{−𝑡^𝛼 exp(𝛽0)}. Therefore, we have

𝑟𝑖𝑗 + 𝑠𝑖𝑗 | 𝑧𝑠 ∼ 𝐵𝑒𝑟𝑛𝑜𝑢𝑙𝑙𝑖(𝑒^{−𝑡_{𝑗−1}^𝛼 exp(𝛽0 + 𝜷′𝐱𝑖 + 𝑧𝑠)}), (13)

and the link function becomes

log{− log(1 − 𝜋𝑖𝑗)} = 𝛽0 + 𝜷′𝐱𝑖 + 𝑧𝑠 + log(𝑡𝑗^𝛼 − 𝑡_{𝑗−1}^𝛼). (14)

Accordingly, the inverse link function is

𝜋𝑖𝑗 = 1 − 𝑒^{−(𝑡𝑗^𝛼 − 𝑡_{𝑗−1}^𝛼) exp(𝛽0 + 𝜷′𝐱𝑖 + 𝑧𝑠)}. (15)

Given read-out data, fitting the binomial GLMM with the link function of Eq. (14) produces not only the estimates of the model parameters, i.e., the regression coefficients (𝛽̂0, 𝜷̂) and the variance component 𝜎̂𝑧², but also the predicted values of the supplier effects 𝑧̂𝑠. Using these values, one can evaluate the mean time to failure (MTTF) of a test unit from each supplier.

3.2. Marginalization

In Eq. (7), the data and supplier effects are assumed to be given, and the binomial GLMM is established based on the conditional distribution of 𝑌𝑖𝑗. In the planning phase, however, these quantities are not available; therefore, the marginalized response variable is sought. The marginal distribution of a response variable with random effects is usually unknown. In this paper, therefore, the information matrix is constructed by the quasi-likelihood approach (McCullagh, 1983; Wedderburn, 1974), which is based on marginalized quantities of the conditional mean, variance, and covariance.

The marginalized mean of 𝑌𝑖𝑗 is computed by using the conditional expectation as follows:

E[𝑌𝑖𝑗] = E[E[𝑌𝑖𝑗 | 𝑧𝑠]] = E[E[E[𝑌𝑖𝑗 | 𝑟𝑖𝑗 + 𝑠𝑖𝑗, 𝑧𝑠]]] = 𝑓̃(𝑡_{𝑗−1}^𝛼 exp(𝛽0 + 𝜷′𝐱𝑖)) − 𝑓̃(𝑡𝑗^𝛼 exp(𝛽0 + 𝜷′𝐱𝑖)), (16)

where 𝑓̃(𝑠) is the Laplace transform of the function 𝑓(𝑢), the pdf of 𝐿𝑜𝑔𝑛𝑜𝑟𝑚𝑎𝑙(0, 𝜎𝑍²) in this article, evaluated at 𝑠. That is,

𝑓̃(𝑠) = ∫_0^∞ 𝑒^{−𝑠𝑢} 𝑓(𝑢) 𝑑𝑢, (17)

where

𝑓(𝑢) = 1 ∕ (𝑢𝜎𝑍√(2𝜋)) exp{−(1∕2)(log 𝑢 ∕ 𝜎𝑍)²}, 𝑢 > 0. (18)

Closed-form expressions do not exist for 𝑓̃(𝑠), and a wide variety of methods, both analytical and numerical, have been employed to provide approximations (Asmussen et al., 2016; Rossberg, 2008). In this paper, we use Monte Carlo sampling to compute 𝑓̃(𝑠). That is, 𝑓̃(𝑠) is computed as the average of 𝑒^{−𝑠𝑢_𝑚}, 𝑚 = 1, …, 𝐿, where the 𝑢_𝑚's are an i.i.d. sample of size 𝐿 drawn from the distribution 𝑓(𝑢). The derivation of Eq. (16) is provided in the Appendix.

Let 𝜇* denote the Monte Carlo estimate of Eq. (16). Then the marginalized variance of 𝑌𝑖𝑗 is obtained as

var(𝑌𝑖𝑗) ≈ 𝜇*_{𝑖𝑗}(1 − 𝜇*_{𝑖𝑗}). (19)

Likewise, the marginal covariance of two observations included in the same cluster 𝑠 is given as

cov(𝑌𝑖𝑗, 𝑌𝑖′𝑗′) = 𝑓̃(𝑡_{𝑗−1}^𝛼 exp(𝛽0 + 𝜷′𝐱𝑖) + 𝑡_{𝑗′−1}^𝛼 exp(𝛽0 + 𝜷′𝐱𝑖′)) − 𝑓̃(𝑡_{𝑗−1}^𝛼 exp(𝛽0 + 𝜷′𝐱𝑖)) 𝑓̃(𝑡_{𝑗′−1}^𝛼 exp(𝛽0 + 𝜷′𝐱𝑖′)), (20)

and the marginal covariance of observations from different clusters is 0. Eqs. (19) and (20) are derived in the Appendix.

The quasi-score equation to be solved to find the parameter estimates is

𝐃𝐕^{−1}(𝐲 − 𝝁) = 𝟎, (21)

where 𝐲 is the 𝑛𝑘 × 1 vector of 𝑌𝑖𝑗 | 𝑟𝑖𝑗 + 𝑠𝑖𝑗, 𝑧𝑠 ∼ 𝐵𝑖𝑛𝑜𝑚𝑖𝑎𝑙(𝑟𝑖𝑗 + 𝑠𝑖𝑗, 𝜋𝑖𝑗) with 𝑖 = 1, …, 𝑛, 𝑗 = 1, …, 𝑘, from suppliers 𝑠 = 1, 2, 𝝁 is the vector of

marginalized means, 𝐕 is the 𝑛𝑘 × 𝑛𝑘 covariance matrix, and 𝐃 is the 𝑛𝑘 × (𝑝 + 1) matrix of derivatives given as

𝐃 = ∂𝝁∕∂𝜷 = (∂𝝁∕∂𝜼)(∂𝜼∕∂𝜷) = 𝜟𝐗, (22)

where 𝜼 is the vector of 𝜂𝑖𝑗 = 𝛽0 + 𝜷′𝐱𝑖 + 𝑧𝑠 + log(𝑡𝑗^𝛼 − 𝑡_{𝑗−1}^𝛼), and thus 𝜟 is the 𝑛𝑘 × 𝑛𝑘 diagonal matrix with elements

∂𝜇𝑖𝑗∕∂𝜂𝑖𝑗 = ∂{(𝑟𝑖𝑗 + 𝑠𝑖𝑗)𝜋𝑖𝑗}∕∂𝜂𝑖𝑗 = ∂[(𝑟𝑖𝑗 + 𝑠𝑖𝑗){1 − 𝑒^{−exp(𝜂𝑖𝑗)}}]∕∂𝜂𝑖𝑗 = (𝑟𝑖𝑗 + 𝑠𝑖𝑗) 𝑒^{𝜂𝑖𝑗} ∕ 𝑒^{exp(𝜂𝑖𝑗)} = (𝑟𝑖𝑗 + 𝑠𝑖𝑗)(𝑡𝑗^𝛼 − 𝑡_{𝑗−1}^𝛼) exp(𝛽0 + 𝜷′𝐱𝑖 + 𝑧𝑠) ∕ 𝑒^{(𝑡𝑗^𝛼 − 𝑡_{𝑗−1}^𝛼) exp(𝛽0 + 𝜷′𝐱𝑖 + 𝑧𝑠)}. (23)

The marginalization of the above quantity is given as

E[(𝑟𝑖𝑗 + 𝑠𝑖𝑗)(𝑡𝑗^𝛼 − 𝑡_{𝑗−1}^𝛼) exp(𝛽0 + 𝜷′𝐱𝑖 + 𝑧𝑠) ∕ 𝑒^{(𝑡𝑗^𝛼 − 𝑡_{𝑗−1}^𝛼) exp(𝛽0 + 𝜷′𝐱𝑖 + 𝑧𝑠)}] = (𝑡𝑗^𝛼 − 𝑡_{𝑗−1}^𝛼) 𝑒^{𝛽0 + 𝜷′𝐱𝑖} ∫_0^∞ 𝑢 𝑒^{−𝑡𝑗^𝛼 exp(𝛽0 + 𝜷′𝐱𝑖)𝑢} 𝑓(𝑢) 𝑑𝑢. (24)

This quantity is also approximated by Monte Carlo sampling and becomes 𝛿𝑖𝑗, the 𝑖𝑗th diagonal element of 𝜟. The detailed derivation of Eq. (24) is provided in the Appendix.

3.3. Optimality criterion

The design matrix corresponding to the pseudo response 𝑌𝑖𝑗 is constructed as

𝐗 = [𝐗1; 𝐗2], where 𝐗1 stacks the rows 𝐱1′, 𝐱2′, 𝐱3′, 𝐱4′ repeated 𝑛11𝑘, 𝑛21𝑘, 𝑛31𝑘, 𝑛41𝑘 times, respectively, and 𝐗2 stacks the same rows repeated 𝑛12𝑘, 𝑛22𝑘, 𝑛32𝑘, 𝑛42𝑘 times, (25)

where 𝐗𝑠, 𝑠 = 1, 2, is the 𝑛𝑠𝑘 × (𝑝 + 1) design matrix corresponding to supplier 𝑠, 𝑛𝓁𝑠, 𝓁 = 1, 2, 3, 4, 𝑠 = 1, 2, is the number of test units allocated to the 𝓁th stress combination provided by supplier 𝑠, 𝟏_{𝑛𝓁𝑠𝑘} is the vector of 1's repeated 𝑛𝓁𝑠𝑘 times, and 𝐱𝓁, 𝓁 = 1, 2, 3, 4, is the (𝑝 + 1) × 1 design point expanded by the model form for each stress combination. Accordingly, the variance–covariance matrix is the block-diagonal matrix

𝐕 = blockdiag(𝐕1, 𝐕2), (26)

where 𝐕𝑠, 𝑠 = 1, 2, is the 𝑛𝑠𝑘 × 𝑛𝑠𝑘 covariance matrix of the 𝑌𝑖𝑗 of supplier 𝑠, with the marginalized variances as diagonal elements and the marginalized covariances as off-diagonal elements. In addition, the matrix 𝜟 in Eq. (22) is

𝜟 = blockdiag(𝜟1, 𝜟2), (27)

where 𝜟𝑠, 𝑠 = 1, 2, is the 𝑛𝑠𝑘 × 𝑛𝑠𝑘 diagonal matrix with elements 𝛿𝑖𝑗 from Eq. (24).

The information matrix is given as

ℐ = 𝐗′𝜟𝐕^{−1}𝜟𝐗 (28)
  = 𝐗1′𝜟1𝐕1^{−1}𝜟1𝐗1 + 𝐗2′𝜟2𝐕2^{−1}𝜟2𝐗2 (29)
  = ℐ1 + ℐ2, (30)

where ℐ𝑠, 𝑠 = 1, 2, denotes the information matrix contributed by the observations from the 𝑠th supplier. The 𝑈𝑐-optimal design is obtained by minimizing the prediction variance at the product's use condition 𝐱𝑢𝑠𝑒. That is,

min_𝜉 𝐱𝑢𝑠𝑒′(𝐗′𝜟𝐕^{−1}𝜟𝐗)^{−1}𝐱𝑢𝑠𝑒, (31)

where 𝜉 is the vector of decision variables that determines 𝐗, 𝜟, and 𝐕. The aggregated design matrix is defined as

𝜓 = [𝑥11 𝑥12 𝑝1; 𝑥21 𝑥22 𝑝2; 𝑥31 𝑥32 𝑝3; 𝑥41 𝑥42 𝑝4; 𝑥11 𝑥12 𝑝5; 𝑥21 𝑥22 𝑝6; 𝑥31 𝑥32 𝑝7; 𝑥41 𝑥42 𝑝8], (32)

where 𝑥𝓁1 and 𝑥𝓁2, 𝓁 = 1, 2, 3, 4, are the 𝓁th combination of stress conditions of temperature and relative humidity, respectively, expressed in appropriately transformed and coded variables, 𝑝1, …, 𝑝4 are the proportions of test units from supplier-1, and 𝑝5, …, 𝑝8 are those from supplier-2, assigned to each stress condition. Because the design points (stress conditions) are the same for the test units from the two suppliers, only the first four rows of stress conditions, 𝑥11, 𝑥12, 𝑥21, 𝑥22, 𝑥31, 𝑥32, 𝑥41, 𝑥42, are included in the set of decision variables. Regarding the proportions of test units (the last column of 𝜓), the first seven values, 𝑝1, …, 𝑝7, are treated as decision variables, with the constraint ∑_{𝑖=1}^{7} 𝑝𝑖 < 1, as they determine the last value 𝑝8 = 1 − ∑_{𝑖=1}^{7} 𝑝𝑖. As a result, we have a 15-dimensional vector 𝜉 to generate the design:

𝜉 = (𝑥11, 𝑥12, 𝑥21, 𝑥22, 𝑥31, 𝑥32, 𝑥41, 𝑥42, 𝑝1, 𝑝2, 𝑝3, 𝑝4, 𝑝5, 𝑝6, 𝑝7). (33)

Given the aggregated design 𝜓, the expanded design matrix 𝐗 in Eq. (25) is generated. First, the numbers of test units 𝑛11, 𝑛12, …, 𝑛42 are obtained as the closest integers to 𝑛𝑝1, 𝑛𝑝2, …, 𝑛𝑝8, respectively. Then, each stress condition (𝑥𝓁1, 𝑥𝓁2) is expanded by the model form of the linear predictor. For example, (1, 𝑥𝓁1, 𝑥𝓁2, 𝑥𝓁1𝑥𝓁2) is the expanded design point with the intercept, two main effects, and the interaction effect. Lastly, each design point is replicated 𝑛𝓁𝑠𝑘 times according to its stress condition and supplier membership.

4. Search algorithm via particle swarm optimization

Efficient optimization techniques are required to find optimal designs with complex nonlinear objective functions and constraints. Seo and Pan (2022) proposed a three-step greedy approach to find the 𝐷-optimal design of a clustered ALT under the right-censoring scheme. However, as it is customized to that specific problem, the algorithm may not be applicable to other cases with a wider range of data clustering structures. Furthermore, in our experience, traditional optimization algorithms are not numerically stable for this problem. Nature-inspired optimization algorithms may be a good alternative for dealing with such challenges caused by the highly nonlinear and stochastic nature of the problem (Yang, 2020). In this paper, we use PSO, which provides a relatively fast, numerically stable, and much more general search algorithm to find the optimal design.

PSO is a swarm intelligence-based algorithm and one of the most widely used nature-inspired algorithms. It has been highlighted by many researchers and shown to provide good performance in a wide range of applications (Banks et al., 2007). It is motivated by the social behavior of animals, such as a flock of birds or a school of fish. The swarm can locate a better solution even though it does not fully explore the search space. In a PSO system, multiple candidate solutions exist at the same time and collaborate. Each solution, called a particle, seeks the optimal position to land in the search space. During the process, a particle modifies its position based on its own experience from previous iterations and the experience of neighboring particles.
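Before describing the PSO machinery in detail, the following minimal R sketch illustrates how the Monte Carlo marginalization of Section 3.2 can be carried out. It is not the authors' implementation; it simply estimates the Laplace transform of Eqs. (17)–(18) by simulation and plugs it into Eqs. (16), (19), (20), and (24), using the planning values and inspection times of the numerical example in Section 5 purely for concreteness.

set.seed(1)
L       <- 10000                                    # Monte Carlo sample size
sigma_z <- 1                                        # sigma_z^2 = 1, Eq. (38)
u       <- rlnorm(L, meanlog = 0, sdlog = sigma_z)  # U = exp(z_s) ~ Lognormal(0, sigma_z^2), Eq. (18)
lap     <- function(s) mean(exp(-s * u))            # Monte Carlo estimate of the Laplace transform, Eq. (17)

alpha <- 1.5
tt    <- c(0, 6, 12, 18, 24, 30)                    # t_0, t_1, ..., t_k from Section 5
efix  <- function(x)                                # exp(beta_0 + beta' x), planning values of Eq. (37)
  exp(0.2 - 4.086 * x[1] - 1.476 * x[2] + 0.01 * x[1] * x[2])

# marginalized mean, variance, covariance, and Delta element for interval j (t_{j-1} = tt[j], t_j = tt[j+1])
mu_ij  <- function(x, j) lap(tt[j]^alpha * efix(x)) - lap(tt[j + 1]^alpha * efix(x))        # Eq. (16)
var_ij <- function(x, j) { m <- mu_ij(x, j); m * (1 - m) }                                  # Eq. (19)
cov_ij <- function(x, j, xp, jp)                                                            # Eq. (20), same supplier
  lap(tt[j]^alpha * efix(x) + tt[jp]^alpha * efix(xp)) -
  lap(tt[j]^alpha * efix(x)) * lap(tt[jp]^alpha * efix(xp))
delta_ij <- function(x, j)                                                                  # Eq. (24)
  (tt[j + 1]^alpha - tt[j]^alpha) * efix(x) * mean(u * exp(-tt[j + 1]^alpha * efix(x) * u))

mu_ij(c(0, 0), 1)   # e.g., marginal probability of a first-interval failure at the highest stress

Repeating these scalar evaluations over all unit–interval pairs fills the matrices 𝐕 and 𝜟 of Section 3.3; using one fixed draw of u for every evaluation mirrors the fixed random-number sequence mentioned in Section 5.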

A particle represents a vector that contains the decision variables of an optimization problem. In our case, a particle is the 15-dimensional vector 𝜉 in Eq. (33). In PSO, the new position of a particle is updated as

𝜉𝑖^{(𝑘+1)} = 𝜉𝑖^{(𝑘)} + 𝑉𝑖^{(𝑘+1)}, (34)

where the superscript (𝑘) denotes the iteration index, 𝜉𝑖 is the 𝑖th particle's position, and 𝑉𝑖 is the velocity of the 𝑖th particle. Both 𝜉𝑖 and 𝑉𝑖 are 15-dimensional vectors. 𝑉𝑖 is determined by two attractors, the individual best (𝑑𝑖) and the global best (𝑔𝑖), through

𝑉𝑖^{(𝑘+1)} = 𝑤𝑉𝑖^{(𝑘)} + 𝑐1𝑢1(𝑑𝑖^{(𝑘)} − 𝜉𝑖^{(𝑘)}) + 𝑐2𝑢2(𝑔𝑖^{(𝑘)} − 𝜉𝑖^{(𝑘)}), (35)

where 𝑑𝑖 is the best previous position of particle 𝑖 and 𝑔𝑖 is the best position among all particles in the population. The parameter 𝑤 is the inertia weight that controls the trade-off between exploration and exploitation of the search space. For 𝑤 ≥ 1, velocities increase over time, accelerating towards the maximum velocity, and the swarm diverges. For 𝑤 < 1, particles decelerate until their velocities reach zero. In our research, the value of 𝑤 is set to 1∕(2 log(2)) ≈ 0.721, the default in the pso R package. The constants 𝑐1 and 𝑐2 are acceleration coefficients, also referred to as trust parameters: 𝑐1 expresses how much confidence a particle has in itself, while 𝑐2 expresses how much confidence a particle has in its neighbors. Most applications use 𝑐1 = 𝑐2; we use the same default value 0.5 + log(2) ≈ 1.193 for both. Parameters 𝑢1 and 𝑢2 are random draws from a uniform distribution on [0, 1]. Sun et al. (2015) give qualitative recommendations for tuning these parameters and what to try when performance is poor, and Li-Ping et al. (2005) conducted extensive experiments on optimal parameter settings for the constricted PSO.

The best particle among a particle's own previous positions, or within the population, is judged by a fitness evaluation determined by the objective function, and acts as a guide for the future search direction. Iterations of PSO are carried out until a specified termination criterion is attained, e.g., a maximum number of iterations or a desired particle fitness. For a more comprehensive investigation of different PSO variants, see Örkcü et al. (2015).

The optimization problem to be solved by PSO in this paper is

minimize_𝜉  𝐱𝑢𝑠𝑒′(𝐗′𝜟𝐕^{−1}𝜟𝐗)^{−1}𝐱𝑢𝑠𝑒
subject to  0 ≤ 𝑥𝓁𝑠 ≤ 1, 𝓁 = 1, 2, 3, 4, 𝑠 = 1, 2;  0 ≤ 𝑝𝑖 ≤ 1, 𝑖 = 1, …, 7;  ∑_{𝑖=1}^{7} 𝑝𝑖 < 1, (36)

where 𝑥𝓁𝑠 ∈ ℝ, 𝓁 = 1, 2, 3, 4, 𝑠 = 1, 2, and 𝑝𝑖 ∈ ℝ, 𝑖 = 1, …, 7, are the decision variables, and 𝐗, 𝐕, and 𝜟 are determined as described in Section 3.3. The constraints in Eq. (36) can easily be enforced by assigning a large fitness value whenever a particle lies in the infeasible region.

5. Numerical examples

In this section, we illustrate the proposed method by creating the 𝑈𝑐-optimal design for an example modified from one described in Yang and Pan (2013). Suppose an ALT with an interval censoring scheme is planned for an electronic part. The following information is given.

• A total of 100 test units are used for testing, and these test units are provided by two suppliers.
• Temperature and relative humidity (RH) are used as stress factors. The use condition of this device is 30 ◦C and 25% RH. The ranges of the stress conditions are 60 ◦C to 110 ◦C for temperature and 60% to 90% for RH, so that the test results in neither too few failures, which would be expected with stress levels below these ranges, nor failure mechanisms inconsistent with those observed at normal stress, which could occur at conditions above these ranges.
• The total testing time is 30 time units, and product failures are examined at 𝑘 = 5 equal-length inspection time points, i.e., 𝑡1 = 6, 𝑡2 = 12, 𝑡3 = 18, 𝑡4 = 24, 𝑡5 = 30.
• The product's lifetime has a Weibull distribution. The shape parameter is assumed constant and known, 𝛼 = 1.5, which results in a monotone increasing hazard function, and the scale parameter is affected by the levels of the stress factors and the supplier effect as in Eq. (12). The stress variables are transformed, according to the Eyring model (Tobias & Trindade, 2011), to 𝑠1 = 11,605∕(𝑡𝑒𝑚𝑝 ◦C + 273.15) and 𝑠2 = log(𝑅𝐻). In addition, the coded variables 𝑥1 = (𝑠1 − 𝑠1^𝐻)∕(𝑠1^𝐿 − 𝑠1^𝐻), where 𝑠1^𝐻 = 11,605∕(110 + 273.15) and 𝑠1^𝐿 = 11,605∕(60 + 273.15), and 𝑥2 = (𝑠2 − 𝑠2^𝐻)∕(𝑠2^𝐿 − 𝑠2^𝐻), where 𝑠2^𝐻 = log(90) and 𝑠2^𝐿 = log(60), are applied to scale the design space of the experiment to a unit square in the first quadrant. Under this transformation, the highest and lowest stress levels of both stress variables are coded as (𝑥1^𝐻, 𝑥2^𝐻) = (0, 0) and (𝑥1^𝐿, 𝑥2^𝐿) = (1, 1), while the use condition is located at 𝐱𝑢𝑠𝑒 = (1.758, 3.159). Table 1 summarizes the variable transformation, and Fig. 2 shows the experimental region and the product's use condition in the coordinates of the coded variables.
• It is assumed that the following preliminary knowledge of the regression model form and coefficients, called planning values, is available from historical data of similar products:

log 𝜆 = 0.2 − 4.086𝑥1 − 1.476𝑥2 + 0.01𝑥1𝑥2 + 𝑧𝑠 (37)

and

𝜎𝑧² = 1. (38)

For the issue of ALT model misspecification and possible solutions, one can refer to Zhao et al. (2019) and references therein. Table 2 shows some quantities characterizing the failure time distribution under Eq. (37), evaluated at each corner point of the design region with some hypothesized values of the supplier effects, 𝑧𝑠 = −0.707, 0, and 0.707, as well as at the product's use condition. Fig. 3 shows the effects of increased stress levels of temperature and RH on 𝜆: a 10 ◦C increase of temperature in the design region increases the intrinsic failure rate by 2.26 times on average, and a 10% increase of RH in the design region increases the intrinsic failure rate by 1.63 times on average.

To implement PSO, we use the R function psoptim (Bendtsen, 2012). According to the GLMM described in Section 3, given the decision variables' values, the design matrix 𝐗 is generated, and the associated matrices 𝐕 and 𝜟 have dimension 𝑛𝑘 × 𝑛𝑘 = 500 × 500. The marginalized quantities are computed by Monte Carlo sampling with size 𝐿 = 10,000 and form the elements of 𝐕 and 𝜟. Then the objective function of Eq. (31) is evaluated. For computational efficiency, we used a fixed sequence of 𝐿 random numbers whenever the objective function was evaluated. The default parameter values provided by psoptim were used in this example. However, if the problem is expanded to a larger number of suppliers or stress conditions, it may not work as efficiently, and the parameters of PSO, especially the population size, need to be chosen carefully. For more on the issues of high dimensionality in PSO, refer to Chen et al. (2015b).

There are two cases that cause failures of this evaluation. The first occurs when the covariance matrix or the information matrix is singular. The second is when the sum of the proportions of test units exceeds 1, i.e., 𝑝1 + ⋯ + 𝑝7 ≥ 1. The fitness value is set to infinity whenever such a case occurs, to avoid failures of the optimization routine. In addition, to avoid an infeasible initial solution, we provide a random but feasible initial solution to the psoptim function.
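The following is a sketch, under stated assumptions, of how a particle 𝜉 could be mapped to a candidate plan and scored, and how the penalty-based constraint handling just described could be wired into psoptim. The function uc_prediction_variance() is a hypothetical placeholder (not part of the pso package) standing in for the criterion of Eq. (31): it would assemble 𝐗, 𝜟, and 𝐕 from the marginalized quantities sketched earlier and return 𝐱𝑢𝑠𝑒′ℐ^{−1}𝐱𝑢𝑠𝑒.

library(pso)                                     # provides psoptim (Bendtsen, 2012)
n <- 100; k <- 5

expand_design <- function(xi) {
  stress <- matrix(xi[1:8], ncol = 2, byrow = TRUE)                 # four (x1, x2) stress conditions
  p      <- c(xi[9:15], 1 - sum(xi[9:15]))                          # p8 = 1 - (p1 + ... + p7)
  alloc  <- round(n * p)                                            # n_11, ..., n_42 (closest integers)
  pts    <- cbind(1, stress[, 1], stress[, 2], stress[, 1] * stress[, 2])  # (1, x1, x2, x1*x2)
  X      <- pts[rep(rep(1:4, times = 2), times = alloc * k), , drop = FALSE]  # replicate n_ls * k rows
  list(stress = stress, alloc = alloc, X = X)
}

fitness <- function(xi) {
  if (any(xi < 0) || any(xi > 1) || sum(xi[9:15]) >= 1) return(Inf) # constraints of Eq. (36)
  v <- try(uc_prediction_variance(expand_design(xi)), silent = TRUE)  # hypothetical criterion, Eq. (31)
  if (inherits(v, "try-error") || !is.finite(v)) return(Inf)        # singular covariance/information matrix
  v
}

xi0 <- c(0, 0, 1, 0, 0, 1, 1, 1, rep(1/8, 7))                       # a feasible starting particle (four corners)
opt <- psoptim(par = xi0, fn = fitness, lower = rep(0, 15), upper = rep(1, 15),
               control = list(maxit = 200))                         # default w and c1 = c2 as in Section 4

With 𝑛𝑘 = 500 rows per evaluation, each fitness call involves a 500 × 500 covariance matrix, which is why the fixed Monte Carlo random-number sequence and the infinite-penalty handling of infeasible or singular cases matter in practice.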

Table 1
Variable transformation.

                 Original stress variables    Natural stress variables    Coded variables
                 temp (◦C)     RH (%)         s1          s2              x1         x2
Use condition    30            25             38.281      1.398           1.758      3.159
Low level        60            60             34.834      1.778           1          1
High level       110           90             30.288      1.954           0          0
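As a quick check of Table 1, the transformation can be reproduced in a few lines of R. Note that the s2 values shown in the table correspond to base-10 logarithms of RH (the coded x2 is the same for any logarithm base, since it is a ratio of differences).

temp <- c(use = 30, low = 60, high = 110)
rh   <- c(use = 25, low = 60, high = 90)
s1   <- 11605 / (temp + 273.15)                      # Eyring transformation for temperature
s2   <- log10(rh)                                    # log of relative humidity (base 10 here)
x1   <- (s1 - s1["high"]) / (s1["low"] - s1["high"]) # coded temperature variable
x2   <- (s2 - s2["high"]) / (s2["low"] - s2["high"]) # coded humidity variable
round(cbind(s1, s2, x1, x2), 3)
#          s1    s2    x1    x2
# use  38.281 1.398 1.758 3.159
# low  34.834 1.778 1.000 1.000
# high 30.288 1.954 0.000 0.000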

Table 2
Failure rates and related quantities under the linear predictor model of Eq. (37). The last three columns are the qth quantiles.

                                     x1      x2      z_s      λ a        MTTF b    F(t_k) c   t_0.01 d   t_0.1 d   t_0.632 e
Use condition                        1.758   3.159   0        9.24e−6    2050.12   0.002      105.76     506.60    2270.49
Negative supplier effect (−0.707)    0       0       −0.707   0.602      1.27      1.000      0.07       0.31      1.40
                                     0       1       −0.707   0.138      3.39      1.000      0.17       0.84      3.75
                                     1       0       −0.707   0.010      19.29     0.810      1.00       4.77      21.37
                                     1       1       −0.707   0.002      51.27     0.319      2.64       12.67     56.78
Zero supplier effect (0)             0       0       0        1.221      0.79      1.000      0.04       0.20      0.87
                                     0       1       0        0.279      2.11      1.000      0.11       0.52      2.34
                                     1       0       0        0.021      12.04     0.966      0.62       2.98      13.34
                                     1       1       0        0.005      32.00     0.541      1.65       7.91      35.44
Positive supplier effect (0.707)     0       0       0.707    2.477      0.49      1.000      0.03       0.12      0.55
                                     0       1       0.707    0.566      1.32      1.000      0.07       0.33      1.46
                                     1       0       0.707    0.042      7.52      0.999      0.39       1.86      8.32
                                     1       1       0.707    0.010      19.97     0.794      1.03       4.94      22.12

a Intrinsic failure rate, λ = exp(0.2 − 4.086x1 − 1.476x2 + 0.01x1x2 + z_s).
b Mean time to failure, E[T] = Γ(1 + 1∕α)∕λ^{1∕α}.
c Probability of a test unit having failed by the end of the test, F(t_k) = 1 − exp(−λt_k^α).
d Time until (100 × q)% of test units have failed, t_q = (−log(1 − q)∕λ)^{1∕α}.
e Characteristic life, t_0.632 = λ^{−1∕α}.
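The quantities in Table 2 follow directly from the planning values; a short R sketch reproducing the use-condition row is given below, and the other rows follow by changing x1, x2, and z_s.

alpha  <- 1.5; tk <- 30
lambda <- function(x1, x2, z) exp(0.2 - 4.086 * x1 - 1.476 * x2 + 0.01 * x1 * x2 + z)  # footnote a
mttf   <- function(lam) gamma(1 + 1 / alpha) / lam^(1 / alpha)                          # footnote b
F_tk   <- function(lam) 1 - exp(-lam * tk^alpha)                                        # footnote c
t_q    <- function(lam, q) (-log(1 - q) / lam)^(1 / alpha)                              # footnotes d, e

lam_use <- lambda(1.758, 3.159, 0)                       # about 9.24e-06
c(MTTF = mttf(lam_use), F_tk = F_tk(lam_use),
  t0.01 = t_q(lam_use, 0.01), t0.10 = t_q(lam_use, 0.10), t0.632 = lam_use^(-1 / alpha))
# about 2050, 0.002, 106, 507, and 2270, matching the first row of Table 2

Under this parameterization R(t) = exp(−λt^α), so a failure time could be drawn as rweibull(1, shape = alpha, scale = lam_use^(-1/alpha)); this is one way the simulation in Section 5.1 could generate data, with the supplier effects added to the linear predictor of Eq. (37).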

Fig. 2. The region of experimental design and use condition in coded variables.

Fig. 4 shows the 𝑈𝑐-optimal design found by running our proposed method with several different initial designs. In this plot, the squares on the left and right hand sides represent the design regions for supplier-1 and supplier-2, respectively, and the circles located at the optimal stress conditions have sizes proportional to the test unit allocations. The objective function value of this design is 6.976.

Some notable characteristics are found in this result. First, the highest possible stress condition (𝑥1, 𝑥2) = (0, 0) is not included in the set of optimal stress conditions. Instead, (𝑥1, 𝑥2) = (0.362, 0) is found to be one of the stress conditions, which results in 𝜆 = 0.278 when 𝑧𝑠 = 0 at this design point. This is one of the major differences between optimal ALT designs under the interval censoring scheme and those under the right censoring scheme, which often include the point with the highest stress level in the design region (Seo & Pan, 2018, 2022).

Second, the total numbers of test units assigned to the two suppliers are 48 and 52, which is fairly well balanced but not exactly evenly distributed. Specifically, there is more balance at the two points with low temperature, (𝑥1, 𝑥2) = (1, 0) and (𝑥1, 𝑥2) = (1, 1), than at the other two points, (𝑥1, 𝑥2) = (0.362, 0) and (𝑥1, 𝑥2) = (0, 1). This phenomenon is consistently observed in other near-optimal designs. For example, Fig. 5 shows another design found by PSO when run with different initial designs. It shows balanced test unit assignments at the two points with low temperature, at which failure rates are small, but poorly balanced assignments at the other two points, at which failure rates are large. Despite the imbalanced test unit allocations, the objective function value of this design is 6.979, only slightly larger than that of the optimal design. Based on this, it can be concluded that highly balanced test unit allocations are required at design points with low failure rates. On the other hand, the prediction variance at the use condition, i.e., the objective function value, is not sensitive to the balance between the two suppliers at design points with high failure rates. This observation may be useful in real applications where imbalanced numbers of test units are provided by the two suppliers, because it gives much flexibility in deciding the degree of balance at each design point. For instance, if the numbers of test units from the two suppliers are 34 and 66, the design shown in Fig. 5 could be a good plan, providing almost the same performance as the design that is balanced at all design points.

5.1. Simulation study for design evaluations and comparisons

In this subsection, we use simulation for a more comprehensive comparison of the 𝑈𝑐-optimal design in Fig. 4, the near-optimal design in Fig. 5, and the balanced factorial design in which 100 test units are virtually equally assigned to the corner points of the design region for each supplier, shown below:

(𝑥11, 𝑥12) = (0, 0), (39)
(𝑥21, 𝑥22) = (1, 0), (40)
(𝑥31, 𝑥32) = (0, 1), (41)
(𝑥41, 𝑥42) = (1, 1), (42)
(𝑝1, 𝑝2, 𝑝3, 𝑝4) = (0.12, 0.13, 0.12, 0.13), (43)
(𝑝5, 𝑝6, 𝑝7, 𝑝8) = (0.13, 0.12, 0.13, 0.12). (44)

Fig. 3. Effects of stress factors' increased levels on the intrinsic failure rate.

Fig. 4. 𝑈𝑐-optimal design found by PSO.

Fig. 5. Near-optimal design found by PSO, which shows imbalanced test units allocations at two points (𝑥1, 𝑥2) = (0.362, 0) and (𝑥1, 𝑥2) = (0, 1).

The simulation study was conducted by repeating the following procedure 100 times for each design.

(STEP 1: Generate failure time data) Given a design, 𝑛 = 100 failure time observations are simulated using the same parameter values of 𝛼 and 𝜷 as those assumed for the design generation. The supplier effects are set to 𝑧1 = −0.707 and 𝑧2 = 0.707 so that the sample variance is consistent with the hypothesized value in Eq. (38), which is 1.
(STEP 2: Transform to read-out data) The simulated failure time data are transformed to read-out data based on the interval censoring scheme assumed for the design generation. In this step,

a single failure time observation is expanded to 𝑘 = 5 observations of the indicator variables described in Section 3.
(STEP 3: Fit model) Using the read-out data, we fit the binomial GLMM and obtain the parameter estimates 𝛽̂0, 𝛽̂1, 𝛽̂2, 𝛽̂12, and 𝜎̂𝑧², as well as the predicted supplier effects 𝑧̂1 and 𝑧̂2.
(STEP 4: Predict failure time) The probability distribution of the failure time at the use condition is predicted using the estimated parameters.

Fig. 6. Predicted MTTF in log scale from the 𝑈𝑐-optimal design, near-optimal design, and balanced factorial design from the simulation study. The red line indicates the true value based on the assumed parameter values.

The MTTF values at the use condition predicted from the 100 simulated data sets are investigated, and the results are plotted in Fig. 6. For the moment, let us compare only the first two designs, the 𝑈𝑐-optimal and near-optimal designs. A comparison of the magnitudes of dispersion of the first two box plots shows that the prediction variances from those designs are quite similar, which is not unexpected because similar objective function values were obtained for the two designs. However, the 𝑈𝑐-optimal design shows a smaller bias, depicted by the deviation from the true value in red, compared to the near-optimal design.

We also compare the parameter estimates obtained from the different designs. Fig. 7 shows estimates of the parameters involved in the model, 𝛽0, 𝛽1, 𝛽2, 𝛽12, and 𝜎𝑧². We observe from the first two box plots that, while there is not much difference between the two designs with respect to the estimates of the supplier effect variance 𝜎̂𝑧², the 𝑈𝑐-optimal design performs substantially better than the near-optimal design for the fixed effects parameters, especially 𝛽0 and 𝛽1. Although the performance of the two designs measured by the prediction variance at the use condition is similar, the balance between the two suppliers is still important for accurate parameter estimates, which is related to the 𝐷-efficiency of a design. Therefore, we conclude that the balanced allocation of test units from the two suppliers improves the 𝐷-efficiency.

Interestingly, the last box plot of Fig. 6 shows that the balanced factorial design produces a smaller prediction variance at the use condition than the first two optimal designs. However, the parameter estimates of this design show large biases in general in Fig. 7.

5.2. Design sensitivity to supplier effects

It is of interest to investigate to what extent the optimal design changes with the supplier effect. In the previous example, we specified 𝜎𝑧² = 1 as the magnitude of the supplier effect. However, such a specification may be too large in real applications. For example, the values 𝑧1 = −0.707 and 𝑧2 = 0.707 illustrated in Table 2, which are chosen such that the unbiased estimate of 𝜎𝑧 becomes 1, make 𝜆 approximately double and halve, respectively, compared to the case without any supplier effect. In this subsection, we investigate optimal designs with smaller values of 𝜎𝑧².

Figs. 8, 9, and 10 show the 𝑈𝑐-optimal designs with 𝜎𝑧² = 0.01, 𝜎𝑧² = 0.2, and 𝜎𝑧² = 0.5, respectively. The other parameters are set the same as in the example of Section 5. Fig. 8 shows that when the supplier effect is close to zero, the numbers of test units from the two suppliers are no longer balanced, which is not unexpected, while the locations of the stress conditions are not much different from those of the optimal design with the large supplier effect. However, the slight increase of 𝜎𝑧² to 0.2 in Fig. 9 quickly makes the design balanced at the stress conditions where the temperature is low, i.e., 𝑥1 = 1. The optimal design under 𝜎𝑧² = 0.5 in Fig. 10 is virtually the same as the optimal design with 𝜎𝑧² = 1 except for the test unit allocations at the stress conditions (𝑥1, 𝑥2) = (0.370, 0) and (𝑥1, 𝑥2) = (0, 1), which again confirms the flexibility of test unit allocation at stress conditions with high failure rates. This finding may be useful when the exact value of 𝜎𝑧² is unknown: even a small assumed value of the supplier effect, e.g., 𝜎𝑧² between 0.2 and 0.5, produces a design that is robust to an unknown, possibly large, magnitude of supplier effects.

6. Conclusion

In this paper, optimal experimental designs have been investigated for products' reliability tests under the interval censoring scheme with correlated observations caused by two suppliers of test units. The 𝑈𝑐-optimal design that minimizes the variance of the failure time prediction at the product's use condition has been sought by modeling the correlated read-out data, developing the information matrix with marginalized quantities, and applying an optimization technique to determine the stress conditions of the two factors and the number of test units allocated to each stress condition by each supplier.

The optimal design found presents a balanced allocation of test units from the two suppliers in general. However, we found that the degree of balance depends on the test stress conditions, and the impact on design efficiency of imbalanced test unit allocations at higher stress conditions was negligible. Therefore, in practice, it is recommended to assign as much balance as possible to the lower stress conditions first, and then flexibly assign the remaining test units to the higher stress conditions.

Particle swarm optimization has provided a quite flexible and stable implementation for the optimization problem addressed in this paper. However, it still requires several hours to reach convergence. Developing a more efficient optimization algorithm for stochastic objective function outcomes would be a worthy task for future research.

CRediT authorship contribution statement

Kangwon Seo: Conceptualization, Methodology, Formal analysis, Visualization, Investigation, Software, Data curation, Validation, Writing – original draft, Writing – review & editing, Supervision. Wonjae Lee: Formal analysis, Software, Data curation, Visualization, Writing – original draft.

Data availability

Data will be made available on request.

Acknowledgments

The authors would like to thank the referees and the editor for reviewing this article and providing valuable comments.

Fig. 7. Parameter estimates from simulated data created by the 𝑈𝑐-optimal, near-optimal, and balanced factorial designs. The red lines indicate the true values assumed for data generation. The true value of 𝜎𝑧² is marked as 0.5 rather than 1 in this plot; 0.5 is the maximum likelihood estimate corresponding to the supplier effects −0.707 and 0.707, and the MLE is what the GLMM fit returns.

Fig. 8. 𝑈𝑐 -optimal design with 𝜎𝑧2 = 0.01.


Fig. 9. 𝑈𝑐 -optimal design with 𝜎𝑧2 = 0.2.

Fig. 10. 𝑈𝑐 -optimal design with 𝜎𝑧2 = 0.5.

Appendix. Marginal quantities in Section 3.2 Marginalized variance

Likewise, to compute the marginalized variance, we first marginal-


Marginalized mean
ize the variance of 𝑌𝑖𝑗 |𝑟𝑖𝑗 + 𝑠𝑖𝑗 , 𝑧𝑠 with respect to 𝑟𝑖𝑗 + 𝑠𝑖𝑗 .
We first marginalize the mean of 𝑌𝑖𝑗 |𝑟𝑖𝑗 +𝑠𝑖𝑗 , 𝑧𝑠 with respect to 𝑟𝑖𝑗 +𝑠𝑖𝑗
var(𝑌𝑖𝑗 |𝑧𝑠 ) = var(E[𝑌𝑖𝑗 |𝑟𝑖𝑗 + 𝑠𝑖𝑗 , 𝑧𝑠 ]) + E[var(𝑌𝑖𝑗 |𝑟𝑖𝑗 + 𝑠𝑖𝑗 , 𝑧𝑠 )]
as follows.
= var((𝑟𝑖𝑗 + 𝑠𝑖𝑗 )𝜋𝑖𝑗 |𝑧𝑠 ) + E[(𝑟𝑖𝑗 + 𝑠𝑖𝑗 )𝜋𝑖𝑗 (1 − 𝜋𝑖𝑗 )|𝑧𝑠 ]
E[𝑌𝑖𝑗 |𝑧𝑠 ] = E[E[𝑌𝑖𝑗 |𝑟𝑖𝑗 + 𝑠𝑖𝑗 , 𝑧𝑠 ]]
= 𝜋𝑖𝑗2 var(𝑟𝑖𝑗 + 𝑠𝑖𝑗 |𝑧𝑠 ) + 𝜋𝑖𝑗 (1 − 𝜋𝑖𝑗 )E[𝑟𝑖𝑗 + 𝑠𝑖𝑗 |𝑧𝑠 ]
= E[(𝑟𝑖𝑗 + 𝑠𝑖𝑗 )𝜋𝑖𝑗 ] { }
−𝑡𝛼 exp(𝛽0 +𝜷 ′ 𝐱𝑖 +𝑧𝑠 ) −𝑡𝛼 exp(𝛽0 +𝜷 ′ 𝐱𝑖 +𝑧𝑠 )
= 𝜋𝑖𝑗 E[𝑟𝑖𝑗 + 𝑠𝑖𝑗 ] = 𝜋𝑖𝑗2 𝑒 𝑗−1 1 − 𝑒 𝑗−1
−𝑡𝛼 exp(𝛽 +𝜷 ′ 𝐱 +𝑧 )
0 𝑖 𝑠 −𝑡𝛼𝑗−1 exp(𝛽0 +𝜷 ′ 𝐱𝑖 +𝑧𝑠 )
= 𝜋𝑖𝑗 𝑒 𝑗−1 + 𝜋𝑖𝑗 (1 − 𝜋𝑖𝑗 )𝑒
{ } 𝛼 { }
−(𝑡 −𝑡 ) exp(𝛽0 +𝜷 ′ 𝐱𝑖 +𝑧𝑠 )
𝛼 𝛼 −𝑡 exp(𝛽0 +𝜷 ′ 𝐱𝑖 +𝑧𝑠 ) −𝑡𝛼 exp(𝛽0 +𝜷 ′ 𝐱𝑖 +𝑧𝑠 ) −𝑡𝛼 exp(𝛽0 +𝜷 ′ 𝐱𝑖 +𝑧𝑠 )
= 1 − 𝑒 𝑗 𝑗−1 𝑒 𝑗−1 = 𝜋𝑖𝑗 𝑒 𝑗−1 1 − 𝜋𝑖𝑗 𝑒 𝑗−1
{ } 𝛼
−𝑡𝛼𝑗−1 exp(𝛽0 +𝜷 ′ 𝐱𝑖 +𝑧𝑠 ) −𝑡𝛼𝑗 exp(𝛽0 +𝜷 ′ 𝐱𝑖 +𝑧𝑠 ) −(𝑡𝛼 −𝑡𝛼 ) exp(𝛽0 +𝜷 ′ 𝐱𝑖 +𝑧𝑠 ) −𝑡 exp(𝛽0 +𝜷 ′ 𝐱𝑖 +𝑧𝑠 )
=𝑒 −𝑒 = 1 − 𝑒 𝑗 𝑗−1 𝑒 𝑗−1
−𝑡𝛼𝑗−1 exp(𝛽0 +𝜷 ′ 𝐱𝑖 )𝑈 −𝑡𝛼𝑗 exp(𝛽0 +𝜷 ′ 𝐱𝑖 )𝑈
[ { } 𝛼 ]
−(𝑡𝛼 −𝑡𝛼 ) exp(𝛽0 +𝜷 ′ 𝐱𝑖 +𝑧𝑠 ) −𝑡 exp(𝛽0 +𝜷 ′ 𝐱𝑖 +𝑧𝑠 )
=𝑒 −𝑒 × 1 − 1 − 𝑒 𝑗 𝑗−1 𝑒 𝑗−1
2 ). The additional ex- { 𝛼 }
where 𝑈 = exp(𝑧𝑠 ) which is 𝐿𝑜𝑔𝑛𝑜𝑟𝑚𝑎𝑙(0, 𝜎𝑍 −𝑡 exp(𝛽0 +𝜷 ′ 𝐱𝑖 )𝑈 −𝑡𝛼 exp(𝛽0 +𝜷 ′ 𝐱𝑖 )𝑈
= 𝑒 𝑗−1 −𝑒 𝑗
pectation with respect to 𝑈 results in the marginal mean of 𝑌𝑖𝑗 as [ { 𝛼 }]
−𝑡 exp(𝛽0 +𝜷 ′ 𝐱𝑖 )𝑈 −𝑡𝛼 exp(𝛽0 +𝜷 ′ 𝐱𝑖 )𝑈
follows. × 1 − 𝑒 𝑗−1 −𝑒 𝑗
[ 𝛼 ]
−𝑡 exp(𝛽0 +𝜷 ′ 𝐱𝑖 )𝑈 −𝑡𝛼 exp(𝛽0 +𝜷 ′ 𝐱𝑖 )𝑈 { 𝛼 }
E[𝑌𝑖𝑗 ] = E[E[𝑌𝑖𝑗 |𝑧𝑠 ]] = E 𝑒 𝑗−1 −𝑒 𝑗 −𝑡 exp(𝛽0 +𝜷 ′ 𝐱𝑖 )𝑈 −𝑡𝛼 exp(𝛽0 +𝜷 ′ 𝐱𝑖 )𝑈
= 𝑒 𝑗−1 −𝑒 𝑗
[ 𝛼 ] [ 𝛼 ]
−𝑡 exp(𝛽0 +𝜷 ′ 𝐱𝑖 )𝑈 −𝑡 exp(𝛽0 +𝜷 ′ 𝐱𝑖 )𝑈 { 𝛼 }2
= E 𝑒 𝑗−1 −E 𝑒 𝑗 −𝑡 exp(𝛽0 +𝜷 ′ 𝐱𝑖 )𝑈 −𝑡𝛼 exp(𝛽0 +𝜷 ′ 𝐱𝑖 )𝑈
( ) ( ) − 𝑒 𝑗−1 −𝑒 𝑗
= 𝑓 𝑡𝛼𝑗−1 exp(𝛽0 + 𝜷 ′ 𝐱𝑖 ) − 𝑓 𝑡𝛼𝑗 exp(𝛽0 + 𝜷 ′ 𝐱𝑖 )
Then the marginalized variance of 𝑌𝑖𝑗 is obtained as follows.
≈ 𝜇𝑖𝑗∗
var(𝑌𝑖𝑗 ) = var(E[𝑌𝑖𝑗 |𝑧𝑠 ]) + E[var(𝑌𝑖𝑗 |𝑧𝑠 )]
where 𝑓 (𝑠) is the Laplace transform of the function 𝑓 (𝑢), the pdf of { 𝛼 }
−𝑡 exp(𝛽0 +𝜷 ′ 𝐱𝑖 )𝑈 −𝑡𝛼 exp(𝛽0 +𝜷 ′ 𝐱𝑖 )𝑈
lognormal herein, evaluated at 𝑠. = var 𝑒 𝑗−1 −𝑒 𝑗

11
K. Seo and W. Lee Computers & Industrial Engineering 171 (2022) 108471
[{ } [ ]
−𝑡𝛼𝑗−1 exp(𝛽0 +𝜷 ′ 𝐱𝑖 )𝑈 −𝑡𝛼𝑗 exp(𝛽 +𝜷 ′ 𝐱 )𝑈 ′ −𝑡𝛼 exp(𝛽0 +𝜷 ′ 𝐱𝑖 )𝑈
+E 𝑒 −𝑒 0 𝑖
= (𝑡𝛼𝑗 − 𝑡𝛼𝑗−1 )𝑒𝛽0 +𝜷 𝐱𝑖 E 𝑈 𝑒 𝑗
{ 𝛼 }2 ] ∞
−𝑡 exp(𝛽0 +𝜷 ′ 𝐱𝑖 )𝑈 −𝑡𝛼 exp(𝛽0 +𝜷 ′ 𝐱𝑖 )𝑈 ′𝐱 −𝑡𝛼𝑗 exp(𝛽0 +𝜷 ′ 𝐱𝑖 )𝑢
− 𝑒 𝑗−1 −𝑒 𝑗 = (𝑡𝛼𝑗 − 𝑡𝛼𝑗−1 )𝑒𝛽0 +𝜷 𝑖 𝑢𝑒 𝑓 (𝑢)𝑑𝑢
∫0
[{ 𝛼 }]
−𝑡 exp(𝛽0 +𝜷 ′ 𝐱𝑖 )𝑈 −𝑡𝛼 exp(𝛽0 +𝜷 ′ 𝐱𝑖 )𝑈 ≈ 𝛿𝑖𝑗
= E 𝑒 𝑗−1 −𝑒 𝑗
[{ 𝛼 }]
−𝑡 exp(𝛽0 +𝜷 ′ 𝐱𝑖 )𝑈 −𝑡𝛼 exp(𝛽0 +𝜷 ′ 𝐱𝑖 )𝑈
− E2 𝑒 𝑗−1 −𝑒 𝑗 References
= E[𝑌𝑖𝑗 ](1 − E[𝑌𝑖𝑗 ]) Asmussen, S., Jensen, J. L., & Rojas-Nandayapa, L. (2016). On the Laplace transform of
≈ 𝜇𝑖𝑗∗ (1 − 𝜇𝑖𝑗∗ ) the lognormal distribution. Methodology and Computing in Applied Probability, 18(2),
Marginalized covariance

To compute the marginal covariance of two observations included in the same cluster $s$, we first marginalize out $r_{ij}+s_{ij}$ as follows.
\begin{align*}
\operatorname{cov}(Y_{ij}, Y_{i'j'}\mid z_s) &= \operatorname{cov}\!\left(\mathrm{E}[Y_{ij}\mid r_{ij}+s_{ij}, z_s],\; \mathrm{E}[Y_{i'j'}\mid r_{i'j'}+s_{i'j'}, z_s]\right) + \mathrm{E}\!\left[\operatorname{cov}(Y_{ij}, Y_{i'j'}\mid r_{ij}+s_{ij}, r_{i'j'}+s_{i'j'}, z_s)\right] \\
&= \operatorname{cov}\!\left((r_{ij}+s_{ij})\pi_{ij},\; (r_{i'j'}+s_{i'j'})\pi_{i'j'}\mid z_s\right) + \mathrm{E}[0] \\
&= \pi_{ij}\pi_{i'j'}\operatorname{cov}\!\left((r_{ij}+s_{ij}),\,(r_{i'j'}+s_{i'j'})\mid z_s\right)
\end{align*}
We can find $(r_{ij}+s_{ij})(r_{i'j'}+s_{i'j'})\mid z_s \sim \mathit{Bernoulli}\!\left(e^{-t_{j-1}^{\alpha}\exp(\beta_0+\boldsymbol{\beta}'\mathbf{x}_i+z_s)}\, e^{-t_{j'-1}^{\alpha}\exp(\beta_0+\boldsymbol{\beta}'\mathbf{x}_{i'}+z_s)}\right)$. Therefore,
\begin{align*}
\operatorname{cov}\!\left((r_{ij}+s_{ij}),\,(r_{i'j'}+s_{i'j'})\mid z_s\right) &= \mathrm{E}\!\left[(r_{ij}+s_{ij})(r_{i'j'}+s_{i'j'})\mid z_s\right] - \mathrm{E}\!\left[(r_{ij}+s_{ij})\mid z_s\right]\mathrm{E}\!\left[(r_{i'j'}+s_{i'j'})\mid z_s\right] \\
&= e^{-t_{j-1}^{\alpha}\exp(\beta_0+\boldsymbol{\beta}'\mathbf{x}_i+z_s)}\, e^{-t_{j'-1}^{\alpha}\exp(\beta_0+\boldsymbol{\beta}'\mathbf{x}_{i'}+z_s)} - e^{-t_{j-1}^{\alpha}\exp(\beta_0+\boldsymbol{\beta}'\mathbf{x}_i+z_s)}\, e^{-t_{j'-1}^{\alpha}\exp(\beta_0+\boldsymbol{\beta}'\mathbf{x}_{i'}+z_s)} \\
&= 0
\end{align*}
and hence, given the supplier effect, the covariance between two observations is zero. Now we marginalize out $z_s$ as follows.
\begin{align*}
\operatorname{cov}(Y_{ij}, Y_{i'j'}) &= \operatorname{cov}\!\left(\mathrm{E}[Y_{ij}\mid z_s],\; \mathrm{E}[Y_{i'j'}\mid z_s]\right) + \mathrm{E}\!\left[\operatorname{cov}(Y_{ij}, Y_{i'j'}\mid z_s)\right] \\
&= \operatorname{cov}\!\left(e^{-t_{j-1}^{\alpha}\exp(\beta_0+\boldsymbol{\beta}'\mathbf{x}_i+z_s)},\; e^{-t_{j'-1}^{\alpha}\exp(\beta_0+\boldsymbol{\beta}'\mathbf{x}_{i'}+z_s)}\right) + \mathrm{E}[0] \\
&= \operatorname{cov}\!\left(e^{-t_{j-1}^{\alpha}\exp(\beta_0+\boldsymbol{\beta}'\mathbf{x}_i)U},\; e^{-t_{j'-1}^{\alpha}\exp(\beta_0+\boldsymbol{\beta}'\mathbf{x}_{i'})U}\right) \\
&= \mathrm{E}\!\left[e^{-t_{j-1}^{\alpha}\exp(\beta_0+\boldsymbol{\beta}'\mathbf{x}_i)U}\, e^{-t_{j'-1}^{\alpha}\exp(\beta_0+\boldsymbol{\beta}'\mathbf{x}_{i'})U}\right] - \mathrm{E}\!\left[e^{-t_{j-1}^{\alpha}\exp(\beta_0+\boldsymbol{\beta}'\mathbf{x}_i)U}\right]\mathrm{E}\!\left[e^{-t_{j'-1}^{\alpha}\exp(\beta_0+\boldsymbol{\beta}'\mathbf{x}_{i'})U}\right] \\
&= \tilde{f}\!\left(t_{j-1}^{\alpha}\exp(\beta_0+\boldsymbol{\beta}'\mathbf{x}_i) + t_{j'-1}^{\alpha}\exp(\beta_0+\boldsymbol{\beta}'\mathbf{x}_{i'})\right) - \tilde{f}\!\left(t_{j-1}^{\alpha}\exp(\beta_0+\boldsymbol{\beta}'\mathbf{x}_i)\right)\tilde{f}\!\left(t_{j'-1}^{\alpha}\exp(\beta_0+\boldsymbol{\beta}'\mathbf{x}_{i'})\right)
\end{align*}
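The last expression only requires evaluating the lognormal Laplace transform at three points. The following short R sketch (illustrative only, reusing the hypothetical `laplace_lognormal()` helper and the assumed parameter values from the sketch above) computes the marginal covariance of two read-out observations that share the same supplier effect.

```r
## Minimal sketch: cov(Y_ij, Y_i'j') = f_tilde(a + b) - f_tilde(a) * f_tilde(b),
## with a = t_{j-1}^alpha exp(beta0 + beta' x_i) and
##      b = t_{j'-1}^alpha exp(beta0 + beta' x_i').
## x_ip and t_prev2 are additional assumed (illustrative) values.
x_ip    <- 0.5     # stress level of condition i'
t_prev2 <- 150     # inspection time t_{j'-1}

a <- t_prev^alpha  * exp(beta0 + beta * x_i)
b <- t_prev2^alpha * exp(beta0 + beta * x_ip)

cov_marg <- laplace_lognormal(a + b, sigma_s) -
            laplace_lognormal(a, sigma_s) * laplace_lognormal(b, sigma_s)
cov_marg   # nonnegative for sigma_s > 0, and shrinks to 0 as sigma_s -> 0
```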
Marginalized quantity of $\boldsymbol{\Delta}$

The marginalization of the quantity in Eq. (23) with respect to $r_{ij}+s_{ij}$ is given as
\begin{align*}
\mathrm{E}\!\left[(r_{ij}+s_{ij})\,\frac{(t_{j}^{\alpha}-t_{j-1}^{\alpha})\exp(\beta_0+\boldsymbol{\beta}'\mathbf{x}_i+z_s)}{e^{(t_{j}^{\alpha}-t_{j-1}^{\alpha})\exp(\beta_0+\boldsymbol{\beta}'\mathbf{x}_i+z_s)}}\;\Bigg|\; z_s\right]
&= \frac{(t_{j}^{\alpha}-t_{j-1}^{\alpha})\exp(\beta_0+\boldsymbol{\beta}'\mathbf{x}_i+z_s)}{e^{(t_{j}^{\alpha}-t_{j-1}^{\alpha})\exp(\beta_0+\boldsymbol{\beta}'\mathbf{x}_i+z_s)}}\,\mathrm{E}[r_{ij}+s_{ij}\mid z_s] \\
&= \frac{(t_{j}^{\alpha}-t_{j-1}^{\alpha})\exp(\beta_0+\boldsymbol{\beta}'\mathbf{x}_i+z_s)}{e^{(t_{j}^{\alpha}-t_{j-1}^{\alpha})\exp(\beta_0+\boldsymbol{\beta}'\mathbf{x}_i+z_s)}}\, e^{-t_{j-1}^{\alpha}\exp(\beta_0+\boldsymbol{\beta}'\mathbf{x}_i+z_s)} \\
&= \frac{(t_{j}^{\alpha}-t_{j-1}^{\alpha})\exp(\beta_0+\boldsymbol{\beta}'\mathbf{x}_i+z_s)}{e^{t_{j}^{\alpha}\exp(\beta_0+\boldsymbol{\beta}'\mathbf{x}_i+z_s)}} \\
&= (t_{j}^{\alpha}-t_{j-1}^{\alpha})\, e^{\beta_0+\boldsymbol{\beta}'\mathbf{x}_i}\, U\, e^{-t_{j}^{\alpha}\exp(\beta_0+\boldsymbol{\beta}'\mathbf{x}_i)U}
\end{align*}
An additional marginalization with respect to $U$ results in
\begin{align*}
\mathrm{E}\!\left[(t_{j}^{\alpha}-t_{j-1}^{\alpha})\, e^{\beta_0+\boldsymbol{\beta}'\mathbf{x}_i}\, U\, e^{-t_{j}^{\alpha}\exp(\beta_0+\boldsymbol{\beta}'\mathbf{x}_i)U}\right]
&= (t_{j}^{\alpha}-t_{j-1}^{\alpha})\, e^{\beta_0+\boldsymbol{\beta}'\mathbf{x}_i}\, \mathrm{E}\!\left[U\, e^{-t_{j}^{\alpha}\exp(\beta_0+\boldsymbol{\beta}'\mathbf{x}_i)U}\right] \\
&= (t_{j}^{\alpha}-t_{j-1}^{\alpha})\, e^{\beta_0+\boldsymbol{\beta}'\mathbf{x}_i} \int_{0}^{\infty} u\, e^{-t_{j}^{\alpha}\exp(\beta_0+\boldsymbol{\beta}'\mathbf{x}_i)u}\, f(u)\, du \\
&\approx \delta_{ij}
\end{align*}
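The final integral can likewise be evaluated numerically against the lognormal pdf of $U$. Below is a minimal R sketch (illustrative only; it reuses the assumed parameter values `alpha`, `beta0`, `beta`, `x_i`, `sigma_s`, `t_prev`, and `t_curr` defined in the first sketch, none of which come from the paper) for computing $\delta_{ij}$.

```r
## Minimal sketch: delta_ij =
##   (t_j^alpha - t_{j-1}^alpha) * exp(beta0 + beta' x_i) *
##   integral_0^Inf u * exp(-t_j^alpha * exp(beta0 + beta' x_i) * u) * f(u) du,
## where f(u) is the lognormal pdf of U = exp(z_s).
delta_ij <- (t_curr^alpha - t_prev^alpha) * exp(beta0 + beta * x_i) *
  integrate(function(u) u * exp(-t_curr^alpha * exp(beta0 + beta * x_i) * u) *
              dlnorm(u, 0, sigma_s),
            lower = 0, upper = Inf)$value
delta_ij
```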
