The chains-of-rare-events model (ChRE) is extended. The ChRE was originally introduced in order to analyze occurrences that can arise with single, double, triple, etc., multiplicity. In the original ChRE, each occurrence of multiplicity i is independently distributed according to a Poisson law with parameter λi, and a simple relation among these parameters is assumed. In this way, ChRE
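As a concrete illustration, the model above can be simulated directly. The geometric relation λi = λ·p^(i−1) below is only an assumed stand-in for the "simple relation" the abstract mentions, and all parameter values are illustrative:

```python
import math
import random

def simulate_chre(lam=2.0, p=0.3, max_mult=5, rng=None):
    """One realisation of the total event count in a chains-of-rare-events
    setting: occurrences of multiplicity i arrive as independent Poisson
    counts N_i with parameter lam_i, and the total is sum_i i * N_i.
    The relation lam_i = lam * p**(i-1) is an illustrative assumption only.
    """
    rng = rng or random.Random(0)
    total = 0
    for i in range(1, max_mult + 1):
        lam_i = lam * p ** (i - 1)
        # Poisson draw via Knuth's multiplication method (fine for small lam_i)
        limit, k, prod = math.exp(-lam_i), 0, rng.random()
        while prod > limit:
            k += 1
            prod *= rng.random()
        total += i * k
    return total
```

The draw per multiplicity uses Knuth's method to avoid any external dependency; for large lam_i a library sampler would be preferable.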
A tutorial is presented illustrating designs in which plastic deformation can endanger system performance, thereby acting as an overstress failure mechanism. Analytic methods based on continuum mechanics are presented for designing against such failures. Examples illustrate the use of these models in practical designs encountered in mechanical systems and in electronic packaging.
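Designing against plastic deformation as an overstress failure typically reduces to checking an equivalent stress against the material's yield strength. A minimal sketch of such a check using the standard von Mises criterion (the safety factor and values are illustrative assumptions, not taken from the tutorial):

```python
import math

def von_mises(s1, s2, s3):
    """Von Mises equivalent stress from the three principal stresses (MPa)."""
    return math.sqrt(((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2) / 2.0)

def yields(s1, s2, s3, sigma_y, safety_factor=1.5):
    """True if the stress state violates the (illustrative) design margin
    against the onset of plastic deformation."""
    return von_mises(s1, s2, s3) * safety_factor > sigma_y
```

Note that a purely hydrostatic state produces zero equivalent stress, which is why the criterion only guards against distortional (shape-changing) loading.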
The aim of this paper is to investigate how to improve the process of information combination using the Dempster-Shafer Theory (DST). In the presence of an overload of information and an unknown environment, the reliability of the sources of information or of the sensors is usually unknown and thus cannot be used to refine the fusion process. In a previous paper [1], the authors investigated different techniques to evaluate contextual knowledge from a set of mass functions (membership of a BPA in a set of BPAs, relative reliabilities of BPAs, credibility degrees, etc.). The purpose of this paper is to investigate how to use this contextual knowledge to improve the fusion process.
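The DST combination step the paper builds on can be sketched with Dempster's classical rule; the dict-of-frozensets representation and the example masses are our own illustrative choices:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability
    assignments (BPAs). m1, m2: dicts mapping frozenset focal elements
    to masses summing to 1. Mass on empty intersections (conflict) is
    renormalised away, per the classical rule.
    """
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict; combination undefined")
    k = 1.0 - conflict
    return {s: w / k for s, w in combined.items()}
```

The renormalisation constant 1/(1−conflict) is exactly where contextual knowledge about source reliability (the paper's topic) would intervene, e.g. by discounting a BPA before combining.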
The determination of the reliability of technical systems whose components are subject to random failure has a wide range of applications, e.g. in data communication networks, computer architectures, and electrical power networks. The ...
The redundancy allocation problem (RAP) has been studied for many different system structures, objective functions, and distribution assumptions. In this paper, we present a problem formulation and a solution methodology to maximize the steady-state availability and minimize the cost of repairable series-parallel system designs. In the proposed approach, the components' time-to-failure (TTF) and time-to-repair (TTR) can follow any distribution, such as the Gamma, Normal, or Weibull. We approximate the steady-state availability of each subsystem in the series-parallel system with an individual meta-model. Design of experiments (DOE), simulation, and stepwise regression are used to build these meta-models; the experiments are designed with a face-centred design, a type of central composite design. Following a max–min approach, the obtained meta-models are used, alongside the system's cost function, to model the problem. We then reformulate the problem with the augmented ε-constraint method and solve the model. An illustrative example using the Gamma distribution for TTF and TTR demonstrates the performance of the proposed approach; its results show that the approach performs well in obtaining Pareto (near-Pareto) optimal solutions (system configurations).
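A minimal sketch of the simulation ingredient: long-run availability of one repairable component under arbitrary TTF/TTR laws, estimated from an alternating renewal process. The Gamma parameters and horizon are illustrative; the paper fits regression meta-models to such subsystem estimates rather than simulating inside the optimisation loop:

```python
import random

def simulated_availability(ttf_draw, ttr_draw, horizon=200000.0, rng=None):
    """Long-run availability via an alternating renewal process
    (up for a TTF draw, then down for a TTR draw). For any TTF/TTR
    distributions this converges to E[TTF] / (E[TTF] + E[TTR]).
    """
    rng = rng or random.Random(1)
    t, up = 0.0, 0.0
    while t < horizon:
        u = ttf_draw(rng)
        up += u
        t += u + ttr_draw(rng)
    return up / t

# Gamma TTF (mean 20) and Gamma TTR (mean 2); parameters are illustrative.
a = simulated_availability(lambda r: r.gammavariate(2.0, 10.0),
                           lambda r: r.gammavariate(2.0, 1.0))
```

Here `a` should land near 20/(20+2) ≈ 0.909; a meta-model would regress such estimates on the design variables.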
The Swedish electricity market was re-regulated in 1996, and new laws and regulations followed. These new circumstances have become incentives to adopt more comprehensive, quantitative analysis methods applied to Electrical Distribution Systems (EDS). This paper provides a systematic presentation of the current risk management at a Distribution System Operator (DSO) as an integrated part of the network planning process. The description is
In this paper we propose an improved BDD approach to network reliability analysis that allows the user to compute an exact solution, or an approximation based on reliability bounds when network complexity makes the exact solution practically impossible. To this purpose, a new algorithm for encoding the connectivity graph on a Binary Decision Diagram (BDD) has been developed; it reduces the peak computation memory with respect to previous approaches based on the same type of data structure without increasing the execution time, and it also allows a lower/upper bound of the network reliability to be derived from a subset of the minpaths/mincuts, so that the quality of the obtained approximation can be estimated. Finally, a fair and detailed comparison between our approach and another state-of-the-art approach from the literature is documented through a set of benchmarks.
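For very small networks, the quantity the BDD encodes can be computed by brute force, which is useful as a reference when experimenting with bounds. A sketch of exact two-terminal reliability by state enumeration (graph and probabilities are illustrative; the paper's BDD encoding is what makes realistic sizes tractable):

```python
from itertools import product

def two_terminal_reliability(edges, p, s, t):
    """Exact s-t reliability by enumerating all 2^|E| edge states
    (only feasible for tiny graphs). edges: list of (u, v) pairs;
    p: dict mapping each edge to its working probability.
    """
    def connected(up):
        seen, stack = {s}, [s]
        while stack:
            u = stack.pop()
            for a, b in up:
                if a == u and b not in seen:
                    seen.add(b); stack.append(b)
                elif b == u and a not in seen:
                    seen.add(a); stack.append(a)
        return t in seen

    rel = 0.0
    for states in product([0, 1], repeat=len(edges)):
        pr, up = 1.0, []
        for e, st in zip(edges, states):
            pr *= p[e] if st else 1.0 - p[e]
            if st:
                up.append(e)
        if connected(up):
            rel += pr
    return rel
```

Summing only over states covered by a chosen subset of minpaths would give the lower bound the paper exploits.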
Statistical information has been used extensively to define feasible solutions in set theoretic signal processing. However, the reliability of the set theoretic estimates resulting from the combination of multiple statistical constraints has thus far not been investigated. We address this question in order to provide a basis for a better use of statistical information in set theoretic estimation problems.
Analyzing the reliability of a system at the design stage requires experts' estimations and statistical data with various degrees of epistemic uncertainty, and aggregating them in a coherent framework. Dempster-Shafer (DS) theory is a potentially valuable tool for the combination of evidence obtained from multiple different sources. One approach to fuzzy reliability assessment uses Vague set (VS) theory, and DS theory has many similarities with VS theory. Uncertain raw data about the component reliability of a system can be combined using the different combination methods of DS theory and represented in the form of a triangular vague number. Using the proper methods and equations, the fuzzy reliability of the system can then be computed from the triangular vague numbers of the components' reliability. Combining these two theories eliminates the gap between the representation of combined evidence and the way component reliability is represented in VS theory for reliability assessment; our proposed method does so in a very convenient form. Because of the close relation between the two theories, the output of the DS combination can be represented as a triangular vague number in VS theory, eliminating the loss of meaningful information in the conversion.
The problem of using a quadratic test to examine the goodness-of-fit of an inverse Gaussian distribution with unknown parameters is discussed. Tables of approximate critical values of the Anderson-Darling, Cramér-von Mises, and Watson test statistics are presented in a format requiring only the sample size and the estimated value of the shape parameter. A relationship is found between the sample size
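The test statistic itself is straightforward to compute once the parameters are estimated; only the critical values require tables like those in the paper. A sketch, assuming the standard MLEs and the usual closed-form inverse Gaussian CDF:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ig_cdf(x, mu, lam):
    """CDF of the inverse Gaussian distribution with mean mu, shape lam."""
    a = math.sqrt(lam / x)
    return (norm_cdf(a * (x / mu - 1.0))
            + math.exp(2.0 * lam / mu) * norm_cdf(-a * (x / mu + 1.0)))

def anderson_darling_ig(sample):
    """Anderson-Darling statistic A^2 for an inverse Gaussian fit with
    both parameters estimated by maximum likelihood. Critical values
    must come from tables such as those presented in the paper.
    """
    n = len(sample)
    xs = sorted(sample)
    mu = sum(xs) / n
    lam = n / sum(1.0 / x - 1.0 / mu for x in xs)
    z = [ig_cdf(x, mu, lam) for x in xs]
    s = sum((2 * i + 1) * (math.log(z[i]) + math.log(1.0 - z[n - 1 - i]))
            for i in range(n))
    return -n - s / n
```

For very large lam/mu the second CDF term can overflow; a log-space formulation would be needed there.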
We consider three approaches to the modeling of systems with repairable components by a multivariate stochastic on-off process. First, we discuss the Palm calculus framework for stationary processes and its power in the derivation of general formulae for joint downtime statistics in the case of statistically independent components. Second, a class of Generalized Semi-Markov Process (GSMP) models is proposed for incorporating both arbitrary component downtime distributions and statistical dependence of component failures. The case of two components is studied in detail. Third, we define the property referred to as weakened-by-failures for a system of repairable components, and prove that it implies association under fairly general conditions. We also give sufficient conditions for our GSMP models to possess this property.
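In the independent-component case discussed first, the basic joint downtime statistic has a simple product form. A sketch under the standard stationarity assumptions (the component data are illustrative):

```python
def unavailability(mean_up, mean_down):
    """Steady-state unavailability of one stationary on-off component:
    fraction of time down = E[down] / (E[up] + E[down])."""
    return mean_down / (mean_up + mean_down)

def prob_all_down(components):
    """P(all components down simultaneously) for statistically
    independent stationary on-off processes: the product of the
    individual unavailabilities. components: (mean_up, mean_down) pairs.
    """
    p = 1.0
    for up, down in components:
        p *= unavailability(up, down)
    return p
```

Richer joint statistics (e.g. the distribution of simultaneous-downtime durations) are where the Palm calculus machinery of the paper is needed; the product above is only the time-average special case.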
The performance of space and frequency diversity techniques at 1800 MHz in an indoor environment is investigated. Three linear signal combining techniques are considered: signal selection (SEL), maximal ratio combining (MRC), and equal gain combining (EGC). The received fading envelopes are computed by means of an analytical model based on a three-dimensional ray-tracing (RT)/uniform theory of diffraction (UTD) technique; the reliability of the adopted approach is confirmed by comparison with test measurements. The electromagnetic field components are processed to obtain the single-branch and combined signal envelopes. The results show the very significant benefits that can be achieved, in terms of both diversity gain and diversity advantage, for both diversity techniques. Antenna spacings of about 0.75-1λ suffice for nearly optimum performance, whereas a frequency separation on the order of 10 MHz is needed for sufficiently decorrelated transmission on the two carriers.
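The three combiners can be sketched on bare signal envelopes. The normalisations below assume equal noise power on all branches and perfect co-phasing, the usual textbook idealisation rather than the paper's ray-tracing setup:

```python
import math

def combine(envelopes, method):
    """Combined signal envelope for N diversity branches.
    SEL: pick the strongest branch;
    MRC: root-sum-square of envelopes (output SNR = sum of branch SNRs);
    EGC: coherent sum scaled by 1/sqrt(N) to keep the noise power fixed.
    """
    n = len(envelopes)
    if method == "SEL":
        return max(envelopes)
    if method == "MRC":
        return math.sqrt(sum(r * r for r in envelopes))
    if method == "EGC":
        return sum(envelopes) / math.sqrt(n)
    raise ValueError("unknown method: %s" % method)
```

By construction the MRC envelope dominates the other two on any branch set, which matches the usual ordering of diversity gains.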
A consecutive-k-out-of-n:F system is strict if it operates in such a way that isolated failure strings of length less than k (which do not cause system failure) do not occur, or are immediately corrected. This paper gives closed formulas for the failure probability of a strict consecutive-k-out-of-n:F system.
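For reference, the failure probability of the ordinary (non-strict) consecutive-k-out-of-n:F system with i.i.d. components follows from a standard recursion; the paper's closed formulas for the strict variant are not reproduced here:

```python
def reliability(n, k, p):
    """P(no run of k consecutive failures among n i.i.d. components),
    each working with probability p. Conditioning on the position i of
    the first working component gives
    R(j) = sum_{i=1..k} p * q**(i-1) * R(j-i), with R(j) = 1 for j < k.
    """
    q = 1.0 - p
    R = {j: 1.0 for j in range(k)}  # fewer than k parts: no k-run possible
    for j in range(k, n + 1):
        R[j] = sum(p * q ** (i - 1) * R[j - i] for i in range(1, k + 1))
    return R[n]

def failure_probability(n, k, p):
    """Failure probability of the ordinary consecutive-k-out-of-n:F system."""
    return 1.0 - reliability(n, k, p)
```

For n = 3, k = 2, p = 0.5, exactly the sequences FFF, FFW, WFF fail, so the failure probability is 3/8, which the recursion reproduces.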
Nowadays software systems are more and more complex. Such systems are composed of many sub-systems, each of which exists and interacts with the other sub-systems. In recent years, multiagent systems (MAS) have offered well-suited approaches for these kinds of systems. In this paper, we concentrate on recursive MAS, which are well adapted to describing complex systems. Up to now, recursive MAS have only been specified with informal languages. This paper proposes the use of type theory to specify recursive MAS. This work constitutes a first theoretical step in the elaboration of a generic software tool for the reliable design of recursive MAS.
Some properties of a bivariate geometric distribution (Dr. N. Unnikrishnan Nair): the abstract is not recoverable; only correspondence fragments survive, naming Dr. Abhijit Dasgupta of the CALCE Electronic Packaging Research Center, University of Maryland, College Park, Maryland 20742 USA.
In this paper we describe several computational algorithms useful in studying importance sampling (IS) for Markov chains. Our algorithms compute optimal IS measures and evaluate the variance of the estimator for a given measure. Since knowledge of the optimal IS measure implies knowledge of the quantity to be estimated, our algorithms produce this quantity as a by-product. Because effective IS measures must often closely approximate the optimal measure, using these algorithms on small problems may yield insights that lead to effective measures for the larger problems of actual interest. We consider two classes of problems: hitting times and fixed-horizon costs.
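The flavour of IS for Markov chains can be sketched on a toy hitting-time problem: a biased random walk where the rare event is absorption at the upper barrier. The tilted measure below (swapping the step probabilities) is the classical change of measure for random walks, not an algorithm from the paper:

```python
import random

def is_hit_probability(p=0.3, N=5, start=1, n_paths=20000, seed=7):
    """Estimate P(walk from `start` hits N before 0) when the true
    up-probability p < 1/2 makes the event rare. We simulate under the
    tilted chain with up-probability q = 1 - p and reweight each path
    by its likelihood ratio (true step probability / tilted one).
    """
    q = 1.0 - p
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        x, lr = start, 1.0
        while 0 < x < N:
            if rng.random() < q:      # tilted measure: step up w.p. q
                x += 1
                lr *= p / q
            else:                     # tilted measure: step down w.p. p
                x -= 1
                lr *= q / p
        if x == N:
            total += lr
    return total / n_paths

# Gambler's-ruin closed form for checking the estimate.
exact = (1 - (0.7 / 0.3)) / (1 - (0.7 / 0.3) ** 5)
```

For this walk every successful path has the same likelihood ratio, so the tilt drastically reduces variance, which is the effect the paper's algorithms quantify for general chains.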
Degradation is a weakness that eventually can cause failure (e.g. car tire wear). When it is possible to measure degradation, such measures often provide more information than failure-time data for the purpose of assessing and improving product reliability. This paper mainly aims to disseminate techniques developed by Meeker & Escobar (1998). We think it is worthwhile to make these topics known, because they are at the research frontier of reliability theory (Lawless 2000). In this work we compare explicit degradation models with approximate degradation analysis. The explicit degradation model requires specific models developed by engineers and physical scientists, which are treated as mixed models with random effects. To obtain ML estimates we use S-PLUS following Pinheiro & Bates (2000), and we also use bootstrap confidence intervals.
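The approximate degradation analysis compared here can be sketched in a few lines: fit a simple path model to each unit's measurements and extrapolate to the failure threshold, yielding pseudo failure times that are then analysed like ordinary lifetimes. The linear path model and the threshold are illustrative assumptions:

```python
def pseudo_failure_times(paths, threshold):
    """Approximate degradation analysis: fit y = a + b*t to each unit's
    degradation measurements by least squares, then extrapolate to the
    time at which the fitted line crosses `threshold`.
    paths: list of (times, measurements) pairs; assumes b > 0
    (monotonically increasing degradation).
    """
    out = []
    for ts, ys in paths:
        n = len(ts)
        tbar = sum(ts) / n
        ybar = sum(ys) / n
        b = (sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
             / sum((t - tbar) ** 2 for t in ts))
        a = ybar - b * tbar
        out.append((threshold - a) / b)
    return out
```

The explicit approach replaces the per-unit fits with one mixed-effects model for all paths, which is what the S-PLUS/nlme machinery of Pinheiro & Bates provides.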