Search Results (23)

Search Parameters:
Keywords = NHPP

14 pages, 811 KiB  
Article
A Software Reliability Model Considering a Scale Parameter of the Uncertainty and a New Criterion
by Kwang Yoon Song, Youn Su Kim, Hoang Pham and In Hong Chang
Mathematics 2024, 12(11), 1641; https://doi.org/10.3390/math12111641 - 23 May 2024
Viewed by 649
Abstract
It is becoming increasingly common for software to operate in various environments. However, even if software performs well in the test phase, uncertain operating environments may cause new software failures. Previously proposed software reliability models for uncertain operating environments fit only special cases well because of the large number of assumptions involved. To address this problem, this study proposes a new software reliability model for uncertain operating environments that minimizes both its assumptions and its number of parameters, so that it can be applied to general situations better than earlier models. In the past, various criteria based on the difference between predicted and estimated values have been used to demonstrate the superiority of software reliability models; here, we also propose a new multi-criteria decision method that considers multiple goodness-of-fit criteria simultaneously. By ranking and weighting multiple criteria, the method supports comprehensive evaluation rather than relying on any individual criterion. On this basis, 21 existing models are compared with the proposed model on two datasets, and the proposed model is found to be superior on both, using 15 criteria and the ranking-based multi-criteria decision method.
(This article belongs to the Section Mathematics and Computer Science)
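The ranking idea lends itself to a short illustration. Below is a minimal sketch, assuming a small table of placeholder criterion values (not taken from the paper): each candidate model is ranked on every goodness-of-fit criterion, and a weighted average rank gives the overall ordering.

```python
# Minimal sketch of a ranking-based multi-criteria comparison of SRMs.
# The criterion values are illustrative placeholders, not the paper's results.
import numpy as np

models = ["GO", "Delayed S-shaped", "Proposed"]
# rows: models; columns: criteria where smaller is better (e.g., MSE, AIC, PRR)
criteria = np.array([
    [0.42, 310.2, 0.15],
    [0.38, 305.7, 0.21],
    [0.31, 301.4, 0.12],
])
weights = np.array([1.0, 1.0, 1.0]) / 3                # equal weighting assumed

ranks = criteria.argsort(axis=0).argsort(axis=0) + 1   # rank 1 = best per criterion
overall = (ranks * weights).sum(axis=1)                # weighted average rank

for name, score in sorted(zip(models, overall), key=lambda p: p[1]):
    print(f"{name}: weighted rank score = {score:.2f}")
```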

24 pages, 5231 KiB  
Article
Optimal Decision for Repairable Products Sale and Warranty under Two-Dimensional Deterioration with Consideration of Production Capacity and Customers’ Heterogeneity
by Ming-Nan Chen and Chih-Chiang Fang
Axioms 2023, 12(7), 701; https://doi.org/10.3390/axioms12070701 - 19 Jul 2023
Cited by 1 | Viewed by 1064
Abstract
An effective warranty policy is not only an obligation for the manufacturer or vendor; it also enhances customers' willingness to purchase from them in the future. To win more customers and increase sales, manufacturers or vendors are inclined to prolong the service life of their products. Nevertheless, they will not offer a boundless warranty to dominate the market, since the related warranty costs would eventually exceed the profits. The question is therefore how to weigh the advantage of extending the warranty term to earn the trust of new customers against the investment required. In addition, since deterioration depends on both time and usage, deterioration estimates for durable products may be incorrect when only one factor is considered. For such problems, a two-dimensional deterioration model is suitable, with failure times drawn from a non-homogeneous Poisson process (NHPP). Customers' heterogeneity, manufacturers' production capacity, and preventive maintenance services are also considered in this study. A mathematical model with a corresponding solution algorithm is proposed to assist manufacturers in making systematic decisions about pricing, production, and warranty. Finally, managerial implications are provided for refining related decision-making.
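As a rough illustration of the ingredients involved, the sketch below computes the population-average expected number of in-warranty failures under an assumed power-law NHPP whose intensity is accelerated by a customer-specific usage rate (the customers' heterogeneity). The functional form and the gamma usage-rate distribution are illustrative assumptions, not the paper's exact model.

```python
# Hedged sketch: expected in-warranty failures when failures follow an NHPP
# whose mean function depends on age t and a customer-specific usage rate r.
import numpy as np
from scipy import integrate, stats

lam0, beta, gamma_u = 0.05, 1.8, 1.2   # assumed deterioration parameters
W = 2.0                                # warranty length (years)

def mean_failures(w, r):
    """Expected failures by age w for usage rate r: lam0 * w**beta * r**gamma_u."""
    return lam0 * w**beta * r**gamma_u

usage = stats.gamma(a=2.0, scale=0.5)  # assumed heterogeneous usage rates

# Average over the customer population by integrating against the usage density.
expected, _ = integrate.quad(lambda r: mean_failures(W, r) * usage.pdf(r), 0, np.inf)
print(f"population-average expected warranty failures: {expected:.3f}")
```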

22 pages, 1776 KiB  
Article
Prediction and Comparative Analysis of Software Reliability Model Based on NHPP and Deep Learning
by Youn Su Kim, Kwang Yoon Song and In Hong Chang
Appl. Sci. 2023, 13(11), 6730; https://doi.org/10.3390/app13116730 - 31 May 2023
Cited by 2 | Viewed by 1317
Abstract
Over time, software has become increasingly important in various fields. Because software is depended on more heavily than in the past, failures caused by issues large and small, such as coding and system errors, can inflict significant damage across an entire industry. Software reliability is therefore crucial to addressing this problem. Past efforts in software reliability developed models based on the nonhomogeneous Poisson process (NHPP); however, as the models grew more complex, they fit well only in special cases. Hence, this study proposes a deep-learning software reliability model that relies on data rather than on mathematical and statistical assumptions. Models were constructed based on recurrent neural networks (RNN), long short-term memory (LSTM), and gated recurrent units (GRU), the most basic recurrent deep neural networks. The data were divided into Datasets 1 and 2, which used 80% and 90% of the entire data, respectively. Evaluated on 11 criteria, the results estimated and learned from these datasets showed that the deep-learning software reliability model has excellent capabilities, with the GRU-based model giving the most satisfactory results.
(This article belongs to the Special Issue Smart Service Technology for Industrial Applications II)
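A minimal sketch of the general approach, assuming a sliding-window formulation over toy cumulative fault counts; the paper's datasets, window size, and network configuration are not reproduced here.

```python
# Sketch: predict the next cumulative fault count from a window of past counts
# with a GRU. Data, window size, and layer sizes are illustrative assumptions.
import numpy as np
import tensorflow as tf

counts = np.cumsum(np.random.poisson(3, size=60)).astype("float32")  # toy data
scale = counts.max()
series = counts / scale                      # normalize for stable training

window = 5
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., None]                             # shape: (samples, timesteps, features)

model = tf.keras.Sequential([
    tf.keras.layers.GRU(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=50, verbose=0)

next_count = model.predict(X[-1:], verbose=0)[0, 0] * scale
print(f"predicted next cumulative fault count: {next_count:.1f}")
```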

17 pages, 6303 KiB  
Article
Prediction of Tool Remaining Useful Life Based on NHPP-WPHM
by Yingzhi Zhang, Guiming Guo, Fang Yang, Yubin Zheng and Fenli Zhai
Mathematics 2023, 11(8), 1837; https://doi.org/10.3390/math11081837 - 12 Apr 2023
Cited by 3 | Viewed by 1338
Abstract
A tool remaining-useful-life prediction method based on a non-homogeneous Poisson process (NHPP) and the Weibull proportional hazard model (WPHM) is proposed, taking into account the grinding repair of machine tools during operation. The intrinsic failure rate model is built from tool failure data. The WPHM is established by collecting vibration information during operation and introducing covariates to describe the failure rate of tool operation. Combined with tool grinding repair, the NHPP-WPHM under different repair times is established to describe the tool's overall failure rate. The failure threshold of tool life is determined by maximum availability, and the remaining tool life is predicted. Taking the cylindrical turning tool of a CNC lathe as an example, the root mean square error, mean absolute error, mean absolute percentage error, and coefficient of determination (R2) are used as indicators, and the proposed method is compared with the actual remaining useful life and with a WPHM-based prediction model to verify its effectiveness.
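The WPHM component can be illustrated compactly: a Weibull baseline hazard is scaled by an exponential link on condition-monitoring covariates such as vibration features. All parameter values in the sketch below are illustrative assumptions.

```python
# Minimal sketch of a Weibull proportional hazard model (WPHM).
import numpy as np

def wphm_hazard(t, z, beta=2.2, eta=120.0, gamma=np.array([0.8])):
    """Hazard h(t|z) = (beta/eta) * (t/eta)**(beta-1) * exp(gamma . z)."""
    baseline = (beta / eta) * (t / eta) ** (beta - 1.0)
    return baseline * np.exp(np.dot(gamma, z))

# Example: hazard at 80 h of cutting for a tool with one vibration covariate.
print(wphm_hazard(80.0, z=np.array([0.6])))
```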

18 pages, 6664 KiB  
Article
An Analysis of the New Reliability Model Based on Bathtub-Shaped Failure Rate Distribution with Application to Failure Data
by Tabassum Naz Sindhu, Sadia Anwar, Marwa K. H. Hassan, Showkat Ahmad Lone, Tahani A. Abushal and Anum Shafiq
Mathematics 2023, 11(4), 842; https://doi.org/10.3390/math11040842 - 7 Feb 2023
Cited by 11 | Viewed by 1688
Abstract
The reliability of software has a tremendous influence on the reliability of systems. Software dependability models are frequently used to statistically analyze software reliability, and numerous reliability models are based on the nonhomogeneous Poisson process (NHPP). In this respect, the current study suggests a novel NHPP model built on the new power function distribution. The mathematical formulas for its reliability measures are derived and visually illustrated. The parameters of the suggested model are estimated using weighted nonlinear least-squares, maximum-likelihood, and nonlinear least-squares estimation techniques, and the model is then verified on a variety of reliability datasets. Four separate criteria are used to assess and compare the estimation techniques. Additionally, the effectiveness of the novel model is evaluated against two baseline models, both objectively and subjectively. The results reveal that the novel model performs well on the failure data examined.
(This article belongs to the Special Issue Probability, Statistics and Their Applications 2021)
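A hedged sketch of the nonlinear least-squares estimation step: since the paper's power-function-based form is not reproduced here, the classic Goel-Okumoto mean value function m(t) = a(1 − e^(−bt)) stands in, fitted to illustrative cumulative failure counts.

```python
# Sketch: nonlinear least-squares fit of an NHPP mean value function.
# The Goel-Okumoto form and the data are stand-in assumptions.
import numpy as np
from scipy.optimize import curve_fit

def mean_value(t, a, b):
    return a * (1.0 - np.exp(-b * t))

t = np.arange(1, 13, dtype=float)                       # weeks of testing
cum_failures = np.array([5, 9, 14, 17, 21, 23, 26, 27, 29, 30, 31, 31], float)

(a_hat, b_hat), _ = curve_fit(mean_value, t, cum_failures, p0=[40.0, 0.1])
print(f"a = {a_hat:.1f} (total faults), b = {b_hat:.3f} (detection rate)")
```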

16 pages, 641 KiB  
Article
Performance Evaluation of a Cloud Datacenter Using CPU Utilization Data
by Chen Li, Junjun Zheng, Hiroyuki Okamura and Tadashi Dohi
Mathematics 2023, 11(3), 513; https://doi.org/10.3390/math11030513 - 18 Jan 2023
Cited by 2 | Viewed by 2441
Abstract
Cloud computing and its associated virtualization are among the most vital architectures in current computer system design. Given the popularity and progress of cloud computing across organizations, performance evaluation of cloud computing is particularly significant, as it helps computer designers plan system capacity. This paper evaluates the performance of a cloud datacenter, Bitbrains, using a queueing model derived solely from CPU utilization data. More precisely, a simple but non-trivial queueing model represents the task processing of each virtual machine (VM) in the cloud, where the input stream is assumed to follow a non-homogeneous Poisson process (NHPP). The parameters of the arrival stream for each VM are then estimated, and the superposition of the estimated arrivals represents the CPU behavior of the integrated virtual platform, whose performance is finally evaluated on that basis.
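The superposition step has a simple structure: merging independent NHPP streams yields an NHPP whose intensity is the sum of the per-VM intensities. A minimal sketch, assuming piecewise-constant hourly rate estimates for three hypothetical VMs (not Bitbrains data):

```python
# Sketch: superposition of per-VM NHPP arrival-rate estimates.
import numpy as np

hours = np.arange(24)
# assumed per-VM arrival-rate estimates per hour (rows: VMs, columns: hours)
vm_rates = np.array([
    4.0 + 2.0 * np.sin(2 * np.pi * hours / 24),
    3.0 + 1.5 * np.cos(2 * np.pi * hours / 24),
    5.0 * np.ones(24),
])
platform_rate = vm_rates.sum(axis=0)   # superposed NHPP intensity

peak = hours[platform_rate.argmax()]
print(f"peak platform load at hour {peak}: {platform_rate.max():.1f} tasks/h")
```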

18 pages, 3000 KiB  
Article
Are Infinite-Failure NHPP-Based Software Reliability Models Useful?
by Siqiao Li, Tadashi Dohi and Hiroyuki Okamura
Software 2023, 2(1), 1-18; https://doi.org/10.3390/software2010001 - 23 Dec 2022
Cited by 3 | Viewed by 2177
Abstract
In the literature, infinite-failure software reliability models (SRMs), such as the Musa-Okumoto SRM (1984), have been demonstrated to be effective in quantitatively characterizing software testing processes and assessing software reliability. This paper primarily focuses on infinite-failure (type-II) non-homogeneous Poisson process (NHPP)-based SRMs and evaluates their performance comprehensively by comparing them with existing finite-failure (type-I) NHPP-based SRMs. More specifically, to describe the software fault-detection time distribution, we postulate 11 representative probability distribution functions that can be categorized into the generalized exponential distribution family and the extreme-value distribution family. We then compare the goodness-of-fit and predictive performances of the associated 11 type-I and type-II NHPP-based SRMs. In numerical experiments, we analyze software fault-count data collected from 16 actual development projects, commonly known in the software industry as fault-count time-domain data and fault-count time-interval data (group data). The maximum likelihood method is used to estimate the model parameters of both types of NHPP-based SRMs. Comparing type-I with type-II, it is shown that the type-II NHPP-based SRMs can exhibit better predictive performance than the existing type-I NHPP-based SRMs, especially in the early stage of software testing.
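The Musa-Okumoto model named in the abstract is a convenient concrete case of a type-II SRM: its mean value function is m(t) = (1/θ)ln(1 + λ₀θt). A minimal sketch of maximum likelihood fitting from fault-detection times, with toy data:

```python
# Sketch: MLE for the infinite-failure (type-II) Musa-Okumoto SRM.
# Detection times and horizon are illustrative, not project data.
import numpy as np
from scipy.optimize import minimize

times = np.array([3., 8., 15., 25., 40., 62., 90., 130., 190., 270.])
T = 300.0                                        # observation horizon

def neg_log_lik(params):
    lam0, theta = np.exp(params)                 # log scale enforces positivity
    lam_t = lam0 / (1.0 + lam0 * theta * times)  # intensity at each failure time
    m_T = np.log(1.0 + lam0 * theta * T) / theta # expected failures by time T
    return -(np.log(lam_t).sum() - m_T)          # NHPP log-likelihood, negated

res = minimize(neg_log_lik, x0=np.log([0.5, 0.01]), method="Nelder-Mead")
lam0_hat, theta_hat = np.exp(res.x)
print(f"lam0 = {lam0_hat:.3f}, theta = {theta_hat:.4f}")
```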

23 pages, 6368 KiB  
Article
Bayesian Statistical Method Enhance the Decision-Making for Imperfect Preventive Maintenance with a Hybrid Competing Failure Mode
by Chih-Chiang Fang, Chin-Chia Hsu and Je-Hung Liu
Axioms 2022, 11(12), 734; https://doi.org/10.3390/axioms11120734 - 15 Dec 2022
Cited by 1 | Viewed by 1416
Abstract
This study provides a Bayesian statistical method with natural conjugate priors for scheduling preventive maintenance of facilities subject to a hybrid competing failure mode. An effective preventive maintenance strategy not only improves a system's health condition but also increases its efficiency, so a firm needs an appropriate strategy for increasing the utilization of a system at reasonable cost. Over recent decades, preventive maintenance of deteriorating systems has been studied extensively in the literature, and hundreds of maintenance/replacement models have been created. However, few studies have focused on hybrid deteriorating systems composed of both maintainable and non-maintainable failure modes. Moreover, when historical failure data are scarce, the related preventive maintenance analyses are difficult to perform. For these two reasons, this study proposes a Bayesian statistical method for such preventive maintenance problems. Non-homogeneous Poisson processes (NHPP) with power-law failure intensity functions describe the system's deterioration behavior. Accordingly, the study provides useful ways to help managers make effective preventive maintenance decisions. To apply the proposed models in actual cases, the study provides solution algorithms and a computerized architecture design that allow decision-makers to computerize their decision-making.
(This article belongs to the Special Issue A Hybrid Analysis of Information Technology and Decision Making)
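The natural-conjugate idea can be shown in a few lines for the power-law intensity named in the abstract: with the shape β fixed, the failure count by time T is Poisson(λT^β), so a gamma prior on the scale λ is conjugate. A minimal sketch under those assumptions (the paper's full decision model is richer):

```python
# Sketch: conjugate gamma update for the scale of a power-law NHPP,
# lam(t) = lam * beta * t**(beta-1). Fixed beta and the prior
# hyperparameters are assumptions (e.g., elicited from domain experts).
beta = 1.6
a, b = 2.0, 50.0      # Gamma(a, b) prior on lam

n, T = 7, 400.0       # observed: 7 failures over 400 h
a_post = a + n        # posterior: Gamma(a + n, b + T**beta)
b_post = b + T**beta

post_mean = a_post / b_post
print(f"posterior mean of lam: {post_mean:.6f}")
# plug-in forecast of expected failures in the next 100 h
print(f"expected failures in next 100 h: {post_mean * ((T+100)**beta - T**beta):.2f}")
```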

14 pages, 1940 KiB  
Article
Software Reliability Growth Model with Dependent Failures and Uncertain Operating Environments
by Da Hye Lee, In Hong Chang and Hoang Pham
Appl. Sci. 2022, 12(23), 12383; https://doi.org/10.3390/app122312383 - 3 Dec 2022
Cited by 6 | Viewed by 1799
Abstract
Software is used in various industries, and its reliability has become an extremely important issue. In the medical industry, for example, software is used to provide medical services to underprivileged individuals; if software reliability problems occur, incorrect medical information may be provided. Software reliability is estimated using software reliability growth models, but most such models assume that failures are independent and that the test and operating environments are the same. In this study, we propose a new software reliability growth model that assumes dependent software failures and uncertain operating environments. A comparison of the proposed model against existing NHPP SRGMs on actual datasets shows that the proposed model achieves the best fit.

19 pages, 4428 KiB  
Article
A Bayesian Pipe Failure Prediction for Optimizing Pipe Renewal Time in Water Distribution Networks
by Widyo Nugroho, Christiono Utomo and Nur Iriawan
Infrastructures 2022, 7(10), 136; https://doi.org/10.3390/infrastructures7100136 - 13 Oct 2022
Cited by 4 | Viewed by 2469
Abstract
The sustainable management of a water supply system requires methodologies to monitor, repair, or replace aging infrastructure; more importantly, it must be able to assess the condition of the networks and predict their behavior over time. Among infrastructure systems, the water distribution network is one of the most essential civil infrastructure systems, so effective maintenance and renewal of its physical assets are vital. This article aims to predict pipe failure in order to optimize pipe renewal time, and the methodology investigates the most appropriate parameters for that prediction. In particular, a non-homogeneous Poisson process (NHPP) with a Markov chain Monte Carlo (MCMC) approach is presented for Bayesian inference, while maximum likelihood (ML) is applied for frequentist inference as a comparison method. Both estimates prove appropriate for predicting failures, but the MCMC estimate is closer to the total observed data. Based on life-cycle cost (LCC) analysis, the MCMC estimate generates flatter LCC curves and lower LCC values than the ML estimate, which affects the decision-making for optimal pipe renewal in water distribution networks.
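For the power-law NHPP commonly used for repairable systems such as pipes, the frequentist (ML) side has closed-form estimators in the time-truncated case: β̂ = n / Σ ln(T/tᵢ) and λ̂ = n / T^β̂. The sketch below applies them to illustrative failure ages.

```python
# Sketch: closed-form MLEs for the power-law NHPP (time-truncated data).
# Failure ages and window are illustrative, not the study's data.
import numpy as np

t = np.array([0.8, 2.1, 3.9, 5.0, 6.7, 7.8, 8.9])   # failure ages (years)
T = 10.0                                            # observation window

beta_hat = len(t) / np.log(T / t).sum()
lam_hat = len(t) / T**beta_hat

m_15 = lam_hat * 15.0**beta_hat      # expected cumulative failures by year 15
print(f"beta = {beta_hat:.2f}, lam = {lam_hat:.3f}, m(15) = {m_15:.1f}")
```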

18 pages, 552 KiB  
Article
A Comprehensive Analysis of Proportional Intensity-Based Software Reliability Models with Covariates
by Siqiao Li, Tadashi Dohi and Hiroyuki Okamura
Electronics 2022, 11(15), 2353; https://doi.org/10.3390/electronics11152353 - 28 Jul 2022
Cited by 1 | Viewed by 1476
Abstract
This paper focuses on the so-called proportional intensity-based software reliability models (PI-SRMs), extensions of the common non-homogeneous Poisson process (NHPP)-based SRMs that describe the probabilistic behavior of the software fault-detection process by incorporating the time-dependent software metrics data observed in the development process. The PI-SRM was proposed by Rinsaka et al. in the 2006 paper "PISRAT: Proportional Intensity-Based Software Reliability Assessment Tool". We generalize this seminal model by introducing eleven well-known fault-detection time distributions and investigate their goodness-of-fit and predictive performances. In numerical illustrations with four datasets collected in real software development projects, we use maximum likelihood estimation to estimate model parameters with three time-dependent covariates (test execution time, failure identification work, and computer time-failure identification), and examine the performance of our PI-SRMs in comparison with existing NHPP-based SRMs without covariates. The results show that our PI-SRMs can give better goodness-of-fit and predictive performance in many cases.
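The proportional intensity structure itself is compact: a baseline NHPP intensity scaled by an exponential link on time-dependent covariates, λ(t; x) = λ₀(t)·exp(γᵀx(t)). A minimal sketch, assuming a Goel-Okumoto baseline and illustrative covariate values rather than the paper's fitted model:

```python
# Sketch: proportional intensity model with time-dependent covariates.
import numpy as np

def baseline_intensity(t, a=100.0, b=0.05):
    """Assumed Goel-Okumoto baseline: lam0(t) = a*b*exp(-b*t)."""
    return a * b * np.exp(-b * t)

def pi_intensity(t, x, gamma):
    """lam(t; x) = lam0(t) * exp(gamma . x(t))."""
    return baseline_intensity(t) * np.exp(np.dot(gamma, x))

# Covariates at week 10: (test execution time, failure identification work)
gamma = np.array([0.02, 0.01])
x_week10 = np.array([35.0, 12.0])
print(f"lam(10) = {pi_intensity(10.0, x_week10, gamma):.3f} faults/week")
```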

21 pages, 6985 KiB  
Article
Bayesian Decision Making of an Imperfect Debugging Software Reliability Growth Model with Consideration of Debuggers’ Learning and Negligence Factors
by Qing Tian, Chun-Wu Yeh and Chih-Chiang Fang
Mathematics 2022, 10(10), 1689; https://doi.org/10.3390/math10101689 - 15 May 2022
Cited by 4 | Viewed by 1596
Abstract
In this study, an imperfect debugging software reliability growth model (SRGM) with Bayesian analysis is proposed to determine the optimal software release time, minimizing software testing costs while enhancing practicability. It is generally difficult to estimate model parameters by maximum likelihood estimation (MLE) or least squares estimation (LSE) when historical data are insufficient. In that situation, the proposed Bayesian method can adopt domain experts' prior judgments and use limited software testing data to forecast reliability and cost through prior and posterior analyses. Moreover, debugging efficiency involves the testing staff's learning and negligence factors, so human factors and the nature of the debugging process are taken into consideration in developing the fundamental model. On this basis, the estimation of the model's parameters is more intuitive and can easily be evaluated by domain experts, which is the major advantage for extending related applications in practice. Finally, numerical examples and sensitivity analyses provide managerial insights and useful directions for software release strategies.

9 pages, 1104 KiB  
Article
Efficiency Evaluation of Software Faults Correction Based on Queuing Simulation
by Yuka Minamino, Yusuke Makita, Shinji Inoue and Shigeru Yamada
Mathematics 2022, 10(9), 1438; https://doi.org/10.3390/math10091438 - 24 Apr 2022
Cited by 2 | Viewed by 1341
Abstract
Fault-counting data are collected during the testing process of software development. However, such data are not used for evaluating the efficiency of fault correction activities, because the detection and correction times of each fault are not recorded in the fault-counting data. Furthermore, collecting new data on the detection time of each fault in order to evaluate the efficiency of fault correction is difficult owing to the cost of personnel and data collection. In this paper, we apply the thinning method, using the intensity functions of the delayed S-shaped and inflection S-shaped software reliability growth models (SRGMs), to generate sample fault-detection times from the fault-counting data. Additionally, we perform simulations based on an infinite-server queueing model, using the generated fault-detection times to visualize the efficiency of fault correction activities.
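The thinning (Lewis-Shedler) method named in the abstract is easy to sketch: candidate events are drawn from a homogeneous Poisson process at a bounding rate λ_max and accepted with probability λ(t)/λ_max. Below, the delayed S-shaped intensity λ(t) = ab²t·e^(−bt) is used with illustrative parameters.

```python
# Sketch: sampling NHPP fault-detection times by thinning.
import numpy as np

rng = np.random.default_rng(42)

def intensity(t, a=30.0, b=0.2):
    """Delayed S-shaped SRGM intensity: a*b**2*t*exp(-b*t)."""
    return a * b**2 * t * np.exp(-b * t)

def thinning(T, lam_max):
    """Sample NHPP event times on (0, T] by thinning a rate-lam_max HPP."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)      # next homogeneous candidate
        if t > T:
            return np.array(times)
        if rng.uniform() < intensity(t) / lam_max:
            times.append(t)                      # accept with prob lam(t)/lam_max

lam_max = intensity(1.0 / 0.2)                   # intensity peaks at t = 1/b
detections = thinning(50.0, lam_max)
print(f"{len(detections)} simulated fault-detection times")
```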

22 pages, 6221 KiB  
Article
Prediction of Membrane Failure in a Water Purification Plant Using Nonhomogeneous Poisson Process Models
by Takashi Hashimoto and Satoshi Takizawa
Membranes 2021, 11(11), 800; https://doi.org/10.3390/membranes11110800 - 20 Oct 2021
Cited by 2 | Viewed by 2306
Abstract
The prediction of membrane failure in full-scale water purification plants is an important but difficult task. Although previous studies employed accelerated laboratory-scale tests of membrane failure, such tests cannot reproduce the complex operational conditions of full-scale plants. We therefore aimed to develop prediction models of membrane failure using actual failure data. Because membrane filtration systems are repairable systems, nonhomogeneous Poisson process (NHPP) models, i.e., power law and log-linear models, were employed; the model parameters were estimated using membrane failure data from a full-scale plant operated for 13 years. Both models were able to predict cumulative failures for forthcoming years; nonetheless, the power law model showed higher stability and narrower confidence intervals than the log-linear model. By integrating two membrane replacement criteria, namely deterioration of filtrate water quality and reduction of membrane permeability, it was possible to predict the time to replace all the membranes at a water purification plant. Finally, the NHPP models coupled with a nonparametric bootstrap method provided a way to select membrane modules for earlier replacement than others. Although replacement criteria may vary among membrane filtration plants, the NHPP models presented in this study can be applied to any other plant with membrane failure data.
(This article belongs to the Special Issue Water and Wastewater Treatment Technologies with Membrane Filtration)
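The two model forms compared in the paper have simple mean value functions: m(t) = λt^β for the power law and m(t) = (e^a/b)(e^(bt) − 1) for the log-linear (Cox-Lewis) model. A minimal sketch with illustrative, not fitted, parameters:

```python
# Sketch: cumulative-failure predictions under the two NHPP model forms.
import numpy as np

def m_power_law(t, lam=0.9, beta=1.4):
    return lam * t**beta

def m_log_linear(t, a=-0.5, b=0.25):
    return (np.exp(a) / b) * (np.exp(b * t) - 1.0)

for t in (5, 10, 15):
    print(f"year {t}: power law {m_power_law(t):5.1f}  "
          f"log-linear {m_log_linear(t):5.1f} cumulative failures")
```

The contrast in growth behavior (polynomial versus exponential) is one reason the two models can give quite different long-horizon predictions and confidence intervals.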

16 pages, 3450 KiB  
Article
Modeling Software Fault-Detection and Fault-Correction Processes by Considering the Dependencies between Fault Amounts
by Qiuying Li and Hoang Pham
Appl. Sci. 2021, 11(15), 6998; https://doi.org/10.3390/app11156998 - 29 Jul 2021
Cited by 17 | Viewed by 1751
Abstract
Many NHPP software reliability growth models (SRGMs) have been proposed to assess software reliability over the past 40 years, but most have modeled the fault detection process (FDP) in one of two ways. One is to ignore the fault correction process (FCP), i.e., to assume that faults are removed instantaneously once the failures they cause are detected. In real software development this is not realistic: fault removal takes time, faults cannot always be removed at once, and detected failures become more and more difficult to correct as testing progresses. The other way is to model the fault correction process through the time delay between fault detection and correction, where the delay has been assumed to be constant, a function of time, or a random variable following some distribution. In this paper, useful approaches to modeling the dual fault detection and correction processes are discussed. The dependencies between the fault amounts of the dual processes are considered instead of a fault-correction time delay. A model is proposed that integrates the fault-detection and fault-correction processes and incorporates a fault introduction rate and a testing coverage rate into the software reliability evaluation. The model parameters are estimated using the least squares estimation (LSE) method. The descriptive and predictive performance of the proposed model and of other existing NHPP SRGMs is investigated using three real datasets and four criteria. The results show that the new model can be significantly more effective in yielding better reliability estimation and prediction.
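One standard way to express a detection-correction dependency is to let the correction rate be proportional to the detected-but-uncorrected fault count. The sketch below solves that coupling numerically under an assumed Goel-Okumoto detection process; it is a generic formulation for illustration, not necessarily the exact model proposed in the paper.

```python
# Sketch: coupled fault detection/correction processes,
# dm_c/dt = c * (m_d(t) - m_c(t)), with an assumed detection process.
import numpy as np
from scipy.integrate import odeint

a, b, c = 100.0, 0.15, 0.25            # illustrative parameters

def m_d(t):
    """Assumed Goel-Okumoto detection mean value function."""
    return a * (1.0 - np.exp(-b * t))

def dm_c(mc, t):
    """Correction rate proportional to detected-but-uncorrected faults."""
    return c * (m_d(t) - mc)

t = np.linspace(0.0, 40.0, 81)
m_c = odeint(dm_c, 0.0, t).ravel()

print(f"detected by week 40: {m_d(40.0):.1f}, corrected: {m_c[-1]:.1f}")
```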
