1. Introduction
The Burr-XII distribution, originally introduced by Burr [1], has found extensive applications in various domains, including lifetime modeling for reliability analysis, addressing life-testing challenges, and devising acceptance sampling plans, as exemplified in the work of Abbasi et al. [2] and other researchers. It has also been effectively utilized in the analysis of observational data across diverse fields such as meteorology, finance, and hydrology, as showcased in studies by Chen et al. [3], Ali and Jaheen [4], Burr [1], and Lio et al. [5]. Moreover, Shao et al. [6] delved into the modeling of extreme events using the three-parameter Burr-XII distribution (TPBXIID), notably in the context of flood-frequency analysis.
Our decision to employ the TPBXIID stems from its remarkable adaptability, spanning a wide spectrum of shapes, from highly skewed to nearly symmetric. This versatility makes it a valuable model for datasets that do not adhere to standard shapes. Notably, the distribution's three parameters, denoted here as $\alpha$, $\theta$, and $\beta$, offer straightforward interpretations, simplifying the analysis of statistical outcomes and enabling comparisons across different datasets. As a distribution supported on the non-negative reals, the Burr-XII distribution is frequently used for modeling data related to lifetimes, sizes, or quantities. It consistently demonstrates an excellent fit to empirical datasets and, for some data types, is known to outperform other commonly employed distributions, including the Weibull distribution.
Furthermore, the Burr-XII distribution serves as a generalized form encompassing several other distributions, including the Lomax (Pareto II) and log-logistic distributions. In summary, the three-parameter Burr-XII distribution emerges as a versatile tool for modeling non-negative data characterized by a broad spectrum of shapes. Its flexibility and interpretability make it a popular choice in statistical modeling.
Cook and Johnson [7] applied the Burr model to attain superior fits for a uranium survey dataset, while Zimmer et al. [8] delved into the statistical and probabilistic properties of the Burr-XII distribution and its relationships with other distributions commonly employed in reliability analyses. Additionally, Tadikamalla [9] extended the two-parameter Burr-XII distribution by introducing an additional scale parameter, resulting in the TPBXIID. This extension has sparked increased interest in applications of the Burr-XII distribution.
Tadikamalla also established mathematical connections among Burr-related distributions, revealing that the Lomax distribution constitutes a special instance of the Burr-XII distribution and that the compound Weibull distribution represents a generalization of the Burr distribution. Furthermore, he demonstrated that several widely used distributions, including the Weibull, logistic, log-logistic, normal, and lognormal distributions, can be viewed as specific cases of the Burr-XII distribution by appropriately configuring the distribution parameters.
In essence, the TPBXIID proves to be highly adaptable, encompassing two shape parameters and one scale parameter in the distribution function, thereby allowing it to represent a diverse range of distribution shapes. The TPBXIID can be characterized through its cumulative distribution function (CDF) and probability density function (PDF), expressed, respectively, in Equations (1) and (2):

$$F(x) = 1 - \left[1 + \left(\frac{x}{\beta}\right)^{\alpha}\right]^{-\theta}, \quad x > 0, \qquad (1)$$

$$f(x) = \frac{\alpha\theta}{\beta}\left(\frac{x}{\beta}\right)^{\alpha - 1}\left[1 + \left(\frac{x}{\beta}\right)^{\alpha}\right]^{-(\theta + 1)}, \quad x > 0, \qquad (2)$$

where $\alpha > 0$ and $\theta > 0$ represent the shape parameters and $\beta > 0$ serves as the scale parameter. Notably, when $\alpha > 1$, the density function exhibits an upside-down bathtub shape (unimodal) with the mode located at $x = \beta\left(\frac{\alpha - 1}{\alpha\theta + 1}\right)^{1/\alpha}$, while it assumes an L-shaped form when $\alpha \le 1$.
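To make the reconstructed parameterization concrete, the following Python sketch evaluates the TPBXIID under the CDF and PDF forms in Equations (1) and (2) and samples from it by inverting the CDF; the helper names are illustrative and not part of the original paper.

```python
import numpy as np

def tpbxii_cdf(x, alpha, theta, beta):
    """CDF of the TPBXIID: F(x) = 1 - [1 + (x/beta)^alpha]^(-theta)."""
    x = np.asarray(x, dtype=float)
    return 1.0 - (1.0 + (x / beta) ** alpha) ** (-theta)

def tpbxii_pdf(x, alpha, theta, beta):
    """PDF obtained by differentiating the CDF above."""
    x = np.asarray(x, dtype=float)
    z = (x / beta) ** alpha
    return (alpha * theta / beta) * (x / beta) ** (alpha - 1) * (1.0 + z) ** (-(theta + 1))

def tpbxii_rvs(size, alpha, theta, beta, rng=None):
    """Inverse-CDF sampling: solve u = F(x) for x."""
    rng = np.random.default_rng(rng)
    u = rng.uniform(size=size)
    return beta * ((1.0 - u) ** (-1.0 / theta) - 1.0) ** (1.0 / alpha)
```

Inverting $u = F(x)$ gives $x = \beta\left[(1-u)^{-1/\theta} - 1\right]^{1/\alpha}$, which is the expression used in `tpbxii_rvs`.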
Recently, Mead and Afify [10] ventured into defining and examining the properties and applications of the five-parameter Burr-XII distribution, referred to as the Kumaraswamy exponentiated Burr-XII. Moreover, Shafqat et al. [11] explored the use of moving average control charts under the Burr X and inverse Gaussian distributions, and Aslam et al. [12] studied a new generalized Burr-XII distribution with real-life applications.
The joint censoring approach proves to be a valuable and practical method for comparing life tests of products originating from various units within the same facility. Consider a scenario where two production lines operate within the same facility, generating products. In this setup, two independent samples of sizes m and n can be selected from each production line and subjected to simultaneous life-testing experiments. To optimize resource utilization, reduce costs, and save time, researchers often employ a joint progressive Type-II censoring scheme (JP-II-CS). This approach is instrumental in terminating life testing when a predetermined number of failures are observed.
Numerous studies in the literature have explored the JP-II-CS and associated inference methods. For example, Rasouli and Balakrishnan [13] introduced likelihood inference techniques for two exponential distributions based on the JP-II-CS. Doostparast et al. [14] delved into Bayes estimation under the linear exponential loss function using JP-II-CS data. Balakrishnan et al. [15] provided likelihood inference procedures for k exponential distributions under the JP-II-CS, while Mondal and Kundu [16] focused on point and interval estimation of Weibull parameters within the context of the JP-II-CS.
Goel and Krishna [17] explored likelihood and Bayesian inference for k Lindley populations under a joint Type-II censoring scheme. Krishna and Goel [18] conducted a study on Lindley populations utilizing the JP-II-CS. Additionally, Goel and Krishna [19] discussed statistical inference for two Lindley populations under a balanced JP-II-CS. Bayoud and Raqab [20] investigated classical and Bayesian inference for two Topp–Leone models under the JP-II-CS, while Chen and Gui [21] addressed statistical inference of the generalized inverted exponential distribution in the context of the JP-II-CS.
Pandey and Srivastava [22] focused on Bayesian inference for two log-logistic populations under the JP-II-CS, and Qiao and Gui [23] tackled statistical inference of the weighted exponential distribution under similar censoring conditions.
Recently, Hassan et al. [24] delved into statistical inference of the Burr Type III distribution under joint progressive Type-II censoring. Kumar and Kumari [25] explored Bayesian and likelihood estimation techniques for two inverse Pareto populations under joint progressive censoring conditions.
According to Rasouli and Balakrishnan [13], the JP-II-CS is described as follows. Let $X_1, X_2, \ldots, X_m$ be the lifetimes of $m$ units of product A, supposed to be independent and identically distributed (iid) random variables from the TPBXIID with CDF given by

$$F_1(x) = 1 - \left[1 + \left(\frac{x}{\beta_1}\right)^{\alpha_1}\right]^{-\theta_1}, \quad x > 0,$$

and PDF

$$f_1(x) = \frac{\alpha_1\theta_1}{\beta_1}\left(\frac{x}{\beta_1}\right)^{\alpha_1 - 1}\left[1 + \left(\frac{x}{\beta_1}\right)^{\alpha_1}\right]^{-(\theta_1 + 1)}, \quad x > 0.$$

In a similar manner, consider a set of $n$ lifetimes denoted as $Y_1, Y_2, \ldots, Y_n$ for product B. These lifetimes correspond to $n$ units and are treated as iid random variables following the TPBXIID, with CDF given by

$$F_2(y) = 1 - \left[1 + \left(\frac{y}{\beta_2}\right)^{\alpha_2}\right]^{-\theta_2}, \quad y > 0,$$

and PDF

$$f_2(y) = \frac{\alpha_2\theta_2}{\beta_2}\left(\frac{y}{\beta_2}\right)^{\alpha_2 - 1}\left[1 + \left(\frac{y}{\beta_2}\right)^{\alpha_2}\right]^{-(\theta_2 + 1)}, \quad y > 0,$$

where $\alpha_1$, $\theta_1$, $\alpha_2$, and $\theta_2$ are shape parameters and $\beta_1$ and $\beta_2$ are scale parameters. In this scenario, let $K = m + n$ denote the total sample size and $W_1 \le W_2 \le \cdots \le W_K$ indicate the order statistics of the $K$ random variables $\{X_1, \ldots, X_m; Y_1, \ldots, Y_n\}$. The JP-II-CS method is applied as follows: when the first failure $W_1$ occurs, $R_1$ units are randomly removed from the remaining $K - 1$ surviving units. The same process is repeated at the second failure $W_2$, where $R_2$ units are randomly withdrawn from the remaining $K - 2 - R_1$ surviving units, and so on. At the $r$-th failure $W_r$, all remaining $R_r = K - r - \sum_{i=1}^{r-1} R_i$ surviving units are withdrawn from the experiment. The JP-II-CS is represented by $(R_1, R_2, \ldots, R_r)$, and the total number of failures $r$ is predetermined before conducting the experiment. Suppose that $R_i = S_i + T_i$, $i = 1, \ldots, r$, where $S_i$ and $T_i$ represent the numbers of units withdrawn at the time of the $i$-th failure from the $X$ and $Y$ samples, respectively; these values are unknown and random. The data observed in this form consist of $(\mathbf{W}, \mathbf{Z}, \mathbf{S})$, where $\mathbf{W} = (W_1, \ldots, W_r)$, $\mathbf{Z} = (Z_1, \ldots, Z_r)$ with $Z_i = 1$ or $0$ according to whether $W_i$ comes from an $X$ or a $Y$ failure, respectively, and $\mathbf{S} = (S_1, \ldots, S_r)$, with $m_r = \sum_{i=1}^{r} Z_i$ denoting the number of observed $X$-failures and $n_r = r - m_r$ the number of observed $Y$-failures.
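As an illustration of the scheme just described, the following Python sketch pools the two samples and, at each of the $r$ observed failures, withdraws $R_i$ randomly chosen survivors, recording $(\mathbf{W}, \mathbf{Z}, \mathbf{S})$. The function name and removal scheme are illustrative assumptions, not the paper's code.

```python
import numpy as np

def jp2cs_sample(x, y, R, rng=None):
    """Simulate a joint progressive Type-II censored sample (W, Z, S) from
    lifetimes x (product A) and y (product B). R = (R_1, ..., R_r) is the
    removal scheme and must satisfy sum(R) + len(R) == len(x) + len(y)."""
    rng = np.random.default_rng(rng)
    # Pool the units, tagging X-lifetimes with z = 1 and Y-lifetimes with
    # z = 0, and sort by lifetime so the next failure is the first survivor.
    units = sorted([(t, 1) for t in x] + [(t, 0) for t in y])
    W, Z, S = [], [], []
    for R_i in R:
        w, z = units.pop(0)  # the i-th observed failure
        W.append(w)
        Z.append(z)
        if R_i > 0:
            # Withdraw R_i survivors at random; S_i counts those from sample X.
            drop = set(rng.choice(len(units), size=R_i, replace=False))
            S.append(sum(units[j][1] for j in drop))
            units = [u for j, u in enumerate(units) if j not in drop]
        else:
            S.append(0)
    return np.array(W), np.array(Z), np.array(S)
```

For example, with $m = 23$ and $n = 25$ (so $K = 48$), any scheme with $\sum_{i=1}^{r} R_i = K - r$ is valid, and `tpbxii_rvs` from the earlier sketch can supply the two lifetime samples.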
In this study, we employ a JP-II-CS strategy to formulate statistical inferences and assess two independent samples from the TPBXIID. We derive point and interval estimators using Bayesian and maximum likelihood estimation (MLE) techniques. We compute asymptotic confidence intervals (ACIs) based on the observed information matrix, and additional confidence intervals (CIs) through various bootstrap techniques: Bootstrap-P (Boot-P), Bootstrap-T (Boot-T), Bias-Corrected Bootstrap (Boot-BC), and Bias-Corrected Accelerated Bootstrap (Boot-BCa). We assume gamma prior distributions for both the shape and scale parameters. Employing the Metropolis–Hastings (M-H) method, we obtain Bayes estimates and credible intervals (CRIs) under the informative prior for both the squared error (SE) and linear exponential (LINEX) loss functions. To evaluate the effectiveness of these approaches, we conduct Monte Carlo simulations and analyze real-world data.
The paper is structured in the following way: Section 2 outlines the derivation of the MLEs for the unknown parameters of the TPBXIID. Section 3 presents ACIs that depend on the MLEs. Section 4 discusses different bootstrap CIs. The Bayesian analysis is performed in Section 5. To illustrate the estimation methods developed in this paper, we analyze real datasets in Section 6. The simulation results are presented in Section 7. Finally, a brief conclusion can be found in Section 8.
5. Bayesian Estimation
In this section, we employ the MCMC approach within a Gibbs sampler framework that incorporates a nested M-H algorithm. We utilize this approach to generate parametric samples representing the unknown parameters $\alpha_1$, $\theta_1$, $\beta_1$, $\alpha_2$, $\theta_2$, and $\beta_2$ from their respective marginal posteriors, allowing us to derive Bayes estimates for these parameters. The Gibbs sampler is a recursive sampling technique employed to simulate samples from the full conditional posterior distributions, while the M-H algorithm is used to generate samples from arbitrary distributions (Hastings [32]; Metropolis et al. [33]). In this context, we generate $N$ samples using the MCMC technique, with the initial $M$ values discarded as the burn-in period. The remaining $N - M$ sample values are subsequently utilized for further Bayesian analysis.
We assume that the parameters $\alpha_1$, $\theta_1$, $\beta_1$, $\alpha_2$, $\theta_2$, and $\beta_2$ have independent gamma prior distributions,

$$\pi_i(\vartheta_i) = \frac{b_i^{a_i}}{\Gamma(a_i)}\,\vartheta_i^{a_i - 1} e^{-b_i \vartheta_i}, \quad \vartheta_i > 0, \; i = 1, \ldots, 6,$$

where $\vartheta_1, \ldots, \vartheta_6$ stand for $\alpha_1$, $\theta_1$, $\beta_1$, $\alpha_2$, $\theta_2$, and $\beta_2$, respectively, and the hyperparameters $a_i$ and $b_i$, $i = 1, \ldots, 6$, are supposed to be known and non-negative. Using these priors, we obtain the joint prior distribution as follows:

$$\pi(\alpha_1, \theta_1, \beta_1, \alpha_2, \theta_2, \beta_2) = \prod_{i=1}^{6} \pi_i(\vartheta_i). \qquad (42)$$
Based on (8) and (42), the joint posterior density function of $\alpha_1$, $\theta_1$, $\beta_1$, $\alpha_2$, $\theta_2$, and $\beta_2$ given the data $(\mathbf{w}, \mathbf{z}, \mathbf{s})$ is written as

$$\pi^*(\alpha_1, \theta_1, \beta_1, \alpha_2, \theta_2, \beta_2 \mid \mathbf{w}, \mathbf{z}, \mathbf{s}) \propto \pi(\alpha_1, \theta_1, \beta_1, \alpha_2, \theta_2, \beta_2)\, L(\alpha_1, \theta_1, \beta_1, \alpha_2, \theta_2, \beta_2 \mid \mathbf{w}, \mathbf{z}, \mathbf{s}), \qquad (43)$$

where $L(\cdot)$ denotes the likelihood function in (8).
We observed that (43) is not amenable to analytical solutions due to the formidable challenge of deriving closed-form expressions for the marginal posterior distributions of the individual parameters. Consequently, we recommend the MCMC method to approximate (43). Several studies have extensively explored the MCMC technique, including works by Chen and Shao [34] and Ghazal and Hasaballah [30,35,36]. From (43), the full conditional posterior density functions of $\alpha_1$, $\beta_1$, $\alpha_2$, $\beta_2$, $\theta_1$, and $\theta_2$ can be obtained, up to proportionality, as Equations (44)–(49); to simplify the notation, we write $\pi_1^*(\alpha_1 \mid \cdot), \ldots, \pi_6^*(\theta_2 \mid \cdot)$ for these conditional densities of $\alpha_1$, $\beta_1$, $\alpha_2$, $\beta_2$, $\theta_1$, and $\theta_2$, respectively.
It is evident that the full conditional posterior density function of $\theta_1$, as provided in Equation (48), takes the form of a gamma density, and similarly, the full conditional posterior density function of $\theta_2$, as shown in Equation (49), follows a gamma distribution, with shape and scale parameters updated by the prior hyperparameters and the data. Consequently, samples of $\theta_1$ and $\theta_2$ can be readily generated using any gamma distribution generation method.
However, it is important to note that the conditional posterior density functions of $\alpha_1$, $\beta_1$, $\alpha_2$, and $\beta_2$, as described in Equations (44)–(47), cannot be analytically reduced to well-known distributions, so direct sampling using standard methods is challenging. Nevertheless, as depicted in Figure 1, these distributions exhibit similarities to the normal distribution.
5.1. Estimation Based on SE Loss Function
The SE loss function is represented by the equation:

$$L_1(g, \hat{g}_{SE}) = a\left(\hat{g}_{SE} - g\right)^2.$$

In this equation, the positive constant $a$ is typically set to 1, $g = g(\alpha_1, \theta_1, \beta_1, \alpha_2, \theta_2, \beta_2)$ is a function of the parameters to be estimated, and $\hat{g}_{SE}$ denotes the SE estimate of $g$. The Bayes estimator under a quadratic loss function is computed as the mean of the posterior distribution:

$$\hat{g}_{SE} = E_g\left[g \mid \mathbf{w}, \mathbf{z}, \mathbf{s}\right].$$
The SE loss function is commonly employed in the literature and is considered one of the most prevalent loss functions. It exhibits symmetry, implying that it treats the overestimation and underestimation of parameters equally. However, in life-testing scenarios, one type of estimation error may have more significant consequences than the other.
5.2. Estimation Based on LINEX Loss Function
The LINEX loss function is defined as follows:

$$L_2(\Delta) \propto e^{a\Delta} - a\Delta - 1, \quad a \neq 0, \quad \Delta = \hat{g}_{L} - g. \qquad (52)$$

Here, $g$ is as previously defined, and $\hat{g}_{L}$ represents the LINEX estimate of $g$.
The shape parameter $a$ determines the direction and degree of asymmetry of this loss function. Varian [37] first introduced this loss function, while Zellner [38] highlighted its intriguing properties. When $a > 0$, overestimation is penalized more severely than underestimation, and the reverse is true when $a$ is negative. However, for values of $a$ close to zero, the LINEX loss function closely resembles the symmetric SE loss function. This function exhibits significant asymmetry when $a = 1$, with overestimation incurring higher costs than underestimation. Conversely, when $a = -1$, the loss function rises nearly exponentially for $\Delta < 0$ and nearly linearly for $\Delta > 0$.
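The asymmetry is easy to verify numerically; the following is a minimal sketch evaluating the (unnormalized) LINEX loss $e^{a\Delta} - a\Delta - 1$ at a few errors $\Delta$, with the function name being illustrative:

```python
import numpy as np

def linex_loss(delta, a):
    """Unnormalized LINEX loss: exp(a*delta) - a*delta - 1."""
    return np.exp(a * delta) - a * delta - 1.0

delta = np.array([-2.0, -1.0, 1.0, 2.0])
print(linex_loss(delta, a=1.0))   # overestimation (delta > 0) penalized more
print(linex_loss(delta, a=-1.0))  # underestimation (delta < 0) penalized more
```

For $a = 1$, the loss at $\Delta = 2$ is $e^{2} - 3 \approx 4.39$, while at $\Delta = -2$ it is only $e^{-2} + 1 \approx 1.14$, confirming the direction of the asymmetry described above.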
The Bayes estimate of $g$ under the LINEX loss function involves the posterior expectation $E_g\left[e^{-a g}\right]$, and is applicable provided that this expectation exists and is finite. Now, we will outline the steps of the M-H within Gibbs sampling method in Algorithm 5.
Algorithm 5: Metropolis–Hastings within Gibbs sampling

1. Begin with an initial guess $(\alpha_1^{(0)}, \theta_1^{(0)}, \beta_1^{(0)}, \alpha_2^{(0)}, \theta_2^{(0)}, \beta_2^{(0)})$, set $M$ = burn-in, and put $j = 1$.
2. Generate $\theta_1^{(j)}$ from the gamma conditional in (48).
3. Generate $\theta_2^{(j)}$ from the gamma conditional in (49).
4. Using M-H, generate $\alpha_1^{(j)}$, $\beta_1^{(j)}$, $\alpha_2^{(j)}$, and $\beta_2^{(j)}$ from (44)–(47) with normal proposal distributions $N(\alpha_1^{(j-1)}, \operatorname{var}(\alpha_1))$, $N(\beta_1^{(j-1)}, \operatorname{var}(\beta_1))$, $N(\alpha_2^{(j-1)}, \operatorname{var}(\alpha_2))$, and $N(\beta_2^{(j-1)}, \operatorname{var}(\beta_2))$, where the variances can be obtained from the main diagonal of the inverse FIM:
   (i) Generate proposals $\alpha_1^*$ from $N(\alpha_1^{(j-1)}, \operatorname{var}(\alpha_1))$, $\beta_1^*$ from $N(\beta_1^{(j-1)}, \operatorname{var}(\beta_1))$, $\alpha_2^*$ from $N(\alpha_2^{(j-1)}, \operatorname{var}(\alpha_2))$, and $\beta_2^*$ from $N(\beta_2^{(j-1)}, \operatorname{var}(\beta_2))$.
   (ii) Evaluate the acceptance probabilities, e.g., $\eta_{\alpha_1} = \min\left\{1, \pi_1^*(\alpha_1^* \mid \cdot)/\pi_1^*(\alpha_1^{(j-1)} \mid \cdot)\right\}$, and analogously $\eta_{\beta_1}$, $\eta_{\alpha_2}$, and $\eta_{\beta_2}$.
   (iii) Generate $u$ from a uniform $(0, 1)$ distribution.
   (iv) If $u \le \eta_{\alpha_1}$, do not reject the proposal and put $\alpha_1^{(j)} = \alpha_1^*$; otherwise, put $\alpha_1^{(j)} = \alpha_1^{(j-1)}$.
   (v) If $u \le \eta_{\beta_1}$, do not reject the proposal and put $\beta_1^{(j)} = \beta_1^*$; otherwise, put $\beta_1^{(j)} = \beta_1^{(j-1)}$.
   (vi) If $u \le \eta_{\alpha_2}$, do not reject the proposal and put $\alpha_2^{(j)} = \alpha_2^*$; otherwise, put $\alpha_2^{(j)} = \alpha_2^{(j-1)}$.
   (vii) If $u \le \eta_{\beta_2}$, do not reject the proposal and put $\beta_2^{(j)} = \beta_2^*$; otherwise, put $\beta_2^{(j)} = \beta_2^{(j-1)}$.
5. Put $j = j + 1$.
6. Repeat Steps 2–5 $N$ times to obtain the draws $\alpha_1^{(j)}, \theta_1^{(j)}, \beta_1^{(j)}, \alpha_2^{(j)}, \theta_2^{(j)}, \beta_2^{(j)}$, $j = 1, \ldots, N$.
7. To evaluate the CRIs, order the post-burn-in draws of each parameter $g$ as $g_{(1)} \le g_{(2)} \le \cdots \le g_{(N-M)}$. Then, the $100(1 - 2\gamma)\%$ CRI of $g$ is $\left(g_{(\gamma(N-M))},\; g_{((1-\gamma)(N-M))}\right)$.
8. Obtain the Bayes estimate of $g$ based on the SE loss function as
$$\hat{g}_{SE} = \frac{1}{N - M}\sum_{j=M+1}^{N} g^{(j)},$$
and the Bayes estimate of $g$ based on the LINEX loss function as
$$\hat{g}_{L} = -\frac{1}{a}\ln\left[\frac{1}{N - M}\sum_{j=M+1}^{N} e^{-a g^{(j)}}\right].$$
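For concreteness, the following Python sketch mirrors Algorithm 5 under explicit assumptions: the full conditional densities in Equations (44)–(49) are supplied by the user as callables (they depend on the likelihood developed earlier in the paper), and all names are illustrative rather than the authors' implementation.

```python
import numpy as np

def mh_within_gibbs(log_cond, gamma_cond, init, prop_sd, n_iter=11000, rng=None):
    """M-H within Gibbs for (alpha1, beta1, alpha2, beta2), with gamma
    full conditionals for (theta1, theta2).

    log_cond[k](value, state) : log full conditional of M-H parameter k, Eqs. (44)-(47)
    gamma_cond[k](state)      : (shape, scale) of the gamma conditional, Eqs. (48)-(49)
    init                      : dict of starting values (e.g., the MLEs)
    prop_sd[k]                : normal proposal s.d. (e.g., from the inverse FIM diagonal)
    """
    rng = np.random.default_rng(rng)
    state = dict(init)
    draws = []
    for _ in range(n_iter):
        # Gibbs steps: theta1 and theta2 have gamma full conditionals.
        for k in ("theta1", "theta2"):
            shape, scale = gamma_cond[k](state)
            state[k] = rng.gamma(shape, scale)
        # M-H steps with normal random-walk proposals for the remaining parameters.
        for k in ("alpha1", "beta1", "alpha2", "beta2"):
            prop = rng.normal(state[k], prop_sd[k])
            if prop > 0:  # all parameters are positive
                log_ratio = log_cond[k](prop, state) - log_cond[k](state[k], state)
                if np.log(rng.uniform()) <= log_ratio:
                    state[k] = prop
        draws.append(dict(state))
    return draws  # discard the first M draws as burn-in before summarizing
```

Rejecting non-positive proposals is one common way to enforce the positivity constraint; working with the log of the conditional densities avoids numerical underflow in the acceptance ratio.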
The posterior expectation of the LINEX loss function (52) can be expressed as follows:

$$E_g\left[L_2(\hat{g}_{L} - g)\right] \propto e^{a\hat{g}_{L}}\, E_g\left[e^{-a g}\right] - a\left(\hat{g}_{L} - E_g[g]\right) - 1.$$

Using the LINEX loss function, the Bayes estimate of $g$ is given by the following:

$$\hat{g}_{L} = -\frac{1}{a}\ln\left(E_g\left[e^{-a g}\right]\right).$$
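Given the retained post-burn-in draws, the SE and LINEX point estimates and an equal-tailed CRI follow directly from the formulas above; a minimal sketch (function and variable names are illustrative):

```python
import numpy as np

def bayes_summaries(g_draws, a=1.0, cred=0.95):
    """SE estimate (posterior mean), LINEX estimate, and equal-tailed CRI
    from post-burn-in MCMC draws of a parameter g."""
    g = np.asarray(g_draws, dtype=float)
    g_se = g.mean()                                 # SE loss: posterior mean
    g_linex = -np.log(np.mean(np.exp(-a * g))) / a  # LINEX: -(1/a) log E[exp(-a g)]
    lo, hi = np.quantile(g, [(1 - cred) / 2, 1 - (1 - cred) / 2])
    return g_se, g_linex, (lo, hi)
```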
6. Applications
In this section, we examine a real dataset to illustrate how the suggested techniques operate in practical situations. The dataset was obtained from the National Climatic Data Center (NCDC) in Asheville, North Carolina, USA. The data are daily average wind speeds, measured in knots, for Alexandria for two samples: the first covering 23 days, from 1 February 2017 to 23 February 2017, and the second covering 25 days, from 1 February 2018 to 25 February 2018; they are presented in Table 1 and Table 2, respectively.
We used the Kolmogorov–Smirnov (K-S) test to check if the data distribution fit the TPBXIID model. For the first sample, the K-S test calculated a value of
for TPBXIID, which is smaller than the expected value of
at a significance level of
with
and a P-value of
. Similarly, for the second sample, the K-S test calculated a value of
for TPBXIID, which is smaller than the expected value of
at a significance level of
with
and a P-value of
. Therefore, we can conclude that the TPBXIID fits both samples very well. We have also included Figure 2 and Figure 3 to show how well the empirical and fitted values match up. Overall, the TPBXIID appears to be an excellent model for these data.
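A goodness-of-fit check along these lines can be sketched as follows; since Tables 1 and 2 are not reproduced here, synthetic stand-in data and placeholder parameter values are used instead of the paper's fitted estimates.

```python
import numpy as np
from scipy.stats import kstest

def tpbxii_cdf(x, alpha, theta, beta):
    """CDF of the TPBXIID, as in Equation (1)."""
    return 1.0 - (1.0 + (np.asarray(x, dtype=float) / beta) ** alpha) ** (-theta)

# Placeholder fitted values; in the paper these would be the MLEs from Table 3.
alpha_hat, theta_hat, beta_hat = 2.0, 1.5, 10.0

# Synthetic stand-in for the 23 wind-speed observations of Table 1,
# drawn here from the fitted model purely for illustration.
rng = np.random.default_rng(7)
u = rng.uniform(size=23)
sample1 = beta_hat * ((1 - u) ** (-1 / theta_hat) - 1) ** (1 / alpha_hat)

stat, pval = kstest(sample1, lambda x: tpbxii_cdf(x, alpha_hat, theta_hat, beta_hat))
print(f"K-S statistic = {stat:.4f}, p-value = {pval:.4f}")
```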
From the above datasets, we generated the JP-II-C sample under a chosen censoring scheme. Assume that $m = 23$ for the first sample and $n = 25$ for the second sample; implementing the JP-II-CS with total sample size $K = m + n = 48$, a predetermined number of failures $r$, and a removal scheme $(R_1, \ldots, R_r)$ yields the jointly censored sample. The generated datasets are provided below.
Based on the data described above, we calculate estimates using the MLEs and the bootstrap method for $\alpha_1$, $\theta_1$, $\beta_1$, $\alpha_2$, $\theta_2$, and $\beta_2$; the results are shown in Table 3, and the results of the ACIs, Boot-P CIs, Boot-T CIs, Boot-BC CIs, and Boot-BCa CIs for these parameters are given in Table 4, Table 5 and Table 6. For Bayesian estimation, we used the MCMC method with 11,000 MCMC samples and discarded the first 1000 values as burn-in. We also used non-informative priors, with the hyperparameters $a_i$ and $b_i$ chosen accordingly. We then obtained the Bayesian estimates for $\alpha_1$, $\theta_1$, $\beta_1$, $\alpha_2$, $\theta_2$, and $\beta_2$ under the SE and LINEX loss functions; the results are displayed in Table 3. Moreover, the results of the CRIs for these parameters are tabulated in Table 4, Table 5 and Table 6.
8. Conclusions
This study utilized the JP-II-CS to compare life tests of items from different units within a single facility. Point and interval estimates for the TPBXIID were generated using diverse methodologies, including maximum likelihood, Bayesian, and parametric bootstrap techniques. However, it should be noted that obtaining explicit MLEs for unknown parameters is not possible, so we used numerical techniques to compute them. Similarly, Bayes estimators are not available in closed form, so we used the MCMC method to compute them for the SE and LINEX loss functions. We tested these techniques on a real dataset and also conducted a simulation study to compare their performance for different sample sizes.
Based on our findings in Table 4, Table 5 and Table 6, we can conclude that Boot-T outperforms Boot-P, Boot-BC, and Boot-BCa in terms of having the smallest interval lengths. It is observed from Table 7, Table 8, Table 9, Table 10, Table 11 and Table 12 that the Bayes estimates under the LINEX loss function provide better estimates in the sense of having smaller MSEs. It is also clear from these tables that when the sample sizes $(m, n)$ and the number of failures $r$ increase, the MSEs and the interval lengths decrease. Additionally, in Table 7, Table 8, Table 9, Table 10, Table 11 and Table 12, the MSEs and CPs of the MLEs are smaller than those of MCMC. Finally, we found that the performance of the Bayes estimates of the parameters $\alpha_1$, $\theta_1$, $\beta_1$, $\alpha_2$, $\theta_2$, and $\beta_2$ is better than that of the MLEs.
This study demonstrated that Bayesian estimators outperformed MLEs in the context of the JP-II-CS for parameter estimation. The Bayesian approach, employing SE and LINEX loss functions, yielded more accurate and precise estimates for the TPBXIID in life testing and reliability analysis. This improvement was attributed to Bayesian estimation’s ability to incorporate prior knowledge or beliefs about the parameters, enhancing accuracy and precision. Additionally, Bayesian estimation provided a complete posterior distribution of the parameters, offering a comprehensive view of estimation uncertainty and variability. The use of the MCMC method within Bayesian estimation efficiently explored the parameter space, capturing intricate parameter relationships.
In summary, Bayesian estimation stands out by incorporating prior information, enabling a deeper understanding of uncertainty, and efficiently handling small sample sizes and sparse data. When applied to the JP-II-CS with SE and LINEX loss functions, Bayesian estimation outperformed maximum likelihood estimation in terms of accuracy and precision, showcasing its practical advantages in life testing and reliability analysis.