
Article

Results on Varextropy Measure of Random Variables

by
Nastaran Marzban Vaselabadi
1,
Saeid Tahmasebi
1,
Mohammad Reza Kazemi
2 and
Francesco Buono
3,*
1
Department of Statistics, Persian Gulf University, Bushehr 7516913817, Iran
2
Department of Statistics, Faculty of Science, Fasa University, Fasa 7461686131, Iran
3
Dipartimento di Matematica e Applicazioni “Renato Caccioppoli”, Università degli Studi di Napoli Federico II, I-80126 Naples, Italy
*
Author to whom correspondence should be addressed.
Entropy 2021, 23(3), 356; https://doi.org/10.3390/e23030356
Submission received: 15 February 2021 / Revised: 8 March 2021 / Accepted: 16 March 2021 / Published: 17 March 2021
(This article belongs to the Special Issue Measures of Information)

Abstract
In 2015, Lad, Sanfilippo and Agrò proposed extropy, an alternative measure of uncertainty dual to entropy. This paper provides results on varextropy, a dispersion measure built on the extropy of a random variable, and studies several properties of this concept. In particular, the varextropy of residual and past lifetimes, order statistics, record values and proportional hazard rate models is discussed. Moreover, the conditional varextropy is considered and some properties of this measure are studied. Finally, a new stochastic comparison method, named varextropy ordering, is introduced and some of its properties are presented.
MSC:
62B10; 62G30

1. Introduction

In the context of information theory, entropy was first proposed by Clausius, who used it to give a quantitative expression of the second law of thermodynamics, opening a new path for the development of that field [1]. The concept was then developed by Shannon [2] and has since been used in several fields, such as image and signal processing and economics. Let $X$ be an absolutely continuous random variable with probability density function (pdf) $f(x)$; the differential entropy is a measure of uncertainty defined by
$$H(X) = -\int_{-\infty}^{+\infty} f(x)\log f(x)\,dx,$$
where $\log(\cdot)$ stands for the natural logarithm, with the convention $0\log 0 = 0$. Song [3] introduced the concept of varentropy (VE) as an alternative to the kurtosis measure; in fact, VE can be used instead of kurtosis to compare heavy-tailed distributions. Liu [4] studied some properties of VE under the notion of information volatility. Fradelizi et al. [5] obtained an optimal varentropy bound for log-concave distributions. The varentropy of a random variable $X$ is defined as
$$VH(X) = \mathrm{Var}(-\log f(X)). \qquad (1)$$
Varentropy measures the variability in the information content of $X$. Recently, Di Crescenzo and Paolillo [6] studied the varentropy of the residual lifetime, and Maadani et al. [7] introduced a method for calculating this measure for the $i$-th order statistic. An alternative measure of uncertainty, known as extropy, was proposed by Lad et al. [8]. For an absolutely continuous random variable $X$ with pdf $f(x)$, the extropy is defined as
$$J(X) = E\left[-\frac{1}{2}f(X)\right] = -\frac{1}{2}\int_{-\infty}^{+\infty} [f(x)]^2\,dx = -\frac{1}{2}\int_0^1 f(F^{-1}(u))\,du,$$
where $F^{-1}(u) = \inf\{x : F(x) \geq u\}$ is the quantile function of the cumulative distribution function (cdf) $F$. Recently, several authors have paid attention to extropy and its applications. Qiu [9] discussed characterization results, monotone properties and lower bounds for the extropy of order statistics and record values. Moreover, Qiu and Jia [10] focused on the residual extropy of order statistics, and Qiu and Jia [11] explored extropy estimators with applications in testing uniformity.
In some situations, one may have two random variables with the same extropy; this leads to the well-known question "Which of the extropies is the most appropriate criterion for measuring the uncertainty?". For example, the extropy values of the standard uniform distribution and of an exponential distribution with parameter 2 are both equal to $-\frac{1}{2}$. This question motivates the investigation of the variance of $-\frac{1}{2}f(X)$, which is called varextropy. The varextropy measure indicates how the information content is scattered around the extropy. It can be shown that the varextropy of the uniform distribution is zero, while for the exponential distribution with parameter 2 it is $\frac{1}{12}$; hence, for the uniform distribution, the extropy is more appropriate for measuring the uncertainty, because the uniform distribution has the least information volatility. In addition to the varentropy, the use of the varextropy is also worthwhile. Some comparative results for the varentropy and varextropy measures are given in the next section. One can observe that the newly introduced varextropy measure is more flexible than the varentropy, in the sense that the latter is free of the model parameters in some cases.
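This motivating example is easy to check numerically. The following minimal sketch (ours, not part of the original paper; it assumes NumPy and SciPy are available) confirms by quadrature that $U(0,1)$ and $\mathrm{Exp}(2)$ share the extropy $-1/2$ while their varextropies differ:

import numpy as np
from scipy import integrate

def extropy_varextropy(pdf, a, b):
    # J(X) = -(1/2) * int f^2 ;  VJ(X) = (1/4) * (int f^3 - (int f^2)^2)
    m2 = integrate.quad(lambda x: pdf(x) ** 2, a, b)[0]
    m3 = integrate.quad(lambda x: pdf(x) ** 3, a, b)[0]
    return -0.5 * m2, 0.25 * (m3 - m2 ** 2)

print(extropy_varextropy(lambda x: 1.0, 0, 1))                      # (-0.5, 0.0)
print(extropy_varextropy(lambda x: 2 * np.exp(-2 * x), 0, np.inf))  # (-0.5, ~1/12)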
Aiming to analyze the variability of such information content, in the present paper an alternative measure analogous to (1) is proposed, which can be used for measuring the dispersion of the residual and past lifetimes. On the grounds of the above remarks, the motivation of this paper is to investigate the varextropy of random lifetimes in reliability theory. Accordingly, this paper is organized as follows. In Section 2, the definition of the varextropy measure is given and some of its properties are investigated. In particular, its extensions to residual and past lifetimes, order statistics, record values and proportional hazard rate models are provided, and an approximate formula for the varextropy based on a Taylor series is proposed. In Section 3, some results on the conditional varextropy measure are obtained. In Section 4, a new stochastic comparison method, named varextropy ordering, is introduced and some of its properties are presented. Throughout this paper, $E[\cdot]$ denotes expectation and $f'$ denotes the derivative of $f$.

2. Varextropy Measure

Hereafter, we introduce a measure of uncertainty which can be used as an alternative to Shannon entropy. It is known that Shannon entropy measures the uniformity of $f$; this remark motivated us to consider the varextropy measure.
Definition 1.
Let X be an absolutely continuous random variable with cdf F and pdf f . The varextropy can be defined as
$$VJ(X) := \mathrm{Var}\left[-\frac{1}{2}f(X)\right] = \frac{1}{4}E[f^2(X)] - J^2(X). \qquad (2)$$
Quantile functions $Q(u) = F^{-1}(u)$, $0 \leq u \leq 1$, are efficient alternatives to the cdf in the modelling and analysis of statistical data; see, for instance, ref. [12]. Letting $U \sim U(0,1)$, the corresponding quantile-based varextropy of $X$ can be written as
$$VJ(X) = \frac{1}{4}\left(E\left[f^2(Q(U))\right] - E^2\left[f(Q(U))\right]\right) = \frac{1}{4}\left[\int_0^1 f^2(F^{-1}(u))\,du - \left(\int_0^1 f(F^{-1}(u))\,du\right)^2\right].$$
In the following, a few examples are given to illustrate the varextropy for random variables from some distributions.
Example 1.
(i) 
If $X$ is uniformly distributed in $[0,a]$, then $VJ(X) = VH(X) = 0$. As one can see, the varextropy is conceptually compatible with the varentropy, and both take values greater than or equal to zero. When both varextropy and varentropy are zero, the information content of $X$ is constant, that is, there is no information volatility.
(ii) 
If X follows the Weibull distribution with cdf
$$F(x) = 1 - e^{-\lambda x^\alpha}, \quad x > 0,$$
then, a direct computation yields
$$VJ(X) = \frac{\alpha^2 \lambda^{2/\alpha}}{4}\left[\frac{\Gamma\left(\frac{2(\alpha-1)}{\alpha}+1\right)}{3^{\frac{2(\alpha-1)}{\alpha}+1}} - \frac{\Gamma^2\left(\frac{\alpha-1}{\alpha}+1\right)}{2^{\frac{2(\alpha-1)}{\alpha}+2}}\right], \qquad VH(X) = \frac{\pi^2}{6}\left(1-\frac{1}{\alpha}\right)^2 + \frac{2}{\alpha} - 1.$$
In particular, when $\alpha = 1$ one obtains the exponential distribution, with $VJ(X) = \frac{\lambda^2}{48}$ and $VH(X) = 1$.
(iii) 
If $X$ follows a power distribution with parameter $\alpha > 1$, i.e., $f(x) = \alpha x^{\alpha-1}$, $x \in (0,1)$, then we have
$$VJ(X) = \frac{\alpha^3(\alpha-1)^2}{4(3\alpha-2)(2\alpha-1)^2}, \qquad VH(X) = (\alpha-1)^2\left[\dot{\psi}(\alpha) - \dot{\psi}(\alpha+1)\right],$$
where $\dot{\psi}(\cdot)$ is the trigamma function.
(iv) 
If X follows a two-parameter exponential distribution with density function
$$f(x) = \lambda \exp\{-\lambda(x-\mu)\}, \quad x > \mu,$$
then we have $VJ(X) = \frac{\lambda^2}{48}$ and $VH(X) = 1$. In this case, $VH$ does not depend on the parameters, and $VJ(X) < VH(X)$ for $\lambda < 4\sqrt{3}$.
(v) 
If X follows the Laplace distribution with density function
$$f(x) = \frac{1}{2\beta}\exp\left(-\frac{|x|}{\beta}\right), \quad x \in \mathbb{R},$$
straightforward computations yield $VJ(X) = \frac{1}{192\beta^2}$ and $VH(X) = 1$. Comparing the varextropies of the two-parameter exponential and Laplace distributions with $\beta = 1/\lambda$, the varextropy of the two-parameter exponential distribution is four times that of the Laplace distribution.
(vi) 
If X is beta-distributed with parameters α and β, then
$$VJ(X) = \frac{B(3(\alpha-1)+1,\, 3(\beta-1)+1)}{4B^3(\alpha,\beta)} - \frac{B^2(2(\alpha-1)+1,\, 2(\beta-1)+1)}{4B^4(\alpha,\beta)}, \qquad VH(X) = (\alpha-1)^2\dot{\psi}(\alpha) + (\beta-1)^2\dot{\psi}(\beta) - (\alpha+\beta-2)^2\dot{\psi}(\alpha+\beta),$$
where B ( · , · ) and ψ ˙ ( · ) denote the beta and trigamma functions, respectively.
(vii) 
If $X \sim N(\mu, \sigma^2)$, then $VJ(X) = \frac{2-\sqrt{3}}{16\sqrt{3}\,\pi\sigma^2}$ and $VH(X) = \frac{1}{2}$. In this case, the varextropy depends on the scale parameter $\sigma^2$, whereas it is independent of the location parameter $\mu$. From the above examples, it can be seen that the varextropy measure is more flexible than the varentropy, in the sense that the latter is free of the model parameters in some cases.
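The closed forms above can be spot-checked numerically. The sketch below (our own verification code, assuming SciPy; not from the paper) does so for the normal and Weibull cases:

import numpy as np
from scipy import integrate

def vj(pdf, a, b):
    # VJ(X) = (1/4) * (E[f^2(X)] - E^2[f(X)]) = (1/4) * (int f^3 - (int f^2)^2)
    m2 = integrate.quad(lambda x: pdf(x) ** 2, a, b)[0]
    m3 = integrate.quad(lambda x: pdf(x) ** 3, a, b)[0]
    return 0.25 * (m3 - m2 ** 2)

# (vii): N(0, sigma^2) has VJ = (2 - sqrt(3)) / (16 sqrt(3) pi sigma^2)
sigma = 1.7
npdf = lambda x: np.exp(-x ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
print(vj(npdf, -np.inf, np.inf), (2 - np.sqrt(3)) / (16 * np.sqrt(3) * np.pi * sigma ** 2))

# (ii): Weibull with lambda = 1, alpha = 2, i.e., f(x) = 2x exp(-x^2)
print(vj(lambda x: 2 * x * np.exp(-x ** 2), 0, np.inf))   # ~0.0129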
In the following, some properties of the varextropy, such as its behaviour for symmetric distributions or how it changes under monotonic transformations, are considered.
Proposition 1.
Suppose $X$ is an absolutely continuous non-negative random variable with mean $\mu = E(X) < +\infty$ and pdf $f(x) = \bar{F}(x)/\mu$, $0 < x < +\infty$, where $\bar{F}(x) = 1 - F(x)$ is the survival function of $X$ with cdf $F$. Then, $VJ(X) = \frac{1}{48\mu^2}$.
Proposition 2.
Let $\tilde{X}$ be an absolutely continuous random variable with pdf $\tilde{f}(x) = \frac{x f(x)}{\mu}$, $x > 0$, where $f$ is a fixed pdf with mean $\mu = E(X) < +\infty$. Then, $VJ(\tilde{X}) = \frac{1}{4\mu^2}\mathrm{Var}[\tilde{X} f(\tilde{X})]$.
Proposition 3.
Let $X$ be a random variable symmetric with respect to the finite mean $\mu = E(X)$, i.e., $F(x+\mu) = 1 - F(\mu-x)$ for all $x$. Then, $VJ(X+\mu) = VJ(\mu-X)$.
Remark 1.
Suppose that $X$ is a continuous random variable with a density function $f$ symmetric with respect to $\mu = 0$. Then, $VJ(|X|) = 4VJ(X) = \mathrm{Var}[f(X)]$. For instance, if $X \sim N(0, \sigma^2)$, from Example 1 one can obtain the varextropy of the half-normal distribution.
Proposition 4.
If $Y = h(X)$ is a strictly monotone function of $X$, then $VJ(Y) = \frac{1}{4}\mathrm{Var}\left[\frac{f(X)}{|h'(X)|}\right]$. Note that if $Y = aX + b$, then $VJ(Y) = \frac{1}{a^2} VJ(X)$; hence, the varextropy is invariant under translations.
Remark 2.
Let $X$ be a random variable with pdf $f$ and let $\Phi$ be a convex function. Then the $\Phi$-entropy is defined by Beigi and Gohari [13] as follows:
$$H_\Phi(f) = E[\Phi(f)] - \Phi(E(f)).$$
For the choice $\Phi(t) = \frac{t^2}{4}$, we get $H_\Phi(f) = VJ(f)$.
In the following, by using a Taylor series, an approximate formula for the varextropy is obtained. To this aim, it is enough to approximate $E[f(X)]$ as follows:
$$E[f(X)] \approx E\left[f(\mu) + f'(\mu)(X-\mu) + \frac{1}{2}f''(\mu)(X-\mu)^2\right] = f(\mu) + \frac{1}{2}f''(\mu)\mathrm{Var}(X). \qquad (3)$$
Theorem 1.
Let $X$ be a random variable with pdf $f$ and mean $\mu = E(X) < +\infty$. Then
$$VJ(X) \approx \frac{1}{16}\left(f''(\mu)\right)^2\left[\mu_4 - \mu_2^2\right] + \frac{1}{4}\left[f'(\mu)\right]^2\mu_2 + \frac{1}{4}f'(\mu)f''(\mu)\mu_3,$$
where $\mu_r = E(X-\mu)^r < +\infty$, $r = 2, 3, 4$.
Proof. 
Making use of (2) and (3), we have
$$VJ(X) = \frac{1}{4}\mathrm{Var}[f(X)] = \frac{1}{4}E\left[\left(f(X) - E(f(X))\right)^2\right] \approx \frac{1}{4}E\left[\left(f(\mu) + f'(\mu)(X-\mu) + \frac{1}{2}f''(\mu)(X-\mu)^2 - E(f(X))\right)^2\right]$$
$$= \frac{1}{4}E\left[\left(f(\mu) - E(f(X)) + f'(\mu)(X-\mu) + \frac{1}{2}f''(\mu)(X-\mu)^2\right)^2\right] \approx \frac{1}{4}E\left[\left(-\frac{1}{2}f''(\mu)\mathrm{Var}(X) + f'(\mu)(X-\mu) + \frac{1}{2}f''(\mu)(X-\mu)^2\right)^2\right].$$
Therefore, the stated result follows.    □
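To illustrate the theorem, the following sketch (ours) evaluates the right-hand side for a $N(\mu, \sigma^2)$ density, whose derivatives at $\mu$ and central moments are known in closed form; since the expansion is truncated at second order, the output is only a rough approximation of the exact value from Example 1(vii):

import numpy as np

mu, sigma = 0.0, 1.0
fp = 0.0                                       # f'(mu) vanishes at the mode
fpp = -1.0 / (sigma ** 3 * np.sqrt(2 * np.pi)) # f''(mu) for the normal density
mu2, mu3, mu4 = sigma ** 2, 0.0, 3 * sigma ** 4  # central moments mu_2, mu_3, mu_4

approx = fpp ** 2 * (mu4 - mu2 ** 2) / 16 + fp ** 2 * mu2 / 4 + fp * fpp * mu3 / 4
exact = (2 - np.sqrt(3)) / (16 * np.sqrt(3) * np.pi * sigma ** 2)
print(approx, exact)   # the truncation error of the second-order expansion is visible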
Definition 2.
For any random variables $X$ and $Y$ with pdfs $f$ and $g$, respectively, the maximal correlation of $X$ and $Y$ is defined by
$$\tilde{\rho}(X,Y) = \max \frac{E\left[(f(X) - E[f(X)])(g(Y) - E[g(Y)])\right]}{\sqrt{\mathrm{Var}[f(X)]\,\mathrm{Var}[g(Y)]}} = \max \frac{\mathrm{Cov}(f(X), g(Y))}{4\sqrt{VJ(X)\,VJ(Y)}}.$$
Note that $0 \leq \tilde{\rho}(X,Y) \leq 1$. Moreover, $\tilde{\rho}(X,Y) = 0$ if, and only if, $X$ and $Y$ are independent. See Beigi and Gohari [13] for more details.
Remark 3.
If X is a random variable with pdf f and Y = | X | , then ρ ˜ ( X , Y ) = 1 .
Let ( X , Y ) denote the lifetimes of two components of a system with joint cdf F ( x , y ) and joint pdf f ( x , y ) . It is possible to introduce the bivariate version of extropy, denoted by J ( X , Y ) , in the following way:
$$J(X,Y) = -\frac{1}{4}E[f(X,Y)] = -\frac{1}{4}\int_0^{+\infty}\int_0^{+\infty} f^2(x,y)\,dx\,dy,$$
see Balakrishnan et al. [14] for further details. Hence, the bivariate V J is given by
$$VJ(X,Y) := VJ(f) = \frac{1}{16}\mathrm{Var}[f(X,Y)].$$
In the case when X and Y are independent random variables, then
$$VJ(X,Y) = VJ(X)\,VJ(Y) + VJ(X)\,J^2(Y) + J^2(X)\,VJ(Y),$$
and, if in addition, X and Y are identically distributed, then
$$VJ(X,Y) = VJ^2(X) + 2J^2(X)\,VJ(X) = VJ(X)\left[VJ(X) + 2J^2(X)\right].$$
For example, let $X, Y \overset{iid}{\sim} N(0,1)$; then, by using Example 1 and $J(X) = -\frac{1}{4\sqrt{\pi}}$, we get $VJ(X,Y) = \frac{(2-\sqrt{3})^2}{768\pi^2} + \frac{2-\sqrt{3}}{128\sqrt{3}\,\pi^2} = \frac{1}{768\pi^2}$.
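The last equality is easy to verify from the product formula; a short sketch of ours:

import numpy as np

vj1 = (2 - np.sqrt(3)) / (16 * np.sqrt(3) * np.pi)   # VJ of N(0,1), Example 1(vii)
j2 = 1.0 / (16 * np.pi)                              # J^2(X), with J(X) = -1/(4 sqrt(pi))
print(vj1 ** 2 + 2 * j2 * vj1)    # VJ(X,Y) from the product formula
print(1 / (768 * np.pi ** 2))     # closed form, ~1.32e-4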

2.1. Residual and Past Varextropies

As mentioned in the introduction, several researchers have dedicated their attention to the study of extropy. Now, we recall the definitions of residual and past extropy. Let $X$ be a non-negative and absolutely continuous random variable; then $X_t = [X - t \mid X \geq t]$ is the residual lifetime, with pdf $f_t(x) = f(x+t)/\bar{F}(t)$, $x > 0$, and $X_{[t]} = [X \mid X \leq t]$ is the past lifetime, with pdf $f_{[t]}(x) = f(x)/F(t)$, $0 < x < t$. In analogy with the residual entropy, Qiu and Jia [10] defined the extropy of the residual lifetime $X_t$, i.e., the residual extropy at time $t$, as
$$J(X_t) = -\frac{1}{2}\int_0^{+\infty} f_{X_t}^2(x)\,dx = -\frac{1}{2\bar{F}^2(t)}\int_t^{+\infty} f^2(x)\,dx.$$
Regarding the past lifetime $X_{[t]} = [X \mid X \leq t]$, Krishnan et al. [15] and Kamari and Buono [16] studied the past extropy, defined as
$$J(X_{[t]}) = -\frac{1}{2}\int_0^{+\infty} f_{X_{[t]}}^2(x)\,dx = -\frac{1}{2F^2(t)}\int_0^t f^2(x)\,dx.$$
The residual extropy and the past extropy can be seen as expectations. So, the residual and past varextropies of $X$ at time $t$, $VJ(X_t)$ and $VJ(X_{[t]})$, are
$$VJ(X_t) = \frac{1}{4}E\left[f_{X_t}^2(X_t)\right] - J^2(X_t), \qquad VJ(X_{[t]}) = \frac{1}{4}E\left[f_{X_{[t]}}^2(X_{[t]})\right] - J^2(X_{[t]}).$$
Example 2.
(i) 
If X has an exponential distribution, then
$$VJ(X_t) = VJ(X_{[t]}) = VJ(X),$$
i.e., it does not depend on the age $t$ of the system.
(ii) 
If X follows a power distribution with parameter α > 1 , then
$$VJ(X_t) = \frac{\alpha^3}{4(1-t^\alpha)^4}\left[\frac{(1-t^{3\alpha-2})(1-t^\alpha)}{3\alpha-2} - \frac{\alpha(1-t^{2\alpha-1})^2}{(2\alpha-1)^2}\right], \qquad VJ(X_{[t]}) = \frac{\alpha^3(\alpha-1)^2}{4t^2(3\alpha-2)(2\alpha-1)^2}.$$
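The residual closed form can be checked by quadrature against the definition; a small sketch of ours (assuming SciPy), with hypothetical values $\alpha = 3$ and $t = 0.4$:

import numpy as np
from scipy import integrate

alpha, t = 3.0, 0.4                      # hypothetical parameter values
f = lambda x: alpha * x ** (alpha - 1)   # power pdf on (0, 1)
Fb = 1 - t ** alpha                      # survival function at t
ft = lambda x: f(x + t) / Fb             # residual pdf on (0, 1 - t)

m2 = integrate.quad(lambda x: ft(x) ** 2, 0, 1 - t)[0]
m3 = integrate.quad(lambda x: ft(x) ** 3, 0, 1 - t)[0]
vj_numeric = 0.25 * (m3 - m2 ** 2)

vj_closed = alpha ** 3 / (4 * Fb ** 4) * (
    (1 - t ** (3 * alpha - 2)) * Fb / (3 * alpha - 2)
    - alpha * (1 - t ** (2 * alpha - 1)) ** 2 / (2 * alpha - 1) ** 2)
print(vj_numeric, vj_closed)             # the two values should agree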
Proposition 5.
If $Y = aX + b$, with $X$ a non-negative random variable, $a > 0$ and $b \geq 0$, then
$$VJ(Y_t) = \frac{1}{a^2}\, VJ\left(X_{\frac{t-b}{a}}\right), \qquad VJ(Y_{[t]}) = \frac{1}{a^2}\, VJ\left(X_{\left[\frac{t-b}{a}\right]}\right).$$
It is observed that the residual varextropy can be written as
$$VJ(X_t) = \frac{\mathrm{Var}[f(X_t+t)]}{4\bar{F}^2(t)},$$
hence, for all $t \geq 0$, the derivative of the residual varextropy is
$$\frac{\partial}{\partial t} VJ(X_t) = VJ(X_t)\left[2\lambda_X(t) + \frac{\partial}{\partial t}\log\left(\mathrm{Var}[f(X_t+t)]\right)\right],$$
where λ X ( t ) = f ( t ) F ¯ ( t ) is the hazard rate function. Solving the above differential equation leads to
$$VJ(X_t) = \exp\left\{\int\left[2\lambda_X(t) + \frac{\partial}{\partial t}\log\left(\mathrm{Var}[f(X_t+t)]\right)\right]dt + C\right\},$$
where C is a constant.

2.2. Reliability Theory

Hereafter, we consider two non-negative random variables $X$ and $X_\theta$ with cdfs $F(x)$ and $F_{X_\theta}(x)$, respectively. These variables satisfy the proportional hazard rate model (PHRM) with proportionality constant $\theta > 0$ if
$$\bar{F}_{X_\theta}(x) = [\bar{F}(x)]^\theta, \quad x > 0.$$
For details on the PHRM and some properties of such a model associated with aging notions, see Gupta and Gupta [17].
Proposition 6.
Let X be a non-negative absolutely continuous random variable with cdf F and pdf f. Then,
$$VJ(X_\theta) = \frac{\theta^3}{4} E\left[f^2\left(F^{-1}(1-U)\right) U^{3(\theta-1)}\right] - \frac{\theta^4}{4} E^2\left[f\left(F^{-1}(1-U)\right) U^{2(\theta-1)}\right], \qquad (4)$$
where $U \sim U(0,1)$.
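Proposition 6 lends itself to Monte Carlo evaluation. In the sketch below (ours), the baseline is taken to be $\mathrm{Exp}(1)$, for which $F^{-1}(1-u) = -\log u$ and hence $f(F^{-1}(1-u)) = u$; then $X_\theta \sim \mathrm{Exp}(\theta)$ and $VJ(X_\theta) = \theta^2/48$ exactly:

import numpy as np

rng = np.random.default_rng(0)
theta = 2.5
u = rng.uniform(size=10 ** 6)
term1 = theta ** 3 / 4 * np.mean(u ** 2 * u ** (3 * (theta - 1)))   # f^2(F^{-1}(1-U)) = U^2
term2 = theta ** 4 / 4 * np.mean(u * u ** (2 * (theta - 1))) ** 2   # f(F^{-1}(1-U)) = U
print(term1 - term2, theta ** 2 / 48)    # Monte Carlo vs exact (~0.1302)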
In reliability theory, $(n-k+1)$-out-of-$n$ systems, $k \in \{1, \ldots, n\}$, are important types of structures. If $X_1, X_2, \ldots, X_n$ denote the independent lifetimes of the components of an $(n-k+1)$-out-of-$n$ system, then the lifetime of the system is equal to the order statistic $X_{k:n}$. Hence, in the following proposition, we obtain an analytical expression for $VJ(X_{k:n})$.
Proposition 7.
Let X 1 , X 2 , , X n be a random sample from an absolutely continuous cdf F ( x ) , then
$$VJ(X_{k:n}) = \frac{B(3k-2,\, 3(n-k)+1)}{4B^3(k,\, n-k+1)}\, E\left[f^2\left(F^{-1}(U_1)\right)\right] - \frac{B^2(2k-1,\, 2(n-k)+1)}{4B^4(k,\, n-k+1)}\, E^2\left[f\left(F^{-1}(U_2)\right)\right],$$
where $U_1 \sim \mathrm{Beta}(3k-2,\, 3(n-k)+1)$ and $U_2 \sim \mathrm{Beta}(2k-1,\, 2(n-k)+1)$.
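As an illustration (our sketch, assuming SciPy), the following evaluates Proposition 7 through the two Beta expectations for an exponential sample and compares the result with the closed form of Example 3(ii) below:

from scipy.special import beta as B
from scipy.stats import beta as beta_rv

def vj_order_stat_exp(theta, k, n):
    U1 = beta_rv(3 * k - 2, 3 * (n - k) + 1)
    U2 = beta_rv(2 * k - 1, 2 * (n - k) + 1)
    # For Exp(theta): f(F^{-1}(u)) = theta * (1 - u)
    t1 = B(3 * k - 2, 3 * (n - k) + 1) / (4 * B(k, n - k + 1) ** 3) \
         * theta ** 2 * U1.expect(lambda u: (1 - u) ** 2)
    t2 = B(2 * k - 1, 2 * (n - k) + 1) ** 2 / (4 * B(k, n - k + 1) ** 4) \
         * (theta * U2.expect(lambda u: 1 - u)) ** 2
    return t1 - t2

theta, k, n = 1.5, 3, 10
closed = theta ** 2 * (B(3 * k - 2, 3 * (n - k) + 3) / (4 * B(k, n - k + 1) ** 3)
                       - B(2 * k - 1, 2 * (n - k) + 2) ** 2 / (4 * B(k, n - k + 1) ** 4))
print(vj_order_stat_exp(theta, k, n), closed)   # the two values should agree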
Remark 4.
Let $X_{1:n} = \min\{X_1, X_2, \ldots, X_n\}$ and $X_{n:n} = \max\{X_1, X_2, \ldots, X_n\}$ denote the lifetimes of the series and parallel systems, respectively. Then,
(i) 
$VJ(X_{1:n}) = \frac{n^3}{4} E\left[f^2(F^{-1}(1-U))\, U^{3(n-1)}\right] - \frac{n^4}{4} E^2\left[f(F^{-1}(1-U))\, U^{2(n-1)}\right]$;
(ii) 
$VJ(X_{n:n}) = \frac{n^3}{4} E\left[f^2(F^{-1}(U))\, U^{3(n-1)}\right] - \frac{n^4}{4} E^2\left[f(F^{-1}(U))\, U^{2(n-1)}\right]$;
we note that (i) coincides with (4), since the series system is a particular case of PHRM with the choice of parameter θ = n .
Proposition 8.
Let X 1 , X 2 , , X n be a random sample from continuous symmetric distribution F ( x ) , then
$$VJ(X_{k:n}) = VJ(X_{n-k+1:n}).$$
In the following, a few examples are given to illustrate the varextropy for order statistics X k : n from some distributions.
Example 3.
(i) 
If X is uniformly distributed in [ a , b ] , then
$$VJ(X_{k:n}) = \left(\frac{1}{b-a}\right)^2\left[\frac{B(3k-2,\, 3(n-k)+1)}{4B^3(k,\, n-k+1)} - \frac{B^2(2k-1,\, 2(n-k)+1)}{4B^4(k,\, n-k+1)}\right];$$
(ii) 
If X has exponential distribution with parameter θ , then
$$VJ(X_{k:n}) = \theta^2\left[\frac{B(3k-2,\, 3(n-k)+3)}{4B^3(k,\, n-k+1)} - \frac{B^2(2k-1,\, 2(n-k)+2)}{4B^4(k,\, n-k+1)}\right];$$
(iii) 
If X has Pareto distribution with shape and scale parameters λ and β respectively, then
$$VJ(X_{k:n}) = \frac{\lambda^2}{\beta^2}\left[\frac{B\left(3k-2,\, 3(n-k)+\frac{2}{\lambda}+3\right)}{4B^3(k,\, n-k+1)} - \frac{B^2\left(2k-1,\, 2(n-k)+\frac{1}{\lambda}+2\right)}{4B^4(k,\, n-k+1)}\right].$$
Table 1 gives the numerical values of $VJ(X_{k:n})$ with $n = 10$ for the standard uniform distribution. It can be observed that $VJ(X_{k:n}) = VJ(X_{n-k+1:n})$, as stated in Proposition 8, and that $VJ(X_{k:n})$ is increasing in $k$ for $k \geq \frac{n+1}{2}$ ($\frac{n}{2}+1$) when $n$ is odd (even), and decreasing in $k$ for $k \leq \frac{n+1}{2}$ ($\frac{n}{2}$). Therefore, the median of the order statistics has minimum varextropy. It should be noted that, when $n$ is even, both of the two middle order statistics are medians.
In reliability tests, some products may fail under stress. In such experiments, to identify the precise failure point, measurements may be made sequentially, and only values larger (or smaller) than all previous ones are recorded. Let $X_1, X_2, \ldots$ be a sequence of iid random variables with cdf $F(x)$ and pdf $f(x)$. An observation $X_j$ is called an upper (lower) record value if $X_j > (<)\, X_i$ for all $i < j$. In the following, we obtain the varextropy measure for upper record values.
Proposition 9.
Let $X_{U_n}$ be the $n$-th upper record value with pdf $f_n(x) = \frac{1}{(n-1)!}\left[-\log(1-F(x))\right]^{n-1} f(x)$; then
$$VJ(X_{U_n}) = \frac{1}{4}\mathrm{Var}[f_n(X_{U_n})] = \frac{(3n-3)!}{4[(n-1)!]^3}\, E\left[f^2\left(F^{-1}(1-e^{-V})\right)\right] - \frac{[(2n-2)!]^2}{4[(n-1)!]^4}\, E^2\left[f\left(F^{-1}(1-e^{-W})\right)\right], \quad n > 1,$$
where $V \sim \mathrm{Gamma}(3n-2, 1)$ and $W \sim \mathrm{Gamma}(2n-1, 1)$.
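For an $\mathrm{Exp}(\lambda)$ baseline one has $F^{-1}(1-e^{-v}) = v/\lambda$, so $f(F^{-1}(1-e^{-V})) = \lambda e^{-V}$ and the expectations in Proposition 9 reduce to Laplace transforms of Gamma variables. A sketch (ours, assuming SciPy; the value $\lambda = 2$ is an arbitrary choice):

import numpy as np
from math import factorial
from scipy.stats import gamma

def vj_record_exp(lam, n):
    V = gamma(3 * n - 2)   # Gamma(3n - 2, 1)
    W = gamma(2 * n - 1)   # Gamma(2n - 1, 1)
    t1 = factorial(3 * n - 3) / (4 * factorial(n - 1) ** 3) \
         * lam ** 2 * V.expect(lambda v: np.exp(-2 * v))
    t2 = factorial(2 * n - 2) ** 2 / (4 * factorial(n - 1) ** 4) \
         * (lam * W.expect(lambda w: np.exp(-w))) ** 2
    return t1 - t2

print(vj_record_exp(2.0, 2))   # varextropy of the second upper record (~0.0116)
print(vj_record_exp(2.0, 3))   # varextropy of the third upper record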

2.3. The Discrete Case

In analogy with (2), the varextropy of a discrete random variable $X$ taking values in the set $\{x_i,\ i \in I\}$ is expressed as
$$VJ(X) = \frac{1}{4}\left[\sum_{i \in I} P^3(X = x_i) - \left(\sum_{i \in I} P^2(X = x_i)\right)^2\right].$$
Example 4.
Let $Y$ be a Bernoulli random variable having distribution $P(Y = 0) = 1-\theta$, $P(Y = 1) = \theta$, with $0 \leq \theta \leq 1$; then the varextropy is given by
$$VJ(Y) = 0.25\left[(1-\theta)^3 + \theta^3 - \left((1-\theta)^4 + \theta^4 + 2\theta^2(1-\theta)^2\right)\right].$$
Figure 1 shows the values of $VJ(Y)$ as $\theta$ varies in $[0,1]$; it can be seen that $0 \leq VJ(Y) \leq 0.0156$. Note that for $\theta = \theta^* = 0.337009$, $H(Y) = J(Y) = 0.639032$ and $VJ(Y) = 0.0059$.
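A short sketch (ours) reproducing the computation behind Figure 1:

import numpy as np

theta = np.linspace(0.0, 1.0, 1001)
p3 = (1 - theta) ** 3 + theta ** 3
p2 = (1 - theta) ** 2 + theta ** 2
vj = 0.25 * (p3 - p2 ** 2)
print(vj.max())   # ~0.0156, the maximum over theta in [0, 1]
th = 0.337009     # theta* from the text
print(0.25 * (((1 - th) ** 3 + th ** 3) - ((1 - th) ** 2 + th ** 2) ** 2))  # ~0.0059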
Example 5.
Let X be a discrete random variable such that, for a fixed h > 0 ,
$$P(X = -h) = p, \quad P(X = 0) = 1 - p - q, \quad P(X = h) = q,$$
with $0 \leq q \leq 1 - p \leq 1$. We have
$$VJ(X) = 0.25\left[p^3 + (1-p-q)^3 + q^3 - \left(p^2 + (1-p-q)^2 + q^2\right)^2\right].$$
Now, by using the function fzero of MATLAB, it is found that if $p = q = 0.1508$, then $J(X) = 0.639032$ and $VJ(X) = 0.0158$. Hence, with this choice of parameters, the considered random variable has the same extropy as the one considered in Example 4 with $\theta = \theta^*$, but the varextropy of $X$ is larger. This implies that the coding procedure is more reliable for sequences generated by the random variable $Y$ considered in Example 4.
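The same search can be replicated with SciPy's brentq in place of MATLAB's fzero (our sketch; the bracketing interval is our choice around the reported root, and the discrete extropy used is $J(X) = -\sum_i (1-p_i)\log(1-p_i)$, as in Lad et al. [8]):

import numpy as np
from scipy.optimize import brentq

def J(probs):
    probs = np.asarray(probs, dtype=float)
    return -np.sum((1 - probs) * np.log(1 - probs))

target = 0.639032                          # J(Y) = H(Y) from Example 4
g = lambda p: J([p, 1 - 2 * p, p]) - target
p = brentq(g, 0.05, 0.2)                   # bracket chosen around the reported root
probs = np.array([p, 1 - 2 * p, p])
vj = 0.25 * (np.sum(probs ** 3) - np.sum(probs ** 2) ** 2)
print(p, vj)                               # ~0.1508 and ~0.0158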

3. General Results on Conditional Varextropy

Henceforward, we investigate some results on the conditional $VJ$ of a random phenomenon. Assume that $X$ is a random variable defined on the probability space $(\Omega, \mathcal{F}, P)$ such that $E|X| < +\infty$. The conditional variance of $X$ given a sub-$\sigma$-field $\mathcal{G}$, with $\mathcal{G} \subseteq \mathcal{F}$, is denoted by $\mathrm{Var}(X|\mathcal{G})$. Here, the definition of the conditional $VJ$ of $X$ is given and some of its properties are discussed.
Definition 3.
Let $X$ be a non-negative random variable with pdf $f$ such that $E(f^2(X)) < +\infty$. Then, for a given $\sigma$-field $\mathcal{F}$, the conditional $VJ$ is defined as follows:
$$VJ(X|\mathcal{F}) = \frac{1}{4}\mathrm{Var}[f(X)|\mathcal{F}] = \frac{1}{4}E\left[\left(f(X) - E(f(X)|\mathcal{F})\right)^2 \,\middle|\, \mathcal{F}\right].$$
In the following proposition, the varextropy version of the law of total variance is given.
Proposition 10.
Suppose that X is a random variable with pdf f, then
$$VJ(X) = \frac{1}{4}\mathrm{Var}[f(X)] = \frac{1}{4}E\left[\mathrm{Var}[f(X)|\mathcal{F}]\right] + \frac{1}{4}\mathrm{Var}\left[E[f(X)|\mathcal{F}]\right] = E(VJ(X|\mathcal{F})) + \frac{1}{4}\mathrm{Var}\left[E[f(X)|\mathcal{F}]\right].$$
It is clear that $VJ(X) \geq E(VJ(X|\mathcal{F}))$.
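Proposition 10 can be illustrated numerically. The sketch below (ours, assuming SciPy) conditions $X \sim \mathrm{Exp}(1)$ on the $\sigma$-field generated by the event $\{X \leq c\}$ and recovers the unconditional value $VJ(X) = 1/48$:

import numpy as np
from scipy import integrate

f = lambda x: np.exp(-x)        # pdf of Exp(1)
c = 1.0
Fc = 1 - np.exp(-c)             # P(X <= c)

def cond_moments(a, b, prob):
    # E[f(X) | a < X < b] and E[f(X)^2 | a < X < b]
    m1 = integrate.quad(lambda x: f(x) ** 2, a, b)[0] / prob
    m2 = integrate.quad(lambda x: f(x) ** 3, a, b)[0] / prob
    return m1, m2

m1L, m2L = cond_moments(0, c, Fc)
m1R, m2R = cond_moments(c, np.inf, 1 - Fc)
e_cond_vj = 0.25 * (Fc * (m2L - m1L ** 2) + (1 - Fc) * (m2R - m1R ** 2))
mean_f = Fc * m1L + (1 - Fc) * m1R
var_mean = Fc * m1L ** 2 + (1 - Fc) * m1R ** 2 - mean_f ** 2
print(e_cond_vj + 0.25 * var_mean, 1 / 48)   # both ~0.0208333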
Lemma 1.
Let $X$ be a random variable with cdf $F$ and support $(0, +\infty)$. If $\mathcal{F} = \{\emptyset, \Omega\}$, then $VJ(X|\mathcal{F}) = VJ(X)$.
Proposition 11.
Let $E(f^2(X)) < +\infty$. Then, for $\sigma$-fields $\mathcal{G} \subseteq \mathcal{F}$,
$$E(VJ(X|\mathcal{G})) \geq E(VJ(X|\mathcal{F})). \qquad (5)$$
Theorem 2.
Let $E(f^2(X)) < +\infty$ and let $\mathcal{F}$ be a $\sigma$-field. Then $E(VJ(X|\mathcal{F})) = 0$ if, and only if, $f(x) = c$, where $c$ is a non-negative constant, and $X$ is $\mathcal{F}$-measurable.
Proof. 
If we assume that E ( V J ( X | F ) ) = 0 , then V J ( X | F ) = 0 . Recalling the definition of V J ( X | F ) , then V a r [ f ( X ) | F ] = 0 . So, f is a constant function and X is F -measurable.
Let us suppose that f ( x ) = c , where c > 0 is a constant, and X is F -measurable. Again, by using Definition 3, we have V a r [ f ( X ) | F ] = 0 , so the result follows.  □
From the Markov property of the lifetime random variables X , Y and Z, we have the following lemma.
Lemma 2.
If $X \rightarrow Y \rightarrow Z$ is a Markov chain, then
(i) 
V J ( Z | Y , X ) = V J ( Z | Y ) .
(ii) 
$E[VJ(Z|Y)] \leq E[VJ(Z|X)]$.
Proof. 
(i)
By using the Markov property and definition of V J ( Z | Y , X ) , the result follows.
(ii)
Let G = σ ( X ) and F = σ ( X , Y ) , then from (5), we have
$$E[VJ(Z|X)] \geq E\big(E[VJ(Z|X,Y)\,|\,X]\big) = E[VJ(Z|X,Y)] = E[VJ(Z|Y)],$$
and the result follows.
Remark 5.
Let $(X,Y)$ denote the lifetimes of two components of a system with joint density function $f(x,y)$. Another measure of correlation is the maximal correlation ribbon (MC ribbon) defined in Beigi and Gohari [18]. The MC ribbon is equal to the set of $(\lambda_1, \lambda_2) \in [0,1]^2$ such that
$$VJ(f) \geq \lambda_1\, VJ(E[f|X]) + \lambda_2\, VJ(E[f|Y]).$$

4. Stochastic Comparisons

Before proceeding to give the results of this section, we need the following definitions on stochastic orderings between random variables. For more details on these concepts, one can see Shaked and Shanthikumar [19].
Definition 4.
Suppose that $X$ and $Y$ are two random variables with density functions $f$ and $g$ and survival functions $\bar{F}(x) = 1 - F(x)$ and $\bar{G}(x) = 1 - G(x)$, respectively. Then,
1. $X$ is smaller than $Y$ in the stochastic ordering, denoted by $X \leq_{st} Y$, if $\bar{F}(t) \leq \bar{G}(t)$ for all $t$;
2. $X$ is smaller than $Y$ in the likelihood ratio ordering, denoted by $X \leq_{lr} Y$, if $g(t)/f(t)$ is increasing in $t > 0$;
3. $X$ is smaller than $Y$ in the hazard rate order, denoted by $X \leq_{hr} Y$, if $\lambda_X(x) \geq \lambda_Y(x)$ for all $x$;
4. $X$ is smaller than $Y$ in the dispersive order, denoted by $X \leq_{disp} Y$, if $f(F^{-1}(u)) \geq g(G^{-1}(u))$ for all $u \in (0,1)$, where $F^{-1}$ and $G^{-1}$ are the right-continuous inverses of $F$ and $G$, respectively;
5. $X$ is said to have decreasing failure rate (DFR) if $\lambda_X(x)$ is decreasing in $x$;
6. $X$ is smaller than $Y$ in the convex transform order, denoted by $X \leq_{c} Y$, if $G^{-1}(F(x))$ is a convex function on the support of $X$;
7. $X$ is smaller than $Y$ in the star order, denoted by $X \leq_{*} Y$, if $G^{-1}(F(x))/x$ is increasing in $x \geq 0$;
8. $X$ is smaller than $Y$ in the superadditive order, denoted by $X \leq_{su} Y$, if $G^{-1}(F(t+u)) \geq G^{-1}(F(t)) + G^{-1}(F(u))$ for $t \geq 0$, $u \geq 0$.
In the following, we introduce a new stochastic order based on the varextropy.
Definition 5.
The random variable $X$ is said to be smaller than $Y$ in the varextropy order, denoted by $X \leq_{VJ} Y$, if $VJ(X) \leq VJ(Y)$.
In the following example, we get some comparisons about the varextropy order based on the results given in Example 1.
Example 6.
(i) 
If $X \sim \mathrm{Laplace}(0,1)$ and $Y \sim \mathrm{Exp}(1)$, then we have $X \leq_{VJ} Y$, since $VJ(X) = \frac{1}{192}$ and $VJ(Y) = \frac{1}{48}$;
(ii) 
If $X \sim \mathrm{Weibull}(1,2)$ and $Y \sim \mathrm{Weibull}(1,1)$, then we have $X \leq_{VJ} Y$, since $VJ(X) = 0.0129$ and $VJ(Y) \approx 0.02$;
(iii) 
If $X \sim \mathrm{Exp}(\lambda_1)$ and $Y \sim \mathrm{Exp}(\lambda_2)$ with $\lambda_1 \leq \lambda_2$, then $X \leq_{VJ} Y$;
(iv) 
If $X \sim N(\mu_1, \sigma_1^2)$ and $Y \sim N(\mu_2, \sigma_2^2)$ with $\sigma_2 \leq \sigma_1$, then $X \leq_{VJ} Y$.
Remark 6.
For random variables with densities symmetric about zero, $X \leq_{VJ} Y \Leftrightarrow |X| \leq_{VJ} |Y|$, where the equivalence follows from Remark 1.
Since the varextropy is defined as the variance of the pdf multiplied by a constant, the uniform distribution is the only one for which the varextropy vanishes. Then, the following result is obtained.
Proposition 12.
If $X \sim U(a,b)$, then $X \leq_{VJ} Y$ for any continuous random variable $Y$.
Proposition 13.
Let $X_{k:n}$ be the $k$th order statistic of the standard uniform distribution. Then,
(i) $X_{k:n} \leq_{VJ} X_{1:n}$ and $X_{k:n} \leq_{VJ} X_{n:n}$ for all $1 \leq k \leq n$;
(ii) when $n$ is even, we have $X_{\frac{n}{2}:n} \leq_{VJ} X_{k:n}$ for all $1 \leq k \leq n$;
(iii) when $n$ is odd, we have $X_{\frac{n+1}{2}:n} \leq_{VJ} X_{k:n}$ for all $1 \leq k \leq n$.
Remark 7
(Chernoff [20]). Let X be a random variable with standard normal distribution. If h is absolutely continuous and h ( X ) has finite variance, then
$$\mathrm{Var}(h(X)) \leq E\left[(h'(X))^2\right].$$
From Remark 7, the following result is obtained.
Corollary 1.
If $X$ is a standard normal random variable and $h(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}$, then
$$VJ(X) \leq \frac{1}{8\pi\sqrt{27}}.$$
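A quick numerical confirmation (our sketch): the exact varextropy of the standard normal from Example 1(vii) indeed lies below this bound.

import numpy as np

exact = (2 - np.sqrt(3)) / (16 * np.sqrt(3) * np.pi)   # Example 1(vii), sigma = 1
bound = 1 / (8 * np.pi * np.sqrt(27))                  # Corollary 1
print(exact, bound, exact <= bound)                    # ~0.00308 <= ~0.00766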
In the following, a lower bound for V J ( X ) based on Chebyshev inequality is given.
Corollary 2.
Let X be a random variable with pdf f ( x ) , then
$$VJ(X) \geq P\left(|f(X) + 2J(X)| \geq 2\right).$$
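The bound is non-trivial whenever the density can exceed 2. For instance (our sketch), for $X \sim \mathrm{Exp}(8)$ one has $f(X) = 8U$ with $U \sim U(0,1)$ and $2J(X) = -4$, so the right-hand side equals $P(|8U - 4| \geq 2) = 1/2$, while $VJ(X) = 64/48$:

import numpy as np

rng = np.random.default_rng(1)
u = rng.uniform(size=10 ** 6)                     # f(X) = 8 exp(-8X) has the law of 8U
print((np.abs(8 * u - 4) >= 2).mean(), 64 / 48)   # ~0.5 <= ~1.333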
Finally, the following results from Shaked and Shanthikumar [19] are provided.
Proposition 14.
If $X$ and $Y$ are two random variables such that $X \leq_{disp} Y$, then $X \geq_{VJ} Y$.
Example 7.
Let $F_X(t) = 1 - \exp(-2t)$, $t > 0$, and $G_Y(t) = 1 - \exp(-t)$, $t > 0$. Then $X \leq_{disp} Y$, which implies $X \geq_{VJ} Y$.
Corollary 3.
If $X \leq_{hr} Y$, and $X$ or $Y$ is DFR, then $X \geq_{VJ} Y$.
Proof. 
If $X \leq_{hr} Y$, and $X$ or $Y$ is DFR, then $X \leq_{disp} Y$, due to Bagai and Kochar [21]. Thus, from Proposition 14, the result follows.  □
Corollary 4.
If $X \leq_{su} Y$ ($X \leq_{*} Y$ or $X \leq_{c} Y$) and $f(0) \geq g(0) > 0$, then $X \geq_{VJ} Y$.
Proof. 
If $X \leq_{su} Y$ ($X \leq_{*} Y$ or $X \leq_{c} Y$) and $f(0) \geq g(0) > 0$, then $X \leq_{disp} Y$, due to Ahmed et al. [22]. So, from Proposition 14, the result follows.  □
Corollary 5.
Suppose that $X_{k:n}$ and $Y_{k:n}$ are the $k$th order statistics of two continuous random variables $X$ and $Y$, respectively. If $X \leq_{disp} Y$, then $X_{k:n} \geq_{VJ} Y_{k:n}$.
Proof. 
The proof follows by Theorem 3.B.26 of [19].  □
Corollary 6.
If $X \leq_{st} Y$ ($X \leq_{lr} Y$), then $X \geq_{VJ} Y$.
Corollary 7.
Let $X$ be a non-negative random variable having a DFR distribution. If $X_{k:n} \leq_{lr} X$, then $X_{k:n} \geq_{VJ} X$.
Corollary 8.
Let $X$ be a non-negative random variable having a DFR distribution. Then, $X_{1:n} \geq_{VJ} X$ and $X_{n:n} \leq_{VJ} X$.

5. Conclusions

In this paper, some properties of $VJ$ were obtained. This measure can be applied for measuring the information volatility contained in the associated residual and past lifetimes. Some of its properties based on order statistics, record values and proportional hazard rate models were considered. Moreover, using a Taylor series, an approximate formula for $VJ(X)$ was proposed. Finally, the conditional $VJ$ of a random phenomenon was discussed and a new stochastic order, named varextropy ordering, was introduced. With a view to future work, we list some properties and advantages of varextropy and its extensions, to highlight the rationality and value of the new measure.
(1) $VJ$ and $VH$ of a uniform random variable are both equal to zero; see Example 1.
(2) The newly introduced varextropy measure is more flexible than the varentropy, in the sense that the latter is free of the model parameters in some cases; see Example 1.
(3) In the case of the normal distribution, the varextropy depends only on the scale parameter $\sigma^2$; see Example 1.
(4) For distributions symmetric about the mean, $VJ$ is unchanged under reflection; see Proposition 3.
(5) $VJ$ of the half-normal distribution can easily be obtained via $VJ$ of the normal distribution; see Remark 1.
(6) $VJ$ can be approximated using a Taylor series; for further details, see Theorem 1.
(7) $VJ$ is invariant under translations; for further details, see Proposition 4.
(8) The residual $VJ$ of an exponential distribution does not depend on the age of the system; a more specific explanation can be found in Example 2.
(9) $VJ$ under the proportional hazard rate model can be obtained from the properties of the baseline model; see Proposition 6.
(10) For symmetric distributions, $VJ$ of the $k$-th order statistic is equal to $VJ$ of the $(n-k+1)$-th order statistic from a sample of size $n$; for further details, see Proposition 8.
(11) The median of the order statistics has minimum $VJ$; a more specific explanation can be found in Section 2.2.
(12) $VJ$ of a random variable $X$ is greater than or equal to the expected value of the conditional $VJ$ of $X$; see Proposition 10.
(13) If $X \rightarrow Y \rightarrow Z$ is a Markov chain, then $VJ(Z|Y,X) = VJ(Z|Y)$; for further details, see Lemma 2.
(14) For the one-parameter exponential distribution, the distribution increases in the varextropy order as the parameter increases; see Example 6.
(15) For the normal distribution, the distribution decreases in the varextropy order as the scale parameter increases, independently of the location parameter; see Example 6.
(16) $X$ is smaller than $Y$ in the varextropy order if, and only if, $|X|$ is smaller than $|Y|$ in the varextropy order; see Remark 6.
(17) In the varextropy order, every continuous random variable is larger than the uniform distribution; for further details, see Proposition 12.

Author Contributions

All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

F.B. is partially supported by the GNAMPA research group of INdAM (Istituto Nazionale di Alta Matematica) and MIUR-PRIN 2017, Project “Stochastic Models for Complex Systems” (No. 2017 JFFHSH).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
cdf: cumulative distribution function
pdf: probability density function
VJ: varextropy

References

1. Clausius, R.; Hirst, T.A. The Mechanical Theory of Heat: With Its Application to the Steam Engine and to the Physical Properties of Bodies; J. Van Voorst: London, UK, 1867.
2. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423.
3. Song, K.S. Rényi information, log likelihood and an intrinsic distribution measure. J. Statist. Plann. Inference 2001, 93, 51–69.
4. Liu, J. Information Theoretic Content and Probability. Ph.D. Thesis, University of Florida, Gainesville, FL, USA, 2007.
5. Fradelizi, M.; Madiman, M.; Wang, L. Optimal concentration of information content for log-concave densities. In High Dimensional Probability VII; Progress in Probability; Houdré, C., Mason, D., Reynaud-Bouret, P., Rosiński, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2016.
6. Di Crescenzo, A.; Paolillo, L. Analysis and applications of the residual varentropy of random lifetimes. Probab. Eng. Inf. Sci. 2020.
7. Maadani, S.; Mohtashami Borzadaran, G.R.; Rezaei Roknabadi, A.H. Varentropy of order statistics and some stochastic comparisons. Commun. Stat. Theory Methods 2020, 1–16.
8. Lad, F.; Sanfilippo, G.; Agrò, G. Extropy: Complementary dual of entropy. Statist. Sci. 2015, 30, 40–58.
9. Qiu, G. The extropy of order statistics and record values. Stat. Probabil. Lett. 2017, 120, 52–60.
10. Qiu, G.; Jia, K. The residual extropy of order statistics. Stat. Probabil. Lett. 2018, 133, 15–22.
11. Qiu, G.; Jia, K. Extropy estimators with applications in testing uniformity. J. Nonparametr. Stat. 2018, 30, 182–196.
12. Gilchrist, W. Statistical Modelling with Quantile Functions; Chapman & Hall: New York, NY, USA, 2000.
13. Beigi, S.; Gohari, A. Φ-entropic measures of correlation. IEEE Trans. Inf. Theory 2018, 64, 2193–2211.
14. Balakrishnan, N.; Buono, F.; Longobardi, M. On weighted extropies. Commun. Stat. Theory Methods 2020.
15. Krishnan, A.S.; Sunoj, S.M.; Nair, N.U. Some reliability properties of extropy for residual and past lifetime random variables. J. Korean Stat. Soc. 2020.
16. Kamari, O.; Buono, F. On extropy of past lifetime distribution. Ric. Mat. 2020.
17. Gupta, R.C.; Gupta, R.D. Proportional reversed hazard rate model and its applications. J. Statist. Plan. Inference 2007, 137, 3525–3536.
18. Beigi, S.; Gohari, A. Monotone measures for non-local correlations. IEEE Trans. Inf. Theory 2015, 61, 5185–5208.
19. Shaked, M.; Shanthikumar, J.G. Stochastic Orders; Springer: New York, NY, USA, 2007.
20. Chernoff, H. A note on an inequality involving the normal distribution. Ann. Probab. 1981, 9, 533–535.
21. Bagai, I.; Kochar, S.C. On tail-ordering and comparison of failure rates. Commun. Stat. Theory Methods 1986, 15, 1377–1388.
22. Ahmed, A.N.; Alzaid, A.; Bartoszewicz, J.; Kochar, S.C. Dispersive and superadditive ordering. Adv. Appl. Probab. 1986, 18, 1019–1022.
Figure 1. The values of $VJ(Y)$ for the Bernoulli distribution.
Table 1. The values of $VJ(X_{k:n})$ for the standard uniform distribution ($n = 10$).

k              1         2         3         4         5
VJ(X_{k:n})    8.859319  2.225035  1.407279  1.129121  1.027418

k              6         7         8         9         10
VJ(X_{k:n})    1.027418  1.129121  1.407279  2.225035  8.859319
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
