Computers and Structures 142 (2014) 54–63
An efficient method for the estimation of structural reliability intervals
with random sets, dependence modeling and uncertain inputs
Diego A. Alvarez*, Jorge E. Hurtado
Universidad Nacional de Colombia, Apartado 127, Manizales, Colombia
Article info
Article history:
Received 12 November 2013
Accepted 9 July 2014
Keywords:
Reliability interval
Reliability bounds
Random sets
Probability boxes
Possibility distributions
Probability distributions
Abstract
A general method is proposed for estimating the bounds of the reliability of a system whose input variables are described by random sets (probability distributions, probability boxes, or possibility distributions), including the modeling of dependence among them. The method is based on an analytical property of the so-called design point vector; this property is exploited by constructing a nonlinear projection of Monte Carlo samples of the input variables onto a two-dimensional diagram, from which the analyst can easily extract the samples relevant for computing both the lower and upper bounds of the failure probability using random set theory. The method, which is illustrated with some examples, represents a dramatic reduction in the number of focal element evaluations performed when applying the Monte Carlo method to random set theory.
© 2014 Elsevier Ltd. All rights reserved.
1. Introduction
Uncertainty analysis in engineering should ideally be part of routine design, because the variables and supposedly constant parameters are either random or known only with imprecision. In some cases the uncertainty can be very large, as with natural actions provoking disasters or modeling errors leading to technological catastrophes. In estimating the risk of a given engineering problem, use is traditionally made of cumulative distribution functions (CDFs) defining the input variables; then, by means of analytic or synthetic methods (i.e. Monte Carlo), the probability of not exceeding undesirable thresholds is computed [1,2].
One of the main problems in applying the probabilistic
approach is that the CDFs of the input variables are usually known
with imprecision. This is normally due to the lack of sufficient data
for fitting the model to each input random variable. For this reason,
the parameters of the input distributions are commonly known only up to confidence intervals, and even these intervals are not wholly certain. This hinders the application of the probability-based approach in actual design practice [3]. Even if the information is abundant, there remains the problem of the high sensitivity of the usually small probabilities of failure to the parameters of the distribution functions [4–6]. This sensitivity is due to the fact
* Corresponding author. Tel.: +57 68879300; fax: +57 68879334.
E-mail addresses: daalvarez@unal.edu.co (D.A. Alvarez), jehurtadog@unal.edu.co (J.E. Hurtado).
http://dx.doi.org/10.1016/j.compstruc.2014.07.006
0045-7949/© 2014 Elsevier Ltd. All rights reserved.
that the estimation of a probability density function from empirical data is an ill-posed problem [7,8]. This means that small changes in the empirical sample affect the parameters defining the model being fitted, with serious consequences in the tails, which are precisely the most important zones of the distribution functions for probabilistic reliability methods [9–11].
These and other considerations have fostered research on alternative methods for incorporating uncertainty into structural analysis, such as fuzzy sets and related theories [12–16], anti-optimization or convex-set modeling [5,10,17], interval analysis [10,18–27], random sets [28–30], ellipsoid modeling [31,32] and worst-case scenarios [33]. Comparisons have also been made between probabilistic and alternative methods [34–36], and their combination has been explored [37–40].
Taking into account that the first- and second-order reliability methods (FORM and SORM) can be very inaccurate in many cases (e.g. [2,41–46]), the focus of the present paper is the determination of reliability intervals under uncertain input variables by means of Monte Carlo simulation. In this regard, attention is called to [23], where an interval finite-element approach for linear structural analysis and a Monte Carlo method for calculating intervals of the failure probability are proposed, and to [28,29], which developed an even more general method for computing the bounds of the probability of failure under the general framework of random set theory, comprising uncertainty modeled in the form of probability boxes, possibility distributions, CDFs, Dempster–Shafer structures or intervals; in addition, that method allows modeling dependence between the input variables.
The present paper is aimed at facilitating the Monte Carlo solution of the interval reliability computation, which is much more computationally demanding than the conventional computation of a single reliability value [23,28,29,47]. In particular, a method based on random set theory is proposed that allows selecting all relevant samples for a Monte Carlo estimation of the bounds of the failure probability from a large mass of input variable realizations generated from the uncertain distributions. Hence, the method avoids the large number of sample evaluations with null contribution to the failure probability estimation, which is the typical case when using plain Monte Carlo simulation.
The proposed approach is based on a property of FORM [48], namely that the design point vector points in a direction of steep evolution of the limit state function [49–51]. This property also holds for functions arising from the perturbation represented by the interval uncertainty in the distribution parameters. Therefore, in spite of FORM's inaccuracy in many reliability problems [2,41,44,45], its design point vector emerges as a powerful clustering device, because of the way the performance function evolves in this direction. This property is exploited by constructing a nonlinear transformation of the reliability problem from d dimensions to a bi-dimensional space of two independent variables whose marginal and joint density functions are explicitly derived. The main characteristic of this transformation is that it makes evident the organizing property mentioned above in a bi-dimensional representation of the entire set of random numbers, allowing the selection of the relevant samples for interval or single reliability computations almost blindly. The proposed approach is illustrated with detailed structural examples. The paper ends with some conclusions and suggestions for future work.
2. A brief introduction to random sets
Random set theory is a mathematical tool that can effectively unify a wide range of theories for coping with aleatory and epistemic uncertainty. It is an extension of probability theory to set-valued rather than point-valued maps. In the following paragraphs, a brief summary of the concepts of random set theory required in the ensuing discussion is presented.
2.1. Copulas
A copula is a $d$-dimensional CDF $C : [0,1]^d \to [0,1]$ such that each of its marginal CDFs is uniform on the interval $[0,1]$.

According to Sklar's theorem (see Refs. [52,53]), copulas are functions that relate a joint CDF to its marginals, carrying in this way the dependence information in the joint CDF. Sklar's theorem states that a multivariate CDF $F_{X_1,X_2,\ldots,X_d}(x_1,\ldots,x_d) = P[X_1 \le x_1, \ldots, X_d \le x_d]$ of a random vector $(X_1, X_2, \ldots, X_d)$ with marginals $F_{X_i}(x_i) = P[X_i \le x_i]$ can be written as $F_{X_1,X_2,\ldots,X_d}(x_1,\ldots,x_d) = C(F_{X_1}(x_1), \ldots, F_{X_d}(x_d))$, where $C$ is a copula. The copula $C$ contains all information on the dependence structure between the components of $(X_1, X_2, \ldots, X_d)$, whereas the marginal cumulative distribution functions $F_{X_i}$ contain all information on the marginal distributions.

In the following, we will denote by $\mu_C$ the Lebesgue–Stieltjes measure corresponding to the copula $C$ (see [54] for details).
The reader is referred to [55] for the standard introduction to
copulas.
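For illustration, sampling dependent values with uniform marginals from a copula can be sketched in a few lines of Python. The bivariate Gaussian copula, the correlation value and the sample size below are assumptions made here for illustration only (Nelsen [55] covers many other families):

```python
import random
from statistics import NormalDist

def sample_gaussian_copula(n, rho, seed=0):
    """Draw n points from a bivariate Gaussian copula with correlation rho.

    Correlated standard normals are built with the Cholesky trick and then
    mapped through the standard normal CDF, so each marginal is uniform on
    (0, 1) while the dependence of the underlying normals is retained.
    """
    rng = random.Random(seed)
    phi = NormalDist()  # standard normal distribution
    samples = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rng.gauss(0.0, 1.0)
        u1 = z1
        u2 = rho * z1 + (1.0 - rho ** 2) ** 0.5 * z2
        samples.append((phi.cdf(u1), phi.cdf(u2)))
    return samples

alphas = sample_gaussian_copula(1000, rho=0.8)
# every coordinate lies in (0, 1); for rho = 0.8 the pairs cluster
# around the diagonal of the unit square
```

Any other copula sampler could replace this one; the rest of the machinery of Section 2 only requires points in $(0,1]^d$ distributed according to $C$.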
2.2. Definition of a random set
Let us consider a universal set $\mathcal{X} \neq \emptyset$ and its power set $\mathcal{P}(\mathcal{X})$, a probability space $(\Omega, \sigma_\Omega, P_\Omega)$ and a measurable space $(\mathcal{F}, \sigma_{\mathcal{F}})$, where $\mathcal{F} \subseteq \mathcal{P}(\mathcal{X})$. In the same spirit as the definition of a random variable, a random set (RS) $\Gamma$ is a $(\sigma_\Omega$–$\sigma_{\mathcal{F}})$-measurable mapping $\Gamma : \Omega \to \mathcal{F}$, $\alpha \mapsto \Gamma(\alpha)$. In other words, a random set is like a random variable whose realization is a set in $\mathcal{F}$, not a number; each of those sets $\gamma := \Gamma(\alpha) \in \mathcal{F}$ is called a focal element, while $\mathcal{F}$ is called a focal set.

Similarly to the definition of a random variable, the random set can be used to define a probability measure on $(\mathcal{F}, \sigma_{\mathcal{F}})$, given by $P_\Gamma := P_\Omega \circ \Gamma^{-1}$. In other words, an event $\mathcal{R} \in \sigma_{\mathcal{F}}$ has the probability

$$P_\Gamma(\mathcal{R}) = P_\Omega\{\alpha \in \Omega : \Gamma(\alpha) \in \mathcal{R}\}. \tag{1}$$

The random set $\Gamma$ will henceforth also be denoted $(\mathcal{F}, P_\Gamma)$.

Note that when every element of $\mathcal{F}$ is a singleton, $\Gamma$ becomes a random variable $X$, and the focal set $\mathcal{F}$ is said to be specific; in other words, if $\mathcal{F}$ is a specific set then $\Gamma(\alpha) = X(\alpha)$, and the probability of occurrence of the event $F$ is $P_X(F) := (P_\Omega \circ X^{-1})(F) = P_\Omega\{\alpha : X(\alpha) \in F\}$ for every $F \in \sigma_{\mathcal{X}}$. In the case of random sets, it is not possible to compute $P_X(F)$ exactly, but only its upper and lower probability bounds. [56] defined those upper and lower probabilities by

$$LP_{(\mathcal{F}, P_\Gamma)}(F) := P_\Omega\{\alpha : \Gamma(\alpha) \subseteq F,\ \Gamma(\alpha) \neq \emptyset\} = P_\Gamma\{\gamma : \gamma \subseteq F,\ \gamma \neq \emptyset\}, \tag{2a}$$

$$UP_{(\mathcal{F}, P_\Gamma)}(F) := P_\Omega\{\alpha : \Gamma(\alpha) \cap F \neq \emptyset\} = P_\Gamma\{\gamma : \gamma \cap F \neq \emptyset\}, \tag{2b}$$

where

$$LP_{(\mathcal{F}, P_\Gamma)}(F) \le P_X(F) \le UP_{(\mathcal{F}, P_\Gamma)}(F). \tag{3}$$

Note that equality in (3) holds when $\mathcal{F}$ is specific. The reader is referred to Refs. [57,58] for a complete survey on random sets.
2.3. Relationship between random sets and probability boxes, CDFs
and possibility distributions
The definition in Section 2.2 is very general; [28,59] showed that, making the particularizations $\Omega := (0,1]^d$, $\sigma_\Omega := (0,1]^d \cap \mathcal{B}^d$ and $P_\Gamma \sim \mu_C$ for some copula $C$ that contains the dependence information within the joint random set, and using intervals and $d$-dimensional boxes as elements of $\mathcal{F}$, it is enough to model possibility distributions, probability boxes, intervals, CDFs and Dempster–Shafer structures, or their joint combinations; these are some of the most popular engineering representations of uncertainty. Let us denote by $P_\Gamma \sim \mu_C$ the fact that $P_\Gamma$ is the probability measure generated by $P_\Omega$, which is defined by the Lebesgue–Stieltjes measure corresponding to the copula $C$, i.e. $\mu_C$; in other words, $P_\Gamma(\Gamma(G)) = \mu_C(G)$ for $G \in \sigma_\Omega$. Also, $\mathcal{B}$ will stand for the Borel $\sigma$-algebra on $\mathbb{R}$.

In the rest of this section, $(\Omega, \sigma_\Omega, P_\Omega)$ will stand for a probability space with $\Omega := (0,1]$ and $\sigma_\Omega := (0,1] \cap \mathcal{B} := \bigcup_{h \in \mathcal{B}}\{(0,1] \cap h\}$, and $P_\Omega$ will be the probability measure corresponding to the CDF of a random variable $\tilde{\alpha}$ uniformly distributed on $(0,1]$, i.e. $F_{\tilde{\alpha}}(\alpha) := P_\Omega[\tilde{\alpha} \le \alpha] = \alpha$ for $\alpha \in (0,1]$; that is, $P_\Omega$ is the Lebesgue measure on $(0,1]$.
Probability boxes, CDFs and possibility distributions can be
interpreted as random sets, as will be explained in the following:
2.3.1. Probability boxes
A probability box or p-box (see e.g. [60]) $\langle \underline{F}, \overline{F} \rangle$ is a set of CDFs $\{F : \underline{F}(x) \le F(x) \le \overline{F}(x),\ F \text{ is a CDF},\ x \in \mathbb{R}\}$ delimited by lower and upper CDF bounds $\underline{F}, \overline{F} : \mathbb{R} \to [0,1]$. It can be represented as the random set $\Gamma : \Omega \to \mathcal{F}$, $\alpha \mapsto \Gamma(\alpha)$ (i.e. $(\mathcal{F}, P_\Gamma)$) defined on $\mathbb{R}$, where $\mathcal{F}$ is the class of focal elements $\Gamma(\alpha) := \langle \underline{F}, \overline{F} \rangle^{(-1)}(\alpha) := [\overline{F}^{(-1)}(\alpha), \underline{F}^{(-1)}(\alpha)]$ for $\alpha \in \Omega$, with $\overline{F}^{(-1)}(\alpha)$ and $\underline{F}^{(-1)}(\alpha)$ denoting the quasi-inverses of $\overline{F}$ and $\underline{F}$ (the quasi-inverse of the CDF $F$ is defined by $F^{(-1)}(\alpha) := \inf\{x : F(x) \ge \alpha\}$), and $P_\Gamma$ is specified by (1). This is a good point to mention that [47] recently published a nice survey on how to generate probability boxes from scarce data.
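A focal element of a p-box is thus an interval obtained from the two quasi-inverses at the same level $\alpha$. A minimal Python sketch follows; the p-box (a normal CDF with an interval mean and fixed standard deviation) is an assumption chosen for illustration:

```python
import random
from statistics import NormalDist

# Hypothetical p-box: a normal CDF whose mean is known only to lie in
# [9, 11], with sigma fixed at 2. The upper bounding CDF corresponds to
# the smallest mean, and the lower bounding CDF to the largest mean.
F_upper = NormalDist(mu=9.0, sigma=2.0)
F_lower = NormalDist(mu=11.0, sigma=2.0)

def focal_element(alpha):
    """Interval focal element of the p-box at level alpha:
    Gamma(alpha) = [F_upper^(-1)(alpha), F_lower^(-1)(alpha)]."""
    return (F_upper.inv_cdf(alpha), F_lower.inv_cdf(alpha))

rng = random.Random(1)
alpha = min(max(rng.random(), 1e-12), 1.0 - 1e-12)  # a draw from (0, 1)
lo, hi = focal_element(alpha)
# lo <= hi for every alpha; here, since both bounding CDFs share the same
# sigma, the interval width equals the gap between the means (2.0)
```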
2.3.2. Cumulative distribution functions
When a basic variable is expressed as a random variable on $\mathcal{X} \subseteq \mathbb{R}$, the probability law of the random variable can be expressed using a CDF $F_X$. A CDF can be represented as the random set $\Gamma : \Omega \to \mathcal{F}$, $\alpha \mapsto \Gamma(\alpha)$, where $\mathcal{F}$ is the system of focal elements $\Gamma(\alpha) := F_X^{(-1)}(\alpha)$ for $\alpha \in \Omega$ and $P_\Gamma$ is defined by (1). Note that $F_X(x) = P_\Gamma(X \le x)$ for $x \in \mathcal{X}$.
2.3.3. Possibility distributions
A possibility distribution (see e.g. Ref. [61]) with membership function $A : \mathcal{X} \to (0,1]$, $\mathcal{X} \subseteq \mathbb{R}$, can be represented as the random set $\Gamma : \Omega \to \mathcal{F}$, $\alpha \mapsto \Gamma(\alpha)$ (i.e. $(\mathcal{F}, P_\Gamma)$) defined on $\mathbb{R}$, where $\mathcal{F}$ is the system of all $\alpha$-cuts of $A$, i.e. $\Gamma(\alpha) \equiv A_\alpha := \{x : A(x) \ge \alpha,\ x \in \mathcal{X}\}$ for $\alpha \in (0,1]$, and $P_\Gamma$ is defined by (1).
Similarly, intervals and Dempster-Shafer structures can be
modeled by random sets. The reader is referred to [28,59] for
details.
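For a triangular possibility distribution, the $\alpha$-cuts have a closed form, which makes the focal elements of Section 2.3.3 easy to sketch; the support and mode below are illustrative values:

```python
def alpha_cut_triangular(alpha, a=0.0, b=1.0, c=3.0):
    """Alpha-cut of a triangular possibility distribution with support
    [a, c] and mode b: the focal element Gamma(alpha) = {x : A(x) >= alpha}.

    The membership function A rises linearly from a to the mode b and
    falls linearly from b to c, so the cut is an interval.
    """
    assert 0.0 < alpha <= 1.0
    return (a + alpha * (b - a), c - alpha * (c - b))

# nested structure: cuts at higher alpha are contained in cuts at lower alpha
lo9, hi9 = alpha_cut_triangular(0.9)
lo1, hi1 = alpha_cut_triangular(0.1)
# (lo1, hi1) contains (lo9, hi9); the cut at alpha = 1 collapses to the mode
```

This nesting of the $\alpha$-cuts is exactly what makes the family of focal elements of a possibility distribution consonant.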
2.4. Sampling from a random set

In Section 2.3 we used the particularization $\Omega := (0,1]$, $\sigma_\Omega := (0,1] \cap \mathcal{B} := \bigcup_{h \in \mathcal{B}}\{(0,1] \cap h\}$, where $P_\Omega$ is the probability measure corresponding to the CDF of a random variable $\tilde{\alpha}$ uniformly distributed on $(0,1]$, i.e. $F_{\tilde{\alpha}}(\alpha) := P_\Omega[\tilde{\alpha} \le \alpha] = \alpha$ for $\alpha \in (0,1]$; that is, $P_\Omega$ is the Lebesgue measure on $(0,1]$.

For that particularization, a sample from a random set is simply obtained by generating an $\alpha$ from a uniform distribution on $(0,1]$ and then obtaining the corresponding focal element $\Gamma(\alpha)$.

2.5. Combination of focal elements

After sampling each basic variable, a combination of the sampled focal elements is carried out. Usually, the joint focal elements are given by the Cartesian product $\times_{i=1}^{d} \gamma_i$, where $\gamma_i := \Gamma_i(\alpha_i)$ are the sampled focal elements of the basic variables. Some of these $\gamma_i$ are intervals; some others, points. Inasmuch as every sample of a basic variable can be represented by $\gamma_i$ or by the corresponding $\alpha_i$, the joint focal element can be represented either by the $d$-dimensional box $\gamma := \times_{i=1}^{d} \gamma_i \subseteq \mathcal{X}$ or by the point $\alpha := [\alpha_1, \alpha_2, \ldots, \alpha_d] \in (0,1]^d$. These two representations will be called the $\mathcal{X}$- and the $\alpha$-representation, respectively, and $(0,1]^d$ will be referred to as the $\Omega$-space (see Figs. 1a and 1b).

2.6. Lower and upper probabilities

In [28,29,59] it was shown that, using the particularization $\Omega := (0,1]^d$, $\sigma_\Omega := (0,1]^d \cap \mathcal{B}^d$ and $P_\Gamma \sim \mu_C$, the space $\Omega$ contains the regions $\mathcal{F}_{LP} := \{\alpha \in \Omega : \Gamma(\alpha) \subseteq F,\ \Gamma(\alpha) \neq \emptyset\}$ and $\mathcal{F}_{UP} := \{\alpha \in \Omega : \Gamma(\alpha) \cap F \neq \emptyset\}$, which are formed, respectively, by all those points whose focal elements are completely contained in the set $F$ or have at least one point in common with $F$ (see Fig. 1b). Note that the set $\mathcal{F}_{LP}$ is contained in $\mathcal{F}_{UP}$, and that both sets are independent of the copula $C$ that relates the basic variables $\alpha_1, \ldots, \alpha_d$; in this case, the lower (2a) and upper (2b) probability measures of $F$ can be calculated by

$$LP_{(\mathcal{F}, P_\Gamma)}(F) = \int_{(0,1]^d} I[\alpha \in \mathcal{F}_{LP}]\, dC(\alpha) = \mu_C(\mathcal{F}_{LP}), \tag{4a}$$

$$UP_{(\mathcal{F}, P_\Gamma)}(F) = \int_{(0,1]^d} I[\alpha \in \mathcal{F}_{UP}]\, dC(\alpha) = \mu_C(\mathcal{F}_{UP}), \tag{4b}$$

on condition that $\mathcal{F}_{LP}$ and $\mathcal{F}_{UP}$ are $\mu_C$-measurable sets. In Eqs. (4a) and (4b), $I$ stands for the indicator function, that is, $I[\cdot] = 1$ when the condition in brackets is true and zero otherwise.

2.7. Solving the lower and upper probability integrals by means of Monte Carlo simulation

In Alvarez [28] it is explained how to approximate integrals (4a) and (4b) by means of simple Monte Carlo sampling. Basically, the method consists in sampling $n$ points from the copula $C$, namely $\alpha_1, \alpha_2, \ldots, \alpha_n \in (0,1]^d$ (Nelsen [55] provides methods to do so), and then retrieving the corresponding focal elements $\gamma_j := \Gamma(\alpha_j)$, $j = 1, \ldots, n$, from $\mathcal{F}$. Afterwards, integrals (4a) and (4b) are computed by the unbiased estimators

$$\widehat{LP}_{(\mathcal{F}, P_\Gamma)}(F) = \frac{1}{n}\sum_{j=1}^{n} I[\gamma_j \subseteq F] = \frac{1}{n}\sum_{j=1}^{n} I[\alpha_j \in \mathcal{F}_{LP}], \tag{5a}$$

$$\widehat{UP}_{(\mathcal{F}, P_\Gamma)}(F) = \frac{1}{n}\sum_{j=1}^{n} I[\gamma_j \cap F \neq \emptyset] = \frac{1}{n}\sum_{j=1}^{n} I[\alpha_j \in \mathcal{F}_{UP}]. \tag{5b}$$

3. Random sets and the bounding of the probability of failure when there are uncertain distributions
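The estimators of Eqs. (5a) and (5b) can be sketched for a one-dimensional case in which the focal elements are intervals and the failure set is $F = (-\infty, 0]$, so that $\gamma \subseteq F$ iff the upper endpoint is $\le 0$ and $\gamma \cap F \neq \emptyset$ iff the lower endpoint is $\le 0$. The random set used below (intervals of fixed half-width centred on normal draws) is a stand-in chosen for illustration:

```python
import random

def lp_up_estimates(focal_elements):
    """Monte Carlo estimators (5a)-(5b) for the failure set F = (-inf, 0]
    in one dimension: an interval [lo, hi] is contained in F iff hi <= 0,
    and intersects F iff lo <= 0."""
    n = len(focal_elements)
    lp = sum(1 for lo, hi in focal_elements if hi <= 0.0) / n
    up = sum(1 for lo, hi in focal_elements if lo <= 0.0) / n
    return lp, up

# hypothetical random set: intervals of half-width 0.5 centred on draws
# from N(2, 1) (these stand in for focal elements sampled via a copula)
rng = random.Random(42)
gammas = [(z - 0.5, z + 0.5)
          for z in (rng.gauss(2.0, 1.0) for _ in range(10000))]
lp, up = lp_up_estimates(gammas)
# lp <= up always; the point-valued failure probability of any CDF
# compatible with the random set lies between the two estimates
```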
In the framework of probability theory, a well-established definition of the reliability $R$ of a structural system is $R = 1 - P_f$, where $P_f$ is the probability mass of the failure domain $F$ of the $d$-dimensional space $\mathcal{X} \subseteq \mathbb{R}^d$, determined by a limit state function $g(x)$. This probability is defined as

$$P_f = \int_{\mathcal{X}} I[x \in F]\, dF_X(x),$$

where $F_X(x)$ is the joint cumulative distribution function (CDF) of the basic variables.
One situation under study in the present paper is the following: the joint CDF $F_X(x) = C(F_{X_1}(x_1), F_{X_2}(x_2), \ldots, F_{X_d}(x_d))$ (see Section 2.1) is defined with parameters $(\theta_1, \theta_2, \ldots)$, each of which is uncertain and known only up to an interval of fluctuation $[\underline{\theta}_1, \overline{\theta}_1], [\underline{\theta}_2, \overline{\theta}_2], \ldots$ For the sake of keeping the notation uncluttered, let us collect these parameters and their extreme values in the vectors $\theta$, $\underline{\theta}$ and $\overline{\theta}$. The purpose is to compute an interval $[\underline{P}_f, \overline{P}_f]$ that encloses the actual but unknown probability of failure given the interval parameter fluctuation, by means of a Monte Carlo simulation as parsimonious as possible; this interval calculation can be performed by means of random set theory.
In order to illustrate the differences between this kind of computation and the conventional case of a single reliability estimation, let us summarize the extension of plain Monte Carlo to this case after Refs. [28,47].

For the conventional case in which the input CDFs have fixed parameters, which is equivalent to having fixed values of, e.g., the mean and the variance, the limit state function $g(x) = 0$ shatters the $\mathcal{X}$ space into two domains, namely the safe domain $S = \{x : g(x) > 0\}$ and the failure domain $F = \{x : g(x) \le 0\}$. From the joint CDF $F_X(x)$, $n$ samples are generated; in order to simulate a sample from $F_X$, draw $n$ points $\alpha_i = [\alpha_{1i}, \ldots, \alpha_{ji}, \ldots, \alpha_{di}]^T$ for $i = 1, 2, \ldots, n$ from the copula $C$. Thereafter, use the inverse transform method with each marginal CDF $F_{X_j}$, $j = 1, 2, \ldots, d$, in order to obtain the realizations $x_{ji} = F_{X_j}^{(-1)}(\alpha_{ji})$ for $i = 1, 2, \ldots, n$. The point $x_i = [x_{1i}, \ldots, x_{ji}, \ldots, x_{di}]^T$ will serve as a sample from the target CDF $F_X$.

Then, the Monte Carlo estimate of the failure probability is

$$P_f = \frac{1}{n}\sum_{i=1}^{n} I[x_i \in F] = \frac{1}{n}\sum_{i=1}^{n} I[g(x_i) \le 0].$$
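This conventional estimate can be sketched as follows; the limit state function and the uniform, independent marginals are assumptions chosen so that the exact answer, $P(U_1 + U_2 \le 1) = 0.5$, is known in closed form:

```python
import random

def g(x1, x2):
    """Hypothetical limit state function; failure when g <= 0."""
    return x1 + x2 - 1.0

rng = random.Random(7)
n = 100000
# inverse-transform sampling with a product copula (independence); both
# marginals are uniform on (0, 1), so F^(-1)(alpha) = alpha
failures = sum(1 for _ in range(n)
               if g(rng.random(), rng.random()) <= 0.0)
pf_hat = failures / n
# the estimate converges to the exact value 0.5 as n grows
```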
Fig. 1. The four spaces used to solve the interval reliability problem. The basic variables are defined in the space $\mathcal{X}$ (Panel a); there, the realizations of these variables by means of random set theory are the focal elements, which are depicted as boxes; the failure surface $g(x) = 0$ is also shown, together with the safe domain $S$ and the failure domain $F$. In the $\Omega$-space (Panel b), the regions $\mathcal{F}_{LP}$ and $\mathcal{F}_{UP}$ are defined, together with the failure surfaces $\overline{g}(\alpha) = 0$ and $\underline{g}(\alpha) = 0$ (see Eq. (7)). Panel c shows the space $U$; there, the circles represent contours of the standard Gaussian probability density function; the failure surfaces $\overline{g}(u) = 0$ and $\underline{g}(u) = 0$ are also shown, together with the corresponding design point vectors. Finally, in the space $V$ (Panel d), the curves represent Eq. (12) for different values of $\lambda$. The circular shaded region represents the cloud of points $\mathcal{M}$ when mapped to this space.
Now consider the case in which $F_{X_j}$ is unknown but belongs to the probability box $\langle \underline{F}_{X_j}, \overline{F}_{X_j} \rangle$, which arises from the uncertainty of the parameters $\theta \in [\underline{\theta}, \overline{\theta}]$ that define this joint CDF; then it is not possible to sample points from $F_{X_j}$, but only interval samples

$$I_{ji} := \langle \underline{F}_{X_j}, \overline{F}_{X_j} \rangle^{(-1)}(\alpha_{ji}) := \left[\overline{F}_{X_j}^{(-1)}(\alpha_{ji}),\ \underline{F}_{X_j}^{(-1)}(\alpha_{ji})\right].$$

Let $\gamma_i$ be the $d$-dimensional box with $2^d$ vertices obtained as the Cartesian product of those intervals, i.e. $\gamma_i := \times_{j=1}^{d} I_{ji} = I_{1i} \times \cdots \times I_{di}$; this $d$-dimensional box can be understood as a realization of the random set $\Gamma$, that is, $\gamma_i = \Gamma(\alpha_i)$.
In order to calculate the lower and upper probabilities of the event $F$, which are bounds of $P_f$, that is, $\underline{P}_f \le P_f \le \overline{P}_f$, it is required to calculate the image of the focal element $\gamma_i$ through the function $g$; this can be done by means of the optimization method, the sampling method, the vertex method, the function approximation method, or using interval arithmetic.

Afterwards, the lower and upper probabilities of failure of $F$ are estimated using Eqs. (5a) and (5b), as

$$\widehat{\underline{P}}_f = \frac{1}{n}\sum_{i=1}^{n} I[g(\gamma_i) \subseteq F], \qquad \widehat{\overline{P}}_f = \frac{1}{n}\sum_{i=1}^{n} I[g(\gamma_i) \cap F \neq \emptyset]. \tag{6}$$
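Of the techniques just listed, the vertex method is the simplest to sketch: the image of a box through $g$ is estimated from the values of $g$ at the $2^d$ vertices. The limit state function and the box below are illustrative assumptions, and the caveat in the comment applies:

```python
from itertools import product

def image_by_vertex_method(g, box):
    """Image [g_min, g_max] of a d-dimensional box through g, estimated
    by evaluating g at the 2^d vertices. Exact only when g is monotonic
    in each coordinate over the box; otherwise it may underestimate the
    true range."""
    values = [g(*v) for v in product(*box)]
    return min(values), max(values)

def g(x1, x2):
    # hypothetical limit state function, monotone in each variable
    # over the box considered below
    return 3.0 * x1 - x2 ** 2

box = [(1.0, 2.0), (0.5, 1.5)]  # focal element: [1, 2] x [0.5, 1.5]
lo, hi = image_by_vertex_method(g, box)
# lo = 3*1 - 1.5**2 = 0.75 and hi = 3*2 - 0.5**2 = 5.75
```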
Note that Zhang and coworkers (see e.g. [23,47]) have proposed a methodology, which is summarized here for convenience; their method can be regarded as a particularization of the method proposed by Alvarez [28,59] in which the focal sets are mapped through the function $g$ using the optimization method, as will be shown in the following.
For a realization $\alpha_i$ from a copula $C$, Zhang and coworkers define the points

$$\underline{x}(\alpha_i) = \left[\overline{F}_{X_1}^{(-1)}(\alpha_{1i}), \ldots, \overline{F}_{X_j}^{(-1)}(\alpha_{ji}), \ldots, \overline{F}_{X_d}^{(-1)}(\alpha_{di})\right],$$

$$\overline{x}(\alpha_i) = \left[\underline{F}_{X_1}^{(-1)}(\alpha_{1i}), \ldots, \underline{F}_{X_j}^{(-1)}(\alpha_{ji}), \ldots, \underline{F}_{X_d}^{(-1)}(\alpha_{di})\right],$$

which are opposite vertices of the $d$-dimensional box $[\underline{x}_i, \overline{x}_i]$; this box is the focal element $\gamma_i \equiv \Gamma(\alpha_i)$ corresponding to $\alpha_i$.

Note that the focal element $\Gamma(\alpha_i) = [\underline{x}_i, \overline{x}_i]$ contains all possible realizations of a variable $x_i$ given the information contained in the p-box. Using the optimization method, Zhang and coworkers compute the image of $\Gamma(\alpha_i)$ through the function $g$ as

$$g(\gamma_i) = g(\Gamma(\alpha_i)) = \left[\underline{g}(\alpha_i),\ \overline{g}(\alpha_i)\right],$$

where

$$\underline{g}(\alpha_i) := \min_{x_i \in \Gamma(\alpha_i)} g(x_i), \tag{7a}$$

$$\overline{g}(\alpha_i) := \max_{x_i \in \Gamma(\alpha_i)} g(x_i) \tag{7b}$$

are limit state functions defined in $\mathcal{X}$.

Since $I[g(\gamma_i) \subseteq F] = I[\overline{g}(\alpha_i) \le 0]$ and $I[g(\gamma_i) \cap F \neq \emptyset] = I[\underline{g}(\alpha_i) \le 0]$, it follows that Eq. (6) can be written as

$$\widehat{\underline{P}}_f = \frac{1}{n}\sum_{i=1}^{n} I[\overline{g}(\alpha_i) \le 0], \qquad \widehat{\overline{P}}_f = \frac{1}{n}\sum_{i=1}^{n} I[\underline{g}(\alpha_i) \le 0]. \tag{8}$$
Note that Zhang and coworkers originally formulated their method assuming independence between the input variables; that is, they assumed the product copula $C = \prod_{j=1}^{d} \alpha_j$ and performed the sampling using, for example, simple Monte Carlo or deterministic low-discrepancy sequences such as Halton, Faure, Hammersley, Sobol or good lattice points. In this sense, the method proposed by Zhang and coworkers is a particularization of the one proposed by Alvarez [28,59], inasmuch as the latter includes dependence between the basic variables and also supports not only p-boxes and CDFs but also possibility distributions and Dempster–Shafer structures.
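The whole pipeline of this section, sampling $\alpha$'s, building box focal elements from the p-box quasi-inverses, computing $[\underline{g}, \overline{g}]$ and evaluating the estimators of Eq. (8), can be sketched end to end. Everything numerical below (the p-boxes, the limit state function, the sample size) is an illustrative assumption, and vertex enumeration stands in for the optimization of Eqs. (7a)–(7b), which is exact here because the chosen $g$ is monotone in each coordinate:

```python
import random
from itertools import product
from statistics import NormalDist

# hypothetical p-boxes: normal marginals whose means are known only up to
# an interval (sigmas fixed); pairs are (upper CDF, lower CDF), i.e. the
# CDFs with the smallest and the largest admissible mean
p_boxes = [(NormalDist(4.5, 1.0), NormalDist(5.5, 1.0)),   # X1: mu in [4.5, 5.5]
           (NormalDist(1.5, 0.5), NormalDist(2.5, 0.5))]   # X2: mu in [1.5, 2.5]

def g(x1, x2):
    """Hypothetical limit state function; failure when g <= 0."""
    return x1 - x2 - 1.0

rng = random.Random(3)
n = 20000
n_low = n_up = 0
for _ in range(n):
    # product copula (independence): one alpha in (0, 1) per basic variable
    alphas = [min(max(rng.random(), 1e-12), 1.0 - 1e-12) for _ in p_boxes]
    # interval samples [F_upper^(-1)(alpha), F_lower^(-1)(alpha)]; their
    # Cartesian product is the joint focal element (a d-dimensional box)
    box = [(f_up.inv_cdf(a), f_lo.inv_cdf(a))
           for (f_up, f_lo), a in zip(p_boxes, alphas)]
    values = [g(*v) for v in product(*box)]   # vertex enumeration
    if max(values) <= 0.0:   # focal element entirely inside F
        n_low += 1
    if min(values) <= 0.0:   # focal element touches F
        n_up += 1
pf_low, pf_up = n_low / n, n_up / n
# pf_low <= pf_up, and the failure probability of every joint CDF admitted
# by the p-boxes lies inside [pf_low, pf_up]
```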
4. An efficient method for the calculation of the probability of failure

In [51] a reliability method built on the design point provided by FORM was proposed. This method is based on the calculation of the limit state function for a small chosen set of Monte Carlo samples that have the highest resemblance to the design point. The method provides the same probability of failure estimate as simple Monte Carlo, with a computational cost limited to the evaluation of a subset of the chosen samples. In the following, the main ideas of that methodology are summarized. The reader must keep in mind that in this section only the conventional reliability problem with random variables is considered.

4.1. The importance of FORM for representing order statistics

The structure of the reliability problem is such that the function $g$ decreases slowly from positive values towards negative ones in the failure domain. In [50] it is shown that the evolution of the values of the limit state function $g$ can be adequately represented in a bi-dimensional plot using the design point vector as a reference. For the sake of clarity in our exposition, we summarize here the most important conclusions given in that reference.

Let us assume that the reliability problem has been transformed from the input space $Y \subseteq \mathbb{R}^d$, where $d$ is the dimensionality of the problem (i.e. the number of input random variables), to the standard Gaussian space $U \subseteq \mathbb{R}^d$ using a suitable transformation $T : Y \to U$ [62,63],

$$u = T(y). \tag{9}$$

The transformation requires the specification of the CDFs of each basic variable.

In consequence, the function $g$ becomes a function of $u$:

$$g(y) = g\left(T^{-1}(u)\right),$$

which will be denoted as $g(u)$ for the sake of simplicity. The limit state function in the standard Gaussian space $U$ is defined by $g(u) = 0$. The design point $u^{*}$ is calculated by solving the optimization program [48]:

$$u^{*} = \arg\min_{u \in U} \|u\| \quad \text{subject to } g(u) = 0.$$

This implies that $\beta = \|u^{*}\|$.

The design point can also be represented as $u^{*} = \beta\hat{w}$, where

$$\hat{w} = -\frac{\nabla g(u^{*})}{\|\nabla g(u^{*})\|}$$

is a unit design point vector, normal to the tangent hyperplane at the design point $u^{*}$, and

$$\|\nabla g(u^{*})\| = \frac{g(0)}{\beta}$$

realizes the steepest descent of the function $g(u)$ when it shatters the space $U$, inasmuch as $g(0)$ is constant and $\beta$ is the minimum distance from the origin of coordinates to the limit state function $g(u) = 0$. In consequence, the vector $\hat{w}$, calculated for the conventional case of a reliability analysis in which the CDFs are known with certainty, is a direction that signals the evolution of the order statistics of $u$. See Hurtado and Alvarez [49], Hurtado et al. [50] and Hurtado and Alvarez [51] for illustrations.

4.2. Representing the U-space and the limit state function in two dimensions

In this section, the ordering property of the vector $\hat{w}$ is exploited for the sake of extracting the relevant samples for interval reliability computations; this is in order to minimize the computational labor they imply, which is much higher than that posed by the estimation of a single value of the failure probability corresponding to probability distributions without uncertainty in their parameters.

In the space $U$, let us generate $n$ standard Gaussian samples $u_i$, $i = 1, 2, \ldots, n$. These samples will form a hyper-ring whose radius follows a Chi distribution with approximate mean $\sqrt{d - 0.5}$ and approximate variance 0.5 when $d$ is large [46]. In [64], it has been proposed to change this $d$-dimensional characterization of the samples to a bi-dimensional representation composed of two nonlinear features: (a) their distance to the origin and (b) the cosine of the angle they make with the design point vector $\hat{w}$. These new variables are given by

$$v_1 = r = \sqrt{\sum_{j=1}^{d} u_j^2}, \tag{10}$$

$$v_2 = \cos\psi = \cos\angle(\hat{w}, u) = \frac{(\hat{w} \cdot u)}{\|\hat{w}\|\,\|u\|} = \frac{(\hat{w} \cdot u)}{\|u\|}. \tag{11}$$
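Eqs. (10) and (11) amount to a few lines of code; a minimal sketch follows, in which the sample and the design-point direction are illustrative values (the direction along the last axis mirrors the rotated frame used later in this section):

```python
import math

def v_features(u, w_hat):
    """Map a standard Gaussian sample u to the V-space of Eqs. (10)-(11):
    v1 = ||u|| (distance to the origin) and v2 = cosine of the angle
    between u and the unit design-point vector w_hat."""
    r = math.sqrt(sum(uj * uj for uj in u))
    dot = sum(uj * wj for uj, wj in zip(u, w_hat))
    return r, dot / r

# hypothetical unit design-point vector along the last axis
w_hat = [0.0, 0.0, 1.0]
v1, v2 = v_features([1.0, 2.0, 2.0], w_hat)
# v1 = 3.0 and v2 = 2/3; note that the product v1*v2 recovers u_d = 2.0
```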
Therefore, the new representation of the random variables is given by the mapping $v := (v_1, v_2)$. Notice that these variables together operate a highly nonlinear map from $U \subseteq \mathbb{R}^d$ to $V \subseteq [0, \infty) \times [-1, 1]$. This operation, however, does not destroy the clustering structure of the samples into two classes. In fact, the cosine is a measure of the belonging of a sample to one of them, because the higher the cosine, the higher the possibility of the presence of the sample in the failure domain. However, the cosine is not a sufficient indicator of such belonging, because the samples in the safe domain that are close to the interclass boundary are also characterized by high cosine values. The distance from the origin, however, complements the cosine, as the samples in the failure domain $F$ are necessarily located far from the origin. Besides, the two features $(v_1, v_2) = (r, \cos\psi)$ are independent: by rotating the $U$-space in such a way that some axis $u_k$ coincides with $\hat{w}$, the product $r\cos\psi$ becomes equal to $u_k$, and hence the expected value of the product of the two variables is $E[v_1 v_2] \equiv E[r\cos\psi] = E[u_k] = 0$, because in the standard Gaussian space the variables have zero mean. Therefore, the new variables $v_1, v_2$ are uncorrelated. But, more fundamentally, they are also independent, because the cosine does not depend on $r$, nor the other way around. Therefore, we have transformed the reliability problem of dimensionality $d$, in which the variables $y$ normally exhibit different degrees of correlation, into a problem with only two variables, which are not simply uncorrelated but independent, and which yields a visible discrimination of the safe and failure classes of samples. See Hurtado and Alvarez [49,51,64] and Hurtado et al. [50] for practical demonstrations.
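The uncorrelatedness of $(v_1, v_2)$, and the Chi-distribution approximation for the radius, can be checked empirically; the sketch below uses an arbitrary unit vector for $\hat{w}$ and illustrative values of $d$ and $n$:

```python
import math
import random

# empirical check that v1 = r and v2 = cos(psi) are uncorrelated for
# standard Gaussian samples; w_hat is an arbitrary unit vector
rng = random.Random(0)
d, n = 5, 20000
w_hat = [1.0 / math.sqrt(d)] * d
pairs = []
for _ in range(n):
    u = [rng.gauss(0.0, 1.0) for _ in range(d)]
    r = math.sqrt(sum(x * x for x in u))
    cos_psi = sum(x * w for x, w in zip(u, w_hat)) / r
    pairs.append((r, cos_psi))
mr = sum(a for a, _ in pairs) / n
mc = sum(b for _, b in pairs) / n
cov = sum((a - mr) * (b - mc) for a, b in pairs) / n
sr = math.sqrt(sum((a - mr) ** 2 for a, _ in pairs) / n)
sc = math.sqrt(sum((b - mc) ** 2 for _, b in pairs) / n)
corr = cov / (sr * sc)
# corr is close to zero, and mr is close to sqrt(d - 0.5), the
# approximate mean of the Chi distribution quoted above
```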
The major benefit of the proposed transformation lies in that it allows exploiting the ordering property of the design point vector $\hat{w}$, because it defines one of the two new variables. In order to demonstrate this benefit, let us consider the general second-order approximation to limit state functions which, after suitable transformations, can be cast in the simple parabolic form [41]:

$$g(u) = \beta - u_d + \lambda \sum_{k=1}^{d-1} u_k^2 = 0, \tag{4.2}$$

where $\lambda$ stands for an average curvature. The generality of this formulation makes it ideal for our case. Other quadratic forms similar to Eq. (4.2), stemming from different transformation procedures, have been proposed in the context of the second-order reliability method (SORM) [65–67].

For deriving Eq. (4.2) and similar quadratic forms, the $U$-space is rotated in such a way that the axis $u_d$ passes through the apex of the paraboloid [41,65–67]. Therefore, the design point becomes $u^{*} = [0, 0, \ldots, 0, \beta]$ and the associated unit vector of the FORM hyperplane is $\hat{w} = [0, 0, \ldots, 0, 1]$. Hence,

$$v_2 = \cos\psi = \cos\angle(u, \hat{w}) = \frac{[u_1, u_2, \ldots, u_{d-1}, u_d] \cdot [0, 0, \ldots, 0, 1]}{\|u\|\,\|\hat{w}\|} = \frac{u_d}{v_1}.$$
then, iteratively, two values of k, namely kmin and kmax are found
such that their corresponding curves (as given by Eq. (13)) divide
the set M into three sets: samples that belong only to the failure
domain, samples that belong exclusively to the safe domain and a
third set that is bounded between both lines and that contains failure and safe realizations and whose samples must be evaluated in g
in order to determine if they belong to the failure or to the safe
region.
The numerical procedure proposed in [51] is the following:
Algorithm 1.
:
On the other hand,
d1
X
u2i ¼ r 2 u2d ¼ v 21 u2d :
i¼1
Replacing these results into Eq. (4.2), the limit state function
becomes
gðv Þ ¼ kv 21 v 22 þ v 1 v 2 b kv 21 ¼ 0:
ð12Þ
We have thus expressed the approximating SORM function only in
terms of the two nonlinear features v ¼ ðv 1 ; v 2 Þ ¼ ðr; cos wÞ. This is
an algebraic quadratic equation in either v 1 or v 2 . Solving for v 2 and
taking only the positive root, which will be denoted as v 2 , yields
v 2 ðv 1 Þ ¼
1 þ
qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
1 þ 4kðb þ kv 21 Þ
2kv 1
:
ð13Þ
This equation represents the limit state function gðuÞ ¼ 0 of the
general SORM paraboloid proposed by Zhao and Ono [41] in the
V-space. In Hurtado [64], it has been shown that in the general case
the failure zone occupies the top position in the plot and a more
precise location depends only on d. Namely, the upper-right sector
for low d and the purely upper sector for large d.
In practice, the use of Eq. (13) is facilitated by the availability of
b, because the design point has been calculated, but it is hindered
by the fact that the estimation of k requires a full SORM analysis,
which is somewhat complex, as it requires the calculation of several samples (around 2d [43,68]), calculating or approximating
the Hessian, solving eigenproblems, operating space transformations and fitting the quadratic form.
4.3. Bounding the value of k and classifying a set of samples into the
failure and safe domains
In Hurtado and Alvarez [51], a method that approximates the
value of k using a very small number of calls to g was proposed.
In fact, what this procedure does is that it finds two values of k,
namely, kmin and kmax that bound the limit space function. As and
additional outcome, the algorithm splits a set of samples into the
failure and safe domains with minimal computational effort.
Suppose that a set of points M, sampled in the space Y, is mapped to the U-space. At this point, the samples M are unlabeled, that is, it is unknown whether they belong to the failure set or not. In [51], it was shown that the ordered increase of k produces a nested structure of sample sets M. Using this fact, a numerical procedure that bounds the value of k was proposed; it consists in computing the value of g for those samples that are closest to the curve defined by k = 0 (which is the approximation of the failure surface given by FORM). The degree of closeness of each point in the sample set M is found by the following equation, which comes from (12):

k = \frac{v_1 v_2 - \beta}{v_1^2 (1 - v_2^2)}.    (14)
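In code, the polar features (v_1, v_2) of a standard normal sample and the measure k of Eq. (14) can be computed as follows; this is a sketch, in which the unit vector `alpha` pointing to the design point is an assumed input obtained from a previous FORM analysis:

```python
import numpy as np

def polar_features(u, alpha):
    """Nonlinear projection of a sample u in the U-space onto the
    (v1, v2) plane: v1 = ||u||, v2 = cos(psi) = alpha . u / ||u||,
    where alpha is the unit vector pointing to the design point."""
    v1 = np.linalg.norm(u)
    v2 = alpha @ u / v1
    return v1, v2

def k_measure(u, alpha, beta):
    """Curvature measure k of the paraboloid passing through u, Eq. (14)."""
    v1, v2 = polar_features(u, alpha)
    return (v1 * v2 - beta) / (v1 ** 2 * (1.0 - v2 ** 2))

# A sample lying exactly on the FORM plane alpha . u = beta has k = 0:
alpha = np.array([0.6, 0.8])
u = 2.5 * alpha + 1.0 * np.array([-0.8, 0.6])   # alpha . u = 2.5
print(k_measure(u, alpha, beta=2.5))             # approx 0 (up to round-off)
```

Samples with k close to zero are the ones near the FORM approximation of the failure surface, and these are precisely the samples selected for evaluation in the algorithm below.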
Algorithm 1.
1. Perform a Monte Carlo sampling of n points in the space Y (these samples can be drawn from the joint CDF of the random variables). Map these values to the standardized normal space U. Let us call this set of samples M.
2. Map the samples in M to the V-space using Eqs. (10) and (11).
3. Using the reliability index \beta, compute (13) for k = 0 (that is, the FORM hyperplane v_2 = \beta / v_1).
4. For each sample in M calculate k using Eq. (14).
5. Pick out the 20 samples with the smallest value of |k|. These samples will make up the set P. The smallest value of k in this set will correspond to k_min, while the largest will be k_max.
6. Compute g(u) for all samples in P.
7. Divide the set P into three sets, namely P⁻, P and P⁺, with roughly one-third of the samples each. The sets P⁻ and P⁺ will contain the samples with the smallest and largest values of k, respectively; the set P will be composed of the rest of the samples.
8. If any of the samples in P⁻ belongs to the failure set, then pick another point from M whose k is the largest value of k that is less than k_min. On the other hand, if any of the samples in P⁺ belongs to the safe set, then choose another point from M whose k is the smallest k that is greater than k_max. In either case, the selected point will be added to the set P. Compute g(u) for that point.
9. Go to step 7 until all of the samples in P⁻ and all of the samples in P⁺ belong to the safe and failure domains, respectively.
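A compact sketch of steps 5 to 9 in Python follows. This is a simplified illustration, not the authors' implementation: the limit state function `g`, the array of k values and the samples are placeholders supplied by the caller, and some bookkeeping of the paper (e.g. exhaustion of M) is handled only crudely:

```python
import numpy as np

def classify_and_estimate(g, k_all, u_all, n_init=20):
    """Sketch of steps 5-9: evaluate g only for the samples whose measure k
    (Eq. (14)) is closest to zero, then estimate P_f with Eq. (15)."""
    n = len(k_all)
    order = np.argsort(np.abs(k_all))               # step 5: smallest |k|
    in_P = set(order[:n_init].tolist())
    g_val = {i: g(u_all[i]) for i in in_P}          # step 6

    while True:
        idx = sorted(in_P, key=lambda i: k_all[i])  # step 7: sort P by k
        third = max(len(idx) // 3, 1)
        P_minus, P_plus = idx[:third], idx[-third:]
        k_min, k_max = k_all[idx[0]], k_all[idx[-1]]

        grew = False
        # step 8: a failure sample in P- -> extend the band downwards
        if any(g_val[i] <= 0 for i in P_minus):
            below = [i for i in range(n) if k_all[i] < k_min]
            if below:
                j = max(below, key=lambda i: k_all[i])
                in_P.add(j)
                g_val[j] = g(u_all[j])
                grew = True
        # step 8: a safe sample in P+ -> extend the band upwards
        if any(g_val[i] > 0 for i in P_plus):
            above = [i for i in range(n) if k_all[i] > k_max]
            if above:
                j = min(above, key=lambda i: k_all[i])
                in_P.add(j)
                g_val[j] = g(u_all[j])
                grew = True
        if not grew:                                # step 9: P-/P+ consistent
            break

    n_f = sum(1 for v in g_val.values() if v <= 0)  # failures inside the band
    n_above = sum(1 for i in range(n)
                  if k_all[i] > k_max and i not in in_P)  # assumed failures
    return (n_f + n_above) / n, len(g_val)          # Eq. (15), number of g-calls
```

For a linear limit state function the classification is exact, because in that case the failure domain coincides with {k >= 0}, so the estimate coincides with the one of plain Monte Carlo while evaluating g only a few dozen times.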
Using the aforementioned algorithm, the only samples of M that need to be classified by evaluating the limit state function g are those whose image lies between the bounding curves that correspond to k_min and k_max; these samples are the ones that compose the set P. The samples below k_min are discarded (assumed safe), while those above k_max are assumed to correspond to the failure domain.
Let n_f be the number of failure samples found in the sector comprised between k_min and k_max (found within the final set P), and n_{k_max} the number of samples above the line for k_max. Therefore, the probability estimate, which will be the same as that given by simple Monte Carlo simulation, is

\hat{P}_f = \frac{n_f + n_{k_{max}}}{n}.    (15)
5. The proposed algorithm
As discussed before, the calculation of the lower and upper bounds of the probability of failure involves the computation of Eq. (6); using the formulation of Zhang and coworkers [23,47], these equations can be written in terms of the evaluation of two limit state functions \underline{g} and \overline{g}, which are expressed in terms of \alpha as in Eq. (8). Note that Zhang and coworkers originally proposed Eq. (8) for the case in which the input variables are probability boxes; however, that formulation becomes fully general upon considering that \underline{x}(\alpha_i) and \overline{x}(\alpha_i) are two opposite vertices of the focal element C(\alpha_i); remember that, according to Section 2.5, C(\alpha_i) appeared as the Cartesian product of the samples obtained for each basic variable.
Let us apply to \alpha the following bijective transformation:

u = \Phi^{-1}(\alpha) = [\Phi^{-1}(\alpha_1), \Phi^{-1}(\alpha_2), \ldots, \Phi^{-1}(\alpha_d)]^T    (16)

in order to map the point \alpha to the U-space. Here \Phi^{-1} stands for the inverse cumulative distribution function associated with the standard normal distribution.

Using the transformation (16), we can write Eq. (8) as:

\underline{\hat{P}}_f = \frac{1}{n}\sum_{i=1}^{n} I[\overline{g}(\Phi(u_i)) \leq 0], \qquad \overline{\hat{P}}_f = \frac{1}{n}\sum_{i=1}^{n} I[\underline{g}(\Phi(u_i)) \leq 0],

or, with a little abuse of notation, as

\underline{\hat{P}}_f = \frac{1}{n}\sum_{i=1}^{n} I[\overline{g}(u_i) \leq 0], \qquad \overline{\hat{P}}_f = \frac{1}{n}\sum_{i=1}^{n} I[\underline{g}(u_i) \leq 0].    (17)

In this form, for both limit state functions in (17), the parsimonious method described in Section 4 can be employed, as illustrated in Fig. 1. In order to apply the method of Section 4, one has to take into account that the spaces Y and X are equivalent and that both transformations (9) and (16) are also equivalent, that is, T = \Phi^{-1}.

The method proceeds as follows:

Algorithm 2.
1. For each of the limit state functions in (17), solve the FORM problem in order to find the associated reliability indexes \underline{\beta} and \overline{\beta} and the unit vectors \hat{\underline{w}} and \hat{\overline{w}}.
2. Set \hat{P}_f = 1 (an initial guess, so that the second term of (18) is inactive in the first iteration).
3. Using Monte Carlo sampling, draw n samples from the copula C, taking into account that

n > \max\left( 100 \frac{1 - \hat{P}_{f,FORM}}{\hat{P}_{f,FORM}},\; 100 \frac{1 - \hat{P}_f}{\hat{P}_f} \right)    (18)

and that \hat{P}_{f,FORM} = \Phi(-\beta); this procedure guarantees that the probability of failure P_f will be estimated with a coefficient of variation roughly less than 0.1 (see [51] for details).
4. Map the points sampled in step 3 to the space U using the transformation (16). Let us call this set of samples M.
5. Set g = \overline{g}, \beta = \overline{\beta} and \hat{w} = \hat{\overline{w}}, and perform steps 2 to 9 of Algorithm 1. Then estimate the lower probability of failure using (15) and set \underline{\hat{P}}_f = \hat{P}_f.
6. Set g = \underline{g}, \beta = \underline{\beta} and \hat{w} = \hat{\underline{w}}, and perform steps 2 to 9 of Algorithm 1. Then estimate the upper probability of failure using (15) and set \overline{\hat{P}}_f = \hat{P}_f.
7. Verify that Eq. (18) is satisfied; in that case, end Algorithm 2; otherwise, return to step 3 in order to draw additional samples from C.

Finally, since F_LP ⊆ F_UP, if a point already belongs to the failure set {u ∈ U : \overline{g}(u) ≤ 0} then it automatically belongs to the failure set {u ∈ U : \underline{g}(u) ≤ 0}. This fact can be employed to speed up the calculations.

Let us illustrate the procedure with a numerical example.

6. Numerical examples

6.1. Example 1

Consider the cantilever tube of diameter d and thickness t shown in Fig. 2, which is a modified example from [69]; this tube is subject to the external forces F_1, F_2, P and a torsional moment T. The limit state function is defined as the difference between the yield strength \sigma_y and the maximum von Mises stress \sigma_{max} on the top surface of the tube at the origin, that is,

g(x) = \sigma_y - \sigma_{max},

where

\sigma_{max} = \sqrt{\sigma_x^2 + 3 \tau_{xz}^2}, \qquad \tau_{xz} = \frac{T d}{4 I},

\sigma_x = \frac{P + F_1 \sin\theta_1 + F_2 \sin\theta_2}{A} + \frac{M c}{I};

in \sigma_x the first term stands for the normal stress due to the axial forces, while the second represents the normal stress due to the bending moment

M = F_1 L_1 \cos\theta_1 + F_2 L_2 \cos\theta_2

at the top fiber, which is at a distance c = d/2 from the neutral axis of the bar; the area and the moment of inertia are given by:

A = \frac{\pi}{4}\left[d^2 - (d - 2t)^2\right], \qquad I = \frac{\pi}{64}\left[d^4 - (d - 2t)^4\right].

Fig. 2. Structure considered in Example 1.

The basic variables of the problem are described in Table 1. Here X_1, X_2 and X_5 are modeled as CDFs, X_3 and X_4 are modeled as probability boxes, X_6 and X_7 are modeled as possibility distributions, and variables X_8, X_9, X_10 and X_11 are modeled as intervals. Let us denote by N(\mu, \sigma) the Gaussian probability distribution function with mean \mu and standard deviation \sigma; on the other hand, Gumbel(\mu, \sigma) represents a Gumbel (Type I extreme value) distribution

f(x; \mu, \sigma) = \frac{1}{\sigma} \exp\left(-\frac{x-\mu}{\sigma}\right) \exp\left(-\exp\left(-\frac{x-\mu}{\sigma}\right)\right)

with location parameter \mu and scale parameter \sigma.

Finally, we will suppose that variables X_5 to X_11 are independent (in consequence, they are related by the product copula C_prod(\alpha) := \prod_{i=1}^{\dim(\alpha)} \alpha_i), while variables X_1 to X_4 are related through a Gumbel copula:

C_Gumbel(\alpha, \delta) := \exp\left( -\left( \sum_{i=1}^{\dim(\alpha)} (-\ln \alpha_i)^{\delta} \right)^{1/\delta} \right)    (19)

with parameter \delta = 10; in consequence, the copula that relates all input variables of this example is

C(\alpha) = C_Gumbel([\alpha_1, \alpha_2, \alpha_3, \alpha_4], 10) \prod_{i=5}^{11} \alpha_i.    (20)

For each limit state function \underline{g}(u) and \overline{g}(u), the classical HL-RF algorithm (see Refs. [48,70]) was employed to perform the FORM analysis. After 7 iterations, the reliability index associated with the limit state function \underline{g}(u) was found to be \underline{\beta} = 2.4296; on the other hand, for the limit state function \overline{g}(u), a reliability index \overline{\beta} = 3.2177
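The limit state function of Example 1 can be sketched directly from the formulas above; this is our own transcription, in which the variable names are ours and a consistent unit system (e.g. N, mm, MPa) is assumed, so the unit bookkeeping of the table values is left to the reader:

```python
import math

def g_tube(F1, F2, P, T, sigma_y, t, d, L1, L2, th1, th2):
    """Limit state of Example 1: yield strength minus the maximum von Mises
    stress at the origin of the cantilever tube.  Angles th1, th2 in degrees;
    consistent units assumed (e.g. forces in N, lengths in mm, stresses in MPa)."""
    th1, th2 = math.radians(th1), math.radians(th2)
    A = math.pi / 4.0 * (d ** 2 - (d - 2.0 * t) ** 2)          # cross-sectional area
    I = math.pi / 64.0 * (d ** 4 - (d - 2.0 * t) ** 4)         # moment of inertia
    c = d / 2.0                                                # top fiber distance
    M = F1 * L1 * math.cos(th1) + F2 * L2 * math.cos(th2)      # bending moment
    sigma_x = (P + F1 * math.sin(th1) + F2 * math.sin(th2)) / A + M * c / I
    tau_xz = T * d / (4.0 * I)                                 # torsional shear stress
    sigma_max = math.sqrt(sigma_x ** 2 + 3.0 * tau_xz ** 2)    # von Mises stress
    return sigma_y - sigma_max
```

Evaluated at the central values of Table 1 (converted to N, mm and MPa), g is positive, i.e. the structure is safe at the nominal point, as expected for the small failure probabilities reported below.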
Table 1
Input variables of the problem analyzed in Example 1.

Variable   | Units   | Modeled as
X1 (F1)    | kN      | N(2, 0.2)
X2 (F2)    | kN      | N(3, 0.3)
X3 (P)     | kN      | Gumbel([11.9, 12.1], [1.19, 1.21])
X4 (T)     | N m     | N(90, [8.95, 9.05])
X5 (σy)    | MPa     | N(220, 22)
X6 (t)     | mm      | trapezoidal(2.8, 2.9, 3.1, 3.2) possibility distribution
X7 (d)     | mm      | triangular(41.8, 42, 42.2) possibility distribution
X8 (L1)    | mm      | interval [119.75, 120.25]
X9 (L2)    | mm      | interval [59.75, 60.25]
X10 (θ1)   | degrees | interval [19°, 21°]
X11 (θ2)   | degrees | interval [30°, 35°]
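As a sketch, the Gumbel copula of Eq. (19) and the combined copula of Eq. (20) can be evaluated as follows (drawing samples from them, e.g. by the conditional method, is a separate task; see Nelsen [55]); the function names are ours:

```python
import math

def gumbel_copula(alphas, delta):
    """Gumbel copula C(a; delta) = exp(-(sum (-ln a_i)^delta)^(1/delta)),
    Eq. (19); delta = 1 recovers the product (independence) copula."""
    s = sum((-math.log(a)) ** delta for a in alphas)
    return math.exp(-s ** (1.0 / delta))

def copula_example1(alpha, delta=10.0):
    """Combined copula of Eq. (20): Gumbel dependence among the first four
    variables, independence (product copula) for the remaining seven."""
    prod = 1.0
    for a in alpha[4:]:
        prod *= a
    return gumbel_copula(alpha[:4], delta) * prod
```

With delta = 10 the first four variables are strongly positively dependent, which is why the FORM bounds of Example 1 differ noticeably from the independent case.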
was computed using 8 iterations. This means that the lower and upper bounds of the failure probability were estimated by FORM as [\underline{\hat{P}}_{f,FORM}, \overline{\hat{P}}_{f,FORM}] = [\Phi(-\overline{\beta}), \Phi(-\underline{\beta})] = [6.4608 \times 10^{-4}, 7.558 \times 10^{-3}]; thus, according to Eq. (18), at least 154,680 and 13,130 samples are required in order to achieve an approximate coefficient of variation of 0.1 in the computation of \underline{\hat{P}}_f and \overline{\hat{P}}_f, respectively. In this case, n = 160,000 and n = 15,000 realizations from the copula (20) were used in the proposed method for the computation of \underline{\hat{P}}_f and \overline{\hat{P}}_f, respectively.
These realizations were mapped to the U-space using the transformation (16) and subsequently mapped to the V-space using Eqs. (10) and (11). The plot of the samples in the V-space is displayed in Fig. 3 for the limit state functions \underline{g}(u) and \overline{g}(u). Once the algorithm was applied, only 192 and 267 samples had to be evaluated for the estimation of \underline{\hat{P}}_f = 1.894 \times 10^{-4} and \overline{\hat{P}}_f = 0.015, respectively; these results differ from the ones estimated by FORM but coincide with the results calculated using Monte Carlo simulation, inasmuch as all of the failure samples of the simulation were correctly identified. The C.O.V. estimates of the bounds of the probability of failure are 0.057 and 0.066, both less than the target of 0.10.

This example shows the efficiency of the method: only 192 focal element evaluations were required for the evaluation of \underline{\hat{P}}_f, in comparison with the 160,000 focal element evaluations required by Monte Carlo simulation, while achieving the same precision.

Fig. 3. (v_1, v_2) representation for the example considered in Section 6.1 for both limit state functions \underline{g}(u) (top) and \overline{g}(u) (bottom). In these plots it is possible to see Eq. (12) for different values of k. Observe that the curve with k = 0 provides a rough separation between safe and failure samples. The dashed lines correspond to k_min (red) and k_max (blue) when the algorithm stopped, at 267 and 192 evaluations of \underline{g}(u) and \overline{g}(u), respectively. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

Fig. 4. 3D truss analyzed in Example 2. All measures are displayed in millimeters.

6.2. Example 2

Consider the 132-bar semi-spherical dome shown in Fig. 4, whose topology has been taken from [71]. Each bar has a 100 mm² cross-sectional area. A vertical load acts on each free node of its composing polygons, so that, in total, the dome is subject to the action of 37 loads (variables x_2 to x_38); those loads follow normal distributions with means between −17 kN and −16 kN and standard deviations between 1.6 and 1.7 kN, and they are related through a Gumbel copula (see Eq. (19)) with \delta = 5. All loads were modeled as probability boxes. The modulus of elasticity (variable x_1) is also Gaussian, with mean 205.8 kN/mm² and a coefficient of variation of 0.05. In consequence, the dimensionality of the problem is d = 38 and each focal element is a 37-dimensional box. The dome cannot displace at its supports but is allowed to rotate there. The limit state function is defined as
g(x) = 28 - d_central(x),

where d_central(x) is the absolute vertical displacement of the central node, measured in millimeters.
The FORM analysis was performed using the classical HL-RF algorithm [48,70]; for the limit state functions (7a) and (7b) it was found that \overline{\beta} = 2.5794 and \underline{\beta} = 1.6572; each evaluation of \beta required 8 iterations. Since the limit state functions (7a) and (7b) for this problem are linear, the estimate of the interval that contains the true failure probability, in case all random variables were independent, is [\underline{\hat{P}}_{f,FORM}, \overline{\hat{P}}_{f,FORM}] = [\Phi(-\overline{\beta}), \Phi(-\underline{\beta})] = [0.0049, 0.0487].
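This FORM interval can be reproduced from the two reliability indices with the standard normal CDF; a small sketch, writing \Phi via the error function:

```python
import math

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# FORM interval of Example 2: [Phi(-2.5794), Phi(-1.6572)]
interval = (Phi(-2.5794), Phi(-1.6572))
print(interval)   # approx (0.0049, 0.0487)
```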
According to Eq. (18), in order to guarantee a coefficient of variation less than 0.1, 1.5 \times 10^6 samples from the copula

C(\alpha) = \alpha_1 \, C_Gumbel([\alpha_2, \alpha_3, \ldots, \alpha_38], 5)

were employed for calculating \underline{\hat{P}}_f; the calculation of \overline{\hat{P}}_f required 15,000 samples from C. Those samples were mapped to the (v_1, v_2)-plane, as shown in Fig. 5.
Once Algorithm 2 was applied, only 65 and 140 focal element evaluations of the limit state functions \overline{g}(\alpha) and \underline{g}(\alpha) were required in order to estimate \underline{\hat{P}}_f and \overline{\hat{P}}_f, respectively. Since the limit state function is linear, interval arithmetic was employed, inasmuch as it is more efficient in terms of computational time than the vertex or the optimization method.

On the one hand, for the calculation of \underline{\hat{P}}_f, 59 failure samples lay between k_min = -5.8415 \times 10^{-5} and k_max = 6.4557 \times 10^{-4}, plus the 62 samples above the line corresponding to k_max. Therefore, \underline{\hat{P}}_f = (59 + 62)/(1.5 \times 10^6) = 8.0667 \times 10^{-5}.

On the other hand, for the calculation of \overline{\hat{P}}_f, 35 failure samples were found between k_min = -6.6220 \times 10^{-4} and k_max = 9.0329 \times 10^{-5}, plus the 108 samples above the line corresponding to k_max. Therefore, \overline{\hat{P}}_f = (35 + 108)/15,000 = 0.0095.
In conclusion, using 65 + 140 = 205 focal element samples, the
result coincides with the one obtained by means of interval Monte
Carlo simulation using 1:5 106 focal element samples, inasmuch
as all of the failure samples of the simulations were correctly
identified. This shows the efficiency of the method.
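The interval-arithmetic shortcut mentioned above relies on the monotonicity of a linear g over a box: the exact range of the function follows from the signs of its coefficients, so each focal element costs a single pass instead of 2^d vertex evaluations. A minimal sketch (our own illustrative helper, not the paper's implementation):

```python
def linear_g_range(a0, a, lo, hi):
    """Exact range [g_min, g_max] of g(x) = a0 + sum(a_i x_i) when each
    x_i lies in [lo_i, hi_i]; a positive coefficient attains the minimum
    at lo_i, a negative one at hi_i (and vice versa for the maximum)."""
    g_min = a0 + sum(ai * (l if ai >= 0 else h) for ai, l, h in zip(a, lo, hi))
    g_max = a0 + sum(ai * (h if ai >= 0 else l) for ai, l, h in zip(a, lo, hi))
    return g_min, g_max

# A focal element contributes to the lower probability when g_max <= 0,
# and to the upper probability when g_min <= 0:
gmin, gmax = linear_g_range(1.0, [2.0, -1.0], [0.0, 0.0], [1.0, 2.0])
print(gmin, gmax)   # -1.0 3.0
```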
7. Conclusions
In this paper, a very efficient method for the reliability analysis of structures under uncertainty, in which the input variables are modeled using any representation provided by random set theory (that is, by possibility distributions, intervals, probability boxes, CDFs or Dempster-Shafer structures), has been presented. Each focal element that is sampled from the random set is modeled either as a point in the space X = (0, 1]^d, or as a d-dimensional box in the space of input variables 𝒳. In X, a copula C naturally models the dependence between the input variables; also in X there exist two limit state functions \underline{g}(\alpha) and \overline{g}(\alpha) that define the sets F_LP and F_UP; the focal elements corresponding to F_LP and F_UP contribute to the evaluation of the lower and upper probabilities of failure, respectively.

After sampling a large number of points M from the copula C, the nonlinear transformation \Phi^{-1} is employed to map the points in X to the standard Gaussian space U; in consequence, the limit state functions \underline{g}(\alpha) and \overline{g}(\alpha) are transformed into the functions \underline{g}(u) and \overline{g}(u), respectively. Using the point representation of the focal elements in U, an efficient algorithm proposed in [51] is executed; this algorithm takes into account that the unit vectors that point to the design points of \underline{g}(u) and \overline{g}(u) are directions of steepest change of those limit state functions. Using this fact, another nonlinear mapping is performed on the samples M, from the space U to a two-dimensional representation which allows visualizing the evolution of the order statistics of a limit state function. On this basis, it is very easy to select those samples that are highly likely to produce failure, because the failure domain is mapped to a standard position. Using the selected samples, the bounds of the failure probability are computed by a procedure that represents a drastic reduction of the computational labor implied by plain Monte Carlo simulation for problems defined with uncertain distributions, while delivering the same results. The numerical experiments confirm the solid theoretical foundation of this proposal.
Acknowledgements
Financial support for the realization of the present research has
been received from the Universidad Nacional de Colombia. The
support is gratefully acknowledged.
Fig. 5. (v_1, v_2) representation for the example considered in Section 6.2 for both limit state functions \underline{g}(u) (top) and \overline{g}(u) (bottom). In these plots it is possible to see Eq. (12) for different values of k. Observe that the curve with k = 0 provides a rough separation between safe and failure samples. The dashed lines correspond to k_min (red) and k_max (blue) when the algorithm stopped, at 65 and 140 evaluations of \overline{g}(u) and \underline{g}(u), respectively. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

References
[1] Schuëller GI, Pradlwarter HJ, Koutsourelakis PS. A critical appraisal of
reliability estimation procedures for high dimensions. Probab Eng Mech
2004;19:463–74.
[2] Schuëller GI, Stix R. A critical appraisal of methods to determine failure
probabilities. Struct Safety 1987;4:293–309.
[3] Sexsmith RG. Probability-based safety analysis – value and drawbacks. Struct
Safety 1999;21:303–10.
[4] Elishakoff I. Essay on reliability index, probabilistic interpretation of safety
factor and convex models of uncertainty. In: Casciati F, Roberts JB, editors.
Reliability problems: general principles and applications in mechanics of solids
and structures. Wien: Springer-Verlag; 1991. p. 237–71.
[5] Ben-Haim Y. Robust reliability in the mechanical sciences. Berlin: Springer
Verlag; 1996.
[6] Oberguggenberger M, Fellin W. The fuzziness and sensitivity of failure
probabilities. In: Fellin W, Lessmann H, Oberguggenberger M, Vieider R,
editors. Analyzing uncertainty in civil engineering. Berlin: Springer; 2010. p.
33–49.
[7] Vapnik VN. The nature of statistical learning theory. New York: Springer
Verlag; 2000.
[8] Vapnik VN. Statistical learning theory. New York: John Wiley and Sons; 1998.
[9] Elishakoff I. Essay on uncertainties in elastic and viscoelastic structures: from
A.M. Freudenthal’s criticisms to modern convex modeling. Comput Struct
1995;56:871–985.
[10] Elishakoff I, Ohsaki M. Optimization and anti-optimization of structures under
uncertainty. London: Imperial College Press; 2010.
[11] Elishakoff I. Are probabilistic and anti-optimization approaches compatible? In: Elishakoff I, editor. Whys and hows in uncertainty
modelling. Wien: Springer-Verlag; 1999. p. 263–355.
[12] Cremona C, Gao Y. The possibilistic reliability theory: theoretical aspects and
applications. Struct Safety 1997;19:173–201.
[13] Möller B, Beer M. Fuzzy randomness: uncertainty in civil engineering and
computational mechanics. Berlin: Springer; 2010.
[14] Hall JW, Lawry J. Fuzzy label methods for constructing imprecise limit state
functions. Struct Safety 2003;25:317–41.
[15] Joslyn C, Booker JM. Generalized information theory for engineering modeling
and simulation. In: Nikolaidis E, Ghiocel DM, Singhal S, editors. Engineering
design reliability. Boca Raton: CRC Press; 2005. p. 9.1–9.40.
[16] Oberkampf WL, Helton JC. Evidence theory for engineering applications. In:
Nikolaidis E, Ghiocel DM, Singhal S, editors. Engineering design
reliability. Boca Raton: CRC Press; 2005. p. 10.1–10.30.
[17] Ben-Haim Y, Elishakoff I. Convex models of uncertainty in applied
mechanics. Amsterdam: Elsevier; 1990.
[18] Koyluoglu U, Cakmak S, Ahmet N, Soren RK. Interval algebra to deal with
pattern loading and structural uncertainty. J Eng Mech 1995;121:1149–57.
[19] Nakagiri S, Suzuki K. Finite element interval analysis of external loads
identified by displacement input with uncertainty. Comput Methods Appl
Mech Eng 1999;168:63–72.
[20] Qiu Z. Convex models and interval analysis method to predict the effect of
uncertain-but-bounded parameters on the buckling of composite structures.
Comput Methods Appl Mech Eng 2005;194:2175–89.
[21] McWilliam S. Anti-optimisation of uncertain structures using interval analysis.
Comput Struct 2001;79:421–30.
[22] Degrauwe D, Lombaert G, De Roeck G. Improving interval analysis in finite
element calculations by means of affine arithmetic. Comput Struct
2010;88:247–54.
[23] Zhang H, Mullen RL, Muhanna RL. Interval Monte Carlo methods for structural
reliability. Struct Safety 2010;32:183–90.
[24] Dessombz O, Thouverez F, Lâiné JP, Jézéquel L. Analysis of mechanical systems
using interval computations applied to finite element methods. J Sound Vib
2001;239:949–68.
[25] Impollonia N, Muscolino G. Interval analysis of structures with uncertain-but-bounded axial stiffness. Comput Methods Appl Mech Eng 2011;200:1945–62.
[26] Qiu Z, Wang X. Comparison of dynamic response of structures with uncertain-but-bounded parameters using non-probabilistic interval analysis method and
probabilistic approach. Int J Solids Struct 2003;40:5423–39.
[27] Muhanna RL, Mullen RL. Interval methods for reliable computation. In:
Nikolaidis E, Ghiocel DM, Singhal S, editors. Engineering Design
Reliability. Boca Raton: CRC Press; 2005. p. 12.1–12.24.
[28] Alvarez DA. On the calculation of the bounds of probability of events using
infinite random sets. Int J Approx Reason 2006;43:241–67.
[29] Alvarez DA. A Monte Carlo-based method for the estimation of lower and
upper probabilities of events using infinite random sets of indexable type.
Fuzzy Sets Syst 2009;160:384–401.
[30] Tonon F. Using random set theory to propagate epistemic uncertainty through
a mechanical system. Reliab Eng Syst Safety 2004;85:169–81.
[31] Chernousko FL. What is ellipsoidal modelling and how to use it for control and
state estimation? In: Elishakoff I, editor. Whys and hows in uncertainty
modelling. Wien: Springer-Verlag; 1999. p. 127–88.
[32] Banichuk NV, Neittaanmäki PJ. Structural optimization with uncertainties. Berlin: Springer; 2010.
[33] Hlaváček I, Chleboun J, Babuška I. Uncertain input data problems and the
worst scenario method. Amsterdam: Elsevier; 2004.
[34] Elishakoff I, Zingales M. Contrasting probabilistic and anti-optimization
approaches in an applied mechanics problem. Int J Solids Struct
2003;40:4281–97.
[35] Wang X, Wang L, Elishakoff I, Qiu Z. Probability and convexity concepts are not
antagonistic. Acta Mech 2011;219:45–64.
[36] Tonon F, Bernardini A, Elishakoff I. Hybrid analysis of uncertainty: probability,
fuzziness and anti-optimization. Chaos Solitons Fract 2001;12:1403–14.
[37] Jiang C, Han X, Lu GY, Liu J, Zhang Z, Bai YC. Correlation analysis of nonprobabilistic convex model and corresponding structural reliability technique.
Comput Methods Appl Mech Eng 2011;200:2528–46.
[38] Guo J, Du X. Sensitivity analysis with mixture of epistemic and aleatory
uncertainties. AIAA J 2007;45:2337–49.
[39] Du X. Unified uncertainty analysis by the first order reliability method. J Mech
Des 2008;130:091401–10.
[40] Guo J, Du X. Reliability sensitivity analysis with random and interval variables.
Int J Numer Methods Eng 2009;78:1585–617.
[41] Zhao YG, Ono T. New approximations for SORM: Part I. J Eng Mech 1999;
125:79–85.
[42] Mahadevan S, Shi P. Multiple linearization method for nonlinear reliability
analysis. J Eng Mech 2001;127:1165–73.
[43] Zhao YG, Ono T. A general procedure for first/second-order reliability method (FORM/SORM). Struct Safety 1999;21:95–112.
[44] Eamon CD, Thompson M, Liu Z. Evaluation of accuracy and efficiency of some
simulation and sampling methods in structural reliability analysis. Struct
Safety 2005;27:356–92.
[45] Valdebenito MA, Pradlwarter HJ, Schuëller GI. The role of the design point for
calculating failure probabilities in view of dimensionality and structural
nonlinearities. Struct Safety 2010;32:101–11.
[46] Katafygiotis LS, Zuev KM. Geometric insight into the challenges of solving
high-dimensional reliability problems. Probab Eng Mech 2008;23:208–18.
[47] Zhang Hao, Dai Hongzhe, Beer Michael, Wang Wei. Structural reliability
analysis on the basis of small samples: an interval quasi-Monte Carlo method.
Mech Syst Signal Process 2013;37(1–2):137–51.
[48] Hasofer AM, Lind NC. Exact and invariant second moment code format. J Eng
Mech Div 1974;100:111–21.
[49] Hurtado JE, Alvarez DA. The encounter of interval and probabilistic approaches
to structural reliability at the design point. Comput Methods Appl Mech Eng
2012;225–228:74–94.
[50] Hurtado Jorge E, Alvarez Diego A, Ramírez Juliana. Fuzzy structural analysis based
on fundamental reliability concepts. Comput Struct 2012;112–113:183–92.
[51] Hurtado Jorge E, Alvarez Diego A. A method for enhancing computational
efficiency in Monte Carlo calculation of failure probabilities by exploiting
FORM results. Comput Struct 2013;117:95–104.
[52] Sklar A. Fonctions de répartition à n dimensions et leurs marges. Publ Inst Stat
Univ Paris 1959;8:229–31.
[53] Sklar A. Random variables, distribution functions, and copulas – a personal
look backward and forward. In: Rüschendorf L, Schweizer B, Taylor M, editors.
Distributions with fixed marginals and related topics. Hayward, CA: Institute
of Mathematical Statistics; 1996. p. 1–14.
[54] Kolmogorov AN, Fomin SV. Introductory real analysis. New York: Dover
Publications; 1970. ISBN: 0-486-61226-0.
[55] Nelsen RB. An introduction to copulas. Springer Series in Statistics. New
York: Springer; 2010.
[56] Dempster Arthur P. Upper and lower probabilities induced by a multivalued
mapping. Ann Math Stat 1967;38:325–39.
[57] Molchanov Ilya. Theory of random sets. Springer; 2005.
[58] Nguyen Hung T. An introduction to random sets. Chapman and Hall; 2006.
[59] Alvarez Diego A. Infinite random sets and applications in uncertainty analysis. PhD thesis, Arbeitsbereich für Technische Mathematik am Institut für
Grundlagen der Bauingenieurwissenschaften, Leopold-Franzens-Universität Innsbruck, Innsbruck, Austria; 2007. <https://sites.google.com/site/diegoandresalvarezmarin/RSthesis.pdf>.
[60] Ferson Scott, Kreinovich Vladik, Ginzburg Lev, Myers Davis S, Sentz Kari.
Constructing probability boxes and Dempster-Shafer structures. Report
SAND2002-4015, Sandia National Laboratories, Albuquerque, NM; January
2003. <http://www.ramas.com/unabridged.zip>.
[61] Dubois Didier, Prade Henri. Possibility theory. New York: Plenum Press; 1988.
[62] Liu PL, Der Kiureghian A. Multivariate distribution models with prescribed
marginals and covariances. Probab Eng Mech 1986;1:105–12.
[63] Melchers RE. Structural reliability: analysis and prediction. Chichester: John
Wiley and Sons; 1999.
[64] Hurtado JE. Dimensionality reduction and visualization of structural reliability
problems using polar features. Probab Eng Mech 2012;29:16–31.
[65] Fiessler B, Neumann HJ, Rackwitz R. Quadratic limit states in structural
reliability. J Eng Mech Div 1979;105:661–76.
[66] Tvedt L. Distribution of quadratic forms in normal space – application to
structural reliability. J Eng Mech 1990;116:1183–7.
[67] Cai GQ, Elishakoff I. Refined second-order reliability analysis. Struct Safety
1994;14:267–76.
[68] Der Kiureghian A, Lin HZ, Hwang SJ. Second-order reliability approximations. J Eng Mech 1987;113:1208–25.
[69] Du Xiaoping. Unified uncertainty analysis by the first order reliability method.
J Mech Des 2008;130(9). pp. 091401-1–091401-10.
[70] Rackwitz R, Fiessler B. Structural reliability under combined load sequences.
Comput Struct 1978;9:489–94.
[71] Ohsaki M, Ikeda K. Stability and optimization of structures. New
York: Springer; 2010.