
Convexity of chance constraints with dependent random variables: the use of copulae

René Henrion¹ and Cyrille Strugarek²

¹ Weierstrass Institute for Applied Analysis and Stochastics, 10117 Berlin, Germany. henrion@wias-berlin.de
² Credit Portfolio Management, Calyon Credit Agricole CIB, 92920 Paris, France. cyrille.strugarek@calyon.com

Summary. We consider the convexity of chance constraints with random right-hand side. While this issue is well understood (thanks to Prékopa's Theorem) if the mapping operating on the decision vector is componentwise concave, things become more delicate when relaxing the concavity property. In an earlier paper, the significantly weaker r-concavity concept could be exploited in order to derive eventual convexity (starting from a certain probability level) for feasible sets defined by chance constraints. This result heavily relied on the assumption of the random vector having independent components. A generalization to arbitrary multivariate distributions is all but straightforward. The aim of this paper is to derive the same convexity result for distributions modelled via copulae. In this way, correlated components are admitted, but a certain correlation structure is imposed through the choice of the copula. We identify a class of copulae admitting eventually convex chance constraints.

Acknowledgement. The work of the first author was supported by the DFG Research Center Matheon "Mathematics for key technologies" in Berlin.

1 Introduction
Chance constraints or probabilistic constraints represent an important tool for mod-
eling restrictions on decision making in the presence of uncertainty. They take the
typical form
P(h(x, ξ) ≥ 0) ≥ p, (1)
where x ∈ Rn is a decision vector, ξ : Ω → Rm is an m-dimensional random vector
defined on some probability space (Ω, A, P), h : Rn × Rm → Rs is a vector-valued
mapping and p ∈ [0, 1] is some probability level. Accordingly, a decision vector x
(which is subject to optimization with respect to some given objective function) is
declared to be feasible, if the random inequality system h(x, ξ) ≥ 0 is satisfied at
least with probability p. A compilation of practical applications in which constraints
of the type (1) play a crucial role may be found in the standard references [Pre95], [Pre03]. Not surprisingly, one of the most important theoretical questions related to
such constraints is that of convexity of the set of decisions x satisfying (1).
In this paper, we shall be interested in chance constraints with separated random
vectors, which come as a special case of (1) by putting h(x, ξ) = g(x) − ξ. More
precisely, we want to study convexity of a set of feasible decisions defined by

M (p) = {x ∈ Rn |P(ξ ≤ g(x)) ≥ p}, (2)


where g : Rn → Rm is some vector-valued mapping. With F : Rm → R denoting the
distribution function of ξ, the same set can be rewritten as

M (p) = {x ∈ Rn |F (g(x)) ≥ p}. (3)

It is well-known from Prékopa’s classical work (see, e.g., [Pre95], Th. 10.2.1) that
this set is convex for all levels p provided that the law P ◦ ξ −1 of ξ is a log-concave
probability measure on Rm and that the components gi of g are concave. The latter
condition is satisfied, of course, in the important case of g being a linear mapping.
However, the concavity assumption may be too strong in many engineering appli-
cations, so the question arises, whether it can be relaxed to some weaker type of
(quasi-) concavity. To give an example, the function e^x is not concave, while its log
is. It has been shown in an earlier paper that such relaxation is possible indeed,
but it comes at the price of a rather strong assumption of ξ having independent
components and of a slightly weaker result, in which convexity of M (p) cannot be
guaranteed for any level p but only for sufficiently large levels. We shall refer to
this property as eventual convexity. Note, however, that this is not a serious restric-
tion in chance constrained programming, because probability levels are chosen there
close to 1 anyway. To provide an example (see Example 4.1 in [HS08]), let the two-dimensional random vector ξ have a bivariate standard normal distribution with
independent components and assume that

gi(x1, x2) = 1/(x1² + x2² + 0.1)  (i = 1, 2).

Then, for p∗ ≈ 0.7 one has that M(p) is nonconvex for p < p∗, whereas it is convex for p ≥ p∗. Note that, though the multivariate normal distribution is log-concave (see [Pre95]), the result by Prékopa mentioned above does not apply because the gi are not concave. As a consequence, M(p) is not convex for all p, as would be guaranteed by that result. Nonetheless, eventual convexity can be verified by the tools developed in [HS08].
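For illustration, the probability in this example can be evaluated directly: by independence of the components, P(ξ ≤ g(x)) factors into Φ(g1(x))·Φ(g2(x)). The following is a minimal sketch (not from [HS08]; the helper names are ours and scipy is assumed available):

```python
# Sketch (our helper names; assumes scipy): evaluating the feasibility
# probability of the example, using that independence of the components lets
# P(xi <= g(x)) factor into Phi(g1(x)) * Phi(g2(x)).
from scipy.stats import norm

def g(x1, x2):
    # common component g_i of the example
    return 1.0 / (x1**2 + x2**2 + 0.1)

def feasibility_prob(x1, x2):
    gi = g(x1, x2)
    return norm.cdf(gi) ** 2  # product of the two identical marginal terms

print(feasibility_prob(0.0, 0.0))  # close to 1, since g(0, 0) = 10
print(feasibility_prob(1.0, 1.0))  # noticeably smaller
```

Scanning such values over a grid of decisions x is one way to visualize the level sets M(p) and the probability threshold at which convexity sets in.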
The aim of this paper is to go beyond the restrictive independence assumption
made in [HS08]. While to do this directly for arbitrary multivariate distributions of
ξ seems to be very difficult, we shall see that positive results can be obtained in case
that this distribution is modeled by means of a copula. Copulae allow one to represent dependencies in multivariate distributions in a more efficient way than correlation does. They may provide a good approximation to the true underlying distribution just on the basis of its one-dimensional margins. This also offers a new perspective on modeling chance constraints which, to the best of our knowledge, has not been considered extensively so far. The paper is organized as follows: in a first section, some basics on
copulae are presented and the concepts of r-concave and r-decreasing functions
introduced. The following section contains our main result on eventual convexity
of chance constraints defined by copulae. In a further section, log-exp concavity of
copulae, a decisive property in the mentioned convexity result, is discussed. Finally,

a small numerical example illustrates the application of the convexity result in the
context of chance constrained programming.

2 Preliminaries
2.1 Basics on copulae

In this section, we compile some basic facts about copulae. We refer to the intro-
duction [Nel06] as a standard reference.
Definition 1. A copula is a distribution function C : [0, 1]m → [0, 1] of some ran-
dom vector whose marginals are uniformly distributed on [0, 1].
A theorem by Sklar states that for any distribution function F : Rm → [0, 1] with
marginals (Fi )1≤i≤m there exists a copula C for F satisfying

∀x ∈ Rm, F(x) = C(F1(x1), . . . , Fm(xm)). (4)

Moreover, if the marginals Fi are continuous, then the copula C is uniquely given
by
C(u) = F(F1^{-1}(u1), . . . , Fm^{-1}(um));  Fi^{-1}(t) := inf{r | Fi(r) ≥ t}. (5)

Note that Sklar’s Theorem may be used in two directions: either in order to identify
the copula of a given distribution function or to define a distribution function via a
given copula (for instance a copula with desirable properties could be used to fit a
given distribution function). The following are first examples for copulae:
• Independent or Product Copula: C(u) = Π_{i=1}^m ui
• Maximum or Comonotone Copula: C(u) = min_{1≤i≤m} ui
• Gaussian or Normal Copula: C^Σ(u) = Φ_Σ(Φ^{-1}(u1), . . . , Φ^{-1}(um))

Here, Φ_Σ refers to a multivariate standard normal distribution function with mean zero, unit variances and covariance (= correlation) matrix Σ. Φ is just the one-dimensional standard normal distribution function.
With regard to (4), the independent copula simply creates a joint distribution
with independent components from given marginals. The theoretical interest of the
maximum copula (also explaining its name) comes from the fact that any copula C
is dominated by the maximum copula:

C(u) ≤ min ui . (6)


1≤i≤m

The Gaussian copula has much practical importance (similar to the normal distribution function). It allows one to approximate a multivariate distribution function F having marginals Fi which are not necessarily normally distributed, by a composition of a multivariate normal distribution function with a mapping having components Φ^{-1} ◦ Fi defined via one-dimensional distribution functions ([KW97]).
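As a small sketch of this construction (our function names; scipy is assumed available), the bivariate Gaussian copula can be evaluated by composing the joint normal distribution function with componentwise quantile transforms:

```python
# Sketch (our helper names; assumes scipy): evaluating the bivariate Gaussian
# copula C_Sigma(u) = Phi_Sigma(Phi^{-1}(u1), Phi^{-1}(u2)) from the text.
import numpy as np
from scipy.stats import norm, multivariate_normal

def gaussian_copula(u, rho):
    """Bivariate Gaussian copula with correlation rho, evaluated at u = (u1, u2)."""
    z = norm.ppf(u)  # componentwise standard normal quantile transform Phi^{-1}
    cov = np.array([[1.0, rho], [rho, 1.0]])
    return multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf(z)

u = np.array([0.3, 0.7])
# With rho = 0 the Gaussian copula reduces to the product copula: 0.3 * 0.7 = 0.21
print(gaussian_copula(u, 0.0))
# Any copula is dominated by the maximum copula, cf. (6):
print(gaussian_copula(u, 0.5) <= min(u))  # True
```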
Many other practically important copulae are collected in a single family:

Definition 2. A copula C is called Archimedean if there exists a continuous strictly


decreasing function ψ : [0, 1] → R+ such that ψ(1) = 0 and
C(u) = ψ^{-1}( Σ_{i=1}^m ψ(ui) ).

ψ is called the generator of the Archimedean copula C. If limu→0 ψ(u) = +∞, then
C is called a strict Archimedean copula.
Conversely, we can start from a generator ψ to obtain a copula. The following
proposition provides sufficient conditions to get a copula from a generator:
Proposition 1. Let ψ : [0, 1] → R+ such that
1. ψ is strictly decreasing, ψ(1) = 0, limu→0 ψ(u) = +∞
2. ψ is convex
3. (−1)^k (d^k/dt^k) ψ^{-1}(t) ≥ 0  ∀k = 0, 1, . . . , m, ∀t ∈ R+.
Then, C(u) = ψ^{-1}( Σ_{i=1}^m ψ(ui) ) is a copula.

Example 1. We have the following special instances of Archimedean copulae:


1. The independent copula is a strict Archimedean copula whose generator is
ψ(t) = − log(t).
2. Clayton copulas are the Archimedean copulas defined by the strict generator ψ(t) = θ^{-1}(t^{-θ} − 1), with θ > 0.
3. Gumbel copulas are the Archimedean copulas defined by the strict generator ψ(t) = (− log(t))^θ, with θ ≥ 1.
4. Frank copulas are the Archimedean copulas defined by the strict generator ψ(t) = − log((e^{-θt} − 1)/(e^{-θ} − 1)), with θ > 0.
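These generators can be turned into working copulae via C(u) = ψ^{-1}(ψ(u1) + . . . + ψ(um)). A short sketch (our helper names) does this for the product, Clayton and Gumbel cases and checks two defining properties:

```python
# Sketch (our helper names): building Archimedean copulas from the generators
# of Example 1 via C(u) = psi^{-1}(psi(u1) + ... + psi(um)).
import math

def archimedean(psi, psi_inv):
    def C(*u):
        return psi_inv(sum(psi(ui) for ui in u))
    return C

# product copula: psi(t) = -log t, psi^{-1}(s) = exp(-s)
product = archimedean(lambda t: -math.log(t), lambda s: math.exp(-s))

# Clayton with theta = 2: psi(t) = (t^{-theta} - 1) / theta
theta = 2.0
clayton = archimedean(lambda t: (t ** -theta - 1.0) / theta,
                      lambda s: (theta * s + 1.0) ** (-1.0 / theta))

# Gumbel with theta = 1.5: psi(t) = (-log t)^theta
th = 1.5
gumbel = archimedean(lambda t: (-math.log(t)) ** th,
                     lambda s: math.exp(-s ** (1.0 / th)))

print(product(0.4, 0.5))        # ~0.2 (independence)
print(clayton(0.4, 1.0))        # ~0.4, since psi(1) = 0 forces C(u, 1) = u
print(gumbel(0.4, 0.5) <= 0.4)  # True: dominated by the maximum copula, cf. (6)
```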

2.2 r-concavity and r-decreasing density functions


We recall the definition of an r-concave function (see, e.g., [Pre95]):
Definition 3. A function f : Rs → (0, ∞) is called r-concave for some r ∈ [−∞, ∞],
if
f(λx + (1 − λ)y) ≥ [λf^r(x) + (1 − λ)f^r(y)]^{1/r}  ∀x, y ∈ Rs, ∀λ ∈ [0, 1]. (7)
In this definition, the cases r ∈ {−∞, 0, ∞} are to be interpreted by continuity.
In particular, 1-concavity amounts to classical concavity, 0-concavity equals log-
concavity (i.e., concavity of log f ), and −∞-concavity identifies quasi-concavity
(this means that the right-hand side of the inequality in the definition becomes
min{f (x), f (y)}). We recall, that an equivalent way to express log-concavity is the
inequality
f(λx + (1 − λ)y) ≥ f^λ(x) f^{1−λ}(y)  ∀x, y ∈ Rs, ∀λ ∈ [0, 1]. (8)
For r < 0, one may raise (7) to the negative power r and recognize, upon reversing the inequality sign, that this reduces to convexity of f^r. If f is r∗-concave, then f is r-concave for all r ≤ r∗. In particular, concavity implies log-concavity. We shall be mainly interested in the case r ≤ 1.

The following property will be crucial in the context of this paper:



Definition 4. We call a function f : R → R r-decreasing for some r ∈ R, if it is continuous on (0, ∞) and if there exists some t∗ > 0 such that the function t^r f(t) is strictly decreasing for all t > t∗.

Evidently, 0-decreasing means strictly decreasing in the classical sense. If f is a


nonnegative function like the density of some random variable, then r-decreasing
implies r′ -decreasing whenever r′ ≤ r. Therefore, one gets narrower families of r-
decreasing density functions with r → ∞. If f is not just continuous on (0, ∞)
but happens even to be differentiable there, then the property of being r-decreasing
amounts to the condition
tf ′ (t) + rf (t) < 0 for all t > t∗ .
It turns out that most one-dimensional densities of practical use share the property of being r-decreasing for any r > 0 (an exception being the density of the Cauchy distribution, which is r-decreasing only for r < 2). The following table, borrowed from [HS08] for the reader's convenience, compiles a collection of one-dimensional densities all of which are r-decreasing for any r > 0, along with the respective t∗-values from Definition 4, labeled as t∗r to emphasize their dependence on r:

Table 1. t∗r-values in the definition of r-decreasing densities for a set of common distributions.

Law          Density                                               t∗r
normal       (1/(√(2π) σ)) exp(−(t − µ)²/(2σ²))                    (µ + √(µ² + 4rσ²))/2
exponential  λ exp(−λt)  (t > 0)                                   r/λ
Weibull      a b t^(b−1) exp(−a t^b)  (t > 0)                      ((b + r − 1)/(ab))^(1/b)
Gamma        (b^a/Γ(a)) exp(−bt) t^(a−1)  (t > 0)                  (a + r − 1)/b
χ            (1/(2^(n/2−1) Γ(n/2))) t^(n−1) exp(−t²/2)  (t > 0)    √(n + r − 1)
χ²           (1/(2^(n/2) Γ(n/2))) t^(n/2−1) exp(−t/2)  (t > 0)     n + 2r − 2
log-normal   (1/(√(2π) σ t)) exp(−(log t − µ)²/(2σ²))  (t > 0)     e^(µ+(r−1)σ²)
Maxwell      √(2/π) (t²/σ³) exp(−t²/(2σ²))  (t > 0)                σ √(r + 2)
Rayleigh     (2t/λ) exp(−t²/λ)  (t > 0)                            √((r + 1)λ/2)
We shall need as an auxiliary result the following lemma whose proof is easy and
can be found in [HS08] (Lemma 3.1):
Lemma 1. Let F : R → [0, 1] be a distribution function with (r + 1)-decreasing density f for some r > 0. Then, the function z ↦ F(z^{−1/r}) is concave on (0, (t∗)^{−r}), where t∗ refers to Definition 4. Moreover, F(t) < 1 for all t ∈ R.
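Both the table entries and Definition 4 lend themselves to a quick numerical check. The following sketch (our helper names; exponential row with λ = 1) verifies that t^r f(t) is strictly decreasing beyond t∗r = r/λ but not before:

```python
# Sketch (our helper names): checking Definition 4 numerically for the
# exponential density f(t) = lam * exp(-lam * t), whose table entry gives
# t_r^* = r / lam.
import math

def exp_density(t, lam=1.0):
    return lam * math.exp(-lam * t)

def is_strictly_decreasing(h, ts):
    return all(h(a) > h(b) for a, b in zip(ts, ts[1:]))

r, lam = 3.0, 1.0
t_star = r / lam  # = 3
beyond = [t_star + 0.1 * k for k in range(1, 50)]  # grid of points beyond t_r^*
print(is_strictly_decreasing(lambda t: t**r * exp_density(t, lam), beyond))  # True
# below t_r^* the function t^r f(t) is still increasing, so the check fails:
print(is_strictly_decreasing(lambda t: t**r * exp_density(t, lam), [1.0, 2.0, 2.9]))  # False
```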

3 Main Result
The purpose of our analysis is to investigate convexity of the set

M(p) = {x ∈ Rn | C(F1(g1(x)), . . . , Fm(gm(x))) ≥ p}. (9)

Here, g is as in (3), the Fi are the one-dimensional marginals of some m-dimensional


distribution function F and C is a copula. If C is the exact copula associated with
F via (4) or (5), respectively, then the feasible set (9) coincides with the chance
constraint (3). In general, we rather have in mind the idea that F is unknown and C
serves its approximation. Then, (9) can be understood as a chance constraint similar
to (3) or (2), but with the distribution function F occurring there replaced by the
distribution function C ◦ H, where Hi (x) := Fi (xi ) for i = 1, . . . , m. We introduce
the following property of a copula which will be crucial for our convexity result:
Definition 5. Let q ∈ (0, 1)^m. A copula C : [0, 1]^m → [0, 1] is called log exp-concave on Π_{i=1}^m [qi, 1) if the mapping C̃ : R^m → R defined by

C̃(u) = log C(e^{u1}, . . . , e^{um}), (10)

is concave on Π_{i=1}^m [log qi, 0).
The following statement of our main result makes reference to Definitions 3, 4, and 5.
Theorem 1. In (9), we make the following assumptions for i = 1, . . . , m:

1. There exist ri > 0 such that the components gi are (−ri )-concave.
2. The marginal distribution functions Fi admit (ri + 1)-decreasing densities fi .
3. The copula C is log exp-concave on Π_{i=1}^m [Fi(t∗i), 1), where the t∗i refer to Definition 4 in the context of fi being (ri + 1)-decreasing.

Then, M (p) in (9) is convex for all p > p∗ := max{Fi (t∗i )|1 ≤ i ≤ m}.

Proof. Let p > p∗, λ ∈ [0, 1] and x, y ∈ M(p) be arbitrary. We have to show that λx + (1 − λ)y ∈ M(p). We put

q_i^x := Fi(gi(x)) < 1,  q_i^y := Fi(gi(y)) < 1  (i = 1, . . . , m), (11)

where the strict inequalities rely on the second statement of Lemma 1. In particular, by (11), the inclusions x, y ∈ M(p) mean that

C(q_1^x, . . . , q_m^x) ≥ p,  C(q_1^y, . . . , q_m^y) ≥ p. (12)

Now, (11), (6), (12) and the definition of p∗ entail that

1 > q_i^x ≥ p > Fi(t∗i) ≥ 0,  1 > q_i^y ≥ p > Fi(t∗i) ≥ 0  (i = 1, . . . , m). (13)

For τ ∈ [0, 1], we denote the τ-quantile of Fi by

F̃i(τ) := inf{z ∈ R | Fi(z) ≥ τ}.

Note that, for τ ∈ (0, 1), F̃i(τ) is a real number. Having a density by assumption 2., the Fi are continuous distribution functions. As a consequence, the quantile functions F̃i satisfy the implication

q > Fi(z) =⇒ F̃i(q) > z  ∀q ∈ (0, 1), ∀z ∈ R.

Now, (11) and (13) provide the relations

gi(x) ≥ F̃i(q_i^x) > t∗i > 0,  gi(y) ≥ F̃i(q_i^y) > t∗i > 0  (i = 1, . . . , m) (14)

(note that t∗i > 0 by Definition 4). In particular, for all i = 1, . . . , m, it holds that

[ min{F̃i^{−ri}(q_i^x), F̃i^{−ri}(q_i^y)}, max{F̃i^{−ri}(q_i^x), F̃i^{−ri}(q_i^y)} ] ⊆ (0, (t∗i)^{−ri}). (15)

Along with assumption 1., (14) yields for i = 1, . . . , m:

gi(λx + (1 − λ)y) ≥ [λ gi^{−ri}(x) + (1 − λ) gi^{−ri}(y)]^{−1/ri}
                  ≥ [λ F̃i^{−ri}(q_i^x) + (1 − λ) F̃i^{−ri}(q_i^y)]^{−1/ri}. (16)

The monotonicity of distribution functions allows us to continue by

Fi(gi(λx + (1 − λ)y)) ≥ Fi([λ F̃i^{−ri}(q_i^x) + (1 − λ) F̃i^{−ri}(q_i^y)]^{−1/ri})  (i = 1, . . . , m). (17)

Owing to assumption 2., Lemma 1 guarantees that the functions z ↦ Fi(z^{−1/ri}) are concave on (0, (t∗i)^{−ri}). In particular, these functions are log-concave on the indicated interval, as this is a weaker property than concavity (see Section 2.2). By virtue of (15) and (8), this allows us to continue (17) as

Fi(gi(λx + (1 − λ)y)) ≥ [Fi(F̃i(q_i^x))]^λ [Fi(F̃i(q_i^y))]^{1−λ}  (i = 1, . . . , m).

Exploiting the fact that the Fi as continuous distribution functions satisfy the relation Fi(F̃i(q)) = q for all q ∈ (0, 1), and recalling that q_i^x, q_i^y ∈ (0, 1) by (13), we may deduce that

Fi(gi(λx + (1 − λ)y)) ≥ [q_i^x]^λ [q_i^y]^{1−λ}  (i = 1, . . . , m).

Since the copula C as a distribution function is increasing w.r.t. the partial order in Rm, we obtain that

C(F1(g1(λx + (1 − λ)y)), . . . , Fm(gm(λx + (1 − λ)y)))
  ≥ C([q_1^x]^λ [q_1^y]^{1−λ}, . . . , [q_m^x]^λ [q_m^y]^{1−λ}). (18)

According to assumption 3. and (13), we have

log C([q_1^x]^λ [q_1^y]^{1−λ}, . . . , [q_m^x]^λ [q_m^y]^{1−λ})
  = log C(exp(log([q_1^x]^λ [q_1^y]^{1−λ})), . . . , exp(log([q_m^x]^λ [q_m^y]^{1−λ})))
  = log C(exp(λ log q_1^x + (1 − λ) log q_1^y), . . . , exp(λ log q_m^x + (1 − λ) log q_m^y))
  ≥ λ log C(q_1^x, . . . , q_m^x) + (1 − λ) log C(q_1^y, . . . , q_m^y)
  ≥ λ log p + (1 − λ) log p = log p,

where the last inequality follows from (12). Combining this with (18) provides

C(F1(g1(λx + (1 − λ)y)), . . . , Fm(gm(λx + (1 − λ)y))) ≥ p.

Referring to (9), this shows that λx + (1 − λ)y ∈ M(p).

Remark 1. The critical probability level p∗ beyond which convexity can be guaranteed in Theorem 1 is completely independent of the mapping g; it depends only on the distribution functions Fi. In other words, for given distribution functions Fi, the convexity of M(p) in (2) for p > p∗ can be guaranteed for a whole class of mappings g satisfying the first assumption of Theorem 1. Therefore, it should come as no surprise that, for specific mappings g, even smaller critical values p∗ may apply.

4 Log exp-concavity of copulae


Having a look at the assumptions of Theorem 1, the first one is easily checked from a given explicit formula for the mapping g, while the second one can be verified, for instance, via Table 1. Thus, the third assumption, namely log exp-concavity of the copula, remains the key to applying the Theorem. The next proposition provides some examples of log exp-concave copulae:

Proposition 2. The independent, the maximum and the Gumbel copulae are log
exp-concave on [q, 1)m for any q > 0.

Proof. For the independent copula we have that

log C(e^{u1}, . . . , e^{um}) = log Π_{i=1}^m e^{ui} = Σ_{i=1}^m ui,

and for the maximum copula it holds that

log C(e^{u1}, . . . , e^{um}) = log min_{1≤i≤m} e^{ui} = min_{1≤i≤m} log e^{ui} = min_{1≤i≤m} ui.

Both functions are concave on (−∞, 0)^m, hence the assertion follows. For the Gumbel copula (see Example 1), the strict generator ψ(t) := (− log t)^θ (for some θ ≥ 1) implies that ψ^{−1}(s) = exp(−s^{1/θ}), whence, for ui ∈ (−∞, 0),

log C(e^{u1}, . . . , e^{um}) = log ψ^{−1}( Σ_{i=1}^m ψ(e^{ui}) ) = log ψ^{−1}( Σ_{i=1}^m (−ui)^θ )
  = log exp( −( Σ_{i=1}^m (−ui)^θ )^{1/θ} ) = −( Σ_{i=1}^m |ui|^θ )^{1/θ}
  = −‖u‖_θ.

Since ‖·‖_θ is a norm for θ ≥ 1 and since a norm is convex, it follows that −‖·‖_θ is concave on (−∞, 0)^m. Thus, the Gumbel copula too is log exp-concave on [q, 1)^m for any q > 0.
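The identity log C(e^{u1}, . . . , e^{um}) = −‖u‖θ derived above can be confirmed numerically, together with a midpoint concavity check (a sketch; the helper names are ours):

```python
# Sketch (our helper names): numerical confirmation of the identity
# log C(e^{u1}, ..., e^{um}) = -||u||_theta for the Gumbel copula, plus a
# midpoint concavity check of the negative theta-norm.
import math

th = 2.0  # Gumbel parameter theta >= 1

def log_C_exp(u):
    # log C(e^{u1}, ..., e^{um}) with Gumbel generator psi(t) = (-log t)^theta
    s = sum((-math.log(math.exp(ui))) ** th for ui in u)
    return -s ** (1.0 / th)

def neg_theta_norm(u):
    return -sum(abs(ui) ** th for ui in u) ** (1.0 / th)

u = [-0.5, -1.2]
v = [-2.0, -0.1]
mid = [(a + b) / 2 for a, b in zip(u, v)]
print(abs(log_C_exp(u) - neg_theta_norm(u)) < 1e-12)          # identity holds
print(log_C_exp(mid) >= 0.5 * (log_C_exp(u) + log_C_exp(v)))  # concavity at the midpoint
```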

Contrary to the previous positive examples, we have the following negative case:

Example 2. The Clayton copula is not log exp-concave on any domain Π_{i=1}^m [qi, 1) where q ∈ (0, 1)^m. Indeed, taking the generator ψ(t) := θ^{−1}(t^{−θ} − 1), we see that ψ^{−1}(s) = (θs + 1)^{−1/θ} and calculate

log C(e^{u1}, . . . , e^{um}) = log ψ^{−1}( Σ_{i=1}^m ψ(e^{ui}) ) = log ψ^{−1}( Σ_{i=1}^m θ^{−1}(e^{−θui} − 1) )
  = −θ^{−1} log( 1 − m + Σ_{i=1}^m e^{−θui} ).

Now, for any t < 0, define

ϕ(t) := log C(e^t, . . . , e^t) = −θ^{−1} log(1 − m + m e^{−θt}).

Its second derivative calculates as

ϕ″(t) = θ m (m − 1) e^{θt} / (m − (m − 1) e^{θt})².

Now, if C were log exp-concave on some Π_{i=1}^m [qi, 1) where q ∈ (0, 1)^m, then, in particular, ϕ would be concave on the interval [τ, 0), where τ := max_{i=1,...,m} log qi < 0. This, however, contradicts the fact that ϕ″(t) > 0 for any t.
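The failure of concavity of ϕ is easy to confirm numerically (a sketch with θ = 1 and m = 2; the names are ours):

```python
# Sketch (our names; theta = 1, m = 2): the diagonal function
# phi(t) = log C(e^t, ..., e^t) = -(1/theta) * log(1 - m + m * e^{-theta t})
# of the Clayton copula violates midpoint concavity, confirming Example 2.
import math

theta, m = 1.0, 2

def phi(t):
    return -(1.0 / theta) * math.log(1 - m + m * math.exp(-theta * t))

t1, t2 = -1.0, -0.1
mid = 0.5 * (t1 + t2)
# concavity would require phi(mid) >= (phi(t1) + phi(t2)) / 2; here it fails:
print(phi(mid) < 0.5 * (phi(t1) + phi(t2)))  # True
```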

Fig. 1. Plot of the function log C(e^x, e^y) for the bivariate Gaussian copula C in case of positive (left) and negative (right) correlation.

Concerning the Gaussian copula, it seems to be difficult to check log exp-concavity even in the bivariate case. At least, numerical evidence shows that the bivariate Gaussian copula is log exp-concave for a non-negative correlation coefficient but fails to be so for a negative correlation coefficient. This fact is illustrated in Figure 1, where the thick curve in the right diagram represents a convex (rather than concave) piece of the function log C(e^x, e^y).
The next proposition provides a necessary condition for a copula to be log exp-concave. It relates such copulae to the well-known family of log-concave distribution functions:

Proposition 3. If a copula C is log exp-concave on Π_{i=1}^m [qi, 1), then it is log-concave on the same domain.

Proof. By definition of log exp-concavity, we have that



log C (exp (λy1 + (1 − λ) z1 ) , . . . , exp (λym + (1 − λ) zm )) ≥


λ log C (exp y1 , . . . , exp ym ) + (1 − λ) log C (exp z1 , . . . , exp zm ) (19)

for all λ ∈ [0, 1] and all y, z ∈ Π_{i=1}^m [log qi, 0). Now, let λ ∈ [0, 1] and u, v ∈ Π_{i=1}^m [qi, 1)
be arbitrary. By concavity of log and monotonicity of exp, we have that

exp (log (λui + (1 − λ) vi )) ≥ exp (λ log ui + (1 − λ) log vi ) (i = 1, . . . , m) . (20)

Since C as a distribution function is nondecreasing with respect to the partial order


of Rm , the same holds true for log C by monotonicity of log. Consequently, first (20)
and then (19) yield that

log C (λu + (1 − λ) v) =
log C (exp (log (λu1 + (1 − λ) v1 )) , . . . , exp (log (λum + (1 − λ) vm ))) ≥
log C (exp (λ log u1 + (1 − λ) log v1 ) , . . . , exp (λ log um + (1 − λ) log vm )) ≥
λ log C (u1 , . . . , um ) + (1 − λ) log C (v1 , . . . , vm ) =
λ log C (u) + (1 − λ) log C (v) .
Hence, C is log-concave on Π_{i=1}^m [qi, 1).

5 An example
A small numerical example shall illustrate the application of Theorem 1. Consider
a chance constraint of type (2):
 
P(ξ1 ≤ x1^{−3/4}, ξ2 ≤ x2^{−1/4}) ≥ p.

Assume that ξ1 has an exponential distribution with parameter λ = 1 and ξ2 has


a Maxwell distribution with parameter σ = 1 (see Table 1). Assume that the joint
distribution of (ξ1 , ξ2 ) is not known and is approximated by means of a Gumbel
copula C with parameter θ = 1. Then, the feasible set defined by the chance con-
straint above is replaced by the set (9) with F1 , F2 being the cumulative distribu-
tion functions of the exponential and Maxwell distribution, respectively, and with
g1(x1, x2) = x1^{−3/4}, g2(x1, x2) = x2^{−1/4}. When checking the assumptions of Theorem
1, we observe first, that the third one is satisfied due to Proposition 2. Concerning
the first assumption, note that the components g1 and g2 fail to be concave (they
are actually convex). According to Section 2.2, the gi would be at least r-concave, if
for some r < 0 the function gi^r were convex. In our example, we may choose r = −4/3
for g1 and r = −4 for g2. Thus, we have checked the first assumption of the Theorem with r1 = 4/3 and r2 = 4. It remains to verify the second assumption. This amounts to requiring the density of the exponential distribution to be 7/3-decreasing
and the density of the Maxwell distribution to be 5-decreasing. According to Table
1 and with the given parameters of these distributions, these properties hold true
in the sense of Definition 4, with a t∗-value of t∗1 = (r1 + 1)/λ = 7/3 in case of the exponential distribution and with a t∗-value of t∗2 = σ√((r2 + 1) + 2) = √7 in case of
the Maxwell distribution. Now, Theorem 1 guarantees convexity of the feasible set
(9), whenever p > p∗ , where
p∗ = max{F1(7/3), F2(√7)}.

Fig. 2. Illustration of the feasible set M(p) in (9) for different probability levels in a numerical example. Convexity appears for levels of approximately 0.8 and larger.

Using easily accessible evaluation functions for the cumulative distribution functions of the exponential and Maxwell distributions, we calculate F1(7/3) ≈ 0.903 and F2(√7) ≈ 0.928. Hence, M(p) in (9) is definitely convex for probability levels larger than 0.928.
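These values are easy to reproduce; a minimal sketch, assuming scipy's expon and maxwell distributions with the parameters of the text (λ = 1, σ = 1):

```python
# Sketch (assumes scipy's expon and maxwell with the parameters of the text:
# lambda = 1, sigma = 1): recomputing p* = max{F1(7/3), F2(sqrt(7))}.
import math
from scipy.stats import expon, maxwell

F1 = expon.cdf(7.0 / 3.0)         # exponential distribution function, rate 1
F2 = maxwell.cdf(math.sqrt(7.0))  # Maxwell distribution function, scale 1
p_star = max(F1, F2)
print(round(F1, 3), round(F2, 3), round(p_star, 3))  # 0.903 0.928 0.928
```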
Figure 2 illustrates the resulting feasible set for different probability levels (fea-
sible points lie below the corresponding level lines). By visual analysis, convexity
holds true for levels p larger than approximately 0.8. Not surprisingly, our theoretical
result is more conservative. One reason for the gap is explained in Remark 1.

References
[HS08] Henrion, R., Strugarek, C.: Convexity of chance constraints with indepen-
dent random variables. Comput. Optim. Appl., 41, 263–276 (2008)
[KW97] Klaassen, C.A.J., Wellner, J.A.: Efficient Estimation in the Bivariate Nor-
mal Copula Model: Normal Margins are Least Favourable, Bernoulli, 3,
55–77 (1997)
[Nel06] Nelsen, R. B.: An Introduction to Copulas. Springer, New York (2006)
[Pre95] Prékopa, A.: Stochastic Programming. Kluwer, Dordrecht (1995)
[Pre03] Prékopa, A.: Probabilistic Programming. In: Ruszczynski, A., Shapiro, A. (eds.): Stochastic Programming. Handbooks in Operations Research and Management Science, Vol. 10. Elsevier, Amsterdam (2003)
