Article

PAC-Bayes Unleashed: Generalisation Bounds with Unbounded Losses

Maxime Haddouche, Benjamin Guedj, Omar Rivasplata and John Shawe-Taylor
1 ENS Paris-Saclay, 91190 Gif-sur-Yvette, France
2 Centre for Artificial Intelligence, Department of Computer Science, University College London, London WC1V 6LJ, UK
3 Inria, Lille–Nord Europe Research Centre and Inria London Programme, 59800 Lille, France
* Author to whom correspondence should be addressed.
Entropy 2021, 23(10), 1330; https://doi.org/10.3390/e23101330
Submission received: 22 August 2021 / Revised: 23 September 2021 / Accepted: 25 September 2021 / Published: 12 October 2021
(This article belongs to the Special Issue Approximate Bayesian Inference)

Abstract: We present new PAC-Bayesian generalisation bounds for learning problems with unbounded loss functions. This extends the relevance and applicability of the PAC-Bayes learning framework, where most of the existing literature focuses on supervised learning problems with a bounded loss function (typically assumed to take values in the interval [0;1]). In order to relax this classical assumption, we propose to allow the range of the loss to depend on each predictor. This relaxation is captured by our new notion of HYPothesis-dependent rangE (HYPE). Based on this, we derive a novel PAC-Bayesian generalisation bound for unbounded loss functions, and we instantiate it on a linear regression problem. To make our theory usable by the largest audience possible, we include discussions on actual computation, practicality and limitations of our assumptions.

1. Introduction

Since its emergence in the late 1990s, the PAC-Bayes theory (see the seminal works [1,2,3], the recent survey [4] and the work [5]) has been a powerful tool to obtain generalisation bounds and to derive efficient learning algorithms. Generalisation bounds are helpful for understanding how a learning algorithm may perform on future similar batches of data. While classical generalisation bounds typically address the performance of individual predictors from a given hypothesis class, PAC-Bayes bounds typically address a randomised predictor defined by a distribution over the hypothesis class.
PAC-Bayes bounds were originally meant for binary classification problems [6,7,8], but the literature now includes many contributions involving any bounded loss function (without loss of generality, with values in [0;1]), not just the binary loss. Our goal is to provide new PAC-Bayes bounds that hold for unbounded loss functions, and thus to extend the usability of PAC-Bayes to a much larger class of learning problems. To do so, we reformulate the general PAC-Bayes theorem of [9] and use it as a basic building block to derive our new PAC-Bayes bound.
Some ways to circumvent the bounded-range assumption on the losses have been explored in the recent literature. For instance, one approach consists of assuming a tail decay rate on the loss, such as sub-Gaussian or sub-exponential tails [10,11]; however, this approach requires the knowledge of additional parameters. Other works have looked into the analysis of heavy-tailed losses: e.g., ref. [12] proposed a polynomial moment-dependent bound with f-divergences, while [13] devised an exponential bound assuming that the second (uncentered) moment of the loss is bounded by a constant (with a truncated risk estimator, as recalled in Section 4 below). A somewhat related approach was explored by [14], who do not assume boundedness of the loss, but instead control higher-order moments of the generalisation gap through the Efron-Stein variance proxy. See also [5].
We investigate a different route here. We introduce the HYPothesis-dependent rangE (HYPE) condition, which means that the loss is upper-bounded by a term that depends on the chosen predictor (but does not depend on the data). Thus, effectively, the loss may have an arbitrarily large range. The HYPE condition allows us to derive an upper bound on the exponential moment of a suitably chosen functional, which, combined with the general PAC-Bayes theorem, leads to our new PAC-Bayes bound. To illustrate it, we instantiate the new bound on a linear regression problem, which additionally serves the purpose of illustrating that our HYPE condition is easy to verify in practice, given an explicit formulation of the loss function. In particular, we shall see in the linear regression setting that a mere use of the triangle inequality is enough to check the HYPE condition. The technical assumptions on which our results are based are comparable to those of the classical PAC-Bayes bounds; we state them in full detail, with discussions, for the sake of clarity and to make our work accessible.
Our contributions are twofold. (i) We propose PAC-Bayesian bounds holding with unbounded loss functions, therefore overcoming a limitation of the mainstream PAC-Bayesian literature for which a bounded loss is usually assumed. (ii) We analyse the bound, its implications, limitations of our assumptions, and their usability by practitioners. We hope this will extend the PAC-Bayes framework into a widely usable tool for a significantly wider range of problems, such as unbounded regression or reinforcement learning problems with unbounded rewards.
Outline. Section 2 introduces our notation and the definition of the HYPE condition, and provides a general PAC-Bayesian bound which is valid for any learning problem complying with a mild assumption. For the sake of completeness, we present how our approach (designed for the unbounded case) behaves in the bounded case (Section 3). This section is not the core of our work, but rather serves as a safety check and particularises our bound under more classical PAC-Bayesian assumptions; we also provide numerical experiments. Section 4 introduces the notion of softening functions and particularises Section 2's PAC-Bayesian bound; in particular, we make all terms on the right-hand side explicit. Section 5.1 extends our results to linear regression (which has been studied from the perspective of PAC-Bayes in the literature, most recently by [15]); we also experimentally illustrate the behaviour of our bound. Finally, Section 6 presents related works in detail and Section 7 contains all proofs of the original claims we make in the paper.

2. Framework and Preliminary Results

The learning problem is specified by three variables $(\mathcal{H}, \mathcal{Z}, \ell)$, consisting of a set $\mathcal{H}$ of predictors, the data space $\mathcal{Z}$, and a loss function $\ell : \mathcal{H} \times \mathcal{Z} \to \mathbb{R}^+$.
For a given positive integer m, we consider size-m datasets. The space of all possible datasets of this fixed size is $\mathcal{S} = \mathcal{Z}^m$; an arbitrary element of this space is $s = (z_1, \ldots, z_m)$. We denote by S a random dataset: $S = (Z_1, \ldots, Z_m)$, where the random data points $Z_i$ are independent and sampled from the same distribution $\mu$ over $\mathcal{Z}$. We call $\mu$ the data-generating distribution. The assumption that the $Z_i$'s are independent and identically distributed is typically called the i.i.d. data assumption. It means that the random sample S (of size m) has distribution $\mu^m$, the product of m copies of $\mu$.
For any predictor $h \in \mathcal{H}$, we define the empirical risk of h over a sample s, denoted $R_s(h)$, and the theoretical risk of h, denoted $R(h)$, as:
$$R_s(h) = \frac{1}{m}\sum_{i=1}^m \ell(h, z_i) \quad\text{and}\quad R(h) = \mathbb{E}_\mu[\ell(h, Z)],$$
respectively, where $\mathbb{E}_\mu[\ell(h,Z)]$ denotes the expectation with respect to $Z \sim \mu$. Finally, we define the risk gap $\Delta_s(h) = R(h) - R_s(h)$ for any $h \in \mathcal{H}$ and $s \in \mathcal{S}$. Often, $\Delta_s(h)$ is referred to as the generalisation gap.
Notice that for a random dataset S, the empirical risk $R_S(h)$ is random, with expected value $\mathbb{E}_{\mu^m}[R_S(h)] = R(h)$, where $\mathbb{E}_{\mu^m}$ denotes the expectation under the distribution of the random sample S.
In general, $\mathbb{E}_\mu[\cdot]$ denotes an expectation under the distribution $\mu$. When we want to emphasise the role of the random variable $Z \sim \mu$, we write $\mathbb{E}_Z[\cdot]$ or $\mathbb{E}_{Z\sim\mu}[\cdot]$ instead of $\mathbb{E}_\mu[\cdot]$. We use a similar convention for expectations related to any other distributions and random quantities. We now introduce the key concept of our analysis.
Definition 1.
(HYPE). A loss function $\ell : \mathcal{H} \times \mathcal{Z} \to \mathbb{R}^+$ is said to satisfy the hypothesis-dependent range (HYPE) condition if there exists a function $K : \mathcal{H} \to \mathbb{R}^+\setminus\{0\}$ such that $\sup_{z\in\mathcal{Z}} \ell(h,z) \le K(h)$ for every predictor h. We then say that $\ell$ is HYPE(K) compliant.
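As a concrete illustration (ours, not part of the original analysis), consider the absolute loss of a linear predictor, $\ell(h_w,(x,y)) = |\langle w, x\rangle - y|$, with $\|x\| \le B$ and $|y| \le C$, the setting revisited in Section 5. Then $K(h_w) = B\|w\| + C$ is a valid HYPE envelope, which the short Python check below confirms numerically; the constants and sampling scheme are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
d, B, C = 10, 1.0, 1.0

def loss(w, x, y):
    # Absolute loss of the linear predictor h_w on the example (x, y).
    return abs(w @ x - y)

def K(w):
    # Hypothesis-dependent range: sup_z loss(w, z) <= B * ||w|| + C.
    return B * np.linalg.norm(w) + C

for _ in range(1000):
    w = rng.normal(size=d)
    x = rng.normal(size=d)
    x *= B / max(np.linalg.norm(x), 1e-12)  # enforce ||x|| <= B
    y = rng.uniform(-C, C)                  # enforce |y| <= C
    assert loss(w, x, y) <= K(w) + 1e-9     # HYPE(K) holds on every draw
```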
Let $\mathcal{M}_1^+(\mathcal{H})$ be the set of probability distributions on $\mathcal{H}$. We assume that all considered probability measures on $\mathcal{H}$ are defined on a fixed $\sigma$-algebra over $\mathcal{H}$, while the notation $\mathcal{M}_1^+(\mathcal{H})$ hides the $\sigma$-algebra, for simplicity. For $P, P' \in \mathcal{M}_1^+(\mathcal{H})$, the notation $P' \ll P$ indicates that $P'$ is absolutely continuous with respect to P (i.e., $P'(A) = 0$ whenever $P(A) = 0$ for measurable $A \subseteq \mathcal{H}$). We write $P' \ll\gg P$ to indicate that $P' \ll P$ and $P \ll P'$, i.e., these two distributions are absolutely continuous with respect to each other.
We now recall a result from Germain et al. [9]. Note that while implicit in many PAC-Bayes works (including theirs), we make it explicit that both the prior P and the posterior Q must be absolutely continuous with respect to each other. We discuss this restriction below.
Theorem 1.
(Adapted from [9], Theorem 2.1.) For any $P \in \mathcal{M}_1^+(\mathcal{H})$ with no dependency on the data, and for any function $F : \mathbb{R}^+ \times \mathbb{R}^+ \to \mathbb{R}$, define the exponential moment:
$$\chi := \mathbb{E}_S\,\mathbb{E}_{h\sim P}\left[e^{F(R_S(h),\,R(h))}\right].$$
If F is convex, then for any $\delta \in [0;1]$, with probability at least $1-\delta$ over random samples S, simultaneously for all $Q \in \mathcal{M}_1^+(\mathcal{H})$ such that $Q \ll\gg P$, we have:
$$F\big(\mathbb{E}_{h\sim Q}\,R_S(h),\;\mathbb{E}_{h\sim Q}\,R(h)\big) \le \mathrm{KL}(Q\|P) + \log\frac{\chi}{\delta}.$$
The proof is deferred to Section 7.1. Note that the proof in [9] requires $P \ll Q$, although this is not explicitly stated; we highlight it in our own proof. While $Q \ll P$ is classical and necessary for $\mathrm{KL}(Q\|P)$ to be meaningful, $P \ll Q$ appears to be more restrictive: in particular, we have to choose Q with the exact same support as P (e.g., pairing a Gaussian with a truncated Gaussian is not possible). However, we can still apply our theorem when P and Q belong to the same parametric family of distributions, e.g., both 'full-support' Gaussian or Laplace distributions; these are just two examples and there are many others.
Note that Alquier et al. [10] (Theorem 4.1) adapted a result from Catoni [8] which only requires $Q \ll P$. This comes at the expense of what Alquier et al. [10] (Definition 2.3) called a Hoeffding assumption, namely that the exponential moment $\chi$ is assumed to be bounded by a function depending only on hyperparameters (such as the dataset size m or the parameters appearing in the Hoeffding assumption). Our analysis does not require this assumption, which might prove restrictive in practice.
Theorem 1 may be seen as a basis to recover many classical PAC-Bayesian bounds. For instance, $F(x,y) = 2m(x-y)^2$ recovers McAllester's bound as recalled in [4] (Theorem 1). To get a usable bound, the outstanding task is to bound the exponential moment $\chi$. Note that a previous attempt was made in [11], as described in Section 6.1 below. Furthermore, under the assumption that the distribution P has no dependency on the data, we may swap the order of integration in the exponential moment thanks to the Fubini-Tonelli theorem and the positivity of the exponential:
$$\chi = \mathbb{E}_{h\sim P}\,\mathbb{E}_S\left[e^{F(R_S(h),\,R(h))}\right].$$
This is the starting point for the way the exponential moment has been handled in several works in the PAC-Bayes literature: essentially, for a fixed h, one may upper-bound the innermost expectation (with respect to S) using standard exponential moment inequalities.
In this work, we use Theorem 1 with $F(x,y) = m^\alpha D(x,y)$, where $\alpha > 0$ and $D : \mathbb{R}^+\times\mathbb{R}^+ \to \mathbb{R}$ is a convex function. In this case, the high-probability inequality of the theorem takes the form:
$$D\big(\mathbb{E}_{h\sim Q}\,R_S(h),\;\mathbb{E}_{h\sim Q}\,R(h)\big) \le \frac{1}{m^\alpha}\left(\mathrm{KL}(Q\|P) + \log\left(\frac{1}{\delta}\,\mathbb{E}_{h\sim P}\,\mathbb{E}_S\left[e^{m^\alpha D(R_S(h),\,R(h))}\right]\right)\right). \tag{1}$$
Our goal is to control $\mathbb{E}_S\left[e^{m^\alpha D(R_S(h),\,R(h))}\right]$ for a fixed h, when $D(x,y) = y - x$. This will readily give us control on the exponential moment $\chi$. To do so, we propose the following theorem:
Theorem 2.
Let $h \in \mathcal{H}$ be a fixed predictor and $\alpha \in \mathbb{R}$. If the loss function $\ell$ is HYPE(K) compliant, then for $\Delta_S(h) = R(h) - R_S(h)$ we have:
$$\mathbb{E}_S\left[e^{m^\alpha \Delta_S(h)}\right] \le \exp\left(\frac{K(h)^2}{2m^{1-2\alpha}}\right).$$
Proof. 
Let $h \in \mathcal{H}$. Then:
$$\mathbb{E}_S\left[e^{m^\alpha \Delta_S(h)}\right] = \mathbb{E}\left[\exp\left(-m^{\alpha-1}\sum_{i=1}^m\big(\ell(h,Z_i) - R(h)\big)\right)\right] = \mathbb{E}\left[\prod_{i=1}^m \exp\big(-m^{\alpha-1}(\ell(h,Z_i) - R(h))\big)\right] = \prod_{i=1}^m \mathbb{E}\left[\exp\big(-m^{\alpha-1}(\ell(h,Z_i) - R(h))\big)\right].$$
We now apply Hoeffding's lemma: for any $i \in \{1..m\}$, the random (in $Z_i$) variable $\ell(h,Z_i) - R(h)$ is centred and takes values in $[-K(h); K(h)]$, so that:
$$\mathbb{E}\left[\exp\big(-m^{\alpha-1}(\ell(h,Z_i) - R(h))\big)\right] \le \exp\left(\frac{m^{2\alpha-2}\,4K(h)^2}{8}\right)$$
and finally:
$$\mathbb{E}_S\left[e^{m^\alpha\Delta_S(h)}\right] \le \prod_{i=1}^m \exp\left(\frac{m^{2\alpha-2}\,4K(h)^2}{8}\right) = \exp\left(\frac{K(h)^2}{2m^{1-2\alpha}}\right).$$
 □
The strength of this result lies in the fact that $\frac{K(h)^2}{2m^{1-2\alpha}}$ is a decreasing factor in m when $\alpha < 1/2$; more generally, one can control how fast the exponential moment explodes as m grows through the choice of the hyperparameter $\alpha$.
For convenient cross-referencing, we state the following rewriting of Theorem 1.
Theorem 3.
Let the loss $\ell$ be HYPE(K) compliant. For any $P \in \mathcal{M}_1^+(\mathcal{H})$ with no data dependency, for any $\alpha \in \mathbb{R}$ and for any $\delta \in [0;1]$, with probability at least $1-\delta$ over size-m random samples S, simultaneously for all Q such that $Q \ll\gg P$, we have:
$$\mathbb{E}_{h\sim Q}\,R(h) \le \mathbb{E}_{h\sim Q}\,R_S(h) + \frac{1}{m^\alpha}\left(\mathrm{KL}(Q\|P) + \log\left(\frac{1}{\delta}\,\mathbb{E}_{h\sim P}\left[\exp\left(\frac{K(h)^2}{2m^{1-2\alpha}}\right)\right]\right)\right).$$
Proof. 
We first apply Theorem 1 with $F(x,y) = m^\alpha(y - x)$; more precisely, we use Equation (1) with $D(x,y) = y - x$. We then conclude with Theorem 2. □
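When the exponential moment under the prior has no closed form, the right-hand side of Theorem 3 can be estimated by Monte Carlo over the prior. The following sketch (ours; all parameter values are placeholders) uses a log-sum-exp reduction for numerical stability.

```python
import numpy as np

def theorem3_bound(emp_risk, kl, m, alpha, delta, K_samples):
    """Estimate the right-hand side of Theorem 3.

    emp_risk  -- posterior-averaged empirical risk E_{h~Q} R_S(h)
    kl        -- KL(Q || P)
    K_samples -- values K(h_i) for predictors h_i drawn i.i.d. from the prior P
    """
    K_samples = np.asarray(K_samples, dtype=float)
    # log E_{h~P} exp(K(h)^2 / (2 m^(1-2*alpha))), computed stably.
    expo = K_samples**2 / (2 * m**(1 - 2 * alpha))
    log_moment = np.log(np.mean(np.exp(expo - expo.max()))) + expo.max()
    return emp_risk + (kl + np.log(1 / delta) + log_moment) / m**alpha

rng = np.random.default_rng(1)
Ks = 1.0 + np.abs(rng.normal(size=10_000))  # hypothetical K(h) values, h ~ P
print(theorem3_bound(emp_risk=0.3, kl=5.0, m=10_000,
                     alpha=0.5, delta=0.05, K_samples=Ks))
```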

3. Safety Check: The Bounded Loss Case

3.1. Theoretical Results

At this stage, the reader might wonder whether this new approach allows for the recovery of known results in the bounded case: the answer is yes.
In this section, we study the case where $\ell$ is bounded by some constant $C \in \mathbb{R}^+\setminus\{0\}$; in other words, we consider the case where $\sup_h \sup_z \ell(h,z) \le C$. We provide a bound, valid for any choice of "priors" P and "posteriors" Q such that $Q \ll\gg P$, which is an immediate corollary of Theorem 3.
Proposition 1.
Let $\ell$ be HYPE(K) compliant, with $K(h) = C$ constant, and let $\alpha \in \mathbb{R}$. Let $P \in \mathcal{M}_1^+(\mathcal{H})$ be a distribution with no data dependency. Then, for any $\delta \in [0;1]$, with probability at least $1-\delta$ over random m-samples S, simultaneously for all $Q \in \mathcal{M}_1^+(\mathcal{H})$ such that $Q \ll\gg P$, we have:
$$\mathbb{E}_{h\sim Q}\,R(h) \le \mathbb{E}_{h\sim Q}\,R_S(h) + \frac{\mathrm{KL}(Q\|P) + \log(1/\delta)}{m^\alpha} + \frac{C^2}{2m^{1-\alpha}}.$$
Remark 1.
We provide Proposition 1 to evaluate the robustness of our approach, for instance by comparing it with the PAC-Bayesian bound of Germain et al. [11]. This comparison can be found in Section 6.1, where the bound from Germain et al. [11] is presented in detail.
Remark 2.
At first glance, a naive remark: in order to give all the terms of the bound in Proposition 1 the same rate of convergence (as is often the case in classical PAC-Bayesian bounds), the only case of interest would be $\alpha = 1/2$. However, one may notice that the factor $C^2$ cannot be optimised, while the KL term can. Hence, if $C^2$ turns out to be too big in practice, one wants the ability to attenuate its influence as much as possible, and this may lead us to consider $\alpha < 1/2$. The following lemma answers this question.
Lemma 1.
For any given $K_1 > 0$, the function $f_{K_1}(\alpha) := \frac{K_1}{m^\alpha} + \frac{C^2}{2m^{1-\alpha}}$ reaches its minimum at
$$\alpha_0 = \frac{1}{2} + \frac{1}{2\log(m)}\log\left(\frac{2K_1}{C^2}\right).$$
Proof. 
Computing the derivative $f'_{K_1}$ explicitly and solving $f'_{K_1}(\alpha) = 0$ yields the result. □
Remark 3.
Lemma 1 indicates that, with a fixed "prior" P and "posterior" Q, taking $K_1 = \mathrm{KL}(Q\|P) + \log(1/\delta)$ gives the optimised value of the bound in Proposition 1. We numerically show in Section 3.2 (first experiment there) that optimising $\alpha$ leads to significantly better results.
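A quick numerical check of Lemma 1 (our own sketch, with arbitrary values of $K_1$, C and m): the closed-form minimiser $\alpha_0$ agrees with a grid search over $\alpha$.

```python
import numpy as np

def f(alpha, K1, C, m):
    # Alpha-dependent terms of Proposition 1: K1/m^alpha + C^2/(2 m^(1-alpha)).
    return K1 / m**alpha + C**2 / (2 * m**(1 - alpha))

def alpha0(K1, C, m):
    # Closed-form minimiser from Lemma 1.
    return 0.5 + np.log(2 * K1 / C**2) / (2 * np.log(m))

K1, C, m = 5.0, 1.0, 10_000   # e.g. K1 = KL(Q||P) + log(1/delta)
grid = np.linspace(0.0, 1.0, 100_001)
a_star = alpha0(K1, C, m)
assert abs(grid[np.argmin(f(grid, K1, C, m))] - a_star) < 1e-4
print(f"alpha_0 = {a_star:.4f}, f(1/2) = {f(0.5, K1, C, m):.5f}, "
      f"f(alpha_0) = {f(a_star, K1, C, m):.5f}")
```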
Now the only remaining question is how to optimise the KL divergence. To do so, we may need to fix an “informed prior” to minimise the KL divergence with an interesting posterior. This idea has been studied by [16,17] and, more recently, by Mhammedi et al. [18], Rivasplata et al. [5], among others. We will adapt it to our problem in the simplest way.
We now introduce some additional notation. For a sample $s = (z_1, \ldots, z_m)$ and $k \in \{1..m\}$, we define $s_{\le k} := \{z_1, \ldots, z_k\}$ and $s_{>k} := \{z_{k+1}, \ldots, z_m\}$. Then, similarly, for a random sample S, we have the splits $S_{\le k}$ and $S_{>k}$.
Proposition 2.
Let $\ell$ be HYPE(K) compliant, with constant $K(h) = C$, and let $\alpha_1, \alpha_2 \in \mathbb{R}$. Consider any "priors" $P_1 \in \mathcal{M}_1^+(\mathcal{H})$ (possibly dependent on $S_{>m/2}$) and $P_2 \in \mathcal{M}_1^+(\mathcal{H})$ (possibly dependent on $S_{\le m/2}$). Then, for any $\delta \in [0;1]$, with probability at least $1-\delta$ over random size-m samples S, simultaneously for all $Q \in \mathcal{M}_1^+(\mathcal{H})$ such that $Q \ll\gg P_1$ and $Q \ll\gg P_2$, we have:
$$\mathbb{E}_{h\sim Q}\,R(h) \le \mathbb{E}_{h\sim Q}\,R_S(h) + \frac{1}{2}\left(\frac{\mathrm{KL}(Q\|P_1) + \log(2/\delta)}{(m/2)^{\alpha_1}} + \frac{C^2}{2(m/2)^{1-\alpha_1}}\right) + \frac{1}{2}\left(\frac{\mathrm{KL}(Q\|P_2) + \log(2/\delta)}{(m/2)^{\alpha_2}} + \frac{C^2}{2(m/2)^{1-\alpha_2}}\right).$$
Proof. 
Let $P_1, P_2, Q$ be as stated in Proposition 2. We first notice that by using Proposition 1 on the two halves of the sample, we obtain, with probability at least $1-\delta/2$:
$$\mathbb{E}_{h\sim Q}\,R(h) \le \mathbb{E}_{h\sim Q}\left[\frac{1}{m/2}\sum_{i=1}^{m/2}\ell(h, Z_i)\right] + \frac{\mathrm{KL}(Q\|P_1) + \log(2/\delta)}{(m/2)^{\alpha_1}} + \frac{C^2}{2(m/2)^{1-\alpha_1}}$$
and, also with probability at least $1-\delta/2$:
$$\mathbb{E}_{h\sim Q}\,R(h) \le \mathbb{E}_{h\sim Q}\left[\frac{1}{m/2}\sum_{i=1}^{m/2}\ell(h, Z_{m/2+i})\right] + \frac{\mathrm{KL}(Q\|P_2) + \log(2/\delta)}{(m/2)^{\alpha_2}} + \frac{C^2}{2(m/2)^{1-\alpha_2}}.$$
Hence, with a probability of at least 1 δ , both inequalities hold, and the result follows by adding them and dividing by 2. □
Remark 4.
One can notice that the main difference between Proposition 2 and Proposition 1 lies in the implicit PAC-Bayesian paradigm that priors must not depend on the data. With this last proposition, we implicitly allow $P_1$ to depend on $S_{>m/2}$ and $P_2$ on $S_{\le m/2}$, which can in practice lead to far more accurate priors. We numerically illustrate this fact in Section 3.2's second experiment. Note that this idea is not new and has been studied, for instance, in [19] for the specific case of SVMs.

3.2. Numerical Experiments

Our experimental framework has been inspired by the work of [18].
Settings. We generate synthetic data for classification and use the 0–1 loss. The data space is $\mathcal{Z} = \mathcal{X}\times\mathcal{Y} = \mathbb{R}^d\times\{0,1\}$ with $d \in \mathbb{N}$. The set of predictors $\mathcal{H}$ is parameterised by d-dimensional 'weight' vectors: $\mathcal{H} = \{h_w : \mathcal{X} \to \mathcal{Y} \mid w \in \mathbb{R}^d\}$. For simplicity, we identify $h_w$ with w, and we also identify the space $\mathcal{H}$ with the weight space $\mathcal{W} = \mathbb{R}^d$. For $z = (x,y) \in \mathcal{Z}$ and $w \in \mathcal{W}$, we define the loss as $\ell(w, z) := |\mathbb{1}\{\phi(\langle w, x\rangle) > 1/2\} - y|$, where $\phi(r) = \frac{1}{1+e^{-r}}$. We want to learn an optimised predictor given a dataset $S = (Z_i)_{i=1..m}$, where $Z_i = (X_i, Y_i)$. To do so, we use regularised logistic regression and compute:
$$\hat{w}(S) := \operatorname*{arg\,min}_{w\in\mathcal{W}}\; \lambda\|w\|_2^2 - \frac{1}{m}\sum_{i=1}^m \Big[y_i\log\phi(\langle w, x_i\rangle) + (1 - y_i)\log\big(1 - \phi(\langle w, x_i\rangle)\big)\Big] \tag{2}$$
where $\lambda$ is a fixed regularisation parameter.
We also restrict the probability distributions (over $\mathcal{W} = \mathbb{R}^d$) considered for this learning problem: we consider Gaussian distributions $\mathcal{N}(w, \sigma^2 I_d)$ with centre $w \in \mathbb{R}^d$ and diagonal covariance $\sigma^2 I_d \in \mathbb{R}^{d\times d}$, $\sigma^2 > 0$.
Parameters. We set $\delta = 0.05$ and $\lambda = 0.01$. We approximately solve Equation (2) by using the minimize function of the optimisation module in Python, with the Powell method. To approximate Gaussian expectations, we use Monte Carlo sampling.
Synthetic data. We generate synthetic data for d = 10 according to the following process: for a fixed sample size m, we draw $X_1, \ldots, X_m$ from the multivariate Gaussian distribution $\mathcal{N}(0, I_d)$, and for each i we compute the label of $X_i$ as $Y_i = \mathbb{1}\{\phi(\langle w^*, X_i\rangle) > 1/2\}$, where $w^*$ is the vector formed by the first d digits of the number $\pi$.
Normalisation trick. Given the predictors' shape, we notice that for any $w \in \mathcal{W}$:
$$\mathbb{1}\{\phi(\langle w, x\rangle) > 1/2\} = 1 \iff \frac{1}{1+\exp(-\langle w, x\rangle)} > \frac{1}{2} \iff \langle w, x\rangle > 0.$$
Thus, the value of the prediction is determined exclusively by the sign of the inner product, a quantity that is not influenced by the norm of the vector. Then, for any sample S, we call the normalisation trick the act of considering $\hat{w}(S)/\|\hat{w}(S)\|$ instead of $\hat{w}(S)$ in our calculations. This process does not deteriorate the quality of the prediction and considerably improves the value of the KL divergence.
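For concreteness, the following condensed sketch (ours) reproduces the pipeline just described: synthetic labels from the first digits of $\pi$, the regularised logistic regression objective of Equation (2) solved with SciPy's Powell method, and the normalisation trick. The sample size and seed are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
d, m, lam = 10, 200, 0.01
w_star = np.array([3, 1, 4, 1, 5, 9, 2, 6, 5, 3], dtype=float)  # first d digits of pi

def phi(r):
    return 1.0 / (1.0 + np.exp(-r))

X = rng.normal(size=(m, d))                  # X_i ~ N(0, I_d)
Y = (phi(X @ w_star) > 0.5).astype(float)    # labels from the sign of <w*, x>

def objective(w):
    # Regularised logistic loss of Equation (2).
    p = np.clip(phi(X @ w), 1e-12, 1 - 1e-12)
    return lam * w @ w - np.mean(Y * np.log(p) + (1 - Y) * np.log(1 - p))

w_hat = minimize(objective, np.zeros(d), method="Powell").x
w_hat /= np.linalg.norm(w_hat)               # normalisation trick
err = np.mean((phi(X @ w_hat) > 0.5) != Y)   # empirical 0-1 risk of w_hat
print(f"training 0-1 error: {err:.3f}")
```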

3.2.1. First Experiment

Our goal here is to highlight the point discussed in Remark 2, i.e., the influence of the parameter $\alpha$ in Proposition 1. We arbitrarily fix $\sigma_0^2 = 1/2$ and define our naive prior as $P_0 = \mathcal{N}(0, \sigma_0^2 I_d)$. For a fixed dataset S, we define our posterior as $Q(S) := \mathcal{N}(\hat{w}(S), \sigma^2 I_d)$, with $\sigma^2 \in \{1/2, \ldots, 1/2^J\}$ (for $J = \log_2(m)$) chosen to minimise the bound among these candidates. We computed two curves: first, Proposition 1 with $\alpha = 1/2$; second, Proposition 1 again with $\alpha$ equal to the value proposed in Lemma 1. Notice that to compute this last bound, we first optimised our choice of posterior with $\alpha = 1/2$ and then optimised $\alpha$, to be consistent with Lemma 1: indeed, we proved this lemma by assuming that the KL divergence was already fixed, hence our optimisation process is in two steps. Note that we chose to apply the normalisation trick here. We then obtained the top plot of Figure 1.
Discussion. From this plot, we formulate several remarks. First, we remark that in this specific case our theorem provides a tight result in practice (with an error rate of less than 10% for the bound with optimised $\alpha$). Second, we can now confirm that choosing an optimised $\alpha$ leads to a tighter bound. In further studies, it will be relevant to adjust $\alpha$ with regard to the different terms of our bound, instead of looking for an identical convergence rate for all terms.

3.2.2. Second Experiment

We now study Proposition 2 to see whether an informed prior effectively provides a tighter bound than a naive one. We use the notation introduced in Proposition 2. For a dataset S, we define $w_1(S) := \hat{w}(S_{>m/2})$ as the vector resulting from the optimisation of Equation (2) on $S_{>m/2}$; similarly, we define $w_2(S) := \hat{w}(S_{\le m/2})$. We arbitrarily fix $\sigma_0^2 = 1/2$ and define our informed priors as $P_1 = \mathcal{N}(w_1(S), \sigma_0^2 I_d)$ and $P_2 = \mathcal{N}(w_2(S), \sigma_0^2 I_d)$. Finally, we define our posterior as $Q(S) := \mathcal{N}(\hat{w}(S), \sigma^2 I_d)$, with $\sigma^2 \in \{1/2, \ldots, 1/2^J\}$ (for $J = \log_2(m)$) optimising the bound among the same candidates as in the first experiment. We computed two curves: first, Proposition 1 with $\alpha$ optimised according to Lemma 1; second, Proposition 2 with $\alpha_1, \alpha_2$ optimised as well, and informed priors as defined above. We chose not to apply the normalisation trick here. We then obtained the bottom plot of Figure 1.
Discussion. It is clear that, within this framework, an informed prior is a powerful tool to enhance the quality of our bound. Notice that we deliberately chose not to apply the normalisation trick here. The reason is that this trick is so effective in practice that applying it leads to counterproductive comparisons; to highlight our point: the bound without the informed prior would be tighter than the one with it. Furthermore, this trick is linked to the specific structure of our problem and is not valid for arbitrary classification problems. Thus, the idea of providing informed priors remains an interesting tool for most cases.

4. PAC-Bayesian Bounds with a Smoothed Estimator

We now move on to controlling the right-hand side of Theorem 3 when K is not constant. A first step is to consider a transformed estimate of the risk, inspired by the truncated estimator of [20], also used in [21] and, more recently, in [13]. The following is inspired by the results of [13], which we summarise in Section 6.
The idea is to modify the estimator $R_S(h)$, for any h, by introducing a threshold t and a function $\psi$ which attenuates the influence of the empirical losses $(\ell(h,Z_i))_{i=1..m}$ that exceed t.
Definition 2.
($\psi$-risks). For every $t > 0$ and $\psi : \mathbb{R}^+ \to \mathbb{R}^+$, and for any $h \in \mathcal{H}$, we define the empirical $\psi$-risk $R_{S,\psi,t}$ and the theoretical $\psi$-risk $R_{\psi,t}$ as follows:
$$R_{S,\psi,t}(h) := \frac{t}{m}\sum_{i=1}^m \psi\!\left(\frac{\ell(h,Z_i)}{t}\right) \quad\text{and}\quad R_{\psi,t}(h) := \mathbb{E}_\mu\left[t\,\psi\!\left(\frac{\ell(h,Z)}{t}\right)\right]$$
where $Z \sim \mu$. Notice that $\mathbb{E}_S[R_{S,\psi,t}(h)] = R_{\psi,t}(h)$.
We now focus on what we call softening functions, i.e., functions that temper high values of the loss function $\ell$.
Definition 3.
(Softening function). We say that $\psi : \mathbb{R}^+ \to \mathbb{R}^+$ is a softening function if:
  • $\forall x \in [0;1]$, $\psi(x) = x$;
  • $\psi$ is non-decreasing;
  • $\forall x \ge 1$, $\psi(x) \le x$.
We let $\mathcal{F}$ denote the set of all softening functions.
Remark 5.
Notice that these three assumptions ensure that $\psi$ is continuous at 1. For instance, the functions $f : x \mapsto x\,\mathbb{1}\{x \le 1\} + \mathbb{1}\{x > 1\}$ and $g : x \mapsto x\,\mathbb{1}\{x \le 1\} + (2\sqrt{x} - 1)\,\mathbb{1}\{x > 1\}$ are in $\mathcal{F}$. In Section 6 we compare these softening functions with those used by Holland [13].
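Both functions in Remark 5 are one-liners to implement, as is the empirical $\psi$-risk of Definition 2; a minimal sketch (ours), on a toy loss vector with one outlier:

```python
import numpy as np

def f_soft(x):
    # f(x) = x on [0, 1], then constant 1: clips large losses hard.
    return np.minimum(x, 1.0)

def g_soft(x):
    # g(x) = x on [0, 1], then 2*sqrt(x) - 1: still grows, but sub-linearly.
    x = np.asarray(x, dtype=float)
    return np.where(x <= 1.0, x, 2.0 * np.sqrt(x) - 1.0)

def psi_risk(losses, psi, t):
    # Empirical psi-risk (t/m) * sum_i psi(loss_i / t) from Definition 2.
    return t * np.mean(psi(np.asarray(losses, dtype=float) / t))

losses = [0.2, 0.8, 3.0, 50.0]          # one heavy outlier
print(psi_risk(losses, f_soft, t=1.0))  # outlier clipped at t
print(psi_risk(losses, g_soft, t=1.0))  # outlier attenuated
print(np.mean(losses))                  # plain empirical risk, dominated by the outlier
```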
Using $\psi \in \mathcal{F}$, for a fixed threshold $t > 0$, the softened loss $t\,\psi(\ell(h,z)/t)$ satisfies, for any $h \in \mathcal{H}$, $z \in \mathcal{Z}$:
$$t\,\psi\!\left(\frac{\ell(h,z)}{t}\right) \le t\,\psi\!\left(\frac{K(h)}{t}\right)$$
because $\psi$ is non-decreasing. In this way, the exponential moment in Theorem 3 becomes far more controllable. The trade-off lies in the fact that softening $\ell$ (instead of taking $\ell$ directly) deteriorates our ability to distinguish between two bad predictions when both of them exceed t. For instance, if we choose $\psi \in \mathcal{F}$ such that $\psi = 1$ on $[1; +\infty)$ and $t > 0$, then whenever $\psi(\ell(h,z)/t) = 1$ for a certain pair (h,z), we cannot tell how far $\ell(h,z)$ is from t; we can only affirm that $\ell(h,z) \ge t$.
We now move on to the following lemma, which controls the shortfall between $\mathbb{E}_{h\sim Q}[R(h)]$ and $\mathbb{E}_{h\sim Q}[R_{\psi,t}(h)]$ for all $Q \in \mathcal{M}_1^+(\mathcal{H})$, for a given $\psi$ and $t > 0$. To do so, we assume that K admits a finite first moment under any posterior distribution:
$$\forall Q \in \mathcal{M}_1^+(\mathcal{H}), \quad \mathbb{E}_{h\sim Q}[K(h)] < +\infty. \tag{3}$$
For instance, when $\mathcal{H}$ is identified with a weight space $\mathcal{W} = \mathbb{R}^N$ and K is polynomial in $\|w\|$ (where $\|\cdot\|$ denotes the Euclidean norm), this assumption holds if we consider Gaussian priors and posteriors.
Lemma 2.
Assume that Equation (3) holds, and let $\psi \in \mathcal{F}$, $Q \in \mathcal{M}_1^+(\mathcal{H})$, $t > 0$. We have:
$$\mathbb{E}_{h\sim Q}[R(h)] \le \mathbb{E}_{h\sim Q}[R_{\psi,t}(h)] + \mathbb{E}_{h\sim Q}\big[K(h)\,\mathbb{1}\{K(h) \ge t\}\big].$$
Proof. 
Let $\psi \in \mathcal{F}$, $Q \in \mathcal{M}_1^+(\mathcal{H})$, $t > 0$. We have, for $h \in \mathcal{H}$:
$$R(h) - R_{\psi,t}(h) = \mathbb{E}_{Z\sim\mu}\left[\ell(h,Z) - t\,\psi\!\left(\frac{\ell(h,Z)}{t}\right)\right]$$
and using that $\forall x \in [0,1]$, $\psi(x) = x$,
$$= \mathbb{E}_{Z\sim\mu}\left[\left(\ell(h,Z) - t\,\psi\!\left(\frac{\ell(h,Z)}{t}\right)\right)\mathbb{1}\{\ell(h,Z) \ge t\}\right]$$
while using that $\ell(h,z) \le K(h)$,
$$= \mathbb{E}_{Z\sim\mu}\left[\left(\ell(h,Z) - t\,\psi\!\left(\frac{\ell(h,Z)}{t}\right)\right)\mathbb{1}\{\ell(h,Z) \ge t\}\right]\mathbb{1}\{K(h) \ge t\}$$
and continuing:
$$\le \mathbb{E}_{Z\sim\mu}\big[\ell(h,Z)\,\mathbb{1}\{\ell(h,Z) \ge t\}\big]\,\mathbb{1}\{K(h) \ge t\} \qquad (\psi \ge 0)$$
$$\le K(h)\,\mathbb{P}_{Z\sim\mu}\big(\ell(h,Z) \ge t\big)\,\mathbb{1}\{K(h) \ge t\} \qquad (\ell(h,Z) \le K(h)).$$
Finally, by crudely bounding the probability by 1, we get:
$$R(h) \le R_{\psi,t}(h) + K(h)\,\mathbb{1}\{K(h) \ge t\}.$$
Hence the result, by integrating over $\mathcal{H}$ with respect to Q. □
Finally, we present the following theorem, which provides a PAC-Bayesian inequality bounding the theoretical risk by the empirical $\psi$-risk, for $\psi \in \mathcal{F}$.
Theorem 4.
Let $\ell$ be HYPE(K) compliant, and assume K satisfies Equation (3). Then for any $P \in \mathcal{M}_1^+(\mathcal{H})$ with no data dependency, any $\alpha \in \mathbb{R}$, any $\psi \in \mathcal{F}$, any $t > 0$ and any $\delta \in [0;1]$, with probability at least $1-\delta$ over size-m random samples S, simultaneously for all Q such that $Q \ll\gg P$, we have:
$$\mathbb{E}_{h\sim Q}\,R(h) \le \mathbb{E}_{h\sim Q}\,R_{S,\psi,t}(h) + \mathbb{E}_{h\sim Q}\big[K(h)\,\mathbb{1}\{K(h) \ge t\}\big] + \frac{\mathrm{KL}(Q\|P) + \log\frac{1}{\delta}}{m^\alpha} + \frac{1}{m^\alpha}\log\mathbb{E}_{h\sim P}\left[\exp\left(\frac{t^2}{2m^{1-2\alpha}}\,\psi\!\left(\frac{K(h)}{t}\right)^2\right)\right].$$
Proof. 
Let $\psi \in \mathcal{F}$. We define the $\psi$-loss:
$$\ell_2(h,z) = t\,\psi\!\left(\frac{\ell(h,z)}{t}\right).$$
Since $\psi$ is non-decreasing, we have for all $(h,z) \in \mathcal{H}\times\mathcal{Z}$:
$$\ell_2(h,z) \le t\,\psi\!\left(\frac{K(h)}{t}\right) =: K_2(h).$$
Thus, we apply Theorem 3 to the learning problem defined with $\ell_2$: for any $\alpha$ and $\delta \in (0,1)$, with probability at least $1-\delta$ over size-m random samples S, simultaneously for all Q such that $Q \ll\gg P$, we have:
$$\mathbb{E}_{h\sim Q}\,R_{\psi,t}(h) \le \mathbb{E}_{h\sim Q}\,R_{S,\psi,t}(h) + \frac{\mathrm{KL}(Q\|P) + \log\frac{1}{\delta}}{m^\alpha} + \frac{1}{m^\alpha}\log\mathbb{E}_{h\sim P}\left[\exp\left(\frac{K_2(h)^2}{2m^{1-2\alpha}}\right)\right].$$
We then add $\mathbb{E}_{h\sim Q}\big[K(h)\,\mathbb{1}\{K(h) \ge t\}\big]$ to both sides of the latter inequality and apply Lemma 2. □
Remark 6.
Notice that the function $\psi : x \mapsto x\,\mathbb{1}\{x \le 1\} + \mathbb{1}\{x > 1\}$ satisfies $\psi \le 1$, so that for any given prior P we have $\mathbb{E}_{h\sim P}\left[\exp\left(\frac{t^2}{2m^{1-2\alpha}}\,\psi\!\left(\frac{K(h)}{t}\right)^2\right)\right] \le \exp\left(\frac{t^2}{2m^{1-2\alpha}}\right) < +\infty$. So the exponential moment can be controlled with a good choice of $\psi$. Thus, the strength of Theorem 4 is to provide a PAC-Bayesian bound valid for any set of posterior measures verifying Equation (3). The choice of $\psi$ minimising the bound remains an open problem.
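A small simulation (ours) makes Remark 6 tangible: with a heavy-tailed K under the prior, the raw exponential moment of Theorem 3 explodes, while the softened moment of Theorem 4 with the clipped $\psi$ stays below $\exp(t^2/(2m^{1-2\alpha}))$. The Cauchy tail for K is an arbitrary choice for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
m, alpha, t = 1000, 0.5, 5.0
K = 1.0 + np.abs(rng.standard_cauchy(size=100_000))   # heavy-tailed K(h), h ~ P

psi = lambda x: np.minimum(x, 1.0)                    # clipped softening function

raw = K**2 / (2 * m**(1 - 2 * alpha))                 # exponent without softening
soft = (t * psi(K / t))**2 / (2 * m**(1 - 2 * alpha)) # exponent with softening

print(np.mean(np.exp(soft)))                  # finite: at most exp(t^2 / 2) here
print(np.mean(np.exp(np.minimum(raw, 700))))  # raw moment blows up (capped to avoid overflow)
```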

5. The Linear Regression Problem

5.1. Theoretical Result

We now focus on the celebrated linear regression problem and see how our theory translates to that particular learning problem. We assume that the data is a size-m random sample $S = (Z_i)_{i=1..m}$, where the $Z_i$ are drawn i.i.d. from the distribution $\mu$ and $Z_i = (X_i, Y_i)$ with $X_i \in \mathbb{R}^N$, $Y_i \in \mathbb{R}$.
Our goal here is to find the most accurate predictor $h_w$ (with $w \in \mathbb{R}^N$) with respect to the loss function $\ell(h_w, z) = |\langle w, x\rangle - y|$, where $z = (x,y)$. We make the following mild assumption: there exist $B, C \in \mathbb{R}^+\setminus\{0\}$ such that for all $z = (x,y)$ drawn under $\mu$:
$$\|x\| \le B \quad\text{and}\quad |y| \le C,$$
where $\|\cdot\|$ is the norm associated with the classical inner product of $\mathbb{R}^N$. Under this assumption, we note that for all $z = (x,y)$ drawn according to $\mu$, we have:
$$\ell(h_w, z) = |\langle w, x\rangle - y| \le |\langle w, x\rangle| + |y| \le \|w\|\,\|x\| + |y| \le B\|w\| + C.$$
Thus we define $K(h_w) = B\|w\| + C$ for $w \in \mathbb{R}^N$. If we first restrict ourselves to the framework of Section 2, we want to use Theorem 3, and in doing so our goal is to bound $\xi := \mathbb{E}_{h\sim P}\left[\exp\left(\frac{K(h)^2}{2m^{1-2\alpha}}\right)\right]$. The shape of K invites us to consider a Gaussian prior: indeed, we notice that if $P = \mathcal{N}(0, \sigma^2 I_N)$ with $0 < \sigma^2 < \frac{m^{1-2\alpha}}{B^2}$, then $\xi < +\infty$. Notice that we cannot take just any Gaussian prior; however, with a small $\alpha$, the condition $0 < \sigma^2 < \frac{m^{1-2\alpha}}{B^2}$ may become quite loose. Thus, we have the following:
Theorem 5.
Let $\alpha \in \mathbb{R}$ and $N \ge 6$. Assume that the loss $\ell$ is HYPE(K) compliant with $K(h) = B\|h\| + C$, with $B > 0$, $C \ge 0$. For a prior distribution, consider any Gaussian $P = \mathcal{N}(0, \sigma^2 I_N)$ with $\sigma^2 = t\,\frac{m^{1-2\alpha}}{B^2}$, $0 < t < 1$. Then, for any $\delta \in [0;1]$, with probability at least $1-\delta$ over size-m random samples S, simultaneously for all $Q \in \mathcal{M}_1^+(\mathcal{H})$ such that $Q \ll\gg P$, we have:
$$\mathbb{E}_{h\sim Q}[R(h)] \le \mathbb{E}_{h\sim Q}[R_S(h)] + \frac{\mathrm{KL}(Q\|P) + \log(2/\delta)}{m^\alpha} + \frac{C^2}{2m^{1-\alpha}}\left(1 + \frac{1}{f(t)}\right) + \frac{N}{m^\alpha}\left(\log\left(1 + \frac{C}{\sqrt{2f(t)\,m^{1-2\alpha}}}\right) + \log\frac{1}{\sqrt{1-t}}\right)$$
where $f(t) = \frac{1-t}{t}$.
The proof is deferred to Section 7.2. To compare our result with those found in the literature, we can fix $\alpha = 1/2$. Doing so, we lose the dependency on m in the choice of the variance of the prior (which then depends only on B), but we recover the classic decreasing factor $1/\sqrt{m}$.
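Since every term of Theorem 5 is explicit, its right-hand side can be evaluated directly. The following minimal sketch (ours) evaluates it with placeholder values for the posterior-averaged empirical risk and the KL term:

```python
import numpy as np

def theorem5_rhs(emp_risk, kl, m, alpha, delta, N, C, t):
    # Right-hand side of Theorem 5, with f(t) = (1 - t) / t.
    f = (1 - t) / t
    return (emp_risk
            + (kl + np.log(2 / delta)) / m**alpha
            + C**2 / (2 * m**(1 - alpha)) * (1 + 1 / f)
            + N / m**alpha * (np.log(1 + C / np.sqrt(2 * f * m**(1 - 2 * alpha)))
                              + np.log(1 / np.sqrt(1 - t))))

print(theorem5_rhs(emp_risk=0.5, kl=10.0, m=10_000, alpha=0.5,
                   delta=0.05, N=10, C=3.0, t=0.5))
```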
Remark 7.
Notice that so far we did not use Section 4, even though we could (because K is polynomial in $\|w\|$ and we consider Gaussian priors and posteriors, so Equation (3) is satisfied). Doing so, we obtained a bound which depends linearly on the dimension N. In practice, N may be too big, and in this case, introducing an adapted softening function $\psi$ (one can think, for instance, of $\psi(x) = x\,\mathbb{1}\{x \le 1\} + \mathbb{1}\{x > 1\}$) is a powerful tool to attenuate the weight of the exponential moment. This also extends the class of authorised Gaussian priors, by avoiding the need to stick with a variance $\sigma^2 = t\,\frac{m^{1-2\alpha}}{B^2}$, $0 < t < 1$.

5.2. Numerical Experiment

5.2.1. Setting

In this section we apply Theorem 5 to a concrete linear regression problem. The situation is as follows: we want to approximate the function $f(x) = |\langle w^*, x\rangle|$, where $w^* \in \mathbb{R}^d$. We assume that $w^*$ lies in a hypercube centred at 0 of half-side $c > 0$, i.e., the set $\{(w_i)_{i=1,\ldots,d} : \forall i,\ |w_i| \le c\}$; hence $\|w^*\| \le c\sqrt{d}$.
Furthermore, we assume that the input data are drawn inside a hypercube of half-side $e > 0$, i.e., $\mathcal{X} = [-e, e]^d$, so that for any data point x we have $\|x\| \le e\sqrt{d}$.
For any data point $x \in \mathcal{X}$, we define $y = f(x)$. As before, we identify the hypothesis set $\mathcal{H}$ with the weight space $\mathcal{W} = \mathbb{R}^d$. As described in Section 5.1, we set $\ell(h_w, (x,y)) = |\langle w, x\rangle - y|$. We then remark that for any $(w, x, y)$:
$$\ell(h_w, (x,y)) \le |\langle w, x\rangle| + |y| \le \|w\|\,\|x\| + |\langle w^*, x\rangle| \le e\sqrt{d}\,\|w\| + \|w^*\|\,\|x\| \le e\sqrt{d}\,\|w\| + c\sqrt{d}\cdot e\sqrt{d} = e\sqrt{d}\,\|w\| + cde.$$
Then we can define $B = e\sqrt{d}$ and $C = cde$ to apply Theorem 5. We restrict (as before) the class of distributions over $\mathcal{W} = \mathbb{R}^d$ to d-dimensional Gaussians:
$$\left\{\mathcal{N}(w, \sigma^2 I_d) \;:\; w \in \mathcal{W},\; \sigma^2 \in \mathbb{R}^+\right\},$$
which is the set of candidate distributions for this learning problem. Recall that in practice, for a fixed $\alpha \in \mathbb{R}$, we are only allowed to consider priors whose variance satisfies $\sigma^2 \in \left(0;\, \frac{m^{1-2\alpha}}{B^2}\right)$. We want to learn an optimised predictor (posterior) given a random dataset $S = ((X_i, Y_i))_{i=1,\ldots,m}$. To do so, we consider synthetic data.

5.2.2. Synthetic Data

We draw $w^*$ from a Gaussian (with mean 0 and standard deviation equal to 5) truncated to the hypercube centred at 0 of half-side $c > 0$. We then generate synthetic data according to the following process: for a fixed sample size m, we draw $X_1, \ldots, X_m$ from a Gaussian (with mean 0 and standard deviation equal to 5) truncated to the hypercube centred at 0 of half-side $e > 0$.
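A short sketch (ours) of this data-generating process, using scipy.stats.truncnorm for the coordinate-wise truncated Gaussian draws; the target follows the definition of Section 5.2.1 (reconstructed there as $f(x) = |\langle w^*, x\rangle|$):

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)
d, m, c, e, sd = 10, 500, 10.0, 10.0, 5.0

def truncated_gaussian(half_side, size):
    # N(0, sd^2) truncated to [-half_side, half_side], drawn coordinate-wise.
    a, b = -half_side / sd, half_side / sd   # bounds in standardised units
    return truncnorm.rvs(a, b, loc=0.0, scale=sd, size=size, random_state=rng)

w_star = truncated_gaussian(c, size=d)       # target weights in [-c, c]^d
X = truncated_gaussian(e, size=(m, d))       # inputs in [-e, e]^d
Y = np.abs(X @ w_star)                       # y = f(x) = |<w*, x>|
```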

5.2.3. Experiment

First, we fix c = e = 10. Our goal here is to obtain a generalisation bound for our problem. For a fixed $\alpha \in \mathbb{R}$, we arbitrarily fix $t_0 = 1/2$ and $\sigma_0^2 = t_0\,\frac{m^{1-2\alpha}}{B^2}$, and we define our naive prior as $P_0 = \mathcal{N}(0, \sigma_0^2 I_d)$. For a given dataset S, we define our posterior as $Q(S) := \mathcal{N}(\hat{w}(S), \sigma^2 I_d)$, with $\sigma^2 \in \{\sigma_0^2/2, \ldots, \sigma_0^2/2^J\}$ ($J = \log_2(m)$) chosen to minimise the bound among the candidates. Note that all the previously defined parameters depend on $\alpha$, which is why we choose $\alpha \in \{i/\mathrm{step} : 0 \le i \le \mathrm{step}\}$ for a fixed integer step (in practice, step = 8 or 16) and also take the value of $\alpha$ minimising the bound among the candidates. Figure 2 contains two plots, one with d = 10 and the other with d = 50. Each plot shows the right-hand side of Theorem 5 with an optimised $\alpha$ at each sample size.
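The two-level grid search just described can be written compactly. In the sketch below (ours), `bound(sigma2, alpha)` is a hypothetical callable evaluating the right-hand side of Theorem 5 for the posterior $\mathcal{N}(\hat{w}(S), \sigma^2 I_d)$:

```python
import numpy as np

def optimise_bound(bound, m, B, t0=0.5, steps=8):
    # Grid search over alpha in {i/steps} and the posterior variances
    # sigma0^2/2^j, j = 1..J, returning the best triple found.
    # The prior variance is tied to alpha via sigma0^2 = t0 * m^(1-2a) / B^2.
    J = int(np.log2(m))
    best = (np.inf, None, None)
    for i in range(steps + 1):
        alpha = i / steps
        sigma0_sq = t0 * m**(1 - 2 * alpha) / B**2  # admissible prior variance
        for j in range(1, J + 1):
            val = bound(sigma0_sq / 2**j, alpha)
            if val < best[0]:
                best = (val, alpha, sigma0_sq / 2**j)
    return best  # (bound value, alpha, posterior variance)
```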

5.2.4. Discussion

To the best of our knowledge, this is the first attempt to numerically compute PAC-Bayes bounds for unbounded problems, which makes a comparison with other results impossible. We stress, however, that obtaining numerical values for the bound without assuming a bounded loss is a significant first step. Furthermore, we consider a rather hard problem: f is not linear, so we cannot rely on a linear approximation fitting the data perfectly, and the larger the dimension, the larger the error, as illustrated by Figure 2. Thus, for any posterior Q, the quantity $\mathbb{E}_{h\sim Q}[R(h)]$ is potentially large in practice, and our bound might not be tight. Finally, notice that optimising $\alpha$ (instead of taking $\alpha = 1/2$ to recover a classic convergence rate) leads to a significantly better bound; a numerical example of this assertion is presented in Section 3.2. We aim to conduct further studies treating the convergence rate as a hyperparameter to optimise, rather than selecting the same rate for all terms in the bound.

6. Existing Work

6.1. Germain et al., 2016

In Germain et al. [11] (Section 4), a PAC-Bayesian bound has been provided for all sub-gamma losses with a variance $t^2$ and scale parameter $c > 0$, under a data distribution $\mu$ and a prior P, i.e., losses such that for every $\lambda \in \left(0, \frac{1}{c}\right)$ the following is satisfied:
$$\log\mathbb{E}_{h\sim P}\,\mathbb{E}_S\left[e^{\lambda(R(h) - R_S(h))}\right] \le \frac{t^2}{c^2}\big(-\log(1 - c\lambda) - \lambda c\big) \le \frac{\lambda^2 t^2}{2(1 - c\lambda)}.$$
Note that a sub-gamma loss (with regards to μ and P) is potentially unbounded. Germain et al. then propose the following PAC-Bayesian bound:
Theorem 6.
Ref. [11]. If the loss $\ell$ is sub-gamma with a variance $t^2$ and scale parameter c, under the data distribution $\mu$ and a fixed prior $P \in \mathcal{M}_1^+(\mathcal{H})$, then for any $\delta \in [0;1]$, with probability at least $1-\delta$ over size-m random samples, simultaneously for all $Q \ll P$ we have:
$$\mathbb{E}_{h\sim Q}\,R(h) \le \mathbb{E}_{h\sim Q}\,R_S(h) + \frac{\mathrm{KL}(Q\|P) + \log(1/\delta)}{m} + \frac{t^2}{2(1-c)}.$$
Theorem 6 will be quoted several times in this paper, as it is a concrete PAC-Bayesian bound designed to overcome the constraint of a bounded loss. It is also one of the few such bounds found in the literature.
Can we apply this theorem to the bounded case? The answer is yes: we remark that, thanks to Hoeffding's lemma, if $\ell$ is bounded by $C > 0$, then for any $h \in \mathcal{H}$ it holds that $R_S(h) - R(h) \in [-C, C]$ almost surely. So, for all $\lambda \in \mathbb{R}$, $\log\mathbb{E}_S\left[e^{\lambda(R(h) - R_S(h))}\right] \le \frac{\lambda^2 C^2}{2}$. Therefore, for any prior P, we have:
$$\log\mathbb{E}_{h\sim P}\,\mathbb{E}_S\left[e^{\lambda(R(h) - R_S(h))}\right] \le \frac{\lambda^2 C^2}{2}.$$
Thus, $\ell$ is sub-gamma with variance $C^2$ and scale parameter 0, and Theorem 6 can be applied with $t^2 = C^2$, $c = 0$.
Comparison with Proposition 1. We remark that by taking K = C and $\alpha = 1$ in Proposition 1, we recover Theorem 6. However, our approach allows us to say more: if we can obtain a more precise form of K such that $K(h) \le C$ for all $h \in \mathcal{H}$, with K non-constant, then Theorem 3 ensures that:
$$\frac{1}{m^\alpha}\log\mathbb{E}_{h\sim P}\left[\exp\left(\frac{K(h)^2}{2m^{1-2\alpha}}\right)\right] \le \frac{C^2}{2m^{1-\alpha}}.$$
Thus, having precise information on the behaviour of the loss function $\ell$ with respect to the predictor h allows us to obtain tighter control of the exponential moment, and hence a tighter bound.
Remark 8.
We can see that Theorem 6 cannot control the factor $C^2/2$. However, ref. [11] remarked on this apparent weakness and partially corrected it in [11] (Section 4, Equations (13) and (14)): they proposed to balance the influence of m between the different terms of the PAC-Bayes bound by giving all terms the same convergence rate $1/\sqrt{m}$.
We can then see Proposition 1 as a proper generalisation of Germain et al. [11] (Section 4, Equations (13) and (14)): indeed, our bound properly exhibits the influence of the parameter $\alpha$. Thus we understand (and Lemma 1 proves it) that the choice of $\alpha$ deserves a study in itself, as it is now a parameter of our optimisation problem. This fact has already been highlighted in Alquier et al. [10] (Theorem 4.1) (where $\lambda := m^\alpha$).

6.2. Holland, 2019

In [13], Holland proposed a PAC-Bayesian inequality for unbounded losses. For that, he introduced a function $\psi$ verifying a few specific conditions, different from those used in Section 4 to define our set of softening functions. Indeed, he considered a function $\psi$ such that:
  • $\psi$ is bounded,
  • $\psi$ is non-decreasing,
  • there exists $b > 0$ such that for all $u \in \mathbb{R}$:
$$-\log\left(1 - u + \frac{u^2}{b}\right) \le \psi(u) \le \log\left(1 + u + \frac{u^2}{b}\right). \tag{4}$$
We remark that, as Holland did, we supposed our softening functions to be non-decreasing. We chose softening functions to be equal to the identity function ($x \mapsto x$) on [0,1], which is quite restrictive; however, we only impose that softening functions lie below the identity on $[1, +\infty)$, whereas Holland supposed $\psi$ to be bounded and to satisfy Equation (4). A concrete example of such a function $\psi$ is the piecewise polynomial function of Catoni and Giulini [21], defined by:
$$\psi(u) = \begin{cases} \dfrac{2\sqrt{2}}{3} & \text{if } u \ge \sqrt{2},\\ u - \dfrac{u^3}{6} & \text{if } u \in [-\sqrt{2}, \sqrt{2}],\\ -\dfrac{2\sqrt{2}}{3} & \text{otherwise.} \end{cases}$$
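Since the polynomial part $u - u^3/6$ is increasing exactly on $[-\sqrt{2}, \sqrt{2}]$ and matches the plateaus at the endpoints, the whole function can be implemented as a clip followed by the polynomial; a minimal sketch (ours):

```python
import numpy as np

def psi_cg(u):
    # Catoni-Giulini influence function: u - u^3/6 on [-sqrt(2), sqrt(2)],
    # saturating at +/- 2*sqrt(2)/3 outside that interval.
    u = np.clip(u, -np.sqrt(2), np.sqrt(2))
    return u - u**3 / 6

u = np.linspace(-3.0, 3.0, 7)
print(psi_cg(u))   # bounded by 2*sqrt(2)/3 ~ 0.943 and non-decreasing
```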
As in Section 4, we consider the $\psi$-empirical risk $R_{S,\psi,t}$ for any $t > 0$. Holland proved his theorem under the following assumptions:
  • Bounds on lower-order moments: for all $h \in \mathcal{H}$, $\mathbb{E}_{Z\sim\mu}[\ell(h,Z)^2] \le M_2 < +\infty$ and $\mathbb{E}_{Z\sim\mu}[\ell(h,Z)^3] \le M_3 < +\infty$.
  • Bound on the risk: for all $h \in \mathcal{H}$, we suppose $R(h) \le \sqrt{m M_2/(4\log(\delta^{-1}))}$.
  • Large enough confidence: we require $\delta \le e^{-1/9}$.
Now we can state Holland’s theorem.
Theorem 7.
Ref. [13]. Let P be a prior distribution on model H . Let the three assumptions listed above hold. Setting t 2 = m M 2 / ( 2 log ( δ 1 ) ) , then for any δ [ 0 ; 1 ] , with probability of at least 1 δ over the random draw of the size-m sample S, simultaneously for all Q it holds that:
E h Q R ( h ) E h Q R S , ψ , t ( h ) + 1 m KL ( Q | | P ) + 1 2 log 8 π M 2 δ 2 1 + 1 m ν ( H ) + O 1 m
where:
ν ( H ) : = E h P exp m ( R ( h ) R S , ψ , t ( h ) ) E h P exp R ( h ) R S , ψ , t ( h ) .

7. Proofs

7.1. Proof of Theorem 1

Proof. 
Let $F : \mathbb{R}^+\times\mathbb{R}^+ \to \mathbb{R}$ be a convex function, P a fixed prior, and $\delta \in [0,1]$. Since $\mathbb{E}_{h\sim P}\left[e^{F(R_S(h),R(h))}\right]$ is a nonnegative random variable (a function of the random sample S), Markov's inequality gives:
$$\mathbb{P}\left(\mathbb{E}_{h\sim P}\left[e^{F(R_S(h),R(h))}\right] > \frac{1}{\delta}\,\mathbb{E}_S\,\mathbb{E}_{h\sim P}\left[e^{F(R_S(h),R(h))}\right]\right) \le \delta.$$
So with probability at least $1-\delta$, we have:
$$\mathbb{E}_{h\sim P}\left[e^{F(R_S(h),R(h))}\right] \le \frac{1}{\delta}\,\mathbb{E}_S\,\mathbb{E}_{h\sim P}\left[e^{F(R_S(h),R(h))}\right] = \frac{\chi}{\delta}.$$
Applying the log function to each side of this inequality gives, with probability at least $1-\delta$ over samples S:
$$\log\mathbb{E}_{h\sim P}\left[e^{F(R_S(h),R(h))}\right] \le \log\frac{\chi}{\delta}.$$
We now write $A := \log\mathbb{E}_{h\sim P}\left[e^{F(R_S(h),R(h))}\right]$.
Furthermore, if we denote by $\frac{dQ}{dP}$ the Radon-Nikodym derivative of Q with respect to P when $Q \ll P$, we then have, for all Q such that $Q \ll\gg P$ (the direction $P \ll Q$ guarantees that $\frac{dQ}{dP} > 0$ Q-almost surely, so that its inverse below is well defined):
$$A = \log\mathbb{E}_{h\sim Q}\left[\frac{dP}{dQ}\,e^{F(R_S(h),R(h))}\right] = \log\mathbb{E}_{h\sim Q}\left[\left(\frac{dQ}{dP}\right)^{-1} e^{F(R_S(h),R(h))}\right] \qquad \left(\frac{dP}{dQ} = \left(\frac{dQ}{dP}\right)^{-1}\right)$$
and by concavity of log and Jensen's inequality,
$$A \ge -\mathbb{E}_{h\sim Q}\left[\log\frac{dQ}{dP}\right] + \mathbb{E}_{h\sim Q}\big[F(R_S(h),R(h))\big] = -\mathrm{KL}(Q\|P) + \mathbb{E}_{h\sim Q}\big[F(R_S(h),R(h))\big],$$
while by convexity of F with Jensen's inequality,
$$A \ge -\mathrm{KL}(Q\|P) + F\big(\mathbb{E}_{h\sim Q}\,R_S(h),\;\mathbb{E}_{h\sim Q}\,R(h)\big).$$
Hence, for Q such that $Q \ll\gg P$,
$$F\big(\mathbb{E}_{h\sim Q}\,R_S(h),\;\mathbb{E}_{h\sim Q}\,R(h)\big) \le \mathrm{KL}(Q\|P) + A.$$
So with probability at least $1-\delta$, for Q such that $Q \ll\gg P$,
$$F\big(\mathbb{E}_{h\sim Q}\,R_S(h),\;\mathbb{E}_{h\sim Q}\,R(h)\big) \le \mathrm{KL}(Q\|P) + \log\frac{\chi}{\delta}.$$
This completes the proof of Theorem 1. □

7.2. Proof of Theorem 5

We first provide a technical result. Recall that:
$$\xi = \mathbb{E}_{h\sim P}\left[\exp\left(\frac{K(h)^2}{2m^{1-2\alpha}}\right)\right].$$
Proposition 3.
Let $\alpha \in \mathbb{R}$. Suppose the loss $\ell$ is HYPE(K) compliant with $K(h) = B\|h\| + C$, with $B > 0$, $C \ge 0$. Then, for any Gaussian prior $P = \mathcal{N}(0, \sigma^2 I_N)$ with $\sigma^2 = t\,\frac{m^{1-2\alpha}}{B^2}$, $0 < t < 1$, and $N \ge 6$, we have:
$$\xi \le 2\exp\left(\frac{C^2}{2m^{1-2\alpha}}\cdot\frac{1+f(t)}{f(t)}\right)\sqrt{\frac{1}{1-t}}^{\,N}\left(1 + \frac{C}{\sqrt{2f(t)\,m^{1-2\alpha}}}\right)^{N-1}$$
with $f(t) = \frac{1-t}{t}$.
Proof. 
We recall that $\sigma^2 = t\,\frac{m^{1-2\alpha}}{B^2}$. By writing the expectation and K(h) explicitly, we thus obtain:
$$\begin{aligned}
\xi &= \frac{1}{\sqrt{(2\pi\sigma^2)^N}}\int_{\mathbb{R}^N}\exp\left(\frac{(B\|h\|+C)^2}{2m^{1-2\alpha}} - \frac{\|h\|^2 B^2}{2t\,m^{1-2\alpha}}\right)dh\\
&= \frac{1}{\sqrt{(2\pi\sigma^2)^N}}\int_{\mathbb{R}^N}\exp\left(-\frac{1}{2m^{1-2\alpha}}\Big(f(t)B^2\|h\|^2 - 2BC\|h\| - C^2\Big)\right)dh\\
&= \frac{1}{\sqrt{(2\pi\sigma^2)^N}}\int_{\mathbb{R}^N}\exp\left(-\frac{B^2 f(t)}{2m^{1-2\alpha}}\left(\|h\|^2 - \frac{2C\|h\|}{Bf(t)} - \frac{C^2}{B^2 f(t)}\right)\right)dh\\
&= \exp\left(\frac{C^2}{2m^{1-2\alpha}}\cdot\frac{1+f(t)}{f(t)}\right)\frac{1}{\sqrt{(2\pi\sigma^2)^N}}\int_{\mathbb{R}^N}\exp\left(-\frac{B^2 f(t)}{2m^{1-2\alpha}}\left(\|h\| - \frac{C}{Bf(t)}\right)^2\right)dh.
\end{aligned}$$
We will use the spherical coordinates in N-dimensional Euclidean space given in [22]:
$$\varphi : (h_1, \ldots, h_N) \mapsto (r, \varphi_1, \ldots, \varphi_{N-1}),$$
where in particular $r = \|h\|$, and the Jacobian of $\varphi$ gives the volume element:
$$d^N V = r^{N-1}\prod_{k=1}^{N-2}\sin^k(\varphi_{N-1-k})\; dr\, d\varphi_1\cdots d\varphi_{N-1} = r^{N-1}\, dr\; d_{S^{N-1}}V.$$
Let us also note that, as given in Blumenson [22] (page 66), the surface of the sphere of radius 1 in N-dimensional space is:
$$\int_{\varphi_1,\ldots,\varphi_{N-1}} d_{S^{N-1}}V = \frac{2\pi^{N/2}}{\Gamma\left(\frac{N}{2}\right)},$$
where $\Gamma$ is the Gamma function, defined as $\Gamma(x) = \int_0^{+\infty} t^{x-1}e^{-t}\,dt$ for $x > 0$.
Then, if we set:
$$A := \int_{\mathbb{R}^N}\exp\left(-\frac{B^2 f(t)}{2m^{1-2\alpha}}\left(\|h\| - \frac{C}{Bf(t)}\right)^2\right)dh,$$
we obtain by a change of variables (spherical coordinates, then the translation $r \to r + \frac{C}{Bf(t)}$, then the binomial theorem):
$$\begin{aligned}
A &= \int \exp\left(-\frac{B^2 f(t)}{2m^{1-2\alpha}}\left(r - \frac{C}{Bf(t)}\right)^2\right)d^N V\\
&= \frac{2\pi^{N/2}}{\Gamma\left(\frac{N}{2}\right)}\int_0^{+\infty}\exp\left(-\frac{B^2 f(t)}{2m^{1-2\alpha}}\left(r - \frac{C}{Bf(t)}\right)^2\right)r^{N-1}\,dr\\
&= \frac{2\pi^{N/2}}{\Gamma\left(\frac{N}{2}\right)}\int_{-C/(Bf(t))}^{+\infty}\left(r + \frac{C}{Bf(t)}\right)^{N-1}\exp\left(-\frac{B^2 f(t)}{2m^{1-2\alpha}}r^2\right)dr\\
&= \frac{2\pi^{N/2}}{\Gamma\left(\frac{N}{2}\right)}\sum_{k=0}^{N-1}\binom{N-1}{k}\left(\frac{C}{Bf(t)}\right)^{N-k-1}\int_{-C/(Bf(t))}^{+\infty} r^k\exp\left(-\frac{B^2 f(t)}{2m^{1-2\alpha}}r^2\right)dr.
\end{aligned}$$
We fix a random variable X such that:
$$X \sim \mathcal{N}\!\left(0,\; \frac{m^{1-2\alpha}}{B^2 f(t)}\right).$$
We then have, for any nonnegative integer k, if k is even:
$$\int_{-C/(Bf(t))}^{+\infty} r^k\exp\left(-\frac{B^2 f(t)}{2m^{1-2\alpha}}r^2\right)dr \le \int_{-\infty}^{+\infty} r^k\exp\left(-\frac{B^2 f(t)}{2m^{1-2\alpha}}r^2\right)dr = \sqrt{\frac{2\pi m^{1-2\alpha}}{B^2 f(t)}}\;\mathbb{E}\big[|X|^k\big].$$
And if k is odd:
$$\int_{-C/(Bf(t))}^{+\infty} r^k\exp\left(-\frac{B^2 f(t)}{2m^{1-2\alpha}}r^2\right)dr \le \int_0^{+\infty} r^k\exp\left(-\frac{B^2 f(t)}{2m^{1-2\alpha}}r^2\right)dr = \sqrt{\frac{2\pi m^{1-2\alpha}}{B^2 f(t)}}\;\mathbb{E}\big[|X|^k\,\mathbb{1}\{X \ge 0\}\big] \le \sqrt{\frac{2\pi m^{1-2\alpha}}{B^2 f(t)}}\;\mathbb{E}\big[|X|^k\big].$$
So we have:
$$A \le \frac{2\pi^{N/2}}{\Gamma\left(\frac{N}{2}\right)}\sum_{k=0}^{N-1}\binom{N-1}{k}\left(\frac{C}{Bf(t)}\right)^{N-k-1}\sqrt{\frac{2\pi m^{1-2\alpha}}{B^2 f(t)}}\;\mathbb{E}\big[|X|^k\big].$$
As given in [23], we have for any k:
$$\mathbb{E}\big[|X|^k\big] = \left(\frac{m^{1-2\alpha}}{B^2 f(t)}\right)^{k/2}\frac{2^{k/2}\,\Gamma\left(\frac{k+1}{2}\right)}{\sqrt{\pi}}.$$
So finally:
$$A \le 2\pi^{N/2}\sum_{k=0}^{N-1}\binom{N-1}{k}\left(\frac{C}{Bf(t)}\right)^{N-k-1}\sqrt{\frac{2m^{1-2\alpha}}{B^2 f(t)}}^{\,k+1}\,\frac{\Gamma\left(\frac{k+1}{2}\right)}{\Gamma\left(\frac{N}{2}\right)}.$$
Lemma 3.
If $N \ge 6$, then:
$$\max_{k=0..N-1}\frac{\Gamma\left(\frac{k+1}{2}\right)}{\Gamma\left(\frac{N}{2}\right)} = 1.$$
Proof. 
As noted in the introduction of Srinivasan and Zvengrowski [24], Gauss [25] (page 147) proved that $\Gamma$ is a monotonically increasing function on the interval $[x_0, +\infty)$, where $x_0 \in [1.46, 1.47]$. So, for $2 \le k \le N-1$, $\Gamma\left(\frac{k+1}{2}\right) \le \Gamma\left(\frac{N}{2}\right)$. And because $\Gamma(1/2) = \sqrt{\pi}$ and $\Gamma(1) = 1$, we have:
$$\max_{k=0..N-1}\frac{\Gamma\left(\frac{k+1}{2}\right)}{\Gamma\left(\frac{N}{2}\right)} = \max\left(\frac{\sqrt{\pi}}{\Gamma\left(\frac{N}{2}\right)},\; \frac{\Gamma\left(\frac{(N-1)+1}{2}\right)}{\Gamma\left(\frac{N}{2}\right)}\right) = \max\left(\frac{\sqrt{\pi}}{\Gamma\left(\frac{N}{2}\right)},\; 1\right).$$
Because $N \ge 6$ and $\Gamma$ is monotonically increasing on $[3; +\infty)$, we have $\Gamma(N/2) \ge \Gamma(3) = 2 \ge \sqrt{\pi}$. Hence the result. □
Using Lemma 3 allows us to write:
$$A \le 2\pi^{N/2}\sum_{k=0}^{N-1}\binom{N-1}{k}\left(\frac{C}{Bf(t)}\right)^{N-k-1}\sqrt{\frac{2m^{1-2\alpha}}{B^2 f(t)}}^{\,k+1}.$$
We recall that $\sigma^2 = t\,\frac{m^{1-2\alpha}}{B^2}$ and $f(t) = \frac{1-t}{t}$, so that $t f(t) = 1-t$. Then we can write:
$$A \le 2\pi^{N/2}\sum_{k=0}^{N-1}\binom{N-1}{k}\left(\frac{C}{Bf(t)}\right)^{N-k-1}\sqrt{\frac{2\sigma^2}{1-t}}^{\,k+1}.$$
We now conclude with the final bound on $\xi$:
$$\begin{aligned}
\xi &\le \exp\left(\frac{C^2}{2m^{1-2\alpha}}\cdot\frac{1+f(t)}{f(t)}\right)\frac{1}{\sqrt{(2\pi\sigma^2)^N}}\,A\\
&\le 2\exp\left(\frac{C^2}{2m^{1-2\alpha}}\cdot\frac{1+f(t)}{f(t)}\right)\sum_{k=0}^{N-1}\binom{N-1}{k}\left(\frac{C}{Bf(t)}\right)^{N-k-1}\sqrt{\frac{1}{1-t}}^{\,k+1}\sqrt{\frac{B^2}{2t\,m^{1-2\alpha}}}^{\,N-k-1}\\
&= 2\exp\left(\frac{C^2}{2m^{1-2\alpha}}\cdot\frac{1+f(t)}{f(t)}\right)\sum_{k=0}^{N-1}\binom{N-1}{k}\left(C\sqrt{\frac{t}{(1-t)^2\, 2m^{1-2\alpha}}}\right)^{N-k-1}\sqrt{\frac{1}{1-t}}^{\,k+1}\\
&= 2\exp\left(\frac{C^2}{2m^{1-2\alpha}}\cdot\frac{1+f(t)}{f(t)}\right)\sqrt{\frac{1}{1-t}}^{\,N}\sum_{k=0}^{N-1}\binom{N-1}{k}\left(\frac{C}{\sqrt{2f(t)\,m^{1-2\alpha}}}\right)^{N-k-1}\\
&= 2\exp\left(\frac{C^2}{2m^{1-2\alpha}}\cdot\frac{1+f(t)}{f(t)}\right)\sqrt{\frac{1}{1-t}}^{\,N}\left(1 + \frac{C}{\sqrt{2f(t)\,m^{1-2\alpha}}}\right)^{N-1}.
\end{aligned}$$
This completes the proof of Proposition 3. □
Proof of Theorem 5.
We combine Theorem 3 with Proposition 3. We also upper-bound N − 1 by N. □

Author Contributions

Conceptualization, M.H., B.G. and J.S.-T.; Formal analysis, M.H., B.G. and O.R.; Project administration, B.G.; Supervision, B.G.; Writing—original draft, M.H., B.G. and O.R.; Writing—review and editing, M.H., B.G., O.R. and J.S.-T. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported in part by the U.S. Army Research Laboratory and the U. S. Army Research Office, and by the U.K. Ministry of Defence and the U.K. Engineering and Physical Sciences Research Council (EPSRC) under grant number EP/R013616/1. BG acknowledges partial support from the French National Agency for Research, grants ANR-18-CE40-0016-01 and ANR-18-CE23-0015-02.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shawe-Taylor, J.; Williamson, R.C. A PAC analysis of a Bayes estimator. In Proceedings of the 10th Annual Conference on Computational Learning Theory, Nashville, TN, USA, 6–9 July 1997; ACM: New York, NY, USA, 1997; pp. 2–9.
  2. McAllester, D.A. Some PAC-Bayesian theorems. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, Madison, WI, USA, 24–26 July 1998; ACM: New York, NY, USA, 1998; pp. 230–234.
  3. McAllester, D.A. PAC-Bayesian model averaging. In Proceedings of the Twelfth Annual Conference on Computational Learning Theory, Santa Cruz, CA, USA, 7–9 July 1999; ACM: New York, NY, USA, 1999; pp. 164–170.
  4. Guedj, B. A Primer on PAC-Bayesian Learning. arXiv 2019, arXiv:1901.05353.
  5. Rivasplata, O.; Kuzborskij, I.; Szepesvári, C.; Shawe-Taylor, J. PAC-Bayes Analysis Beyond the Usual Bounds. In Proceedings of Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, Online, 6–12 December 2020.
  6. Seeger, M. PAC-Bayesian Generalization Error Bounds for Gaussian Process Classification. J. Mach. Learn. Res. 2002, 3, 233–269.
  7. Langford, J. Tutorial on practical prediction theory for classification. J. Mach. Learn. Res. 2005, 6, 273–306.
  8. Catoni, O. PAC-Bayesian Supervised Classification: The Thermodynamics of Statistical Learning; Institute of Mathematical Statistics: Waite Hill, OH, USA, 2007.
  9. Germain, P.; Lacasse, A.; Laviolette, F.; Marchand, M. PAC-Bayesian Learning of Linear Classifiers. In Proceedings of the 26th Annual International Conference on Machine Learning, Montreal, QC, Canada, 14–18 June 2009; Association for Computing Machinery: New York, NY, USA, 2009; pp. 353–360.
  10. Alquier, P.; Ridgway, J.; Chopin, N. On the properties of variational approximations of Gibbs posteriors. J. Mach. Learn. Res. 2016, 17, 1–41.
  11. Germain, P.; Bach, F.; Lacoste, A.; Lacoste-Julien, S. PAC-Bayesian Theory Meets Bayesian Inference. In Advances in Neural Information Processing Systems 29; Lee, D.D., Sugiyama, M., Luxburg, U.V., Guyon, I., Garnett, R., Eds.; Curran Associates, Inc.: New York, NY, USA, 2016; pp. 1884–1892.
  12. Alquier, P.; Guedj, B. Simpler PAC-Bayesian bounds for hostile data. Mach. Learn. 2018, 107, 887–902.
  13. Holland, M. PAC-Bayes under potentially heavy tails. In Advances in Neural Information Processing Systems 32; Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., Garnett, R., Eds.; Curran Associates, Inc.: New York, NY, USA, 2019; pp. 2715–2724.
  14. Kuzborskij, I.; Szepesvári, C. Efron-Stein PAC-Bayesian Inequalities. arXiv 2019, arXiv:1909.01931.
  15. Shalaeva, V.; Fakhrizadeh Esfahani, A.; Germain, P.; Petreczky, M. Improved PAC-Bayesian Bounds for Linear Regression. In Proceedings of the AAAI 2020—Thirty-Fourth AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020.
  16. Lever, G.; Laviolette, F.; Shawe-Taylor, J. Distribution-Dependent PAC-Bayes Priors. In Algorithmic Learning Theory; Hutter, M., Stephan, F., Vovk, V., Zeugmann, T., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 119–133.
  17. Lever, G.; Laviolette, F.; Shawe-Taylor, J. Tighter PAC-Bayes Bounds through Distribution-Dependent Priors. Theor. Comput. Sci. 2013, 473, 4–28.
  18. Mhammedi, Z.; Grünwald, P.; Guedj, B. PAC-Bayes Un-Expected Bernstein Inequality. In Advances in Neural Information Processing Systems 32; Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., Garnett, R., Eds.; Curran Associates, Inc.: New York, NY, USA, 2019; pp. 12202–12213.
  19. Parrado-Hernández, E.; Ambroladze, A.; Shawe-Taylor, J.; Sun, S. PAC-Bayes bounds with data dependent priors. J. Mach. Learn. Res. 2012, 13, 3507–3531.
  20. Catoni, O. Challenging the empirical mean and empirical variance: A deviation study. Ann. Inst. H. Poincaré Probab. Statist. 2012, 48, 1148–1185.
  21. Catoni, O.; Giulini, I. Dimension-free PAC-Bayesian bounds for matrices, vectors, and linear least squares regression. arXiv 2017, arXiv:1712.02747.
  22. Blumenson, L.E. A Derivation of n-Dimensional Spherical Coordinates. Am. Math. Mon. 1960, 67, 63–66.
  23. Winkelbauer, A. Moments and Absolute Moments of the Normal Distribution. arXiv 2012, arXiv:1209.4340.
  24. Srinivasan, G.K.; Zvengrowski, P. On the Horizontal Monotonicity of |Γ(s)|. Can. Math. Bull. 2011, 54, 538–543.
  25. Gauss, C.F. Disquisitiones Generales Circa Seriem Infinitam (reprint). In Werke; Cambridge University Press: Cambridge, UK, 2011; Volume 3.
Figure 1. Above, the result of the first experiment, which highlights the importance of optimising α; below, the result of the second experiment, which shows how effective an informed prior is.
Figure 2. Evaluation of the right-hand side in Theorem 5 with d = 10 and d = 50.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Haddouche, M.; Guedj, B.; Rivasplata, O.; Shawe-Taylor, J. PAC-Bayes Unleashed: Generalisation Bounds with Unbounded Losses. Entropy 2021, 23, 1330. https://doi.org/10.3390/e23101330

AMA Style

Haddouche M, Guedj B, Rivasplata O, Shawe-Taylor J. PAC-Bayes Unleashed: Generalisation Bounds with Unbounded Losses. Entropy. 2021; 23(10):1330. https://doi.org/10.3390/e23101330

Chicago/Turabian Style

Haddouche, Maxime, Benjamin Guedj, Omar Rivasplata, and John Shawe-Taylor. 2021. "PAC-Bayes Unleashed: Generalisation Bounds with Unbounded Losses" Entropy 23, no. 10: 1330. https://doi.org/10.3390/e23101330

