In computational learning theory (machine learning and theory of computation), Rademacher complexity, named after Hans Rademacher, measures the richness of a class of sets with respect to a probability distribution. The concept can also be extended to real-valued functions.

Definitions


Rademacher complexity of a set


Given a set $A \subseteq \mathbb{R}^m$, the Rademacher complexity of $A$ is defined as follows:[1][2]: 326

$$\operatorname{Rad}(A) := \frac{1}{m} \mathbb{E}_\sigma \left[ \sup_{a \in A} \sum_{i=1}^m \sigma_i a_i \right]$$

where $\sigma_1, \sigma_2, \dots, \sigma_m$ are independent random variables drawn from the Rademacher distribution, i.e. $\Pr(\sigma_i = +1) = \Pr(\sigma_i = -1) = 1/2$ for $i = 1, 2, \dots, m$, and $a = (a_1, \dots, a_m)$. Some authors take the absolute value of the sum before taking the supremum, but if $A$ is symmetric this makes no difference.
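The definition lends itself to a direct Monte Carlo estimate when $A$ is a finite set of vectors. The following sketch (the function name and example data are illustrative, not from the sources) averages the supremum of $\langle \sigma, a \rangle$ over random sign vectors:

```python
import numpy as np

def rademacher_complexity(A, num_draws=10_000, rng=None):
    """Monte Carlo estimate of Rad(A) for a finite set A of vectors in R^m.

    A: array-like of shape (n_vectors, m); each row is one element of A.
    """
    rng = np.random.default_rng(rng)
    A = np.asarray(A, dtype=float)
    m = A.shape[1]
    # Draw sigma uniformly from {-1, +1}^m and average sup_{a in A} <sigma, a>.
    sigma = rng.choice([-1.0, 1.0], size=(num_draws, m))
    return (sigma @ A.T).max(axis=1).mean() / m

# The two-vector set used in the examples below; the exact value is 1/4.
print(rademacher_complexity([[1, 1], [1, 2]], num_draws=200_000))  # ~0.25
```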

Rademacher complexity of a function class


Let $S = (z_1, z_2, \dots, z_m) \in Z^m$ be a sample of points and consider a function class $\mathcal{F}$ of real-valued functions over $Z$. Then, the empirical Rademacher complexity of $\mathcal{F}$ given $S$ is defined as:

$$\operatorname{Rad}_S(\mathcal{F}) = \frac{1}{m} \mathbb{E}_\sigma \left[ \sup_{f \in \mathcal{F}} \sum_{i=1}^m \sigma_i f(z_i) \right]$$

This can also be written using the previous definition:[2]: 326

$$\operatorname{Rad}_S(\mathcal{F}) = \operatorname{Rad}(\mathcal{F} \circ S)$$

where $\mathcal{F} \circ S$ denotes function composition, i.e.:

$$\mathcal{F} \circ S := \{ (f(z_1), \dots, f(z_m)) \mid f \in \mathcal{F} \}$$

Let $P$ be a probability distribution over $Z$. The Rademacher complexity of the function class $\mathcal{F}$ with respect to $P$ for sample size $m$ is:

$$\operatorname{Rad}_{P,m}(\mathcal{F}) := \mathbb{E}_{S \sim P^m} \left[ \operatorname{Rad}_S(\mathcal{F}) \right]$$

where the above expectation is taken over an independent and identically distributed (i.i.d.) sample $S = (z_1, z_2, \dots, z_m)$ generated according to $P$.
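For a finite function class the supremum is a maximum, so the empirical Rademacher complexity can be estimated the same way by first forming the value vectors $\mathcal{F} \circ S$. A minimal sketch, assuming a toy class of threshold functions (all names and data here are illustrative):

```python
import numpy as np

def empirical_rademacher(functions, sample, num_draws=10_000, rng=None):
    """Monte Carlo estimate of Rad_S(F) for a finite function class F.

    functions: list of callables f: z -> real number
    sample: sequence of points z_1, ..., z_m
    """
    rng = np.random.default_rng(rng)
    m = len(sample)
    # F∘S: one row of values (f(z_1), ..., f(z_m)) per function f.
    FS = np.array([[f(z) for z in sample] for f in functions])
    sigma = rng.choice([-1.0, 1.0], size=(num_draws, m))
    return (sigma @ FS.T).max(axis=1).mean() / m

# Illustration: threshold classifiers f_t(z) = 1{z >= t} on a uniform sample.
rng = np.random.default_rng(0)
F = [lambda z, t=t: float(z >= t) for t in np.linspace(0, 1, 20)]
S = rng.uniform(0, 1, size=10)
print(empirical_rademacher(F, S, num_draws=50_000))
```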

Intuition


The Rademacher complexity is typically applied to a function class of models that are used for classification, with the goal of measuring their ability to classify points drawn from a probability space under arbitrary labellings. When the function class is rich enough, it contains functions that can appropriately adapt to each arrangement of labels, simulated by the random draw of $\sigma_1, \dots, \sigma_m$ under the expectation, so that this quantity in the sum is maximised.

Examples


1. $A$ contains a single vector, e.g., $A = \{(a, b)\} \subset \mathbb{R}^2$. Then:

$$\operatorname{Rad}(A) = \frac{1}{2} \, \mathbb{E}[\sigma_1 a + \sigma_2 b] = 0$$

The same is true for every singleton hypothesis class.[3]: 56

2. $A$ contains two vectors, e.g., $A = \{(1, 1), (1, 2)\} \subset \mathbb{R}^2$. Then:

$$\operatorname{Rad}(A) = \frac{1}{2} \, \mathbb{E}\left[ \max(\sigma_1 + \sigma_2,\ \sigma_1 + 2\sigma_2) \right] = \frac{1}{2} \cdot \frac{3 + 0 + 1 - 2}{4} = \frac{1}{4}$$
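The value $1/4$ can be verified by exact enumeration of the four sign patterns; a short sketch:

```python
from itertools import product

A = [(1, 1), (1, 2)]
m = 2
# Average the supremum over all 2^m = 4 equally likely sign vectors.
total = sum(max(s1 * a + s2 * b for (a, b) in A)
            for s1, s2 in product([-1, 1], repeat=2))
print(total / 2 ** m / m)  # 0.25
```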

Using the Rademacher complexity


The Rademacher complexity can be used to derive data-dependent upper bounds on the learnability of function classes. Intuitively, a function class with smaller Rademacher complexity is easier to learn.

Bounding the representativeness


In machine learning, it is desired to have a training set that represents the true distribution of some sample data $S$. This can be quantified using the notion of representativeness. Denote by $P$ the probability distribution from which the samples are drawn. Denote by $H$ the set of hypotheses (potential classifiers) and denote by $F$ the corresponding set of error functions, i.e., for every hypothesis $h \in H$, there is a function $f_h \in F$ that maps each training sample (features, label) to the error of the classifier $h$ (note that in this case hypothesis and classifier are used interchangeably). For example, in the case that $h$ represents a binary classifier, the error function is a 0–1 loss function, i.e. the error function $f_h$ returns 0 if $h$ correctly classifies a sample and 1 otherwise. We omit the index and write $f$ instead of $f_h$ when the underlying hypothesis is irrelevant. Define:

$L_P(f) := \mathbb{E}_{z \sim P}[f(z)]$ – the expected error of the error function $f$ on the real distribution $P$;
$L_S(f) := \frac{1}{m} \sum_{i=1}^m f(z_i)$ – the estimated (empirical) error of the error function $f$ on the sample $S$.

The representativeness of the sample $S$, with respect to $P$ and $F$, is defined as:

$$\operatorname{Rep}_P(F, S) := \sup_{f \in F} \left( L_P(f) - L_S(f) \right)$$

Smaller representativeness is better, since it provides a way to avoid overfitting: it means that the true error of a classifier is not much higher than its estimated error, and so selecting a classifier that has low estimated error will ensure that the true error is also low. Note however that the concept of representativeness is relative and hence cannot be compared across distinct samples.

The expected representativeness of a sample can be bounded above by twice the Rademacher complexity of the function class:[2]: 326

$$\mathbb{E}_{S \sim P^m} \left[ \operatorname{Rep}_P(F, S) \right] \le 2 \operatorname{Rad}_{P,m}(F)$$
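When $P$ is known (or can be approximated by a large held-out sample) and $F$ is finite, the representativeness gap can be computed directly. A toy sketch, in which the distribution, labels, and hypothesis class are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup (all choices here are illustrative assumptions): points are
# z ~ Uniform(0, 1), the true label is 1{z >= 0.3}, the hypotheses are
# threshold classifiers h_t(z) = 1{z >= t}, and the loss is 0-1.
thresholds = np.linspace(0, 1, 50)
big = rng.uniform(0, 1, 200_000)   # large sample standing in for P
S = rng.uniform(0, 1, 50)          # the training sample

def losses(z, t):
    """0-1 loss matrix: rows index points z, columns index thresholds t."""
    return ((z[:, None] >= t[None, :]) != (z[:, None] >= 0.3)).astype(float)

L_P = losses(big, thresholds).mean(axis=0)   # approximate true errors
L_S = losses(S, thresholds).mean(axis=0)     # empirical errors on S
print((L_P - L_S).max())                     # Rep_P(F, S)
```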

Bounding the generalization error


When the Rademacher complexity is small, it is possible to learn the hypothesis class H using empirical risk minimization.

For example, with a binary error function,[2]: 328 for every $\delta > 0$, with probability at least $1 - \delta$, for every hypothesis $h \in H$:

$$L_P(h) - L_S(h) \le 2 \operatorname{Rad}_{P,m}(F) + \sqrt{\frac{2 \ln(2/\delta)}{m}}$$
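For a sense of the magnitudes involved, the confidence term of this bound is easy to evaluate; a minimal sketch:

```python
import math

def confidence_term(m, delta):
    # The deviation term sqrt(2 ln(2/delta) / m) from the bound above.
    return math.sqrt(2 * math.log(2 / delta) / m)

# With m = 10,000 samples and delta = 0.01 the term is about 0.033, so the
# overall guarantee is only meaningful if the Rademacher term is also small.
print(confidence_term(10_000, 0.01))
```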

Bounding the Rademacher complexity


Since smaller Rademacher complexity is better, it is useful to have upper bounds on the Rademacher complexity of various function sets. The following rules can be used to upper-bound the Rademacher complexity of a set $A \subseteq \mathbb{R}^m$.[2]: 329–330

1. If all vectors in $A$ are translated by a constant vector $a_0 \in \mathbb{R}^m$, then $\operatorname{Rad}(A)$ does not change.

2. If all vectors in $A$ are multiplied by a scalar $c \in \mathbb{R}$, then $\operatorname{Rad}(A)$ is multiplied by $|c|$.

3. $\operatorname{Rad}(A + B) = \operatorname{Rad}(A) + \operatorname{Rad}(B)$, where $A + B := \{ a + b \mid a \in A,\ b \in B \}$.[3]: 56

4. (Kakade &amp; Tewari Lemma) If all vectors in $A$ are operated on by a Lipschitz function, then $\operatorname{Rad}(A)$ is (at most) multiplied by the Lipschitz constant of the function. In particular, if all vectors in $A$ are operated on by a contraction mapping, then $\operatorname{Rad}(A)$ does not increase.

5. The Rademacher complexity of the convex hull of $A$ equals $\operatorname{Rad}(A)$.

6. (Massart Lemma) The Rademacher complexity of a finite set grows logarithmically with the set size. Formally, let $A = \{a_1, \dots, a_N\}$ be a set of $N$ vectors in $\mathbb{R}^m$, and let $\bar{a}$ be the mean of the vectors in $A$. Then:

$$\operatorname{Rad}(A) \le \max_{a \in A} \|a - \bar{a}\|_2 \cdot \frac{\sqrt{2 \ln N}}{m}$$

In particular, if $A$ is a set of binary vectors, the norm is at most $\sqrt{m}$, so:

$$\operatorname{Rad}(A) \le \sqrt{\frac{2 \ln N}{m}}$$
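Massart's lemma is straightforward to sanity-check numerically: the sketch below (with an arbitrary random set of binary vectors) compares a Monte Carlo estimate of $\operatorname{Rad}(A)$ against the bound $\sqrt{2 \ln N / m}$:

```python
import numpy as np

rng = np.random.default_rng(2)
m, N = 50, 40
A = rng.integers(0, 2, size=(N, m)).astype(float)   # N random binary vectors

sigma = rng.choice([-1.0, 1.0], size=(100_000, m))
rad_estimate = (sigma @ A.T).max(axis=1).mean() / m
massart_bound = np.sqrt(2 * np.log(N) / m)
print(rad_estimate, massart_bound)   # the estimate stays below the bound
```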
Related to the VC dimension

Let $H$ be a set family whose VC dimension is $d$. It is known that the growth function of $H$ is bounded as:

for all $m > d + 1$: $\operatorname{Growth}(H, m) \le (em/d)^d$

This means that, for every set $h$ with at most $m$ elements, $|H \cap h| \le (em/d)^d$. The set family $H \cap h$ can be considered as a set of binary vectors over $\mathbb{R}^m$. Substituting this in Massart's lemma gives:

$$\operatorname{Rad}(H \cap h) \le \sqrt{\frac{2 d \ln(em/d)}{m}}$$

With more advanced techniques (Dudley's entropy bound and Haussler's upper bound[4]) one can show, for example, that there exists a constant $C$, such that any class of $\{0,1\}$-indicator functions with Vapnik–Chervonenkis dimension $d$ has Rademacher complexity upper-bounded by $C \sqrt{\frac{d}{m}}$.
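As a numerical illustration of how the two bounds scale with the sample size (the constant $C$ is unspecified in the theorem, so $C = 1$ below is an arbitrary assumption made only to compare shapes):

```python
import numpy as np

d = 10
for m in [100, 1_000, 10_000, 100_000]:
    via_massart = np.sqrt(2 * d * np.log(np.e * m / d) / m)  # Sauer + Massart
    via_dudley = np.sqrt(d / m)                              # C*sqrt(d/m), C = 1 assumed
    print(m, round(via_massart, 4), round(via_dudley, 4))
# The extra log(m) factor makes the Massart-based bound asymptotically looser.
```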

Related to linear operations

The following bounds are related to linear operations on $S$ – a constant set of $m$ vectors in $\mathbb{R}^n$.[2]: 332–333

1. Define $A_2 = \{ (w \cdot x_1, \dots, w \cdot x_m) \mid \|w\|_2 \le 1 \}$ – the set of dot-products of the vectors in $S$ with vectors in the unit ball of the 2-norm. Then:

$$\operatorname{Rad}(A_2) \le \frac{\max_i \|x_i\|_2}{\sqrt{m}}$$

2. Define $A_1 = \{ (w \cdot x_1, \dots, w \cdot x_m) \mid \|w\|_1 \le 1 \}$ – the set of dot-products of the vectors in $S$ with vectors in the unit ball of the 1-norm. Then:

$$\operatorname{Rad}(A_1) \le \max_i \|x_i\|_\infty \cdot \sqrt{\frac{2 \ln(2n)}{m}}$$
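The first bound can be checked by simulation, using the fact that over the 2-norm unit ball the supremum has the closed form $\sup_{\|w\|_2 \le 1} \sum_i \sigma_i (w \cdot x_i) = \|\sum_i \sigma_i x_i\|_2$; the data below are random and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 100, 5
X = rng.normal(size=(m, n))   # the fixed set S of m vectors in R^n

# For each sign vector: sup_{||w|| <= 1} sum_i sigma_i (w . x_i) = ||sum_i sigma_i x_i||.
sigma = rng.choice([-1.0, 1.0], size=(20_000, m))
rad_estimate = np.linalg.norm(sigma @ X, axis=1).mean() / m
bound = np.linalg.norm(X, axis=1).max() / np.sqrt(m)
print(rad_estimate, bound)   # estimate <= bound
```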
Related to covering numbers

The following bound relates the Rademacher complexity of a set $A$ to its external covering number $N^{\text{ext}}(A, r)$ – the number of balls of a given radius $r$ whose union contains $A$. The bound is attributed to Dudley.[2]: 338

Suppose $A \subseteq \mathbb{R}^m$ is a set of vectors whose length (norm) is at most $c$. Then, for every integer $M > 0$:

$$\operatorname{Rad}(A) \le \frac{c \cdot 2^{-M}}{\sqrt{m}} + \frac{6c}{m} \sum_{i=1}^{M} 2^{-i} \sqrt{\ln N^{\text{ext}}(A, c \cdot 2^{-i})}$$

In particular, if $A$ lies in a $d$-dimensional subspace of $\mathbb{R}^m$, then:

$$\forall r > 0: \quad N^{\text{ext}}(A, r) \le \left( \frac{2 c \sqrt{d}}{r} \right)^d$$

Substituting this in the previous bound (and letting $M \to \infty$) gives the following bound on the Rademacher complexity:

$$\operatorname{Rad}(A) \le \frac{6 c \sqrt{d}}{m} \left( \sqrt{\ln\!\left(2\sqrt{d}\right)} + 2 \right)$$

Gaussian complexity


Gaussian complexity is a similar complexity with similar physical meanings, and can be obtained from the Rademacher complexity by using the random variables $g_i$ in place of $\sigma_i$, where $g_i$ are i.i.d. Gaussian random variables with zero mean and variance 1, i.e. $g_i \sim \mathcal{N}(0, 1)$. Gaussian and Rademacher complexities are known to be equivalent up to logarithmic factors.

Equivalence of Rademacher and Gaussian complexity


Given a set $A \subseteq \mathbb{R}^m$, it holds that:[5]

$$\frac{G(A)}{2 \sqrt{\ln m}} \le \operatorname{Rad}(A) \le \sqrt{\frac{\pi}{2}}\, G(A)$$

where $G(A)$ is the Gaussian complexity of $A$. As an example, consider the Rademacher and Gaussian complexities of the unit ball of the 1-norm in $\mathbb{R}^m$. Its Rademacher complexity is exactly $1/m$, since the supremum of $\langle \sigma, a \rangle$ over the ball equals $\|\sigma\|_\infty = 1$, whereas its Gaussian complexity is on the order of $\sqrt{\ln m}/m$ (which can be shown by applying known properties of suprema of a set of subgaussian random variables).[5]
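The $\ell_1$-ball example can be reproduced by simulation, since over that ball the supremum of $\langle v, a \rangle$ is simply $\|v\|_\infty$; a sketch under the article's $1/m$ normalization:

```python
import numpy as np

rng = np.random.default_rng(4)
m, draws = 1_000, 20_000

# Over the unit ball of the 1-norm, sup_a <v, a> equals the max-norm of v,
# so both complexities reduce to expected max-norms (with the 1/m factor).
sigma = rng.choice([-1.0, 1.0], size=(draws, m))
g = rng.normal(size=(draws, m))

rad = np.abs(sigma).max(axis=1).mean() / m   # exactly 1/m
gauss = np.abs(g).max(axis=1).mean() / m     # ~ sqrt(2 ln m) / m
print(rad, gauss, np.sqrt(2 * np.log(m)) / m)
```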

References

  1. ^ Balcan, Maria-Florina (November 15–17, 2011). "Machine Learning Theory – Rademacher Complexity" (PDF). Retrieved 10 December 2016.
  2. ^ Shalev-Shwartz, Shai; Ben-David, Shai (2014). Understanding Machine Learning – from Theory to Algorithms. Chapter 26. Cambridge University Press. ISBN 9781107057135.
  3. ^ Mohri, Mehryar; Rostamizadeh, Afshin; Talwalkar, Ameet (2012). Foundations of Machine Learning. Cambridge, MA: MIT Press. ISBN 9780262018258.
  4. ^ Bousquet, O.; Boucheron, S.; von Luxburg, U. (2004). "Introduction to Statistical Learning Theory". Advanced Lectures on Machine Learning. Lecture Notes in Computer Science. Vol. 3176. Springer. pp. 169–207. doi:10.1007/978-3-540-28650-9_8.
  5. ^ Wainwright, Martin J. (2019). High-Dimensional Statistics: A Non-Asymptotic Viewpoint. Exercise 5.5. Cambridge: Cambridge University Press. ISBN 978-1-108-62777-1. OCLC 1089254580.
  • Bartlett, Peter L.; Mendelson, Shahar (2002). "Rademacher and Gaussian Complexities: Risk Bounds and Structural Results". Journal of Machine Learning Research. 3: 463–482.
  • Gnecco, Giorgio; Sanguineti, Marcello (2008). "Approximation Error Bounds via Rademacher's Complexity". Applied Mathematical Sciences. 2 (4): 153–176.