
Foundations and Trends® in Machine Learning > Vol 4 > Issue 1

Optimization with Sparsity-Inducing Penalties

By Francis Bach, INRIA — SIERRA Project-Team, Laboratoire d'Informatique de l'Ecole Normale Supérieure, France, francis.bach@inria.fr | Rodolphe Jenatton, INRIA — SIERRA Project-Team, rodolphe.jenatton@inria.fr | Julien Mairal, Department of Statistics, University of California, USA, julien@stat.berkeley.edu | Guillaume Obozinski, INRIA — SIERRA Project-Team, guillaume.obozinski@inria.fr

 
Suggested Citation
Francis Bach, Rodolphe Jenatton, Julien Mairal and Guillaume Obozinski (2012), "Optimization with Sparsity-Inducing Penalties", Foundations and Trends® in Machine Learning: Vol. 4: No. 1, pp 1-106. http://dx.doi.org/10.1561/2200000015

Publication Date: 04 Jan 2012
© 2012 F. Bach, R. Jenatton, J. Mairal and G. Obozinski
 
Subjects
Optimization
 

In this article:
1 Introduction 
2 Generic Methods 
3 Proximal Methods 
4 (Block) Coordinate Descent Algorithms 
5 Reweighted-ℓ2 Algorithms 
6 Working-Set and Homotopy Methods 
7 Sparsity and Nonconvex Optimization 
8 Quantitative Evaluation 
9 Extensions 
10 Conclusions 
Acknowledgments 
References 

Abstract

Sparse estimation methods aim at using or obtaining parsimonious representations of data or models. They were first developed for linear variable selection, but numerous extensions have since emerged, such as structured sparsity and kernel selection. It turns out that many of the related estimation problems can be cast as convex optimization problems by regularizing the empirical risk with appropriate nonsmooth norms. The goal of this monograph is to present, from a general perspective, optimization tools and techniques dedicated to such sparsity-inducing penalties. We cover proximal methods, block-coordinate descent, reweighted ℓ2-penalized techniques, working-set and homotopy methods, as well as non-convex formulations and extensions, and provide an extensive set of experiments to compare various algorithms from a computational point of view.
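As a concrete illustration of the proximal methods the abstract mentions, the key ingredient for the ℓ1 norm is its proximal operator, elementwise soft-thresholding. The sketch below (illustrative names, not code from the monograph) implements it together with one proximal-gradient (ISTA) step for the lasso objective 0.5‖y − Xw‖² + λ‖w‖₁:

```python
# Minimal sketch of the l1 proximal operator (soft-thresholding) and a
# single ISTA step for the lasso. Function names are illustrative.
import numpy as np

def soft_threshold(v, tau):
    """Prox of tau * ||.||_1: shrink each coordinate toward zero by tau."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista_step(w, X, y, lam, step):
    """One proximal-gradient step for 0.5*||y - Xw||^2 + lam*||w||_1."""
    grad = X.T @ (X @ w - y)                    # gradient of the smooth part
    return soft_threshold(w - step * grad, step * lam)
```

With a step size below 1/L, where L is the largest eigenvalue of XᵀX, iterating `ista_step` decreases the lasso objective; the soft-thresholding sets small coordinates exactly to zero, which is what makes the penalty sparsity-inducing.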

DOI: 10.1561/2200000015
ISBN: 978-1-60198-510-1 (paperback), 116 pp., $80.00
ISBN: 978-1-60198-511-8 (e-book, PDF), 116 pp., $100.00

Optimization with Sparsity-Inducing Penalties

Sparse estimation methods aim at using or obtaining parsimonious representations of data or models. They were first developed for linear variable selection, but numerous extensions have since emerged, such as structured sparsity and kernel selection. It turns out that many of the related estimation problems can be cast as convex optimization problems by regularizing the empirical risk with appropriate nonsmooth norms. Optimization with Sparsity-Inducing Penalties presents optimization tools and techniques dedicated to such sparsity-inducing penalties from a general perspective. It covers proximal methods, block-coordinate descent, reweighted ℓ2-penalized techniques, working-set and homotopy methods, as well as non-convex formulations and extensions, and provides an extensive set of experiments to compare various algorithms from a computational point of view. The presentation is essentially based on existing literature, but the process of constructing a general framework leads naturally to new results, connections and points of view. It is an ideal reference on the topic for anyone working in machine learning and related areas.
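Among the algorithm families listed above, coordinate descent is perhaps the simplest to state for the lasso: each coordinate update is itself a one-dimensional soft-thresholding. The sketch below (illustrative, not the monograph's code) performs cyclic coordinate descent on 0.5‖y − Xw‖² + λ‖w‖₁, maintaining a running residual so each update costs O(n):

```python
# Minimal sketch of cyclic coordinate descent for the lasso.
import numpy as np

def soft_threshold(v, tau):
    """Prox of tau * ||.||_1: shrink toward zero by tau."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def cd_lasso(X, y, lam, n_sweeps=100):
    """Cyclic coordinate descent for 0.5*||y - Xw||^2 + lam*||w||_1."""
    n, p = X.shape
    w = np.zeros(p)
    r = y.astype(float).copy()          # residual y - Xw (w = 0 initially)
    col_sq = (X ** 2).sum(axis=0)       # ||X_j||^2 for each column
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * w[j]                          # remove coordinate j
            w[j] = soft_threshold(X[:, j] @ r, lam) / col_sq[j]
            r -= X[:, j] * w[j]                          # add it back, updated
    return w
```

When the columns of X are orthonormal the sweep decouples and a single pass already returns the exact solution, soft-thresholding applied to Xᵀy; in general, repeated sweeps converge to a lasso minimizer because each coordinate subproblem is solved exactly.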

 
MAL-015