Random Sampling of Sparse Trigonometric Polynomials, II. Orthogonal Matching Pursuit versus Basis Pursuit

Foundations of Computational Mathematics

Abstract

We investigate the problem of reconstructing sparse multivariate trigonometric polynomials from few randomly taken samples by Basis Pursuit and by greedy algorithms such as Orthogonal Matching Pursuit (OMP) and Thresholding. While recovery by Basis Pursuit has recently been studied by several authors, we provide theoretical results on the success probability of reconstruction via Thresholding and OMP for both a continuous and a discrete probability model for the sampling points. We present numerical experiments indicating that Basis Pursuit is usually significantly slower than the greedy algorithms, while the recovery rates are very similar.
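To make the greedy recovery procedure concrete, the following is a minimal NumPy sketch (not the authors' implementation) of OMP applied to the univariate case of the continuous probability model, i.e., sampling points drawn independently and uniformly from [0, 1). The maximal frequency q, sparsity s, number of samples m, and all variable names are illustrative choices, not values taken from the paper.

```python
# Illustrative OMP sketch (not the authors' code), assuming NumPy only.
# Model: f(x) = sum_{|k| <= q} c_k exp(2*pi*i*k*x) with an s-sparse coefficient
# vector c, sampled at m points drawn uniformly at random from [0, 1).
import numpy as np

rng = np.random.default_rng(0)
q, s, m = 30, 5, 40                       # illustrative: max frequency, sparsity, samples

freqs = np.arange(-q, q + 1)              # frequency set, size D = 2q + 1
D = freqs.size

# Draw a random s-sparse coefficient vector
support = rng.choice(D, size=s, replace=False)
c = np.zeros(D, dtype=complex)
c[support] = rng.standard_normal(s) + 1j * rng.standard_normal(s)

# Continuous probability model: uniform sampling points, nonequispaced Fourier matrix
x = rng.random(m)
A = np.exp(2j * np.pi * np.outer(x, freqs))   # shape (m, D)
y = A @ c                                     # observed samples

# Orthogonal Matching Pursuit: greedy selection followed by orthogonal projection
residual = y.copy()
S = []                                        # estimated support
for _ in range(s):
    j = int(np.argmax(np.abs(A.conj().T @ residual)))   # column most correlated with residual
    if j not in S:
        S.append(j)
    coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)  # least squares on current support
    residual = y - A[:, S] @ coef

c_hat = np.zeros(D, dtype=complex)
c_hat[S] = coef
print("support recovered:", set(S) == set(support.tolist()))
print("relative error:", np.linalg.norm(c_hat - c) / np.linalg.norm(c))
```

Thresholding is the simpler variant: it replaces the loop by a single correlation step, keeping the s indices with the largest values of |A*y| and solving one least-squares problem on that candidate support.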



Author information

Correspondence to Stefan Kunis.

Additional information

Communicated by Emmanuel Candes.


About this article

Cite this article

Kunis, S., Rauhut, H. Random Sampling of Sparse Trigonometric Polynomials, II. Orthogonal Matching Pursuit versus Basis Pursuit. Found Comput Math 8, 737–763 (2008). https://doi.org/10.1007/s10208-007-9005-x

