Part of the book series: Lecture Notes in Computer Science (LNCS, volume 4224)

Abstract

Leave-one-out cross-validation (LOO-CV) gives an almost unbiased estimate of the expected generalization error. However, the classical LOO-CV procedure with Support Vector Machines (SVM) is very expensive and cannot be applied when the training set has more than a few hundred examples. We propose a new LOO-CV method that uses a modified initialization of the Sequential Minimal Optimization (SMO) algorithm for SVM to speed up LOO-CV. Moreover, when SMO's stopping criterion is changed with our adaptive method, experimental results show that the speed-up of LOO-CV is greatly increased while the LOO error estimate remains very close to the exact LOO error estimate.
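The abstract refers to the classical LOO-CV procedure, which retrains the classifier n times on n-1 examples each; this O(n) retraining cost is what the paper's SMO warm-start attacks. As an illustration of that baseline procedure only (not of the paper's method), here is a minimal sketch in Python, with a hypothetical nearest-centroid classifier standing in for the SVM:

```python
# Classical leave-one-out cross-validation (LOO-CV): for each of the n
# examples, retrain on the other n-1 and test on the held-out one.
# A simple 1-D nearest-centroid classifier stands in for the SVM here;
# the point is the repeated full retraining that makes naive LOO-CV costly.

def nearest_centroid_fit(xs, ys):
    """'Training': compute the mean of each class (stand-in for SVM training)."""
    centroids = {}
    for label in set(ys):
        pts = [x for x, y in zip(xs, ys) if y == label]
        centroids[label] = sum(pts) / len(pts)
    return centroids

def nearest_centroid_predict(model, x):
    """Predict the class whose centroid is closest to x."""
    return min(model, key=lambda label: abs(x - model[label]))

def loo_cv_error(xs, ys, fit, predict):
    """Exact LOO error: fraction of held-out points misclassified
    by a model retrained from scratch on the remaining n-1 points."""
    errors = 0
    for i in range(len(xs)):
        train_x = xs[:i] + xs[i + 1:]
        train_y = ys[:i] + ys[i + 1:]
        model = fit(train_x, train_y)
        if predict(model, xs[i]) != ys[i]:
            errors += 1
    return errors / len(xs)

# Two well-separated 1-D classes: every held-out point is classified
# correctly, so the LOO error estimate is 0.
xs = [0.0, 0.1, 0.2, 1.0, 1.1, 1.2]
ys = [0, 0, 0, 1, 1, 1]
print(loo_cv_error(xs, ys, nearest_centroid_fit, nearest_centroid_predict))  # 0.0
```

The paper's contribution is to avoid the cold restart inside the loop: each SMO run is warm-started from the previous solution and stopped early with an adaptive criterion, which this sketch does not reproduce.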




Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Lebrun, G., Lezoray, O., Charrier, C., Cardot, H. (2006). Speed-Up LOO-CV with SVM Classifier. In: Corchado, E., Yin, H., Botti, V., Fyfe, C. (eds) Intelligent Data Engineering and Automated Learning – IDEAL 2006. IDEAL 2006. Lecture Notes in Computer Science, vol 4224. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11875581_13


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-45485-4

  • Online ISBN: 978-3-540-45487-8

  • eBook Packages: Computer Science; Computer Science (R0)
