Description
Fitting a model several times to build a validation or learning curve can be costly, while evaluating a scorer on an already-fitted model is typically very fast.
It would be useful if a list of scorers could be passed to:
sklearn.learning_curve.learning_curve
sklearn.learning_curve.validation_curve
via the argument scoring = [scorer1, scorer2, scorer3, ...]
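For illustration, here is a minimal sketch of what that could look like, using the module path above. The `scoring=[...]` list form is the proposed API, not existing behaviour, and the scorer names are just examples:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC
from sklearn.learning_curve import validation_curve

iris = load_iris()
X, y = iris.data, iris.target

# Proposed usage: pass a list of scorers instead of a single one.
# Currently only a single scorer (or string) is accepted; this list
# form is the requested enhancement.
train_scores, test_scores = validation_curve(
    SVC(), X, y,
    param_name="gamma",
    param_range=np.logspace(-6, -1, 5),
    scoring=["accuracy", "precision_macro", "recall_macro"],
)
```

Each fitted model would then be evaluated once per scorer in the list, so the expensive fits happen only once while the cheap score evaluations are repeated.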
As a workaround, I'm going to see whether my scorer can return a float that carries extra information as attributes, such as the additional scores I want to evaluate (in my particular case I even want to include confusion matrices). I'm quite new to Python, though, and I don't know whether such a thing is possible or easy; in any case the approach seems unnecessarily convoluted.
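A rough sketch of that workaround, assuming a `float` subclass (the class and function names here are hypothetical):

```python
from sklearn.metrics import accuracy_score, confusion_matrix

class ScoreWithExtras(float):
    """A float subclass that also carries extra evaluation results."""
    def __new__(cls, value):
        obj = super(ScoreWithExtras, cls).__new__(cls, value)
        obj.extras = {}  # side-channel for additional metrics
        return obj

def scorer_with_extras(estimator, X, y):
    """Scorer returning accuracy as the float value, with the
    confusion matrix attached as an extra attribute."""
    y_pred = estimator.predict(X)
    score = ScoreWithExtras(accuracy_score(y, y_pred))
    score.extras["confusion_matrix"] = confusion_matrix(y, y_pred)
    return score
```

Note, though, that the curve functions collect the returned scores into a NumPy array, which coerces the float subclass back to a plain float and silently drops the attached extras, so this trick probably won't survive the round trip. That limitation is another argument for supporting lists of scorers directly.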
learning_curve and validation_curve are the two functions relevant to me; I don't know whether any other methods could benefit from the same enhancement.
Would you consider it feasible to expand the scoring functionality to accept lists of scorers?