[MRG+1] ENH add a benchmark on mnist #3562
Conversation
Ping @IssamLaradji
Thanks. Looks great. I will have …
```
MNIST dataset benchmark
=======================

Benchmark multi-layer perceptron, Extra-Trees, linear svm
```
Not yet true.
Right :-)
I know you knew I'd ask this, but: should we be keeping this much duplicate code between the benchmark scripts?
Good question. My opinion is that there will always be small differences that matter. For instance, you will want to give a different set of parameters to each classifier in each benchmark, and the pre-processing of the dataset will differ. I also like the fact that the benchmark is straightforward and self-contained.
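To illustrate the self-contained pattern being argued for (a sketch only, not the actual contents of this PR): each benchmark script can carry its own mapping from names to pre-configured estimators, so the parameters can differ per dataset. The estimator choices and parameter values below are illustrative assumptions.

```python
# Sketch of a per-benchmark, self-contained estimator mapping.
# Estimator choices and parameters are illustrative, not the
# actual values from bench_mnist.py.
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.svm import LinearSVC

ESTIMATORS = {
    "ExtraTrees": ExtraTreesClassifier(n_estimators=100),
    "RandomForest": RandomForestClassifier(n_estimators=100),
    # dual=False is the recommended (faster) setting when
    # n_samples > n_features, as is the case on MNIST.
    "LinearSVC": LinearSVC(dual=False, tol=1e-3),
}
```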
Travis again highlights a heisen failure in the omp cv test.
I know I said +1 to the covtype changes, but in order to be particularly useful for playing with parameters, it should be usable in interactive mode. This means wrapping the work in a function like `main`. Then:

```
$ ipython -i benchmarks/bench_mnist.py
In [1]: main(my_classifiers)
```
That wouldn't be hard to do. I could define an overall benchmark method which would take an estimator as a parameter.
What would you expect as the output of your main function?
As a main() it can just print to stdout.
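Putting the two suggestions together, here is a rough sketch of what that could look like. The function names follow the session above, but the body, including the use of `fetch_openml` as the MNIST loader, is an assumption, not the script as merged:

```python
# Rough sketch of an interactively usable benchmark script; names and
# the MNIST loader are assumptions, not the merged bench_mnist.py.
from time import time

from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split


def benchmark(clf, X_train, y_train, X_test, y_test):
    """Time the fit and the prediction of a single estimator."""
    t0 = time()
    clf.fit(X_train, y_train)
    train_time = time() - t0
    t0 = time()
    accuracy = clf.score(X_test, y_test)
    test_time = time() - t0
    return train_time, test_time, accuracy


def main(classifiers):
    """Run every estimator in a {name: estimator} dict, print to stdout."""
    X, y = fetch_openml("mnist_784", version=1, return_X_y=True,
                        as_frame=False)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=10000, random_state=0)
    for name, clf in sorted(classifiers.items()):
        train_time, test_time, accuracy = benchmark(
            clf, X_train, y_train, X_test, y_test)
        print("%-20s train: %6.2fs  test: %6.2fs  accuracy: %.4f"
              % (name, train_time, test_time, accuracy))
```

With something like this at module level, `ipython -i benchmarks/bench_mnist.py` followed by `main(my_classifiers)` would work as in the session above.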
@jnothman I agree that making the benchmarks more interactive would be interesting. However, I think this is already useful as it is, and for my use case a static script is already pretty good.
LGTM. +1 for merge as it is. We can always improve this later (there is no public API issue for benchmark scripts).
Did you mean to add changes to two other files?
Did you look at the changes I did in #3939?
I think it would be nice for the docstring to include the output for all the algorithms. Otherwise LGTM.
@amueller I have taken into account your last remark.
Thanks :)
[MRG+1] ENH add a benchmark on mnist
I have extracted and improved the benchmark of #3204 on MNIST.
This will be helpful whenever you want to benchmark and compare new algorithms (ELM, multi-layer neural network, ...).
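As a hypothetical illustration of that use case (MLPClassifier landed in scikit-learn only later, so this is an assumption rather than something in this PR), comparing a new algorithm then amounts to adding one entry to an estimator mapping like the one sketched earlier:

```python
# Hypothetical: benchmark a newly added multi-layer perceptron by
# registering it alongside the existing estimators; the parameter
# values are illustrative, not tuned.
from sklearn.neural_network import MLPClassifier

ESTIMATORS["MultilayerPerceptron"] = MLPClassifier(
    hidden_layer_sizes=(100,), max_iter=400, alpha=1e-4,
    solver="sgd", learning_rate_init=0.2, random_state=1,
)
```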