Examples shouldn't use weighted macro-averaged P/R/F #6847
Comments
I can take it if it is not already taken. Thanks.
You're welcome.

On 4 June 2016 at 03:30, aakashpatel25 notifications@github.com wrote:
@jnothman I'd like to take this issue if it's not already taken, but I'd just like a little explanation of what needs to be done.
@aakashpatel25 will have to speak for whether it's taken. Find the places where the weighted average is used, and choose a more appropriate metric variant in each. Update the test output to match.
@jnothman So, if I understand correctly after reading through the codebase, …
That sort of thing.

On 15 June 2016 at 04:06, Mehul Ahuja notifications@github.com wrote:
We've moved away from the default precision/recall/f-score being a prevalence-weighted macro-average (average="weighted") to requiring the user to specify an averaging option, on the basis that the weighted average is somewhat non-standard and not found in textbooks. The example at examples/model_selection/grid_search_digits.py still uses the weighted average, as do some docstrings. We should identify whether a more standard averaging option is appropriate to illustrate the point in each case.
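To make the distinction concrete, here is a minimal plain-Python sketch of the two averaging schemes discussed above. The per-class recalls and class supports are made-up example numbers, not taken from the digits example; the point is that the prevalence-weighted average is dominated by the majority class, while the macro average treats every class equally.

```python
def macro_avg(per_class):
    """Unweighted mean over classes (what average="macro" computes)."""
    return sum(per_class) / len(per_class)

def weighted_avg(per_class, support):
    """Mean weighted by class prevalence (what average="weighted" computes)."""
    total = sum(support)
    return sum(score * n for score, n in zip(per_class, support)) / total

# Hypothetical per-class recalls and supports for a 3-class problem,
# with class 0 heavily dominating the test set.
recalls = [0.9, 0.5, 0.8]
support = [100, 10, 10]

print("macro:   ", macro_avg(recalls))            # every class counts equally
print("weighted:", weighted_avg(recalls, support))  # pulled toward class 0
```

The weighted figure here is noticeably higher than the macro one because the majority class is the one the classifier handles best, which is exactly why the weighted variant can be misleading in an example meant to illustrate per-class behaviour.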