Maybe @glemaitre or @jeremiedbb have input on this?
Regarding the problem of class imbalance, it seems the consensus is now to use cost-sensitive learning: use the misclassification costs (instead of the class imbalance) as weights in the evaluation metric, and then use the same weights for learning. The idea is to get closer to real-world metrics ($$$). I understand and agree with this.
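For concreteness, here is a minimal sketch of what I mean (the cost numbers are made up; I am only relying on the standard `sample_weight` support in scikit-learn estimators and metrics):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split

# Imbalanced toy problem (~5% positives).
X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Hypothetical business costs: a false negative costs 10x a false positive.
cost_fn, cost_fp = 10.0, 1.0
w_train = np.where(y_train == 1, cost_fn, cost_fp)
w_test = np.where(y_test == 1, cost_fn, cost_fp)

# Use the cost weights for learning...
clf = LogisticRegression().fit(X_train, y_train, sample_weight=w_train)

# ...and the same weights in the evaluation metric.
proba = clf.predict_proba(X_test)[:, 1]
print(brier_score_loss(y_test, proba, sample_weight=w_test))
```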
However, there is also the problem of probability calibration. To me it seems that using cost weights would break the calibration of the predicted probabilities. Am I right in thinking this?
I would be tempted to fit a model with weights and then apply a probability calibration approach. But I am not sure that it would work as expected: typically, wouldn't the probability calibration step need to be weighted too?
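Concretely, the two-step approach I have in mind looks like this (just a sketch, reusing the data and weights from the snippet above; whether `sample_weight` should be passed at step 2 is exactly my question):

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.linear_model import LogisticRegression

# X_train, y_train, w_train, X_test as in the previous snippet.

# Step 1: base model; step 2: cross-validated probability calibration.
calibrated = CalibratedClassifierCV(LogisticRegression(), method="isotonic", cv=3)

# sample_weight is forwarded to the underlying fits here, so both steps
# end up weighted -- not clear to me that this is the right thing to do.
calibrated.fit(X_train, y_train, sample_weight=w_train)
proba_cal = calibrated.predict_proba(X_test)[:, 1]
```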
Edit: does the weighting change the ranking produced by the first step? Maybe we don't need to weight the first step, and only the second step should be weighted?