NagabhushanS changed the title from "Difference is scores returned by cross_val_score when n_jobs is set and when not set?" to "Difference in scores returned by cross_val_score when n_jobs is set and when not set?" on Apr 19, 2019.
That was my assumption too. I've not yet been able to confirm it in the lightgbm code. If the random state is not seeded separately in each process, it won't give the same results.
Try asking at lightgbm. It's unlikely to be a bug on our side as such.
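The seeding point in the reply can be illustrated with a small joblib sketch (joblib is what `cross_val_score` uses under the hood; the `fold_score` task here is hypothetical, standing in for one fold's training): when each worker builds its RNG from an explicit per-task seed, the results are identical no matter how the work is scheduled.

```python
# Sketch of "seed the random state separately in each worker":
# each task receives its own seed, so repeated runs agree even in parallel.
import numpy as np
from joblib import Parallel, delayed

def fold_score(seed):
    # Hypothetical stand-in for training/scoring one CV fold.
    rng = np.random.RandomState(seed)  # seeded independently per task
    return rng.rand()

# prefer="threads" keeps the sketch lightweight; the reproducibility
# argument is the same with process-based workers.
run_a = Parallel(n_jobs=2, prefer="threads")(delayed(fold_score)(s) for s in range(5))
run_b = Parallel(n_jobs=2, prefer="threads")(delayed(fold_score)(s) for s in range(5))

print(run_a == run_b)  # the two runs produce the same five numbers
```

If instead each worker drew from an unseeded global RNG, the numbers would vary from run to run, which is the kind of `n_jobs`-dependent drift described in this issue.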
I came across a weird issue while cross-validating my LightGBM model using a TimeSeriesSplit CV.
Following is the sample code:
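(The original snippet was not captured in this extraction. Below is a hypothetical reconstruction of its shape; the reporter used `lightgbm.LGBMRegressor`, but sklearn's `GradientBoostingRegressor` stands in here so the sketch runs without lightgbm installed. The data is synthetic.)

```python
# Hypothetical reconstruction of the missing sample code.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

rng = np.random.RandomState(0)
X = rng.rand(200, 5)
y = rng.rand(200)

# Stand-in for lightgbm.LGBMRegressor; random_state is fixed, so this
# sklearn estimator gives identical scores either way. The reporter saw
# the two runs differ with LightGBM.
model = GradientBoostingRegressor(random_state=0)
tscv = TimeSeriesSplit(n_splits=5)

# Sequential: one process trains the fold models one after another.
scores1 = cross_val_score(model, X, y, cv=tscv, n_jobs=None)
# Parallel: each fold model is trained in its own worker.
scores2 = cross_val_score(model, X, y, cv=tscv, n_jobs=2)

print(scores1.mean(), scores2.mean())
```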
Here scores1.mean() is not equal to scores2.mean(). How is this possible? Is it that when n_jobs is None, cross_val_score effectively uses warm_start=True across successive TimeSeriesSplit folds, whereas when n_jobs=-1 it has to train all fold models in parallel and therefore cannot reuse the parameters from previous folds?