Suggestions for Ch6 - Classification 2 · Issue #70 · UBC-DSCI/introduction-to-datascience-python
@joelostblom

Description

  • np.random.seed is not recommended anymore
    • Instead we should create a random state object and use its methods (see the sketch below)
    • However, I don't think we will ever need to use np.random.seed. With both sklearn methods and pandas sample we will always control randomness via the random_state parameter.
    • I think it is more confusing than helpful to also talk about seeds here, and we can just show the effect of a different random_state when doing the train/test split instead.
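    • For example, a minimal sketch of the generator-object approach (the specific values here are assumptions):
      import numpy as np

      rng = np.random.default_rng(seed=1)  # instead of np.random.seed(1)
      rng.integers(low=0, high=10, size=3)  # randomness comes from the rng object's methods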
  • "only function that uses randomness in Python"
  • We haven't taught how to use a lambda function like this. We could, since it is useful and very flexible; or, if we think it is too complex, we could teach how to perform the same operation either via cancer["Class"].map({'M': 'Malignant', 'B': 'Benign'}) (which would be convenient but a new thing) or simply via filtering: cancer.loc[cancer['Class'] == 'M', 'Class'] = 'Malignant' plus the benign version (which is less elegant but builds on what they have already learned). Both alternatives are sketched below.
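    • A sketch of the two alternatives (assuming the chapter's cancer dataframe with an 'M'/'B' Class column):
      # option 1: map, convenient but new to the reader
      cancer["Class"] = cancer["Class"].map({"M": "Malignant", "B": "Benign"})

      # option 2: filtering with .loc, builds on what they already know
      cancer.loc[cancer["Class"] == "M", "Class"] = "Malignant"
      cancer.loc[cancer["Class"] == "B", "Class"] = "Benign"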
  • We are setting the seed in this cell but there is no randomness to control...
  • We don't need to set the color range
  • Downgrade to pandas <1.5 to avoid this warning until the next Altair release: FutureWarning: iteritems is deprecated and will be removed in a future version. Use .items instead.
  • Explain why it is important to use stratify and shuffle, not just mention what they do (include a short example for each in a sentence or so; a code sketch is below).
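    • For instance, something the explanation could anchor to (a sketch; the parameter values are assumptions):
      from sklearn.model_selection import train_test_split

      # stratify keeps the Malignant/Benign proportions the same in both splits;
      # shuffle=True (the default) avoids ordered data ending up in only one split
      cancer_train, cancer_test = train_test_split(
          cancer, train_size=0.75, stratify=cancer["Class"], random_state=1
      )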
  • This sentence is meant to include the actual number of observations: "We can see from .info() in the code above that the training set contains observations, while the test set contains observations"
  • The .info method always prints columns vertically, not just when there is a large number of them.
  • We should not use groupby + count here. That creates a count for each column, so it doesn't generalize well. Instead we should use .groupby + size, OR just use value_counts(normalize=True), which means we don't have to divide by the length afterwards (see the sketch below). By the way, we should get the dataframe length via df.shape[0], not using len (this applies to the entire book). I don't think there is a normalize option for size, but I can't double-check because I'm in the air.
    • We also shouldn't create a new dataframe unless we are using it again later.
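    • A sketch of both options (assuming cancer_train from earlier in the chapter):
      # via groupby + size, dividing by the number of rows
      cancer_train.groupby("Class").size() / cancer_train.shape[0]

      # or in one step, without the manual division
      cancer_train["Class"].value_counts(normalize=True)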
  • This sentence is meant to include actual numbers: "we see about % of the training data are benign and % are malignant"
  • Imports seem to be missing for all the sklearn stuff.
  • Could we just call it "model" instead of "model specification" everywhere? It would flow better and be easier to read.
  • Some introduction to why we use capital X and lowercase y?
    • X should be created as cancer_train.drop(columns='Class'), which is more generalizable
    • We could possibly even skip this throughout the book and just pass the dataframe?
      • If we keep it, we should call it X_train and do the same for the test data.
      • I think we should probably keep the assignments because we use them a few times later, like when creating X_tune / y_tune in the grid search section, which should just reuse the _train variables instead (see the sketch below).
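      • For example (a sketch; X_train / y_train are the names suggested above):
        X_train = cancer_train.drop(columns="Class")  # generalizes to any number of predictors
        y_train = cancer_train["Class"]
        X_test = cancer_test.drop(columns="Class")
        y_test = cancer_test["Class"]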
  • Comments about "weights uniform" and "hidden seed"?
  • Why do we use concat to add the predictions? Can't we just add the new column the regular way to a copy of the original dataframe (if it is OK to modify it at this point)? Or use df.assign(pred=df['predictions']) to just print it without overwriting the variable (if we have taught assign before this point).
    • We should also not overwrite variable names like this, in my opinion.
    • Maybe something like this:
      cancer_test_predictions = cancer_test.copy()
      cancer_test_predictions['predicted'] = knn_fit.predict(cancer_test.drop(columns='Class'))
  • X_test is created in the accuracy section but was needed in the previous code cell.
  • I think we can do a better job explaining accuracy and start by computing it manually using y_test and the predictions created in cancer_test_predictions, so that students get a better grasp of what it is (see the sketch below).
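    • A sketch of the manual computation (assuming the cancer_test_predictions dataframe suggested above):
      # accuracy = the proportion of predictions that match the true labels
      (cancer_test_predictions["predicted"] == cancer_test_predictions["Class"]).mean()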
  • Missing actual value in "The output shows that the estimated accuracy of the classifier on the test data was %."
    • I just realized that all of these might be fixed when the code actually runs...
  • Do we even need to show confusion_matrix when we are saying that ConfusionMatrixDisplay is the recommended way? (See the sketch below.)
    • This is one of the many places where it feels unnecessarily noisy to assign to a variable instead of just displaying the plot.
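    • For example, displaying it directly without an intermediate variable (a sketch, assuming the fitted knn_fit and the test split from above):
      from sklearn.metrics import ConfusionMatrixDisplay

      ConfusionMatrixDisplay.from_estimator(knn_fit, X_test, y_test)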
  • We need to explain confusion matrices and what is in each square, there are resources in BAIT 509 and DSCI 573 for this.
  • We talk about a "majority classifier" but we never show how to create one. This is really easy to set up in sklearn, and it is a useful strategy for students to learn so that they can compare against a baseline. I suggest we show them how to set up a dummy classifier (and a dummy regressor in later chapters); see the sketch below. We might also consider renaming "majority classifier" to "dummy classifier" in the text so it is not ambiguous.
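    • A minimal sketch of the baseline (assuming the train/test split from earlier):
      from sklearn.dummy import DummyClassifier

      dummy = DummyClassifier(strategy="most_frequent")  # always predicts the majority class
      dummy.fit(X_train, y_train)
      dummy.score(X_test, y_test)  # baseline accuracy to compare K-NN against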
  • The code in the cross-validation section repeats some of the things that were done earlier in the chapter like the train_test_split
  • Cross-validation fig72 legend overlaps equation
  • Quoting the docs for the cv param of cross_validate does not seem that helpful?
  • The first time we use cross_validate, we do many things at once (cross_validate, return_train_score, converting to a pandas dataframe, setting cv to 5 although it's the default). It could be better to do these one at a time; see the sketch below.
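    • For example, split into steps (a sketch; knn stands in for whatever estimator or pipeline the chapter uses):
      import pandas as pd
      from sklearn.model_selection import cross_validate

      cv_results = cross_validate(knn, X_train, y_train)  # cv=5 is already the default
      cv_df = pd.DataFrame(cv_results)                    # then convert to a dataframe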
  • "We can then aggregate the mean and standard deviation". Again Search and replace in the entire chapter.
    • We don't need func= in agg
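      • For example (a sketch, continuing from the cv_df above):
        cv_df.agg(["mean", "std"])  # no func= needed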
  • "provides two build-in methods"
  • I think we should introduce grid search and randomized search one at a time to make them easier to understand, and also include more explanation as well as an illustration (a sketch is below). We can take this from BAIT 509 https://bait509-ubc.github.io/BAIT509/lectures/lecture6.html#automated-hyperparameter-optimization
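    • A sketch of introducing them one at a time (this skips the chapter's preprocessing pipeline; the parameter values are assumptions):
      from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
      from sklearn.neighbors import KNeighborsClassifier

      # grid search: tries every candidate value in the grid
      param_grid = {"n_neighbors": list(range(1, 100, 5))}
      grid = GridSearchCV(KNeighborsClassifier(), param_grid, cv=5)
      grid.fit(X_train, y_train)

      # randomized search: samples a fixed number of candidates instead
      rand = RandomizedSearchCV(
          KNeighborsClassifier(), param_grid, n_iter=10, cv=5, random_state=1
      )
      rand.fit(X_train, y_train)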
  • Do we cover what range does somewhere, or do we just start using it?
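    • If not, a one-liner would do: list(range(1, 10, 2)) evaluates to [1, 3, 5, 7, 9] (start, stop, step; the stop value is exclusive).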
  • "the cross-validation accuracy actually starts to decrease!
  • param_grid_lots is not a great name; something like large_param_grid sounds better. Same with the plot's name.
  • Do we explain n_jobs=-1 (i.e., use all available CPU cores)?
  • Rewrite this "We can evaluate a classifier by splitting the data randomly into a training and test data set, using the training set to build the classifier, and using the test set to estimate its accuracy. Finally, we can tune the classifier (e.g., select the number of neighbors in K-NN) by maximizing estimated accuracy via cross-validation." to this "We can tune and evaluate a classifier by splitting the data randomly into a training and test data set. The training set is used to build the classifier and we can tune the classifier (e.g., select the number of neighbors in K-NN) by maximizing estimated accuracy via cross-validation. After we have tuned the model we can use the test set to estimate its accuracy."
  • Swap step 2 and 3, since 2 and 4 belong together and 3 is independent of the two.
  • Change step 4 to read: "Setup GridSearchCV (or RandomizedSearchCV) to estimate the classifier accuracy for a range of parameter values (values of "K" in our case)."
  • Change step 5 to read: "Execute the grid search by passing the training data to the fit() function of the GridSearchCV instance."
  • "yields a high cross-validation accuracy estimate that"
  • Relating to steps 6 and 7: GridSearchCV already provides us with the best model as an attribute. Granted that we still want them to look at the plot, is there value in teaching how to extract the already-fit best model from GridSearchCV.best_estimator_ instead of refitting it (which is redundant and potentially time consuming)? See the sketch below.
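    • A sketch of what that could look like (assuming the fitted grid object from the tuning step):
      best_knn = grid.best_estimator_  # with refit=True (the default), this is already refit on the full training set
      best_knn.score(X_test, y_test)   # no need to refit manually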
  • We don't show how we create the irrelevant columns (a possible sketch is below).
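    • For example, something like this could be shown explicitly (a hypothetical sketch; the column names and seed are made up):
      import numpy as np

      rng = np.random.default_rng(seed=1)
      for i in range(3):
          # purely random columns carry no information about the class
          cancer[f"irrelevant{i + 1}"] = rng.random(cancer.shape[0])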
  • "Forward selection in Python"
    • In general I think this section is not that well-written and we should go through it and improve the language as well as how the different code snippets are introduced.
