This section gets us started with displaying basic binary classification using 2D data. We first show how to display training versus testing data using [various marker styles](https://plot.ly/python/marker-style/), then demonstrate how to evaluate a kNN classifier's performance on the **test split** using a continuous color gradient to indicate the model's predicted score.
### Display training and test splits

Here, we display all the negative labels as squares and the positive labels as circles. We differentiate the training and test sets by adding a dot to the center of the test data markers.

```python
import numpy as np
import plotly.express as px
import plotly.graph_objects as go
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Generate 2D moon-shaped data and hold out a test split
X, y = make_moons(noise=0.3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y.astype(str), test_size=0.25, random_state=0)

# Squares mark negative labels, circles positive ones;
# test points get a dot at the marker's center
trace_specs = [
    [X_train, y_train, '0', 'Train', 'square'],
    [X_train, y_train, '1', 'Train', 'circle'],
    [X_test, y_test, '0', 'Test', 'square-dot'],
    [X_test, y_test, '1', 'Test', 'circle-dot']
]

fig = go.Figure(data=[
    go.Scatter(
        x=X[y == label, 0], y=X[y == label, 1],
        name=f'{split} Split, Label {label}',
        mode='markers', marker_symbol=marker
    )
    for X, y, label, split, marker in trace_specs
])
fig.update_traces(marker_size=12, marker_line_width=1.5)
fig.show()
```
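
If you want marker variants beyond `square`, `circle`, and their `-dot` counterparts, plotly exposes its full symbol list programmatically. A small sketch (the `SymbolValidator` import path is plotly's internal validator module, as used in the marker-style docs linked above):

```python
from plotly.validators.scatter.marker import SymbolValidator

# SymbolValidator().values interleaves numeric codes, string codes and
# names in triples, so every third entry starting at index 2 is a name
raw_symbols = SymbolValidator().values
print(raw_symbols[2::3][:12])
```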

### Visualize predictions on test split

Now, we evaluate the model on the test set only. Notice that `px.scatter` requires only one function call to plot both negative and positive labels, and can additionally set a continuous color scale based on the `y_score` output by our kNN model.

```python
import numpy as np
import plotly.express as px
import plotly.graph_objects as go
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_moons(noise=0.3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y.astype(str), test_size=0.25, random_state=0)

# Fit the model on the training set only
clf = KNeighborsClassifier(15)
clf.fit(X_train, y_train)

# Score of the positive class ('1') for every test point
y_score = clf.predict_proba(X_test)[:, 1]

fig = px.scatter(
    X_test, x=0, y=1,
    color=y_score, color_continuous_scale='RdBu',
    symbol=y_test, symbol_map={'0': 'square-dot', '1': 'circle-dot'},
    labels={'symbol': 'label', 'color': 'score'}
)
fig.update_traces(marker_size=12, marker_line_width=1.5)
fig.show()
```
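
For a single summary number to go with the visual check, you can also report mean accuracy on the held-out split; a minimal follow-up, reusing `clf`, `X_test`, and `y_test` from the block above:

```python
# Mean accuracy of the kNN model on the test split
print(f'Test accuracy: {clf.score(X_test, y_test):.3f}')
```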

## Multi-class prediction confidence with `go.Heatmap`

It is also possible to visualize the prediction confidence of the model using `go.Heatmap`. In this example, we compute how confident the model is about its prediction at every point of the 2D grid. Here, we define the confidence as the difference between the highest score and the sum of the scores of the other classes at a given point.
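
To make the definition concrete, here is a minimal sketch of the confidence computation at a single, hypothetical grid point with three classes (the scores below are made up for illustration):

```python
import numpy as np

# Hypothetical class scores at one grid point; they sum to 1
proba = np.array([0.7, 0.2, 0.1])

# Confidence: highest score minus the sum of the remaining scores
confidence = proba.max() - (proba.sum() - proba.max())
print(confidence)  # 0.7 - 0.3 = 0.4
```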

```python
import numpy as np
import plotly.express as px
import plotly.graph_objects as go
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

mesh_size = .02
margin = 1

# We will use the iris data, which is included in px
df = px.data.iris()
df_train, df_test = train_test_split(df, test_size=0.25, random_state=0)
X_train = df_train[['sepal_length', 'sepal_width']]
y_train = df_train.species

# Create a mesh grid on which we will run our model
l_min = df.sepal_length.min() - margin
l_max = df.sepal_length.max() + margin
w_min = df.sepal_width.min() - margin
w_max = df.sepal_width.max() + margin
lrange = np.arange(l_min, l_max, mesh_size)
wrange = np.arange(w_min, w_max, mesh_size)
ll, ww = np.meshgrid(lrange, wrange)

# Create the classifier and get per-class scores on the grid
clf = KNeighborsClassifier(15, weights='distance')
clf.fit(X_train, y_train)
proba = clf.predict_proba(np.c_[ll.ravel(), ww.ravel()])

# Confidence: highest score minus the sum of the other scores
diff = proba.max(axis=1) - (proba.sum(axis=1) - proba.max(axis=1))
diff = diff.reshape(ll.shape)

fig = px.scatter(
    df_test, x='sepal_length', y='sepal_width', symbol='species',
    symbol_map={
        'setosa': 'square-dot',
        'versicolor': 'circle-dot',
        'virginica': 'diamond-dot'}
)
fig.update_traces(marker_size=12, marker_line_width=1.5)
fig.add_trace(
    go.Heatmap(
        x=lrange, y=wrange, z=diff,
        opacity=0.25, colorscale='RdBu'
    )
)
fig.show()
```