Model Evaluation Methods

The document outlines various evaluation methods for classification and regression models, including accuracy, precision, recall, F1 score, confusion matrix, ROC curve, AUC for classification, and MAE, MSE, RMSE, R² score for regression. Each method is described with its formula and best use case, along with example code snippets using sklearn. A summary table categorizes the methods by type and their optimal applications.
Evaluation Methods for Classification

1. Accuracy:
- Correct predictions / Total predictions
- Best for balanced datasets
Code:
from sklearn.metrics import accuracy_score
accuracy = accuracy_score(y_test, y_pred)

2. Precision:
- Correct positive predictions / total predicted positives: TP / (TP + FP)
- Best when false positives are costly
Code:
from sklearn.metrics import precision_score
precision = precision_score(y_test, y_pred)

3. Recall (Sensitivity):
- Correct positive predictions / all actual positives: TP / (TP + FN)
- Best when false negatives are costly
Code:
from sklearn.metrics import recall_score
recall = recall_score(y_test, y_pred)

4. F1 Score:
- Harmonic mean of precision and recall: 2 * (Precision * Recall) / (Precision + Recall)
- Good balance of precision and recall
Code:
from sklearn.metrics import f1_score
f1 = f1_score(y_test, y_pred)
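The harmonic-mean relationship can be checked directly. A minimal sketch, using toy labels assumed purely for illustration:

```python
from sklearn.metrics import precision_score, recall_score, f1_score

# toy labels, assumed for illustration only
y_test = [0, 0, 1, 1, 1, 1]
y_pred = [0, 1, 1, 1, 0, 1]

p = precision_score(y_test, y_pred)   # TP=3, FP=1 -> 0.75
r = recall_score(y_test, y_pred)      # TP=3, FN=1 -> 0.75
f1 = f1_score(y_test, y_pred)
manual = 2 * p * r / (p + r)          # harmonic mean of p and r
print(f1, manual)                     # both 0.75
```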

5. Confusion Matrix:
- Matrix of TN, FP, FN, TP counts
Code:
from sklearn.metrics import confusion_matrix
# sklearn convention: rows = actual class, columns = predicted class
cm = confusion_matrix(y_test, y_pred)
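For binary labels, the four counts can be unpacked from the matrix with `ravel()`. A small sketch, using toy labels assumed purely for illustration:

```python
from sklearn.metrics import confusion_matrix

# toy labels, assumed for illustration only
y_test = [0, 0, 0, 1, 1, 1]
y_pred = [0, 1, 0, 1, 1, 0]

cm = confusion_matrix(y_test, y_pred)
# for binary labels, ravel() returns TN, FP, FN, TP in that order
tn, fp, fn, tp = cm.ravel()
print(tn, fp, fn, tp)  # 2 1 1 2
```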

6. ROC Curve & AUC:
- ROC: True Positive Rate vs False Positive Rate across decision thresholds
- AUC: area under the ROC curve (1.0 = perfect, 0.5 = random guessing)
Code:
from sklearn.metrics import roc_auc_score, roc_curve
# y_pred_prob = positive-class probabilities, e.g. model.predict_proba(X_test)[:, 1]
auc = roc_auc_score(y_test, y_pred_prob)
fpr, tpr, thresholds = roc_curve(y_test, y_pred_prob)
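The classification metrics above can all be computed in one pass. A self-contained sketch on a synthetic dataset (the dataset and model choice are assumptions made for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

# synthetic binary dataset, assumed for illustration only
X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = model.predict(X_test)                   # hard class labels
y_pred_prob = model.predict_proba(X_test)[:, 1]  # positive-class probabilities

acc = accuracy_score(y_test, y_pred)
prec = precision_score(y_test, y_pred)
rec = recall_score(y_test, y_pred)
f1 = f1_score(y_test, y_pred)
auc = roc_auc_score(y_test, y_pred_prob)  # note: uses probabilities, not labels
print(f"accuracy={acc:.3f} precision={prec:.3f} recall={rec:.3f} f1={f1:.3f} auc={auc:.3f}")
```

Note that AUC is computed from the predicted probabilities, while the other four metrics use the hard class labels.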

Evaluation Methods for Regression

1. Mean Absolute Error (MAE):
- Average of absolute errors
Code:
from sklearn.metrics import mean_absolute_error
mae = mean_absolute_error(y_test, y_pred)

2. Mean Squared Error (MSE):
- Average of squared differences
Code:
from sklearn.metrics import mean_squared_error
mse = mean_squared_error(y_test, y_pred)

3. Root Mean Squared Error (RMSE):
- Square root of MSE (same units as the target; penalizes large errors)
Code:
import numpy as np
from sklearn.metrics import mean_squared_error
rmse = np.sqrt(mean_squared_error(y_test, y_pred))

4. R² Score:
- Proportion of variance in the target explained by the model (1 = perfect; 0 = no better than predicting the mean)
Code:
from sklearn.metrics import r2_score
r2 = r2_score(y_test, y_pred)
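The regression metrics above can likewise be computed together. A self-contained sketch on a synthetic dataset (the dataset and model choice are assumptions made for illustration):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# synthetic regression dataset, assumed for illustration only
X, y = make_regression(n_samples=200, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LinearRegression().fit(X_train, y_train)
y_pred = model.predict(X_test)

mae = mean_absolute_error(y_test, y_pred)
mse = mean_squared_error(y_test, y_pred)
rmse = np.sqrt(mse)                      # same units as y
r2 = r2_score(y_test, y_pred)
print(f"MAE={mae:.2f}  MSE={mse:.2f}  RMSE={rmse:.2f}  R2={r2:.3f}")
```

As a sanity check, MAE is never larger than RMSE, and R² is at most 1.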

Summary Table

| Type           | Method    | Best For                        |
|----------------|-----------|---------------------------------|
| Classification | Accuracy  | Balanced datasets               |
| Classification | Precision | Avoiding false positives        |
| Classification | Recall    | Avoiding false negatives        |
| Classification | F1 Score  | Balance of precision and recall |
| Classification | ROC/AUC   | Overall classification quality  |
| Regression     | MAE       | Simple error analysis           |
| Regression     | MSE/RMSE  | Penalizing larger errors        |
| Regression     | R² Score  | Model goodness of fit           |
