Model Tuning

The document is a cheatsheet for model tuning using Modeva, detailing various tuners such as Grid Search, Random Search, Optuna, and Particle Swarm Optimization (PSO). It provides examples of how to set up hyperparameter search spaces and configurations for each tuning method. The document is intended for users looking to optimize machine learning models by finding the best hyperparameters.
Model Tuning with Modeva :: CHEATSHEET

Model tuning is finding the optimal hyperparameters that maximize the performance of a machine learning model on a given task. Modeva provides multiple model tuners that perform the search.

from modeva.models import ModelTuneGridSearch, ModelTuneRandomSearch, ModelTuneOptuna, ModelTunePSO
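To make the idea concrete before the Modeva-specific calls, here is a minimal pure-Python sketch of what a grid tuner automates: enumerate every combination in the search space, score each, and keep the best. This is an illustration only, not Modeva code; the `loss` function is a hypothetical stand-in for a real train/validate cycle.

```python
# Toy illustration (not Modeva code): exhaustive grid search over a
# stand-in "validation loss" to show what a grid tuner automates.
from itertools import product

param_grid = {"n_estimators": [100, 500],
              "eta": [0.001, 0.01, 0.1]}

def loss(n_estimators, eta):
    # hypothetical stand-in for training a model and scoring it
    return (eta - 0.01) ** 2 + abs(n_estimators - 500) / 1e4

# Enumerate every combination and keep the lowest-loss configuration.
best = min(
    (dict(zip(param_grid, combo)) for combo in product(*param_grid.values())),
    key=lambda p: loss(**p),
)
print(best)  # {'n_estimators': 500, 'eta': 0.01}
```

A real tuner replaces the toy `loss` with cross-validated model training, which is why grid search cost grows multiplicatively with each added hyperparameter.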

Grid Search

hpo = ModelTuneGridSearch(dataset=ds, model=model)
result = hpo.run(param_grid=param_grid,
                 metric=("ACC", "AUC", "Brier", "LogLoss"))

Hyperparameter Search Space

Example: Regular Grid

param_grid = {
    "n_estimators": [100, 500],
    "eta": [0.001, 0.01, 0.1]}

Random Search

hpo = ModelTuneRandomSearch(dataset=ds, model=model)
result = hpo.run(param_distributions=param_space,
                 n_iter=20, metric=("ACC", "AUC"))

Example: Random Space

from scipy.stats import uniform, randint
param_space = {
    "max_depth": randint(1, 10),
    "n_estimators": randint(100, 1000),
    "eta": uniform(0.001, 0.3)}
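Random search draws a fixed budget of configurations from the distributions instead of enumerating a grid. The sketch below mirrors the ranges above using only the standard library (so it runs without scipy); it is an illustration, not Modeva code, and `loss` is again a hypothetical objective.

```python
# Toy illustration (not Modeva code): random search samples n_iter
# configurations from the search space and keeps the best one.
import random

random.seed(0)

def sample_config():
    # stdlib mirror of the scipy ranges above
    # (scipy's randint(1, 10) excludes the upper bound)
    return {"max_depth": random.randint(1, 9),
            "n_estimators": random.randint(100, 999),
            "eta": random.uniform(0.001, 0.301)}

def loss(max_depth, n_estimators, eta):
    # hypothetical stand-in for a cross-validated score
    return (eta - 0.1) ** 2 + (max_depth - 5) ** 2

candidates = [sample_config() for _ in range(20)]  # n_iter=20
best = min(candidates, key=lambda p: loss(**p))
```

Unlike grid search, the cost is fixed by `n_iter` regardless of how many hyperparameters are searched, which is why random search scales better to large spaces.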
Optuna Tuner

import optuna

hpo = ModelTuneOptuna(dataset=ds, model=model)
result = hpo.run(param_distributions=param_space,
                 sampler="tpe",  # {"grid", "random", "tpe", "gp", "cma-es", "qmc"}
                 cv=5, metric=("ACC", "AUC", "LogLoss"))

Example: PSO Bounds and Types

param_bounds = {
    "max_depth": [1, 10],
    "eta": [0.001, 0.3]}
param_types = {
    "max_depth": "int",
    "eta": "float"}

PSO (Particle Swarm Optimization) Search

hpo = ModelTunePSO(dataset=ds, model=model)
result = hpo.run(param_bounds=param_bounds,
                 param_types=param_types,
                 n_iter=2, n_particles=10, metric="AUC")

Model Tuning Result

result.table
result.plot("parallel", figsize=(8, 5))
result.plot("x:para", "y:metric")
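For intuition on what the PSO tuner does with `param_bounds`, here is a minimal pure-Python particle swarm minimizing a one-dimensional toy objective within the `eta` bounds above. This is a sketch of the standard PSO update rule, not Modeva's implementation; `f` and all coefficients are illustrative assumptions.

```python
# Toy illustration (not Modeva's implementation): particles move under an
# inertia term plus pulls toward their personal best and the swarm best.
import random

random.seed(1)

def f(x):
    # hypothetical objective to minimize (best value at x = 0.2)
    return (x - 0.2) ** 2

lo, hi = 0.001, 0.3              # bounds, as in param_bounds["eta"]
n_particles, n_iter = 10, 50
w, c1, c2 = 0.7, 1.5, 1.5        # inertia, cognitive, social weights

pos = [random.uniform(lo, hi) for _ in range(n_particles)]
vel = [0.0] * n_particles
pbest = pos[:]                   # each particle's best position so far
gbest = min(pos, key=f)          # swarm-wide best position

for _ in range(n_iter):
    for i in range(n_particles):
        vel[i] = (w * vel[i]
                  + c1 * random.random() * (pbest[i] - pos[i])
                  + c2 * random.random() * (gbest - pos[i]))
        pos[i] = min(max(pos[i] + vel[i], lo), hi)  # clamp to bounds
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = pos[i]
    gbest = min(pbest, key=f)
# gbest ends up near the minimizer x = 0.2
```

In the real tuner, each particle position is a full hyperparameter configuration (cast per `param_types`) and `f` is the chosen `metric` evaluated on the dataset.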

Modeva v1.0 | Released: March 2025 | https://modeva.ai
