Hello,

The H2O ML framework supports an enum-encoding scheme for categorical features. It would be nice to have this in scikit-learn as well; as far as I know, no contributions have been made to add it to scikit-learn's tree-based models.

This would make it possible to handle categorical features without the curse-of-dimensionality issue of one-hot encoding and without implying any kind of ordinality, and it seems to be a good way to find the best split for categorical features in tree-based models (e.g. random forests). LightGBM also implements this: see its documentation section on the optimal split for categorical features, where there are 2^(k-1) - 1 possible ways to partition k category values into two subsets. A rough sketch of that idea is below.
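To make the idea concrete, here is a minimal sketch in plain NumPy (not scikit-learn or LightGBM code; the function name and toy data are invented for illustration). For regression, and for binary classification with the usual impurity criteria, the best of the 2^(k-1) - 1 binary partitions can be found without enumerating them: sort the categories by their mean target value and check only the k - 1 contiguous splits, which is the trick LightGBM's documentation describes (after Fisher's grouping result).

```python
import numpy as np

def best_categorical_split(x, y):
    """Toy search for the best two-way partition of the categories in x,
    using a variance-reduction-style score on the targets y."""
    cats, inv = np.unique(x, return_inverse=True)
    sums = np.bincount(inv, weights=y)      # sum of y per category
    counts = np.bincount(inv)               # count per category
    order = np.argsort(sums / counts)       # categories sorted by mean(y)

    total_sum, total_cnt = y.sum(), len(y)
    best_score, best_last_left = -np.inf, None
    left_sum = left_cnt = 0.0
    for i in order[:-1]:                    # only k - 1 candidate splits
        left_sum += sums[i]
        left_cnt += counts[i]
        right_sum, right_cnt = total_sum - left_sum, total_cnt - left_cnt
        # maximizing sum of (group_sum^2 / group_count) over the two groups
        # is equivalent to minimizing the within-group sum of squared errors
        score = left_sum ** 2 / left_cnt + right_sum ** 2 / right_cnt
        if score > best_score:
            best_score, best_last_left = score, i
    split_pos = np.where(order == best_last_left)[0][0] + 1
    return cats[order[:split_pos]], best_score

x = np.array(["a", "b", "c", "a", "b", "c", "d", "d"])
y = np.array([1.0, 5.0, 2.0, 1.5, 4.5, 2.5, 9.0, 8.0])
print(best_categorical_split(x, y))  # separates the high-mean category "d"
```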
Does anyone have thoughts on this?
Unless I've misunderstood, having a way to represent categorical columns is only part of the issue, and pandas at least has a representation we're starting to exploit in the coming release.

I don't think we have given up on #4899 as an implementation of categorical splits in trees, but we also haven't put enough time into reviewing and merging it.
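As a minimal sketch of the kind of workaround available today (assumed current pandas/scikit-learn API, not the proposed categorical-split feature; the column names and data are invented): a pandas "category" column can already be converted to integer codes and fed to a forest, but the splitter then treats those codes as ordered numbers rather than searching subsets of categories.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.DataFrame({
    "color": pd.Categorical(["red", "green", "blue", "green", "red", "blue"]),
    "size": [1.0, 2.5, 3.0, 2.0, 1.5, 3.5],
})
y = [0, 1, 1, 1, 0, 1]

X = df.copy()
X["color"] = X["color"].cat.codes  # integer codes; an ordering is implied

clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)
print(clf.predict(X.iloc[:2]))
```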