Isolation forest - decision_function & average_path_length method are memory inefficient #12040

Closed
anant9 opened this issue Sep 8, 2018 · 5 comments · Fixed by #13283
Comments

anant9 commented Sep 8, 2018

Description

Isolation forest consumes too much memory due to a memory-inefficient implementation of the anomaly score calculation. Because of this, parallelization with n_jobs is also impacted, as the anomaly score cannot be calculated in parallel for each tree.

Steps/Code to Reproduce

Run a simple Isolation forest with n_estimators set to 10 and to 50, respectively.
Memory profiling shows that building each tree does not take much memory, but a lot of memory is consumed at the end, because a for loop iterates over all trees, calculates the anomaly score of all trees together, and then averages it.
iforest.py, lines 267-281:

        for i, (tree, features) in enumerate(zip(self.estimators_,
                                                 self.estimators_features_)):
            if subsample_features:
                X_subset = X[:, features]
            else:
                X_subset = X
            leaves_index = tree.apply(X_subset)
            node_indicator = tree.decision_path(X_subset)
            n_samples_leaf[:, i] = tree.tree_.n_node_samples[leaves_index]
            depths[:, i] = np.ravel(node_indicator.sum(axis=1))
            depths[:, i] -= 1

        depths += _average_path_length(n_samples_leaf)

        scores = 2 ** (-depths.mean(axis=1) / _average_path_length(self.max_samples_))

        # Take the opposite of the scores as bigger is better (here less
        # abnormal) and add 0.5 (this value plays a special role as described
        # in the original paper) to give a sense to scores = 0:
        return 0.5 - scores

Because of this, with a larger number of estimators (e.g. 1000), the memory consumed is quite high.
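For illustration, a minimal profiling sketch of the above (assuming the external memory_profiler package is installed; the synthetic dataset and its shape here are arbitrary stand-ins, not the original data):

```python
# Hypothetical reproduction sketch -- requires `pip install memory_profiler`.
import numpy as np
from memory_profiler import memory_usage
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(0)
X = rng.randn(200_000, 35)  # synthetic stand-in for a large dataset

def fit_and_score(n_estimators):
    model = IsolationForest(n_estimators=n_estimators, random_state=0)
    model.fit(X)
    model.decision_function(X)

for n_estimators in (10, 50):
    peak = max(memory_usage((fit_and_score, (n_estimators,))))
    print(f"n_estimators={n_estimators}: peak memory ~{peak:.0f} MiB")
```

The peak grows with n_estimators because the intermediate arrays in decision_function scale with the number of trees.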

Expected Results

Possible solution:
The above for loop should only average the anomaly score from each estimator instead of calculating it. The logic of the isolation forest anomaly score calculation could be moved to the base estimator class so it is done per tree (I guess in bagging.py, similar to the other methods available after fitting).
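For illustration, a rough sketch of that idea (the function name decision_function_low_memory is hypothetical; _average_path_length is the private helper in iforest.py, imported here under the module path used at the time of this issue). The loop keeps only a running sum of depths of shape (n_samples,) instead of an (n_samples, n_estimators) matrix:

```python
import numpy as np
from sklearn.ensemble.iforest import _average_path_length  # private helper, 0.20-era path

def decision_function_low_memory(forest, X):
    # Hypothetical rewrite of the quoted loop: accumulate depths tree by tree.
    depths = np.zeros(X.shape[0], dtype=np.float64)  # running sum over trees
    for tree, features in zip(forest.estimators_, forest.estimators_features_):
        X_subset = X[:, features]  # slicing is always valid; it is just a copy when all features are used
        leaves_index = tree.apply(X_subset)
        node_indicator = tree.decision_path(X_subset)
        n_samples_leaf = tree.tree_.n_node_samples[leaves_index]  # shape (n_samples,)
        # depth of each sample in this tree plus the expected path length
        # of the leaf it lands in
        depths += (np.ravel(node_indicator.sum(axis=1)) - 1.0
                   + _average_path_length(n_samples_leaf))
    depths /= len(forest.estimators_)
    scores = 2 ** (-depths / _average_path_length(forest.max_samples_))
    return 0.5 - scores
```

This reproduces the same arithmetic as the quoted code but never materialises the per-tree matrices.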

Actual Results

The memory consumption grows sharply as the number of estimators increases.

model = IsolationForest()
model.fit(data)

The fit method calls decision_function and the average anomaly score computation, which take quite a lot of memory.
The memory spike is highest at the very end, i.e. in the final call to the _average_path_length() method:

depths += _average_path_length(n_samples_leaf)
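The spike at that line is plausible because n_samples_leaf is itself an (n_samples, n_estimators) matrix, and the expected path length c(n) = 2(ln(n-1) + γ) - 2(n-1)/n from the iForest paper is applied to it elementwise, so each intermediate operation allocates another array of that shape. A simplified stand-in for the helper (not the exact scikit-learn code, which also handles small leaf sizes separately):

```python
import numpy as np

def average_path_length_sketch(n_samples_leaf):
    # Elementwise c(n); every operation (subtract, log, add, multiply,
    # divide) creates a temporary the same shape as the input, i.e.
    # (n_samples, n_estimators) when called on the full matrix.
    n = np.asarray(n_samples_leaf, dtype=np.float64)
    return 2.0 * (np.log(n - 1.0) + np.euler_gamma) - 2.0 * (n - 1.0) / n
```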

Versions

isoForest_memoryConsumption.docx

rth (Member) commented Sep 8, 2018

Thank you for the report and the detailed analysis.

A pull request to improve the memory usage in IsolationForest would be very much appreciated!

Also, if possible, please use code formatting in GitHub comments -- it really helps readability (I edited your comment above), and if possible use some format other than .docx for sharing documents (it's difficult to open on Linux). Thanks!

anant9 (Author) commented Sep 8, 2018

Thanks for a very prompt response.
I'm new to GitHub, not really sure of the process here. :)

I first want to confirm that this is a valid issue and that it's possible to reduce the memory consumption as I have described.
Current memory consumption is quite high (~5 GB) for 1000 estimators.

The attached document has images too, hence the .docx. Any preference on what format to use for future sharing?

rth (Member) commented Sep 8, 2018

If I understand correctly, the issue is that in IsolationForest.decision_function one allocates two arrays, n_samples_leaf and depths, of shape (n_samples, n_estimators). When n_samples is quite large (I imagine it's ~200k in your case?), this can indeed take a lot of memory for large n_estimators. Then there are even more such allocations in _average_path_length.
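Back-of-the-envelope, using the numbers mentioned in this thread and assuming float64:

```python
n_samples, n_estimators = 257_000, 1_000
per_array_gb = n_samples * n_estimators * 8 / 1e9  # 8 bytes per float64 entry
print(f"~{per_array_gb:.1f} GB per (n_samples, n_estimators) array")
# depths and n_samples_leaf together already take ~4 GB, before the extra
# temporaries created inside _average_path_length -- consistent with the
# ~5 GB reported above.
```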

I agree this can probably be optimized as you propose. The other alternative could be to simply chunk X row-wise and then concatenate the results (see the related discussion in #10280).
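For the chunking alternative, a rough sketch (the chunk size is an arbitrary assumption; sklearn.utils.gen_batches simply yields row slices):

```python
import numpy as np
from sklearn.utils import gen_batches

def decision_function_chunked(forest, X, chunk_size=10_000):
    # Score X in row slices so the intermediates are at most
    # (chunk_size, n_estimators), then stitch the per-chunk scores together.
    return np.concatenate([forest.decision_function(X[sl])
                           for sl in gen_batches(X.shape[0], chunk_size)])
```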

If you make a Pull Request with the proposed changes (see Contributing docs), even if it's work in progress, it will be easier to discuss other possible solutions there while looking at the code diff.

Edit: a PDF might be simpler to open, or just post the results in a comment on GitHub if it's not too much content. You can also hide some content with the details tag.

anant9 (Author) commented Sep 13, 2018

Hello, yes that is exactly the issue with isolation forest. The dataset is indeed large: 257K samples with 35 numerical features. However, it will need to be even larger for my needs, so I'm looking for memory efficiency in addition to speed.

I have gone through the links and they are quite useful for my specific use cases (I was also facing memory issues with the silhouette score and the brute algorithm).
I'm also exploring the dask package, which works on chunks using dask arrays/dataframes, to see whether it can be used as an alternative in places where sklearn consumes a lot of memory.

I will first work on handling the data in chunks, and probably in the coming weeks I will make the PR for the isolation forest modification, as I also have to go through the research paper on the isolation forest algorithm. I'm also looking at how other packages/languages implement isolation forest.
The bagging implementation here seems quite different, i.e. I think a tree is being built for each sample instead of simply building n_estimators trees and then applying each one to every sample. In any case, I have to understand a few other things before starting work/discussion on this in detail.

ngoix (Contributor) commented Feb 25, 2019

Working on this for the sprint. To avoid arrays of shape (n_samples, n_estimators) in memory, we can either:

  1. update the average while going through the estimators, which decreases the in-memory shape down to (n_samples,), or
  2. chunk the samples row-wise.

We can also do both options, I guess (a compact sketch combining them is below).
I'm not sure if 1) can be done easily though, looking into it.
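For the record, a compact hypothetical sketch of combining both, reusing the decision_function_low_memory helper sketched earlier in this thread and sklearn.utils.gen_batches for the row chunks:

```python
import numpy as np
from sklearn.utils import gen_batches

def scores_chunked_and_accumulated(forest, X, chunk_size=10_000):
    # Option 2 (row-wise chunks) on the outside, option 1 (tree-by-tree
    # accumulation, as in the earlier sketch) on the inside, so nothing
    # larger than the chunk itself plus a (chunk_size,) vector is held.
    return np.concatenate([decision_function_low_memory(forest, X[sl])
                           for sl in gen_batches(X.shape[0], chunk_size)])
```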
