Support pickling the `Llama` instance · Issue #27 · abetlen/llama-cpp-python · GitHub
Support pickling the Llama instance #27
Closed
@abetlen

Description

As pointed out here, the `Llama` class cannot currently be pickled because it holds pointers to C memory addresses. To implement this we'll need to write custom `__getstate__` and/or `__reduce__` methods for pickling, as well as a `__setstate__` method for unpickling.
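
One possible approach is sketched below: serialize only the plain-Python constructor arguments in `__getstate__`, and rebuild the C-level context by re-running `__init__` in `__setstate__`. The class name `PicklableLlama`, the `_init_args` / `_init_kwargs` attributes, and the example keyword arguments are illustrative assumptions, not the library's actual implementation.

```python
import pickle

import llama_cpp


class PicklableLlama(llama_cpp.Llama):
    """Hypothetical sketch of a pickle-friendly wrapper.

    Only the constructor arguments are serialized; the llama.cpp context
    (a raw C pointer) is rebuilt from scratch when the object is unpickled.
    """

    def __init__(self, *args, **kwargs):
        # Remember how the model was constructed so it can be re-created later.
        self._init_args = args
        self._init_kwargs = kwargs
        super().__init__(*args, **kwargs)

    def __getstate__(self):
        # C memory addresses cannot cross a pickle boundary, so only
        # plain-Python configuration is returned here.
        return {"args": self._init_args, "kwargs": self._init_kwargs}

    def __setstate__(self, state):
        # Re-run __init__ so the weights are reloaded from disk and a
        # fresh C context is allocated on the unpickling side.
        self.__init__(*state["args"], **state["kwargs"])


# Example round trip (model path and n_ctx value are illustrative):
llm = PicklableLlama(model_path="./models/ggml-model.bin", n_ctx=512)
restored = pickle.loads(pickle.dumps(llm))
```

The trade-off with this scheme is that unpickling reloads the model weights from `model_path`, so the file must be present on the unpickling side; an alternative would be a `__reduce__` implementation that embeds the weights themselves, at the cost of very large pickles.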

Labels: bug (Something isn't working)
