Hello World,
I'm looking for a way to load multiple models at different times, but I need to unload each model from GPU/RAM when I'm done using it in the main process.
For example, in your main process you create a model with llama-cpp-python and interface with it. However, say you need to switch to another model from the main process: typically you're supposed to unload the current model from memory and then load the next one, but I can't seem to figure out a way to gracefully tell llama-cpp-python to shut down and release its memory, so I don't run into OOM issues.
Anyone have any ideas on how to do this?
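To make the pattern concrete, here's a rough sketch of what I'm trying to do. The model paths are placeholders, and the `del` + `gc.collect()` step is just the closest workaround I've found (my understanding is that the `Llama` object frees its llama.cpp context when the Python object is destroyed; recent versions also appear to expose an explicit `close()` method):

```python
import gc
from llama_cpp import Llama

# Load the first model and use it (paths are placeholders).
llm = Llama(model_path="model_a.gguf", n_gpu_layers=-1)
print(llm("Q: What is 2 + 2? A:", max_tokens=8)["choices"][0]["text"])

# The part I can't figure out: gracefully release GPU/RAM here.
# Dropping the last reference and forcing a collection is the best
# I've come up with; newer llama-cpp-python versions seem to have
# an explicit llm.close() for this instead.
del llm
gc.collect()

# Load the second model into the freed memory.
llm = Llama(model_path="model_b.gguf", n_gpu_layers=-1)
print(llm("Q: What is 3 + 3? A:", max_tokens=8)["choices"][0]["text"])
```

If there's an official API for this, I'd much rather use that than rely on the garbage collector.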