Memory handling on an infinite loop simulation · Issue #195 · modelon-community/PyFMI · GitHub
Memory handling on an infinite loop simulation #195

Open
pedro-ricardo opened this issue Aug 7, 2023 · 19 comments
Assignees: modelonrobinandersson
Labels: question

Comments

@pedro-ricardo

Hello there ... firstly thank you for maintaining this library 👍
I have a question about how I can handle memory correctly.

I'm using PyFMI to set an infinite simulation coupled with an OpcUa Server to keep track and interact with the running model.

Here are some characteristics of my algorithm:

  • I've built a simulation using the Master algorithm because I have lots of FMUs to link in my model.
  • The infinite loop I've set up contains a simulation of a step and a sleep to make it real time.
  • I've seen that the Master class does not have a do_step function, so I'm using the simulate(start_time=time, final_time=time+step) function with {'initialize': False} set in the options.

My problem is that running this long enough leads to memory usage increasing until it reaches the maximum allowed by the OS.

I do not need the FMUs to keep track of the history of values of the simulations, I only want the latest values in each step ...
I believe the error is here...

Assuming the error is that ... could you guide me to call the correct functions to clean this history so I can run the simulation indefinitely?

Also if you think that the error is somewhere else, could you give me some advice on how to setup this correctly?

Thank you.

@modelonrobinandersson
Member

Hi @pedro-ricardo, thank you for reaching out about your question. I have a couple of follow-up questions to understand your situation better.

  1. Is your FMU running any external code?
  2. Is this an ME FMU or a CS FMU?
  3. Are you using default options besides initialize: False?
  4. Approximately how long does it take until you see the memory issue?
  5. During the simulation a log file is generated. Can you check whether the size of this file increases continuously with the simulation? It is unfortunately very easy to end up with very large simulation log files that can take up your entire disk space under the right circumstances.
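As a side note, whether the log file keeps growing can be checked from the driving loop with the standard library alone. A minimal sketch (the './sim_log.txt' path is only an assumption, matching the log_file_name used later in this thread):

```python
import os

def log_size_bytes(path='./sim_log.txt'):
    # Current size of the simulation log in bytes; 0 if it does not exist yet.
    return os.path.getsize(path) if os.path.exists(path) else 0

# Sampling this once per loop iteration makes continuous growth easy to spot.
print(log_size_bytes())
```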

@modelonrobinandersson self-assigned this Aug 7, 2023
@modelonrobinandersson added the question label Aug 7, 2023
@pedro-ricardo
Author

Thanks for the quick response...

  1. Is your FMU running any external code?

No, the FMUs don't even have an algorithm part, it's just parameters, input, output and the equation parts.

  2. Is this an ME FMU or a CS FMU?

All are CS FMUs

  3. Are you using default options besides initialize: False?

Yes, I do the following in the loop:

step = 0.5
std_opt = master.simulate_options()
std_opt['step_size'] = 0.01
std_opt['result_handling'] = "none"
if current_time > 0.0:
    std_opt['initialize'] = False

master.simulate(start_time=current_time, final_time=current_time+step, options=std_opt)
current_time = current_time + step

time.sleep(step)

  4. Approximately how long does it take until you see the memory issue?

About 1000 seconds in real time, but it happens faster if I fix the sleep step at 0.5 s and enlarge the final_time of each simulation.

  5. During the simulation a log file is generated. Can you check whether the size of this file increases continuously with the simulation? It is unfortunately very easy to end up with very large simulation log files that can take up your entire disk space under the right circumstances.

I don't see any file getting large ... but I do set the same log file for all models with pyfmi.load_fmu(path, log_file_name='./sim_log.txt', log_level=2), so maybe that is doing something funny. I've checked the folder and disk space ... it is normal.

@pedro-ricardo
Author

This is the memory behavior I see:
[image: memory usage over time]
It started at 200 MB and exceeded 10 GB before I killed it.

@modelonrobinandersson
Member

Hi @pedro-ricardo, is master here the object returned from pyfmi.load_fmu, or an instance of the Master class? Even though I don't think this has anything to do with your logging, you can omit log_file_name and let PyFMI use a default (based on the FMU). See also if your memory explodes when you set log_level=0.

@pedro-ricardo
Author

The master in the code snippet is: master = pyfmi.Master(model_list, connections) and the model_list is filled by appending pyfmi.load_fmu returned objects.

I'll omit the log_file_name and set log_level=0 to check.

@pedro-ricardo
Author

See also if your memory explodes when you set log_level=0

The memory still explodes, and now I also have a bunch of empty .txt files.

@modelonrobinandersson
Member

@pedro-ricardo I wonder if the memory goes up because when you do time.sleep(step), Python does not actually wait step seconds before it continues, since time.sleep releases the GIL.
As an attempt to isolate the behavior, instead of an infinite loop, run it for n steps; after the n steps, simply do input('Waiting...') and monitor whether the memory consumption goes down after a while (in case there are up to n time.sleep calls waiting to finish).

@pedro-ricardo
Author

Hello there @modelonrobinandersson,
I'm sorry for the delay; the company asked us to switch our code to another library to check ... and it is running smoothly in FMPy now.
But I'm still interested in finding where my mistake with PyFMI was.
The main difference when I switched to FMPy is that instead of the master algorithm, the FMUs are joined into a larger FMU, which is then stepped.

About the test you asked for, I've added an 11-iteration loop with 100 s in each step (that gets me close to the 1000 s where the problem occurs), without the sleep, and waited for an input after the loop finished.

[image: memory usage during and after the loop]

As you can see, the memory stays high after the loop has ended. The time span shown after the loop ended is 4 minutes.

@modelonrobinandersson
Member

@pedro-ricardo can you share more of the code snippet from earlier, just so we can understand the situation fully? It would be nice to find the issue to avoid more people running into this.
Can you perhaps share everything from where you construct the Master class all the way to time.sleep, as you have above? Also so the loop is visible.

@pedro-ricardo
Author

Ok, I've built a simple debug function, tested it and got the same behaviour.
I've also added some documentation on the parameters so you can see what this function receives

Here it is:

# --------------------
@staticmethod
def debug_start(models:dict, connections:list, initial_params:dict):
    ''' A simplified execution loop for debugging the Master algorithm\n
    * `models`: Dictionary with model name as key and the result of `pyfmi.load_fmu`
        functions as value. Example:\n
        ```
        {'model_name1':<pyfmi.fmi.FMUModelCS2 object at 0x159dba0>,
        'model_name2': <pyfmi.fmi.FMUModelCS2 object at 0x159c650>}
        ```
    * `connections`: List of connection dictionaries in the format:
        `{'from':(model_obj,variable_name),'to':(model_obj,variable_name)}`. Example:\n
        ```
        [ {'from': (<pyfmi.fmi.FMUModelCS2 object at 0x159dba0>, 'Out1'),
            'to': (<pyfmi.fmi.FMUModelCS2 object at 0x15c16e0>, 'Inp1')},
        {'from': (<pyfmi.fmi.FMUModelCS2 object at 0x159dba0>, 'Out2'),
            'to': (<pyfmi.fmi.FMUModelCS2 object at 0x15c16e0>, 'Inp2')} ]
        ```
    * `initial_params`: Dictionary mapping model names to the initial values
        for model inputs and parameters. Example:\n
        ```
        {'model_name1':{'Par1': 500, 'Inp1': 530},
        'model_name2': {'Par1': 50, 'Par2': 40} }
        ```
    '''
    
    # Go through all model objects and set the initial parameters
    for m_name in models.keys():
        for prop, val in initial_params[m_name].items():
            models[m_name].set(prop,val)

    # Parse models and connections to pyfmi Master format.
    model_list = list(models.values())
    connection_list = [c['from']+c['to'] for c in connections]
    
    # Build master algorithm 
    master = pyfmi.Master(model_list, connection_list)

    # Set initial configurations
    current_time = 0.0
    std_opt = master.simulate_options()
    std_opt['step_size'] = 0.01
    std_opt['result_handling'] = "none"

    # Simulate
    for i in range(11):
        
        if (i>0):
            # Disable initialization after first iteration
            std_opt['initialize'] = False

        # Call master simulation
        master.simulate( start_time=current_time,
            final_time=current_time+100, options=std_opt)            
        
        # Update simulation Time
        current_time = current_time+100

    # Wait after finish
    input('Press any Key to end ....')
# --------------------
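As an aside (not part of the original thread), the per-iteration growth on the Python side can be quantified with the stdlib tracemalloc module. Note that tracemalloc only sees Python-level allocations; a leak inside an FMU's C code would not show up here and resident memory would have to be monitored externally. A sketch, with a deliberately leaky stand-in step function:

```python
import tracemalloc

def heap_growth_per_step(step_fn, iterations=5):
    """Run step_fn repeatedly, returning the Python heap growth (bytes) per call."""
    tracemalloc.start()
    growths = []
    for _ in range(iterations):
        before, _ = tracemalloc.get_traced_memory()
        step_fn()
        after, _ = tracemalloc.get_traced_memory()
        growths.append(after - before)
    tracemalloc.stop()
    return growths

# A deliberately leaky stand-in for a simulation step: each call appends
# to a module-level list, so the Python heap grows on every iteration.
history = []
growths = heap_growth_per_step(lambda: history.append([0.0] * 10_000))
print(all(g > 0 for g in growths))
```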

@pedro-ricardo
Author

I'm not using the time.sleep nor the infinite loop above because the loop you suggested had the same memory problem and it is simpler.

@pedro-ricardo
Author

Isn't it possible that this pyfmi.Master class is just accumulating all the simulated data inside it? Maybe I should get the returned object and perform some kind of free/clean.

This coupled simulation I'm doing is pretty big and the step_size is small.
For the purpose of running this in "real time", I don't need this internal history to be kept by PyFMI.

@modelonrobinandersson
Member

@pedro-ricardo thank you a lot. Sorry for the questions but I have a couple of suspicions:

  1. Can you see if you get the same behavior if opts['logging'] is set to False (if you have it as True)? Note this is False by default, but I see some data is appended to lists if it is True.
  2. The Master class instance also has a dict named self.models_dict; can you see if this changes over time? You can try simply using print(master.models_dict). It would be helpful to see whether it still looks "reasonable" by the time your memory consumption explodes. I can also find several places in the code where a call is commented out as #store_communication_point(self.models_dict), except in def simulate(...), and wonder if this is causing the memory to go up.

Let me know your findings.

@pedro-ricardo
Author
  1. Added std_opt['logging'] = False but nothing changed.

  2. The print(master.models_dict) output is too large to keep track of ... instead I've added the following snippet to save this log at the end of each iteration:

     with open(f'./debug_it{i}', 'w') as fid:
         fid.write(pprint.pformat(master.models_dict))

Comparing the subsequent debug files with diff, the only thing changing is the reference of the object under the result key:

34c34
<                'result': <pyfmi.common.io.ResultHandlerDummy object at 0x7f80c8d3bee0>}),
---
>                'result': <pyfmi.common.io.ResultHandlerDummy object at 0x7f80c8d3bdf0>}),
68c68
<                'result': <pyfmi.common.io.ResultHandlerDummy object at 0x7f80c8d3bf40>}),
---
>                'result': <pyfmi.common.io.ResultHandlerDummy object at 0x7f80c8d3bee0>}),

And so on for all other models ...

This change occurs for all iterations.

@modelonrobinandersson
Member

@pedro-ricardo thank you. I will see if I can reproduce this on my end in order to debug it further.

@pedro-ricardo
Author

Let me know if you can reproduce it.
If nothing happens on your side, please share your test with me.

@pedro-ricardo
Author

@modelonrobinandersson
Were you able to reproduce the behavior?

@modelonrobinandersson
Member

Hi @pedro-ricardo, sorry I have not had time yet. I hopefully will get some time later this week.

@modelonrobinandersson
Member
modelonrobinandersson commented Aug 17, 2023

Hi @pedro-ricardo
I was able to test locally with two simple FMUs. My script can be seen below. My memory consumption is completely flat the entire time and I see nothing that is drastically increasing as the simulation is running. Is there any chance your FMU is running external code?

This is what I tried:

from pyfmi import Master, load_fmu

fmu1 = load_fmu('LinearStability_SubSystem1.fmu')
fmu2 = load_fmu('LinearStability_SubSystem2.fmu')

models = [fmu1, fmu2]
connections = [(fmu1, "y1", fmu2, "u2"),
               (fmu2, "y2", fmu1, "u1")]

master = Master(models, connections)
current_time = 0.0
delta = 0.005 # small value just to force the simulation to run for a while
opts = master.simulate_options()
opts['step_size'] = 0.0001
opts['result_handling'] = "none"

# Simulate
for i in range(1000):
    if i>0:
        opts['initialize'] = False
    master.simulate( start_time = current_time, final_time = current_time+delta, options = opts)
    current_time += delta
