Added UWSGI sharedarea support #129
Conversation
Can you explain how this works and why it's better than the approach we already use?
You mean prometheus_multiproc_dir? It uses the uwsgi SharedArea (http://uwsgi-docs.readthedocs.io/en/latest/SharedArea.html) to store metrics. It's fast and safe, and you don't need to wipe prometheus_multiproc_dir between restarts.
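As a rough illustration (not code from this PR), the uwsgi SharedArea API referenced here exposes a block of memory shared by all workers; the area id, offsets and "one counter at offset 0" layout below are made-up assumptions for the example:

```python
# Rough sketch of the uwsgi SharedArea calls this PR builds on.
# The area id, offset and single-counter layout are illustrative
# assumptions, not taken from the patch itself.
import uwsgi  # only importable inside a process hosted by uwsgi

AREA = 0          # sharedarea id, configured via "sharedarea = <pages>" in uwsgi.ini
COUNTER_POS = 0   # byte offset where this example keeps one counter

def inc_requests(amount=1):
    # The write lock guards the read-modify-write against other workers.
    uwsgi.sharedarea_wlock(AREA)
    try:
        uwsgi.sharedarea_inclong(AREA, COUNTER_POS, amount)
    finally:
        uwsgi.sharedarea_unlock(AREA)

def read_requests():
    return uwsgi.sharedarea_readlong(AREA, COUNTER_POS)
```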
How does it work in the backend? From a quick peek the code you have can be no faster than what we already have, as it's doing the same locking (which is the slow bit).
Backend? What do you mean? I created a sample project https://github.com/Lispython/prometheus_client_example to describe the problems with prometheus_multiproc_dir. Can you confirm that https://github.com/Lispython/prometheus_client_example/blob/master/flask_app.py and https://github.com/Lispython/prometheus_client_example/blob/master/uwsgi.ini#L4 show correct usage of the library for multiprocess applications?
Is it using IPC, SHM, mmapped files, anonymous mmapped files, network sockets, unix sockets or what? I'm not inclined to add an additional method of doing things that only works for some users and doesn't bring a major benefit with it.
It can use mmapped files or anonymous mmapped files. @brian-brazil what about the second question, about prometheus_multiproc_dir? Try putting a slow load (100 RPS for 1 minute) on the application from the example. Our application config has only 4 workers. When you call the generate_latest function, it generates different results for every request, not 6000 calls.
How is it doing interprocess locking?
How slow is the /metrics to render? I'd presume it's pretty fast.
@brian-brazil you don't understand.
Step 1: run the application and generate load against it.
Step 2: check the last line of the uwsgi log — uwsgi processed 6000 requests.
Step 3: request /metrics; the response comes from process pid=13. Request /metrics again; this time the response comes from process pid=11. Every /metrics request returns a different requests counter.
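The load command and log excerpts from this comment were not preserved. As a hypothetical reproduction of the steps above (the local address and the /ping route are assumptions, not details from the original comment), one could do:

```python
# Hypothetical reproduction of the steps above; the address and the
# /ping route are assumptions, not taken from the original comment.
import requests

BASE = "http://127.0.0.1:8080"

# Steps 1-2: generate roughly 6000 requests (about 100 RPS for one minute).
for _ in range(6000):
    requests.get(BASE + "/ping")

# Step 3: scrape /metrics twice. With a per-process registry, each scrape
# is answered by whichever uwsgi worker takes the request, so the two
# responses can report different request counters.
first = requests.get(BASE + "/metrics").text
second = requests.get(BASE + "/metrics").text
print(first == second)
```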
Given that's roughly a quarter of the requests, I suspect you haven't hooked in the multiproc setup properly.
If https://github.com/Lispython/prometheus_client_example/blob/master/uwsgi.ini#L4 is not the right setup, can you show the right setup?
Have you hooked in the multiproc collector?
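For reference, "hooking in the multiproc collector" means wiring prometheus_client's documented multiprocess mode into the metrics endpoint. A minimal sketch (the app structure, route names and metric are illustrative assumptions, not taken from the linked example):

```python
# Minimal sketch of prometheus_client's multiprocess mode; the app
# structure, route names and metric are illustrative assumptions.
# prometheus_multiproc_dir must point at a writable directory that is
# emptied before the uwsgi master starts.
from flask import Flask, Response
from prometheus_client import CollectorRegistry, Counter, generate_latest, multiprocess
from prometheus_client.exposition import CONTENT_TYPE_LATEST

app = Flask(__name__)
REQUESTS = Counter("app_requests_total", "Total requests handled")

@app.route("/ping")
def ping():
    REQUESTS.inc()
    return "pong"

@app.route("/metrics")
def metrics():
    # Build a throwaway registry and let the multiprocess collector merge
    # the per-worker value files from prometheus_multiproc_dir on each scrape.
    registry = CollectorRegistry()
    multiprocess.MultiProcessCollector(registry)
    return Response(generate_latest(registry), mimetype=CONTENT_TYPE_LATEST)
```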
Some results.
Are you pre-forking? We don't support that currently.
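A hedged side note on the pre-fork point: by default uwsgi imports the application once in the master and then forks workers, so anything created at import time is copied rather than shared. A common workaround (an assumption about this kind of setup, not something stated in the thread) is to make each worker load the app itself:

```ini
[uwsgi]
; Illustrative option, not from the linked example: lazy-apps makes every
; worker import the application on its own instead of inheriting state
; forked from the master process.
lazy-apps = true
```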
@Lispython I tried using your branch, but unfortunately the result is still different each time I call the metrics endpoint. Am I missing something in the configuration? This is how I use the prometheus client:
And this is my uwsgi .ini file:
And this is the result:
And the second time:
Hi @orhan89, you're right. After this discussion I ran more tests and got similar results: generate_latest gets values from local memory, not from the _ValueClass (_UWSGISharedareaDict). Without a complete refactoring it is impossible to do what we need. I've created my own library that supports pluggable storage backends (local memory, uwsgi sharedarea, mmap, etc.) and includes a uwsgi exporter: https://github.com/Lispython/pyprometheus Demo project: https://github.com/Lispython/pyprometheus_demo Sorry for the poor docs and examples, they are still in progress.
Added a new value class, UWSGISharedareaInit, that stores values in the uwsgi sharedarea.