Python Prometheus multiprocessing client that uses Redis as metric storage.

pip install prometheus-aioredis-client==0.2.0



Prometheus client that stores metrics in Redis. Use it to share metrics in a multiprocessing application.

Use a single Redis instance for metrics.

The library passes simple performance tests. If your performance tests find errors or leaks, please report them:

Be careful when using it in production.


  • Supports Python 3.8, 3.9, 3.10
  • Supports Counter
  • Supports Summary
  • Supports Histogram
  • Supports Gauge, with automatic clearing of dead processes' gauge values (based on Redis expire)


$ pip install prometheus-aioredis-client


A simple aiohttp app example:

from aiohttp import web
import aioredis
import prometheus_aioredis_client as prom

counter = prom.Counter(
    "counter", "Counter documentation",
    # the global prom.REGISTRY is the default metrics registry;
    # you can define another registry:
    # registry=prom.Registry(task_manager=prom.TaskManager())
    # don't forget to close all registries you use
)

async def on_start(app):
    app['redis_pool'] = aioredis.from_url("redis://localhost")
    # set up the Redis connection in the registry;
    # all metrics in this registry will use this connection
    prom.REGISTRY.set_redis(app['redis_pool'])

async def on_stop(app):
    # wait for all tasks to finish and delete gauge metric values
    await prom.REGISTRY.cleanup_and_close()
    await app['redis_pool'].close()

async def inc(r):
    # inc() creates a future and puts it in the event loop
    counter.inc()
    return web.Response(body=(await prom.REGISTRY.output()), content_type='text/plain')

async def a_inc(r):
    # a_inc() waits while the value is incremented
    await counter.a_inc()
    return web.Response(body=(await prom.REGISTRY.output()), content_type='text/plain')

if __name__ == '__main__':
    app = web.Application()
    app.on_startup.append(on_start)
    app.on_cleanup.append(on_stop)
    app.router.add_get("/inc", inc)
    app.router.add_get("/a_inc", a_inc)
    web.run_app(app)


Counter is based on the atomic Redis INCRBY command. All processes increment one value in Redis.
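To illustrate why an INCRBY-based counter is safe across processes, here is a minimal sketch (not the library's code) with an in-memory stand-in for Redis: every worker applies an increment to the same key, and because INCRBY is atomic in real Redis, the stored value is always the exact total.

```python
class FakeRedis:
    """In-memory stand-in for a Redis server; in real Redis, INCRBY is atomic."""

    def __init__(self):
        self.store = {}

    def incrby(self, key, amount=1):
        # read-modify-write; a real Redis server does this atomically
        self.store[key] = self.store.get(key, 0) + amount
        return self.store[key]

redis = FakeRedis()

# three "processes", each performing its own increments on the same key
for worker_increments in ([1, 2], [5], [1, 1, 1]):
    for amount in worker_increments:
        redis.incrby("my_first_counter", amount)

print(redis.store["my_first_counter"])  # -> 11
```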

import prometheus_aioredis_client as prom

c = prom.Counter(
    "my_first_counter",  # name of metric
    "Docstring for counter",
)

async def some_func():
    # you can wait for the increment
    await c.a_inc(2)
    # or create a future
    c.inc(2)

# counter with labels
cl = prom.Counter(
    "counter_with_labels",
    "Docstring for counter",
    labelnames=['one', 'two'],
)

async def some_func2():
    cl.labels("first", "second").inc()
    cl.labels("first", "another").inc()

You can run the Redis commands keys my_first_counter* and keys counter_with_labels* to see all created keys.
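The fact that both commands use the metric name as a key prefix suggests that label values are encoded in a key suffix. The sketch below is a hypothetical illustration of such a scheme (the library's real encoding may differ); it only shows why a KEYS pattern with the metric-name prefix matches every label combination.

```python
import json

def metric_key(name, labels):
    """Hypothetical key scheme: metric name as prefix, labels in the suffix."""
    if not labels:
        return name
    # sort_keys gives one stable key per label combination
    suffix = json.dumps(labels, sort_keys=True)
    return f"{name}:{suffix}"

print(metric_key("my_first_counter", {}))
# -> my_first_counter
print(metric_key("counter_with_labels", {"one": "first", "two": "second"}))
# -> counter_with_labels:{"one": "first", "two": "second"}
```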


It is like a Counter: all processes increment one value.

import prometheus_aioredis_client as prom

s = prom.Summary(
    "my_first_summary",  # illustrative name; the original example omitted it
    "Docstring for summary",
)

async def some_func():
    # async observation, following the a_inc naming pattern
    await s.a_observe(3)

import prometheus_aioredis_client as prom

h = prom.Histogram(
    "my_first_histogram",  # illustrative name; the original example omitted it
    "Docstring for histogram",
    buckets=[1, 20, 25.5],
)

async def some_func():
    await h.a_observe(15.5)
    # Buckets '20' and '25.5' will be incremented.
    # Bucket '1' stays at zero.
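The bucket behavior above follows from Prometheus's cumulative bucket semantics: an observation increments every bucket whose upper bound is greater than or equal to the value. A minimal sketch of that rule:

```python
def buckets_hit(value, bounds):
    """Return the upper bounds of all buckets a value falls into (le semantics)."""
    return [b for b in bounds if value <= b]

print(buckets_hit(15.5, [1, 20, 25.5]))  # -> [20, 25.5]
print(buckets_hit(0.5, [1, 20, 25.5]))   # -> [1, 20, 25.5]
```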


Every gauge metric value of every process gets a unique identifier. You can see this identifier in the gauge_index label.

The gauge index is not a PID. It is a simple Redis counter.
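A sketch (an assumption about the mechanism, not the library's code) of how a per-process index can be allocated from a shared Redis counter: each process runs INCR once on a well-known key at startup and keeps the returned integer as its gauge_index.

```python
class FakeRedis:
    """In-memory stand-in; in real Redis, INCR is atomic across clients."""

    def __init__(self):
        self.store = {}

    def incr(self, key):
        self.store[key] = self.store.get(key, 0) + 1
        return self.store[key]

redis = FakeRedis()

# three processes starting up, each grabbing a unique index
indexes = [redis.incr("gauge_index_counter") for _ in range(3)]
print(indexes)  # -> [1, 2, 3]
```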

If you want to stop a process, you should call await Registry.cleanup_and_close() first. This function waits for all futures and drops the gauge metrics that belong to the process.

If you use gunicorn max_requests or uwsgi harakiri, cleanup_and_close will not be called.

But that is not a problem: gauge metrics are set with an expire param, so they are deleted after the expire period.
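The cleanup mechanism can be sketched with a tiny in-memory TTL store (expire times are shortened for the demo): a gauge value is written with a TTL, and if its process dies and stops refreshing the key, the value simply vanishes after the expire period.

```python
import time

class TTLStore:
    """In-memory sketch of Redis SET with EX: values disappear after a TTL."""

    def __init__(self):
        self.data = {}  # key -> (value, deadline)

    def set(self, key, value, ex):
        self.data[key] = (value, time.monotonic() + ex)

    def get(self, key):
        entry = self.data.get(key)
        if entry is None or time.monotonic() > entry[1]:
            self.data.pop(key, None)  # expired or missing
            return None
        return entry[0]

store = TTLStore()
store.set("gauge:gauge_index=1", 42, ex=0.05)
print(store.get("gauge:gauge_index=1"))  # -> 42 (process is alive, within TTL)
time.sleep(0.1)                          # process "died", no refresh happened
print(store.get("gauge:gauge_index=1"))  # -> None (value expired)
```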

The expire period can be set in the Gauge constructor:

import prometheus_aioredis_client as prom

h = prom.Gauge(
    "my_first_gauge",  # illustrative name; the original example omitted it
    "Docstring for gauge",
    expire=20,  # expire values after 20 seconds
)

async def some_func():
    # async set, following the a_inc naming pattern
    await h.a_set(3)

What happens if you set a gauge metric less often than once every 20 seconds?

Everything will be fine, because Registry.task_manager contains a refresh coroutine that refreshes all gauge values every N seconds.

N should be less than the smallest expire param.

The default expire for Gauge metrics is 60 seconds; the default refresh period is 30 seconds.
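Since the refresh period must stay below the smallest expire, a simple rule of thumb (an illustration, not a library requirement) is to take half of the smallest expire, which matches the 60/30 defaults:

```python
# expire params of all Gauge metrics in the application
expires = [60, 20, 45]

# half the smallest expire leaves a safety margin for a delayed refresh
refresh_period = min(expires) / 2
print(refresh_period)  # -> 10.0
```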

You can define the refresh period:

import prometheus_aioredis_client as prom

# Note: the keyword name below is an assumption; check the library's
# TaskManager signature for the exact parameter.
registry = prom.Registry(
    task_manager=prom.TaskManager(refresh_period=10)
)