Caduceus notifies you if your scheduled tasks/cron jobs did not run.





What is Caduceus?

Caduceus is that long stick with the intertwined snakes that Hermes used to carry around. It is also a service that will notify you if your scheduled tasks/cronjobs fail to run.


You know how you set all these cronjobs to run, and added fancy error reporting and things, only to realize too late that this doesn't help you at all when the server has been down for a month and nobody noticed? Caduceus won't let this happen again.

Rather than trigger on failure, Caduceus triggers on absence of success. Services have to actively check in (by visiting a URL), and, if they don't, Caduceus notifies you by email that the task has failed. If the service starts working again, Caduceus will notify you of that as well.
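For instance, a Python task might check in at the end of a successful run with a plain HTTP GET. This is just a sketch: the base URL and the `backups` alert name are example values, not anything Caduceus mandates.

```python
import urllib.request

def reset_url(alert_name, base_url="http://localhost:5000"):
    # Caduceus's check-in endpoint is /reset/<alert name>/.
    return f"{base_url}/reset/{alert_name}/"

def check_in(alert_name, base_url="http://localhost:5000"):
    # Visiting the URL resets the alert's timer, marking the task as alive.
    urllib.request.urlopen(reset_url(alert_name, base_url))

# At the very end of a successful task run:
# check_in("backups")
```

The important part is that the check-in only happens when the task actually finishes, so a crashed or never-started task leaves the timer to expire.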


To install Caduceus, you can just get it from PyPI:

pip install caduceus

Alternatively, you can pull the Docker image:

docker pull


To run Caduceus, you need to configure it. This is done by placing a file called caduceus.toml in the directory you want to run Caduceus in. That directory is where the Caduceus SQLite database will be created.

If you installed Caduceus from the repo or with pip, just run it:


It will load the configuration from the file, create its database and start running on http://localhost:5000/.

To run it via Docker:

docker run -v $(pwd):/caduceus


Here's a sample configuration file (which is also available as caduceus.toml.example in the repository):

# An optional secret key to use for checking in.
secret_key = "somelongkey"

# Where you want the notification emails sent if services don't check in.
recipient_emails = [ "", "" ]

# SMTP server configuration, for sending email.
from_addr = ""
hostname = ""
port = 25
username = "myuser"
password = "mypassword"
encryption = "none"  # Can also be "ssl" or "starttls".

# Telegram bot configuration, for sending Telegram notifications.
apikey = "#############:####################"
chat_id = "99999999"

# How alerts will be sent by default.
default_channels = [ "telegram" ]

# Your alerts go here.
# An alert needs a short name (here, `raidscrub`), and an interval it needs to check in by.
[alerts.raidscrub]
every = "1h"
# You can override the alerting channels per-alert.
channels = [ "email" ]

[alerts.backups]
every = "1d"
channels = [ "email", "telegram" ]

# For alerts that use email, you can also override the recipient emails.
recipient_emails = [ "" ]

[alerts.alwaysfail]
every = "1s"
# You can tell Caduceus to only notify every minute, instead of every second,
# to prevent spam.
notify_every = "1m"

The above config defines three services: raidscrub, backups, and alwaysfail. raidscrub needs to check in every hour, backups every day, and alwaysfail every second (hence the name).

However, as emailing you every second would get spammy, notify_every is set to one minute, so Caduceus will only email you once a minute, even though the alert is considered failed whenever it misses its one-second check-in. You will get an initial email right when the failure is detected (there is a 10-second notification resolution) and then an email every minute after that.
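That throttling can be sketched like this; it's an illustration of the behavior described above, not Caduceus's actual implementation. Times are in seconds, and the 10-second resolution is the one mentioned above:

```python
def notification_times(failure_start, failure_end, notify_every=60, resolution=10):
    """Times at which an email would go out for a failure detected at
    `failure_start` and lasting until `failure_end`.

    One email on detection, then at most one per `notify_every` seconds
    while the failure persists, checked at `resolution`-second intervals.
    """
    times = []
    last_notified = None
    t = failure_start
    while t < failure_end:
        if last_notified is None or t - last_notified >= notify_every:
            times.append(t)
            last_notified = t
        t += resolution
    return times

# A 3-minute outage: one email on detection, then one per minute.
notification_times(0, 180)  # [0, 60, 120]
```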

Always leave a bit of leeway in your alerts, to account for running time. If a task starts at midnight one day and runs for an hour, it'll check in at 1 am. If the next day it runs for 61 minutes, it will check in more than a day after its last check-in, so you'll get a "failed" email. To avoid that, add a buffer of 10% or so to your alerts.
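As quick arithmetic on that rule of thumb (the 10% figure is just the suggestion above, not anything Caduceus enforces):

```python
def buffered_interval(seconds, buffer=0.10):
    # Pad the nominal interval so a slightly long run doesn't trip the alert.
    return int(seconds * (1 + buffer))

DAY = 24 * 60 * 60
buffered_interval(DAY)  # 95040 seconds, i.e. about 26.4 hours
```

So a daily task would get an alert interval of roughly 26 hours rather than exactly 24.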

Checking in

Checking in is done by retrieving a URL on the server. The URL for checking in and resetting the alert timer is /reset/<alert name>/. For example, to check in to backups when you haven't specified a secret_key (and Caduceus is running on http://localhost:5000/), you'd simply do:

curl http://localhost:5000/reset/backups/
If you did specify a secret key, just include it:

curl<your secret_key>

If your alert is set up for, say, one hour, and your task does not check in, you will get an email one hour after its last check-in, saying "your task has not checked in". If it still doesn't check in, you'll get another email an hour after that, then an hour after that, and so on until it does, at which point you'll get an email saying that the job is now fine.
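Putting the pieces together, here is one way to wrap a scheduled task so it only checks in when the task exits successfully, meaning a failing task never resets its timer. This is a sketch, not part of Caduceus; the command, alert name, and base URL are all illustrative:

```python
import subprocess
import urllib.request

def run_and_check_in(command, alert_name, base_url="http://localhost:5000"):
    # Run the task; only check in if it exited with status 0, so a
    # failing or hanging task leaves the alert timer to expire.
    result = subprocess.run(command)
    if result.returncode == 0:
        urllib.request.urlopen(f"{base_url}/reset/{alert_name}/")
    return result.returncode

# e.g. called from your cron entry:
# run_and_check_in(["/usr/local/bin/backup.sh"], "backups")
```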