io.github.dnovitski:logback-awslogs-appender

An Amazon Web Services (AWS) Logs (CloudWatch) appender for Logback


License
LGPL-2.1+

Logback AWSLogs appender

An Amazon Web Services CloudWatch Logs appender for Logback.

Quick start

pom.xml:

<project>
    <dependencies>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>1.7.21</version>
        </dependency>
        <dependency>
            <groupId>io.github.dnovitski</groupId>
            <artifactId>logback-awslogs-appender</artifactId>
            <version>1.4.0</version>
        </dependency>
    </dependencies>
</project>

logback.xml:

The simplest config that actually (synchronously) sends logs to CloudWatch (see the More configurations section below for a real-life example):

<configuration>

    <appender name="AWS_LOGS" class="ca.pjer.logback.AwsLogsAppender"/>

    <root>
        <appender-ref ref="AWS_LOGS"/>
    </root>

</configuration>

With all the defaults:

  • The Layout will default to EchoLayout.
  • The Log Group Name will default to AwsLogsAppender.
  • The Log Stream Name will default to a timestamp formatted with yyyyMMdd'T'HHmmss.
  • The AWS Region will default to the AWS SDK default region (us-east-1) or the current instance region.
  • The maxFlushTimeMillis will default to 0, so the appender is in synchronous mode.
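
Any of these defaults can be overridden with the corresponding appender property. For example, a config that keeps every other default but names the Log Group explicitly (the group name here is only illustrative):

<configuration>

    <appender name="AWS_LOGS" class="ca.pjer.logback.AwsLogsAppender">
        <logGroupName>/com/acme/myapp</logGroupName>
    </appender>

    <root>
        <appender-ref ref="AWS_LOGS"/>
    </root>

</configuration>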

AwsLogsAppender will search for AWS Credentials using the DefaultAWSCredentialsProviderChain.
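
If the chain cannot resolve credentials, the appender has nothing to authenticate with, so it can be worth failing fast at startup. A minimal sketch of such a check, assuming the AWS SDK v1 class named above is on the classpath (the helper class itself is hypothetical):

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;

public class CredentialsCheck {

    public static void main(String[] args) {
        // Resolve credentials through the same chain the appender uses
        // (environment, system properties, profile file, instance profile, ...);
        // this throws an unchecked exception when nothing can be resolved
        AWSCredentials credentials = new DefaultAWSCredentialsProviderChain().getCredentials();
        System.out.println("Resolved access key id: " + credentials.getAWSAccessKeyId());
    }
}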

The found credentials must have at least this Role Policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}

Code

As usual with SLF4J:

// get a logger
org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(MyClass.class);

// log
logger.info("HelloWorld");

More configurations

It may be worth quoting this from AWS, because this is why we need a unique logStreamName per running process:

When you have two processes attempting to perform the PutLogEvents API call to the same log stream, there is a chance that one will pass and one will fail because of the sequence token provided for that same log stream. Because of the sequencing of these events maintained in the log stream, you cannot have concurrently running processes pushing to the same log-stream.
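
In practice this means each concurrently running process needs its own Log Stream. The logStreamUuidPrefix option shown in the config below takes care of this by appending a random UUID to the given prefix; the idea, as a one-line sketch:

// A per-process unique stream name: prefix plus a random UUID, so two
// concurrent processes never share a stream (and its sequence token)
String logStreamName = "mystream/" + java.util.UUID.randomUUID();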

A real-life logback.xml would probably look like this (with all options specified):

<configuration packagingData="true">

    <!-- Register the shutdown hook to allow logback to cleanly stop appenders -->
    <!-- this is strongly recommended when using AwsLogsAppender in async mode, -->
    <!-- to allow the queue to flush on exit -->
    <shutdownHook class="ch.qos.logback.core.hook.DelayingShutdownHook"/>

    <!-- Timestamp used in the Log Stream Name -->
    <timestamp key="timestamp" datePattern="yyyyMMddHHmmssSSS"/>

    <!-- The actual AwsLogsAppender (asynchronous mode because of maxFlushTimeMillis > 0) -->
    <appender name="ASYNC_AWS_LOGS" class="ca.pjer.logback.AwsLogsAppender">

        <!-- Send only WARN and above -->
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>WARN</level>
        </filter>

        <!-- Nice layout pattern -->
        <layout>
            <pattern>%d{yyyyMMdd'T'HHmmss} %thread %level %logger{15} %msg%n</pattern>
        </layout>

        <!-- Hardcoded Log Group Name -->
        <logGroupName>/com/acme/myapp</logGroupName>

        <!-- Log Stream Name UUID Prefix -->
        <logStreamUuidPrefix>mystream/</logStreamUuidPrefix>

        <!-- Hardcoded AWS region -->
        <!-- So even when running inside an AWS instance in us-west-1, logs will go to us-west-2 -->
        <logRegion>us-west-2</logRegion>

        <!-- Maximum number of events in each batch (50 is the default) -->
        <!-- will flush when the event queue has 50 elements, even if still in quiet time (see maxFlushTimeMillis) -->
        <maxBatchLogEvents>50</maxBatchLogEvents>

        <!-- Maximum quiet time in milliseconds (0 is the default) -->
        <!-- will flush when this time has elapsed, even if the batch size is not met (see maxBatchLogEvents) -->
        <maxFlushTimeMillis>30000</maxFlushTimeMillis>

        <!-- Maximum block time in milliseconds (5000 is the default) -->
        <!-- when > 0: the maximum time the logging thread will wait for space in the queue, -->
        <!-- when == 0: the logging thread will never wait; events are discarded while the queue is full -->
        <maxBlockTimeMillis>5000</maxBlockTimeMillis>

        <!-- Retention in days for the Log Group (0 means infinite retention) -->
        <!-- see https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutRetentionPolicy.html -->
        <!-- for the other possible values -->
        <retentionTimeDays>0</retentionTimeDays>

        <!-- Use custom credentials instead of the DefaultCredentialsProvider -->
        <accessKeyId>YOUR_ACCESS_KEY_ID</accessKeyId>
        <secretAccessKey>YOUR_SECRET_ACCESS_KEY</secretAccessKey>
    </appender>

    <!-- A console output -->
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{yyyyMMdd'T'HHmmss} %thread %level %logger{15} %msg%n</pattern>
        </encoder>
    </appender>

    <!-- Root with a threshold to INFO and above -->
    <root level="INFO">
        <!-- Append to the console -->
        <appender-ref ref="STDOUT"/>
        <!-- Append also to the (async) AwsLogsAppender -->
        <appender-ref ref="ASYNC_AWS_LOGS"/>
    </root>

</configuration>
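
A quick way to smoke-test this configuration is a class that emits one event above the WARN threshold and exits; the shutdown hook registered at the top then stops Logback cleanly, flushing whatever is still queued in the asynchronous appender (the class name is illustrative):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingSmokeTest {

    private static final Logger logger = LoggerFactory.getLogger(LoggingSmokeTest.class);

    public static void main(String[] args) {
        logger.info("Goes to STDOUT only (below the WARN threshold of ASYNC_AWS_LOGS)");
        logger.warn("Goes to STDOUT and, once flushed, to CloudWatch Logs");
        // On JVM exit the DelayingShutdownHook stops the Logback context,
        // which flushes any events still queued in ASYNC_AWS_LOGS
    }
}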

See the Logback manual, Chapter 3: Logback configuration, for more configuration options.